(By Daniel O’Shea) As quantum computing continues to move rapidly forward, it also is becoming clear that there are some issues holding it back. Some of those issues, like the need for fault tolerance, will be solved in due time as the industry keeps researching, experimenting and innovating.
But there is at least one limiting challenge that needs to be addressed for the industry to keep up–and speed up–its pace of innovation: Quantum computing is facing a skills shortage.
This has been noted by research reports and was a subject of discussion during the Inside Quantum Technology Spring conference last May. Most recently, it was highlighted in a survey, commissioned by quantum software company Classiq, of 500 business managers across several industries who are familiar with quantum technology. The survey found that “lack of qualified manpower” is one of the biggest roadblocks to greater quantum deployment.
About half of the survey respondents said lack of quantum experts has prevented quantum computing from becoming even more popular than it already is.
The sector may not be lacking for people interested in obtaining better skills. Almost 95% of survey respondents said they would like to be trained in quantum, and more than 95% said they believed high schools and universities should offer more quantum computing training.
The skills shortage isn’t just a quantum problem. It’s a global problem affecting many different industries, and may also be a key component in ongoing disruptions and delays in global supply chains. In some sectors, there simply aren’t enough people that want to be involved. In quantum, there are some interested parties, but they lack the necessary skills and tools.
So, what can be done about this? Here’s a list of five things we need to see a lot more of if the industry is going to keep the quantum skills shortage from slowing its progress:
- More professional training offered by the leaders of the quantum computing ecosystem. The sector doesn’t just need PhDs. It also needs to help people who may be experts in other domains to better understand quantum. Some companies already are doing their part. Quantum skills training is a major part of IBM’s recently announced effort to provide digital skills and training to 30 million people by 2030, and the company in early 2021 launched its first quantum developer certification program.
- More university and even high school-level training. The sector doesn’t need just PhDs, but it definitely needs more PhDs, and that means weaving quantum computing into more existing science and engineering curriculum at these levels–the earlier the better.
- Greater promotion of quantum computing as a great career move. Those involved in the sector get it, but younger workers may not yet grasp how critical quantum technology will become over the next decade–and how much of a payoff there will be for skilled workers who seize the opportunity.
- More investment in software and other developer tools that make it easier to work with quantum without needing a PhD. About half of the respondents in Classiq’s survey said lack of developer software environments was an issue. This is an area where Classiq is trying to make a difference. “The issue is that to develop quantum software, today’s developers have to have a deep understanding of quantum physics and linear algebra,” said Classiq CMO Yuval Boger in an email to Inside Quantum Technology. “Not only does this require additional training, but it makes it difficult for domain-specific experts — those that are experts in chemistry, supply chain, machine learning or finance — from contributing to the quantum effort. Classiq solves this issue by offering a development platform that allows designers to focus on their goals, ‘the what’ of what they are trying to achieve, instead of on the ‘how’ of which qubits connect to which quantum gates, how to make this work across multiple computers, and how to fit everything within the constrained resources of the hardware.”
- More focus on C-level awareness and understanding of quantum’s value. CEOs don’t need to build cooling fridges themselves, but if top-level executives have a better understanding of the value quantum technology can offer and how they need to direct their corporate strategies, budgets and organizational plans to take advantage of it, they can be key champions of the effort to help create a much larger quantum-skilled workforce.
The WannaCry ransomware attack made international headlines earlier this year when businesses around the world, including several hospitals in the UK and Russia’s interior ministry, lost access to their computers and data. Unlike some malware, WannaCry had a specific objective: to hit as many high-profile targets in as little time as possible in order to maximise the impact – and the spectacle – generated by the attack.
But the most alarming feature of the WannaCry attack wasn’t necessarily the choice of targets – it was the fact that it could be spread by sending a specific packet to a targeted SMBv1 server, effectively locking down an entire network with a single packet. In this blog, we’ll look at how that happened, and what we can learn about WannaCry’s impact on business data.
WannaCry was something of a one-two punch: the ransomware element was essentially the standard nasty stuff: infected machines were locked down and their files encrypted, with a demand for $300 worth of Bitcoin in exchange for decrypting them. In fact, while those responsible for the attack reportedly cleared half a million US dollars in a matter of days, there’s no evidence that anyone who paid up actually had their files restored. What made WannaCry so powerful was the way it spread. It exploited a vulnerability in the Server Message Block (SMB) protocol present in virtually every edition of Windows before Windows 10, meaning that once it got on a computer, it could spread across every vulnerable system within a network within moments. As a result, it was able to hit an estimated 230,000 computers around the world within a day.
It was particularly devastating to organisations that failed to keep computers patched, often because of the perceived logistical headache of applying updates without disrupting ongoing work. Microsoft was, in fact, aware of this vulnerability and had released a patch on March 14, 2017, nearly two months before the attack, but too many organisations had not yet applied it, which is why WannaCry was able to spread through networks around the world. If nothing else, WannaCry demonstrated that neglecting to keep systems up-to-date for fear of disruption to service delivery can result in far greater problems for your business.
WannaCry marked a major change in the ransomware threat. In the past, it’s been highly targeted, aimed at organisations with big budgets and where losing access to data simply isn’t an option. Now that criminals are combining blackmail tactics with security exploits to spread ransomware far and wide, they can use more of a scattergun approach, infecting as many people as possible in the hope that somebody pays up.
Unlike simpler cybercrimes, a WannaCry attack needs to be defended against in four different ways:
1) Minimise the threat of ransomware attacks in the first place. This requires both technical measures such as security scanning for incoming files and attachments, files on USB sticks, and even website visits. It also requires procedural measures such as educating staff about the risks and enforcing policies on smart and safe device use.
2) Contain the threat from spreading across a network. This means getting to grips with the structure of your network and the ways different computers share data across it. It also means making sure all your software – including your Operating System – is patched to the latest version to avoid security flaws (see the sketch after this list for one simple way to start mapping SMB exposure).
3) Mitigate the damage if the worst happens. Make sure all data is backed up efficiently and comprehensively so that you can easily restore it when needed. It also means having a detailed recovery plan of how you would cope should systems become temporarily unusable.
4) Install anti-ransomware software. PowerNET recommends installing anti-ransomware software like Sophos Intercept X. Sophos Intercept X features CryptoGuard, which prevents the malicious and spontaneous encryption of data by ransomware, even by trusted files or processes that have been hijacked. Once ransomware gets intercepted, CryptoGuard reverts your files back to their safe states.
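As flagged in point 2, a useful first step in containing lateral spread is simply knowing which machines expose the SMB service at all. The sketch below is a minimal illustration of that idea rather than a complete audit tool: it checks which hosts on a subnet accept connections on TCP port 445, and the subnet and timeout values are assumptions you would adjust for your own network.

```python
import socket
import ipaddress

def hosts_with_smb_open(subnet: str, timeout: float = 0.5) -> list[str]:
    """Return hosts in `subnet` that accept TCP connections on port 445 (SMB)."""
    open_hosts = []
    for host in ipaddress.ip_network(subnet, strict=False).hosts():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 when the TCP handshake succeeds
            if sock.connect_ex((str(host), 445)) == 0:
                open_hosts.append(str(host))
    return open_hosts

if __name__ == "__main__":
    # Illustrative subnet; substitute your own address range.
    print(hosts_with_smb_open("192.168.1.0/24"))
```

Hosts that show up here and still run legacy SMBv1 are exactly the machines that should be patched or isolated first.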
Given both the scale and the scope of preparing for ransomware attacks, it can be a daunting prospect to do everything in-house. Consider getting a fresh set of eyes to look at the problem through an external review. Powernet offers a commitment-free review of your current IT environment and what you can do to help it support your business and your IT security better. Find out more at the link below.
These days, businesses are constantly seeking efficient solutions and third-party expertise to protect their networks. Unified Threat Management (UTM) has emerged as a powerful tool, integrating multiple security functions into a single, streamlined platform.
This article delves into the concept of UTM, exploring its features, benefits, and the crucial role it plays in safeguarding modern enterprises against a wide range of cyber threats. Whether you’re an IT professional or a business owner, understanding UTM is essential for enhancing your organization’s security posture.
What this article will cover:
- Core components of UTM
- How UTM works
- Benefits of using UTM
- UTM vs. traditional security solutions
- Key features to look for in a UTM solution
- Implementation and best practices
- Case studies and real-world examples
- Future trends in UTM
Core components of UTM
The core components of Unified Threat Management (UTM) typically include the following:
- Firewall: Acts as the first line of defense by filtering incoming and outgoing traffic, preventing unauthorized access to the network.
- Antivirus and antimalware: Protects against viruses, ransomware, and other malicious software by scanning and eliminating threats.
- Intrusion Detection and Prevention Systems (IDPS): Monitors network traffic for suspicious activity and potential threats, providing alerts and automatically blocking harmful behaviors.
- Virtual Private Network (VPN): Enables secure remote access to the network by encrypting data transmission between remote users and the company’s internal network.
- Content filtering: Blocks access to inappropriate or harmful websites and content, enhancing productivity and preventing exposure to malicious sites.
- Spam filtering: Detects and blocks unwanted email messages, reducing the risk of phishing attacks and spam-related threats.
- Web Application Firewall (WAF): Protects web applications by monitoring and filtering HTTP traffic, safeguarding against common web-based attacks such as SQL injection and cross-site scripting (XSS).
- Data Loss Prevention (DLP): Prevents sensitive data from being transmitted outside the network, ensuring data integrity and compliance with regulations.
- Endpoint protection: Extends cybersecurity measures to devices connected to the network, ensuring comprehensive protection across all endpoints.
- Reporting and analytics: Provides detailed logs, reports, and analytics to help administrators monitor network activity, identify trends, and make informed information security management decisions.
By integrating these components, UTM offers a robust and simplified approach to network security, making it easier for organizations to manage and protect their digital assets.
How Unified Threat Management (UTM) works
Unified Threat Management (UTM) functions by integrating multiple security measures into a single, cohesive system. This integration streamlines network protection and simplifies the management of security protocols for businesses.
The architecture of UTM is designed to consolidate various security functions within a single appliance or software solution. This architecture typically includes:
Hardware or virtual appliance: UTM can be deployed on a physical device (hardware appliance) or as a virtual appliance running on existing infrastructure. This appliance serves as the central hub for all integrated security functions.
Modular design: UTM solutions are often modular, allowing businesses to add or remove security functions as needed. This flexibility ensures that organizations can tailor their security measures to meet specific requirements.
Integrated operating system: The UTM appliance runs on an integrated operating system that supports all the included security functions, ensuring seamless operation and compatibility.
Network interface: The appliance connects to the network, acting as a gateway that monitors and manages traffic flow, both inbound and outbound.
Integration of different security functions into a single appliance
One of the key advantages of UTM is the integration of multiple cybersecurity solutions into a single platform. These solutions typically include:
- Antivirus and antimalware
- Intrusion Detection and Prevention Systems (IDPS)
- Virtual Private Network (VPN)
- Content filtering
- Spam filtering
- Web Application Firewall (WAF)
- Data Loss Prevention (DLP)
- Endpoint protection
These functions work together to provide comprehensive security coverage, eliminating the need for separate, disparate solutions.
Centralized management and reporting
UTM solutions offer centralized management and reporting, which simplifies the administration of network security. This centralized approach includes:
Single management console: Administrators can manage all security functions from a unified console, reducing complexity and improving efficiency. This console provides an overview of the network’s security status, allowing for quick identification and resolution of issues.
Policy management: Security policies can be easily created, modified, and enforced across the entire network from the centralized console. This ensures consistent security standards and simplifies compliance with regulatory requirements.
Real-time monitoring: UTM solutions provide real-time monitoring of network traffic and security events. This immediate visibility helps in quickly detecting and responding to threats.
Comprehensive reporting: Detailed reports and analytics are available through the UTM console. These reports include logs of security events, traffic patterns, and compliance audits, helping administrators to make informed decisions and maintain a strong security posture.
Automated updates: UTM appliances receive regular updates to ensure they are protected against the latest threats. These updates are managed centrally, ensuring all integrated security functions remain current and effective.
By combining these elements, UTM solutions offer a streamlined, efficient approach to network security, providing robust protection while simplifying management and oversight.
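To make the centralized reporting idea a little more concrete, here is a deliberately simple sketch of the kind of aggregation a unified console performs behind the scenes. The log format, file name and field layout are invented for illustration; real UTM appliances each have their own log schemas and export mechanisms.

```python
from collections import Counter

def summarize_security_log(path: str) -> dict:
    """Aggregate events by module and severity, mimicking a simple unified report.

    Assumes a hypothetical line format: "<timestamp> <module> <severity> <message>".
    """
    by_module: Counter = Counter()
    by_severity: Counter = Counter()
    with open(path, encoding="utf-8") as log:
        for line in log:
            parts = line.split(maxsplit=3)
            if len(parts) < 3:
                continue  # skip malformed lines
            by_module[parts[1]] += 1
            by_severity[parts[2]] += 1
    return {
        "events_by_module": dict(by_module),
        "events_by_severity": dict(by_severity),
    }

if __name__ == "__main__":
    print(summarize_security_log("utm_events.log"))  # illustrative file name
```

The value of a real UTM console is that this kind of roll-up happens continuously and across every integrated function, rather than separately per product.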
Benefits of using UTM
Unified Threat Management (UTM) offers numerous advantages for organizations seeking comprehensive and streamlined network security solutions. Here are some key benefits.
UTM provides an all-in-one network security solution that integrates multiple protective measures, including firewall, antivirus, antimalware, intrusion prevention, VPN, content filtering, spam filtering, and more. This comprehensive approach ensures robust defense against a wide array of cyber threats.
With UTM, all security functions are managed from a single, centralized console. This simplifies the administrative process, reducing the need to manage and configure multiple disparate security systems. It also makes it easier to monitor network security, enforce policies, and respond to incidents.
Implementing a UTM solution can be more cost-effective than purchasing and maintaining multiple separate security products. The consolidation of security functions into a single appliance or software reduces hardware costs, simplifies licensing, and minimizes maintenance expenses.
UTM solutions are designed to optimize performance by integrating security management functions that work together seamlessly. This integration can lead to better overall network performance compared to using multiple standalone security products that might not be fully compatible or optimized for joint operation.
UTM appliances receive regular updates that cover all integrated security functions. This ensures that the system is always up-to-date with the latest threat definitions and security patches. Automated updates simplify maintenance and reduce the risk of vulnerabilities due to outdated software.
UTM vs. traditional security solutions
Unified Threat Management and traditional security solutions each offer unique advantages and challenges when it comes to protecting networks. UTM integrates multiple security functions—such as firewall, antivirus, antimalware, intrusion detection and prevention, VPN, content filtering, and spam filtering—into a single appliance or software solution. This all-in-one approach provides comprehensive coverage, ensuring all components work seamlessly together. In contrast, traditional security solutions involve deploying separate products for each function, which can lead to gaps in coverage and potential compatibility issues.
Management simplicity is another key differentiator. UTM solutions are managed through a single console, which simplifies administration, policy enforcement, and monitoring. This centralized management reduces complexity and makes it easier to deploy and configure security measures. On the other hand, traditional security solutions require managing multiple interfaces, increasing administrative overhead and the difficulty of coordinating updates, policies, and configurations.
Cost efficiency is a significant benefit of UTM. By consolidating multiple security functions into one appliance, UTM reduces the need for multiple licenses, hardware, and maintenance contracts, leading to a lower total cost of ownership. This is particularly advantageous for small and medium-sized businesses (SMBs) that need comprehensive security without the budget for multiple specialized products. Traditional security solutions, however, often incur higher costs due to the need to purchase, maintain, and license several standalone products, which also require more IT resources for deployment and integration.
In terms of performance and scalability, UTM solutions are designed to handle multiple network security functions efficiently within one system, minimizing performance degradation and offering easy scalability through additional modules or more powerful appliances. Traditional security solutions, however, may exhibit variable performance depending on the integration and network load, and scaling up these systems often requires significant additional investments and complex integration efforts.
UTM also provides unified reporting and enhanced visibility through centralized logging and monitoring, which facilitates quick threat detection and response. Conversely, traditional security solutions generate disparate reports from each product, requiring manual aggregation for a complete view, which can delay threat detection and response.
Finally, in terms of security effectiveness, UTM ensures all security components work in harmony, improving overall protection and facilitating faster updates. Traditional security solutions operate independently, potentially creating security gaps, and require individual updates that can lead to inconsistencies and vulnerabilities. Ultimately, the choice between UTM and traditional solutions depends on the specific needs, resources, and security goals of an organization.
Key features to look for in a UTM solution
When selecting a Unified Threat Management (UTM) solution, it’s important to focus on key features that ensure comprehensive and effective network security.
Scalability
Choose a UTM solution that can grow with your organization. It should offer the ability to add new modules or upgrade the appliance to handle increased network traffic and evolving security needs. Scalability ensures that your security infrastructure remains robust as your business expands.
Ease of Management
A user-friendly interface is crucial for efficient management. Look for a UTM with a single, intuitive console that allows for easy administration and policy enforcement. This simplifies the process of managing security functions and reduces the potential for errors.
Real-time Monitoring and Alerts
Real-time monitoring and alerts are essential for quickly detecting and responding to threats. Ensure the UTM provides real-time visibility into network traffic and security events, allowing you to take immediate action when necessary. This feature helps maintain a proactive security posture.
Regular Updates and Support
Regular updates are vital to protect against the latest threats. Choose a UTM solution that can receive automatic updates for all integrated security functions. Additionally, reliable technical support and customer service from the vendor are crucial for resolving issues and maintaining optimal performance. A vendor with a strong reputation in the industry and positive customer reviews can provide peace of mind.
UTM implementation and best practices
Steps to implement UTM in an organization
- Assessment and planning: Begin by assessing your organization’s current network security needs and infrastructure. Identify potential threats and vulnerabilities, and determine the specific security functions required from the UTM solution.
- Select the right UTM solution: Choose a UTM solution that fits your organization’s needs, considering factors like scalability, user-friendliness, real-time monitoring, and vendor support. Ensure the solution integrates well with your existing infrastructure.
- Deployment preparation: Prepare your network for UTM deployment. This includes configuring network settings, ensuring compatibility with existing systems, and planning for minimal disruption during installation.
- Install the UTM appliance: Physically install the UTM appliance or deploy the software solution on your network. Follow the vendor’s guidelines for a smooth installation process.
- Configuration: Configure the UTM according to your organization’s security policies and requirements. This involves setting up firewalls, VPNs, content filtering, and other integrated security functions.
- Testing and validation: Test the UTM solution to ensure all security functions are working correctly. Validate its performance under various scenarios to confirm it meets your security needs.
- Training: Train your IT staff on how to use and manage the UTM solution effectively. Ensure they understand the management console, reporting features, and how to respond to alerts.
- Ongoing management and monitoring: Regularly monitor the UTM’s performance and keep it updated. Use the centralized console to manage security policies, analyze reports, and respond to incidents promptly.
Best practices for maximizing UTM effectiveness
- Regular updates: Ensure your UTM solution is always up-to-date with the latest security patches and threat definitions. Regular updates help protect against new and emerging threats.
- Consistent policy enforcement: Implement and enforce consistent security policies across the organization. Regularly review and update these policies to adapt to changing security needs.
- Monitor alerts: Actively monitor real-time alerts and logs generated by the UTM. Promptly investigate and respond to any suspicious activity to mitigate potential threats.
- Periodic audits: Conduct regular security audits and vulnerability assessments to identify and address any weaknesses in your network security.
- User training: Educate employees about security best practices and the importance of adhering to company policies. User awareness can significantly reduce the risk of security breaches.
- Backup and recovery plans: Maintain robust backup and disaster recovery plans to ensure data integrity and business continuity in case of a security incident.
Common challenges and how to overcome them
- Complex configuration: Configuring a UTM solution can be complex, especially for large networks. To overcome this, follow the vendor’s guidelines closely and consider seeking assistance from experienced professionals if needed.
- Performance impact: Integrating multiple security functions can sometimes affect network performance. Mitigate this by choosing a UTM solution that is designed for high performance and regularly monitoring its impact on network traffic.
- Keeping up with updates: Ensuring the UTM solution is consistently updated can be challenging. Automate updates whenever possible and schedule regular maintenance checks to keep the system current.
- User resistance: Employees may resist new security measures, especially if they impact workflow. Overcome this by providing thorough training and demonstrating the importance of security for protecting the organization’s assets.
- Resource allocation: Implementing and managing a UTM solution requires resources and expertise. Ensure your IT team is adequately staffed and trained, and consider managed services if in-house resources are limited.
UTM case studies and real-world examples
Unified Threat Management (UTM) systems offer a versatile and comprehensive approach to network security, making them suitable for various use cases and real-world scenarios across different industries and organizational sizes.
Small and Medium-Sized Businesses (SMBs)
An SMB with limited IT resources needs robust security to protect against a range of cyber threats without the complexity and cost of multiple security products.
UTM provides an all-in-one solution that integrates firewall, antivirus, antimalware, VPN, and content filtering. This simplifies security management, reduces costs, and ensures comprehensive protection, allowing the business to focus on growth without worrying about cybersecurity.
Educational Institutions
A university needs to protect sensitive student and faculty data while managing internet usage across a large campus with numerous access points.
UTM can enforce content filtering to block inappropriate websites, provide secure VPN access for remote students and staff, and use intrusion detection and prevention systems (IDPS) to monitor and protect against unauthorized access and cyber attacks.
Healthcare Providers
A healthcare provider must safeguard patient records and comply with stringent data protection regulations like HIPAA.
UTM offers data loss prevention (DLP) to ensure sensitive information does not leave the network unauthorized. It also includes regular updates and compliance reporting features to help maintain adherence to regulatory requirements, ensuring both security and compliance.
Retail Chains
A retail chain with multiple locations needs to secure transaction data and protect against point-of-sale (POS) malware and network breaches.
UTM can centralize security management across all locations, providing firewall protection, malware detection, and secure VPN connections for remote access to corporate resources. This ensures consistent security policies and protection against cyber threats targeting POS systems.
Future trends in UTM
Unified Threat Management is evolving rapidly to meet the growing and sophisticated threats in the cybersecurity landscape.
One prominent trend is the integration of advanced threat detection technologies, such as machine learning and artificial intelligence, which enhance the ability to identify and respond to complex threats in real-time.
Cloud-based UTM solutions are also gaining traction, offering scalability and flexibility while reducing the need for on-premises hardware. This shift aligns with the increasing adoption of remote work, providing secure access for distributed workforces.
Another significant trend is the emphasis on user-friendly interfaces and automation, making it easier for organizations to manage security without extensive expertise.
Additionally, there is a growing focus on holistic security approaches that combine UTM with other cybersecurity strategies, such as zero-trust architecture and endpoint detection and response (EDR).
These trends reflect a broader movement towards comprehensive, adaptive, and user-focused security solutions that can keep pace with the dynamic nature of cyber threats.
Unified Threat Management (UTM) solutions are increasingly vital in today’s cybersecurity landscape, integrating multiple security functions into a single platform. UTM offers numerous benefits in scalability and cost effectiveness, which makes it particularly suitable for small and medium-sized businesses.
The chapter on digital forensics covers identification, recovery, analysis, and preservation of evidence or trails in a digital system. In the era of technology, fraudulent activity became mainstream. It called for cybersecurity experts, a particular branch of digital nomads who can work a crime scene while sitting behind a computer. Digital devices leave traces of activity and data. A digital forensic expert can extract or crack them open to know what's going on and report accordingly. Fixing the vulnerability is also part of the job. Most of the time, they work with government or specialized security teams. Nevertheless, to qualify as a digital forensic expert, knowledge of data extraction from various devices and operating systems is required.
Digital Forensic Experts are also known as Infosec Specialists, Digital Forensic Engineers, Digital Forensic Investigators, Digital Forensics Examiners, Digital Forensic Analysts, Digital Forensic Specialists, etc. Though there are many names we can call a forensic expert who works on offensive security, their target is quite the same. It is the experience level that determines which category the specialist works in. Someone with more experience works on high-level breaches and threats. On the other hand, less experienced forensic experts work with smaller companies and serve customers directly.
More people than ever conduct office work, transactions, and various delicate tasks online. Alongside it, the criminal justice system came online too. These roles require experts who know how to break down a crime scene digitally. As businesses come online, they have at least one computer that provides online connectivity. It can open a backdoor for interested third parties. Criminals and hackers can infiltrate the system and steal information.
What does a Digital Forensic Expert Do?
Experts in the sector can extract evidence from all kinds of computing systems and prepare reports accordingly. Cyber-attacks can go much further than simply taking a website or service offline. A digital forensic expert sees through that, finds the loophole in the system that the attacker used, and traces it. Even if traces are no longer available, an expert will try to determine how the attack took place. In the end, they fix the issue, minimizing loss and shutting down the weak point.
Reconstructing digital information, analyzing data, and solving crimes are day-to-day actions of a digital forensic expert. Other tasks with the title include data recovery from hard drives, finding and exploring evidence, writing investigation reports, working with the security department, and so on. They are also called to crime scenes to recover files from victims' machines that are protected from the public and have secure login credentials. Private companies and governments hire only the best digital forensic experts, as it is a delicate job. Even before hiring an expert, the security and law enforcement department does a swift background check of the individual. It can be both digital and social.
Previously, digital forensics was referred to as computer forensics. But with mobile devices, cloud computing, the Internet of Things (IoT), and cybersecurity technologies, the field became much broader. It led to a whole sector in which individuals go far beyond technical knowledge and design secure workflows.
Licensing and Certifications:
There is no amount of knowledge and expertise in this field that is more than enough. As cybersecurity gradually evolves, so do the techniques and the education. A digital forensic expert needs to stay on top of the field. But to become a seasoned veteran in the sector, some licensing is required. Requirements can vary from organization to organization. Private investigators need to go through a criminal background check and, most importantly, licensing exams.
There are lots of certifications available that help get to the doorstep of a professional workplace. A university degree is not strictly required, but it certainly helps to have one. Digital forensic experts usually get into the sector via certification and employer assistance. Some certifications are:
- Certified Computer Forensics Examiner (CCFE)
- Certified Penetration Tester (CPT)
- Certified Reverse Engineering Analyst (CREA)
- Certified Computer Examiner (CCE)
- Certified Forensic Computer Examiner (CFCE)
- Certified Ethical Hacker (CEH)
- EnCase Certified Examiner (EnCE)
- Certified Forensic Analyst (GCFA)
- Certified Information System Security Professional (CISSP)
- International Society of Forensic Computer Examiners (ISFCE)
- Global Information Assurance Certifications (GIAC)
There are also a few steps that go a long way to become a digital forensic expert; they are
Education: College degrees and programs are essential to get in front of potential employers and choose career paths. Computer Science, Computer Engineering, Information Security, Mathematics, Cybersecurity are a few highly beneficial courses.
Staying up to date: We can't stress this enough, as practical skill in the sector matters and differentiates a hobbyist from a professional. Many technically advanced people browse through areas of cybersecurity and ethical hacking, but they won't go as far as taking it into a professional career. A hobby helps to keep busy and to acquire an essential skillset, but to be proficient in the field, there is a lot of grinding involved.
Choosing a career: Choosing a career in the digital forensic sector involves professional training, education, experience, and a certain level of adjustment. It can be pretty difficult for young people to choose the forensic sector over software development or other categories. There are tons of professional organizations, like the Scientific Working Group on Digital Evidence (SWGDE), that welcome people into the scene.
Experience to become a Digital Forensic Expert:
We already broke down the sector into many categories essential to every aspect of becoming a digital forensic expert. But what matters in the industry and comes ahead of everything is experience. A digital forensic expert is a highly technical person with lots of expertise in the bag. Basics one should look forward to:
- Intermediate to advanced level programming in multiple languages, e.g., Python, Java, Bash, PHP, C/C++, assembly, etc.
- Low-level understanding of highly technical terms.
- Server-side knowledge of data allocation and modification with different levels of access.
- Operating systems including Linux-based distributions, mobile.
- Network and hardware.
- Forensic tools and scripts, ability to develop an individual script for custom cases.
- Password cracking, not just brute force or dictionary attacks.
- Backup and cleaning traces.
- Encryption, decryption, cryptography, hashing, etc.
- Ability to gather delicate resources when necessary and ask for help without hesitation.
Of course, there are plenty more, and going through it will unlock more doors.
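The hashing item in the list above is often the first skill put to use in practice: computing a cryptographic digest of an evidence file when it is acquired, and again after analysis, is the standard way to demonstrate the evidence was not altered. Here is a minimal sketch (the filename is purely illustrative):

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a file (e.g., a disk image) in chunks so large evidence files fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Record the hash at acquisition time, then re-compute it after analysis;
# matching values show the evidence was not modified in between.
hash_at_acquisition = sha256_of_file("evidence.img")
hash_after_analysis = sha256_of_file("evidence.img")
assert hash_at_acquisition == hash_after_analysis
```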
Understanding Data Forensics:
The two most common types of file systems are NTFS and FAT. In the earlier days, large hard disk drives were not widely available, and the File Allocation Table (FAT) file system was used in personal computers. It works by pointing to a file's starting cluster, and that was how files were located. FAT could only hold simple information like filenames, dates, times, file attributes, and directory names. As technology advanced, new storage formats appeared that are easier to use and faster. Faster means more work done efficiently, without burning too many resources. FAT12, FAT16, FAT32, and VFAT were some of the variants.
Once the commercial computer market exploded, NTFS took the world by storm and replaced FAT for the betterment of computing systems. NTFS volumes have enhanced file attributes, alternate data streams, file compression (based on LZ77), encryption, mount points, shadow copies, etc. They add logical volume management to the system, which is great for files and tasks, but it also makes them harder to crack. Different methods and protocols may surround the process of data extraction when a requirement is placed. It is one of the important points, and we couldn't leave without mentioning it.
With the rapid growth of technology, becoming a digital forensic expert should give one a massive boost early in a career. Demand is constantly growing in the tech sector, and while there are tons of great programmers, dedicated forensic experts are in short supply. To fill that void, we need as many experienced professionals as possible.
Law enforcement agencies, organizations, cyber-crime departments, and network administrators are always looking for digital forensic experts, and they pay high salaries to suitable candidates. According to PayScale, the typical salary range for digital forensic experts is $50,000-$114,000, and one position even offered $160,000. Attached to it are company benefits and a genuinely motivating workplace.
Hope this article serves as a guide and provides essential information regarding becoming a digital forensic expert.
I love stories that spotlight local innovators.
You’ve undoubtedly heard about lightRadio by now, those tiny, energy efficient cubes from Alcatel-Lucent (pictured above) that promise to one day replace those roadside eyesores, cell towers. What you probably haven’t heard is how Bell Labs in Murray Hill, NJ brought the concept to life.
Sarah Portlock of The Star Ledger reports on how Tod Sizer’s team fast-tracked the tech that would take Mobile World Congress in Barcelona by storm. lightRadio not only shrinks cellular antennas and the basestation — now a system-on-a-chip (SoC) co-developed with Freescale — it also “cloudifies” wireless communications, so to speak.
The device shrinks the antenna and radio devices at the top of a cell phone tower, relocating the network communications power systems — which sit at its base — to central data centers. As a result, the antenna casing can be smaller — about 2.3 inches, down from conventional antennas that are typically the size of an ironing board.
This is huge because the explosion of mobile data would otherwise cause cellular antenna arrays to get bigger and even more unsightly, not to mention draw even more power. But with lightRadio, wireless companies can leverage many of the cost and energy savings that cloud providers now enjoy and put a dent in the 18 million metric tons of CO2 that cellular basestations pump into the air every year. Plus, maintenance workers are sure to love never again having to lug heavy antennas and equipment atop poles and rooftops. A winning concept, no matter how you slice it.
So far, tests on the prototype have been a success and Verizon, Orange and China Mobile are exploring the tech.
Watch Tod Sizer explain the tech in the video below:
Image credit: Alcatel-Lucent
That's the number of emails sent around the world each second. That amounts to 60 billion daily, or some 20 trillion annually. This mind-boggling figure was revealed in Berlin during an Internet Security Conference.
A lot of it is actually spam or scam attacks. Phishing, too, is a growing, or rather rocketing, threat: 2005 saw three times as many phishing attempts as 2004. Furthermore, according to the German Interior Minister, phishing has been successful against 5% of all internet users.
I don't know where he got his figures, but this would mean that there are at least 50 million people out there who have been victims of phishing; that's slightly less than the population of the United Kingdom and if each victim lost $100, it would amount to an eye-popping $5 billion heist.
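A quick back-of-envelope check of those figures, assuming roughly one billion internet users at the time (my assumption, not a number from the article):

```python
# Phishing victims and losses implied by the quoted 5% figure
internet_users = 1_000_000_000            # assumed mid-2000s global figure
phishing_victims = internet_users * 0.05  # 5% success rate quoted above
total_loss = phishing_victims * 100       # $100 per victim, as hypothesised above

# Email volume implied by "60 billion daily"
emails_per_day = 60_000_000_000
emails_per_second = emails_per_day / 86_400   # roughly 694,000 per second
emails_per_year = emails_per_day * 365        # roughly 22 trillion, in line with the figure quoted

print(f"{phishing_victims:,.0f} victims, ${total_loss:,.0f} lost")
print(f"{emails_per_second:,.0f} emails/second, {emails_per_year:,.0f} emails/year")
```

The numbers line up with the claims above: about 50 million victims and a $5 billion haul.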
While email is battling with telephone and IM for the title of favourite business communication tool, its importance as a malware medium dwarfs that of the two others. Email remains by far the most vulnerable component of our modern communicating world.
Schemes like Yahoo's and AOL's fee-based email services may provide us with a solution to the scam and phishing problems, but user education and awareness still remain the most viable and effective way to stem these threats.
Originally published by New Context.
One of the biggest weaknesses in any organization’s cybersecurity strategy is human error. Social engineering attacks take advantage of this vulnerability by conning unsuspecting people into compromising security and giving out sensitive information. Social engineers use various psychological hacks to trick you into trusting them or create a false sense of urgency and anxiety to lower your natural defenses. Attackers can then breach your physical or technological security to steal money or confidential information.
The only way to prevent being targeted by social engineering is to study the methods, psychological triggers, and technological tools these attackers use. Scammers use many different types of social engineering attacks, but some common giveaways can help you spot and avoid them.
To prevent a social engineering attack, you need to understand what they look like and how you might be targeted. These are the 10 most common types of social engineering attacks to be aware of.
Phishing is the most common type of social engineering attack, typically using spoofed email addresses and links to trick people into providing login credentials, credit card numbers, or other personal information. Variations of phishing include spear phishing (aimed at specific individuals), vishing (voice phishing over the phone), and whaling, described next.
Whaling is another common variation of phishing that specifically targets top-level business executives and the heads of government agencies. Whaling attacks usually spoof the email addresses of other high-ranking people in the company or agency and contain urgent messaging about a fake emergency or time-sensitive opportunity. Successful whaling attacks can expose a lot of confidential, sensitive information due to the high-level network access these executives and directors have.
In an old-school diversion theft scheme, the thief persuades a delivery driver or courier to travel to the wrong location or hand off a parcel to someone other than the intended recipient. In an online diversion theft scheme, a thief steals sensitive data by tricking the victim into sending it to or sharing it with the wrong person. The thief often accomplishes this by spoofing the email address of someone in the victim’s company—an auditing firm or a financial institution, for example.
Baiting is a type of social engineering attack that lures victims into providing sensitive information or credentials by promising something of value for free. For example, the victim receives an email that promises a free gift card if they click a link to take a survey. The link might redirect them to a spoofed Office 365 login page that captures their email address and password and sends them to a malicious actor.
In a honey trap attack, the perpetrator pretends to be romantically or sexually interested in the victim and lures them into an online relationship. The attacker then persuades the victim to reveal confidential information or pay them large sums of money.
Pretexting is a fairly sophisticated type of social engineering attack in which a scammer creates a pretext or fabricated scenario—pretending to be an IRS auditor, for example—to con someone into providing sensitive personal or financial information, such as their social security number. In this type of attack, someone can also physically acquire access to your data by pretending to be a vendor, delivery driver, or contractor to gain your staff’s trust.
SMS phishing is becoming a much larger problem as more organizations embrace texting as a primary method of communication. In one method of SMS phishing, scammers send text messages that spoof multi-factor authentication requests and redirect victims to malicious web pages that collect their credentials or install malware on their phones.
Scareware is a form of social engineering in which a scammer inserts malicious code into a webpage that causes pop-up windows with flashing colors and alarming sounds to appear. These pop-up windows will falsely alert you to a virus that’s been installed on your system. You’ll be told to purchase and download their security software, and the scammers will either steal your credit card information, install real viruses on your system, or (most likely) both.
Tailgating, also known as piggybacking, is a social engineering tactic in which an attacker physically follows someone into a secure or restricted area. Sometimes the scammer will pretend they forgot their access card, or they’ll engage someone in an animated conversation on their way into the area so their lack of authorized identification goes unnoticed.
In a watering hole attack, a hacker infects a legitimate website that their targets are known to visit. Then, when their chosen victims log into the site, the hacker either captures their credentials and uses them to breach the target’s network, or they install a backdoor trojan to access the network.
Social engineering represents a critical threat to your organization’s security, so you must prioritize the prevention and mitigation of these attacks as a core part of your cybersecurity strategy. Preventing a social engineering attack requires a holistic approach to security that combines technological security tools with comprehensive training for staff and executives.
Your first line of defense against a social engineering attack is training. Everyone in your organization should know how to spot the most common social engineering tactics, and they should understand the psychological triggers that scammers use to take advantage of people. A comprehensive social engineering and security awareness training course should teach staff to recognize the most common attack types, question unexpected or urgent requests for credentials or payments, and report anything suspicious right away.
You also need to follow up your security awareness training with periodic tests to ensure your staff hasn’t become complacent. Many training programs allow for the administration of simulated phishing tests in which fake phishing emails are sent to staff members to gauge how many people fall for the social engineering tactics. Those staff members can then be retrained as needed.
Creating a positive security culture within your organization is critical for containing a social engineering attack that’s already happened. Your staff needs to feel comfortable self-reporting if they believe they’ve fallen victim to a social engineering attack, which they won’t do if they’re concerned about facing punishment or public humiliation. If these issues are reported as soon as they occur, the threat can be mitigated quickly before too much damage has occurred.
Finally, you need to implement technological security tools to prevent attacks on your organization and minimize the damage from any successful breaches. These tools should include firewalls, email spam filters, antivirus and anti-malware software, network monitoring tools, and patch management.
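As a tiny illustration of the kind of check the email-filtering layer performs, the sketch below flags HTML links whose visible text names one domain while the underlying href points somewhere else, a classic phishing tell. It is a toy heuristic for illustration only, not a description of any particular filtering product.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkMismatchDetector(HTMLParser):
    """Flag <a> tags whose visible text names a different domain than the href."""

    def __init__(self):
        super().__init__()
        self._href = None
        self._text = ""
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = ""

    def handle_data(self, data):
        if self._href is not None:
            self._text += data

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            shown = self._text.strip().lower()
            href_domain = urlparse(self._href).netloc.lower()
            # Only flag when the visible text looks like a domain and doesn't match the real target
            if "." in shown and href_domain and href_domain not in shown:
                self.suspicious.append((shown, self._href))
            self._href = None

detector = LinkMismatchDetector()
detector.feed('<a href="http://attacker.example/login">www.yourbank.com</a>')
print(detector.suspicious)  # [('www.yourbank.com', 'http://attacker.example/login')]
```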
It has long been a given that there are a lot of things that networks and workstations are vulnerable to. At the top of the list are harmful files and sudden intrusions that are obviously up to no good. While firewalls may be seen as something that would prevent such attacks, intrusion detection systems cater more to activity inside the system, usually providing warnings so that network administrators can take the required action on the issue at hand.
Also, IDS monitors the behavior of the internal system since attacks of any sort may occur from files that can be initiated at any time or have already passed through the firewall for some reason beyond the set security policies.
It is a good practice to always check the network communications and identify possible security breaches. While intrusion detection systems can be able to apprehend abnormal processes, the presence of such intrusions within the internal system only proves that system and network security should be re-evaluated for stricter measures.
Transformational Leadership is a concept for leaders to transform the workplace by inspiring team members to create change. Transformational leadership comes naturally to very few people—which means you can learn to become a transformational leader.
Let’s take a look at transformational leadership and how it works within the enterprise.
(This tutorial is part of our IT Leadership & Best Practices Guide.)
What is Transformational Leadership?
Transformational leadership is a style of leadership in which leaders encourage, inspire, and motivate employees to create change that overall helps to grow and shape the future success of a company. The executive level sets examples that contribute to three enterprise characteristics:
- Influential company culture
- Employee ownership
- An independent workplace without a level of micromanagement
Transformational Leaders lead with a level of trust to create a healthy work environment. This style trains staff members to take authority over decisions within their assigned jobs. This leadership style allows employees to:
- Be more creative
- Plan for future success
- Discover new solutions for problems that arise within their duties
Though Transformational Leadership might begin with the leader directly, it does not end there. Employees trained within this leadership style are also preparing to become transformational leaders themselves through company mentorship and training.
Thanks to its very nature—developing and delivering a team vision to change the culture of the company for the better—transformational leadership can be applied in a wide range of settings.
A brief history of Transformational Leadership
Transformational leadership was first conceptualized by James V. Downton, an American sociologist, in 1973. However, the leadership style was not fully developed until 1978, when American historian and political scientist James MacGregor Burns wrote Leadership. Burns studied various political leaders, including both Franklin D. Roosevelt and John F Kennedy. During this time, he developed his theory of Transformational Leadership.
This leadership concept was expanded further during the 1980s by Bernard M. Bass, an American scholar in leadership studies and organizational behavior. Bass noted that the transformational leadership model inspired followers to:
- Reach a higher level on the consciousness towards the company’s goals
- Rise above their self-interest for the organization
- Approach a higher level of needs
Transformational leadership is also associated with the Servant Leadership philosophy notably embraced by Mahatma Gandhi, Nelson Mandela, and other historical figures.
The 4 I’s of Transformational Leadership
Bass suggests that transformational leadership involves four different elements, known today as the 4 I’s. These factors are crucial for any leader who wants to inspire, nurture, and develop their employees.
Let’s take a look at each factor.
Idealized influence: Promoting trust to earn respect
Idealized influence refers to how Transformational Leaders exert their weight within a group. A transformational leader must serve as a role model for their followers. Instilling trust and respect in followers inspires them to emulate this individual and internalize the leader's ideals.
Their team exceptionally respects these leaders because of the example they set forth for others. This type of leader also provides a clear vision and a sense of belonging, which encourages individuals to follow long-term objectives and drives them to achieve their own goals within the organization.
This leader is a powerful role model, and based on the example set, their team of followers will imitate this leader and aspire to become leaders themselves.
Intellectual stimulation: Challenging the status quo
Intellectual stimulation means creating a diverse and open environment within the Transformation Leader’s organization. The setting is a space open to innovation and forming new ideas both for the company and for themselves.
The Transformational Leadership style challenges the status quo, encouraging team members to think outside of the box to reach their creative potential. This type of lead galvanizes their team to explore new ways of doing things and seek new opportunities to learn and grow within the organization.
This leadership style can play an influential role by openly pushing their followers to challenge their own beliefs and values (as well as those of the company) to stray from the norm.
Inspirational motivation: Encouraging, motivating, and inspiring others
Performance is a vital component of the Transformational Leadership style. A Transformational Leader must be able to motivate and inspire their team. This leader improves performance by boosting the team's morale through motivational techniques and by presenting themselves as an inspirational force that drives the organization's team members.
A Transformational Leader communicates their high expectations positively to individual followers and encourages them one-on-one to gain their trust and commitment to the shared vision of the company's goals and beliefs.
Instilling a clear vision that the Transformational Leader can voice to their team allows the leader to foster passion and motivation to meet their team member’s individual and the company’s goals.
Character: Communicating openly to support all team members
Together with fostering an open environment, a Transformational Leader is actively seeking to create a diverse and supportive space. In this workspace, all individuals and their differences are respected and celebrated.
A leader of the Transformational leadership style offers support and encouragement to their team. Creating this growing supportive relationship involves a leader that keeps an open line of communication so that:
- Team members feel comfortable sharing ideas.
- Leaders can offer direct recognition of team member contributions.
Open communication allows for the lead to act as a mentor and coach for the team members continuing to work towards developing, empowering, and inspiring their team to achieve more.
This leader is always happy to listen to other’s concerns or needs of their team. Individual consideration is key to creating future leaders.
Benefits of Transformational Leadership
Both the leaders and the team members of Transformational Leadership experience many positive outcomes, including:
- Active engagement
- Higher productivity
- Personal and professional satisfaction
- Positive attitude
- Lower stress
How to become a Transformational Leader
Becoming a Transformational Leader is something you work at; it’s not something you either have—or don’t. Transformational Leaders actively embrace and commit to the 4 I’s to become a Transformational Leader.
To move towards Transformational Leadership, start by assessing your current leadership style. Consider how your strengths can benefit the team you are leading. Acknowledge your weaknesses and gaps and consider ways that you can overcome or limit these.
Some steps to developing your leadership style include:
- Understanding your own strengths and weaknesses
- Developing an inspiring vision for the future
- Motivating everyone around you
- Involving yourself in the Transformational Leadership concept
- Building trust and loyalty with your team members
Developing these tools of transformational leadership and working to improve you and your team’s areas of weakness to reach your overall goals can put you on a path to becoming a transformational leader.
Transformational leadership for the enterprise
The transformational leadership style can be hugely affected when used appropriately in a company or organization setting.
Every working environment is different. In some cases, a team or certain individuals may require a leadership style with more management that involves a closer eye and more significant direction, especially in a situation where team members may be less skilled or newer to the company.
For more on business leadership, explore these resources:
- BMC Business of IT Blog
- Guide to IT Leadership & Best Practices, with 10+ articles
- Guide to IT Careers
- What Is a Chief Service Officer (CSO)?
- CIO Leadership Styles
- Must-Read Emotional Intelligence Books for CIOs and IT Leaders | <urn:uuid:413ac4ea-c43d-43c5-8b39-6fa6e402d072> | CC-MAIN-2024-38 | https://blogs.bmc.com/transformational-leadership/ | 2024-09-12T01:23:29Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651420.25/warc/CC-MAIN-20240912011254-20240912041254-00750.warc.gz | en | 0.945152 | 1,569 | 2.625 | 3 |
The conversation got underway slowly. It began last July, when the Paris-based think tank The Shift Project published a report [PDF] advocating a kind of social state of mind it called “digital sobriety.” The report alleged that electricity expended in the act of delivering streaming video during 2018 (including data center power) was equal to about 96 percent of electricity used in all of Spain in 2011.
Then, in October, the online issues blog Big Think brought more publicity to the report. The blog quoted a member of The Shift Project as saying that Netflix’s streaming of a half-hour-long video to one viewer resulted in the same amount of CO2 emissions as driving an ordinary gasoline-powered automobile four miles.
The debate picked up steam this January, when Stanford University consulting professor Jonathan Koomey weighed in on Twitter. Citing a corollary of widely-held industry observation that has come to be known as Koomey’s Law, Dr. Koomey called foul:
Big Think has since heavily revised the blog page that started all this, removing many of the claims and dramatically shrinking the size (as well as the claims) of the article.
The fact that there is a debate on this topic, at this general a level, at this stage of the global climate change crisis, speaks to how little we all know about the evolving, perhaps metamorphosing, topics of data center power and carbon emissions. On one side of the argument is the genuine disbelief, backed up by data (which we’ll share momentarily) in the theory that every viewing of a TV sitcom triggers the emission of as much carbon as a drive to the hospital. Perhaps no industry has done more to consistently re-engineer its core architecture for measurably reduced power consumption than the data center industry.
Yet there’s some validity to the report’s central concern. When television was a broadcast industry, transmitters may have had tremendous power boasted by their stations in “thousands of watts!” Yet they served tens of thousands of viewers simultaneously. Certainly, video streaming, being a data center-driven task, must consume substantially more power than wireless broadcast transmission in serving the same number of viewers.
Are we being too careless in not knowing just how many kilograms or pounds of CO2 we’re talking about off the top of our heads?
“To do these kinds of calculations, you have to have a certain base level of technical knowledge,” Koomey said in an interview with Data Center Knowledge. “And you have to have experience with the kinds of problems that can come into these calculations.”
fulton streaming chart 1
In 2017, Koomey co-authored a study, published in the Journal of Industrial Ecology, investigating the energy consumption of all classes of data being delivered via internet by data center servers. That study produced the latest rendition at the time of the phenomenon of Koomey’s Law: The electricity intensity of internet transmissions, measured in kilowatt-hours per gigabyte (kWh/GB), were continuing to drop at a logarithmic rate, with no end in sight. Using observed data, the team calculated that in 2015 data centers consumed 0.06 kWh/GB, compared to about 6.8 kWh/GB in 2000. At the rate of decline measured then, data centers would have stopped using electricity altogether before the end of 2019.
That being impossible, Koomey told us, it’s fair to assume that data centers today, at the start of 2020, consume as little as 0.015 kWh/GB in the transmission of data through the internet to a user. This would be an average for all classes of data-driven applications, including but not limited to streaming video.
By Netflix’s own public estimates, a one-hour-long 4K Ultra-HD video consumes 7GB of transmitted data on average. Using Koomey’s current-consumption estimate, that half-hour stream should consume 0.0525kWh of electricity.
As DCK readers know full well, the world’s data centers are not entirely fueled by coal-burning generators. In Koomey’s experience researching all forms of energy generation, a common rule of thumb is that every 1kWh of energy generated by coal yields 1 kg of CO2. But according to more direct measurements, in a hypothetical all-coal world, every kilowatt-hour would yield 52.5 grams of CO2 emissions.
For its report on online video, The Shift Project chose to use the energy conversion factor of 0.519 kg CO2 e (carbon dioxide equivalent) per 1kWh. Up until recently, that figure had been used by the UK Government’s Building Research Establishment as a 2014 estimate of energy emissions on account of electricity consumed by residential dwellings, including bungalows and huts.
But citing the dramatic new energy strategies applied to the UK’s energy grid since that time, British regulators in January 2019 adopted a downwardly revised figure of 0.233 kg CO2 e per 1kWh. It’s important to note, however, that this figure was never intended to be applied to data centers. Furthermore, it’s not a worldwide estimate, as The Shift Project claimed, but a rule-of-thumb number applied mostly to British households.
Is there a conversion factor that applies not just to the specific case of data center power but the narrower use case of serving streaming video? For the answer, we found Andrew Sauber, an engineer with cloud platform provider Linode. His previous experience includes responsibility for servers with the express purpose of streaming video — specifically for what was until 2018 the world’s principal online destination for South Korean soap operas.
In Sauber’s experience, each physical CDN server maintained a 40 Gbps link to the internet. Each connection to that server consumed 7,500 kbps of bandwidth. The power draw for each server, he reported, was 321W. Using that math, at full saturation (for which each operator was instructed to strive) each server would be providing 5,333 simultaneous streams.
By Sauber’s estimate, the server would consume 160.5 watt-hours to serve a half-hour video to more than 5,000 viewers simultaneously. Using that math, a half-hour soap opera streamed from his server to each viewer would consume 0.0301 watt-hours.
The Shift Project’s projection attempted to include all carbon emissions for energy generated through the entire streaming video delivery lifecycle, which would encompass internet transmission power, including routers and switches for all intermediate hops, as well as the device the viewer used. An ordinary 49-inch diagonal smart TV (one that connects to Netflix directly, without a dongle or attached PC) consumes 145W. As for internet transmission power, Koomey provided an estimate based on studies in which he directly participated: about 0.07kWh for the half-hour period.
Adding it all up, streaming half an hour of a Korean soap opera takes 0.1426 kWh of electricity. Sauber computed that an average light-duty automobile traveling four miles should consume some 6.28kWh. According to Natural Resources Canada, an internal combustion engine burning 1 gallon of gasoline produces about 8.7 kg CO2 e. A nearly four-mile drive in Sauber’s light-duty car would produce 1.52 kg of carbon emissions.
Using the corrected version of the metric Shift Project chose, that half-hour soap opera would produce about 0.0332 kg CO2 e. That puts The Shift Project’s estimate off by a factor of about 45. Put another way, every component in that soap opera streaming process would collectively pump the equivalent amount of carbons as that light-duty vehicle traveling about 461 feet. That’s about as long as the City Hall Building in Los Angeles (the one on the “Dragnet” police badge) is tall.
Even these results have probably become dated by current trends. Unlike Sauber’s old digs, Netflix uses a revolutionary microservices-driven server architecture its engineers invented. The old CDN was designed around the first generation of virtual machines, probably during the 2000s, when containerization was a class of advice given to co-workers who shared the same refrigerator in the break room. Today, Sauber’s Linode environment is highly containerized. Appliances that once handled the task of load balancing have been replaced by software-based microservices such as NGINX and HAProxy that distribute workloads much more efficiently, and at a very granular level.
“The reason that containers are so much more efficient than virtual machines is that they don't need to pretend to be a collection of physical hardware,” Sauber wrote in a note to DCK. “They simply use a variety of Linux kernel features (cgroups, network namespaces) that isolate the filesystem and network of a particular workload.”
Sauber personally contributes to these increases in efficiency as a contributor to Kubernetes, through his membership in the CNCF’s Cluster Lifecycle SIG. There, he’s personally witnessed efficiency gains in orders of magnitude, the precise measurements of which have yet to be fathomed.
So, the actual amount of carbon emissions resulting from operations of Netflix and Amazon Web Services (on top of which much of Netflix runs) is probably even lower than our estimate above.
How could The Shift Project overestimate by so much? Koomey noted that the French think tank often cited the work of Huawei Technologies researcher Dr. Anders Andrae, whose 2015 estimate of global electricity usage [PDF] speculated that by 2030 communications tech could be responsible for consuming more than half the total electricity generated in the world.
As Koomey told us, Andrae’s methodology is based on a forecast of projected increases in the world’s consumption of data. That forecast was based on multi-fold increases estimated at the start of the last decade. Andrae then, Koomey noted, applied blanket electricity-consumption figures against data consumption, and then increased his energy projections by the same factor as his data projections – without taking account of the efficiencies and innovations attained in both data distribution and energy consumption in recent years. In other words, the Andrae study appears to assume that a kilowatt-hour five or ten years from now will have the same efficiencies as a kilowatt-hour today.
What have we learned from this exercise? It should not take an under-researched overestimate of ITC energy usage or carbon emissions for people in the ITC industry to take serious note of how much power data centers, and the technology surrounding them, consume. Granted, servers are not powered by V8 engines or Franklin stoves, nor are they spitting out carbons as though they were.
fulton streaming chart 3 corrected.png
But if we are to take activist Greta Thunberg’s admonitions seriously, then we need to have a comprehension of the systems we’re using and the energy they utilize at least as thorough as Andrew Sauber’s. If we let folks on the talk-show circuit take over this topic, they will pollute the conversation as rapidly as greenhouse gases have polluted Earth’s atmosphere. Then, when we need the support – and eventually the good faith – of people in government, we’ll find them already hijacked by extremists.
Driving a gasoline-burning car even 460 feet still contributes to the greatest threat this planet faces. Now that we know it’s 460 feet, we have a better chance of making it 230, and from there, 115.
About the Author
You May Also Like | <urn:uuid:32dc0812-8869-45e4-b461-db606d390720> | CC-MAIN-2024-38 | https://www.datacenterknowledge.com/energy-power-supply/how-much-is-netflix-really-contributing-to-climate-change- | 2024-09-14T14:21:15Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651579.38/warc/CC-MAIN-20240914125424-20240914155424-00550.warc.gz | en | 0.949401 | 2,457 | 2.578125 | 3 |
Field Service Workers are regularly engaged to collect data or carry out inspections and assessments when visiting customer sites or remote area locations.
The data collected by Field Service workers, will be used by businesses who will analyse, process and build reports based on the large volumes of data collected. The accuracy and reliability of data collected is vitally important.
Traditionally businesses may have deployed mail surveys, telephone interviews, door-to-door surveys and interviews performed by Field Workers to collect data.
Digital Transformation is gradually changing many business operations and a great deal of processes which were traditionally executed manually are now accomplished making use of digital methods.
Technology is having a major impact not only how businesses research and analyse data, but primarily how data and information is collected. New tools and processes to data collection are improving data collection and analysis, leading to dramatic improvements and maximisation and optimisation of resources and operations.
Utilising Digital Data Collection methods enables organisations to not only obtain results quicker but also use the data to make data based decisions faster.
What is a Digital Form?
Digital Forms, also known as Mobile Forms are electronic versions of paper forms that can be completed using:- Laptop
- Smart Phone
- Any Mobile Device
Why Use Digital Forms ?
Digital forms can be a simple yet highly effective solution to overcome the challenges presented by paper based forms. Digital forms can be filled out directly using Smart phones and tablets in the field- When not connected to the internet or even low speed internet connections
- When working in remote locations
- To avoid damage, illegible handwriting or even lost and misplaced forms.
Advantages of Digital Forms
Time and Cost Saving
Using Digital Forms instead of paper-based forms provides a significant impact on improving time and cost savings on printing, storing and distribution costs. Businesses also spend a significant amount of time and money in Administration and double data entry processes incurred by paper based forms. Transferring information from paper based surveys is an error prone process.Digital Forms can save up to 20 man hours a week in administration costs
Improve data accuracy
Digital Forms can auto-populate fields based on prior data entered and also enable field-level validation. Digital data collection also eliminated data entry errors and data loss. Additional data can also be automatically be gathered such as Username, Geo-location and Time & Date.Real Time Reporting
The issue with Paper-based data collection is that there will always be a time lag before reports or decision can be made. With a digital platform, such as FieldElite – Mobile Workforce Management , data can be processed and analysed as it is collected. Providing data driven insights to provide proactive rather than reactive reports to improve and optimise operations in real time.It’s time to go Digital Forms!
Data Collection using Digital Forms will propel your company into the future and transform your data collection, data entry and analysis providing accurate data driven insights in real time. Digital forms are also mobile-optimized, updated in real time, and accessible by multiple parties, eliminating unnecessary meetings and emails. If you have a business and still haven’t used digital forms to gather information, contact Denizon today to organise a Demo of FieldElite – Mobile Workforce Management and discover how we can help you to transform your Field Service OperationsContact Us
- (+353)(0)1-443-3807 – IRL
- (+44)(0)20-7193-9751 – UK | <urn:uuid:be3ddbe2-95a6-490f-905e-f53ae9ec5d5c> | CC-MAIN-2024-38 | https://www.denizon.com/what-are-the-benefits-of-digital-forms-data-collection/ | 2024-09-20T19:45:02Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701423570.98/warc/CC-MAIN-20240920190822-20240920220822-00050.warc.gz | en | 0.918209 | 702 | 2.546875 | 3 |
On December 14, 2020, nearly 70 million users were affected when a number of critical Google services went down for more than an hour. The outage highlighted both our daily dependency on essential IT services, which has only increased due to the pandemic, as well as the need to ensure that the critical services running across government, healthcare, finance and business networks always remain available. Ensuring this availability is a key mission behind World Backup Day, one that is especially relevant today.
A backup is simply a copy of critical files that ensures access in the event that the primary location is unavailable. When implemented along with a strong disaster recovery plan, data backups give organizations the chance to restart services quickly in case of a natural disaster or a cyberattack.
Backing up is a key part of robust cybersecurity planning, but especially in today’s world with a 25% uptick in ransomware attacks from 2019 to 2020. And while ransomware victims paid nearly $30 million to get their data back, backup data is the quickest and cheapest way to mitigate the damage and recover from this kind of attack. Backing up data is especially critical in healthcare, finance and government organization. From 2016-2020, about 91.8 million data records were stolen in ransomware attacks where the healthcare organizations either lost the data permanently or ended up paying attackers ransom to regain access to their data. With data backups, organizations can retrieve their crucial data and keep their critical health IT operations running.
There are a number of options for an organization deciding which backup method suits its purposes—Full, Differential, Incremental and Mirror Backup are some of the most common options, each with its own cost and benefit. A more important, and less-often asked, question for organizations however is which deployment option the organization prefers for its backup. The available options are:
- On-premises: This is preferred when the recovery time objective—the time taken to regain operations after an attack or network outage—is too short to involve off-premises elements. While it retrieves data and recovers IT services rapidly, disaster recovery is challenging in cases involving natural disasters and power outages.
- Cloud: This method comes with the benefits of scalability and lower capital investment costs. It is also more efficient during regional disasters and often comes with cloud-managed disaster recovery plans. The only catch is that organizations with strict data privacy regulations might find it difficult to adopt a fully cloud backup solution.
- Hybrid: With this method, the organization can decide to keep the production environment on-premises while storing the data backup in the cloud. While this ensures that critical data stays on-premises, it also ensures that the organization can leverage the scalability and diversification benefits of the cloud as well.
A strong backup and disaster recovery plan is a smart insurance policy that organizations need to protect themselves from data loss and its repercussions on their business. World Backup Day is a great reminder to align our priorities and get ahead of attackers with preemptive measures for a resilient IT infrastructure that can recover quickly from unforeseen outages. | <urn:uuid:244fad84-a02f-4e96-8789-1140a29b3adb> | CC-MAIN-2024-38 | https://blogs.infoblox.com/community/get-your-network-back-up-this-world-backup-day-2021/ | 2024-09-08T15:23:17Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651013.49/warc/CC-MAIN-20240908150334-20240908180334-00314.warc.gz | en | 0.954651 | 616 | 2.6875 | 3 |
VLAN Routing with Layer 3 Switch Routed Ports
When Layer 3 switches use SVIs, the physical interfaces on the switches act like they always have: as Layer 2 interfaces. That is, the physical interfaces receive Ethernet frames. The switch learns the source MAC address of the frame, and the switch forwards the frame based on the destination MAC address. To perform routing, any Ethernet frames destined for any of the SVI interface MAC addresses trigger the processing of the Layer 2 switching logic, resulting in normal routing actions like stripping data-link headers, making a routing decision, and so on.
Alternately, the Layer 3 switch configuration can make a physical port act like a router interface instead of a switch interface. To do so, the switch configuration makes that port a routed port. On a routed port, the switch does not perform Layer 2 switching logic on that frame. Instead, frames arriving in a routed port trigger the Layer 3 routing logic, including
Stripping off the incoming frame’s Ethernet data-link header/trailer
Making a Layer 3 forwarding decision by comparing the destination IP address to the IP routing table
Adding a new Ethernet data-link header/trailer to the packet
Forwarding the packet, encapsulated in a new frame
This third major section of the chapter examines routed interfaces as configured on Cisco Layer 3 switches, but with a particular goal in mind: to also discuss Layer 3 EtherChannels. The exam topics do not mention routed interfaces specifically, but the exam topics do mention L3 EtherChannels, meaning Layer 3 EtherChannels.
You might recall that Chapter 10, “RSTP and EtherChannel Configuration,” discussed Layer 2 EtherChannels. Like Layer 2 EtherChannels, Layer 3 EtherChannels also treat multiple links as one link. Unlike Layer 2 EtherChannels, however, Layer 3 EtherChannels treat the channel as a routed port instead of switched port. So this section first looks at routed ports on Cisco Layer 3 switches and then discusses Layer 3 EtherChannels.
Implementing Routed Interfaces on Switches
When a Layer 3 switch needs a Layer 3 interface connected to a subnet, and only one physical interface connects to that subnet, the network engineer can choose to use a routed port instead of an SVI. Conversely, when the Layer 3 switch needs a Layer 3 interface connected to a subnet, and many physical interfaces on the switch connect to that subnet, an SVI needs to be used. (SVIs forward traffic internally into the VLAN, so that then the Layer 2 logic can forward the frame out any of the ports in the VLAN. Routed ports cannot.)
To see why, consider the design in Figure 17-4, which repeats the same design from Figure 17-3 (used in the SVI examples). In that design, the gray rectangle on the right represents the switch and its internals. On the right of the switch, at least two access ports sit in both VLAN 10 and VLAN 20. However, that figure shows a single link from the switch to Router B1. The switch could configure the port as an access port in a separate VLAN, as shown with VLAN 30 in Examples 17-6 and 17-7. However, with only one switch port needed, the switch could configure that link as a routed port, as shown in the figure.
FIGURE 17-4 Routing on a Routed Interface on a Switch
Enabling a switch interface to be a routed interface instead of a switched interface is simple: just use the no switchport subcommand on the physical interface. Cisco switches capable of being a Layer 3 switch use a default of the switchport command to each switch physical interface. Think about the word switchport for a moment. With that term, Cisco tells the switch to treat the port like it is a port on a switch—that is, a Layer 2 port on a switch. To make the port stop acting like a switch port and instead act like a router port, use the no switchport command on the interface.
Once the port is acting as a routed port, think of it like a router interface. That is, configure the IP address on the physical port, as implied in Figure 17-4. Example 17-10 shows a completed configuration for the interfaces configured on the switch in Figure 17-4. Note that the design uses the exact same IP subnets as the example that showed SVI configuration in Example 17-6, but now, the port connected to subnet 10.1.30.0 has been converted to a routed port. All you have to do is add the no switchport command to the physical interface and configure the IP address on the physical interface.
Example 17-10 Configuring Interface G0/1 on Switch SW1 as a Routed Port
ip routing ! interface vlan 10 ip address 10.1.10.1 255.255.255.0 ! interface vlan 20 ip address 10.1.20.1 255.255.255.0 ! interface gigabitethernet 0/1 no switchport ip address 10.1.30.1 255.255.255.0
Once configured, the routed interface will show up differently in command output in the switch. In particular, for an interface configured as a routed port with an IP address, like interface GigabitEthernet0/1 in the previous example:
show interfaces: Similar to the same command on a router, the output will display the IP address of the interface. (Conversely, for switch ports, this command does not list an IP address.)
show interfaces status: Under the “VLAN” heading, instead of listing the access VLAN or the word trunk, the output lists the word routed, meaning that it is a routed port.
show ip route: Lists the routed port as an outgoing interface in routes.
show interfaces type number switchport: If a routed port, the output is short and confirms that the port is not a switch port. (If the port is a Layer 2 port, this command lists many configuration and status details.)
Example 17-11 shows samples of all four of these commands as taken from the switch as configured in Example 17-10.
Example 17-11 Verification Commands for Routed Ports on Switches
SW11# show interfaces g0/1 GigabitEthernet0/1 is up, line protocol is up (connected) Hardware is Gigabit Ethernet, address is bcc4.938b.e541 (bia bcc4.938b.e541) Internet address is 10.1.30.1/24 ! lines omitted for brevity SW1# show interfaces status ! Only ports related to the example are shown; the command lists physical only Port Name Status Vlan Duplex Speed Type Fa0/1 connected 10 a-full a-100 10/100BaseTX Fa0/2 notconnect 10 auto auto 10/100BaseTX Fa0/3 connected 20 a-full a-100 10/100BaseTX Fa0/4 connected 20 a-full a-100 10/100BaseTX Gi0/1 connected routed a-full a-1000 10/100/1000BaseTX SW1# show ip route ! legend omitted for brevity 10.0.0.0/8 is variably subnetted, 6 subnets, 2 masks C 10.1.10.0/24 is directly connected, Vlan10 L 10.1.10.1/32 is directly connected, Vlan10 C 10.1.20.0/24 is directly connected, Vlan20 L 10.1.20.1/32 is directly connected, Vlan20 C 10.1.30.0/24 is directly connected, GigabitEthernet0/1 L 10.1.30.1/32 is directly connected, GigabitEthernet0/1 SW1# show interfaces g0/1 switchport Name: Gi0/1 Switchport: Disabled
So, with two options—SVI and routed ports—where should you use each?
For any topologies with a point-to-point link between two devices that do routing, a routed interface works well.
Figure 17-5 shows an example of where to use SVIs and where to use routed ports in a typical core/distribution/access design. In this design, the core (Core1, Core2) and distribution (D11 through D14) switches perform Layer 3 switching. All the ports that are links directly between the Layer 3 switches can be routed interfaces. For VLANs for which many interfaces (access and trunk) connect to the VLAN, SVIs make sense because the SVIs can send and receive traffic out multiple ports on the same switch. In this design, all the ports on Core1 and Core2 will be routed ports, while the four distribution switches will use some routed ports and some SVIs.
FIGURE 17-5 Using Routed Interfaces for Core and Distribution Layer 3 Links
Implementing Layer 3 EtherChannels
So far, this section has stated that routed interfaces can be used with a single point-to-point link between pairs of Layer 3 switches, or between a Layer 3 switch and a router. However, in most designs, the network engineers use at least two links between each pair of distribution and core switches, as shown in Figure 17-6.
FIGURE 17-6 Two Links Between Each Distribution and Core Switch
While each individual port in the distribution and core could be treated as a separate routed port, it is better to combine each pair of parallel links into a Layer 3 EtherChannel. Without using EtherChannel, you can still make each port on each switch in the center of the figure be a routed port. It works. However, once you enable a routing protocol but don’t use EtherChannels, each Layer 3 switch will now learn two IP routes with the same neighboring switch as the next hop—one route over one link, another route over the other link.
Using a Layer 3 EtherChannel makes more sense with multiple parallel links between two switches. By doing so, each pair of links acts as one Layer 3 link. So, each pair of switches has one routing protocol neighbor relationship with the neighbor, and not two. Each switch learns one route per destination per pair of links, and not two. IOS then balances the traffic, often with better balancing than the balancing that occurs with the use of multiple IP routes to the same subnet. Overall, the Layer 3 EtherChannel approach works much better than leaving each link as a separate routed port and using Layer 3 balancing.
Compared to what you have already learned, configuring a Layer 3 EtherChannel takes only a little more work. Chapter 10 already showed you how to configure an EtherChannel. This chapter has already shown how to make a port a Layer 3 routed port. Next, you have to combine the two ideas by combining both the EtherChannel and routed port configuration. The following checklist shows the steps, assuming a static definition.
Step 1. Configure the physical interfaces as follows, in interface configuration mode:
A. Add the channel-group number mode on command to add it to the channel. Use the same number for all physical interfaces on the same switch, but the number used (the channel-group number) can differ on the two neighboring switches.
B. Add the no switchport command to make each physical port a routed port.
Step 2. Configure the PortChannel interface:
A. Use the interface port-channel number command to move to port-channel configuration mode for the same channel number configured on the physical interfaces.
B. Add the no switchport command to make sure that the port-channel interface acts as a routed port. (IOS may have already added this command.)
C. Use the ip address address mask command to configure the address and mask.
Example 17-12 shows an example of the configuration for a Layer 3 EtherChannel for switch SW1 in Figure 17-7. The EtherChannel defines port-channel interface 12 and uses subnet 10.1.12.0/24.
FIGURE 17-7 Design Used in EtherChannel Configuration Examples
Example 17-12 Layer 3 EtherChannel Configuration on Switch SW1
interface GigabitEthernet1/0/13 no switchport no ip address channel-group 12 mode on ! interface GigabitEthernet1/0/14 no switchport no ip address channel-group 12 mode on ! interface Port-channel12 no switchport ip address 10.1.12.1 255.255.255.0
Of particular importance, note that although the physical interfaces and PortChannel interface are all routed ports, the IP address should be placed on the PortChannel interface only. In fact, when the no switchport command is configured on an interface, IOS adds the no ip address command to the interface. Then configure the IP address on the PortChannel interface only.
Once configured, the PortChannel interface appears in several commands, as shown in Example 17-13. The commands that list IP addresses and routes refer to the PortChannel interface. Also, note that the show interfaces status command lists the fact that the physical ports and the port-channel 12 interface are all routed ports.
Example 17-13 Verification Commands Listing Interface Port-Channel 12 from Switch SW1
SW1# show interfaces port-channel 12 Port-channel12 is up, line protocol is up (connected) Hardware is EtherChannel, address is bcc4.938b.e543 (bia bcc4.938b.e543) Internet address is 10.1.12.1/24 ! lines omitted for brevity SW1# show interfaces status ! Only ports related to the example are shown. Port Name Status Vlan Duplex Speed Type Gi1/0/13 connected routed a-full a-1000 10/100/1000BaseTX Gi1/0/14 connected routed a-full a-1000 10/100/1000BaseTX Po12 connected routed a-full a-1000 SW1# show ip route ! legend omitted for brevity 10.0.0.0/8 is variably subnetted, 4 subnets, 2 masks C 10.1.2.0/24 is directly connected, Vlan2 L 10.1.2.1/32 is directly connected, Vlan2 C 10.1.12.0/24 is directly connected, Port-channel12 L 10.1.12.1/32 is directly connected, Port-channel12
For a final bit of verification, you can examine the EtherChannel directly with the show etherchannel summary command as listed in Example 17-14. Note in particular that it lists a flag legend for characters that identify key operational states, such as whether a port is bundled (included) in the PortChannel (P) and whether it is acting as a routed (R) or switched (S) port.
Example 17-14 Verifying the EtherChannel
SW1# show etherchannel 12 summary Flags: D - down P - bundled in port-channel I - stand-alone s - suspended H - Hot-standby (LACP only) R - Layer3 S - Layer2 U - in use f - failed to allocate aggregator M - not in use, minimum links not met u - unsuitable for bundling w - waiting to be aggregated d - default port Number of channel-groups in use: 1 Number of aggregators: 1 Group Port-channel Protocol Ports ------+-------------+-----------+----------------------------------------------- 12 Po12(RU) - Gi1/0/13(P) Gi1/0/14(P)
Troubleshooting Layer 3 EtherChannels
When you are troubleshooting a Layer 3 EtherChannel, there are two main areas to consider. First, you need to look at the configuration of the channel-group command, which enables an interface for an EtherChannel. Second, you should check a list of settings that must match on the interfaces for a Layer 3 EtherChannel to work correctly.
As for the channel-group interface subcommand, this command can enable EtherChannel statically or dynamically. If dynamic, this command’s keywords imply either Port Aggregation Protocol (PaGP) or Link Aggregation Control Protocol (LACP) as the protocol to negotiate between the neighboring switches whether they put the link into the EtherChannel.
If all this sounds vaguely familiar, it is the exact same configuration covered way back in the Chapter 10 section “Configuring Dynamic EtherChannels.” The configuration of the channel-group subcommand is exactly the same, with the same requirements, whether configuring Layer 2 or Layer 3 EtherChannels. So, it might be a good time to review those EtherChannel configuration details from Chapter 10. However, regardless of when you review and master those commands, note that the configuration of the EtherChannel (with the channel-group subcommand) is the same, whether Layer 2 or Layer 3.
Additionally, you must do more than just configure the channel-group command correctly for all the physical ports to be bundled into the EtherChannel. Layer 2 EtherChannels have a longer list of requirements, but Layer 3 EtherChannels also require a few consistency checks between the ports before they can be added to the EtherChannel. The following is the list of requirements for Layer 3 EtherChannels:
no switchport: The PortChannel interface must be configured with the no switchport command, and so must the physical interfaces. If a physical interface is not also configured with the no switchport command, it will not become operational in the EtherChannel.
Speed: The physical ports in the channel must use the same speed.
duplex: The physical ports in the channel must use the same duplex. | <urn:uuid:6ac36794-939a-4b55-bc12-ab9aac3cadf0> | CC-MAIN-2024-38 | https://www.ciscopress.com/articles/article.asp?p=2990405&seqNum=4 | 2024-09-08T16:39:03Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651013.49/warc/CC-MAIN-20240908150334-20240908180334-00314.warc.gz | en | 0.874467 | 3,698 | 2.890625 | 3 |
According to the HIPAA Security Rule, technical safeguards are “the technology and the policy and procedures for its use that protect electronic protected health information and control access to it.” So while administrative safeguards involve people and access and physical safeguards involve the physical premises, technical safeguards look at the technology and platforms used to protect sensitive PHI.
Protected Health Information (PHI), as defined by the Privacy Rule, is “individually identifiable health information held or transmitted by a covered entity or its business associate, in any form or media, whether electronic, paper, or oral.”
The Privacy Rule is a national standard intended to protect patients’ protected health information (PHI). The HIPAA Privacy Rule requires healthcare organizations and their third parties to implement appropriate safeguards to protect the privacy of this information. It regulates things like appropriate use and disclosure of PHI, patient access to PHI, and patient rights.
According to the Security Rule, physical safeguards are, “physical measures, policies, and procedures to protect a covered entity’s electronic information systems and related buildings and equipment, from natural and environmental hazards, and unauthorized intrusion.” Each organization’s physical safeguards may be different, and should be derived based on the results of the HIPAA risk analysis.
The Health Insurance Portability and Accountability Act (HIPAA) sets a national standard for the protection of consumers’ PHI by mandating risk management best practices and physical, administrative, and technical safeguards. HIPAA was established to provide greater transparency for individuals whose information may be at risk, and the Department of Health and Human Services’ Office for Civil Rights (OCR) enforces compliance with the HIPAA Privacy, Security, and Breach Notification Rules. | <urn:uuid:503e37e8-3efb-43ba-aac7-9000fbf967bb> | CC-MAIN-2024-38 | https://kirkpatrickprice.com/glossary-category/hipaa-terms/ | 2024-09-11T03:34:49Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651343.80/warc/CC-MAIN-20240911020451-20240911050451-00114.warc.gz | en | 0.916481 | 352 | 3.25 | 3 |
What is Encryption?
Encryption is the process of scrambling data so it can be read only be someone with the means (or the key) to return it to its original state. Modern encryption scrambles data by using a secret value or key. The data can then be decrypted, or made readable, by using the corresponding encryption key. Encryption keeps data safe. It stops criminals or any unauthorized person from stealing or tampering with information. It can be used to ensure you know who you are communicating with online and to sign digital documents. Encryption protects information when browsing the internet, using a credit or debit card either in store or online or via a mobile app and when using secure messaging apps. Companies use encryption to protect their sensitive information, like customer information, trade secrets and financial records. While encryption may not prevent a data breach it does make the data that is leaked or stolen unreadable and therefore useless should it fall into the wrong hands.
What is an electronic signature?
An electronic signature is an electronic representation of a proof of consent, such as a finger-drawn signature on a tablet, a scanned signature, or a click on an "I agree" button on a webpage. Electronic signatures can be legally accepted in many countries under specific conditions.
What is a digital seal?
A digital seal is a high-assurance type of electronic signature. It is a cryptographic operation on a document, performed using a digital certificate that contains the identity of a legal person (e.g. an organization). The digital seal binds the content of the document to the organization, and prevents tampering. | <urn:uuid:afbc360e-8363-4c0d-bffc-bc0bbcee9a1a> | CC-MAIN-2024-38 | https://www.entrust.com/resources/learn/data-protection-security-regulations | 2024-09-11T02:22:21Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651343.80/warc/CC-MAIN-20240911020451-20240911050451-00114.warc.gz | en | 0.902646 | 324 | 3.375 | 3 |
Privacy Enhancing Technology
With the rise of digital healthcare, protecting patient data has become more critical.
Fortunately, we have privacy enhancing technologies (PETs) to help us protect our data online and offline.
This blog post will look at PETs and how they can provide data solutions by protecting your data. We’ll also explore some of the best privacy enhancing technologies available today and how you can make sure your data is protected with PETs.
What is Privacy Enhancing technology and how does it work?
Privacy enhancing technology (PET) is a term used to describe any tool, software, or hardware that helps protect user privacy. PETs use encryption algorithms and other techniques to protect against unauthorized access to sensitive data, including personal information or health records. For example, when you input your credit card number into an online store, you use a form of PET called end-to-end encryption that protects your credit card information from being stolen.
Why do we need Privacy Enhancing technology in healthcare?
As healthcare moves increasingly online, it becomes more critical to ensure that patient data remains private and secure. This is where PETs come in; they allow us to securely store and share sensitive medical information without compromising patient confidentiality. By using PETs in healthcare settings, we can ensure that patient data remains secure while allowing doctors to access the information they need to provide effective treatment.
How can Privacy Enhancing technology provide data solutions and protect you?
PETs use a variety of strategies as data solutions for protecting data both online and offline. For example, many PETs employ encryption algorithms that scramble the data so it cannot be read by anyone who does not have access to the key used for unlocking it.
Other privacy enhancing data solutions include firewalls that prevent unauthorized users from accessing sensitive systems; authentication tools that verify user identities before granting access; and digital signatures, which authenticate documents via cryptographic methods such as hashing algorithms or public key cryptography. All these technologies work together to provide comprehensive protection for your online and offline data.
What are some of the best Privacy Enhancing technologies available today?
Many privacy enhancing technologies are available today, from simple software solutions to complex hardware systems explicitly designed to protect large amounts of sensitive information. Some popular examples include Secure Sockets Layer (SSL), Virtual Private Networks (VPN), two-factor authentication (2FA) systems such as Google Authenticator or Authy, Zero Knowledge Proof protocols like Zcash, Homomorphic Encryption (HE) systems like Microsoft SEAL library, as well as distributed ledger technology such as Blockchain networks like Ethereum or Hyperledger Fabric.
Investing In Privacy Enhancing Technology: In Closing
Protecting patient confidentiality is essential for providing effective healthcare services in the modern world; however, doing so requires using powerful tools and data solutions such as privacy enhancing technologies (PET). This blog post explored PETs and how they work by providing secure storage and sharing options for sensitive medical information.
We also looked at some of the best privacy enhancing technologies available today and how they can help protect your online and offline data. By understanding how these powerful tools work together, we can ensure our patient’s confidential information remains safe while still allowing them access to the healthcare services they need when they need them most. Thank you for reading.
ABOUT THE AUTHOR
IPwithease is aimed at sharing knowledge across varied domains like Network, Security, Virtualization, Software, Wireless, etc. | <urn:uuid:ae186d20-c126-40a1-8ee4-fc19ccd3643e> | CC-MAIN-2024-38 | https://ipwithease.com/privacy-enhancing-technology/?wmc-currency=USD | 2024-09-16T00:24:43Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651668.26/warc/CC-MAIN-20240915220324-20240916010324-00614.warc.gz | en | 0.903204 | 696 | 2.5625 | 3 |
“Passwords are bad security,” says Eric Williams, Senior Sales Engineer at HID Global. “Everyone knows this. Passwords can be shared, phished, [or] stolen. And the user experience is not great, since the user must memorize this complex secret, which changes periodically.”
Since the 1960s, passwords have been the default security control for accessing computer devices and software. But today, poor password practices are one of the most common reasons for account compromise in the enterprise. While password security can be improved with the use of multi-factor authentication, the best way to address the root cause of weak passwords is to eliminate their use altogether.
Passwordless authentication technologies allow users to access applications or systems without needing a password at all. In the case of the FIDO2 authentication standard, for example, authentication is completed with a private key held on the local device, then matched with a public key registered with an online service. There is no need for the local end user to ever have an account password.
Combined with an extra verification step leveraging biometric controls, or a physical hardware token, this offers powerful security benefits above the use of a password. It is resilient against phishing and doesn’t require any additional effort on the part of the end user. Most importantly, this method is very difficult to hack.
To discuss the move toward passwordless authentication, Expert Insights sat down with Eric Williams, Senior Sales Engineer at HID Global. You can listen to our full conversation on the Expert Insights podcast:
The Workforce Is Going Passwordless
According to a recent Gartner report, by 2025 50% of the workforce will be using passwordless technologies. A key driver for this is a push for better protection in the form of data protection regulations, Williams explains. “Every year, new regulations pass around data protection. And they tend to be a bit modular, meaning that some of them are for highly regulated industries such as energy, banking, and finance. Others are to protect the data side for private citizens.”
“On the other side, there is GDPR which has a sweeping mandate in the EU, while the United States have, so far, mostly opted for a state-by-state approach. The authorities are still in the process of figuring out how to secure people’s data, and as they do so, there is a push to adopt security measures that enhance security, while improving the user experience.”
Improving the user experience is critical for the adoption of passwordless authentication, Williams explains. “A poor user experience is not just bad for the user, it’s bad for the implementor. It’s bad for whoever is enforcing it since people tend to find a way around obstacles. You need to make it very easy to do the right thing.”
For the enterprise, there can be no one-size-fits-all approach when it comes to authentication; rather, your security strategy must support a broad range of authentication options to suit user needs. For example, some users may not be comfortable, or able to use biometric authentication. In some circumstances, users can’t use their personal devices as an authentication token, but must instead use an ID badge, or a smart card.
All industries face unique challenges when it comes to passwordless adoption, and no two use cases are completely alike, Williams explains. In banking for instance, “You need to cover the entire customer journey, starting from onboarding, to creation of account, to transactions and interactions within the banking system – for example changing your address.” Passwordless solutions need to be flexible enough so that they can adapt to different situational requirements, without jeopardizing security or user-experience.
“We have solutions that help with that entire journey – starting with identity verification at the beginning, to providing passwordless authentication via push notification to the customer’s mobile phone. That’s all integrated, so the level of friction is zero! At the same time, we perform risk analytics behind the scenes to help with making decisions about when to request authentication. And sometimes that answer is to not authenticate at all, if the transaction seems safe enough.”
These systems, combined with improvements to authentication technologies, mean passwordless technologies vastly improve the user experience for the end user. “These security technologies have gotten better in recent years, everything from biometrics, to mobile authenticators, to the rise of FIDO credentials. The user experience has improved dramatically.”
Changing Risk In A Passwordless World
“As passwords start to disappear, cybercriminals are going to shift their mindsets to new targets”, Williams explains, namely to the device and to the user themselves. Already we are seeing rises in new forms of engineering to bypass multi-factor authentication controls, such as so-called ‘MFA fatigue,’ or ‘MFA spamming’ attacks. This is where a criminal will simply request a mobile authorization over and over until the user accepts the request simply to stop the notifications.
With the rise of AI, advanced social engineering attacks will have “renaissance moment,” Williams says. “Everything from text, to images, to video and even voices are becoming easier to generate and spoof, and harder to identify. It’s up to companies like [HID Global] to react to these new threats. We’re already doing that to some degree, with AI of our own, baked into our authentication solutions.”
HID Global is a division of ASSA ABLOY, a global multi-national leader in physical secure access – everything from revolving doors to passport control gateways. Historically, HID were focused on physical access, but since 2010 they have focused on identity and access management solutions for a range of industries, including state, local, and national governments worldwide.
Today, HID is a leading provider in biometric authentication solutions, and one of the largest providers of PKI solutions to the US government. HID delivers passwordless authentication for internal enterprise systems via their Digital Persona solution, which offers a range of authentication methods, and their Crescendo smartcard line, which combines physical and logical access technologies, such as PKI and FIDO to enable seamless authentication across physical and digital systems. HID also offer consumer authentication solutions, focused on identify verification and risk management for customers, like the banking and finance industry.
Recent announcements from HID include a partnership with Apple, designed to bring HID smart authentication cards into the Apple Wallet, and a new range of eco-friendly bamboo-based smart cards. “We’re an established, trustworthy company, with a long history in information security,” Williams says. Working with a trusted provider is paramount when looking to implement passwordless authentication, particularly in light of the social engineering risks on the rise. “You have to be a little bit careful about who you trust.”
The Future Of Passwordless Authentication
Although the momentum is certainly towards a passwordless future, “information security tends to evolve slowly,” Williams says. “While there’s a lot happening in the space right now, it will be a long time before we truly reach ubiquity for passwordless.” Organizations looking to secure their digital identities must stay informed about threats and have a solid security strategy in place.
“It is important to identify and implement a clear security strategy. Any IT professional will tell you that. I think that always starts with the ‘Why?’ Why are you implementing passwordless? Is there a specific regulation? Is there a specific threat? Once you have answered those questions, it’s time to put in the work to identify the right solutions for the right needs. You should look at solutions that will take you into the future, and this may involve looking at new protocols. Think about FIDO – it was not that many years ago that few people were aware of FIDO, and the people that stayed informed about FIDO are now ahead of the game.”
“Also, in many cases, a blended approach is the right way to go. Coming from a vendor, I’ll stress that selecting the right technology partner is paramount. Whether you go directly to a vendor like us, or if you utilize a solution provider of some sort, always do your own research, always stay informed on new trends. Stay connected with similar organizations, other IT professionals that you connect with to be aware of the solutions they are looking at. This will help you to avoid their mistakes and gain from their successes.”
Learn more about HID Global: https://www.hidglobal.com/
Listen on Spotify:
Listen on Apple Podcasts:
About Expert Insights:
Expert Insights provides leading research, reviews, and interviews to help organizations make the right IT purchasing decisions. | <urn:uuid:47a56006-fa24-4ff3-83bc-f903d44ef069> | CC-MAIN-2024-38 | https://expertinsights.com/insights/eric-williams-hid-global-interview/ | 2024-09-17T05:10:49Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651739.72/warc/CC-MAIN-20240917040428-20240917070428-00514.warc.gz | en | 0.934695 | 1,852 | 2.546875 | 3 |
Experts estimate that social media is flooded with over half a million video and voice deepfakes.
These images or videos create artificial visual lookalikes of real people and edit them into puppeted scenes. They can even copy a person's vocal patterns to mimic how they sound, further convincing targets of the video's validity.
Just a few years ago, we mainly associated deepfakes with adult content replicating celebrities, but now anyone can be the subject or target of a deepfake attack.
Think you’re conferencing with your boss because you can see and hear them? Well…maybe you’re not!
Deepfakes are synthetic media created using artificial intelligence (AI) techniques to convincingly manipulate images or videos. Essentially, they replace one person’s likeness with another’s…or generate people who don’t even exist.
The goal is typically to spread false information or portray somebody in a damaging light. It’s used for propaganda, like showing a beloved politician doing or saying something vile; or for financial gain, like getting your bank details or selling your personally identifiable information (PII) on the Dark Web.
While deepfake technology has potential for positive applications, such as special effects in movies, its misuse poses significant challenges. In practice, we typically hear about deepfakes that are used for nefarious purposes.
A report by Pew Research Center found that 77% of Americans want to limit misleading deepfake content, but we often struggle to balance that with freedom of speech and data privacy. Meanwhile, deepfake fraud is increasing all over the globe.
One U.K. study found that only 21.6% of participants could correctly identify a deepfake. Though laws against deepfakes are being developed around the world, we must learn how to spot false images so we can protect ourselves from spreading misinformation or harmful, false content!
If you’re suspicious of a video, pay attention to inconsistencies in facial expressions, lighting, and audio quality. Are there glitches and pixelation around the outline of the person? Does their mouth moving align with what they’re saying? Always verify the source of videos or images before sharing or believing them.
Some other tells that indicate a deepfake include…
- inconsistencies in facial features like eyes, ears, and mouth.
- unusual lighting or shadows.
- blurry edges or artifacts around the face might indicate manipulation.
- unnatural blinking patterns or lack of blinking.
- odd behavior, statements or content given the person’s known personality.
Be cautious about sharing personal information online. Deepfakes often rely on publicly available data. It’s hard to fake your voice if there is no audio of you speaking online! The less personal details you post, the harder it is to masquerade in your name. You can adjust privacy settings on social media platforms to limit your profile’s exposure.
If you’re the one targeted for this scam, you can use tools like Reverse Image search to see if these visuals appear elsewhere in a different, original context. Be wary of unverified sources and consider the overall credibility of any information you read outside of verified news sites.
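To make the reverse-search idea a little more concrete, the sketch below compares a suspect image against a known original using a perceptual hash. It is only an illustration: the Pillow and ImageHash packages, the file names, and the distance threshold are all assumptions, not a vetted detection method.

```python
# Rough sketch: compare a suspect image against a known original using a
# perceptual hash. Requires the third-party Pillow and ImageHash packages.
from PIL import Image
import imagehash

# File names are placeholders for illustration.
original = imagehash.phash(Image.open("official_photo.jpg"))
suspect = imagehash.phash(Image.open("viral_frame.jpg"))

# Perceptual hashes of near-identical images differ by only a few bits;
# a large distance suggests the suspect frame was altered or is unrelated.
distance = original - suspect
print(f"Hash distance: {distance}")
if distance > 10:  # threshold chosen loosely for illustration
    print("Images differ substantially - treat the suspect frame with caution.")
```

A small distance means the two pictures are essentially the same image; a large one means the suspect frame has been altered or comes from somewhere else entirely.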
While still under development, some tools are emerging to help identify deepfakes. Some advanced video analysis software can detect inconsistencies and anomalies that suggest a deepfake, which can prompt you to take a closer look.
Ultimately, you should rely on reputable news outlets and official channels for information, and be skeptical of any sensational or unverified content.
While these methods can help you identify potential deepfakes, they are not foolproof. As deepfake technology advances, it becomes increasingly difficult to detect them with the naked eye. The good news, however, is that we’re simultaneously developing new regulations to protect against malicious deepfakes, and building the technology to tell the difference between a real video and a fake one.
As technology keeps evolving in the battle of good versus evil, maintaining an informed and critical mindset is still the greatest asset in the fight against deepfakes. If a video or audio message gives you a suspicious gut feeling, it is always best to slow down and reassess the validity of what you see, because what you see is not always what you get!
Switches are devices used to connect multiple devices together on a Local Area Network (LAN). In networking terms, the switch serves as a controller that allows the various devices to share information. Ethernet switches can be used in the home, in a small office, or at any location where multiple machines need to be hooked up. There are two basic kinds of switches: managed switches and unmanaged switches. The key difference between them is that a managed switch can be configured and can prioritize LAN traffic so that the most important information gets through. An unmanaged switch, on the other hand, behaves like a "plug and play" device: it cannot be configured and simply allows the devices to communicate with one another. This blog will compare managed and unmanaged switches and explain why you would choose one over the other.
A managed switch is a device that can be configured. This capability provides greater network flexibility because the switch can be monitored and adjusted locally or remotely. With a managed switch, you have control over network traffic and network access. Managed switches are designed for intense workloads, high amounts of traffic and deployments where custom configurations are a necessity. When looking at managed switches, there are two types available: smart switches and fully managed switches. Smart switches have a limited number of options for configuration and are ideal for home and office use. Fully managed switches are targeted at servers and enterprises, offering a wide array of tools and features to manage the immediate network.
Unmanaged switches are basic plug-and-play switches with no remote configuration, management or monitoring options, although many can be locally monitored and configured via LED indicators and DIP switches. These inexpensive switches are typically used in small networks, such as home, SOHO or small businesses. In scenarios where the network traffic is light, all that is required is a way for the data to pass from one device to another. In this case there is no need for prioritizing the packets, as all the traffic will flow unimpeded. An unmanaged switch will fill this need without issues.
Both managed and unmanaged switches can maintain stability through Spanning Tree Protocol (STP). This protocol prevents the network from looping endlessly by detecting redundant paths and blocking them. However, the managed switch is still the better solution for long-term usability and network performance, and it is better placed to keep up with future requirements.
Network Redundancy: Managed switches incorporate Spanning Tree Protocol (STP) to provide path redundancy in the network. STP allows redundant links but prevents the loops created by multiple active paths between switches, which makes the network administrator's job easier and also proves more profitable for a business.
Remote management: Managed switches use protocols such as Simple Network Management Protocol (SNMP) for monitoring the devices on the network. SNMP helps to collect, organize and modify management information between network devices, so IT administrators can read the SNMP data, monitor the performance of the network from a remote location, and detect and repair network problems from a central location without having to physically inspect the switches and devices (a small polling sketch follows this list).
Security and Resilience: Managed switches enable complete control of data, bandwidth and traffic over the Ethernet network. You can set up additional firewall rules directly on the switch, and managed switches support protocols that allow operators to restrict and control port access.
SFP: The benefit of having multi-rate SFP slots is flexible network expansion, which allows the user to fit 100Mbps and 1Gbps SFP modules for multi-mode or single-mode fibre optic (or copper) links as needed. If requirements change, the SFP module can simply be swapped, protecting your switch investment.
Support for multiple VLANs as required: Managed switches allow the creation of multiple VLANs, so an 8-port switch can functionally be turned into two 4-port switches.
Prioritise bandwidth for data subsets: Managed switches are able to prioritise one type of traffic over another, allowing more bandwidth to be allocated where it is needed most.
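As a rough illustration of the remote-management point above, the sketch below polls a switch's system description over SNMP. It assumes the third-party pysnmp library (its synchronous high-level API), an SNMPv2c read community of "public", and a placeholder management address; adjust all of these for a real device.

```python
# Minimal sketch: poll a managed switch's system description over SNMP.
# Assumes the third-party pysnmp package (synchronous high-level API) and
# a placeholder management IP and community string.
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, getCmd,
)

error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData("public", mpModel=1),        # SNMPv2c read community
        UdpTransportTarget(("192.0.2.10", 161)),   # switch management address
        ContextData(),
        ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
    )
)

if error_indication:
    print(f"Polling failed: {error_indication}")
else:
    for name, value in var_binds:
        print(f"{name} = {value}")
```

In practice a monitoring system would poll many such OIDs (port status, error counters, bandwidth) on a schedule and raise alerts on anomalies.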
- Open ports on unmanaged switches are a security risk
- No resiliency = higher downtime
- Unmanaged switches cannot prioritize traffic
- Unmanaged switches cannot segment network traffic
- Unmanaged switches have limited or no tools for monitoring network activity or performance
After discussing the pros and cons of managed vs. unmanaged switches, we can conclude that end users value network visibility and control in their plants and are willing to pay for it. Although managed switches are costlier than unmanaged switches, managed switches definitely have more benefits and more consistent network performance. When the network may need to be expanded, or better control and monitoring of network traffic is needed, managed switches should be considered.
Related Article: Why Is Managed Switch Good for Business Networks?
Zero to Linux with Hal Pomeranz
This four-hour, hands-on course is a quick start into the world of Linux
Why are all the “cool kids” using Linux? Could it be that the best tools for both the Red and Blue teams run on Linux? Could it be because modern, cloud-based infrastructures are built on Linux and easily managed using Linux-based tools? Or that free Linux operating systems make it easy to build out your own home lab environment?
Don’t fear the Linux command-line. Embrace it! Learn how to get around in Linux with simple command line tricks and short-cuts to improve your efficiency. Understand the Linux philosophy and the basic building blocks that will have you up and going in no time. And know how to get help from the operating system so that you can continue to learn and grow on your own.
- The Linux file system
- cd, pwd, and ls
- Relative vs absolute pathnames
- Tab completion
LAB: Directory Jeopardy!
- File manipulation (cp, mv, and rm)
- Getting to know ls
- Getting help
- Command history searching and editing
- cat and less
- Effective use of wildcards
- su and sudo
LAB: Only Seven Commands? No Worries!
- The Unix/Linux command philosophy
- Slicing and dicing (cut and awk)
- Selecting (grep)
- Sorting and collecting (sort and uniq)
- Sampling (head, tail, wc)
LAB: Learning to Linux
- September 5th – 11:00 AM to 4:00 PM* EDT
*Class time begins an hour early to set up VMs and other resources
- Navigating the Linux directory structure
- Sampling and accessing files and directories
- Selecting, sorting, and manipulating data
- Shell pipelines
- Using command history
- Getting help
Who Should Take This Course
- Red teamers who want to leverage Linux tools for exploitation and post-exploitation
- SOC analysts who need to review data and alerts in the Linux environment
- Administrators and developers building and defending Linux infrastructures
Audience Skill Level
No familiarity with Linux is assumed. Experience with some command line (e.g. Windows cmd.exe or PowerShell) is helpful but not necessary.
What Each Student Will Be Provided With
Students will receive course slides in PDF form along with lab exercises which they can run on their own Linux system. This material can be downloaded from https://github.com/halpomeranz/LinuxCmdLine
A laptop with a working Linux virtual machine (or running Linux natively)
We produce 2.5 exabytes of data every day. That’s equivalent to 250,000 Libraries of Congress or the content of 5 million laptops. Every minute of every day 3.2 billion global internet users continue to feed the data banks with 9,722 pins on Pinterest, 347,222 tweets, 4.2 million Facebook likes plus ALL the other data we create by taking pictures and videos, saving documents, opening accounts and more.
We are at the limits of the data processing power of traditional computers, and the data just keeps growing. While Moore's Law, which predicts that the number of transistors on integrated circuits will double every two years, has proved remarkably resilient since the term was coined in 1965, those transistors are now as small as we can make them with existing technology. That's why there's a race among the biggest leaders in the industry to be the first to launch a viable quantum computer that would be exponentially more powerful than today's computers, able to process all the data we generate every single day and solve increasingly complex problems.
Quantum Computers Solve Complex Problems Quickly
Once one of these industry leaders succeeds at producing a commercially viable quantum computer, it's quite possible that these quantum computers will be able to complete calculations within seconds that would take today's computers thousands of years. Today, Google has a quantum computer it claims is 100 million times faster than any of today's systems. That will be critical if we are going to process the monumental amount of data we generate and solve very complex problems. The key to success is to translate our real-world problems into quantum language.
The complexity and size of our data sets are growing faster than our computing resources and therefore place considerable strain on our computing fabric. While today’s computers struggle or are unable to solve some problems, these same problems are expected to be solved in seconds through the power of quantum computing. It’s predicted that artificial intelligence, and in particular machine learning, can benefit from advances in quantum computing technology, and will continue to do so, even before a full quantum computing solution is available. Quantum computing algorithms allow us to enhance what’s already possible with machine learning.
Quantum Computers Will Optimise Solutions
Another way quantum computing will facilitate a revolution will be in our ability to sample data and optimise all kinds of problems we encounter, from portfolio analysis to the best delivery routes, and even help determine the optimal treatment and medicine protocol for every individual.
With the growth of big data, we have reached a point where our computing architecture has changed, and this necessitates a different computational approach to handling big data. Not only is the data larger in scope, but the problems we're trying to solve are very different. Quantum computers are better equipped to solve certain classes of problems efficiently. The power they give businesses and even consumers to make better decisions might just be what's needed to convince companies to invest in the new technology when it becomes available.
Quantum Computers Could Spot Patterns in Large Data Sets
Quantum computing is expected to be able to search very large, unsorted data sets to uncover patterns or anomalies extremely quickly. It might be possible for the quantum computers to access all items in your database at the same time to identify these similarities in seconds. While this is theoretically possible today, it only happens with a parallel computer looking at every record one after another, so it takes an incredible amount of time and depending on the size of the data set, it might never happen.
Quantum Computers Could Help Integrate Data from Different Data Sets
Additionally, big breakthroughs are expected when quantum computers are available due to the integration of very different data sets. Although this may be difficult without human intervention at first, the human involvement will help the computers learn how to integrate the data in the future. So, if there are different raw data sources with unique schema attached to them (terminology and column headers) and a research team wants to compare them, a computer would have to understand the relationship between the schemas before the data could be compared. In order to accomplish this, breakthroughs in the analysis of the semantics of natural language need to happen, one of the biggest challenges in artificial intelligence. However, humans can give input which then trains the system for the future.
The promise is that quantum computers will allow for quick analysis and integration of our enormous data sets which will improve and transform our machine learning and artificial intelligence capabilities.
Do not allow COM port redirection in RDP is the name of a security setting stated in Windows servers CIS benchmarks/STIGs. A COM port is an I/O interface that enables the connection of a serial device to a computer. In some cases COM ports are called “serial ports”. Most computers are not equipped with COM ports anymore but there are many serial port devices still used in computer networks. The COM port can refer not only to physical ports but also to emulated ports, such as ports created by Bluetooth or USB-to-serial adapters.
This blog post will demonstrate:
- COM port redirection policy description.
- COM port potential vulnerability.
- How to mitigate COM port vulnerability.
- The potential impact of changing this setting on your production.
- COM port redirection recommended value.
- How to change COM port settings.
This server hardening policy setting will determine whether the redirection of data to client COM ports from the remote computer will be allowed in the RDP session. By default, RDP allows COM port redirection. It can be used, for example, to use a USB dongle in an RDS session.
When not enabled, users can redirect data to COM port peripherals or map the local COM ports while using the Remote Desktop Service session.
As stated by MITRE ATT&CK, port redirection can lead to protocol tunneling (T1572): adversaries may tunnel network communication to and from a victim system within a separate protocol to avoid detection and network filtering, or to enable access to otherwise unreachable systems. Tunneling involves explicitly encapsulating a protocol within another. This behavior may conceal malicious traffic by blending in with existing traffic and/or provide an outer layer of encryption (similar to a VPN). Tunneling could also enable routing of network packets that would otherwise not reach their intended destination, such as SMB, RDP, or other traffic that would be filtered by network appliances or not routed over the Internet.
Enable this setting wherever possible.
If the status is set to Disabled, Remote Desktop Services always allows COM port redirection. If the status is set to Not Configured, COM port redirection is not specified at the Group Policy level. However, an administrator can still disable COM port redirection using the Remote Desktop Session Host Configuration tool.
RDP users won't be able to access a client's COM port peripherals such as USB dongles and Bluetooth.
CALCOM'S RECOMMENDED VALUE: Enabled
HOW TO CONFIGURE RDP Do not allow COM port redirection:
1. Press Windows Logo+R, type gpedit.msc, and press Enter.
2. Click the arrow next to Computer Configuration under Local Computer Policy to expand it.
3. Click the arrow next to Administrative Templates to expand it.
4. Click All Settings to show all group policy settings.
5. Scroll down to Do not allow COM port redirection and double-click on it to view the setting.
6. Ensure the policy isn’t Disabled and click OK. (Enabled must be selected).
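Beyond the Group Policy editor, you can spot-check the setting on a host from the registry. The sketch below uses Python's standard winreg module; the value name fDisableCcm and the key path are assumptions based on common Terminal Services policy mappings, so verify them against your own baseline before relying on this.

```python
# Sketch: check whether "Do not allow COM port redirection" appears enforced
# in the registry. The key path and the value name fDisableCcm are assumptions
# based on common Terminal Services policy mappings - verify against your baseline.
import winreg

KEY_PATH = r"SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services"

try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        value, _ = winreg.QueryValueEx(key, "fDisableCcm")
        print("COM port redirection blocked" if value == 1 else "Redirection allowed")
except FileNotFoundError:
    print("Policy value not set - the default (redirection allowed) applies.")
```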
CalCom's hardening solution is a hardening automation tool designed to help IT infrastructure teams automate hardening procedures. CalCom's solution can detect whether any port redirection is required for an RDP session and indicate whether the setting should be enabled or not.
In today’s fast-paced digital landscape, software development has become an integral part of many organizations.
With the increasing demand for efficient and reliable software systems, it’s essential to have a structured approach to ensure successful project delivery.
This is where the Software Development Lifecycle (SDLC) comes into play.
What is SDLC?
If you’re an experienced developer, it’s very likely you are already implementing some of these steps, even if you don’t realize it.
The Software Development Lifecycle is a series of phases that guide the development process from conception to deployment.
This process, cost-effective and time-efficient, is used to design and build high-quality software.
It provides a framework for planning, executing, and monitoring any information system (such as hardware, software applications, or even tasks) to ensure they meet customer and business requirements.
The SDLC helps teams manage complexity, reduce risks, and improve overall quality.
What’s the goal of using or implementing SDLC?
The primary objective of adopting SDLC is to ensure that software development projects are completed efficiently, effectively, and with minimal errors.
By following a structured approach, organizations can:
- Reduce project timelines and costs
- Improve software quality and reliability
- Enhance collaboration and increase visibility among team members and stakeholders
- Increase customer satisfaction through timely delivery and meeting their requirements
The SDLC typically consists of six to eight phases. While some variations may exist, the following are the most common phases:
In the planning phase, the project scope, goals, and objectives are defined.
The team identifies stakeholders, outlines project timelines, and establishes a budget.
A thorough analysis of the problem domain and market requirements is also conducted.
Example: Imagine you’re building a new mobile app for a retail chain.
In the planning phase, you would define the project’s scope, identify stakeholders (e.g., customers, store managers), establish a timeline, and determine the budget, including the allocation of engineering members.
In the design phase, the team creates a detailed design of the software system, including architecture, user interfaces, and system components.
The design should be based on the requirements gathered during the analysis phase.
Example: For your mobile app, you would create wireframes for each screen, define the UI/UX, and develop prototypes to test with users.
At this point, along with the planning phase, it is highly recommended to conduct several activities, such as SWOT analysis and threat modelling sessions. Established at this early stage, these activities heavily influence Security by Design for the project.
The implementation phase is the core development phase where the software is built.
The team writes code, integrates components, and tests the system to ensure it meets requirements.
Example: Your development team would start building the mobile app using the design specifications created in the previous phase.
They would write code for each feature, integrate APIs, and test the app’s functionality.
The testing phase involves verifying that the software meets the required standards and functions as intended.
The team conducts various types of tests, including unit testing, integration testing, and user acceptance testing (UAT).
By the way, we also have an article about how we implement Performance Load Testing with Locust that might be interesting to you!
Example: Your testing team would perform unit testing for each feature, integrate testing to ensure API connections work correctly,
and conduct UAT with real users to validate the app’s functionality.
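As a minimal illustration of the unit-testing step, the sketch below uses pytest against a made-up cart-total function. The function and the test cases are placeholders, not part of the retail app described in the examples.

```python
# Minimal unit-test sketch for the testing phase. The cart_total function and
# the test cases are placeholders, not part of the retail app described above.
import pytest

def cart_total(prices, discount=0.0):
    """Sum item prices and apply a fractional discount (e.g. 0.1 for 10%)."""
    if not 0.0 <= discount < 1.0:
        raise ValueError("discount must be between 0 and 1")
    return round(sum(prices) * (1.0 - discount), 2)

def test_cart_total_applies_discount():
    assert cart_total([10.0, 5.0], discount=0.1) == 13.5

def test_cart_total_rejects_bad_discount():
    with pytest.raises(ValueError):
        cart_total([10.0], discount=1.5)
```

Tests like these run on every change, so regressions surface long before integration testing or UAT.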
In the deployment phase, the software is deployed to production, where it can be accessed by end-users.
The team ensures that the deployment process is smooth, secure, and meets the required standards.
Example: For your mobile app, you would deploy it on the App Store or Google Play,
configure any necessary servers or infrastructure, and ensure the app is accessible to users.
The maintenance phase involves monitoring the software in production, fixing issues, and making updates based on user feedback and changing business requirements.
Example: After deploying your mobile app, you would monitor usage patterns, fix bugs as they arise,
and make updates to improve the app’s performance or add new features based on customer feedback.
Although we want the project to succeed, at some point you may no longer provide support or maintenance for it.
The disposal phase involves retiring or decommissioning software that is no longer required or has been replaced by a newer version.
This phase ensures that old software does not become obsolete or pose security risks.
Example: If your mobile app is no longer meeting business requirements or is being replaced with a new version, you would retire the old app and ensure that all data is properly migrated to the new system, along with the removal of the involved resources, such as storage, databases, and so.
Additional Actions and Relating Phases
As we dive deeper into the Software Development Lifecycle, you may explore some additional actions and see how they relate to their matching phases.
- Threat Modeling: Conduct threat modeling during the Design Phase to identify potential security threats and vulnerabilities in your software system. This will help you design a more secure architecture and implement effective countermeasures.
- Supply Chain Security Awareness: During the Implementation Phase, ensure that your development team is aware of the importance of supply chain security and takes steps to validate the integrity of third-party components and libraries (eg. including version pinning, reproducible builds, provenance and attestation, Software Bill of Materials…).
- Privacy by Design: Incorporate privacy-by-design principles during the Design Phase to ensure that your software system respects user privacy from the outset. This includes implementing data minimization, data anonymization, and other privacy-enhancing measures.
- Data Encryption: Use data encryption techniques during the Implementation Phase to protect sensitive data at rest and in transit. This can include encrypting databases, files, and communication protocols (a small sketch follows this list).
- Least Privilege: Implement least privilege access controls during the Deployment Phase to ensure that users only have access to the resources they need to perform their job functions. This reduces the attack surface and minimizes the impact of a potential breach.
- Zero Trust Models: Adopt zero-trust models during the Implementation Phase by assuming that all users and devices are untrusted and validating their identity and permissions at every point of interaction.
- Multi-Tenancy and Tenant Isolation: For those applicable projects, make sure to implement multi-tenancy and tenant isolation during the Design Phase to ensure that different customers or organizations can share a single software system without compromising each other’s data. This includes implementing logical segregation, role-based access control, and other isolation mechanisms.
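Picking up the Data Encryption point above, here is a minimal sketch of encrypting a record at rest. It assumes the third-party cryptography package; in a real system the key would come from a secrets manager rather than being generated inline.

```python
# Sketch for the "Data Encryption" point above: symmetric encryption of data
# at rest using the third-party cryptography package (Fernet). In production
# the key would come from a secrets manager, not be generated inline.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # store securely (KMS / secrets manager)
cipher = Fernet(key)

record = b"customer_id=42;card_last4=1234"
token = cipher.encrypt(record)       # safe to persist to disk or a database
restored = cipher.decrypt(token)

assert restored == record
print(token.decode()[:40], "...")
```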
The manufacturing industry faces complex challenges today and constantly seeks new methods to reduce operational risks. One method of reducing risks is industrial predictive analytics. Disruptions can occur in any form, such as raw material shortages or geopolitical crises that can impact global supply management and the availability of goods to essential markets. With the help of the proper techniques, industries can face these issues head-on by becoming data-driven. This way, they can create more sustainable processes and integrate forward-thinking approaches for long-term benefits using predictive analytics in manufacturing.
What are industrial predictive analytics and demand forecasting?
This approach is an advanced method of using data from previous actions to predict future outcomes in business processes. Demand forecasting leans most heavily on data analytics, artificial intelligence (AI), machine learning, and statistics to pull out patterns that predict the likelihood of specific actions or behaviors.
How does industrial predictive analytics work?
There are five basic steps for building a framework for this:
- Define the problem: Proper research and a set of requirements are essential before implementing these models. Will they detect fraud? Will they be used to identify flood levels for the holiday season? Identifying the problem will help determine what method to use.
- Gather and organize data: Companies have decades of aggregated data from the past and a continual flood of real-time data from customer interactions. Before implementing them, data flows must be identified, and those datasets must be organized in a repository.
- Process data: Process the collected raw data points to become helpful in the models. All the anomalies must be cleaned to avoid any input or measurement errors.
- Develop predictive models: Data scientists can create these models using various methodologies depending on their data and the problem to be solved. Among the most widely used model categories are machine learning, regression models, and decision trees (a short sketch follows this list).
- Deployment: Verify the model’s accuracy and make the necessary adjustments. You can get AI consulting services from reputable providers. Make the results accessible to stakeholders through an app, website, or data dashboard as soon as they are deemed suitable.
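To make the model-building step concrete, here is a minimal sketch that trains and validates a decision-tree classifier on made-up sensor history. scikit-learn is assumed to be installed, and the features (temperature and load) and failure labels are invented purely for illustration.

```python
# Minimal sketch of the "develop predictive models" step: train and validate a
# decision-tree model on made-up historical data. Assumes scikit-learn; the
# features (machine temperature, load) and labels are illustrative only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(loc=[70.0, 0.5], scale=[8.0, 0.2], size=(500, 2))  # temp, load
y = ((X[:, 0] > 80) & (X[:, 1] > 0.6)).astype(int)                # 1 = failure

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(X_train, y_train)

print("Hold-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

The hold-out score is the kind of accuracy check the deployment step above refers to before results are exposed to stakeholders.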
What is the role of predictive maintenance?
Predictive maintenance (PdM) aims to prevent the equipment from failing by looking at previous data to find operational irregularities and possible flaws in the machinery. It uses past and current information from many organizational factors to stop operational hiccups before they arise. It takes into account three key organizational areas:
- Monitoring asset condition and performance in real time.
- Work order data analysis.
- Keeping inventory in check.
Predictive maintenance solutions can alert staff before machines malfunction. They can be used to identify problems early, helping the team minimize expensive operational pauses by indicating when to schedule the next check-up session.
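One common way that kind of alerting is implemented is anomaly detection over sensor readings. The sketch below flags unusual vibration values with scikit-learn's IsolationForest; the data, contamination rate, and threshold interpretation are all invented for illustration.

```python
# Sketch of the alerting idea above: flag unusual sensor readings with an
# IsolationForest. Data is synthetic; scikit-learn is assumed to be installed.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
normal_vibration = rng.normal(loc=2.0, scale=0.3, size=(300, 1))   # mm/s
detector = IsolationForest(contamination=0.02, random_state=1).fit(normal_vibration)

new_readings = np.array([[2.1], [2.0], [4.8]])   # last value is suspicious
flags = detector.predict(new_readings)            # -1 = anomaly, 1 = normal

for reading, flag in zip(new_readings.ravel(), flags):
    status = "ALERT: schedule inspection" if flag == -1 else "normal"
    print(f"vibration={reading:.1f} mm/s -> {status}")
```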
Application of predictive analytics in top 7 industries:
Numerous industries are changing due to the presence of these tools. It can stop fraud before it starts, grow a modest business into a titan, and even save lives.
Some industry examples for predictive analytics are:
Predictive analytics in manufacturing industry
Predictive analytics in manufacturing plays a vital role and helps in many industrial processes. Factory operations become more optimized with this feature as it predicts potential problems based on historical and real-time data. It also determines how teams can be more productive, manage inventory, and prevent future errors. These predictions help firms prepare to replace broken machinery, avoid producing low-quality goods, and ensure they have adequate raw materials to continue operating efficiently. When combined with asset management and work order software, using these procedures in this industry offers many benefits. Factories can manage work orders, lower expenses, improve asset management, enhance operational efficiency, and maintain their competitive edge.
Predictive analytics in retail
Predictive analytics implementation is critical in inventory and logistics management for all retailers, whether online or in physical stores. The technology lets merchants connect vast amounts of data to maximize operational efficiency and create a well-planned supply chain. Retailers can keep checking past sales data, purchase behavior, and geographic references. For instance, let's look at online retailers such as Amazon. They use specific techniques to suggest what to buy next, collecting data from previous purchases to understand the customer's interests.
Predictive analytics in finance
One of the many industries using predictive analytics is finance. Predictive models can help prevent credit card fraud because they are trained on historical transaction data. This way, predictive analytics and statistics can distinguish legitimate from fraudulent transactions.
Another way it helps is by using credit scores to decide whether to accept or reject loan applicants. Collecting churn data helps banks reach out to at-risk customers before they are likely to move to other banks. Fraud prevention is one of the primary applications of these models in the banking sector, as they can flag and interrupt potential fraud activity before it occurs.
Predictive analysis in healthcare
The healthcare industry is inclined towards using technology to reduce costs and streamline healthcare processes. These enhancements are proving to be a valuable tool for healthcare organizations. Here’s how:
- It allows access to all types of information within healthcare organizations, such as medical history, economics, and demographics. This allows doctors to gain valuable insights that can help them guide their decisions.
- It can also be used for population healthcare management. For example, patients’ data with a particular disease can be used to find a similar population cohort.
- It can also predict which patients are at a higher risk of something severe and start precautionary treatment.
Predictive analysis in marketing
Regarding industries that use predictive analytics, marketing is one of the most front-facing examples. It is a helpful tool when organizing a marketing event. Marketers must consider a few factors before organizing an event, such as how customers will respond to a campaign and how their actions can affect the success of a marketing campaign. These technologies can segment marketing leads by showing advertisements on websites and social media platforms that are personalized to their target audience’s interests and activities. Marketers in many industries use predictive analytics to examine customer behavior on historical and present data. New machine learning techniques for business systems can have accurate forecasts and identify individuals whose data corresponds with ideal customers.
Marketers can also use these for lead scoring to determine which prospects will most likely convert, making them more valuable to the business. It can also gauge the likelihood that a prospective customer will purchase goods or services. This allows them to plan how to contact prospects and what information to provide them.
Predictive analysis in automotive industry
As technology grows and changes the industry’s future, machine-learning solutions are becoming the secret to success for many automotive enterprises worldwide. They are everywhere, from determining what car you might want to buy next based on your browsing history to knowing when your vehicle might need a tune-up before it even starts acting up. They are not just helping car companies sell more cars but ensuring those cars run smoother and last longer.
Predictive analysis in construction industry
In construction, you need a solid foundation to keep things going. Predictive analytics in construction is one such tool that helps companies do just that.
It is similar to having a personal assistant for your construction projects in the construction industry. It gives teams an up-to-date picture of what’s happening on-site and coming up next, which helps them manage their workforce more effectively. No matter where they are, they can easily access worker information and project specifics. It’s like having a toolkit complete with valuable features, like scheduling tools and job lists, all geared at making construction projects go more smoothly and intelligently.
Predictive analytics in the energy industry
The use of predictive analytics streamlines energy management. It provides insights and predictions based on data-driven energy usage and demand forecasts. Successful machine learning projects focused on sustainability can enhance energy efficiency, save energy, increase customer happiness and loyalty, and help you plan and budget more precisely and effectively.
Seasonal, daily, and hourly energy supply and demand variations can be predicted using forecasting models. Predictive analytics can recognize and handle possible risks, hazards, and abnormalities before they cause adverse effects. Thus, predictive analytics in the energy industry forecasts when specific energy sources will run out and determines the most effective energy distribution.
Predictive analytics in the chemical industry
Predictive analytics in the chemical industry has become essential to higher-quality chemical creation. Its ability to predict outcomes can help the chemical manufacturing process at several points in the production cycle, such as quality control, process enhancement, failure prediction and prevention, safety monitoring, etc.
Predictive analytics can change how an organization runs and performs when used effectively, giving it a competitive edge and a chance to reduce the risk of downtime. With sophisticated algorithms, industries can use data analytics and real-time inference to forecast future events. It is an essential tool for organizations and has driven significant improvements in process flow across various industries.
A report from the European Union on cloud computing has proposed a standard definition of the concept, which is still viewed with some confusion in many quarters.
A cloud, the report asserts, is an “elastic execution environment of resources involving multiple stakeholders and providing a metered service at multiple granularities for a specified level of quality (of service).”
“’Clouds’ do not refer to a specific technology,” the report says, “but to a general provisioning paradigm with enhanced capabilities.” Among the functional characteristics of a compute cloud, as identified by the report, are ‘elasticity’, ‘reliability’ and ‘availability’.
The report suggests that cloud computing presents a significant opportunity for European IT and telecommunications companies and, more broadly, will revolutionise the way IT services are bought and sold.
But there are challenges facing the cloud, it adds, and recommends EU-backed research, the development of interoperability standards and of a regulatory framework to clarify legal issues.
This latter point chimes with calls made last month by Microsoft’s general counsel Brad Smith for the US government “to modernise the laws, adapt them to the cloud, and adopt new measures to protect privacy and promote security.”
“There is no doubt that the future holds even more opportunities than the present, but it also contains critical challenges that we must address now if we want to take full advantage of the potential of cloud computing,” said Smith.
ISO 14001 Environmental Management Systems Certification
ISO 14001 Environmental Management Systems Certification can be used by any organization that wants to validate its resource efficiency, reduce waste, and drive down costs.
ISO 14001 sets out the criteria for an Environmental Management System (EMS). It sets out a framework that your company can follow in order to improve efficiency by reducing costs and also benefit the environment.
Over 300,000 organizations have been certified against ISO 14001.
– Source: ISO Survey
Some of the ISO 14001 certification benefits
Monitors and controls the effect of operations on the environment, now and in the future
Averts pollution and reduces waste
Environmentally engaged employees
Improved quality and services
Creates trust and credibility
Some of the Industries that can benefit the most
Certification process Step-by-Step
Review of the EMS
MSECB will conduct a review of the EMS to look for the main form of documentation
Audit is performed
An audit is performed by us to verify that your organization is in conformity with the requirements of the standard
Certification is granted
Upon verifying that your organization is in conformity with the requirements of the standard, a Management System Certification is granted
Not sure where to begin? Start here.
When it comes to connecting services and applications together, cloud services have to be one of the most revolutionary technologies to date. Today, 77% of enterprises have one or more applications located in the cloud.
The model of networks supplemented by external cloud services is evolving. Setting up modern networks in this manner has been great for flexibility but is being outgrown as organisations seek to use more IoT devices. With integrated IoT and 5G devices, companies are being forced to move on from the old model of cloud services and legacy networks.
With edge computing, the performance of applications is increased considerably. Data created by IoT devices is processed as close to the source as possible; the less distance the data has to travel, the more responsive the device (or service). Edge computing is not only being used to support applications but also to lay the groundwork for 'smart cities'.
Edge Computing vs Fog Computing
Edge computing is not alone in shaking up networking as there has been a growth in the use of fog computing as well. In theory, both edge and fog computing are intended to improve the performance and response time of applications, but they do so in different ways.
In edge computing, programmable automation controllers or PACs process the data at the edge of the network. In fog computing, by contrast, data is processed by a fog node or IoT gateway within the local area network. Edge computing is the more efficient of the two because it reduces the number of steps data takes to reach its destination.
Under edge computing, devices are connected to a PAC where data is automatically collected and analysed. The PAC then decides whether to retain this data locally or send it on to the cloud. Fog computing takes a more complex approach. Devices are connected to a PLC or PAC automation controller, which sends the data onwards to a protocol gateway or OPC server. At this stage, the data is translated so that it can be understood by the fog node, and it is there that the data is processed and analysed.
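As a toy illustration of the "keep it local or send it to the cloud" decision described above, the sketch below shows the kind of rule an edge controller might apply. The threshold, sensor names, and readings are invented; a real PAC would use vendor-specific logic and actually publish to a cloud endpoint.

```python
# Toy sketch of the decision described above: an edge controller keeps routine
# readings local and forwards only noteworthy events to the cloud. The
# threshold and readings are invented for illustration.
ALERT_THRESHOLD = 75.0  # e.g. degrees Celsius

def handle_reading(sensor_id: str, value: float) -> str:
    if value >= ALERT_THRESHOLD:
        # In a real deployment this would publish to a cloud endpoint (MQTT/HTTPS).
        return f"forwarded to cloud: {sensor_id}={value}"
    # Routine data stays on the edge node for local aggregation.
    return f"stored locally: {sensor_id}={value}"

for sensor, reading in [("line-1", 62.4), ("line-2", 81.9), ("line-3", 70.1)]:
    print(handle_reading(sensor, reading))
```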
The fog computing approach is more convoluted because it relies on a conversion process. Edge computing skips this process to keep the architecture simple. The simplicity of edge computing is one of the main reasons that it is challenging the cloud services model.
Written by Tim Keary, Freelance Tech Writer
What Is User Provisioning?
User provisioning is the process by which individuals are assigned access to resources within a system or network, and by which those access permissions are managed. It covers creating user accounts, assigning the correct roles and permissions, and making sure users have the tools and resources they need to carry out their tasks effectively. User provisioning is a core part of identity and access management (IAM) and helps preserve system integrity.
Automated user provisioning minimizes, and ultimately eliminates, the manual work involved in tasks such as employee registration and account management. It speeds up task execution, ensures access control policies are applied consistently, and reduces human error.
The User Provisioning Role In Systems Security
User provisioning must be done carefully to keep a system from being compromised. If users' access rights are not managed appropriately, the result can be numerous security problems, such as unauthorized access, data breaches, and insider threats, among others.
Organizations can substantially reduce such risks by ensuring that user provisioning is properly in place. The principle of least privilege, a critical part of user provisioning, contributes significantly to system security: it gives each user only the minimum level of access required to complete their work.
Limiting access to only what is necessary significantly reduces the damage a compromised account or an insider threat can do. User provisioning also allows organizations to establish firm password policies. With good provisioning in place, rules can require users to create complex passwords and update them from time to time, which helps block brute-force attacks and the use of weak or compromised passwords.
Good User Provisioning Characteristics
Implementing efficient user provisioning brings several benefits to organizations:
Improved Security: The least privilege principle is strictly enforced during user provisioning, so each user sees only the assets their role and duties require. As a result, the likelihood of unauthorized access and data leakage is kept to a minimum.
Enhanced Productivity: Automating the user provisioning process lets an organization give users the required access and tools within minutes, so they can start work right away. It saves cost, eliminates lost time, and increases productivity.
Streamlined Onboarding and Offboarding: Correct user provisioning simplifies the registration of new employees and eliminates forgotten accounts belonging to people who have left. New joiners can quickly start using the systems they need, while leavers have their access revoked promptly, significantly reducing security risks.
Compliance and Audit Readiness: Well-managed user access helps an organization meet regulatory requirements, because it provides a way to manage and audit access controls. This is critical in fields like healthcare and finance, where confidentiality and privacy regulations demand it.
User Provisioning Process Explained
The user provisioning process typically involves the following steps:
User Registration: The procedure starts with the user entering their account details, either through a self-service online portal or via a request to the IT department.
User Verification: Once registration is complete, the user's identity is verified. The application might require email verification or another form of identity check to confirm who the user really is.
Role Assignment: After verification, the user is assigned one or more roles based on the nature of their job and their responsibilities. These roles determine the degree of access and the permissions they will be granted in the system.
Access Provisioning: The next task is to grant the right permissions and allocate resources to the user. This includes creating user accounts and passwords and authorizing access to specific applications or data.
Ongoing Management: User provisioning is an ongoing process that requires periodic reviews and updates. This mainly involves managing user lifecycle events such as promotions, transfers, and exits, as well as regularly disabling accounts that no longer need access.
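The steps above can be mirrored in a small role-based model, shown below as a plain-Python sketch. The roles, permissions, and usernames are invented for illustration; a real deployment would back this with a directory service or an IAM product rather than an in-memory dictionary.

```python
# Minimal sketch of the provisioning steps above: assign a role at creation
# time and derive permissions from it (role-based access control). The roles
# and permissions are invented for illustration.
ROLE_PERMISSIONS = {
    "analyst": {"read:reports"},
    "engineer": {"read:reports", "deploy:staging"},
    "admin": {"read:reports", "deploy:staging", "manage:users"},
}

users: dict[str, dict] = {}

def provision_user(username: str, role: str) -> None:
    if role not in ROLE_PERMISSIONS:
        raise ValueError(f"unknown role: {role}")
    users[username] = {"role": role, "active": True}

def has_permission(username: str, permission: str) -> bool:
    user = users.get(username)
    if not user or not user["active"]:
        return False                     # deprovisioned users lose all access
    return permission in ROLE_PERMISSIONS[user["role"]]

provision_user("jsmith", "engineer")
print(has_permission("jsmith", "deploy:staging"))   # True
print(has_permission("jsmith", "manage:users"))     # False (least privilege)
users["jsmith"]["active"] = False                    # offboarding
print(has_permission("jsmith", "read:reports"))     # False
```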
User Provisioning: Best Practices Framework
To ensure effective user provisioning, organizations should follow these best practices:
Automate the Process: Automating user provisioning lessens the burden of manual effort, ensures consistency, and reduces the probability of human mistakes. Employ identity and access management (IAM) tools to simplify the work.
Implement Role-Based Access Control (RBAC): Role-based access control lets you create roles and grant permissions based on job functions within the organization. It simplifies the provisioning process and ensures users get only the appropriate level of access.
Enforce Strong Password Policies: Set password policies that demand complex passwords, periodic password changes, and two-factor authentication. This protects against intruders and password-related threats.
Regularly Review and Update Access: Conduct access reviews regularly to determine whether existing permissions are still correct. Remove or adjust access for users who no longer require it.
Train Users on Security Awareness: Teach users what to do to keep the system secure, including keeping passwords safe, avoiding phishing attacks, and reporting suspicious activity. This reinforces a security-conscious culture within the organization.
User Provisioning And The Challenges That Come With It
While user provisioning brings significant benefits, organizations may encounter challenges during its implementation:
Complexity: As companies grow, user provisioning becomes harder because there are more and more users and systems to manage. Implementing automated provisioning solutions can help address this issue.
Integration: Integrating user provisioning with systems already in place can be laborious, especially in environments with diverse technologies. It is essential to select a provisioning solution that connects easily with the existing systems.
User Lifecycle Management: Administering users' access throughout their lifecycle, whether on transfer, promotion, or departure, is a complex task. Organizations should establish clear decision-making procedures and responsibilities so that problems are resolved in a timely manner.
User Resistance: People may push back against changes to their access privileges and passwords. Effective communication and training can reduce resistance and ensure user acceptance of the new provisioning processes.
User provisioning is a structural element of system security, with both security and functionality as its primary objectives. By applying effective user provisioning strategies, organizations can minimize human error, increase overall workforce productivity, make onboarding and offboarding easier, and achieve compliance with existing laws and regulations.
Applying these practices properly, and working through the difficulties of user provisioning, will help an organization take full advantage of a process that makes system usage easier, more secure, and more efficient.
Start your free trial today.
It started out as the little technology that could.
Today, more people than ever are using the short-range Bluetooth wireless mode of communication to swap files and talk on phones using headsets and earpieces.
Since the technology is essentially designed to replace cables, users rely on it to wirelessly print from their notebooks or handheld PCs and connect to other devices such as LCD projectors.
“It is encouraging to see that consumers not only have heard of Bluetooth technology, but they are also using it for more advanced applications,” said Michael Foley, executive director of the Bluetooth Special Interest Group (SIG), in a statement.
Bluetooth awareness increased the most in the U.S., where more than 50 percent of the consumers who took part in a 2005 survey at least recognized the brand name and the technology.
This compares to a roughly 22 percent recognition level during the first such study conducted in 2003 and up to a 62 percent awareness level of Wi-Fi, the study showed.
The countries with highest recognition for Bluetooth include the UK and Germany (averaging about 88 percent), and Taiwan and Japan (about 67 percent), the study revealed.
More than two-thirds of those polled in Taiwan own at least one Bluetooth-enabled device. And Japanese consumers are the most willing to pay a few yen more for a Bluetooth device, even though there are comparatively few such devices in that country.
Worldwide, awareness for Bluetooth jumped from 60 percent in 2004 to 73 percent last year, the study revealed.
Bluetooth SIG commissioned the study, which polled consumers between the ages of 18 and 70 in the United States, United Kingdom, Germany, Japan and Taiwan.
GRE protocol functionality adds one additional protocol on Cisco's multimedia core platforms (ASR 5500 or higher) so that mobile users can connect to their enterprise networks through Generic Routing Encapsulation (GRE).
GRE tunnels can be used by a carrier's enterprise customers to: 1) transport AAA packets corresponding to an APN over a GRE tunnel to the corporate AAA servers, and 2) transport enterprise subscriber packets over the GRE tunnel to the corporate gateway.
The corporate servers may have private IP addresses, so the addresses belonging to different enterprises may overlap. Each enterprise therefore needs to be in a unique virtual routing domain, known as a VRF. To differentiate tunnels between the same set of local and remote endpoints, the GRE key is used as a differentiator.
GRE is a common technique for enabling multi-protocol local networks over a single-protocol backbone, connecting non-contiguous networks, and allowing virtual private networks across WANs. The mechanism encapsulates data packets of one protocol inside a different protocol and transports them unchanged across a foreign network. Note that GRE tunneling does not provide security for the encapsulated protocol, as no encryption is involved (unlike IPsec, for example).
GRE tunneling consists of three main components, illustrated in the sketch after this list:
Passenger protocol: the protocol being encapsulated. For example: CLNS, IPv4 and IPv6.
Carrier protocol: the protocol that does the encapsulating. For example: GRE, IP-in-IP, L2TP, MPLS and IPSec.
Transport protocol: the protocol used to carry the encapsulated protocol. The main transport protocol is IP.
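As a rough illustration of those three roles, the following Python sketch builds a GRE-encapsulated packet with the Scapy library (an assumption of this example; it is not part of the platform configuration). The addresses and key value are placeholders.

```python
from scapy.all import IP, ICMP, GRE  # assumes the scapy package is installed

# Transport and carrier: outer IPv4 header plus a GRE header whose key
# distinguishes tunnels that share the same local and remote endpoints.
outer = IP(src="203.0.113.1", dst="198.51.100.1") / GRE(key_present=1, key=100)

# Passenger: the enterprise subscriber packet carried unchanged.
inner = IP(src="10.0.0.5", dst="10.1.1.10") / ICMP()

gre_packet = outer / inner
gre_packet.show()  # prints the layered structure: IP / GRE / IP / ICMP
```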
Healthcare needs are getting more complex, and systems more stretched. To meet the demand for precision healthcare, hospitals need to evolve to become smarter and more efficient. The World Health Organization (WHO) estimates healthcare will face a shortfall of 10 million healthcare workers worldwide by 2030. Is it sustainable to increase the workload of already stretched practitioners? Smart hospitals could make healthcare work more efficient and improve the quality of patient care.
Smart hospitals and the rise of e-health
Smart hospitals are part of the e-health phenomenon. A smart hospital improves patient care using optimized and automated processes, particularly the Internet of Things (IoT), to connect medical devices to AI and data analysis. Smart tech helps healthcare practitioners focus on providing high-quality care while reducing the barriers associated with using complex medical technology.
Smart tech means smarter workflows
One figurehead of the smart hospital movement is Humber River Valley Hospital, Canada. It’s North America’s first fully-digital hospital (think: online appointment scheduling, digital check-ins and your test results emailed.) It boasts robots and other tech to automate almost 80 percent of its back-office services like pharmacy, laundry and food delivery. This gives clinical staff more time to give back to patients.
Cleveland Clinic Abu Dhabi (CCAD), UAE, uses digital apps to support treatments. Pre-treatment, patients use apps to communicate with staff. During their stay, they use smart pads to access their info, daily plans and order food. At discharge, the app sends prescriptions to local pharmacies and then sends patients their bill. Neat.
Can robots become care workers?
Smart tech could help meet labor shortages: this means more android helpers. During the first COVID-19 outbreak, Wuhan Hospital, China, was staffed entirely by robots for several days. Produced by start-up CloudMinds Technology, they improved patient care and gave respite to over-stretched workers.
Smart hospitals can automate and digitize traditional, labor-intensive processes. When healthcare is delivered more quickly and accurately, more people can be treated more safely and sooner. Consider Johnson Controls' Code Blue Activation system: when a patient reaches critical condition, it calls up an emergency care team in just 24 seconds.
Could 3D printing replace donors?
There's a relatively new technology in smart hospitals: 3D printing. Imagine the opportunities if surgical teams could print body parts, like prosthetic limbs or dental implants, on demand. The tech is in its infancy and still lacks the speed and capabilities needed for large-scale adoption, but there are promising advancements.
Engineers and medical researchers from the University of Minnesota recently created silicone scaffolding that could benefit those with spinal cord injuries by restoring muscle control. Cells are printed onto the structure, which is then implanted into the patient's spinal cord to act as a bridge between cells.
Can smart hospitals save lives?
Given increased healthcare demands, hospitals may not have any choice but to change. Smart hospitals could improve the care we deliver and benefit our societies. But it will be a group effort.
Professor Daniel Kelly, Cardiff University’s School of Healthcare Sciences, thinks smart hospitals will be crucial for healthcare. But we have to understand how it will affect everyone, including those funding it. “As treatment for many diseases becomes more personalized, there’s an urgent need for health systems to evolve. Smart hospitals are symbolic of a future standard of healthcare where we can remove the reliance on paper records and current digital systems’ inefficiencies.
Translating the smart hospital concept into reality relies on significant investment and a radical redesign of processes involving clinicians and patients.
Professor Daniel Kelly, Cardiff University's School of Healthcare Sciences
Securing the smart hospital
From sensitive patient information to medical records, it’s easy to see why cybersecurity is a hot topic in healthcare. Hospital data and equipment are goldmines for hackers and criminal groups. Those goldmines become all the more accessible when introducing a connected system like a smart hospital.
Firstly, traditional medical devices are outdated – they aren't secure by design. This means many wouldn't have been tested for vulnerabilities or built to connect with smart appliances or the internet. (To be fair, foreseeing the fourth industrial revolution was a tough ask.) This gap can allow attackers to gain illegal access to systems and data.
A survey by ENISA found the most critical threat to smart hospitals is malicious attacks – like malware blitzes and denial of service (DDoS) raids that take control of systems and devices, steal patient data (most likely to be sold on the dark web) and hold patient data hostage with ransomware. Since the COVID-19 pandemic, attacks on hospitals and health organizations have increased drastically, including this fatality triggered by ransomware in Germany.
Human errors still pose one of the biggest threats to patients. Take drug pumps: smart hospitals, or more specifically their workers, use them to deliver controlled doses of medication to patients. These systems prevent over- or under-administration, but they have an override button for emergencies. What if the device isn't configured correctly? Someone could tamper with drug delivery, inadvertently or intentionally, which might have deadly consequences.
These technological advancements are not just projects for the IT department: the smart hospital requires partnership between technologists and healthcare practitioners. Ongoing collaboration is needed to protect it from cyberthreats and to ensure smart systems can save more lives.
Safeguarding important paperwork and documents is a key part of being responsible in both your personal and professional life. Even with options like digitization and cloud storage, hard copy documents are still vital because of multiple considerations:
- Legal Compliance: Some documents must be kept in their original form to meet legal requirements.
- Historical Importance: Certain records have a historical value that digital copies can’t replicate.
- Backup Assurance: Hard copies serve as a fail-safe in case of digital data corruption and loss.
- Easy Access: Sometimes, grabbing a physical document is quicker and easier than searching through digital files.
This article examines the risks of storing hard copies and outlines best practices for their protection.
Understanding Hard Copy Vulnerabilities
Recognizing the vulnerabilities of paper documents is key to understanding how to protect them over time.
Physical damage poses a major threat to your documents. Water from a leak, flood, or even a spill can destroy the integrity of paper documents. If hard copies are in a fire, even if they are not completely incinerated, soot and smoke can render them unusable. Such natural or manmade disasters can obliterate critical documents like birth certificates, medical records, insurance papers, property documents, passports, and bank records.
Environmental factors also affect document preservation. Extreme temperatures can make paper brittle, while high humidity can cause mold and warping. Sunlight exposure can fade ink and weaken paper fibers. Maintaining a stable, controlled environment is crucial to keeping your hard copy documents in good condition.
The Federal Emergency Management Agency (FEMA) provides a comprehensive checklist and guidelines to help you protect important records. Follow these recommendations to safeguard your documents from various threats and ensure they remain accessible when needed.
Strategies for Protecting Hard Copy Documents
Protecting hard copy documents requires careful planning and the right strategies. Let’s examine some effective ways to safeguard these valuable records.
Proper Storage Solutions
Choosing the right document storage solution is crucial for preserving important paperwork. Different storage options offer varying levels of protection:
- Filing Cabinets: Fine for organizing documents and keeping them easily accessible, filing cabinets are the most basic storage solution for hard copy documents.
- Fireproof Safes: Designed to protect valuables from fire and water damage, fireproof safes can be used to store a small quantity of hard copy documents.
- Lamination and Encapsulation: Lamination and encapsulation protect frequently handled documents from spills, fraying, tearing, and environmental damage. Laminated documents are water resistant but still susceptible to destruction if submerged.
- Professional Storage Providers: If you’re looking to store large volumes of documents, expert storage providers like Armstrong Archives offer climate-controlled environments and advanced security for hard copy documents.
Correct handling practices go a long way in preventing unnecessary damage to physical documents.
Here are some best practices:
- Use Clean Hands: Dirt and oils from your hands can damage paper over time.
- Avoid Food and Drinks: Keep substances that can soil your documents at a safe distance.
- Use Proper Tools: Use acid-free folders and gloves to handle documents to minimize wear. Depending on the storage and weather conditions, you may need to use dehumidifiers and UV-filtering sleeves.
Disasters can strike at a moment’s notice, which is why it’s important to prepare for the safety of your hard copy documents in advance.
Here’s what you can do:
- Create a Backup Plan: To ensure comprehensive protection, store copies of important documents in a separate location and keep digital backups as well.
- Use Waterproof Containers: Store documents in containers that can withstand water exposure.
- Develop an Emergency Response Plan: This plan should include evacuating documents to a safe location, using waterproof containers, and designating team members for quick action during natural disasters.
Digital Backup and Security
Securing digital backups of your hard copy documents ensures they are accessible in the event of damage. Hard copy document digitization and digital document protection should be considered during disaster preparedness planning.
Digitizing Paper Documents
Turning paper documents into digital copies is essential for easy access and preservation.
- Scan Documents: Use a high-quality scanner to create clear digital copies of your paper documents. When more than just a few documents are involved, it’s best to opt for a professional service. Armstrong Archives’ document scanning services streamlines document digitization, turning hard copies into editable digital files.
- Organize Files: Store digital files in an organized folder structure for easy retrieval.
- Backup Regularly: Implement a backup schedule to keep your digital copies up-to-date and secure (a small sketch of the organizing and backup steps follows this list).
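As a rough illustration of the organizing and backup steps above, here is a minimal Python sketch. The folder layout and paths are hypothetical; adapt them to your own filing scheme.

```python
import shutil
from pathlib import Path

SCANS = Path("scans/inbox")           # newly scanned files land here (assumed layout)
ARCHIVE = Path("archive")             # organized long-term store
BACKUP = Path("/mnt/backup/archive")  # second copy kept on separate media

def organize_and_backup(category: str, year: str) -> None:
    """File each scanned PDF under archive/<year>/<category>/ and mirror it to the backup."""
    for pdf in SCANS.glob("*.pdf"):
        dest = ARCHIVE / year / category / pdf.name
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.move(str(pdf), dest)

        mirror = BACKUP / year / category / pdf.name
        mirror.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(dest, mirror)

organize_and_backup("insurance", "2024")
```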
Digital Security Measures
Protecting digital backups is crucial to prevent unauthorized access and data loss.
- Use Encryption: Encrypt digital files so that only authorized users can access them (see the sketch after this list).
- Employ Cloud Storage: Utilize reputed cloud storage services for secure, off-site backups.
- Regular Updates: Keep your security software updated to guard against the latest cyber threats.
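Following on from the encryption measure above, this sketch uses the widely available `cryptography` package (an assumption of this example; any equivalent tool works) to encrypt a digital copy before it is uploaded to cloud storage.

```python
from cryptography.fernet import Fernet

# Generate a key once and store it somewhere safer than the backups themselves.
key = Fernet.generate_key()
fernet = Fernet(key)

with open("archive/2024/insurance/policy.pdf", "rb") as f:  # hypothetical path
    ciphertext = fernet.encrypt(f.read())

with open("policy.pdf.enc", "wb") as f:
    f.write(ciphertext)

# Later, the original can be recovered only by someone holding the key:
# plaintext = fernet.decrypt(ciphertext)
```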
Regular Maintenance and Audits
Here’s how to keep your document preservation efforts on track.
Schedule Regular Reviews
Periodic checks on your hard copy documents and their storage environment are essential. Regular reviews help identify issues like wear and tear or environmental changes that could affect your records. By staying on top of these aspects, you can address potential problems before they cause significant damage.
If any documents have passed their retention period, consider shredding them to maintain an organized and secure record system.
Update Protection Methods
As technology evolves, so do methods for document preservation. Advances in artificial intelligence (AI), for instance, can now help with automated document classification and digitization. Similarly, blockchain technology offers enhanced security for digital records.
What strategies would help protect hardcopy information?
To protect hardcopy information, use proper storage solutions, handle documents with care, and prepare for disasters with backup plans. Review and update your document protection strategies regularly to stay current with the latest technologies.
What is the best way to protect a document?
The best way to protect hard copy documents is by placing them in acid-free containers in a climate-controlled storage facility. Digital backups of paper documents should be encrypted before being stored in secure cloud storage.
How to protect paper documents?
To preserve paper documents, use acid-free folders and boxes, store them in a climate-controlled environment, and consider lamination or encapsulation for high-value items. For bulk storage, use professional services to ensure optimal preservation and management.
Get Expert Help and Resources for Document Protection with Armstrong Archives
Armstrong Archives specializes in document storage and management, with a full range of services for hard copy documents – from scanning and storage to secure shredding.
We also offer a wealth of resources on topics related to records management and document protection. Check out our comprehensive guide on preserving paper documents and learn about the benefits of climate-controlled storage.
Contact us today to learn how to protect hard copy documents and discover how we can help safeguard your valuable records.
When a company like Microsoft talks about the future of computing, you can expect a fair bit of self-serving market positioning - public software companies need to be careful to sell a vision of the future that doesn't jeopardize today's revenue streams. But, when a company like Microsoft releases a new version of its fundamental development framework - .NET, in this case - you can see more clearly the company's technical vision for the future of computing.
.NET 4.0 is expected to be released in early 2010, simultaneously with Visual Studio 2010. Visual Studio 2010 implements many of the new features in 4.0, and introduces some unique technologies. While the .NET 4.0 framework is effectively free, Visual Studio is a distinct revenue stream for Microsoft and a way for the company to monetize their substantial investment in software development frameworks and tooling.
It's been clear for some time that today's most popular programming languages are not particularly well-suited to building applications that take advantage of the parallel processing capabilities of multi-core and multi-processor systems. Therefore, .NET 4.0 introduces many enhancements and simplifications for those writing multi-threaded or parallelized applications.
The .NET languages also have gained the functional programming capabilities popularized in languages such as Objective Caml and Haskell. Functional programming is hard to succinctly define: functional languages eliminate the distinction between function and data, use functions as arguments to other functions, and emphasize recursion rather than iteration. Functional language proponents believe functional programming can lead to significant advances in programming productivity and program functionality. For example, F# is a new .NET language that explicitly supports functional programming paradigms, while still allowing object orientation and full .NET interoperability.
Perhaps surprisingly, .NET 4.0 also introduces formal support for the IronPython and IronRuby languages. These are implementations of the open source Python and Ruby languages engineered to run within the .NET Common Language RunTime (CLR). In theory, this allows Python and Ruby languages to be used within .NET applications.
Microsoft introduced the ADO.NET Entity Framework in .NET 3.5. The Entity Framework provides a common mapping and abstraction layer between various data stores (XML, RDBMS, etc.) and the typically object-oriented model of the data used in the application. Together with Language Integrated Query (LINQ), the Entity Framework allows an application to work directly against a conceptual model of the data, without having to use SQL or deal with the physical database implementation. .NET 4.0 significantly improves on the Entity Framework, and Microsoft hopes this release will lead to more widespread adoption.
.NET 4.0 also includes the modelling capabilities from Microsoft's "Oslo" initiative. The Oslo modeling framework incorporates a SQL Server-based repository for models; the "M" language for defining models and domain specific languages; and the "Quadrant" UI for working with models. It is more than just a tool for generating diagrams and specifications, however. An Oslo model is intended to form the basis of a model-oriented application in which key parts of the application logic are derived directly from the model.
Individually, the .NET 4.0 features are evolutionary rather than revolutionary. Taken in combination, however, they paint a vision of a development landscape that includes model-driven applications, functional programming, massive parallelism, and data abstraction. It's a persuasive and modern vision of software development and one that is likely to gain the support of the .NET programming community.
Joint Use Trivia: Everything You Never Knew About Utility Poles
We discuss utility pole assets often on the Alden Updater. You see them every day while you're driving, walking, and just going about your business. If you are reading this blog, you may already know a lot about them. So, what are these mysterious objects of our attention, perhaps affection or possibly disdain? Utility poles...no matter how much you think you may know, there is always more to discover.
Here, we present ten fun facts about our favorite 40-foot assets.
- Utility poles are mostly made of Southern Yellow Pine. While Pinus palustris, commonly grown in a wide swath of land between Virginia and Texas, is the most popular choice for a utility pole, many other species of naturally long, straight trees are also used. Among them: Douglas Fir, Jack Pine, Lodgepole Pine, Western Red Cedar and Pacific Silver Fir.
- A pole by a different name is mostly the same. We may refer to them as “utility poles” here, but ask around the world, and they are known by a number of other names: electric poles, distribution poles, telephone poles, telecommunication poles, power poles, hydro poles, telegraph poles, and power posts, to name a few.
- There are a lot of utility poles out there. In fact, there are an estimated 180 million of them standing tall around the U.S. This means there are roughly 1.76 people for every utility pole in the United States.
- Some utility poles are really, really tall. Typical poles are 40 to 60 feet tall, but the world’s tallest power line suspension towers reside in Jiangyin, China. These behemoths are 1,137 feet tall each.
- Other utility poles are relatively tiny. The small town of Woodseaves, Staffordshire, England lays claim to the world’s smallest telegraph pole. It resides on the second tier of a bridge, and despite its diminutive size, boasts six cross pieces.
- Utility poles have been around for a long time. The first telephone pole was built in 1876 as part of a telephone installation for a friend of Alexander Graham Bell's employer.
- Some poles are fairly old themselves. One can expect a typical wooden utility pole to last between 40 and 60 years, and many in the field are 50 to 60 years of age already.
- Parts of old utility poles can be valuable. While U.S. utility companies transitioned to using porcelain insulators in the 1970s, the previously popular glass models have become collectable, and in some cases, valuable. Most are clear or aqua, but other colors can be found as well, including amber, blue, and dark garnet red, the rarest color of all.
- The most unique pole anywhere may just be in Florida. Look up along Interstate 4 outside Kissimmee, Florida, near the westernmost exit for the Walt Disney World resort and you will see Mickey Mouse in the lines. A unique tower found here features three hoops that form the shape of the cartoon rodent’s head and ears, with each hoop suspending wires.
- South of the Mason-Dixon line, exotic animals are not welcome near poles. For some reason, it is against the law in the southern capital of Atlanta to hitch a giraffe to a utility pole or street lamp. We will try to remember that, as should you.
What Is HIPAA Compliance?
The significance of HIPAA compliance has to be clearly understood by the medical agencies that handle patients' medical records. All medical offices must comply with the security standards of HIPAA. What is HIPAA? HIPAA refers to the federal Health Insurance Portability and Accountability Act of 1996. This United States legislation provides for data privacy and includes security provisions to safeguard sensitive information. Patient medical records and other healthcare information fall under the ambit of this act, and all healthcare centers and medical offices dealing with them must comply with HIPAA. All routine activities carried out at institutions that handle such records have to follow the guidelines and regulations put forward by this act. This is to ensure that patient confidentiality is not breached at any time, which could otherwise lead to financial and emotional harm for patients. The chief aim of this law is that medical offices safeguard patient confidentiality and ensure the security of healthcare information. This also helps healthcare firms control administrative costs effectively. For users, it helps keep their health insurance simple and secure.
Cloud Computing And HIPAA
Of late, the mammoth task of complying with the HIPAA security protocol has become more difficult due to the penetration of cloud services into medical offices across the world. The data centers behind this cloud storage are located far away, and since the volume of medical records has increased drastically, medical organizations urgently need cloud security. When this is an issue, the solution lies in the effective implementation of cloud security, and Cloud Access Security Broker (CASB) solutions do just that. What HIPAA does is standardize electronic data exchange, thereby supporting the security of healthcare information and medical records. It emphasizes the secure transmission of confidential data in medical work environments. This encourages the proper use of secure servers, and the use of removable storage media is strongly discouraged. The cloud security controls required to protect sensitive patient data are largely provided by CASB solutions.
Significance of HIPAA Compliance
The primary goal of HIPAA is to ensure that the medical records of the patient in the healthcare centers remain safe and this is achieved through a series of security measures that standardizes the procedural and structural layouts. By this, the medical data stored in the cloud over multiple data centers remains safe. Also, since these data centers are often located in far-off places, the need for HIPAA compliance becomes very necessary thus insuring against the risk of sensitive medical records getting leaked intentionally or unintentionally. All these can be achieved by proper identification and segregation of sensitive data and tailor-made CASB solution to achieve HIPAA compliance for the cloud-saved medical data.
Why Do Healthcare Firms Needs to Comply with HIPAA?
For any healthcare system, patient data security is a must. Access Control mechanisms so designed to ensure the safety and protection of healthcare information from getting breached helps in protecting the privacy and confidentiality of patient data. It has to be ensured that the crucial information doesn’t get into the wrong hands which can further translate into confidentiality breaches and misuse of private contracts. HIPAA compliance for cloud-based processes can be achieved through CASB solutions with strong Cloud Access Control systems, thus restricting unauthenticated access.
CloudCodes Help Medical Firms Comply with HIPAA Compliance
Who needs to follow HIPAA compliance? All business associates and covered entities handling protected health information need to be HIPAA compliant. Healthcare clearinghouses, healthcare providers, and health plans that carry out electronic transmission of healthcare data count as covered entities and need to adopt this standard. CloudCodes CASB solutions bring about absolute security: the security protocols required to comply with HIPAA can be implemented once healthcare firms and medical offices deploy CloudCodes CASB solutions. Thus, CloudCodes CASB solutions help these medical firms achieve the much-required HIPAA compliance.
There are many Mac security myths circulating among users. So how can you tell if the advice you’re reading is fact or fallacy? Read on to find out!
Fallacy: Macs don’t get viruses
The idea that there are no viruses for the Mac goes back to the beginning of Mac OS X, at the very beginning of this millennium. Most people associate this idea most strongly with the “I’m a Mac/I’m a PC” commercials from a decade ago, such as this one that ran in 2006:
Unfortunately, this is a myth. As with most good myths, though, there’s a slight element of truth.
Technically speaking, a virus is malware that spreads by itself, by attaching itself to other files. By this strict definition, there are no Mac viruses. However, by that token, there also aren’t very many Windows viruses these days, either. Viruses have mostly disappeared from the threat landscape.
The average person, though, understands a virus to be any kind of malicious software. (A better term for this is “malware.”) Since there definitely is malware for the Mac, as well as a plethora of other threat types, the spirit of the “there are no Mac viruses” claim is completely false. Don’t allow yourself to be misled!
Fact: There’s not much Mac malware out there
True malware is malicious in nature—thus the name, malicious software—with the goal of stealing or scamming data or money from the user. Examples of malware are backdoors that provide access to the computer, spyware that logs keystrokes and captures pictures with the webcam, ransomware that encrypts the user’s files in order to hold them for ransom, and other such nefarious programs.
On the Mac, true malware is rare. A “big spike” of new Mac malware happened in 2012, when 11 new pieces of malware appeared. The average Mac user has never seen any malware.
So why should Mac users be concerned? Because other threats are a rapidly growing problem on the Mac. Over the last several years, there has been an increasing amount of adware and Potentially Unwanted Programs (PUPs) for the Mac.
Adware is software that injects ads into websites where they don’t belong and changes your search engine to a different one. Adware is designed to scam advertisers and search engines. The infected Macs are no more than a vehicle for generating revenue fraudulently from advertisers and search engines, who pay these adware-producing “affiliates” for referrals.
PUPs are programs that are generally unwanted by users. These can include so-called “legitimate” keyloggers (marketed as a means for monitoring your kids or employees), scammy “cleaning” apps (Macs don’t need that kind of cleaning), supposed “antivirus” or “anti-adware” apps that don’t actually detect anything, and so on.
Adware and PUPs are a serious problem on the Mac right now. Although these things are not malware, they are a huge nuisance. Worse, they can create security vulnerabilities that make it more likely for you to get infected with actual malware. For example, in 2015, a vulnerability in a common PUP (MacKeeper) was used to install malware on Macs that had MacKeeper installed.
Fallacy: Macs are more secure than Windows
Many years ago, Apple abandoned the old “classic” Mac system in favor of one based on Unix, a mature and security-oriented system. Apple has made some great security improvements to macOS in recent years, and as a result, Macs are more secure today than they ever have been.
Of course, nothing is ever perfect, and macOS security is certainly far from it. There are plenty of ways to circumvent Mac security. Add to this the fact that security of Windows has improved over the years as well and it becomes difficult to say which system is more secure.
As with other such myths, there’s an element of truth here, though. Macs certainly suffer under a far smaller burden of threats than Windows. Many thousands of new Windows malware variants appear every day, while it’s a busy month in the Mac world if more than one new piece of malware appears. This means that, although there may not be any explicit, major security differences between the two systems, Macs do tend to be statistically safer simply due to the smaller number of threats.
Fact: macOS has built-in anti-malware software
Although this feature is well-hidden from the user, and cannot be turned off, this is true. Apple’s anti-malware software is called XProtect, and it consists of some basic signatures for identifying known malicious apps.
When you try to open an app for the first time, the system will check it against the XProtect signatures. If the app matches one of those signatures, the system won’t allow it to open.
Of course, there are a couple problems with XProtect. First, of course, as with any signature-based detection, it can only detect and block malware that Apple has seen before.
More importantly, though, it only detects malware. Since the vast majority of the threats for Macs are adware and PUPs, that leaves a lot that it doesn’t protect against. You shouldn’t rely on XProtect as your sole protection against threats, but nonetheless, this is very good layer of protection to have as an integral part of the system.
Fallacy: Macs don’t need security software
Antivirus software has gotten a bad rap on the Mac over the years. Thanks to historically low incidence of Mac malware, coupled with the system problems that some antivirus programs have been known to cause, Mac users are skittish about installing security software. Making matters worse, Mac “experts” will tell people that they don’t need security software, because macOS contains all the protection they need.
However, the number of Mac users infected by malware and other Mac threats has had exponential growth since 2010, when adware and PUPs weren’t really a thing on the Mac yet and when new malware sightings were few and far between. We’re seeing large numbers of people infected with Mac threats every day, on a much larger scale than even just a few years ago.
Clearly, there is an epidemic problem with threats—mostly adware and PUPs—on the Mac, and also clearly, the built-in security in macOS is not adequate to deal with this problem. It is becoming increasingly necessary for Mac users to have an additional layer of security, and in particular, to have something that is effective against adware and PUPs, which are the biggest problem. If you’re a Mac user, you might consider downloading software such as Malwarebytes for Mac, which removes adware, PUPs, and malware for free.
Human error has become a leading cause of cybersecurity breaches, with 73% of breaches in 2023 linked to phishing and pretexting. Cybercriminals are increasingly exploiting the human element within organizations, as evidenced by the massive MGM breach in 2023, which began with a simple phishing phone call and resulted in millions of dollars in damages and significant downtime. This growing trend highlights the critical need for organizations to address “human risk,” the potential for user errors, whether intentional or accidental, that can lead to severe cybersecurity incidents.
As organizations adopt more web-based applications and hybrid work models, the risk associated with human error is increasing. Security awareness training is now a vital tool necessary in mitigating this risk by educating employees on how to recognize and respond to threats like phishing. Yet, for training to be effective, it must move beyond basic compliance exercises and be integrated into a broader human risk management strategy. By fostering a culture of security and changing employee behaviors, organizations can significantly reduce the likelihood of breaches caused by human error.
The Role of Security Awareness Training in Reducing Human Risk
Leveraging security awareness training is crucial in combating the rising threat of human risk in cybersecurity. With phishing and credential theft being among the most prevalent tactics used by cybercriminals, organizations must empower their employees to act as the first line of defense. Security awareness training achieves this by equipping employees with the knowledge and skills to recognize and respond to various threats, such as phishing emails and social engineering attempts, which often act as entry points for more advanced and damaging attacks. For example, a well-trained employee is less likely to fall victim to a phishing scam, which can prevent the initial access that cybercriminals need to infiltrate a network.
The effectiveness of security awareness training hinges on its ability to change behavior and foster a security-conscious culture within the organization. It’s not enough to simply present information; training must be engaging, relevant, and ongoing. This means incorporating phishing simulations, interactive content, and regular updates that reflect the latest threats and trends in cybersecurity. By doing so, organizations can ensure that their employees remain vigilant and prepared to defend against evolving cyber threats. Ultimately, when security awareness training is done right, it not only reduces the likelihood of human error leading to a breach but also instills a proactive mindset that can significantly strengthen an organization’s overall security posture.
Best Practices and Enhancing Effectiveness for Security Awareness Training
To maximize the impact of security awareness training, it’s essential to tailor the program to the specific needs and roles within the organization. Different departments and job functions face unique cybersecurity challenges, and a one-size-fits-all approach to training may not adequately address these diverse risks. Employees in finance or HR may be more targeted by phishing attempts aimed at accessing sensitive financial data or personal information, while IT staff need to be vigilant about threats related to system access and data integrity. By customizing training content to reflect these specific risks, organizations can ensure that each employee is better prepared to handle the threats they are most likely to encounter.
Effective security awareness training should also be an ongoing process, not a one-time event. Cyber threats are constantly evolving, and so must the training that employees receive. Incorporating real-world scenarios and simulations allows employees to practice their responses in a safe environment, making them more confident and capable in the face of actual threats.
To fully leverage security awareness training in mitigating human risk, it’s essential to adopt a holistic approach that integrates training with broader organizational strategies, measures its effectiveness, and fosters a strong security culture. Here are key strategies to consider:
- Track Key Performance Indicators (KPIs):
- Monitoring KPIs such as phishing simulation success rates, incident reports, and employee participation helps measure the effectiveness of the training program. Tracking these metrics allows organizations to identify areas for improvement and demonstrate the impact of training on reducing human risk (a small worked example of such metrics follows this list).
- Integrate Training with Broader Security Initiatives:
- Security awareness training should align with the organization’s overall cybersecurity strategy, supporting initiatives like incident response, access control, and identity management. Integrating training with broader security measures ensures that employees understand how their actions fit into the larger security framework, making the training more impactful.
- Embed Security into Organizational Culture:
- Creating a culture where security is a shared responsibility involves consistent messaging from leadership and embedding security practices into everyday workflows. When security becomes part of the organizational DNA, employees are more likely to internalize training concepts and apply them consistently.
- Reinforce Positive Security Behaviors:
- Recognizing and rewarding employees who demonstrate good security practices can reinforce positive behavior and encourage others to follow suit. This approach not only boosts morale but also turns security awareness into a proactive, peer-supported effort across the organization.
- Ensure Training Adaptability and Relevance:
- Regularly updating training content to reflect new threats and tailoring it to specific roles within the organization keeps the training relevant and engaging. Adaptable training ensures that employees are always prepared for the latest risks, maintaining a high level of vigilance and reducing the likelihood of human error.
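As a simple illustration of the KPI tracking mentioned above, the sketch below computes phishing-simulation click and report rates per campaign. The campaign figures are made-up sample data, not output from any particular training platform.

```python
# Hypothetical campaign results: (emails sent, links clicked, phish reported)
campaigns = {
    "2024-Q1": (500, 75, 120),
    "2024-Q2": (520, 55, 160),
    "2024-Q3": (540, 38, 210),
}

for name, (sent, clicked, reported) in campaigns.items():
    click_rate = clicked / sent * 100
    report_rate = reported / sent * 100
    print(f"{name}: click rate {click_rate:.1f}%, report rate {report_rate:.1f}%")

# A falling click rate alongside a rising report rate across campaigns is the
# kind of trend that suggests the training is actually changing behavior.
```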
Addressing human risk through effective security awareness training is not just a necessity—it’s a strategic imperative. By integrating training with broader security initiatives, fostering a culture of shared responsibility, and continuously adapting to emerging threats, organizations can significantly reduce the likelihood of breaches caused by human error. A proactive and well-rounded approach to security awareness is key to safeguarding against the constantly shifting tactics of cybercriminals.
What is SPF?
Sender Policy Framework (SPF) is an email authentication method designed to detect and prevent email spoofing. SPF allows domain owners to specify which mail servers are authorized to send emails on behalf of their domain. This is achieved by adding an SPF record to the domain's DNS settings, listing the authorized IP addresses.
SPF is crucial for maintaining the integrity of email communications. By ensuring that only legitimate servers can send emails from a domain, SPF helps prevent phishing attacks and email spoofing, where attackers send emails that appear to come from a trusted source. Implementing SPF enhances the overall security of email communications, protecting both the sender and the recipient.
Given the increasing sophistication of email-based threats, SPF’s role in email security is more important than ever. By using SPF, organizations can reduce the risk of their domain being used in malicious activities, thereby protecting their reputation and their clients.
How does SPF work?
SPF works by allowing domain owners to create a list of authorized IP addresses in the form of an SPF record, which is added to the domain's DNS. When an email is sent, the recipient's email server checks the SPF record to verify that the email is coming from an authorized server. If the sending server's IP address matches one in the SPF record, the email is considered legitimate; otherwise, it may be flagged as suspicious or rejected.
For example, if a company uses specific mail servers to send emails, they can list these servers' IP addresses in their SPF record. When emails are received, the recipient's server checks this record against the sender's IP address. If the IP address is not listed, the email may be marked as spam or rejected.
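As a rough sketch of that check, the following Python fragment tests a sending IP address against the ip4 mechanisms of an SPF record. Real SPF evaluation also resolves include, a, and mx mechanisms and applies the qualifier policy, so this only shows the core idea; the record and addresses are made up.

```python
import ipaddress

spf_record = "v=spf1 ip4:203.0.113.0/24 ip4:198.51.100.25 ~all"

def ip_authorized(sender_ip: str, record: str) -> bool:
    """Return True if sender_ip matches one of the record's ip4 mechanisms."""
    ip = ipaddress.ip_address(sender_ip)
    for term in record.split():
        if term.startswith("ip4:"):
            if ip in ipaddress.ip_network(term[4:], strict=False):
                return True
    return False

print(ip_authorized("203.0.113.50", spf_record))  # True: inside the listed /24
print(ip_authorized("192.0.2.10", spf_record))    # False: not an authorized sender
```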
SPF is often used in conjunction with other email security technologies such as DKIM (DomainKeys Identified Mail) and DMARC (Domain-based Message Authentication, Reporting, and Conformance). DKIM adds a digital signature to verify the sender, while DMARC provides policies and reporting mechanisms for handling SPF and DKIM failures. Together, these technologies offer a robust defense against email fraud and phishing attacks.
How to set up SPF security for your email
Setting up SPF for your domain involves several steps to ensure your emails are properly authenticated and protected:
Step 1: Identify Authorized Mail Servers
List All Mail Servers:
- Identify all servers and third-party services that send emails on behalf of your domain. This includes internal mail servers, marketing platforms, and other email services.
Step 2: Create an SPF Record
Create the SPF Record:
- Format your SPF record as a TXT record in your domain's DNS settings.
- The record should include all authorized IP addresses and look something like this: v=spf1 ip4:192.168.0.1 include:thirdparty.com ~all.
- The ~all tag at the end specifies how to handle unauthorized emails (e.g., -all for strict rejection, ~all for soft fail).
Step 3: Publish the SPF Record
Add the SPF Record to DNS:
- Access your domain’s DNS management settings.
- Add the SPF record as a new TXT record.
Step 4: Test the SPF Configuration
Test Your SPF Record:
- Use online SPF testing tools (or a short lookup script like the one sketched below) to ensure your SPF record is correctly configured and working as intended.
- Send test emails to verify that they pass SPF checks.
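In addition to online testers, you can inspect the published record directly. This sketch assumes the dnspython package and uses a placeholder domain.

```python
import dns.resolver  # assumes the dnspython package is installed

answers = dns.resolver.resolve("example.com", "TXT")
for rdata in answers:
    txt = b"".join(rdata.strings).decode()
    if txt.startswith("v=spf1"):
        print("Published SPF record:", txt)
```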
Additional Security Measures
While SPF is a powerful tool, it should be used alongside other email security measures:
- DKIM (DomainKeys Identified Mail): Adds a digital signature to emails for additional verification.
- DMARC (Domain-based Message Authentication, Reporting, and Conformance): Provides policies and reporting for handling SPF and DKIM failures.
Together, these technologies create a comprehensive email authentication strategy that protects against various types of email fraud and phishing attempts. Implementing SPF is relatively straightforward and the security benefits it provides make it a vital part of any organization's email security measures.
Boost your email security with Darktrace
Darktrace's platform offers advanced AI solutions specifically designed to enhance email security. By integrating SPF with Darktrace's cutting-edge cybersecurity measures, organizations can achieve unparalleled protection against email threats. Discover the advantages of AI for cybersecurity and safeguard your emails with Darktrace. Learn more about our email solutions and how they can benefit your organization by visiting Darktrace's website.
Information About Dynamic Host Configuration Protocol
You can configure WLANs to use the same or different Dynamic Host Configuration Protocol (DHCP) servers or no DHCP server. Two types of DHCP servers are available—internal and external.
Internal DHCP Servers
The controllers contain an internal DHCP server. This server is typically used in branch offices that do not already have a DHCP server.
The wireless network generally contains a maximum of 10 APs or less, with the APs on the same IP subnet as the controller.
The internal server provides DHCP addresses to wireless clients, direct-connect APs, and DHCP requests that are relayed from APs. Only lightweight access points are supported. When you want to use the internal DHCP server, ensure that you configure an SVI for the client VLAN and set its IP address as the DHCP server IP address.
DHCP option 43 is not supported on the internal server. Therefore, the access point must use an alternative method to locate the management interface IP address of the controller, such as local subnet broadcast, Domain Name System (DNS), or priming.
Also, an internal DHCP server can serve only wireless clients, not wired clients.
When clients use the internal DHCP server of the controller, IP addresses are not preserved across reboots. As a result, multiple clients can be assigned to the same IP address. To resolve any IP address conflicts, clients must release their existing IP address and request a new one.
Wired guest clients are always on a Layer 2 network connected to a local or foreign controller.
External DHCP Servers
When used with industry-standard external DHCP servers that support DHCP Relay, the operating system is designed to appear as a DHCP Relay to the network and as a DHCP server to clients: each controller appears as a DHCP Relay agent to the DHCP server and as a DHCP server, at the virtual IP address, to wireless clients.
Because the controller captures the client IP address that is obtained from a DHCP server, it maintains the same IP address for that client during intra controller, inter controller, and inter-subnet client roaming.
External DHCP servers can support DHCPv6.
You can configure DHCP on a per-interface or per-WLAN basis. We recommend that you use the primary DHCP server address that is assigned to a particular interface.
You can assign DHCP servers for individual interfaces. You can configure the management interface, AP-manager interface, and dynamic interface for a primary and secondary DHCP server, and you can configure the service-port interface to enable or disable DHCP servers. You can also define a DHCP server on a WLAN. In this case, the server overrides the DHCP server address on the interface assigned to the WLAN.
For enhanced security, we recommend that you require all clients to obtain their IP addresses from a DHCP server. To enforce this requirement, you can configure all WLANs with a DHCP Addr. Assignment Required setting, which disallows client static IP addresses. If DHCP Addr. Assignment Required is selected, clients must obtain an IP address via DHCP. Any client with a static IP address is not allowed on the network. The controller monitors DHCP traffic because it acts as a DHCP proxy for the clients.
If slightly less security is tolerable, you can create WLANs with DHCP Addr. Assignment Required disabled. Clients then have the option of using a static IP address or obtaining an IP address from a designated DHCP server.
DHCP Addr. Assignment Required is not supported for wired guest LANs.
You can create separate WLANs with DHCP Addr. Assignment Required configured as disabled. This is applicable only if DHCP proxy is enabled for the controller. You must not define a primary/secondary DHCP server configuration; instead, you should disable the DHCP proxy. These WLANs drop all DHCP requests and force clients to use a static IP address. These WLANs do not support management over wireless connections.
DHCP Proxy Mode versus DHCP Bridging Mode
When using external DHCP servers, the controller can operate in one of two modes: as a DHCP Relay or as a DHCP Bridge.
The DHCP proxy mode serves as a DHCP helper function to achieve better security and control over DHCP transactions between the DHCP server and the wireless clients. DHCP bridging mode provides an option to make the controller's role in the DHCP transaction entirely transparent to the wireless clients; a toy sketch below illustrates the difference, followed by a feature comparison.
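The sketch models only the giaddr rewrite and the broadcast-to-unicast behavior described above; the field names follow the DHCP packet layout, but the code is purely conceptual and the addresses are placeholders.

```python
def forward_discover(packet: dict, mode: str, controller_ip: str, server_ip: str) -> dict:
    """Toy model: return a client DHCP discover as it would leave the controller."""
    out = dict(packet)
    if mode == "proxy":
        out["giaddr"] = controller_ip   # controller inserts itself into the exchange
        out["dst"] = server_ip          # client broadcast converted to unicast
    elif mode == "bridging":
        pass                            # forwarded unchanged; controller is transparent
    return out

discover = {"op": "DISCOVER", "giaddr": "0.0.0.0", "dst": "255.255.255.255"}
print(forward_discover(discover, "proxy", "10.10.1.1", "10.20.0.5"))
print(forward_discover(discover, "bridging", "10.10.1.1", "10.20.0.5"))
```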
| Handling Client DHCP | DHCP Proxy Mode | DHCP Bridging Mode |
|---|---|---|
| Modify giaddr | Yes | No |
| Modify siaddr | Yes | No |
| Modify Packet Content | Yes | No |
| Redundant offers not forwarded | Yes | No |
| Option 82 Support | Yes | No |
| Broadcast to Unicast | Yes | No |
| BOOTP support | No | Yes |
| Per WLAN configurable | Yes | No |
| RFC Non-compliant | Yes | No |

Proxy and relay agent are not exactly the same concept, but DHCP bridging mode is recommended for full RFC compliance.
A team of researchers from the University of Bath and Goldsmiths, University of London, has discovered that the technology used in fitness trackers can help thwart cyberattacks if engaged smartly.
Researchers say that a device developed by them using the tech behind the fitness trackers can signal changes made to passwords, files, and anti-virus software when plugged into a computer in a network. The alert will be made in the form of light, vibration, and sound.
Furthermore, the device is architected in such a way that, in future, it could be used to notify system admins when an employee uses the company network to engage in activities like social media, shopping and dating services.
“As humans are the weak link when it comes to cybersecurity, it is leaving us vulnerable to serious cyber threats,” says Dr. Emily Collins, a research associate at the University of Bath’s School of Management.
Currently, a separate team of researchers is working on a sensor that could alert admins when a user steps away from their system without locking the screen.
Dr. Collins expects that the research will provide innovative solutions to CIOs and CTOs who would like to improve their cybersecurity posture in their corporate environments.
For this, the team is reported to be using ‘health psychology’ to pinpoint what motivates people to take action to protect their cybersecurity defense line.
As part of the National Cyber Security Program, the study is expected to receive funding from the Home Office; the project will be using Adafruit Circuit Playgrounds.
According to the International Energy Agency (IEA), data centers are estimated to account for around 1.0-1.5 percent of global electricity consumption. Most of this power is supplied to the information technology equipment and is then dissipated as heat.
Subsequently, much of the remainder is consumed by the cooling system, which is focused on removing this heat from the technical space and conveying it to atmosphere by the most reliable and energy-efficient means possible.
What if, rather than being treated as a troublesome waste product, this heat was to be regarded as a valuable commodity, to be harnessed and reused?
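For a sense of scale, the sketch below runs some back-of-the-envelope arithmetic on that idea. All of the input figures (IT load, utilisation, per-home heat demand) are illustrative assumptions rather than data from any particular facility.

```python
it_load_mw = 1.0         # assumed average IT load of the facility
utilisation = 0.9        # fraction of the year that load is actually present
hours_per_year = 8760

# Nearly all IT power ends up as low-grade heat that could be recovered.
recoverable_heat_mwh = it_load_mw * hours_per_year * utilisation

home_heat_demand_mwh = 12.0  # assumed annual heat demand of a single home
homes_served = recoverable_heat_mwh / home_heat_demand_mwh

print(f"Recoverable heat: {recoverable_heat_mwh:,.0f} MWh per year")
print(f"Roughly {homes_served:,.0f} homes' worth of annual heat demand")
```

Even before a heat pump lifts it to a useful temperature, a modest facility dissipates enough energy to be of real interest to a district heating operator.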
Aligned with its commitment to the COP21 Paris Agreement, the UK Government has pledged to achieve net-zero carbon by 2050. To realize this goal, decarbonizing the UK’s housing stock must be a priority as this alone is responsible for approximately 14 percent of the UK’s carbon emissions. The primary contributor to this carbon footprint is the combustion of fossil fuels for heating.
There has been much discussion about phasing out the installation of gas-fired boilers in newly constructed houses, and there are propositions to discontinue their sale. Originally slated for implementation by 2025, the ban on new boiler sales has been extended to 2035, affording homeowners additional time to transition away from fossil fuels. As a result, finding affordable, low-carbon alternatives to traditional gas-fired boilers has become a priority.
Harnessing residual heat for enhanced district heating
The proposed sustainability targets, as well as the move away from traditional heating systems provide an enticing prospect. Redirecting residual heat from data centers to district heating networks offers advantages to be claimed by various stakeholders.
Data center operators
Harvesting residual heat proves instrumental in reducing operating costs. This eliminates the need for a heat rejection plant, allowing resources to be allocated to critical construction elements like cooling.
Moreover, selling residual heat to Energy Service Companies (ESCos) provides an additional avenue for offsetting operating costs. Sustainability gains are also substantial. Shutting down parts of the cooling plant only modestly affects the data center’s energy consumption and carbon emissions.
The true environmental impact lies in the reduction of offsite carbon emissions from district heating network users, previously reliant on fossil fuels. To improve the likelihood of this, data center operators need to highlight the potential to Municipalities and Planning Authorities. This would bolster the facility’s environmental benchmarking and help secure Planning Approval.
Energy Service Companies (ESCos)
Data centers serve as a reliable and plentiful source of heat, offering a solution to customer needs while concurrently reducing carbon emissions. Utilizing low-grade residual heat from data centers as the primary source for heat pumps enables ESCos to supply hot water to their networks without requiring centralized boiler plant. As data centers increasingly derive their energy from renewable sources, the residual heat becomes truly zero-carbon.
Overcoming adoption challenges
Despite the advantages, the integration of data centers with district heating systems remains limited in practice. Notable examples are in the Nordics and The Netherlands rather than the UK. However, the primary hurdle lies not in the technical constraints but in economic considerations.
The challenge originally stems from the practical complexities of collecting and harnessing residual heat from data centers. Planning authorities actively encourage heat reclamation, but the lack of existing infrastructure poses a significant obstacle.
While planning conditions that mandate developers to allow for connections to ‘future’ heating networks are a positive move, this becomes futile where there is no corresponding plan for heat network development. Developers comply with the condition out of an obligation to meet regulatory requirements rather than in genuine expectation of the infrastructure ever being used.
From the perspective of data center operators, investing in the infrastructure only makes sense when it generates Operational Expenditure (OpEx) savings through the reduced power and water consumption. However, the misalignment in load profiles complicates this matter. As the heating network’s demands peak in winter whilst reducing in summer, the data center operates the opposite way, as it can take advantage of ‘free cooling’ during the colder months.
This misalignment in load profiles also impacts the ESCos. The viability of investing in heat pumps and pipework to connect data centers to their network hinges upon the assumption that the infrastructure will be fully utilized.
However, in practice, data center operators will be hesitant to guarantee this, as the availability of residual heat is contingent on operational factors that may lie beyond their control. This uncertainty, linked to factors like speed of IT equipment deployment and maintenance shutdowns, makes committing to an agreed-upon amount of heat challenging.
Even when these issues can be resolved, there is another large challenge: how much should the energy cost? Disagreement arises because data center operators look to sell heat at prices that cover their infrastructure costs, while the ESCos aim to offer competitive prices to customers, ensuring their profitability and that their infrastructure and waste heat costs are met. Finding a mutually agreeable cost remains a key challenge in fostering a collaborative partnership between the two parties.
What are the solutions?
As a result of these challenges, devising a strategy that can solve these problems is imperative.
The UK confronts a critical need to expand its district heating network infrastructure, which significantly lags counterparts in the Nordics and other continental countries. Government intervention is needed, which could emulate the successful models implemented elsewhere.
Providing grants and tax incentives that can fund the cost of the new infrastructure would mean that planning conditions requiring data centers to connect to district heating networks might be realized. Building the necessary infrastructure would be challenging, but no more so than the creation of the UK’s broadband fiber network, or construction of the natural gas distribution network during the late 1960s.
This governmental support could also play a role in fortifying the business case for both data center operators and ESCos. Implementing tax breaks on energy costs can be instrumental in this regard. This would incentivize data center operators to contribute residual heat to district heating networks, coupled with subsidies for ESCos to harvest the residual heat from the facility. This would further push the two into a mutually beneficial, symbiotic relationship.
Addressing the challenge of the load mismatch requires building at scale. By developing extensive networks that serve various user types, both commercial and domestic, the demand profile is smoothed out.
This approach enables ESCos to capitalize on multiple heat sources based upon season, unit cost, and heat availability. Data centers could be selected to meet summertime heating demand, benefiting the operator in terms of energy savings and carbon reduction.
To minimize transmission losses and pumping energy, data centers – like Edge facilities – would need to be situated close to urban centers where the heating network’s users live. This might create planning obstacles, but if municipality planners came to see data centers as a source of low-grade, low-carbon heat that could support the local community, then this consideration might add weight in zoning decisions.
The issue of reducing operating temperatures to promote decarbonization is also paramount. Many existing district heating networks operate in the medium temperature range (120-175⁰C), and most are designed for a supply temperature above 80-100⁰C. However, these temperatures can only be achieved by burning fossil fuels and preclude the use of ammonia heat pumps combined with low-grade heat sources. To achieve de-carbonization, it will be necessary to adopt much lower operating temperatures.
An example of success
These insights are supported by Cundall’s hands-on experience in Odense, Denmark and underscore the potential of effective residual heat utilisation. In a close partnership with the client, and local ESCo, Fjernvarme Fyn, we were responsible for the design of a hyperscale data center that exports residual heat into the local district heating network.
Fjernvarme Fyn’s expansive network serves 65,000 metered connections. It is linked to several different heat sources and user types, which smooths the load and demand profiles. The data center’s integration into the district heating network is facilitated by multi-stage ammonia heat pumps housed in a dedicated energy center.
These heat pumps leverage low-grade residual heat from the data center, boosting it to the 70-75⁰C required by the district heating network. When fully built out, the data center will provide 165,000 MWh of heat per year. Notably, this heat is inherently free from carbon emissions, given the data center’s reliance on renewable energy sources.
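As a rough, illustrative check on the scale of those figures (the 15 MWh-per-household heat demand below is an assumed value, not taken from the project), a few lines of Python are enough:

```python
# Back-of-envelope check on the Odense figures quoted above.
annual_heat_mwh = 165_000            # heat exported per year, as stated above
hours_per_year = 8_760

avg_thermal_mw = annual_heat_mwh / hours_per_year
print(f"Average thermal output: {avg_thermal_mw:.1f} MW")        # roughly 19 MW

# Assumption for illustration only: ~15 MWh of heat per household per year.
household_demand_mwh = 15
print(f"Heat for roughly {annual_heat_mwh / household_demand_mwh:,.0f} homes")
```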
While replicating this model in the UK will require substantial investment and some political manoeuvring over several years, the demonstrable success of Odense signifies its potential. Harnessing residual heat from data centers to energise district heating networks is not merely a theoretical technical concept but something that is already being done elsewhere. If replicated in the UK, it would mark an important contribution to achieving the UK’s 2050 net zero carbon targets.
More from Cundall
Defining net zero and how we look to achieve it in today’s resource stressed climate
As data centers come under increasing public and regulatory scrutiny, how do we expand the sector’s capacity without increasing its carbon footprint? | <urn:uuid:758b778c-e9b9-4290-b122-744be8555d72> | CC-MAIN-2024-38 | https://www.datacenterdynamics.com/en/opinions/improving-the-case-for-waste-from-data-centers/?utm_source=dlvr.it&utm_medium=twitter | 2024-09-20T06:32:12Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652138.47/warc/CC-MAIN-20240920054402-20240920084402-00714.warc.gz | en | 0.939347 | 1,891 | 3.359375 | 3 |
The GSA, History of the Administration and its Process
The General Services Administration or the GSA for short is an independent agency within the U.S. Federal Government. The GSA serves as the primary vehicle for the buying and selling of goods, services, and products to the various sectors of the federal government. More specifically, the GSA works to “deliver value and savings in real estate, acquisition, technology, and other mission-support services across government.” The products and services that are purchased and sold through the GSA range from intangible products such as software programs to tangible products such as government buildings and facilities.
Why was the GSA created?
The GSA was created in 1949 to help improve and support the functionality of the various products, services, and goods that are provided to the American populace on behalf of the U.S. Federal government. The GSA consolidated several government programs and agencies, including the Federal Works Agency, the Bureau of Federal Supply, and the Office of Contract Settlement, among others, into a single agency within the federal government. In the context of U.S. government operations in the year 1949, the original purpose for the creation of the GSA was to “dispose of war surplus goods, manage and store government records, handle emergency preparedness, and stockpile strategic supplies for wartime”, as well as manage numerous other bureaucratic tasks.
Efforts to formulate the GSA were initiated by former U.S. President Harry Truman in 1947, who requested assistance from another former U.S. President, Herbert Hoover, with the goal of reorganizing the various operations of the federal government. These reorganization efforts eventually took the form of the Hoover Commission, which then led to the creation of both the GSA and the United States Department of Health, Education, and Welfare, or HEW for short. In the years since 1949, the GSA has expanded considerably in size and function, and now offers products and services ranging from real estate and acquisition services to technology.
How can businesses and organizations sell their products and services through the GSA?
If a business or organization is looking to sell their products, services, or goods through the vehicle of the GSA, they must first submit an application to the GSA Multiple Award Schedule (MAS) program and go through a lengthy and rigorous process prior to receiving approval. GSA Multiple Award Schedule (MAS) “are long-term governmentwide, indefinite-delivery, indefinite-quantity (IDIQ) contracts that provide federal, state, and local government buyers commercial products and services at volume discount pricing.” Due to the enormous benefits of being approved for such a program, there are a number of financial requirements that must first be met.
For example, a business or organization seeking to sell their products through the GSA must have financial stability, have been in business for at least 2 years, and must be able to provide products and services that have or can be sold commercially. Moreover, proposals to the GSA Multiple Award Schedule (MAS) are broken down into three subcategories, which include administrative, technical, and pricing. While the pricing aspect of the proposal is considered to be the most important, there are a number of other variables that can factor into the GSA’s decision to approve or deny a particular product or service, including demonstrating why a particular service or product is superior to or more effective than another.
Does the GSA sell software products and ICT services?
The GSA has a long history of selling technology products, services, and goods. For example, the GSA created the Federal Telecommunications System in 1960, a government intercity telephone system. Alternatively, as it relates to more modern offerings, the GSA currently sells a wide range of software programs, Information and Computer Technology or ICT products, and security products and services. Furthermore, the GSA also sells a number of hardware products as well, including but not limited to laptops, desktops, keyboards, printers, scanners, electronic equipment, communications equipment, and fiber-optic equipment.
To this point, CaseGuard Studio’s automatic video, audio, PDF, email, and image redaction software is now available for purchase via the GSA as of January 2022. Notably, CaseGuard Studio is the only all-in-one-redaction software program that is currently available on the GSA marketplace, as no other software program that is currently being offered via the GSA contains the tools and features that consumers need to both manually and automatically redaction personal information and objects from content in a variety of formats and mediums. As such, consumers, government agencies, and a host of other businesses and organizations will now be able to take advantage of the cutting-edge technology contained within CaseGuard Studio. | <urn:uuid:f5c1e101-72e3-485e-b4bc-0122c478d164> | CC-MAIN-2024-38 | https://caseguard.com/articles/the-gsa-history-of-the-administration-technology/ | 2024-09-09T07:58:16Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651092.31/warc/CC-MAIN-20240909071529-20240909101529-00814.warc.gz | en | 0.962481 | 985 | 3.46875 | 3 |
Latency (also called "ping") is the more technical term for lag, which is when you experience response delays during gaming. High latency means more lag, which everyone knows makes gaming way less enjoyable. Low latency means less lag and smoother gameplay.
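If you want a rough feel for your own latency, the TCP connect time to a server is a simple proxy. Real game traffic is often UDP, so treat this as an approximation, and note the host name below is just a placeholder:

```python
import socket
import time

def tcp_ping(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Return the TCP connect time to host:port in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000

# Average of five samples against a placeholder server name.
samples = [tcp_ping("example.com") for _ in range(5)]
print(f"average latency: {sum(samples) / len(samples):.1f} ms")
```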
Generally, when you test your ping, an acceptable number is anywhere around 40 to 60 milliseconds (ms) or lower, while a speed of over 100 ms will usually mean a noticeable lag in gaming. Essentially, you want the latency from your gaming device to the internet server to be as close to 0 ms as possible, as this means it takes little to no time for one device to respond to another. | <urn:uuid:bef8ee80-d366-4dbc-95f9-3e96ad371570> | CC-MAIN-2024-38 | https://www.centurylink.com/home/help/internet/how-to-improve-gaming-latency.html | 2024-09-10T14:43:41Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651255.81/warc/CC-MAIN-20240910125411-20240910155411-00714.warc.gz | en | 0.970823 | 132 | 3 | 3 |
Network security is the set of practices, policies and security measures created and implemented to monitor and prevent unauthorized access to a computer network.
The term network can cover local area networks, WAN links and DMZ networks, all of which need to be protected.
Over a decade ago, when I was an IT consultant to a media and entertainment company, one of their agendas was to protect the data from being stolen.
They simply could not afford to have the data leak into the public domain, as it would severely affect their business relationships with stakeholders and dent their reputation in the market.
The company’s internal policy was to embrace open source software, with a few Microsoft Windows Servers and Workstations, and we plugged the most visible security holes with a mix of open source software and Active Directory Group Policies.
Those weren’t effective.
The inevitable happened…..!
We had a Network Security breach.
When we sat down to analyze what went wrong, there was a general consensus that the overall security framework was focused on just one aspect – Data Loss, and not so much on the various scenarios on how this data loss could happen from inside and outside the network.
We had a couple of good firewalls and properly configured DMZs and group policies, but these couldn’t prevent the data loss from happening.
So, what went wrong?
It took us 5 days to even realize that data was missing, and a solid 6 days after that to just understand how the data was stolen, by whom, and through which medium.
It was a simple case of privileged account abuse from the outside.
Tracing this malicious activity and establishing an audit trail was exhaustive and time consuming.
But, this was more than 10 years ago.
So, has the situation changed now?
Yes and No.
Yes, because we now have sophisticated SIEM, IDS/IPS, software patch management and network change management tools.
And, No, since in 60% of cases attackers can compromise an organization’s data within minutes [source: Verizon 2015 Data Breach Investigations Report].
According to the monthly intelligence report from Symantec, 49.9% of spear phishing attacks that happened in November 2015 targeted large enterprises, while 24.9% targeted small organizations.
Similarly, the number of reported vulnerabilities stood at 346, and there were 19.4 million new pieces of malware created for the same month.
These numbers are staggering; but they are just one part of the overall threat landscape that includes insider attacks, unauthorized USB activity, un-monitored change activity on critical network devices, SQL injection, suspicious changes to servers and SQL databases, and so on.
What do you need to improve network security? Where will you start?
Network Security Basics for 2024:
1. Keep software up-to-date
Unpatched and unsupported software in your organization is one of the biggest threats to network security.
Vulnerabilities in Microsoft and 3rd-party software that are not patched on time make it easier for hackers and malware to sneak into your IT environment, run exploits and compromise data, wrecking business productivity and the overall security of the network.
Automating software patches and creating a thorough patch management framework will reduce the security headache.
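As a minimal illustration of the "know what needs patching" step, the sketch below lists pending updates on a Debian/Ubuntu host; other platforms have their own equivalents (yum, WSUS, commercial patch managers), and a full patch management framework goes much further than this:

```python
import subprocess

def upgradable_packages() -> list[str]:
    """List packages with pending updates on a Debian/Ubuntu host."""
    result = subprocess.run(
        ["apt", "list", "--upgradable"],
        capture_output=True, text=True, check=True,
    )
    # The first line of output is a header ("Listing..."), so skip it.
    return [line.split("/")[0] for line in result.stdout.splitlines()[1:] if line]

pending = upgradable_packages()
print(f"{len(pending)} packages need patching")
for name in pending[:10]:
    print(" -", name)
```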
2. Centralize log collection, analysis and threat detection
Centrally collecting all the event logs, syslogs and flat files from your network devices, servers, workstations, applications and databases, and analyzing them for threats and suspicious activities, is a general best practice when implementing an SIEM solution.
Continuous monitoring of all these devices and real-time alerts will enhance the security incident awareness in your network environment.
If we had thought about centralized log collection and implemented an SIEM solution, the data theft mentioned above would have been prevented to a great extent, and we would have had a complete audit trail of this suspicious event.
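A SIEM does this correlation at scale, but the idea can be illustrated with a few lines of Python that scan a standard Linux auth log for repeated failed SSH logins (the threshold of 10 is arbitrary and purely illustrative):

```python
import re
from collections import Counter

FAILED = re.compile(r"Failed password for (?:invalid user )?\S+ from (\S+)")

def failed_logins(log_path: str = "/var/log/auth.log") -> Counter:
    """Count failed SSH login attempts per source IP."""
    counts = Counter()
    with open(log_path, errors="ignore") as log:
        for line in log:
            match = FAILED.search(line)
            if match:
                counts[match.group(1)] += 1
    return counts

for ip, attempts in failed_logins().most_common(5):
    if attempts > 10:                       # arbitrary threshold for the example
        print(f"ALERT: {attempts} failed logins from {ip}")
```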
3. Enforce change management and disaster recovery
A configuration change on a core router, switch or firewall on your network can have three scenarios:
1. An authorized change from the network engineer successfully modifying the startup or running configs
2. An authorized change to a config file that made the network device unstable and caused network downtime
3. An unauthorized/malicious change to the config file from an insider or external bad actor compromising network security and business productivity
Apart from the aforementioned factors, a network device failure can add to the woes.
An automated Network change and configuration management (NCCM) solution addresses all of the above issues by backing up the configurations from hundreds of network devices, and restoring the last known good configuration on to the replacement router or switch, bringing it up-to-speed. Minimal downtime!
What’s more important is collecting the event logs from the NCCM system and centralizing it on the SIEM solution for real-time alerts and detailed audit trails with accurate time-stamps.
This integration is immensely helpful in continuously monitoring and detecting critical security events such as logon failures, account modifications and changes to global security settings.
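Commercial NCCM tools handle this end to end, but the core idea (back up each device's configuration and diff it against the last known good copy) can be sketched in a few lines; fetching the running config, for example with a library such as Netmiko, is assumed to happen elsewhere:

```python
import datetime
import difflib
import pathlib

def backup_and_diff(device: str, running_config: str, backup_dir: str = "backups") -> list[str]:
    """Store today's running config and return a unified diff against the previous backup."""
    path = pathlib.Path(backup_dir) / device
    path.mkdir(parents=True, exist_ok=True)
    previous = sorted(path.glob("*.cfg"))
    (path / f"{datetime.date.today().isoformat()}.cfg").write_text(running_config)
    if not previous:
        return []
    old_lines = previous[-1].read_text().splitlines()
    return list(difflib.unified_diff(old_lines, running_config.splitlines(), lineterm=""))

# Any non-empty diff outside an approved change window should raise an alert
# and be forwarded to the SIEM along with who made the change and when.
```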
4. Monitor switch port usage and track endpoint devices
Tracking user devices and mapping them to specific switch ports improves your IT security by detecting and tracking all endpoint devices, regardless of whether your network is BYOD-friendly or not.
It's important to monitor and manage switch port utilization so you know important details, such as which switch and switch port a suspect device is attached to, what type of device it is (laptop or mobile), the user identity logged on to that device, logon histories, and other details such as IP address, MAC address and hostname.
You should also identify and blacklist rogue devices so that you can be alerted the moment such a device connects to a switch.
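In practice this data usually comes from SNMP polling or the switch vendor's API; purely to illustrate the logic, the sketch below assumes the MAC address table has been exported to a CSV file with switch, port and mac columns, and checks it against a blacklist and an asset register (all values are placeholders):

```python
import csv

BLACKLIST = {"aa:bb:cc:dd:ee:ff"}        # known rogue devices (example value)
APPROVED = {"00:11:22:33:44:55"}         # sanctioned MACs from the asset register

def review_mac_table(path: str = "mac_table.csv") -> None:
    with open(path, newline="") as handle:
        for row in csv.DictReader(handle):
            mac = row["mac"].lower()
            where = f"{row['switch']} port {row['port']}"
            if mac in BLACKLIST:
                print(f"ALERT: blacklisted device {mac} seen on {where}")
            elif mac not in APPROVED:
                print(f"REVIEW: unknown device {mac} on {where}")
```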
5. Monitor user activity
Did you know that there were 79,790 security incidents in 2014, and 55% were from privileged account abuse? [Source: Verizon 2015 Data Breach Investigations Report].
And, that’s exactly why continuous monitoring of user activity is very important.
A network engineer or systems administrator must be alerted immediately if a specific server or network device is being accessed by a user on a holiday or outside of logon hours.
It could be a valid operation by the user, but also could be a suspicious activity.
Similarly, all other activities such as adding users to admin accounts, modifying privileged accounts, logon failures, etc., must trigger an alert so that the IT personnel concerned can take the right action to prevent a possible attack or security compromise.
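The out-of-hours rule is easy to express in code; a SIEM would apply something like the check below to every authentication event it receives (the business hours and holiday list are placeholders to adapt to your own policy):

```python
import datetime

BUSINESS_HOURS = range(7, 20)                     # 07:00-19:59, example policy
HOLIDAYS = {datetime.date(2024, 12, 25)}          # placeholder holiday calendar

def is_suspicious(logon_time: datetime.datetime) -> bool:
    """Flag logons outside business hours, at weekends, or on holidays."""
    return (
        logon_time.hour not in BUSINESS_HOURS
        or logon_time.weekday() >= 5              # Saturday or Sunday
        or logon_time.date() in HOLIDAYS
    )

event = datetime.datetime(2024, 12, 25, 2, 30)    # 02:30 on a holiday
if is_suspicious(event):
    print("ALERT: out-of-hours access to a critical system - investigate")
```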
But, before anything, stop sharing passwords among administrators – this is only going to muddle credentials management, making it difficult to investigate when something goes wrong.
A disgruntled ex-employee might just login with that shared password, and wipe everything on a router or server.
And, you’d never know.
6. Monitor SQL server activity
Apart from monitoring user activity on network devices and servers, you must also be aware of the security incidents happening on database servers.
Based on the configuration of the SQL server and the SIEM solution collecting the logs and events, you should be aware of activities such as database errors or exceptions, change attempts on database objects, duplicate connection attempts, logon attempts by unknown users, cross-site scripting attempts, and so on.
Database security for business critical databases helps protect confidential information from being compromised.
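The exact events available depend on how auditing is configured on the database server. Assuming the audit trail has been exported as JSON lines with action, login and client_ip fields (an assumed schema, purely for illustration), a simple review pass might look like this:

```python
import json
from collections import Counter

KNOWN_ACCOUNTS = {"app_user", "report_user", "dba_admin"}   # example whitelist

def review_db_audit(path: str = "sql_audit.jsonl") -> None:
    """Scan an exported database audit log for suspicious login activity."""
    failures = Counter()
    with open(path) as handle:
        for line in handle:
            event = json.loads(line)
            action = event.get("action")
            login = event.get("login", "?")
            if action == "LOGIN_FAILED":
                failures[login] += 1
            elif action == "LOGIN_SUCCESS" and login not in KNOWN_ACCOUNTS:
                print(f"ALERT: unknown account {login} logged in from {event.get('client_ip')}")
    for login, count in failures.items():
        if count >= 5:                            # arbitrary threshold
            print(f"ALERT: {count} failed logins for {login} - possible brute force")
```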
7. Meet regulatory compliance standards
In the media and entertainment company data theft incident, we also found out that the data was transferred on to a USB hard drive – a seemingly simple, but uncontrollable activity at that time.
The situation would have been different if only we had followed some compliance regulations, and implemented an SIEM solution that alerted on and ejected USB drives the moment they were inserted into a USB port, anywhere in the organization.
There are a number of regulatory compliance standards, such as SOX, PCI DSS, DISA STIG, and HIPAA, to name a few. The question you must ask yourself is how easily you can demonstrate compliance with one or more of these standards.
Investing in an SIEM solution provides you with built-in rules and reports to meet most compliance standards, helping you avoid regulatory fines and preventing expensive data breaches.
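Most SIEM and endpoint agents ship USB monitoring out of the box, but the underlying mechanism is simple to demonstrate. On Linux, for instance, the third-party pyudev library can watch for device insertions (Windows environments would typically rely on endpoint agents or event-log forwarding instead):

```python
import pyudev  # third-party library for Linux udev events

context = pyudev.Context()
monitor = pyudev.Monitor.from_netlink(context)
monitor.filter_by(subsystem="usb")

# Report every USB attach event; a real agent would forward these to the SIEM
# and could trigger an automated response such as blocking the device.
for device in iter(monitor.poll, None):
    if device.action == "add":
        print(f"ALERT: USB device attached: {device.get('ID_MODEL', 'unknown')}")
```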
Though network security is the primary responsibility of the network engineer or information security manager, the end-users in the organization must also be made aware of the implications of security with proper training and periodic dissemination of information on common attacks like phishing and opening unknown executable files.
A powerful SIEM solution, in combination with change management, device tracking and software patch processes will address most of the existing security threats. We can then look further into scaling up to address future threats that will come our way.
Network Security FAQS
What are some common network security threats?
Common network security threats include malware, phishing attacks, denial of service (DoS) attacks, ransomware, and insider threats.
How can organizations ensure the security of their wireless networks?
Organizations can ensure the security of their wireless networks by implementing strong encryption protocols, requiring strong passwords, disabling unused network services, and using network segmentation to separate different types of traffic.
What is endpoint security and why is it important for network security?
Endpoint security refers to the security measures implemented on individual devices, such as laptops and smartphones, to protect them from cyberattacks. It's important for network security because endpoints can be vulnerable entry points for attackers seeking to infiltrate an organization's network.
How can organizations protect against social engineering attacks?
Organizations can protect against social engineering attacks by educating employees on how to recognize and avoid common tactics, such as phishing emails and phone scams, and implementing strict access controls and authentication procedures.
How can organizations prevent data breaches and data loss?
Organizations can prevent data breaches and data loss by implementing access controls, using encryption to protect sensitive data, monitoring network activity for signs of suspicious behavior, and educating employees on security best practices. | <urn:uuid:15425a5e-b7ca-463f-8600-743a4732053a> | CC-MAIN-2024-38 | https://www.networkmanagementsoftware.com/7-essentials-for-stronger-network-security/ | 2024-09-10T14:26:49Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651255.81/warc/CC-MAIN-20240910125411-20240910155411-00714.warc.gz | en | 0.94457 | 2,068 | 2.890625 | 3 |
“When we have multiple noise sources coming from all sides, . . . mitigation becomes a challenge, and that trend is likely to continue.”
Systems are getting smaller and denser, which creates many power-related challenges. As die shrinking reduces the relative distance between components, coupling coefficients go up. How we manage power distribution becomes critical: when we have so many devices in a small area, through-the-air coupling can sometimes interfere with system functionality. In the “old days,” we had the luxury of putting distance between noise sources and victims. Now, we can rarely do that. In cases of a single noise source, we may find mitigation relatively easy. When we have multiple noise sources coming from all sides, however, mitigation becomes a challenge, and that trend is likely to continue.
Here is an example. We had a packaged DC-DC converter with a typical pin layout: pins for power input and output, pins for control signals, and a few auxiliary power connections for bias voltages in the few-milliamperes range. We built a circuit using good engineering practices, but it did not function properly. We found out that noise generated by high-current circuitry inside the module was coupling to a low-current supply pin on the DC-DC converter. This finding was surprising because you typically expect problems when noise couples to a sensitive signal, but here it was coupling with a power pin. The resulting strange functionality made the problem difficult to diagnose. The power converter still worked, but it put out the wrong DC voltage by a few millivolts. In some circuits, a couple of millivolts would not matter, but they caused serious problems in our precisely sized circuit. As designs become denser, surprises like this may pop up more frequently.
This is an excerpt from 7 Experts on New Approaches for Power Distribution
Network Design. The eBook was generously sponsored by KEMET Corporation and Mouser Electronics. | <urn:uuid:71e5b05e-dec0-4c3b-ac90-28c839279aae> | CC-MAIN-2024-38 | https://mightyguides.com/istvan-novak-when-shrinking-circuits-pay-attention-to-geometry/ | 2024-09-13T01:55:38Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651506.7/warc/CC-MAIN-20240913002450-20240913032450-00514.warc.gz | en | 0.962876 | 401 | 2.859375 | 3 |
Advanced Drone Swarms Revolutionize Early Wildfire Mitigation
A swarm of AI-driven drones has been developed for early wildfire detection and response.
The drones utilize thermal and optical imaging for autonomous fire assessment and reporting.
Developed by the University of Sheffield, with swarm coordination technology from the University of Bristol.
Tested by Lancashire Fire and Rescue Service in collaboration with Windracers.
The technology enhances early intervention capabilities and reduces wildfire risks.
Wildfires in the UK are becoming more frequent and severe due to climate change and other factors.
Main AI News:
A pioneering development in wildfire management has introduced a swarm of self-coordinating drones aimed at early intervention. Utilizing AI-driven thermal and optical imaging, these drones autonomously detect, assess, and report fire conditions, providing critical information to firefighting teams.
Developed by the University of Sheffield’s School of Electrical and Electronic Engineering, the AI system uses advanced computer vision techniques to operate effectively across various weather conditions. Paired with the University of Bristol swarm technology, these drones can quickly deploy fire retardants, monitor the situation, and return to base autonomously.
Lancashire Fire and Rescue Service, known for battling a significant wildfire in 2018, has tested this innovative drone swarm. The project is a collaboration with Windracers, a British developer of autonomous cargo aircraft, and leading AI and robotics experts from the Universities of Bristol and Sheffield. Together, they have developed a unique system for detecting and suppressing fires before they escalate.
The Windracer ULTRA aircraft used in the project can carry 100 kg of fire retardant and autonomously cover vast areas, potentially as large as Greece, during high-risk periods. As wildfires become more frequent and severe in the UK, driven by climate change and other factors, this technology significantly advances early wildfire mitigation strategies.
The development of AI-driven drone swarms for wildfire mitigation represents a significant shift in the market for firefighting technology. As wildfires become more prevalent and severe, the demand for advanced, autonomous solutions will likely increase. This innovation enhances the efficiency of firefighting efforts and reduces risks to human life and property. Companies involved in AI, robotics, and drone technology are well-positioned to capture a growing market share as governments and organizations seek out effective tools for disaster management. This technology could lead to new partnerships, investment opportunities, and a redefinition of best practices in emergency response. | <urn:uuid:1a25212b-c91e-40a7-8dd0-4eaa80aac994> | CC-MAIN-2024-38 | https://multiplatform.ai/advanced-drone-swarms-revolutionize-early-wildfire-mitigation/ | 2024-09-16T20:27:29Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651710.86/warc/CC-MAIN-20240916180320-20240916210320-00214.warc.gz | en | 0.929933 | 496 | 2.90625 | 3 |
What Are The Negative Effects of Malware?
As malware attacks continue to increase in frequency, businesses must take proactive steps to protect their enterprise data from theft or ransom.
IT security experts have revealed that there were over one million web attacks against individuals every single day in 2015. Unfortunately, the number of attacks didn't slow down in 2016, which meant that businesses had to take stronger security measures to protect their vital business data. Education and awareness of the negative effects of malware, as well as the steps needed to prevent these attacks, remains one of the best ways that businesses can properly protect their information and data.
How Can Malware Negatively Affect Your Business?
Hackers deliver malware to your computer by disguising emails, creating fake websites, or tricking you into directly downloading the malicious code. Once you have accidentally installed the malicious program, a number of activities can take place:
- The malware can begin to replicate;
- It can block access to certain files or the entire system;
- Your desktop can be spammed with unwanted ads; or
- It can capture keystrokes to better access secure areas of your enterprise network.
As these negative effects show, malware protection and detection is key to protecting your enterprise data.
What Are The Most Popular Types Of Malware?
Identifying the most popular types of malware attacks can help you to protect and detect malware attacks.
- Computer Virus — A virus is one of the most common types of malware. It is typically designed to replicate itself from one file to another for a wide variety of purposes.
- Trojan Horse — This type of malware is designed to masquerade as a harmless program, while simultaneously attempting to steal passwords or important files from the computer or account user.
- Worms — Worms are a type of malware that are designed to create widespread damage on an entire network by “jumping” from computer to computer.
- Spyware — As the name suggests, this type of malware is designed to monitor user activity, while simultaneously gathering information, including: passwords and login credentials.
- Logic Bombs — When triggered by a specific action, this type of malware can wipe a computer hard drive or crash an entire system.
- Ransomware — This new type of malware is designed to hold your computer, network, system, or data hostage until a set ransom is paid to the hackers.
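Real anti-malware products combine many techniques (signatures, heuristics, behavioral analysis), but one basic building block of detection is hash matching against known-bad files. A minimal sketch of the idea is shown below; the hash is a placeholder, and in practice the set would be populated from a threat-intelligence feed:

```python
import hashlib
import pathlib

# Placeholder value; populate from your AV/EDR vendor's signature feed.
KNOWN_BAD_SHA256 = {"0" * 64}

def scan_directory(root: str) -> None:
    """Hash every file under root and flag matches against the known-bad set."""
    for path in pathlib.Path(root).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            if digest in KNOWN_BAD_SHA256:
                print(f"ALERT: known malware signature found in {path}")

scan_directory("downloads")
```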
In conclusion, understanding the harmful affects of malware can help you to create a proactive security approach to securing your network from current and future threats. Remember that employee education is the key to any IT security solution. To learn more about how to protect your network from malware attacks or for additional information on the latest data security threats contact Dynamic Quest! | <urn:uuid:e7ff142b-77c1-449d-ae4f-998c616fc38c> | CC-MAIN-2024-38 | https://dynamicquest.com/what-are-the-negative-affects-of-malware/ | 2024-09-20T10:27:19Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652246.93/warc/CC-MAIN-20240920090502-20240920120502-00814.warc.gz | en | 0.927579 | 550 | 3.015625 | 3 |
Vulnerability management is the process of identifying, assessing, and treating cyber vulnerabilities across systems and software used in an organisation. It is an ongoing, cyclical process to manage the vulnerabilities and report on the status. Vulnerability management is an important part of an organisation’s security program and is integral to reducing the attack surface.
The technology landscape is in constant flux, with systems and networks undergoing continual change. Cyber attacks are also becoming more vicious and are using increasingly sophisticated technology. Vulnerability management is thus a continuous process of monitoring and treating vulnerabilities.
Vulnerability, Risk, and Threat
While all three terms denote a security concern, they differ in meaning and also in their treatment approach.
A vulnerability as defined in ISO 27002 is a weakness in an asset or a group of assets that can be exploited by threats.
A threat is something that can exploit the vulnerability in a system or software.
A risk is potential damage that can be caused when a threat exploits a vulnerability.
The Vulnerability Management Process
Vulnerability management is an ongoing process. The process can use different terminologies in different organisations or contexts, but the process remains more or less similar in all cases.
The vulnerability management process requires a precursor as defined by Gartner’s Vulnerability Management Guidance Framework. It outlines 5 steps before vulnerability management begins.
- Determine the scope of the program
- Define roles and responsibilities
- Select vulnerability assessment tools
- Create and refine policies and SLAs
- Identify asset context sources
After this groundwork, you can begin the vulnerability management process.
The vulnerability management process can be broken down into 4 major steps.
- Identifying vulnerabilities
- Evaluating vulnerabilities
- Vulnerability treatment
- Reporting vulnerabilities
This step is usually carried out using a Vulnerability Scanner, even though other methods are available, too. A vulnerability scanner is a tool that searches for known vulnerabilities in the IT infrastructure and reports them. It will perform the following tasks:
- Ping network-accessible systems or send TCP/UDP packets to systems to scan them
- On scanned systems, identify the services running and detect open ports
- Where possible, remotely login into systems to detect detailed system information
- Check system information against known vulnerabilities
- Report vulnerabilities identified in the system
If you are not using a vulnerability scanner, you can use other vulnerability management solutions that continuously gather data from systems without running scans. The end result should be the same though – i.e., the method should be able to identify vulnerabilities.
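Dedicated scanners go much further (service version detection, authenticated checks, matching against vulnerability databases), but the first step above, finding network-accessible services, can be illustrated with a tiny TCP probe. Only ever run this against systems you are authorised to scan; the target address below is a documentation placeholder:

```python
import socket

COMMON_PORTS = {22: "ssh", 80: "http", 443: "https", 3389: "rdp"}

def probe(host: str) -> dict[int, str]:
    """Report which of a handful of common TCP ports are open on a host."""
    open_ports = {}
    for port, service in COMMON_PORTS.items():
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(0.5)
        try:
            if sock.connect_ex((host, port)) == 0:
                open_ports[port] = service
        finally:
            sock.close()
    return open_ports

print(probe("192.0.2.10"))    # TEST-NET address used as a stand-in target
```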
Vulnerabilities need to be evaluated to ascertain their severity and also so that the vulnerability management process is aligned with risk management. Vulnerability management solutions evaluate vulnerabilities by assigning risk ratings and scores. A popular scoring system is the Common Vulnerability Scoring System (CVSS). Read more about how CVSS works in What is the Common Vulnerability Scoring System?
These scores help you prioritise vulnerabilities. Along with the CVSS scores, you should also consider the below factors to get a complete view of the risks associated with a vulnerability.
- How difficult is it to exploit the vulnerability?
- Can it be exploited from the internet?
- Can you verify that the vulnerability is not a false positive?
- Is there a known code that can exploit the vulnerability?
- What would be the consequences of the vulnerability being exploited?
- How long has the vulnerability resided in the system?
- Have you implemented any security controls to address the vulnerability?
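How these contextual factors are combined with the CVSS base score varies between tools and organisations; the weights in the sketch below are arbitrary and purely illustrative of the idea of risk-based prioritisation:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    cvss: float                # CVSS base score, 0.0-10.0
    internet_facing: bool      # contextual factors from the checklist above
    exploit_available: bool

def priority(finding: Finding) -> float:
    """Blend the CVSS score with simple context weights (illustrative only)."""
    score = finding.cvss
    if finding.internet_facing:
        score += 1.5
    if finding.exploit_available:
        score += 1.0
    return min(score, 10.0)

findings = [
    Finding("Outdated OpenSSL on web server", 7.5, True, True),
    Finding("Weak cipher suite on internal app", 5.3, False, False),
]
for f in sorted(findings, key=priority, reverse=True):
    print(f"{priority(f):>4.1f}  {f.name}")
```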
Vulnerability scanners and other vulnerability management tools are not always perfect, and there is a chance of false positives while identifying vulnerabilities. Thus, all vulnerabilities need to be validated.
Penetration testing is a comprehensive method to validate vulnerabilities. There are other validation methods, too. Validation is important because it can help uncover vulnerabilities that you didn't know existed in your system or didn't know were severe enough to treat. Read more on how pen testing is relevant to cybersecurity and GRC in Cybersecurity, GRC, and the Role of Penetration Testing.
A vulnerability once validated as a risk needs vulnerability treatment. Below are the options for vulnerability treatment.
- Remove vulnerability: Implement a fix for the vulnerability to remove it from the system. This is the ideal solution for any company.
- Reduce vulnerability: If the vulnerability cannot be completely eliminated from the system, it needs to be mitigated. If a security patch or control that can remove the vulnerability isn’t available, measures should be taken to reduce the risk associated with the vulnerability. This is usually a temporary solution till such a point where the control or security patch can be implemented.
- Monitor vulnerabilities: When vulnerabilities can neither be removed nor reduced, they need to be monitored to detect a threat or an attack.
- Accept vulnerabilities: This is for vulnerabilities that can neither be removed nor reduced. These vulnerabilities need to be accepted. When the vulnerability is low-risk and low-impact, or when the cost of fixing the vulnerability is greater than the impact of the vulnerability, it is recommended that the vulnerability be accepted.
Vulnerability management solutions also recommend treatment options. However, the option provided might not always be the most optimal solution. So, any option must be evaluated by security experts, system owners, and system administrators.
When vulnerability fixes are implemented, it is recommended that you run the vulnerability scans again to confirm that the vulnerability has been resolved.
Vulnerability management solutions come with different options for customising reports and a dashboard view to see how the vulnerability management program is performing. These reports help the security teams make decisions about the security controls and other techniques to be used to deal with each vulnerability.
Since vulnerability management is a regular and continuous process, it helps to have updated reports generated regularly so that vulnerabilities can be monitored.
Since vulnerability management is a cyclical process, the information from the reports needs to be used to improve the status of vulnerabilities and then repeat the above cycle right from identifying vulnerabilities.
What to look for in Vulnerability Management solutions?
At the core of vulnerability management is managing the exposure of your data and assets to known vulnerabilities. However, when you choose a vulnerability management solution, you should also consider the below factors.
Many vulnerability management solutions provide endpoint agents to continuously gather vulnerability data. These agent-based solutions can sometimes be quite bulky, impacting the performance of the endpoint. Choose a lightweight solution that will not impact the performance.
Vulnerability management solutions need to be fast. If they take too long to scan the networks and collect vulnerability data, chances are the data is already outdated by the time the tool reports it. This is a common problem with network-based vulnerability management solutions.
The vulnerabilities in the system should be instantly visible. A vulnerability management solution that shows a real-time dashboard can help you to see vulnerabilities in time and the further process to assess and treat them can be triggered.
Tackle changes with vulnerability management
There are a lot of organisational changes due to the demand for adding more systems and applications, rising adoption of cloud, hybrid work culture, etc. At the same time, the threat landscape is evolving, and the number of cyber attacks is increasing.
All these changes need a strong vulnerability management process. New changes such as onboarding a new partner, hiring, getting a cloud service, etc. are inevitable in a growing organisation. But it is also growing your attack surface. Protecting your organisation from these threats is critical and vulnerability management is an important part of this exercise. Read more in the blog Integrating Vulnerability Management into your ISMS.
To know more about how the 6clicks platform helps with vulnerability management and supports integration with vulnerability scanning platforms, get in touch with our team to take a free tour of the platform.
Written by Andrew Robinson
Andrew started his career in the startup world and went on to perform in cyber and information security advisory roles for the Australian Federal Government and several Victorian Government entities. Andrew has a Masters in Policing, Intelligence and Counter-Terrorism (PICT) specialising in Cyber Security and holds IRAP, ISO 27001 LA, CISSP, CISM and SCF certifications. | <urn:uuid:5587045b-8b98-4c50-8c83-89c478b4f2af> | CC-MAIN-2024-38 | https://www.6clicks.com/resources/blog/understanding-vulnerability-management | 2024-09-20T09:11:41Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652246.93/warc/CC-MAIN-20240920090502-20240920120502-00814.warc.gz | en | 0.929951 | 1,656 | 3.046875 | 3 |
Imaging technologies are increasingly part of the frontline for security, policing, warfare, and even areas such as fire management, border control, and building management.
These technologies include the traditional areas of CCTV and x-ray screening; however the use of electromagnetic spectrums is opening up new ways of looking at the world around us. This includes measures of infrared, thermal, millimetre wave and terahertz technologies portraying scenes in a very different form from what we see with the naked eye.
Even in areas such as CCTV, filtering techniques and computer representations are making the traditional viewing of these kinds of images more abstract. Further, we are increasingly likely to see the integrated use of these technologies in a specific location, or alternatively see them spread over a number of control points to evaluate risks in different ways.
The electronic imaging of people, goods, materials, areas, and even processes and movement is a fast developing aspect of how we monitor against a wide range of risks.
Imaging technologies can now create visual images on monitors or screens that are representations or abstract conceptualisations. For instance, the use of an electromagnetic wavelength is not measuring visible light, but rather differences in the emissions of particles. Similar issues with filters have been observed for low dosage full body scanners.
Moving to the further abstraction of a terahertz based system provides a general body shape reflecting blobs of various wavelengths of electromagnetic radiation while millimetre wave images also require a high degree of abstraction.
With this type of technology, the images being viewed are abstract, and the way in which they are displayed and the colours used are a representation rather than actual colour. In some cases, images are entirely synthetic – real items are being displayed in a representative form which may depart radically from the actual outline, shape, texture, surface composition, and form of the actual object.
This means that operators or people responsible for viewing image material are having to think of threat characteristics in conceptual rather than literal terms. They are using abstract representations of a threat or risk for detection purposes rather than the normal manner in which humans see the world.
In terms of CCTV, we find for instance that using thermal cameras will allow the camera to view a scene or situation shrouded in darkness or mist, providing a very different image to the real life picture produce by a standard CCTV camera. Recognising body language takes on a new perspective under such conditions.
In military applications, the use of thermal imaging night vision goggles adds issues of depth perception as users learn how heat may impact within a three dimensional setting and what happens when you put your foot down in a particular hue of green or grey. These visual demands create a whole different set of recognition criteria and visual analysis techniques for effective monitoring performance.
Similarly, we have found with x-rays that aviation x-ray providers have reduced the number of filters available to x-ray screeners, because increased choice results in prolonged viewing, which often leads to more confusion rather than improving decision resolution.
Technology developers need to give particular attention to the nature of the display and how to make it meaningful to operators. Not to do so would cripple the technology before it even starts. However, this is not always to say that the image representations are the best suited for people or even particular detection tasks or situations.
In addition, computer based visual analytic techniques are often added to the technology to assist in the detection task. This doesn’t always work to the specified level, and in some cases has been shown to detract from detection with more complex images. | <urn:uuid:052f812c-40ac-402d-97a4-acf7f02ec3b3> | CC-MAIN-2024-38 | https://cyberriskleaders.com/importance-of-visual-analysis-skills-in-effective-risk-detection/ | 2024-09-09T15:03:16Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651103.13/warc/CC-MAIN-20240909134831-20240909164831-00014.warc.gz | en | 0.936198 | 715 | 3.25 | 3 |
Data democratization is to equip people with easy access to data/information without extensive or expensive training. The goal of data democratization is to allow non-specialists to be able to gather and analyse data without requiring outside help and is often referred to as citizen access. Studies have shown that data-centric organizations make better strategic decisions, have higher efficiency, improved customer satisfaction, and generate more profits.
Data Democratization, named by Gartner as one of the “Top 10 Strategic Technology Trends for 2020”, is about equipping people with easy access to data/information without extensive or expensive training. The goal of data democratization is to allow non-specialists to be able to gather and analyze data without requiring outside help and is often referred to as citizen access. Studies have shown that data-centric organizations make better strategic decisions, have higher efficiency, improved customer satisfaction, and generate more profits. In fact, Forrester predicts that such organizations are on track to make US$1.8 trillion annually by 2021.
Even when an enterprise wants to embrace democratization there can be difficulties in making data available freely. Data may be stored in silos, making it difficult for employees in different departments to access data. It requires three key factors of technological enablement — Data Access, Machine Learning, and Deployment.
Data Access - It is a view of all your structured, unstructured, and cloud data to allow easy access to all your information. By doing this you can achieve faster insights with minimum cost. Gartner predicts that enterprises will spend 50% of their time and cost just accessing various silos and types of data.
Machine Learning - Humans actively analyze the data to find out what's expected and what's not, while the system analyses the data and automatically asks additional questions to find unexpected information, which in turn causes humans to dig deeper. For implementing machine learning processes it is important to understand your users, how they will access this information, and importantly how you will keep it secure.
Deployment - According to Gartner, more than 85% of machine learning projects fail because they are unable to achieve any significant value to the business. The final step of democratization is having the ability to quickly deploy insights as required.
For data democratization to succeed, it needs to be trustworthy. It means keeping data safe and secure. The public internet is insecure, but a digital ecosystem that connects its users using private connections can improve performance, scalability, and resilience while ensuring secure data exchange.
While Data Democratization can fuel innovation, it can pose a serious threat to data security if not deployed properly. The way that data is collected, handled, and analyzed has become a raging debate across consumers, digital enterprises, regulators, and government agencies alike. Just as data governance and security are essential to data privacy, if done right, they can act as a great stepping stone for data democratization. Data democratization and the positive culture it can create is therefore critical to the long-term success of any organization.
Businesses that wish to benefit from data democratization will have to create it intentionally. This means an organizational investment must be made in terms of budget, software, and training. In the world of data democratization, breaking down information silos is the first step toward user empowerment. This cannot be done without customizable analytics tools capable of desegregating and connecting previously siloed data, making it manageable from a single place.
Bimodal strategies should be considered in the overall data democratization strategy. The bimodal strategy is the practice of managing two separate but coherent styles: one focused on data predictability and the other on data exploration.
Regulations and Data laws are increasing around the globe and users are more aware than ever of the potential harms from the improper handling of data. Because of this, maintaining proper data privacy is an imminent requirement for a digital business. There can be no data privacy without data governance. Moreover, there can be no data governance without data security. These are all stepping stones and not roadblocks for responsible and safe data democratization.
This golden era of data democratization requires trust. A recent study found that 81% of executives rate data as very critical to their businesses’ outcomes, and 76% of CFOs agree that having a single version of the truth is essential. Organization size does impact who owns the content and how you execute—but all businesses, regardless of size, should invest in a data governance strategy. This way, as smaller organizations scale, they have a strong data strategy that grows with them. Traditionally, data has been managed and owned by personnel working in IT.
While the ownership of data may remain unchanged, successful data democratization requires universal accessibility throughout the organization. The company’s Business Intelligence team can play a crucial role in fostering data accessibility, coordinating with IT to create and deploy policies that take data out of silos and put it into the hands of users. You want everyone to be able to access the data they need, but you also don’t want them to do the analysis that leads to flawed business decisions. To achieve this, consider implementing a data governance plan that weeds out inefficiencies and boosts accountability within and between business teams.
The organizations that are on the path of data-driven culture would need to decentralize the data systems, with respective roles, and teams holding the ownership on sharing of the data. This will in turn lead to the democratization of data with appropriate policies, procedures, and security applied. The policies should define the business goal, the goal targets, and how to evolve the practices. Based on this, advanced technologies need to be adopted, practiced, and implemented. Proper training should be imparted for the employees on these new technologies as well.
Banking and Finance holds a massive amount of data which is built over time. Data is crucial in this sector owing to huge customer interactions and compliance requirements. The data in finance is critical not only from a utilization point of view but also from a security point of view. Making the right data available to the right person is a critical function as per the Data Democratization process. On the flip side, ensuring the required security for sensitive data is also equally important for Data Democratization. While data can help in credit scoring, loan risk management, discovering customer portfolios, it also helps in fraud detection, AML/BSA checks, financial reporting, record keeping, and IoT enabled security checks. The role of data democracy, therefore, becomes even more significant in Finance.
There is a huge amount of data generated and logged in different medical centers. One major flaw when it comes to Electronic Health Records and other health data is that going through all the different portals and gatekeepers can feel overwhelmed. As the growth and spread of data have generated more information than one could analyze, breakthroughs in artificial intelligence (AI) have helped overcome this challenge. Medical experts are developing algorithms to analyze huge quantities of data and extract insights. These algorithms get more efficient with an increase of data, which could improve predictive analysis, enable greater personalization and easy access to enhanced care.
The democratization of data not only helps in enhancing customer experience at a broader level but also acts as a catalyst in the sub-functions of retail as a sector. By making the right data available to the right team, data democracy can play a critical role in adept decision-making based on facts and insights. It can help in market segmentation, customer profiling for having better-personalized promotions, loyalty management, and improved sales strategy. IoT and insights- driven Data Democratization can also improvise supply chain management and operational back-office efficiency.
Data Democratization is changing the way data is consumed, analyzed, and applied in an organization. It makes data available to more people which they can analyze and get insights with the help of software applications. Organizations that are ready to embark on the journey of Data Democratization needs the support of the people, technology, and processes guided by the right strategies and implementation plans. The challenges with Data Democratization lie in sharing the data without breaking the legal and privacy policies of an individual or organization. The way to go about it is to draw the front lines, agree all the way from individuals through the Governments adhering to the well laid broad policies. | <urn:uuid:9d79607b-32ad-42aa-bcce-053d3587e207> | CC-MAIN-2024-38 | https://www.filecloud.com/blog/2021/01/data-democratization-why-fostering-a-better-data-culture-is-important/ | 2024-09-10T17:00:08Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651303.70/warc/CC-MAIN-20240910161250-20240910191250-00814.warc.gz | en | 0.938998 | 1,692 | 2.609375 | 3 |
In today’s interconnected digital world, protecting information and systems is paramount. Information technology (IT) security policies serve as the keystone for safeguarding organisations against millions of cyber threats. They provide a comprehensive framework that outlines the rules, guidelines, and procedures necessary to ensure the confidentiality, integrity, and availability (CIA triad) of sensitive data.
This blog will discuss the importance, best practices and benefits of IT security policy.
What is an IT Security Policy?
IT Security Policy identifies the rules and procedures for all individuals accessing and using IT assets and resources of an organisation. The policy includes acceptable and unacceptable actions, access controls, and the potential consequences for breaking the rules.
While implementing IT security, business goals, information security policy, and risk management strategy of an organisation should all be considered. By describing acceptable use and access controls, an IT security defines a corporate digital attack surface and acceptable risk level. This policy may also provide the security standards for incident response by specifying how users can be monitored and what measures can be taken if the policy is violated.
Why is the IT security policy important?
IT security guidelines are indispensable for organisations of all sizes and industries. They offer lots of benefits:
Reducing the risk of cyber attacks
By establishing clear guidelines for acceptable use, access control, and data protection, organisations can minimise the likelihood of successful cyber attacks.
Meet regulatory and compliance requirements
Many regulations, such as the General Data Protection Regulation (GDPR), the Health Insurance Portability and Accountability Act (HIPAA) and the International Organisation for Standardisation (ISO), require organisations to implement robust IT security services. Developing these strategies is critical for achieving and maintaining regulatory compliance.
Improving incident response
In the event of a data breach or other security incident, a timely and accurate response is essential. Since, well-defined policies provide a structured approach to incident response, enabling organisations to quickly detect, contain, and remediate security breaches.
Users should understand what they can and cannot do on the company’s IT systems. An IT security policy will establish guidelines for permissible use and penalties for noncompliance.
Continuity of business
A cyberattack or other disruptive incident reduces productivity and costs the organisation money. IT security rules serve to reduce the likelihood of these situations and to address them effectively if they do occur.
Why do your organisation need an IT security policy?
The importance of IT security rules cannot be emphasised. Organisations require it because it clearly defines everyone’s responsibility for the protection of specific procedures and resources. It acts as a central document that anyone can use as a cybersecurity compass to provide guidance. Furthermore, the policy’s acceptance and endorsement by the company’s management demonstrates a high-level commitment to the security of the organisation’s IT infrastructure. In this approach, the security policy may function as both a technical reference point and a cultural object, providing physical evidence of the organisation’s commitment to cybersecurity.
What are the types of security policies?
The three types of IT security measures. They are listed below:
1. Program or organisational policy
This policy focuses on developing a company-wide blueprint that sets policies for all the organisation’s digital infrastructure.
2. Issue-specific policy
It is intended to address a specific issue, such as who has the authority to change the arrangement of an organisation’s workforce.
3. System-specific policy
It seeks to safeguard a specific system, such as the backend of a company’s website, by ensuring that only permitted users have access to it.
What are some best practices for IT security measures?
Some of the most effective practices for IT security policies are discussed below:
Use the COBIT framework
The Control Objectives for Information and Related Technologies (COBIT) framework is intended to help manage, implement, and enhance IT systems and technologies. An effective IT security strategy employs various principles, including end-to-end enterprise coverage and the use of integrated frameworks.
Have a strict password management policy
Passwords are typically required to access critical systems; thus, controlling them should be a top responsibility. Effective password management is forcing everyone to use unique, strong passwords and demonstrating how to change them safely when necessary.
Have an acceptable user policy
An acceptable user policy outlines the right method to use computers, the Internet, social media, email servers, and sensitive data. It is a great practice to never assume that people understand how to access and use data. By integrating essential instructions in your IT security policy, you provide everyone with a single source of truth to turn to.
Make a regular backup policy
A well-executed backup policy can help your business remain resilient. Many businesses adhere to the “3-2-1 rule,” which states that three copies of data should always be kept, two on various types of backup media, and one off-premises for disaster recovery.
- Top 11 Must-Have Elements in Your Information Security Policy
- Top 10 Information security policies for every business should have
What are the benefits of having an IT Security Policy?
There are many benefits of having an IT security policy for your organisation. Some of the top benefits are listed below:
Improve data protection
Creating an effective security strategy will automatically result in a security process that protects the IT environment against cyberattacks. Although some may view compliance as the primary motivator for written rules, the process of developing the policy requires security teams to review systems more thoroughly and address risks that may be overlooked in day-to-day operations.
Despite the IT Team’s best efforts, consumers will continue to click on phishing links, zero-day vulnerabilities may be identified, and organisational resource limits may force some vulnerabilities to remain exposed. Although compliance with security regulations, the business can still face damages.
In certain circumstances, executives target and blame IT or security personnel for an event. An IT or security team that can verify compliance with an executive-approved security policy demonstrates that all reasonable steps were made to prevent potential data breaches or other security threats. This policy can safeguard employees from unfair treatment and help them keep their jobs following a breach or other security disaster.
Smooth communication with executives and board members
Effective security policies need reports that can be shared with non-technical executives to build trust in IT and security staff. Policies simplify technical information to numerical reports and simple metrics that non-technical executives can comprehend and use to assess the status of security processes.
Clear reports allow for smooth communication with executives and the board of directors of an organisation, which helps to establish trust in the organisation’s security posture. Such reports not only illustrate that the business prioritises information security, but they also promote confidence, which can lead to increased support for extra resources.
Protection of Litigation
In the case of a breach or successful cyber security attack, government agencies or stakeholders may seek legal action against the organisation. Fortunately, legal criteria simply require “reasonable efforts,” which may be substantiated by documentation from an effective security strategy and reports demonstrating how the policies have been applied.
Organisations without regular reporting and processes will have to hustle to find out what documentation is needed to back previous efforts and then hope that they still have the archival logs or other data to construct that documentation. Organisations with formal record-keeping and reporting will already have a major amount of their evidence ready to present with no effort or disruption to business operations.
Regulatory compliance change
An effective security protocols should reflect the organisation’s compliance requirements. Auditors always request written policies to help them understand the organisation’s objectives and the type of proof they might expect to get.
Fulfilling a written policy that has already been aligned with a compliance framework makes it easier for the organisation to meet regulatory requirements. The organisation’s regular internal reports will automatically give evidence of compliance, with no additional effort or actions required.
Many organisations tend to view formal paperwork as a burden, but effective IT security policies ensure the protection and resilience of organisations. By providing a comprehensive framework for managing cyber security risks, organisations can safeguard their sensitive data, maintain business continuity, and enhance their overall reputation. Implementing and maintaining robust IT security protocols is an investment in the future of any organisation, providing tangible benefits that far outweigh the costs. | <urn:uuid:b22743cc-000a-4e55-8094-7793dddb07e0> | CC-MAIN-2024-38 | https://nswits.com.au/it-security-policies/ | 2024-09-11T22:18:17Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651405.61/warc/CC-MAIN-20240911215612-20240912005612-00714.warc.gz | en | 0.919569 | 1,707 | 2.96875 | 3 |
Protein aggregation – in which misfolded proteins clump together to form large fibrils – has been implicated in many diseases including Alzheimer’s, Parkinson’s, and type II diabetes.
While the exact role these fibrils play in diseases isn’t fully understood, many of the current treatments for diseases like Alzheimer’s and Parkinson’s target the aggregation process.
However, finding the right treatment protocols for these drugs, which can be toxic in large doses, is challenging.
Recently, researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) developed a model to better understand how drugs inhibit the growth of protein fibrils, offering a guide to develop more effective strategies to target protein aggregation diseases.
The researchers found that different drugs target different stages of protein aggregation and the timing of their administration plays a critical role in inhibiting fibril growth.
“Our research highlights the importance of understanding the relationship between the chemical kinetics of protein misfolding, the mechanisms by which drugs inhibit protein aggregation, and the timing of their administration,” said L Mahadevan, the Lola England de Valpine Professor of Applied Mathematics, of Organismic and Evolutionary Biology, and of Physics at Harvard University.
“This understanding could have important implications for intervention protocols to prevent pathological protein aggregation.”
Protein aggregation involves a number of steps, beginning with what’s called primary nucleation, in which the misfolded proteins join together to form a fibril, which then elongates.
Once a critical number of fibrils is formed, aggregation accelerates due to a process known as secondary nucleation, leading to exponential growth.
The first step associated with the formation of the fibrils is very slow, typically taking several decades, which could explain why Alzheimer’s often affects people in their old age.
However, once those first fibrils are formed, the disease can progress very rapidly.
Using mathematical methods from control theory, combined with the physics of protein aggregation, the researchers made theoretical predictions on how and when to intervene using drugs.
To test their results, the researchers looked at previously published data on the efficacy of drugs in a model organism, the round worm, C. elegans, where one can trigger the formation of Amyloid b, a misfolded protein associated with Alzheimer’s disease.
Its treatment is carried out using two compounds that inhibit the formation of Amyloid b: Bexarotene and DesAb29-35.
The researchers found that drug efficacy depends on whether the compound inhibits primary nucleation or secondary nucleation.
Bexarotene, for example, selectively inhibits the primary nucleation that happens early in the disease while DesAb29-36 inhibits secondary nucleation that happens later.
In the absence of drugs, Amyloid-b aggregation causes paralysis in the worms.
When Bexarotene was given at the onset of the disease in the larval stages, published data shows that there was a significant recovery of the worm’s mobility.
The data also showed that DesAb29-36 was more effective when administered later in the disease’s progression.
“By combining well-known concepts from two different fields, the kinetics of protein aggregation, and optimal control theory, we linked molecular-scale phenomena to macroscale strategies with relevance for a real, practical problem,” said Mahadevan.
“Our approach, which draws on a detailed understanding of the aggregation process and uses this understanding to design rationally potential strategies, is unique,” said Thomas C. T. Michaels, a postdoctoral fellow at Harvard and co-first author of the study along with Christoph Weber, who was a postdoctoral fellow at Harvard and is now a junior group leader at the Max Planck Institute for Complex Physical Systems in Dresden, Germany.
“It will allow people to test the efficacy of different compounds against aggregation under optimal conditions at the drug discovery and drug screening level.
From these optimal conditions, one could then extrapolate optimal conditions for a trial. So, in this sense, our work could help seed potential trials.”
The research was published in the Proceedings of the National Academy of Sciences.
The pathogenesis of these devastating diseases is closely associated with aberrant protein aggregation (Chiti and Dobson, 2006).
In the progression of amyloid aggregation, soluble proteins undergo a series of conformational changes and self-assemble into insoluble amyloid fibrils (Riek and Eisenberg, 2016).
Various strategies have been exploited to interfere with the process of amyloid aggregation by targeting different conformational species, including stabilizing monomers by antibodies (Ladiwala et al., 2012), redirecting monomers to nontoxic off-pathway oligomers by polyphenolic compounds (Ehrnhoefer et al., 2008), accelerating mature fibril formation by fibril binders (Bieschke et al., 2012; Jiang et al., 2013), inhibiting fibril growing by peptide blockers (Seidler et al., 2018), and disrupting amyloid assembly by nanomaterials (Hamley, 2012; Huang et al., 2014; Lee et al., 2014; Li et al., 2018; Han and He, 2018). Many of these strategies show promising inhibitory effects against toxic amyloid aggregation (Härd and Lendel, 2012; Arosio et al., 2014), but so far none has led to clinical drugs because of unsettled issues such as target selectivity, side effects, membrane permeability and penetration of the blood-brain barrier.
Amyloid β (Aβ) has long been targeted for drug development and therapeutic treatment of Alzheimer’s disease (Caputo and Salama, 1989; Haass and Selkoe, 2007; Sevigny et al., 2016). In addition to the common difficulties in targeting amyloid proteins, Aβ is especially challenging since it contains multiple species with various lengths generated by γ-secretases (Acx et al., 2014; Kummer and Heneka, 2014; Szaruga et al., 2017). Many studies have shown that Aβ42rather than Aβ40 is more prone to form toxic aggregates, and the ratio of Aβ42/Aβ40 is better correlated with the pathology rather than the amount of each individual Aβ species (Lewczuk et al., 2004; Jan et al., 2008; Kuperstein et al., 2010). However, selective inhibition of Aβ42 is very difficult because it is only two residues longer than Aβ40 at the C-terminus.
In this work, we targeted two key amyloid-forming segments of Aβ42 (16KLVFFA21 and 37GGVVIA42) based on the cryo-EM structure of Aβ42 fibrils reported recently (Gremer et al., 2017).
The designed sequences showed inhibitory effect to Aβ42 fibril formation. We further utilized a macrocyclic β-sheet mimic scaffold (Zheng et al., 2011; Cheng et al., 2012, 2013) to constrain the designed peptide inhibitors in β-conformation, which significantly enhanced the inhibitory effect on Aβ42aggregation. Furthermore, we show that the peptide inhibitor designed to target the C-terminus of Aβ42 can selectively inhibit Aβ42 aggregation, but not to that of Aβ40 or other amyloid proteins. Our work shed light on the application of structure-based rational design combined with chemical modification in the development of therapeutics for Alzheimer’s disease and other amyloid-related diseases.
More information: Thomas C. T. Michaels et al. Optimal control strategies for inhibition of protein aggregation, Proceedings of the National Academy of Sciences (2019). DOI: 10.1073/pnas.1904090116
Journal information: Proceedings of the National Academy of Sciences
Provided by Harvard University | <urn:uuid:4965e651-36d0-42d5-885d-afdc48ff7c61> | CC-MAIN-2024-38 | https://debuglies.com/2019/08/16/alzheimer-parkinson-researchers-developed-a-model-to-better-understand-how-drugs-inhibit-the-growth-of-protein-fibrils/ | 2024-09-15T16:34:42Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651632.84/warc/CC-MAIN-20240915152239-20240915182239-00414.warc.gz | en | 0.929075 | 1,671 | 3.453125 | 3 |
Wireless warning for businesses
Businesses who offer free WiFi to customers or visitors are being warned to take security seriously or risk being infected with a new breed of computer virus.
Researchers at Liverpool University this week revealed (FEB28) they have developed a virus which spreads “like a common cold” between wireless networks, and experts at a Shropshire computer firm say businesses need to be aware of the dangers of leaving their WiFi open to anyone.
Chris Pallett, of Bespoke Computing, in Telford, said as well as businesses which offered WiFi, the people accessing it also needed to be careful.
He said: “The research by Liverpool University is very interesting because it poses questions that a lot of people will not have thought about before.
“People merrily log on to free WiFi in coffee shops or libraries, but it’s worth stopping and thinking about what activities you carry out on publicly-available WiFi.
“What this new research shows is that if you are using a wireless internet connection that doesn’t use a password, you should really think twice before using any websites that contain personal information.
“Browsing news websites is ok, but anything that you use a password for – including social media websites – is probably a bad idea.”
Mr Pallett said the dangers were even greater for businesses who allowed visitors to access their wireless networks.
“The main problem arises when businesses allow visitors to use the same network that’s being used by the business themselves,” he said.
“You don’t know anything about the computer your visitor is using, or what hidden viruses or other bad stuff might be on it, and yet you happily let them plug straight into your network.
“Viruses are out there now that can compromise the wireless network, and then eavesdrop on what you’re doing to learn passwords and banking information.
“Any businesses that do offer public WiFi should really seek professional advice to ensure they have the right security in place.
“And if you are out and about and find yourself logging on to an unsecured wireless network, just think very carefully about what websites you access.”
For more information about computer security, visit www.bespokecomputing.com or call 01952 303404. | <urn:uuid:47857e3e-fd83-4b69-8cc5-ebe98096ec46> | CC-MAIN-2024-38 | https://bespokecomputing.com/wireless-warning-businesses/ | 2024-09-16T22:07:25Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651714.51/warc/CC-MAIN-20240916212424-20240917002424-00314.warc.gz | en | 0.965572 | 487 | 2.5625 | 3 |
WHAT IS ERGONOMICS ANYWAY?
Ergonomics is the science of studying how efficient people are in their work environment, as well as detailing what could be done to make them more productive throughout the day. In simple terms, an ergonomic study is conducted to determine if an employee is working to their full efficiency with the office supplied to them currently. If the study concludes that productivity would increase and the employee would benefit from a change in supplies such as a different chair, desk, mouse, monitor, etc., the needed changes are made to ensure the employee doesn’t sustain any injury and is able to work at full productivity level from then on.
Unfortunately, in many workplaces ergonomics is a little thought about the subject, resulting in employees developing long-term issues such as back pain, carpal tunnel syndrome, neck pain, vision problems, etc. The benefits of working in an ergonomic environment are endless, but some of the major benefits are attractive to employers as well as employees, such as reducing costs, increasing productivity, increasing employee morale and improves the quality of work.
The more people working in an ergonomic environment will result in less and fewer risk factors and in turn lower the number of workers compensation claims a business faces. Providing employees with a comfortable workspace reduces the risk of them obtaining injuries during their career, and gives them a sense of importance within the company to know they are being taken care of and their best interests are being looked out for by their employer.
Statistics show that many adults spend up to 70% of their waking hours sitting down and have little to no physical activity in their daily lives, the introduction of the computer led to a decline in physical activity due to the fact that many adults hold office jobs which require them to sit at a desk to work. Leading ergonomic furniture supplier, ISE Group offers a full line of ergonomic office furniture allowing consumers to pick which pieces and styles they would like and customize their office to fit their needs, as well as any accessories that may come in handy such as laptop and tablet drawers.
It’s not all on the shoulders of the employer to put employees an ergonomic environment, there are small things a worker can do each day to improve their work environment, take the strain off their back and improve their productivity throughout the day. Below are five (5) simple steps you can do to improve your work environment:
- Find your natural posture:
- Feet on the floor in front of you, hands on your lap, shoulders relaxed and leaned back slightly. This should feel comfortable.
- Mouse and keyboard placement:
- Height: 2 inches from your thighs
- Tilt: the keyboard down and away from you
- Position: keyboard and mouse shoulder distance apart
- Position your screen(s):
- Distance: sit back and extend your arm, your fingers should brush the monitor
- Height: close your eyes, open them, your site should land on the address bar of your web browser when at the correct height
- Adjust your chair:
- Shape: think back to posture
- Length: when sitting comfortably there should be about a fist size of space between the chair and your leg
- Height: your feet should be flat on the floor when sitting
- Get up and move!
- An ergonomic workspace can only go so far, physical activity is still a crucial part of maintaining a healthy lifestyle. Don’t forget to get up and stretch during your work day! | <urn:uuid:890dcb29-1605-4a46-8b88-91d49a74ce70> | CC-MAIN-2024-38 | https://pinnacleoffice.ca/importance-ergonomics-workplace/ | 2024-09-16T23:21:39Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651714.51/warc/CC-MAIN-20240916212424-20240917002424-00314.warc.gz | en | 0.970935 | 714 | 3 | 3 |
Service Accounts vs User Accounts
Service accounts and user accounts are prime targets for cyberattacks, and every organization has a combination of both types of accounts. Once one of these accounts is compromised, a cyberattacker can move laterally, infiltrate the business, and access critical data. To best protect against cyberattacks, it’s important to understand the basics of user accounts and service accounts—they are not the same thing!
A service account is a non-human privileged account usually located within operating systems and used to run applications or services. Service accounts are not associated with any human identity. A user account is an account tied to a human identity. A standard user account represents a human identity and typically has an associated password to prevent authorized access.
Let’s go over the fundamentals of service accounts and user accounts.
What is a service account?
A service account, sometimes referred to as a system account, is a non-human privileged account usually located within operating systems and used to run applications or services. As a type of privileged account, service accounts have associated privileges, including local system privileges. Service accounts require elevated privileges to function, connect to resources on the network, and access sensitive data and applications. Cybercriminals target service accounts because they have access to business-critical IT infrastructure and data.
Service account nomenclature
In Windows: Service accounts are referred to as:
- Local user account
- Domain user account
In Unix & Linux: Service accounts are referred to as:
In the cloud: Service accounts are known as:
- Cloud service account
- Cloud computer service accounts
- Virtual service accounts
Service account risks
Service accounts pose an interesting yet troubling risk to organizations. Service accounts are not associated with any human identity and may not be directly managed by a human. On top of this, service accounts’ privileges and functions make them critical to IT infrastructure and business applications. It's no exaggeration to say that service accounts are digital phantoms, as organizations usually do not keep records on existing service accounts.
It’s no surprise business leaders are terrified of directly managing their service accounts, lest something goes wrong and a business function is crippled—or worse! Changing service account credentials can have a chain reaction on dependencies. It’s challenging for organizations to deal with their service accounts when there are no records detailing what the accounts do and what they affect.
Learn more about managing service accounts: Back to Basics: Service Account Management 101
Service accounts go unchecked because:
- The person who created the service account left and did not give anyone any information about the service account
- The service account’s original system no longer exists but the account still remains, uncontrolled
- A service account was originally created for a temporary reason like a program install, but the account remains in place after the task is complete
- Cloud-based service accounts used in development or DevOps are hard to manage, with microservices and containers getting spun up with privileges and burned down quickly without proper cleanup
- Containers used in DevOps often hardcode or reuse credentials
What is a user account?
User accounts are the accounts you are most likely familiar with. Simply put, a user account is an account tied to a human identity. Securing user accounts is critical in safeguarding an organization’s systems and data. Let’s take a closer look at the two primary kinds of IT user accounts: standard and privileged.
Standard user accounts:
This is the user account you are most likely familiar with—a standard user account represents a human identity and typically has an associated password to prevent authorized access. Active Directory user accounts are an example of standard user accounts. You probably have a number of these accounts yourself both at work and at home. In a typical organization, most employees have standard user accounts as they don’t require special data or elevated access rights.
Privileged user accounts:
While securing standard user accounts is important, privileged user accounts have access to sensitive information and elevated privileges. Organizations can have three times more privileged user accounts than physical employees, which requires a balancing act between security and productivity. Privileged user accounts provide administrative access to enterprise systems, according to the permissions levels.
Privileged user accounts are typically used by system administrators, as they manage particular systems, environments, or other IT infrastructure. Privileged users require elevated privileges to do the following:
While most non-IT employees only have standard user accounts to do their jobs, IT staff can have multiple accounts. An IT administrator could have multiple standard user accounts and privileged accounts, allowing them to access different systems and perform different tasks.
Learn more about privileged accounts: The 7 Deadly Privileged Accounts You MUST Discover, Manage, and Secure | <urn:uuid:0db8e4b4-d536-4391-94f9-12551777d1a1> | CC-MAIN-2024-38 | https://delinea.com/blog/service-accounts-vs-user-accounts | 2024-09-18T04:19:18Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651836.76/warc/CC-MAIN-20240918032902-20240918062902-00214.warc.gz | en | 0.92364 | 969 | 2.9375 | 3 |
Misallocation of Vaccines Leads to 75,000 Additional US Deaths…At Least
As we prepare to roll out the vaccine across the United States, we are faced with an unparalleled opportunity. But there is also a danger of squandering the opportunity. States will distinguish themselves by the speed with which they get their populations to herd immunity, and by the degree to which they minimize the number of people who die during the period when the vaccine is provided.
The number of vaccines available per month is outside of your control. Therefore, as a decision maker, the lever for affecting outcomes lies in your decisions about what order (and which of) your citizens get the vaccine. Certain populations should clearly be at the front of the line (e.g. health care workers without SARS COV2 antibodies). But many questions remain: does the benefit of knowing who has antibodies justify the cost of an antibody test? Should you prioritize people more likely to spread COVID (front-line workers) or people more vulnerable to adverse outcomes (comorbid/elderly).
Vaccination Strategies for Minimizing Deaths
How vaccines are distributed can make a huge difference in the duration of economic effects, hospitalizations, and even deaths. Our analysis shows that nationwide an optimized distribution strategy is 90% likely to avoid more than 75,000 deaths over simpler distribution strategies. The same analysis shows that even for a medium sized state, improved vaccination strategies could reduce the duration of the pandemic by two months and could reduce the number of deaths due to COVID infection by more than 1,000 people.
This is accomplished by how we make tradeoffs among three guiding principles:
Don’t vaccinate people who have already had the infection.
Vaccinate people who are more likely to infect others.
Vaccinate vulnerable people (elderly and people with comorbidities are at higher risk of dying.)
There is extensive research indicating likelihood of reinfection is low and that infection with SARS-COV2 confers long term immunity. One might therefore think the lowest hanging fruit would be to prioritize immunizations for the population that does not have antibodies or memory T cells for SARS-COV2. Unfortunately, only 10-20% of those who have had the illness are “visible” as indicated by a confirmed COVID test. The remaining 80-90% of the already immune population is indistinguishable from the people who are not immune. So which tool should you use to reveal who is in this “not immune” population – antibody tests or statistical analysis? The answer to this question is pivotal and will probably vary from county to county and city to city. What is your current strategy to answer this question?
The other pivotal question is whether a strategy will save more lives by focusing on vaccinating socially active or by focusing on vaccinating vulnerable populations? It is relatively straightforward to determine who is in the most vulnerable population (elderly and people with comorbidities) but knowing which broad strategy saves more lives is not as clear.
The charts below show how the virus may spread with an optimized strategy vs. a less effective strategy. In the less effective strategy, some vaccines are wasted on individuals who already have antibodies. In addition, the sub-optimal strategy would not attempt to identify recipients for the vaccine as a function of how likely they are to spread the virus to others.
The better strategy minimizes the use of vaccines on people who do not need them and focuses on individuals more likely to spread the virus to others. Because this strategy slows down the rate of spread, it also allows more time to vaccinate the population not already immune.
The costs and benefits are also not limited to directly saving lives. Since the optimal strategy would also end community transmission more quickly, there would be economic benefits as well. For a medium sized state like Wisconsin, economic benefits would run in the billions of dollars (think opening a convention center 45 days before your neighboring states are able to do so). For the same state, the cost of getting the order wrong is a thousand or more (preventable) COVID deaths and a society shut down for 1 month+ longer than need be. As a decision maker for vaccine distribution, you get to decide whether to be seen as the hero…or the villain. But to be the hero, you need the right tools.
How Hubbard Decision Research Can Help
For 20+ years, Hubbard Decision Research has been “spreading the gospel” about probabilistic methods across many areas of industry and government. Probabilistic forecasting has been shown to yield the best results across all the industries where it has been measured and studied. We have the experience and expertise to bring these methods into any organization and any challenge. This year, HDR has also built a reputation for accurate forecasting and predictions with COVID related issues for businesses and municipalities, as well as national forecasting. We have presented webinars for the GFOA, and worked on COVID related operational risk projects for school districts, insurance and reinsurance companies, and a variety of other industries. Our Applied Information Economics methodology has been applied across industries, government, and the military and focuses on improving decisions through probabilistic modeling.
Turn your vaccine distribution solution into an optimized quantitative solution. HDR offers a 20% discount on our rates for governments and nonprofits. Contact us to learn more.
Watch the interview with BSW that was previously aired on Tuesday, August 4th at 6:30pm CDT. Doug talks about his ground shaking exposé on the failure of popular cyber risk management methods, How To Measure Anything in Cybersecurity. This particular book is a Palo Alto Networks Cybersecurity Canon Award winner and the first of a series of spinoffs from his successful first book, How To Measure Anything: Finding the Value of “Intangibles” in Business. It is cited by the Center for Internet Security RAM Version 1.0 as a “thorough and practical guidance on using probability analysis for cybersecurity decision making.”
In the interview, Doug talks about his life’s work which is about building better “business impact” decision makers in any department of any sized organization and in any industry. He has sold over 150,000 copies of four different books in eight different languages. He offers powerful online training and consulting services revolving around his quantitative methodology, Applied Information Economics (AIE), for his global client base of Fortune 500 companies, federal and state governments, the United States military, and major non profits including the United Nations.
For this interview, Doug is particularly pleased with his shirt choice! Enjoy!
We heard you loud and clear and are happy to accommodate! We are extending our promotional offer on the NEW AIE Analyst Series through Friday, August 28th. Receive a dollar-for-dollar discount of all your previous webinar expenditures with HDR, up to 75% off the price of the new AIE Analyst Series, per person – if you book by Friday, August 28th.
The new AIE Analyst Series regular price is $1,950. This means you could take the entire series for as little as $487.50, if you have at least $1,462.50 in previous webinar expenditures. If you have more than $1,462.50 is previous webinar expenditures, invite a friend or colleague and (based on your remaining spend) they could also receive up to 75% off the series as well. This offer includes recordings of all courses in the series and access to online materials, even if you are unable to attend some or all of the live workshops.
Regular Prices on new courses included in the AIE Analyst Series:
Calibrated Probability Assessments – $580 (Pandemic Price: $325)
Creating Simulations in Excel: Basic – $375 (Pandemic Price: $310)
Creating Simulations in Excel: Intermediate – $375 (Pandemic Price: $310)
One Elective Course – $150 (Pandemic Price: $95)
(For Elective Course, Choose from: HTMA in Project Mgt, HTMA in Innovation, HTMA in Cybersecurity Risk, Failure of Risk Management)
Even courses that were part of the previous AIE Analyst series, such as Decisions Under Uncertainty and Empirical Measurements, have new methods and new spreadsheet tools. And now the new Computer Based Training (CBT) components mean that you can review hours of content online at your own pace and take the review quizzes online.
If you have any questions or if you are interested to take advantage of this offer, please contact us at firstname.lastname@example.org to verify your previous webinar expenditures and to receive your unique promo code in order to claim your personalized discount at checkout.
HDR is honored to partner with QA. QA is the UK’s biggest training provider of virtual, online and classroom training in technology, project management and leadership. HDR will provide a “taster” event on quantifying IT/Cyber Risk on August 19th and August 20th.
Day 1: In this first session, we identify the widespread lack of quantitative IT risk analysis in (UK) organizations and the dangers posed by relying on risk matrices. Doug will then explain how our quantitative approach of AIE (Applied Information Economics) can help you make better, cost-effective risk analysis, which measurably reduce uncertainty and risk.
Date: Wednesday 19 August 2020
Time: 6:30am – 7:30am CDT
Day 2: Analyzing risk always involves a degree of subjectivity and associated uncertainty. In this session we focus how to estimate that uncertainty and how to reduce it – two essential activities in quantitative risk analysis. We do this by reviewing the results of the exercise given at the end of the previous session. We identify some of the obstacles to estimating uncertainty and show how you can be calibrated to overcome these obstacles and make better estimates.
Doug Hubbard is the CEO of Hubbard Decision Research, founded in 1998. It provides consultancy and training in quantitative methods to support decision making. He is the creator of AIE (Applied Information Economics) whose principles underpin this quantitative approach. These methods have been adopted by businesses across many sectors and by government organizations.
Doug started his career as a management consultant at Coopers and Lybrand after gaining his MBA in 1988. As well as providing management consultancy, he is a sought-after speaker and the author of a number of books, including The Failure of Risk Management: Why It’s Broken and How to Fix It, How to Measure Anything: Finding the Value of “Intangibles” in Business and How to MeasureAnything in Cybersecurity Risk. The first two books are now set texts for exams for membership of the Society of Actuaries. His articles and research have also been published in a number of periodicals and learned journals, including Nature, The IBM R&D Journal, and The American Statistician.
Fred Hickling is a cybersecurity consultant and a QA associate trainer. Over the years, he has become aware of how little quantitative IT risk assessment is done in the UK. Introduced to Doug Hubbard’s work last year, he appreciated the extent to which this lack was a problem, as well as a way to fix it. He introduces this event – a step in bringing the benefits quantitative risk assessment to the attention the IT professionals in the UK.
Fred is a director of Networks and Systems Ltd, as well as being a non-executive director of another company not in the IT sector. He has numerous industry certifications, including CISSP, CISM, CISMP and CCISO, as well as several physics degrees.
The Failure of Risk Management 2E is yet another in a string of very popular publications written by Doug Hubbard. Doug’s books are used as textbooks in dozens of prestigious university courses at the graduate level. His first book is required reading for the Society of Actuaries exam prep and one of the all-time, best-selling books in business math. His fourth book is a Palo Alto Networks Cybersecurity Canon award winner.
How to Measure Anything: Finding the Value of Intangibles in Business (one of the all-time, best-selling books in business math)
The Failure of Risk Management: Why It’s Broken and How to Fix It (1E/2E)
Pulse: The New Science of Harnessing Internet Buzz to Track Threats and Opportunities
How to Measure Anything in Cybersecurity Risk (co-authored with Richard Seiersen & won the Palo Alto Networks Cybersecurity Canon award)
Take a look at this short video to see what they’re saying about The Failure of Risk Management 2E. Be sure to use the pause button to catch all the details and learn more interesting facts about Doug’s writing below the video.
Doug is also published in the prestigious science journal, Nature, in addition to publications as varied as The American Statistician, CIO Magazine, Information Week, DBMS Magazine, Architecture Boston, OR/MS Today, The IBM Journal of Research & Development and Analytics Magazine.
Doug Hubbard and his team’s work is mentioned in high regard often in articles, by peers and clients alike. Perhaps it’s because HDR utilizes native Excel to create custom automation models for each client’s specific needs – without any limitations of an existing software solution and without annual licensing or subscription fees that often come along with traditional software solutions.
This week an article in InfoSec 2020 was brought to our attention where Doug is mentioned specifically by the Union Pacific CISO, Rick Holmes. Union Pacific is the second largest railroad system in the United States and is one of the largest transportation companies in the world.
A portion of the article reads, “UP assesses and analyzes risk from four different perspectives – those of an insurance company or actuarial expert, a compliance auditor, a legal advisor and the mind of an attacker. Key to the process, however, is the risk probability modeling that the cyber risk assessment team developed in order to statistically convey to upper management the likelihood of a cyber event occurring and the calculable monetary loss that would result.
For this, UP recruited management consultant and author Douglas Hubbard, who helped devise a framework that analyzed and categorized UP’s computing environment into various asset classes.” | <urn:uuid:a97d631e-6501-48d6-a98e-421007a47d19> | CC-MAIN-2024-38 | https://hubbardresearch.com/category/news/ | 2024-09-19T11:32:56Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652028.28/warc/CC-MAIN-20240919093719-20240919123719-00114.warc.gz | en | 0.956126 | 2,965 | 2.875 | 3 |
Cards, and most commonly smart cards – with an embedded electronic chip, are increasingly accepted as the credential of choice for securely controlling physical access. Smart cards are used to authenticate individuals, to determine the appropriate level of access, and physically admit the cardholder to a facility, most commonly using a card reader at the point of entry. Multiple access applications can be contained on a smart card, granting users access to both physical and logical resources without the need for multiple credentials. Access rights can be changed dynamically or revoked, depending on perceived threat level or if the system is in any way compromised.
Here are some examples of where data securely held on a card is particularly useful:
Protecting access to secure government facilities and infrastructure is important to protect critical functions and to counter disruptive threats in today’s World. As one of the largest employers in most societies, federal, local and district governments use physical access control and the issuance of secure ID cards to ensure that only those individuals who are permitted to be in specific facilities are granted access.
Protecting campuses and safeguarding students has become a critical operation for all educational establishments from kindergarten through to universities. Smart cards, each issued with unique keys to provide access to campuses, faculties and individual classrooms, are now a fundamental requirement for staff, visitors and increasingly for students of all ages. Cards are typically presented to an authenticating reader, linked to a physical access control system (PACS) which verifies and grants access to an individual.
Electronic data, securely written to a smart card at the point of issuance can be used to grant access not only to the front door, but also privileged access to more secure departments or assets within the building itself. A smart card is typically presented to a card reader, linked to a physical access control system (PACS) which authenticates the individual and permits access. Access rights can be revoked or amended at the touch of a button
ID cards for all event attendees can be personalised and issued at the point of entry using desktop ID card printers, smart cards and badging software. Data encoded on to an electronic chip on the card can be used to grant access via turnstiles or card readers located within the event. Access to specific areas, such as VIP lounges, or selected conferences can also be allocated at the point of issuance.
One of the most important uses of ID cards is for securing entry points to critical infrastructure facilities such as airports, hospitals and ports. Employees working on these sites are issued with a secure card, often based on centrally government standards, containing electronic data that grants access to the site and secure areas within the site itself. Access is granted through turnstiles and card readers. For the most secure areas, multi-factor authentication might be required. This would involve combining the smart card with pre-loaded secure electronic data and password or PIN associated with the card or a biometric data such as a fingerprint or iris scan, read at the point of entry.
Both employee smart ID cards, patient ID and instantly issued visitor cards can be used to grant access to secure areas within a medical facility including ward access, associated with pre-defined visiting times. Access is typically granted using a card reader linked to a physical access control system (PACS).
For more information, complete this form. | <urn:uuid:c141bcbe-8432-4999-99bf-cc6ac60d9235> | CC-MAIN-2024-38 | https://magicard.com/application/physical-access/ | 2024-09-08T13:43:15Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651002.87/warc/CC-MAIN-20240908115103-20240908145103-00214.warc.gz | en | 0.945669 | 669 | 2.9375 | 3 |
05 Mar What Is The Internet Of Things And Machine To Machine Technology?
The internet revolutionized the way we live and work on a daily basis. From the moment we get up, to right before we go to sleep, and almost everything in between – we are rarely offline. Now, a new type of internet is about to revolutionize the way our machines work for us. The Internet of Things (IoT) and Machine to Machine (M2M) technology is an emerging field where everyday objects are interconnected, creating the ability to send and receive data between objects without the need of human interaction. With such enormous potential in terms of efficiency gains, Norscan has been focusing their research efforts on coming up with solutions for a wide arrange of applications. We caught up with Yolande Cates, who’s leading Norscan’s IoT/M2M research efforts, to learn more.
What is Machine to Machine technology?
In its most basic form, Machine to Machine (M2M) Technology is simply devices sending information between each other either in wired or wireless fashion. M2M has a long history in industrial automation in things like SCADA systems. Within the M2M/IoT world, this is generally an end device, such as a sensor, that autonomously sends information to another device that allows either a more complex system to take an action or the data is made available to a user who can then use that information.
What is the Internet of Things?
The Internet of Things (IoT) is used to describe devices that gather information and then use the internet to make that information available to others. Wearable technology, remotely programmable thermostats, and smart meters are examples of relatively sophisticated consumer devices in the IoT.
What can be done with this technology?
If you can measure it, it can be monitored in the IoT. Some common types of things being monitored are temperature, humidity, location, and energy use, but its potential is seemingly endless!
In the medical field, the IoT is being used with heartbeat sensors, blood sugar monitoring, blood pressure, weight scales, and smart pill boxes; the data from each of these is pushed to the cloud and stored securely on a server. The data can then be accessed by authorized users. In many ways it is simply adding the communication layer to existing technologies. The idea is to make information flow more quickly to those who need it.
For logistics, barcode scanners, RFID tags, and smart containers can work together to provide a real-time view of where everything is. In theory this should make things like just in time (jit) delivery smoother and more efficient.
In agriculture, sensors can be used to monitor the temperature and humidity of grain bins, the level of fertilizer or fuel in a tank and even the soil composition. With this information available remotely, farmers can see trends over time and feel more secure leaving for a period of time.
In addition to monitoring, a connected device can take action based on the information it is given. In the logistics and inventory example, the system could be programmed to automatically order more stock once inventory starts running low.
What is Norscan researching and developing within this field?
Norscan has a long history in the development of sensors and in wired communication in the telecoms sector. That expertise is being applied to new sensor types in different industries. Norscan’s has historically allowed users to access their sensor data remotely over an Ethernet connection or a Plain Old Telephone Service (POTS) phone line – this is the earlier version of the IoT. Bringing that same level of availability to a browser or mobile device is the next step in the evolution of Norscan’s product line.
Why does it matter?
For businesses, the IoT leads to the possibility of improved efficiency and potential new business models. The average consumer is beginning to see more and more IoT devices – smart thermostats and fitness monitoring being the first two that come to mind.
As the IoT grows it is expected to impact people beginning in the home through home automation and throughout society. Cities are beginning to use IoT devices to dynamically adjust parking fees to maintain an availability level, for example. | <urn:uuid:fe0e906f-b70a-45bf-81bf-2f81311cf1da> | CC-MAIN-2024-38 | https://www.norscan.com/what-is-the-internet-of-things-and-machine-to-machine-technology/ | 2024-09-08T13:27:34Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651002.87/warc/CC-MAIN-20240908115103-20240908145103-00214.warc.gz | en | 0.932523 | 865 | 3.25 | 3 |
In today’s world, our data is only a click away. This makes it easy for hackers… And when you find out what they can do with your personal information, you will feel the urge to achieve real anonymity and browse the internet safely.
What are they (hackers) after?
Any sensitive, “private”, and valuable information they can use to steal from you, extort you, and practically ruin your life.
They can get this information from at least 6 different sources.
The same 6 that you will learn to hide from today.
6 Reasons to Achieve REAL Anonymity & Browse the Internet Safely
A task this big needs to be backed by equally big reasons.
If a cybercriminal decides to target you, then you can be sure they will very likely achieve their sinister goal.
You don’t want it. So, here are 6 reasons to stay anonymous online:
- Hackers may steal your identity and disguise themselves as you to dodge the guilt for their crimes.
- Hackers may steal money from you, and your relatives (friends & family).
- Hackers may open bank accounts and ask for credit cards in your name.
- Hackers are capable of using your own credit card for their own purchases.
- Hackers might file a fake tax return with your Social Security Number (SSN).
- Also, if your data is valuable enough, hackers could sell it to others on black markets.
Are these reasons good enough for you to take action?
Re-read these reasons over and over again until you are faced with the reality of it.
Now, if you haven’t decided yet… Keep reading.
Let’s see exactly how Hackers and almost anyone can violate your online privacy.
How can ANYONE Violate Your Privacy Online
You are now aware of what can happen to you.
But you still don’t know how hackers are even capable of infiltrating networks or devices.
The following are just the most common methods used:
- Password Bypassing – Hackers steal or guess passwords… especially when they’re short, predictable, and reused across different accounts. Security questions won’t prevent their access either: most times, hackers can find the answers with ease (they’re often publicly shared information). A minimal sketch of generating strong, unique passwords follows after this list.
- Phishing – Legitimate-looking emails, from “trustworthy” companies, aren’t always what they seem. Hackers have gotten better at stealing your information with fill-in forms and malicious file attachments.
- Malware – If you have been reading the MyITGuy blog for a while, you might be familiar with this term: malicious software that harms, steals, or deletes data out of your devices. There are hundreds of different types of malware (Virus, Spyware, and Ransomware are the best-known).
- Mobile Apps – Did you know that there are malicious apps inside of the Google Play Store and Apple App Store? If any gets installed on your mobile devices, then hackers have access to all the data stored on them (contacts and emails, for example).
- Network Vulnerabilities – Is your home network encrypted? Are you sure? If it isn’t, anyone could spy on you all day long.
What about public WiFi networks (hotels, restaurants, libraries, among others)? These are some of hackers’ favorite places to lurk. They could even lure you in with a fake network that steals your data once you connect to it.
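Coming back to the password point: the practical defense is a long, unique, randomly generated password or passphrase for every single account, ideally kept in a password manager. The sketch below is only an illustration (the function names and the tiny word list are made up for the example) using Python’s standard-library `secrets` module; a real passphrase should be drawn from a large word list such as the EFF list.

```python
import secrets
import string

def strong_password(length=16):
    """Return a random password built from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

def passphrase(words, count=5):
    """Return a diceware-style passphrase such as 'orbit-lemon-staple-horse-cabin'."""
    return "-".join(secrets.choice(words) for _ in range(count))

if __name__ == "__main__":
    # Placeholder word list for illustration only; use a list with thousands of words in practice.
    demo_words = ["correct", "horse", "battery", "staple", "orbit", "lemon", "cabin", "violet"]
    print(strong_password())       # different on every run
    print(passphrase(demo_words))
```

With a different, freshly generated password for each account, one leaked or guessed password no longer opens every door.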
You see, this is no joke.
Cyber Criminals carry weapons to dismantle your information anytime, anywhere.
And all we wanted was to be connected to each other for growth.
Now, all we ask is to be left alone. What is the right way to do it?
Well, you don’t have to hide from everyone… But you should definitely erase any footprint left online.
You never know who’s watching.
How can you Browse Safely through the Internet?
Here’s how you do it.
12-Steps to Achieve Real Anonymity Online & Browse the Internet Safely
Before even starting, I must say something important:
Not every step will fit everyone. But all of them (or at least most of them) will work for you if you are willing to take the action needed to stay protected.
They include step-by-step instructions to achieve real anonymity online and, in doing so, protect your most valuable data while moving around the internet.
This article is about being free to decide why, where, and to whom your data is disclosed.
Because your privacy and anonymity are at risk now that the real and online worlds have merged together.
Geotracking, Facial Recognition, Smart Devices, IoT… Being “alone” seems impossible these days… Or is it?
- Safe Texting & Messaging – You can start small, right now, by doing some changes to your routinary messaging apps. You use them for work, but mostly to communicate with your loved ones… Those who know the most from you.
Being said, you wouldn’t be leaving sensitive information on the wrong hands.
And I’m sorry for being the one who ruins it for you, but WhatsApp isn’t trustworthy enough. Their end-to-end protection is not as effective as they say it is…
Instead, you can consider other alternatives as Wire, Signal, and Threema.
They all follow rigorous standards (company and infrastructure jurisdiction in Switzerland, Germany, and Ireland)…. And they do not follow cooperation with intelligence agencies as iMessage, Skype, and Facebook Messenger does. - Secure Email – As it was previously mentioned, you use messaging apps and text to talk with friends and your family… But when we talk about work, email is most common.
You spent most of your professional life using this tool (email) and getting any information leaked could cost hundreds or even thousands to your business.
If you want to achieve real anonymity online, you can use ProtonMail, Tutanota, Hushmail, or LuxSci email providers.
First and foremost, ProtonMail it’s based in Switzerland, and it doesn’t ask for date of birth, phone number, or other personal data when registering.
Free package comes with a 500 MB storage, but there’s a 5 to 500GB upgrade.
Now, there is good and bad news around Cookies.
Good news: Session cookies (as those which Amazon uses to recognize you and improve your experience) are the most common type. They don’t collect any personal information whatsoever… First-party cookies are a long-term form of these.
Instead, the bad news is that third-party or flash cookies will keep track of your browsing history, demographics, spending habits, and more.
You can hardly disable the latest (do not only hold bigger amounts of data but in the case of zombie cookies, they can be restored back again without your consent). - Be careful about what you post online – All we wanted was to be connected. But our dream wish became a nightmare. Now, everyone has access to every angle of our life, which makes it overlooked (because it is the “new normal”).
Being said, if there’s something you don’t want everyone to find about, then the best idea would be to avoid posting about it on social media — for the entire world to see.
Take a step back and consider the whole picture of what you’re sharing: locations you are visiting now or that you visited, friends and family member’s life, sensitive personal statements, among others.
Easy to prevent? Indeed. - App permissions – Have you seen those mobile apps requests on both iOS and Android? Did you know that they ask for more than what they actually need?
Take a look back and try to remember all the permission that you conceded to App makers: Microphone access (Is it going to record everything you say?), Location (Is it going to track your location?), Rolodex or Address book (Do they need to know about who your friends are?).
Luckily for you, you can un-do or turn off all unwanted permissions. Even if they (phone and app makers) make it hard to take back, you’ll find a way on “Settings”. - Adblocker – Advertising networks have improved the way they target better and more specific ads at you… And with that, intrusiveness as a consequence.
You probably know Google and Facebook. These two platforms are the biggest players, therefore you can be sure that they track all your web movements (all while not having the account logged in… or not even having an account with either).
Yes, they’re trustable companies… But they keep some shady practices in place as well… So you better start doing something about it.
The closest/fastest solution? Installing an ad blocker. It won’t work like a miracle, but it’s better than nothing.
Chrome’s Ad-block or the Brave Browser itself can sinkhole ad network DNS requests at your local router level. - Home assistants – Do you have Google Home (and Now), Amazon Echo (and Alexa), Siri or Cortana at home?
You better start practicing your throw… Into the trash can.
IoT (Internet of Things) is clearly providing an end-point to humanity’s outsourcing and automation capabilities. But it’s also hurting other valuable parts of how we work.
Did you know that they keep using the same technology protocols they used in the 80s? By default, the security configuration is not only outdated but thin and vulnerable.
These always-on digital snoops are poisonous to privacy and anonymity, and there is no meaningful way to make them less privacy-invasive. - Password Protection – If someone opens the doors of your devices… It’s directly accessing your private life as well. Intelligent password protection seems fair now?
For this reason, I’ll be contrarian to what some “experts” say and are that: you should avoid Biometric passwords. Showing skepticism about them could save you some headaches.
Why exactly? biometric systems work with easily-falsified data (fingerprinting is remarkably easy to replicate, as they can be taken from photos, for example).
Instead, for the traditional method: passphrases and password managers.
In the first place, passphrases are long streaks of text that results in impossible or at least extremely difficult to guess (even with password cracking tools).
And password managers (as 1Password and KeePass) can make your life easier. You can save secure passphrases into an also-secured platform that holds it for you.
Can you imagine having to remember 100s of different login credentials? - Safe Browsing – The web browsers market is very competitive, and security is one of those features that they all keep working on upgrading to get an advantage.
Let’s start with Firefox and its recent updates. They’re definitely faster and easier to use… but is it more secure? Definitely. Its nonprofit organization (Mozilla Foundation) doesn’t care about making billions by selling your browsing data to advertisers.
Besides, being open-source, it lets anyone curious enough to check its source code and confirm there aren’t “sneaky features” that may compromise your anonymity.
Then, we have Chromium. The name might sound familiar to you… Well, the reason is that this open-source browser project is what Google Chrome is based on.
It has been confirmed that Chromium is safe… But not as well-polished as Firefox or Chrome itself. Why not? No automatic updates, to mention one. It means you’ll be vulnerable with an insecure version of the software if you forget and let weeks pass by without any update whatsoever.
Of course, we cannot let Microsoft Edge out of the ring (“new Internet Explorer”). Funny enough, it’s based on Chromium’s edge, and all other considered-private browsers: Sandboxing, Windows defender’s Application guard, and SmartScreen.
Finally, what’s considered the “most-private” browser of all: Tor.
This metadata-resistant software is not perfect… But it’s the best we’ve got so far.
There’s even an official Tor Browser app for Android devices (and OnionBrowser offers a non-official iOS app as well). - VPN… Or not? – If you’re a bit “techy”, then you have definitely been waiting for VPNs to be part of this list…. And here it is.
But before considering the 10th step as the definitive one, let’s see it with different eyes. Without any doubt, a VPN (Virtual Private Network) might hide you from risky encounters… Reality is, all a VPN does is move you to someone else’s server.
This is a great and legitimate reason if you’re at the local coffee shop… Or when you’re traveling (at the Airport or Hotel WiFi network). Just take into account that real anonymity is not one of these great reasons.
This is remarkably true if you’re trying to get away using a “free” VPN tool. The 3 best, premium choices available are NordVPN, CyberGhost, and ExpressVPN.
Differently to what we mentioned as safety tools (Brave and Tor browsers, for example)… A VPN will keep showing your traffic to its provider.
Meaning, if someone hacks its way and takes control of your VPN’s server, you’re even more vulnerable than you were before.
Well, your safeness and privacy are important for me… For us.
That’s why the MyITGuy has added two extra layers of protection.
Who would be benefited from them?
Mostly, high-charge business people/owners who are aware of the plethora of dangers that await silently out there.
We tend to forget or just completely ignore the fact that cybercriminals are capable of anything, to achieve their sinister goals.
If you reach this point and consider to add the 2 last steps… Is because you’re being pursued.
Let ‘s stop that. - File-Sharing – Do you tend to share important and confidential files with associates? You should definitely check this out…
Dropbox and similar Cloud-storage services are popular and easy-to-use… But unfortunately, they’re far from being secure.
In the first place, you can’t set specific viewing, sharing, or editing permissions of your files. So, you won’t even know who got in… And of course, you won’t see either who deleted or overwrites your files.
Hopefully, for us, there are better and safer alternatives…
OnionShare is one of those. Available for Windows, macOS, and Ubuntu.
Remember about the Onion/Tor project?
They are also in charge of this Tor-powered file-sharing service.
But if you aren’t fully convinced about it, there are other alternatives to consider (WebRTC and FilePizza) that equally eliminates process steps, and adds security measurements.
For the use of the software, the application starts a server with an onion service and associated address on the user’s computer. Third parties can then access this address via the Tor browser and upload or download files.
Of course, the transfer speed is fast, private, and anonymous (you won’t find data stored on a server). - New, Digital Identity – This is for the most dedicated, hardcore readers.
And it’s also for those who would like to have fun being “someone else” temporarily.
Yes. It’s possible to create your new, digital identity to enforce anonymity and safe internet browsing.
What consists of having a new identity? Mostly, having a different name, and those kinds of information differently (physical and email address, and phone number).
fakenamegenerator.com/ is a tool that perfectly solves this problem.
Just take one thing into consideration: some platforms (as Facebook) have advanced systems in place to detect and even block users that register with fake identities.
Apart from that, depending on how serious your situation is… You should definitely consider changing this new identity consistently.
With invented and constantly-changing identity data, your footprints will be untraceable. So don’t break character and die with the lie.
At this point, you should definitely be hidden from 90% of cybercriminals out there… At least from the most novice.
Of course, this is only if you have performed every step included on this list (or at least most of them).
I won’t tell you to avoid communication with friends and family members… Or to never use your real identity in the real world, back again.
But what I do recommend you, is to be aware of all the dangerous presences that wander around the internet. Only that way, you’re out and safe from predator’s eyesight.
I say No to infringing the laws yourself… But I scream YES to thinking in favor of your business assets – and most importantly, of your beloved ones.
What do you say… Want to achieve REAL anonymity once for all?
I’ll personally help you to achieve safe internet browsing! | <urn:uuid:68ae99ad-7df0-4d4a-b6a1-7274f4a9a2b5> | CC-MAIN-2024-38 | https://www.gomyitguy.com/blog-news-updates/achieve-real-anonymity-browse-internet-safely | 2024-09-11T00:50:51Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651323.1/warc/CC-MAIN-20240910224659-20240911014659-00014.warc.gz | en | 0.934358 | 3,680 | 2.5625 | 3 |
What is a Rootkit?
When you run an antivirus scan, you expect it will detect all kinds of malware and malicious code. However, rootkits are an insidious type of computer virus that are capable of remaining undetected, sometimes for years! They can allow hackers real-time access to your system, record and transmit your valuable information, and cause all sorts of havoc.
In this article, we’ll explore what rootkits are, how they work, and why they’re a significant threat to your digital safety. At the end, we will discuss some simple ways to stay safe from rootkits and other cybersecurity threats.
How Do Rootkits Work?
Rootkits are a specific type of malware that are particularly difficult to detect, allowing hackers to have access to compromised systems for weeks, months, or even years.
Rootkits are notorious for their ability to hide themselves and their activities on your computer. Since they operate at the root level of your operating system, they can mask their presence from users and antivirus programs.
Their ability to hide extends beyond themselves and some rootkits can even disguise other harmful software!
Gaining Unauthorized Access
A rootkit infection typically results from hackers exploiting a security weakness in your system or, more commonly, social engineering attacks in which users voluntarily download and open infected files. This can come in the form of compromised email attachments or downloads, and even from infected USB-storage devices.
What makes rootkit malware particularly dangerous is their ability to evade detection. They can intercept and alter system calls, manipulate data, all while staying under the radar of antivirus software. It isn’t uncommon for antivirus software to simply fail to detect rootkits, leaving your computer infected for long periods of time.
Common Types of Rootkits
Kernel Mode Rootkit
This is one of the most dangerous types of rootkits. It operates at the kernel level, which is the core of the operating system. Having access to the kernel means the rootkit has privileged access and can control anything on the system. It can intercept and manipulate calls to the operating system, hide its presence, and control system processes.
Kernel mode rootkits are challenging to detect and remove because of their deep integration with the OS.
These rootkits operate at the application layer and are easier to install compared to kernel mode rootkits. They intercept and modify the behavior of standard system calls made by applications. While they are less powerful than kernel mode rootkits, they can still be used to steal user information and hide malicious activities.
They are generally easier to detect and remove than kernel mode rootkits.
This type of rootkit, often called a bootkit, targets the master boot record (MBR) of a computer system, which is the program responsible for starting the operating system during the boot process. By infecting the MBR, the bootkit can load before the OS, gaining control of the system from the very beginning of the computer startup process.
This makes them particularly stealthy, as they can initiate their defense countermeasures before typical security software has been activated.
Firmware rootkits are embedded in the firmware of hardware devices, like your motherboard’s BIOS or a router’s firmware. They are extremely persistent, surviving even complete operating system reinstallation.
Because they reside in hardware, they are difficult to detect and remove without specialized knowledge and tools.
These rootkits target specific applications and modify their behavior to avoid detection. They can be used to log keystrokes in a specific application, steal data, or provide unauthorized access to the application’s features.
Because they operate within the confines of an application, they are typically more limited in scope compared to system-level rootkits.
Memory rootkits reside entirely in your computer’s RAM and thus leave minimal traces on your computer’s harddrive. This type of rootkit is effective in evading detection because it disappears upon reboot, since RAM is volatile memory, which does not persist through power cycles.
However, this also means that they need to be reinstalled each time the system starts, which can be a limiting factor in their use.
Real World Rootkit Examples
Stuxnet emerged as a groundbreaking and sophisticated rootkit targeting Iran’s nuclear program, particularly the Natanz uranium enrichment facility.
Widely believed to have been developed by the United States and Israel, although never officially confirmed, Stuxnet represented a new level of cyber warfare. It’s thought to have been in development since at least 2005 and remained operational for years before its discovery in 2010.
The rootkit specifically targeted industrial control systems used in the Iranian nuclear program, causing significant damage to the centrifuges used in uranium enrichment and is estimated to have destroyed over 1,000 centrifuges and significantly set back Iran’s nuclear ambitions.
Image from https://cyberhoot.com/cybrary/stuxnet/
Ryuk Ransomware (2017 – Present)
First identified in 2017, Ryuk Ransomware has been a persistent threat, often using rootkit capabilities to gain control of victims’ networks. Although the creators of Ryuk are not definitively known, it is believed to have ties with North Korean hacking groups or Russian criminal organizations.
Ryuk is known for targeting large organizations, including healthcare, government, and private sector entities, demanding large ransom payments. It has caused extensive damage, including disrupting operations and causing financial losses.
Notable attacks include crippling the IT infrastructure of several U.S. newspapers in 2018 and disrupting the operations of Universal Health Services in 2020.
Uncovered in 2018, DarkTequila was a complex, multifaceted malware operation that predominantly targeted users in Mexico, likely developed by cybercriminals based in the region. The rootkit had been operational for years, possibly since 2013, before it was detected, and specialized in stealing banking credentials and personal data without being detected.
The malware was often spread by infected USB flash drives – driving home the importance of being very wary about plugging any suspect drives into your PC!
The damage done by DarkTequila included financial fraud and significant data breaches, affecting private individuals and businesses alike.
Why Rootkits are Dangerous
Computer Performance Issues
Rootkits can drastically affect the performance of a computer. They often consume a lot of system resources, leading to noticeable slowdowns, system crashes, and unreliable application behavior.
This not only hinders productivity but can also be a sign of deeper, more damaging activities happening in the background, like data extraction or additional malware installation.
Due to their ability to hide and evade detection, rootkits can remain on a system for a long time, either causing ongoing damage or waiting quietly for the opportune moment to strike. They can disable security software and update themselves to evade newly developed detection methods, making them a persistent threat that’s hard to eradicate.
Data Security Risks
The presence of a rootkit in a system poses severe risks to data security. Rootkits can function as keyloggers, helping attackers steal sensitive information such as login credentials, credit card and bank account details, personal information, and confidential corporate data.
The stealthy nature of rootkits means that data theft can occur over extended time periods without detection, leading to significant privacy violations, identity theft, financial fraud, and corporate espionage.
Loss of Control
One of the most insidious aspects of rootkit attacks is the unauthorized control they grant cybercriminals over the infected system. Attackers can use rootkits as a backdoor to remotely execute commands, alter system configurations, install additional malware, and manipulate system functionalities, all without the user’s knowledge.
This loss of control can turn your computer into a tool for perpetuating cyberattacks like spreading malware, launching distributed denial-of-service (DDoS) attacks, or participating in botnets.
Simple Prevention Tips
Regular Software Updates
Keeping software up-to-date is crucial in the fight against rootkits. Software developers regularly release updates that patch security vulnerabilities. By ensuring your operating system and applications are current, you can close the gaps that rootkits exploit.
Set your system to update automatically or make it a habit to check for updates frequently.
Invest in Reliable Security Software
A robust antivirus or anti-malware software is your first line of defense against rootkits in particular and malicious software in general. Look for software that specifically includes rootkit detection and rootkit removal capabilities. These security tools can scan for and identify suspicious behavior, helping to catch rootkits before they embed themselves deep within your system.
Additionally, ensure your security software is always kept up to date with the latest malware definitions.
Utilize Effective Firewalls
A firewall acts as a gatekeeper for your computer or network, controlling incoming and outgoing network traffic based on security rules. It’s an essential tool in preventing unauthorized access to your system, which can help stop rootkits before they infiltrate your computer.
Operating systems like Microsoft Windows and MacOS ship with built-in firewalls. Ensure that these are enabled – as they won’t do you any good if they have been turned off!
Be Aware of Phishing Attempts
Many rootkits (and other types of computer viruses like trojans) find their way onto computers through phishing – deceptive practices that trick users into revealing sensitive information or downloading malware.
Always be cautious with emails or messages from unknown sources. Avoid clicking on suspicious links or downloading attachments from untrusted emails. Educate yourself and your team, if applicable, about how to recognize phishing attempts.
The landscape of cybersecurity threats, including rootkits, is constantly evolving. Staying informed about new threats and trends in cybersecurity can significantly improve your ability to defend against them.
By reading this article you’re already doing a better job of staying informed than the average user, and we urge you to continue following reputable cybersecurity news sources and subscribe to security bulletins from trusted organizations!
Keep Your System Safe
Rootkits pose a serious threat to our digital security and the difficulty involved in detecting them makes prevention essential. Understanding what they are, how they operate, and the dangers they present is the first step in safeguarding yourself against these threats.
Keeping your computer and software updated, staying vigilant against phishing and social engineering threats, and using an antivirus which includes anti-rootkit capabilities are all essential for keeping your computer safe.
If you suspect that your computer may have an infection, or just want to make sure that your protection is adequate, we provide comprehensive computer virus detection and removal services for the Columbia Midlands. | <urn:uuid:9fc2bb13-a270-403b-a813-0a1922264f31> | CC-MAIN-2024-38 | https://bristeeritech.com/it-security-blog/what-is-a-rootkit/ | 2024-09-14T14:04:48Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651579.38/warc/CC-MAIN-20240914125424-20240914155424-00614.warc.gz | en | 0.934821 | 2,219 | 3.21875 | 3 |
FTC Proposes New Rules On Child Data Collection
Federal Trade Commission wants to regulate many more types of personal information for websites, mobile games, and online services that knowingly interact with children under the age of 13.
The Federal Trade Commission is proposing new privacy rules that would change how websites and online services that interact with children can collect, store, or share much of the personal information they gather on minors.
On Thursday, the FTC released proposed amendments to the Children's Online Privacy Protection Act (COPPA). According to the agency, the updates are meant "to respond to changes in online technology, including in the mobile marketplace."
Specifically, the FTC's proposals--open for public comment until Nov. 28--seek to update rules for how businesses that collect children's information notify others, obtain parental consent, as well as keep collected data secure and confidential. It would also update self-regulatory safe harbor provisions by requiring members of such programs to undergo an annual audit and report the results to the FTC. Finally, it would expand COPPA to cover not just websites, but also online services such as mobile applications and even some types of text messaging services.
The COPPA Rule, as it's officially known, was written in 1998 and went into effect in 2000, and regulates how websites and online service providers can interact with children under the age of 13. Notably, the rule requires parental consent before collecting, using, or disclosing any information on children under the age of 13, and stipulates that only the minimum necessary personal information can be gathered.
The FTC last looked at updating COPPA in 2005, but made no changes. The regulations weren't due to be reexamined for 10 years, but due to "the rapid-fire pace of change ... including an explosion in children's use of mobile devices, the proliferation of online social networking and interactive gaming" since 2005, the FTC began reexamining COPPA in April 2010.
[Software that tracks laptop computers is another concern. Learn more here.]
The COPPA amendments aim to regulate many more types of personal data. "One of the most significant proposed changes to the COPPA Rule is to the definition of 'personal information.' The definition of 'personal information' is important as the COPPA Rule only applies to operators whose websites or online service are directed to children or who have actual knowledge that they are collecting personal information from a child under the age of thirteen," said Eric Bukstein, an attorney at Hogan Lovells, in a blog post. Such personal information would include not just names and addresses, but geolocation data (that can be resolved to street and city name), screen and user names (if such information is shared with others by the data collector), persistent identifiers, as well as photographs, audio, or video of the child.
In addition, the FTC has proposed changing how websites obtain the verifiable parental consent required by COPPA. New techniques to be allowed would include "electronic scans of signed parental consent forms, video-conferencing, and use of government-issued identification checked against a database, provided that the parent's ID is deleted promptly after verification is done," said the FTC. Businesses that only use collected information internally could also verify identities via email. The FTC is also proposing to have a voluntary 180-day program for any businesses that want to suggest other verification techniques, and have the FTC accept or deny them.
In short, the changes would make many more businesses subject to COPPA, over which the FTC has recently been keeping a much closer eye. Notably, it fined mobile application developer W3 Innovations last month for COPPA violations.
"That settlement, coupled with the FTC's express recognition of the need for rule changes to address new technologies and services, suggests that the FTC will likely enforce the COPPA Rule much more broadly than it has in the past," said Bukstein. "This means that any media that is targeted at children under the age of thirteen will have to analyze whether it can be considered an 'online service' and take appropriate steps to comply with COPPA if necessary." Notably, any company that collects children's data and serves them advertising will first have to obtain parental consent to do so.
Industry associations and privacy rights groups have begun commenting on the proposed COPAA changes. In particular, the Direct Marketing Association (DMA) has criticized the FTC's push for more stringent forms of parental consent. "In the report, the FTC recommends doing away with the existing 'sliding scale' approach to parental consent, in which the required method of consent varies based on how the operator uses a child's personal information," the DMA said in a statement. "DMA believes that this approach has proven to be a sound means for protecting children online and supports retaining a system that strikes the right balance between providing parents with control and not inhibiting children's beneficial Internet experiences."
Meanwhile, the Center for Democracy and Technology (CDT), has raised concerns over using government-issued identification to verify children's identity. "This method only proves that the operator has received someone's ID; it cannot verify that the person on the ID is a parent of the minor," said the CDT.
Join us for GovCloud 2011, a day-long event where IT professionals in federal, state, and local government will develop a deeper understanding of cloud options. Register now.
About the Author
You May Also Like
How to Evaluate Hybrid-Cloud Network Policies and Enhance Security
September 18, 2024DORA and PCI DSS 4.0: Scale Your Mainframe Security Strategy Among Evolving Regulations
September 26, 2024Harnessing the Power of Automation to Boost Enterprise Cybersecurity
October 3, 202410 Emerging Vulnerabilities Every Enterprise Should Know
October 30, 2024
State of AI in Cybersecurity: Beyond the Hype
October 30, 2024[Virtual Event] The Essential Guide to Cloud Management
October 17, 2024Black Hat Europe - December 9-12 - Learn More
December 10, 2024SecTor - Canada's IT Security Conference Oct 22-24 - Learn More
October 22, 2024 | <urn:uuid:6b408204-588d-4c4f-aac8-420f642e931f> | CC-MAIN-2024-38 | https://www.darkreading.com/cyber-risk/ftc-proposes-new-rules-on-child-data-collection | 2024-09-17T02:11:34Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651722.42/warc/CC-MAIN-20240917004428-20240917034428-00414.warc.gz | en | 0.956085 | 1,246 | 2.640625 | 3 |
Multicast - PIM dense vs sparse mode
When deploying multicast in a network topology, multicast routing must be employed to ensure the correct forwarding of multicast traffic. In particular, the multicast routing protocol that is most often used is Protocol Independent Multicast (PIM). There are two types PIM that can be employed:
In PIM dense mode, the multicast traffic is initially flooded to all parts of the network using a construct called the shortest path tree (SPT). Parts of the network that don't want or hasn't requested multicast will prune back the traffic. In this sense, dense mode is a “push” model. PIM dense mode primarily uses the SPT, with each source having a separate distribution tree.
PIM sparse mode initially sends no traffic until a receiver on a network segment indicates it is interested in receiving the traffic. The traffic is then sent to that network segment by the RP (Rendezvous Point) in a shared tree topology. This is called a “pull” model. PIM sparse mode typically uses the shared tree topology, most often called a Root Point Tree (RPT) initially. This delivers greater simplicity of implementation and lower overhead. When a host wants to receive the multicast traffic, it sends an IGMP join message upstream toward the RP. Once traffic for a group is flowing, if the volume of traffic and the number of receivers warrant it, PIM sparse mode can switch over to the SPT for greater efficiency.
There is a third option called PIM sparse dense mode in which sparse or dense mode is used on a per multicast group basis. | <urn:uuid:4264e631-5063-42f1-8bab-6c6033737e3d> | CC-MAIN-2024-38 | https://notes.networklessons.com/multicast-pim-dense-vs-sparse-mode | 2024-09-19T15:10:30Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652031.71/warc/CC-MAIN-20240919125821-20240919155821-00214.warc.gz | en | 0.923912 | 341 | 2.96875 | 3 |
The United States Capitol: The Iconic Seat of American Democracy
Have you ever wondered about the impressive building that serves as the epicenter of American democracy? Look no further than the United States Capitol, located in Washington, DC. In this blog post, we will delve into the significance of the United States Capitol, its architectural marvel, and the vital role it plays in shaping the country’s governance.
1. The Significance of the United States Capitol:
The United States Capitol holds immense significance as the meeting place of the United States Congress, where the Senate and the House of Representatives convene. It serves as a symbol of American democracy, a place where laws are debated, bills are passed, and important decisions impacting the nation are made. The Capitol is a crucial component of the checks and balances system that defines the United States’ political landscape.
The only MSP – top-notch IT services provider in Washington, DC
2. Architectural Marvel:
The Capitol is a stunning architectural masterpiece that combines classical and neoclassical design elements. It was designed by renowned architect William Thornton, with subsequent modifications and expansions over the years. The building features a central dome adorned with a magnificent statue called the Statue of Freedom, which stands tall as a symbol of liberty and democracy. The Capitol’s grandeur truly reflects the importance of the work carried out within its walls.
3. A Brief History:
The construction of the United States Capitol began in 1793 and was completed in several stages. The building has witnessed significant events throughout the nation’s history, including the inauguration of presidents, the passage of landmark legislation, and historic speeches. Notably, the Capitol has also experienced moments of turmoil, such as the British invasion during the War of 1812, which led to its destruction and subsequent restoration.
4. Architectural Features and Spaces:
The United States Capitol encompasses various architectural features and spaces that are worth exploring. The Rotunda, located beneath the central dome, showcases impressive artwork and historical paintings. The Hall of Statues, also known as the National Statuary Hall, exhibits statues representing notable figures from each state. Additionally, there are several chambers and meeting rooms where legislators debate and make decisions that shape the nation’s governance.
5. Public Access and Tours:
Visitors have the opportunity to explore the United States Capitol through guided tours. These tours offer a unique glimpse into the history, art, and legislative process of the nation. Visitors can marvel at the stunning architecture, learn about significant events that have taken place within its walls, and gain insights into the inner workings of American governance. It is an experience that provides a deeper understanding of the country’s democratic principles.
The United States Capitol stands as a beacon of American democracy, a place where the voices of the people are heard, and policies are debated and enacted. Its architectural grandeur and historical significance make it a must-visit destination for anyone interested in the nation’s governance and history. Whether you marvel at its majestic dome, explore its ornate chambers, or witness the legislative process first-hand, the United States Capitol is a testament to the enduring strength of American democracy.
Driving Directions to Business IT Solutions & IT Services Provider in Washington, DC Metro Area | Intelice Solutions an IT Support Company From This POI
Driving Directions To The Next POI | <urn:uuid:ae35eecc-b87c-4f35-aafd-09b7909e5718> | CC-MAIN-2024-38 | https://www.intelice.com/the-united-states-capitol/ | 2024-09-19T15:08:51Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652031.71/warc/CC-MAIN-20240919125821-20240919155821-00214.warc.gz | en | 0.948747 | 689 | 3.65625 | 4 |
What are Wavelength Services?
Wavelength Services are large bandwidth connections that deliver high-speed data communication and transmission. Key aspects of wavelength services include:
- Dedicated Wavelengths or light paths within a fiber-optic network, operate at a specific frequency.
- High Capacity: Often in the range of 10 Gbps (gigabits per second), 100 Gbps, or even 400 Gbps per wavelength, which is important for organizations with substantial data transfer needs.
- Low Latency: Because it’s known for its low latency, wavelength services are crucial for applications that require real-time data transfer, such as financial trading, healthcare, or manufacturing.
- Point-to-Point: It can be configured as point-to-point connections, where data is transmitted between two specific locations.
- Security: These services are often considered more secure than traditional internet connections because they offer dedicated and predictable bandwidth, reducing the risk of network congestion or performance degradation due to shared resources.
- Customization: Customers are able to choose the wavelength capacity and other parameters that best suit their specific requirements.
Wavelength services are commonly used for various applications, including data center interconnects, cloud computing, content delivery networks (CDNs), scientific research, high-performance computing (HPC), and the transportation of large volumes of data over long distances.
Wavelength Services: Benefits to Business
For large businesses—especially those bogged down by existing network issues— Wavelength refers to a dedicated high-capacity data transport to meet the demand for high-speed, low-latency, and reliable data transmission for various applications, including cloud computing, data center connectivity, and high-performance computing.
Beyond the aforementioned features, wavelength services provide several key benefits to businesses, including:
- High-Speed Data Transfer: Wavelength services offer high data transmission capacities, often ranging from 10 Gbps to 400 Gbps per wavelength, which is crucial for businesses that need to move large volumes of data quickly.
- Reliability: Wavelength services offer dedicated and predictable bandwidth, reducing the risk of network congestion and performance degradation due to shared resources. Reliability is critical for businesses that require consistent network performance.
- Scalability: Wavelength services are scalable, allowing businesses to increase bandwidth as needed. As a rule of thumb, wavelengths are good for consistent data requirements and generally tend to trend up.
- Customization: Customers can often customize their wavelength services to meet their specific needs, including choosing the wavelength capacity, the number of wavelengths, and the endpoints. This flexibility enables businesses to tailor their network solutions.
- Data Center Interconnects: Wavelength services are commonly used for connecting data centers, enabling businesses to replicate and synchronize data between geographically dispersed data centers. This is crucial for disaster recovery, high availability, and load balancing.
- Cloud Connectivity: Wavelength services facilitate direct connections to cloud service providers, such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). This provides businesses with dedicated and high-capacity links to their cloud resources.
- High-Performance Computing (HPC): Organizations involved in scientific research, simulations, or data-intensive tasks benefit from the high bandwidth and low latency of wavelength services, which are ideal for HPC clusters.
- Content Delivery Networks (CDNs): CDNs can use wavelength services to efficiently distribute content and reduce latency, improving the user experience for businesses that rely on content delivery.
For a wide range of businesses that require fast, reliable, and customizable data transport solutions, Wavelength Services are valuable as they empower organizations to meet their data connectivity needs while ensuring high performance, security, and scalability.
FiberLight’s Wavelength Services: Tiers, Speeds, and Offerings
The specific tiers and speeds offered via wavelength services can vary depending on the service provider and the technology they use. These range anywhere from 10Gbps, which is suitable for small to medium-sized businesses with moderate data transfer requirements to 400Gbps, the highest commercially available wavelength service speeds, which is typically used where and when massive data transfer capacity is essential, like backbone networks and major data hubs.
FiberLight’s low-latency wavelength services are designed to grow with a customer’s data needs. The choice of wavelength speed depends on factors like the volume of data to be transferred, the latency tolerance of the applications, and the budget of the organization.
When considering a wavelength service, it’s crucial for businesses to assess their specific needs and work with service providers to determine the most suitable speed tier and other customization options to optimize their network performance. | <urn:uuid:4c42996b-308b-46dc-8032-ff8a223e5f54> | CC-MAIN-2024-38 | https://www.fiberlight.com/blog/accelerate-business-objectives-with-wavelength-services/ | 2024-09-20T21:14:51Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701423570.98/warc/CC-MAIN-20240920190822-20240920220822-00114.warc.gz | en | 0.940372 | 956 | 2.90625 | 3 |
What is cloud architecture?
Cloud architecture is how a series of interconnected components, from software tools and applications to networking, server and storage, combine to form a cloud of shared resources. It is the complete infrastructure of hardware and software that businesses and institutions use to create, index, store, and share vast amounts of data.
The most basic building blocks of cloud architecture are represented as being either front-end, back-end, or cloud-based delivery.
What are the fundamentals of cloud architecture?
There are three major components present in virtually every cloud service. The front-end is represented by the clients and devices used to virtualize, or to access and manage, all cloud data. These front-end tools can range from virtual web and mobile applications to complex analysis and automation tools, depending on the particular needs of an organization.
The back-end is comprised of the virtual servers, storage, and infrastructure such as CPUs and GPUs, network switches, and accelerator cards that power user access and queries. Unlike with traditional network hardware and in-house data centers, cloud enables companies to easily scale as their needs change without needing to purchase and maintain their own equipment.
Lastly, cloud-based delivery is the critical point connecting the front- and back-end, powered by SaaS, PaaS, and IaaS platforms. There are hundreds of different use cases, all of which can be easily customized to the particular needs of any organization.
What is cloud architecture framework?
A cloud architecture framework is the “rules of the road” or best practices behind a working cloud environment. While many parts of the cloud are based purely on technology, cloud architecture framework includes everything from components to roles, policies, security—even training.
At the component and sub-component level, the two most crucial factors to cloud architecture framework are their interoperability (their ability to communicate and send large amounts of data) and portability (their ability to move to a different cloud or server without difficulty). Provisioning is another significant consideration—how your cloud will adapt when changes or needs arise for available resources.
Another part of the framework is security. Elements like multi-factor authentication, account creation and maintenance, data classification, and proper logging of all network activity need to be carefully considered when establishing your cloud environment’s framework.
Lastly, a cloud architecture framework addresses end-to-end orchestration. This is the coordinated management of the entire cloud environment to ensure it is working to meet its intended goals. This includes frequent audits of the cloud itself, from security to performance to compliance.
What are the types of cloud architecture?
With three different types of cloud architecture to access and store data in the cloud, organizations can choose the service model that best serves their particular needs: public, private, or hybrid.
Regardless of the model, the security, flexibility, and cost savings of a cloud experience continue to attract new businesses and IT professionals every day.
The public cloud is, as the name implies, a complete third-party framework of computing resources like networking, memory, processing, and storage. This is the most common type of cloud computing today, allowing businesses to scale their resources as needed without purchasing or maintaining their own hardware or software.
There are times when using a private, or on-premises, cloud is necessary. In this model, the entire cloud system is managed by the organization. The decision to maintain a private cloud environment is often due to data security and sovereignty, industry compliance, or storage and processing resources availability. A private cloud can be hosted either by a third party or as part of a company’s own data center.
Lastly, hybrid cloud offers a best-of-both-worlds solution, in which an organization maintains an optimized private cloud for their own resources while still being able to leverage the vast resources of the public cloud due to cost and scalability. A hybrid cloud combines public and private cloud elements connected securely over a virtual private network (VPN) or private channel.
What’s the difference between the cloud’s front end and its back end?
The cloud’s front end represents the point at which a user interacts with the software clients, user interfaces, and client-devices or networks. This can be as simple as an email application or as complex as deep AI-based analytics tools. When provided to the user as an application, it is referred to as Software as a Service (SaaS). At the same time, the cloud architecture’s back end can simply be referred to as the actual hardware behind the cloud—everything from data storage to processors to network switches—also known as Infrastructure as a Service (IaaS).
What are the benefits of cloud architecture?
- Flexibility and scalability:
Cloud design enables enterprises to scale resources up or down based on demand. This scalability is important for sustaining corporate development while adjusting to changing workloads without requiring sizeable upfront hardware expenditures.
- Reliability and availability:
Cloud providers deliver high levels of dependability and availability through redundancy and failover systems. This contributes to a more robust IT environment by guaranteeing that data and applications are still available even in the event of hardware failures or interruptions.
- Agility and innovation:
Cloud computing helps companies reduce time-to-market by enabling quick application and service deployment. Cloud architecture's agility enables businesses to innovate in a competitive and dynamic environment by reacting swiftly to market developments and trying out new concepts.
What are the components of cloud architecture?
Cloud architecture: Connects front-end and back-end components to enable seamless functionality. This architecture is critical for realizing the full potential of cloud computing since it provides flexibility, accessibility, and dependable performance. Here are the fundamental elements of front-end and back-end cloud architecture.
- Front-end cloud architecture: The user interface is the visual interface that allows users to interact with cloud services. It includes features like dashboards, graphical displays, and navigation to ensure an intuitive and user-friendly experience.
- Client-side components: Client-side components are software and programs installed on users' devices, such as web browsers and mobile apps. These components support connection with the cloud infrastructure, allowing users to access and alter data smoothly. An important consideration in front-end cloud architecture is ensuring compatibility and optimal performance across diverse client devices.
- User experience optimization: Optimizing the total interaction between users and cloud apps is what user experience optimization is all about. This involves minimizing load times and responsiveness and maintaining a uniform experience across devices.
2. Back-end cloud architecture: The server infrastructure serves as the cloud system's backbone, holding the processing power and storage resources required to process and store data. Back-end architects create and maintain server clusters, ensuring they can grow dynamically to accommodate shifting demands. Load balancing and fault tolerance measures are used to maintain optimal performance and dependability.
- Database management: The back-end cloud architecture incorporates powerful database management technologies that effectively store and retrieve data. This includes selecting proper database models, establishing schemas, and implementing data security mechanisms. Scalability and data consistency are essential concerns in ensuring that the database can develop in tandem with the demands of the company.
- Measures for security and compliance: Security is a top priority in cloud architecture. Back-end components use encryption, access restrictions, and authentication protocols to secure data from unwanted access. Stringent security measures and frequent audits ensure compliance with industry norms and standards, fostering confidence among users and stakeholders.
What is cloud-based delivery?
Cloud-based delivery refers to the way users access, manage, and use the data itself. Depending on the type of application, this could be anything from a simple web portal to analytics or network management. This combination of virtual software and centralized hardware is what powers enterprise-level accessibility and flexibility, while also providing scalable, secured storage of large amounts of data. Whether public, private, or hybrid, you customize your IT solution to your precise workload and security needs.
How is cloud architecture used?
Cloud-native architecture is a system that’s purpose-built to run entirely in the modern cloud. Its most significant advantage over legacy systems like on-premises servers is flexibility and scalability. When it comes to modern cloud applications (versus the traditional native “monolithic” application model), the use of specialized microservices has been a significant development.
A cloud-native application is better thought of as one large application made from dozens, if not hundreds or thousands, of these microservices or application programming interfaces (APIs). This model also enables applications to be developed more simply while providing critical updates in days rather than weeks or months. Development teams and IT professionals greatly benefit from this integrated work environment, which enables members of the team to handle specific tasks while automating processes like compiling and deployment.
The number of cloud architecture applications continues to grow rapidly in almost every industry. Specialized applications (e.g., SalesForce and Marketo) are powering businesses to be more collaborative and iterative while increasing productivity and reducing downtime.
What are the strategy for implementation and migration of cloud architecture?
Strategy for Cloud Architecture Implementation and Migration:
Planning and building the cloud architecture and implementing migration techniques require important details:
- Planning and designing cloud architecture:
- Current infrastructure assessment: Assess the infrastructure, applications, and data first. Evaluate which workloads can be moved to the cloud and which need redesign.
- Define objectives and requirements: Outline the business goals driving the cloud migration. Determine performance, scalability, and security. This stage is essential for aligning cloud architecture with business goals.
- Selecting appropriate cloud services: Select the correct cloud services based on requirements. Choose IaaS, PaaS, and SaaS based on application needs.
- Architecture design: Plan a scalable, secure, and integrated cloud infrastructure. Define component interactions to create a unified, efficient framework. Use high-availability and disaster-recovery best practices.
- Cost estimation and optimization: Estimate new cloud architecture costs with a complete cost study. To match costs with utilization, use reserved instances, rightsizing resources, and auto-scaling.
2. Migration strategies and considerations:
- The phased migration approach: Prioritize essential, complex, and dependent workloads for phased migration. This reduces risk and makes each migration step a learning experience.
- Data migration: Data migration techniques should include data volume, consistency, and downtime. Cloud technologies and services simplify migration.
- Application refactoring: Consider restructuring or rearchitecting apps that need to be modified for cloud compatibility. This might entail making code adjustments, making use of cloud-native services, or performance tuning for the cloud.
- Testing and validation: Test apps extensively on the cloud before deployment. Verify performance, security, and functionality to guarantee a smooth transfer and quickly resolve any concerns.
- Monitor and optimize: Use comprehensive monitoring technologies to track performance and discover post-migration concerns. Use real-time data and user input to optimize resources, setups, and prices.
What are the best practices of cloud architecture?
The following aspects are crucial for building and executing cloud architectures.
- Scalability and flexibility:
Utilize auto-scaling methods to dynamically alter resources based on demand, maximizing cost efficiency. Make sure the design can accommodate greater traffic without sacrificing performance by horizontal scaling resources. Architecting for flexibility helps firms adjust to shifting needs.
- Security and compliance standards:
Protection of data and infrastructure requires strong security. Use encryption for sensitive data in transit and at rest. Implement strong authentication, authorization, and access controls. Perform regular security audits to find and fix issues. To ensure legal compliance and user confidence, follow industry-specific compliance standards and laws for data processing.
- Integration and interoperability:
Create integration between cloud services and on-premises systems. Communicate seamlessly between components and services using APIs. Choose cloud solutions with open standards for platform compatibility. Integrating data and collaboration throughout the enterprise streamlines operations and maximizes cloud investments.
- Performance optimization strategies:
Use performance optimization to maximize resource use. Use CDNs to improve content delivery and latency. Caching commonly requested data reduces response times. Track performance data to find bottlenecks and improve resource allocation.
How HPE can help you in cloud architecture?
HPE continues to develop and integrate all of the components necessary to create a working cloud environment. Building upon our legacy of high-performance computing and data storage, HPE has brought that same focus to creating the entire cloud ecosystem from edge to core to cloud and back.
HPE’s cloud architecture solutions go beyond simple hardware and software. The ability to visualize and manage your entire cloud environment, combined with analytics and AI-assisted tools with near-endless possibilities, gives organizations more flexibility than ever before.
Whether it’s platform, software, or infrastructure as a service (PaaS, SaaS, IaaS, respectively), HPE is leading the way for business and industry to integrate their organization into the cloud environment. Compared to the traditional networking model, HPE’s cloud architecture provides best-in-class security, flexibility, and scalability—while also providing tremendous cost-savings benefits. | <urn:uuid:16d8282d-61ab-4600-8d22-b5abe9e996aa> | CC-MAIN-2024-38 | https://www.hpe.com/dk/en/what-is/cloud-architecture.html | 2024-09-20T20:09:56Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701423570.98/warc/CC-MAIN-20240920190822-20240920220822-00114.warc.gz | en | 0.920273 | 2,730 | 3.390625 | 3 |
Information (or data) leakage is undesired behavior in machine learning in which information that should not be available in the training data inflates the model’s apparent performance during training and validation, causing poor performance at prediction time or in production. Models subject to information leakage do not generalize well to unseen data.
There are multiple types of data leakage, including target leakage (features that contain information about the label that would not be available at prediction time) and train–test contamination (information from the validation or test data influencing training).
Avoiding or detecting information leakage early is important to prevent models from learning the wrong signals and overestimating their value before they go into production.
In addition to following data science best practices, model interpretability is a great tool to identify and fight information leakage.
At C3 AI, data scientists are well-versed in information leakage problems and how to detect them. C3 AI carefully splits the data into separate groups – training, validation, and test sets – and keeps the test set intact to report the final performance after the model has been optimized on the validation set. For time-series data, C3 AI applications always apply a cut-off timestamp or time-series cross-validation. | <urn:uuid:751ad416-7dfa-481a-9b0d-41a85be008a7> | CC-MAIN-2024-38 | https://c3.ai/glossary/data-science/information-leakage/ | 2024-09-08T15:36:07Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651013.49/warc/CC-MAIN-20240908150334-20240908180334-00378.warc.gz | en | 0.918126 | 213 | 3.09375 | 3 |
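As an illustration of the splitting discipline described above, the sketch below separates training, validation, and test sets and applies a cut-off timestamp for time-series data. It is a generic example using pandas and scikit-learn, not C3 AI’s actual implementation; the column names and split ratios are assumptions.

```python
# Hedged sketch: leakage-safe data splits. Column names ("timestamp", "target")
# and split ratios are illustrative assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split

def split_tabular(df: pd.DataFrame):
    """Hold out a test set that is never touched during model tuning."""
    train_val, test = train_test_split(df, test_size=0.2, random_state=42)
    train, val = train_test_split(train_val, test_size=0.25, random_state=42)
    return train, val, test   # report on `test` only once, after tuning on `val`

def split_time_series(df: pd.DataFrame, cutoff: str):
    """Everything after the cut-off timestamp is treated as unseen future data."""
    df = df.copy()
    df["timestamp"] = pd.to_datetime(df["timestamp"])
    df = df.sort_values("timestamp")
    train = df[df["timestamp"] < pd.Timestamp(cutoff)]
    test = df[df["timestamp"] >= pd.Timestamp(cutoff)]
    return train, test
```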
350% growth in open cybersecurity positions from 2013 to 2021
The New York Times reports that a stunning statistic is reverberating in cyber security: Cybersecurity Ventures’ prediction that there will be 3.5 million unfilled cyber security jobs globally by 2021, up from one million positions in 2014.
If you plan to build a career in cyber security, or shift your current career toward it, the time could not be better. Whether you come from an IT or non-IT background, there is a place for everyone in this field, with no age limit and regardless of your academic degree. You just need an effective plan, plus some time, effort, and perhaps a small budget, to get yourself ready in a short period of time. But where should you start?
Today’s world is different than before: everything is connected online, from mobile devices, smart TVs, and cars to power grids and water pumps, and all of it is being attacked. At the same time, information has become the main asset of any business.
WHAT IS CYBER SECURITY?
Cyber security refers to the technologies and techniques used to protect information and systems from being stolen, compromised or attacked. This includes unauthorized or criminal use of electronic data, attacks on networks and computers, and viruses and malicious codes. Cyber security is a national priority and critical to the well-being of all organizations.
Accordingly, many organizations have started looking for cyber security professionals to protect their business and to comply with laws and regulations that require organizations to protect their information (e.g., HIPAA, SOX, and GDPR).
The good news about a cyber security career is that it requires different types of skills, both technical and non-technical. Any organization that needs to protect its systems and information must have technical controls in place, such as antivirus and firewalls, but it also needs non-technical controls such as policies and procedures, business continuity planning, and information security awareness.
Now I think you are convinced that this could be the right path for you. But the main challenge remains: where should you start? Should you begin with a cyber security training course? If so, which course is suitable for you? Or should you read a book? What is the right approach?
First Step: Identify your objective based on your qualifications and background.
As mentioned at the beginning of this article, you should decide your path based on your technical background. In this article, we will discuss the cyber security career path for IT staff coming from a network, development, or support background. In the next article, we will discuss the career path for candidates who do not have a technical background.
Second Step: Build your Skill Matrix.
This matrix is an Excel sheet that lists the most common cyber security jobs and the skills you need in order to apply for each one. You can download a free skill matrix template, and the good news is that you will find many skills shared between different cyber security jobs (e.g., vulnerability assessment, risk assessment). You will need to learn those skills, know how to perform them, and understand which tools are used and what their output looks like. It is important to practice everything in a virtual environment.
Third Step: Create your own checklist.
This Checklist will help you track your progress and map it to the skills matrix.
Fourth Step: Select the right resources
Selecting the right study resources is essential; you can use online courses, instructor-led classroom training, books, or free resources such as YouTube, blogs, and forums. We suggest online courses due to their reasonable prices, practical demonstrations, and clear course goals. This list shows the available courses mapped to the skill matrix.
Fifth Step: Getting Certified
Holding a professional cyber security certificate such as CISSP or CEH is also very important to demonstrate your skills. It can serve as a substitute for an academic degree if yours is not in cyber security. In the table below, you will find an average salary survey for cyber security certificate holders, according to Forbes:
Sixth Step: Getting real-life experiences
Apart from taking courses and earning professional certificates, you need to gain significant real-life experience.
Some platforms will give you the opportunity to join a real cyber security project; this is a golden chance and will save you a lot of time and effort. Please check this link.
Seventh Step: Update your CV and LinkedIn profile
Finally, you will need to update your CV and LinkedIn profile to reflect the new skills you have learned.
Edge computing has emerged as a transformative technology for the Internet of Things (IoT), fundamentally altering how data is processed and managed within IoT ecosystems. By enabling data processing closer to the source, edge computing significantly enhances IoT infrastructure, leading to improved efficiency, reduced latency, and enhanced security. This article delves into the intricacies of edge computing in the IoT domain, exploring its impact and the potential it holds for the future of IoT.
Introduction to Edge Computing in IoT
The Internet of Things, a network of interconnected devices capable of collecting and exchanging data, has seen exponential growth in recent years. IoT devices range from simple sensors to complex industrial machines. Traditionally, IoT devices would send all collected data to centralized cloud-based services for processing and analysis. However, this approach often leads to high latency and increased bandwidth usage, which can be detrimental in scenarios requiring real-time data processing. This is where edge computing comes into play.
Edge computing refers to data processing at or near the source of data generation, rather than relying solely on a central data-processing warehouse. This means that data can be processed by the device itself or by a local computer or server, which is located close to the IoT device.
Enhanced Efficiency and Reduced Latency
One of the primary advantages of edge computing in IoT is the significant reduction in latency. By processing data locally, the need to send all data to a central cloud for processing is eliminated, thereby reducing the time it takes for the data to be processed and the response to be sent back. This is particularly crucial in applications where real-time processing is essential, such as autonomous vehicles, industrial automation, and smart grids.
Moreover, edge computing reduces the bandwidth required for data transmission, which is particularly important given the growing number of IoT devices and the massive volume of data they generate. By processing data locally and only sending relevant or processed data to the cloud, edge computing alleviates the strain on network bandwidth.
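As a simple illustration of the bandwidth savings described above, the sketch below processes raw sensor readings at the edge and forwards only a compact summary to the cloud. The reading format, threshold, and upload function are illustrative assumptions.

```python
# Hedged sketch: aggregate raw readings locally and send only a summary upstream.
# The threshold, window contents, and send_to_cloud() stub are assumptions.
from statistics import mean

ALERT_THRESHOLD = 80.0   # e.g., degrees Celsius on an industrial sensor

def send_to_cloud(payload: dict) -> None:
    # Placeholder for an MQTT/HTTPS upload in a real deployment.
    print("uploading:", payload)

def process_window(readings: list[float]) -> None:
    """Summarize one window of readings instead of streaming every sample."""
    summary = {
        "count": len(readings),
        "mean": round(mean(readings), 2),
        "max": max(readings),
        "alerts": sum(1 for r in readings if r > ALERT_THRESHOLD),
    }
    send_to_cloud(summary)   # a few bytes instead of thousands of raw samples

process_window([72.1, 75.4, 81.2, 79.9, 85.0, 78.3])
```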
Improved Security and Privacy
Another critical aspect of edge computing in IoT is the enhancement of security and privacy. By processing data locally, sensitive information does not have to travel over the network to a centralized cloud, reducing the exposure to potential security breaches during transmission. Local data processing also means that in the event of a network breach, not all data is compromised, as some of it remains on the local device or edge server.
Furthermore, edge computing allows for better compliance with data privacy regulations, as data can be processed and stored locally, adhering to the legal requirements of the region in which the IoT device is located.
Enabling Advanced IoT Applications
Edge computing unlocks the potential for more advanced IoT applications. For instance, in the field of healthcare, wearable devices can monitor patient health data in real-time, processing and analyzing data on the spot to provide immediate feedback or alert healthcare providers in case of an emergency. In industrial settings, edge computing allows for predictive maintenance of machinery, where sensors can process data on the machine’s performance and predict failures before they occur.
Challenges and Considerations
Despite its advantages, implementing edge computing in IoT comes with its own set of challenges. One of the primary concerns is the management and maintenance of edge computing nodes. Unlike centralized cloud servers, edge devices are distributed and may be located in remote or hard-to-reach areas, making management and maintenance more challenging.
Additionally, ensuring the security of edge computing devices is crucial, as these devices could become targets for cyber-attacks. Unlike centralized data centers, which typically have robust security measures in place, edge devices may not have the same level of security, making them vulnerable.
The Future of Edge Computing in IoT
Looking ahead, the future of edge computing in IoT appears promising. With advancements in technology, edge devices are becoming more powerful, capable of handling more complex data processing tasks. This evolution is expected to drive further adoption of edge computing in various sectors.
In conclusion, edge computing represents a paradigm shift in how data is processed within IoT infrastructures. By enabling data processing closer to the source, it addresses the challenges of latency, bandwidth usage, and security. Although there are challenges in implementing edge computing, its benefits are significant, paving the way for more efficient, secure, and advanced IoT applications. As technology continues to evolve, edge computing is set to play an increasingly pivotal role in the IoT landscape, driving innovation and enabling new possibilities. | <urn:uuid:b2e62571-e58c-4f36-b8ae-75f0f3cdccce> | CC-MAIN-2024-38 | https://iotbusinessnews.com/2023/12/05/44454-the-impact-of-edge-computing-on-data-processing-and-iot-infrastructures/ | 2024-09-14T16:49:25Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651580.73/warc/CC-MAIN-20240914161327-20240914191327-00778.warc.gz | en | 0.92166 | 892 | 3.21875 | 3 |
In this article, you will discover:
- What is ISO 27001 and BYOD
- Risks associated with BYOD
- BYOD Security requirements in ISO 27001
- BYOD Tools and which one should you use
What’s ISO 27001?
ISO 27001 is the globally recognized standard for information security management systems (ISMS), providing companies of any size and from all sectors with a systematic and structured approach to managing and protecting sensitive information.
It employs a Plan-Do-Check-Act (PDCA) cycle and provides a framework for organizations to:
- Identify and assess information security risks
- Implement controls to mitigate risks
- Monitor and review the effectiveness of those controls on an ongoing basis
More information about ISO 27001 can be found in this article.
Bring Your Own Device (BYOD) is an IT policy that permits employees to use their personal mobile devices, such as smartphones, tablets, and laptops, to access company data and systems. This trend has rapidly gained traction, particularly with the surge in remote work following the COVID-19 pandemic. BYOD offers employees greater flexibility, enhances job satisfaction, and saves costs for organizations.
What are Risks Related to BYOD?
While BYOD policies can offer cost savings and boost employees’ satisfaction, they can also come with various challenges that need to be addressed proactively to protect your business data. Being aware of these challenges enables organizations to address them effectively, thereby enhancing the overall efficiency of their BYOD initiatives.
BYOD presents multiple risks:
- Shadow IT: Employees may use unauthorized hardware or software without IT department oversight, such as unapproved USB drives or consumer-grade software, potentially increasing security vulnerabilities.
- Lack of Uniformity: Employee devices can have varied operating systems (iOS, Android, ChromeOS, etc), which can complicate collaboration and management efforts.
- Data Leaks: Misuse of information by employees or device theft can lead to data breaches.
- Malware: Increased exposure to malware due to the absence of control over the applications installed by employees on their personal devices.
- Compliance Violations: Non-compliance with privacy laws like GDPR or healthcare regulations such as HIPAA can result in loss of trust and hefty fines. Storing sensitive data on personal devices also poses risks like inadequate security and accidental sharing. Additional details regarding GDPR and HIPAA compliance can be found here and here.
- Legal Issues: Unauthorized searches of employee devices for company data can raise legal concerns, including privacy rights, accidental removal of personal data, and handling of company data seized by law enforcement. Failure to address these issues through clear BYOD policies can lead to legal disputes and significant expenses for the company.
BYOD Security for ISO 27001
While ISO 27001 does not explicitly mention BYOD, numerous controls closely align with BYOD concerns, emphasizing the secure use of personal devices and data protection.
Here are some relevant Annexes in ISO 27001:2013:
A.6.2.2 Teleworking (A.6.7 Remote Working in 2022 Version)
Given that employees may use their personal mobile devices outside of office premises, this control applies to BYOD. Your organization’s BYOD policy should mandate the implementation of security measures for accessing, processing, and storing information.
A.13.2.1 Information Transfer Policies and Procedures
This control requires the documentation of measures for safeguarding information transferred via any communication equipment, including employees’ personal mobile devices. Therefore, if you haven’t created separate policies or procedures for information transfer, these requirements can be incorporated into your BYOD policy.
A.13.2.3 Electronic Messaging
Similarly, if protocols for protecting electronic messages have not been specified in other documents, your BYOD policy is the appropriate avenue for addressing this aspect.
Updates to ISO 27001 in October 2022 included Annex A 8.1, which replaced the previous Mobile Device Policy (ISO 27001:2013 Annex A 6.2.1), requiring organizations to create a policy addressing user endpoint device configuration and handling.
ISO 27001:2022 Annex A 8.1 provides additional recommendations for organizations permitting the use of personal devices for work-related tasks:
- Employ software tools to segregate personal and work activities on devices, ensuring the security of organizational information. Features like Containerization in MDM tools can help achieve this effectively, as discussed below.
- Employees should consent to certain conditions to access their personal devices, including:
- Acknowledging their responsibility for physically safeguarding devices and performing essential software updates.
- Agreeing not to claim ownership rights over the company’s data.
- Agreeing that data on the device can be remotely wiped if lost or stolen, aligning with legal guidelines for personal information protection. This functionality is integral to MDM tools.
- Establish guidelines regarding the rights to intellectual property generated using user endpoint devices.
- Address statutory restrictions on personnel’s access to private devices and how to manage such access.
- Permitting staff to use personal devices may entail legal liabilities due to third-party software applications. Companies should review their software licensing agreements with providers to mitigate risks.
BYOD Security Tools
There are some technical tools available to help organizations achieve the BYOD requirements in ISO 27001:
Mobile Device Management (MDM)
One of the most commonly employed BYOD solutions is Mobile Device Management (MDM).
- Scope: focuses solely on mobile devices and their security.
- Key MDM features:
- Zero-Touch Enrollment: Devices get enrolled with MDM as soon as they are activated.
- Device Configurations: MDM software can disable copy-paste, screenshot capture, clipboard, Bluetooth, removable media, and other wireless sharing features. Furthermore, administrators can block unapproved file-sharing apps to restrict data sharing.
- Device and Data Security: MDM tools can enforce various security measures, such as encryption, using strong passwords, regular backups, user authentication and so on, to safeguard the device and its data.
- Remote Device Locking & wiping and Maintenance: Lost or stolen devices can be locked and wiped remotely. Device updates and troubleshooting can also be done over the air.
- Containerization: create secure “containers” for corporate data & apps separate from personal data, with data encryption & authorization.
- Policy Enforcement: Companies can pre-determine configurations, restrictions and applications and mass-deploy these policies on multiple devices, streamlining device management.
- Location Tracking: Administrators can view the current location as well as historical location data of devices.
- Application and Content Management: MDM facilitates centralized management of all mobile content, ensuring applications are consistently updated and readily accessible to employees as needed.
- Audit & Compliance Reporting: MDM can provide automated logging, compliance reporting, and dashboards to track device compliance with security frameworks like ISO 27001 and organizational policies (a simplified compliance check is sketched below).
For a more comprehensive understanding of MDM, please see this article.
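To illustrate the kind of policy enforcement and compliance reporting listed above, the sketch below checks a device record against a minimal BYOD baseline. The attribute names, minimum OS version, and rules are illustrative assumptions, not fields from any specific MDM product.

```python
# Hedged sketch of a BYOD compliance check. Field names and rules are
# illustrative assumptions, not a real MDM vendor's API.
from dataclasses import dataclass

@dataclass
class Device:
    os_version: str
    encrypted: bool
    passcode_set: bool
    containerized: bool   # work data separated from personal data

MIN_OS = (14, 0)  # assumed minimum major.minor version

def is_compliant(device: Device) -> tuple[bool, list[str]]:
    """Return compliance status and the list of failed controls."""
    failures = []
    major, minor, *_ = (int(p) for p in device.os_version.split("."))
    if (major, minor) < MIN_OS:
        failures.append("outdated OS")
    if not device.encrypted:
        failures.append("storage not encrypted")
    if not device.passcode_set:
        failures.append("no passcode")
    if not device.containerized:
        failures.append("work profile not containerized")
    return (not failures, failures)

print(is_compliant(Device("13.2", True, True, False)))
# (False, ['outdated OS', 'work profile not containerized'])
```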
Enterprise Mobility Management (EMM)
EMM is an expansion of MDM, offering a wider range of functionalities and capabilities.
- Covers the entire mobile ecosystem within an organization, including application, content and identity management.
- Explicitly designed for managing apps and content on mobile devices; not suitable for Mac or Windows management.
- Key EMM features: EMM solutions encompass all MDM features and some additions:
- Mobile Application Management (MAM) focuses mainly on managing applications. It allows for distribution, security, updating and configuring of software running on mobile devices.
- Mobile Content Management (MCM) enables secure access to corporate content and data on all endpoints. It can push, access, store, and distribute content from the company’s internal repository in a secure manner.
- Identity and Access Management (IAM) facilitates user authentication and enforces policy-based rights and permissions. It enables IT teams to categorize users into groups, each group having predefined permissions and restrictions.
Unified Endpoint Management (UEM)
UEM combines the capabilities of both MDM and EMM solutions while introducing advanced features to offer holistic monitoring, management, and security for all endpoints.
- Manages other endpoints beyond mobile devices, including PCs, rugged devices, IoT devices, wearables, etc through a single console.
- Key UEM features: UEM solutions include MDM and EMM functionalities and some additions:
- Centralized Management Console, with complete visibility into the IT environment and on any asset
- Software and OS Deployment: Enables automated deployment of software and operating systems across the organization’s network from a central console, limiting manual intervention.
- Patch Management and Update Installation: Automatically scans endpoints for software, firmware or vulnerabilities and applies patches swiftly to fix vulnerabilities across all network endpoints.
- Threat Detection and Mitigation: Integrates with Endpoint Detection And Response (EDR) and other security technologies to identify abnormal device behaviors indicative of ongoing or potential threats, triggering appropriate security actions.
- Seamless Integration With Other Tools: Integrates effortlessly with helpdesk software, productivity and collaboration tools, and enterprise mobility solutions for enhanced efficiency and a unified IT environment.
Which BYOD Solution is Right for My Business?
Choosing between Mobile Device Management (MDM), Enterprise Mobility Management (EMM), or Unified Endpoint Management (UEM) depends on several factors, including business requirements, device management, security needs, integration, and cost considerations.
When to Choose MDM?
MDM is ideal for businesses with relatively simple IT systems but a large fleet of mobile devices requiring ongoing management. Educational institutions and small businesses managing mobile devices for basic tasks benefit from MDM’s cost-effectiveness and simplicity.
When to Choose EMM?
EMM suits environments with diverse devices and operating systems (iOS, Android, Linux, ChromeOS). It offers advanced application and content management features suitable for organizations with specialized applications and sensitive data, like mid-sized financial services firms.
When to Choose UEM?
UEM is the ultimate solution for managing any endpoint, real or virtual, regardless of device or operating system. It’s ideal for businesses with large and growing device landscapes, distributed workforces, and stringent security requirements. UEM offers scalability, adaptability, and future-proofing capabilities, making it a top choice for organizations undertaking digital transformations. | <urn:uuid:9ae3302a-f5fe-4d80-9c5f-c61d249b7699> | CC-MAIN-2024-38 | https://blog.getagency.com/compliance/byod-security-for-iso-27001/ | 2024-09-08T19:47:06Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651017.43/warc/CC-MAIN-20240908181815-20240908211815-00478.warc.gz | en | 0.900591 | 2,165 | 2.734375 | 3 |
Every computer in a network is identified with an internet protocol (IP) address, which it uses to communicate with other devices on the same network. IP addresses come in different forms; the more common form, known as IPv4, gives each computer a 32-bit identifier (e.g. 192.168.34.12).
On some networks, security of digital assets and applications is maintained by specifying which IP addresses can access which resources. An IP spoofing attack happens when a malicious actor masks their identity by presenting themselves with the IP address of a legitimate device to gain access to resources that would otherwise be beyond their reach.
For instance, access to a server might be limited to a specific set or range of IP addresses. A hacker manipulates their network packets so that the sender’s address reads as that of a legitimate computer. By doing this, the attacker tricks the server into thinking the packets are coming from an authorized device.
Hackers use IP spoofing in a number of different ways, including staging DDoS attacks, in which attackers drain the resources of a server by flooding it with bogus network traffic. IP spoofing can also be used in man-in-the-middle attacks. In this case, the attacker stands in between two communicating parties, spoofing each of their addresses to the other. This way, each of the victims sends their network packets to the attacker instead of sending them directly to their real destination.
The biggest defense against MitM attacks conducted through IP spoofing is to use encrypted communications. When the information being exchanged between two parties is encrypted with a key that only they hold, even a malicious party that manages to intercept the traffic won’t be able to read or manipulate its contents. Authenticating user identities also prevents hackers from gaining unauthorized access to network resources by simply spoofing their IP address.
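To make the defense concrete, the sketch below shows why authenticating messages with a shared secret is stronger than trusting the sender’s IP address alone. The key handling is deliberately simplified, and the key and messages are illustrative assumptions.

```python
# Hedged sketch: verify a message with an HMAC instead of trusting the source
# IP, which an attacker can spoof. Key management is intentionally simplified.
import hmac
import hashlib

SHARED_KEY = b"replace-with-a-securely-exchanged-key"  # assumption for the demo

def sign(message: bytes) -> bytes:
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    expected = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)   # constant-time comparison

legit = b"open valve 3"
tag = sign(legit)

print(verify(legit, tag))                     # True: the sender holds the key
print(verify(b"open valve 3 (forged)", tag))  # False: a spoofed IP doesn't help
```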
The Internet of Things (IoT) has significantly changed many industries and applications by enabling better oversight. Many professionals are becoming familiar with the benefits of IoT in logistics and transportation and how the technology could enhance their businesses.
1. Improving Safety
People working in the logistics and transportation industries often drive large, heavy vehicles for shifts lasting several hours. The IoT can give them more and better information, reducing the likelihood of accidents that harm themselves and others.
In one example, an automobile brand unveiled an IoT solution that allows communication between vehicles, city infrastructure and vulnerable road users. Some early tests of the technology have alerted drivers of nearby construction sites or school zones.
Developers are working on options that streamline communication between vehicles, road signs, lights and cellular towers. They believe when drivers get more real-time details, they have the information to keep themselves and everyone around them safer.
Many people are also learning about the benefits of IoT in logistics after installing sensors. Those additional sensors allow them to monitor specifics about vehicles and drivers and then make actionable decisions based on the data.
Such approaches promote safety by warning people of vehicle issues that need immediate attention to avoid catastrophic outcomes. They can also tell supervisors which drivers may have dangerous road habits and need coaching to address them.
Some IoT solutions combine with cameras, giving managers a comprehensive picture of each potentially dangerous driving event. They can then use that information to determine what happened and why drivers behaved in certain ways. Such insights are excellent for giving people specific feedback and empowering them to change for the better.
2. Reducing Transit-Related Risks
A lot can happen in the time between goods leaving factories and reaching their destinations. One of the benefits of IoT in logistics is that it can provide real-time information. If a refrigerated truck’s door is left open, the right people know about that issue immediately rather than hours later.
IoT sensors also indicate if items get dropped or handled roughly. Such patterns could highlight problems with specific logistics partners or indicate the products may need packaging that offers better protection.
The IoT is also helpful for coping with extreme temperatures. Some systems take automatic actions at specific thresholds, ensuring perishable goods don’t get too warm. Others can trigger audible alerts to remind drivers to turn on the air conditioning in cabs. This functionality can minimize risks and reduce insurance costs, making IoT sensors worth pursuing.
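As a simplified illustration of the threshold-based actions mentioned above, the sketch below flags a cold-chain excursion only after the temperature stays above a limit for several consecutive readings, ignoring brief spikes. The limit, window, and alert text are illustrative assumptions.

```python
# Hedged sketch: act when refrigerated cargo stays too warm for several
# consecutive readings. Limits and the alert action are assumptions.
MAX_TEMP_C = 8.0          # assumed limit for this load
CONSECUTIVE_LIMIT = 3     # readings in a row before acting

def monitor(readings):
    """Yield an alert once an excursion persists, ignoring brief spikes."""
    streak = 0
    for i, temp in enumerate(readings):
        streak = streak + 1 if temp > MAX_TEMP_C else 0
        if streak == CONSECUTIVE_LIMIT:
            yield f"reading {i}: sustained excursion, notify driver / start cooling"

# One brief spike (ignored), then a sustained excursion (alerted once).
print(list(monitor([5.1, 9.0, 5.3, 8.6, 9.2, 9.8, 10.1])))
```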
Some decision-makers also rely on IoT products to tackle cargo theft, which can happen along the entire supply chain. Factors such as unsecured facilities, insufficient driver training and seasonality can elevate cargo theft risks. Although some criminals target warehouse-stored goods, others look for merchandise on the move, hoping to encounter situations where drivers get distracted and leave expensive goods unattended.
IoT sensors let fleet managers see the location and status of trucks or individual shipments. Precise and timely information can assist in quickly recovering stolen goods and prosecuting the perpetrators. However, this technology could also stop thefts in progress, such as if someone can remotely activate an alarm and flashing lights to attract attention and deter criminals.
3. Increasing Visibility
Estimates suggest there will be more than 24 billion IoT-connected devices used worldwide by 2030. Although consumer devices account for part of that number, many products are industrial solutions that aid preparedness.
If you’ve ever seen a truck full of goods pull up to a retail location, you probably know teams of people have anticipated its arrival and are ready to immediately begin unloading the boxes. However, heavy traffic, road accidents and incorrect routing can cause drivers to arrive at their destinations later than expected. Conversely, many make up time, especially if traveling at off hours when the roads are less busy.
Team members familiar with the benefits of IoT in logistics use sensor data for better planning. They can see the real-time location of trucks in transit and estimated arrival times. Whenever the trucks arrive, they’ll have the knowledge to staff enough people so the goods get unloaded promptly. This approach prevents delays and enhances productivity for everyone involved.
Researchers also developed an IoT solution that allowed parents to see the real-time locations of school buses and when the vehicles should arrive at each stop. Such information helps them get their kids ready each morning and ensure they don’t wait outside for too long.
School administrators can also see details such as how many children are on the bus at any time. Those insights allow making data-driven decisions about adding or changing routes to better serve all learners who avail of them.
4. Getting Goods to the Right Places
The sheer volume of parcels moving worldwide means mistakes happen and some packages end up in the wrong places. However, some logistics brands, such as UPS, are using smart packaging solutions to reduce such instances.
The company’s approach involves wearable devices for employees and package tags to reduce manual efforts and improve statistics. Currently, one in every 400 parcels gets loaded onto the wrong vehicle.
Those events cause confusion and customer dissatisfaction. However, leaders hope to reduce the rate to one in 800 by increasing packaging connectivity. Statistics showed 50 locations using this method have achieved a 1-in-1,000 rate of packages placed onto the wrong vehicles, indicating potential beyond what leaders expect.
IoT tracking can also minimize customers’ need to call customer service to find out about parcels in transit. Containers equipped with IoT sensors can provide up-to-date information for anyone with relevant details on packages moving through the system.
Will You Experience the Benefits of IoT in Logistics and Transportation?
There are practical and effective ways to apply IoT to the logistics and transportation industries. Anyone working in those areas should strongly consider how the technology could improve their processes. | <urn:uuid:194e55dd-92cd-45ad-8fc3-b5de3937b87c> | CC-MAIN-2024-38 | https://www.iotforall.com/4-ways-iot-drives-the-future-of-logistics-and-transportation | 2024-09-11T07:26:55Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651344.44/warc/CC-MAIN-20240911052223-20240911082223-00278.warc.gz | en | 0.953196 | 1,166 | 2.59375 | 3 |
With the new HelpRansomware guide, you will find out what cybercrime is, the most common types, and its effects on cybersecurity.
What is cybercrime?
Cybercrime is an act perpetrated on the Internet, punishable by law.
Attacks on a computer system, such as ransomware, are considered cybercrimes.
According to the Cambridge Dictionary, the commonly used term cybercrime is a:
“Crime or illegal activity that is done using the internet:”
As stated in the Cyber Security Breaches Survey 2021, the major cyber threats are:
- Phishing attacks;
- Others impersonating organizations in email or online;
- Viruses, spyware, or malware;
- Denial of service attacks;
- Hacking of online banks accounts;
- Takeovers of accounts;
- Unauthorized accessing of files;
Phishing is a practical, easy, and inexpensive way for cybercriminals to access users’ confidential information.
This cybercrime occurs by sending a fraudulent email or other similar media hoping that the user will “bite”.
Victims download this malware by opening an attachment or a link on a device that contains financial or other data.
The Webroot report analyzes the incidence of phishing attacks for each industry.
The most affected are cryptocurrency exchanges (55%) and gaming platforms (50%).
Some of the offenses related to the use of information and communication technologies that are sanctioned under the current substantive Peruvian Law on Computer Crimes and Criminal Code are the following:
- Misuse of devices and information mechanisms;
- Illegal access to computer systems;
- Sexual Harassment;
- Crimes against Intellectual Rights;
- Crimes against the Public Administration.
Computer Misuse Act 1990 is the primary legislation in the UK relating to cybercrime attacks.
It lists five categories of computer misuses:
- Unauthorized access to computer material;
- Unauthorized access with intent to commit or facilitate the commission of further offenses;
- Unauthorized acts with intent to impair the operation of a computer. The offense is committed if the person behaves recklessly;
- Unauthorized acts (section 3ZA) causing or creating a risk of serious damage. This section is aimed at those who seek to attack the critical national infrastructure;
- Making, supplying, or obtaining articles for use in an offense under sections 1, 3, or 3ZA.
Cybercrime in the United States
Title 18 of the United States Code, sect. 1030 referring to fraud and related activity in connection with computers punishes whoever:
- Has accessed a computer without authorization or exceeding authorized access, and have obtained information that could be used to the injury of the United States;
- Intentionally accessed a computer without authorization or exceeded authorized access, and thereby obtained financial information from any US institution or any protected computer;
- Intentionally accessed without authorization any non-public computer of a department or agency of the United States, that is exclusively for the use of the government;
- Knowingly accessed a protected computer to defraud and obtain anything of value;
- Knowingly accessed a protected computer and caused the transmission of a program, information, code, or command, and as a result, intentionally caused damage or loss;
- Knowingly, and with intent to defraud, trafficked in passwords or similar information through which a computer may be accessed, where such trafficking affects interstate or foreign commerce;
- With intent to extort from any person any money or other thing of value, transmitted a communication containing a threat to cause damage to a protected computer.
In Colombia, Law 1273 of 2009 deals with cyber offenses.
The reform of the Criminal Code has brought to the creation of a new document for information and data protection.
This law refers to attacks against the confidentiality, integrity, and availability of data and information systems.
Action Fraud is the UK’s national service run by the City of London Police working alongside the National Fraud Intelligence Bureau (NFIB).
Only individuals, public bodies, and not-for-profit organizations can report offenses committed online by phone or through the online reporting tool.
After registering and logging in, you will have access to a dashboard. You will be able to track, save and resume a partial report.
Businesses, charities, or other organizations currently suffering a cyber-attack can only ask for assistance by phone since online reporting is not allowed.
Moreover, under the General Data Protection Regulation (GDPR) rules, it is mandatory that you also report data breaches to the ICO within 72 hours.
You may also report scams and fraud to the state consumer protection office, report to the local police or federal government. However, agencies usually don’t follow up and can’t recover lost money.
While state authorities offer the option to report cybercrime, it might be a slow process, just as the times of justice.
In the event of a ransomware attack, or an attack by a CryptoLocker variant, the best option available to you is to contact a specialized company, such as HelpRansomware.
This type of malware prevents users from accessing their files and devices and requires an anonymous online payment in exchange for access.
HelpRansomware allows you to recover encrypted files from multiple devices.
Our team of cybersecurity experts takes quick and efficient action to recover documents infected with malware.
The Montevideo Police Headquarters, through the Departamento de Delitos Tecnológicos (Department of Technological Crimes), is charged with tackling computer crimes.
Users can contact this unit via the web, email, or telephone number.
As mentioned above, the Computer Misuse Act 1990 is the main UK legislation that deals with offenses or malicious attacks against computer solutions.
We may find other indications in the following laws:
- Data Protection Act 2018, including UK GDPR (EU law) that also sets out data protection requirements for immigration and other areas of law;
- Communications Act 2003, including cybersecurity obligations;
- Privacy and Electronic Communications Regulations 2003, applying to public electronic communications service providers;
- The Network and Information Systems Regulations, imposing obligations on operators, or essential services and digital service providers;
- The Regulation of Investigatory Powers Act 2000, which governs surveillance and interception of communications data;
- The Computer Misuse Act 1990, listing cybercrime offenses which may be prosecuted in conjunction with violations under the Theft Act 1968 and the Fraud Act 2006;
- Official Secrets Act 1989.
In the United States, cybersecurity laws and regulations are enacted both at the federal and state level, while strict data protection rules have also been established for healthcare and banking.
The most prominent acts are the:
- Federal Computer Fraud and Abuse Act, 18 USC sect. 1030 is the primary statute providing for criminal and civil penalties;
- Electronic Communications Protection Act protects communications in storage and transit;
- Cybersecurity Information Sharing Act of 2015 (“CISA”);
- Gramm-Leach-Bliley Act of 1999 (GLBA);
- Health Insurance Portability and Accountability Act (HIPAA) of 1996.
What is the penalty for cybercrime?
Cyber sanctions in the UK depend on the type of threat or attack.
According to regulation 30 of The Cyber Regulations 2020, a person who commits an offense under any provision related to Finance or licensing offenses is liable:
- On summary conviction, to imprisonment for a term not exceeding 6-12 months or a fine (or both);
- On conviction on indictment, to imprisonment for a term not exceeding 7 years or a fine (or both).
A person who commits an offense against confidentiality is liable on the same sanctions, while the imprisonment term cannot exceed 2 years or a fine (or both).
In case of information offenses, cybercriminals are liable on summary conviction to imprisonment for a term not exceeding 6 months or a fine (or both).
As for the USA, the penalties for conspiracy to violate or for violations or attempted violations are imprisonment for not more than one year and/or a fine of not more than $100,000 ($200,000 for organizations) for the first offense.
For all subsequent convictions, fraudsters are liable to imprisonment for not more than 10 years and/or a fine of not more than $250,000 ($500,000 for organizations).
Cybercrime is an act perpetrated over the Internet, punishable by law. We can draw the following conclusions from this guide:
- Cybercrime consists of insults, slander and online threats, child pornography, fraud, internal security, piracy, etc.;
- Phishing is a method used by cybercriminals to access user data;
- Several countries are enforcing new laws and creating specific police units to tackle cybercrime.
Attacking a computer system through a virus such as ransomware is considered a cybercrime.
Therefore, the best alternative to getting a formal police report is contacting a company specializing in cybersecurity.
HelpRansomware helps you to recover data safely, in a short time and in full respect of your privacy. | <urn:uuid:c04a4fbc-371c-4473-b311-cc2eacb3a14d> | CC-MAIN-2024-38 | https://helpransomware.com/cybercrime/ | 2024-09-12T11:42:49Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651457.35/warc/CC-MAIN-20240912110742-20240912140742-00178.warc.gz | en | 0.912808 | 1,848 | 3.609375 | 4 |
Phish Scale: New method helps organizations better train their employees to avoid phishing
Researchers at the National Institute of Standards and Technology (NIST) have developed a new method called the Phish Scale that could help organizations better train their employees to avoid phishing.
How does Phish Scale work?
Many organizations have phishing training programs in which employees receive fake phishing emails generated by the employees’ own organization to teach them to be vigilant and to recognize the characteristics of actual phishing emails.
CISOs, who often oversee these phishing awareness programs, then look at the click rates, or how often users click on the emails, to determine if their phishing training is working. Higher click rates are generally seen as bad because it means users failed to notice the email was a phish, while low click rates are often seen as good.
However, numbers alone don’t tell the whole story. “The Phish Scale is intended to help provide a deeper understanding of whether a particular phishing email is harder or easier for a particular target audience to detect,” said NIST researcher Michelle Steves. The tool can help explain why click rates are high or low.
The Phish Scale uses a rating system that is based on the message content in a phishing email. This can consist of cues that should tip users off about the legitimacy of the email and the premise of the scenario for the target audience, meaning whichever tactics the email uses would be effective for that audience. These groups can vary widely, including universities, business institutions, hospitals and government agencies.
The new method uses five elements, each rated on a 5-point scale, that relate to the scenario’s premise. The overall score is then used by the phishing trainer to help analyze their data and rank the phishing exercise as low, medium, or high difficulty.
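Since the article only summarizes the scoring, the sketch below is a hypothetical illustration of how five premise-related ratings could be combined into a difficulty band. The element names, score range, and band cut-offs are assumptions, not NIST’s published Phish Scale values.

```python
# Hypothetical illustration: combine five 1-5 premise-alignment ratings into a
# difficulty band. Element names and band cut-offs are assumptions, not the
# published Phish Scale methodology.
ELEMENTS = ["mimics_workplace_process", "plausible_premise",
            "authority_or_urgency", "targets_current_events",
            "consistent_with_prior_emails"]

def difficulty(ratings: dict[str, int]) -> str:
    score = sum(ratings[e] for e in ELEMENTS)        # ranges from 5 to 25
    if score >= 18:
        return "high difficulty (expect more clicks)"
    if score >= 12:
        return "medium difficulty"
    return "low difficulty (clicks here are more concerning)"

example = dict(zip(ELEMENTS, [5, 4, 4, 3, 4]))
print(difficulty(example))   # high difficulty (expect more clicks)
```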
The significance of the Phish Scale is to give CISOs a better understanding of their click-rate data instead of relying on the numbers alone. A low click rate for a particular phishing email can have several causes: the phishing training emails are too easy or do not provide relevant context to the user, or the phishing email is similar to a previous exercise. Data like this can create a false sense of security if click rates are analyzed on their own without understanding the phishing email’s difficulty.
Helping CISOs better understand their phishing training programs
By using the Phish Scale to analyze click rates and collecting feedback from users on why they clicked on certain phishing emails, CISOs can better understand their phishing training programs, especially if they are optimized for the intended target audience.
The Phish Scale is the culmination of years of research, and the data used for it comes from an “operational” setting, very much the opposite of a laboratory experiment with controlled variables.
“As soon as you put people into a laboratory setting, they know,” said Steves. “They’re outside of their regular context, their regular work setting, and their regular work responsibilities. That is artificial already. Our data did not come from there.”
This type of operational data is both beneficial and in short supply in the research field. “We were very fortunate that we were able to publish that data and contribute to the literature in that way,” said NIST researcher Kristen Greene.
As for next steps, Greene and Steves say they need even more data. All of the data used for the Phish Scale came from NIST. The next step is to expand the pool and acquire data from other organizations, including nongovernmental ones, and to make sure the Phish Scale performs as it should over time and in different operational settings.
“We know that the phishing threat landscape continues to change,” said Greene. “Does the Phish Scale hold up against all the new phishing attacks? How can we improve it with new data?” NIST researcher Shaneé Dawkins and her colleagues are now working to make those improvements and revisions. | <urn:uuid:347a9a04-3375-42e5-a3f4-87f7e90afe63> | CC-MAIN-2024-38 | https://www.helpnetsecurity.com/2020/09/21/phish-scale-train-employees-avoid-phishing/ | 2024-09-13T17:54:43Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651535.66/warc/CC-MAIN-20240913165920-20240913195920-00078.warc.gz | en | 0.960068 | 837 | 2.515625 | 3 |
Market Trends of Agriculture in India
Cereals and Food Grains Leading the Agriculture Market in India
The widespread adoption of crop diversification by cereal and grain producers is a key driver in managing risk and maintaining soil sustainability. Furthermore, cereals like rice, wheat, sorghum, corn, millet, and barley contribute a higher degree of carbon fixation in the soil. Their dense, grassy vegetative cover reduces soil and wind erosion and provides resistance to biotic and abiotic stress. Crop rotation is also a highly economical activity, in addition to contract services and high value-added processing and packaging of agricultural products.
India is the world's second-largest producer of rice, wheat, and other cereals. The massive demand for cereals in the global market is creating an excellent environment for the export of Indian cereal products. According to the first estimate for 2020-21 by the Ministry of Agriculture of India, the production of major cereals like rice, maize, and bajra stood at 102.36 million tonnes, 19.88 million tonnes, and 9.23 million tonnes, respectively. The latest data on wheat production is available for the period of 2019-20, and it is estimated at 107.49 million tonnes (4th advance estimates).
India is the largest producer as well as the largest exporter of cereal products in the world. According to the Agricultural & Processed Food Products Export Development Authority (APEDA), India's export of cereals stood at USD 12,872.64 million during the year 2021-22. Rice (including Basmati and Non-Basmati) occupied the major share of India's total cereals export, at 75% in value terms, during the same period, whereas other cereals, including wheat, represented only a 25% share of total cereals exported from India. As per ITC TradeMap, the major export destinations for the year 2021-22 were Bangladesh, Iran, Saudi Arabia, Nepal, Vietnam, and Benin.
Increasing Demand for Vegetables
According to the Indian Council of Agricultural Research (ICAR), increasing awareness regarding the consumption of vegetables to meet various dietary requirements and nutritional needs has raised the demand for vegetables, consequently leading to an increase in the area of vegetable production. As reported by the Ministry of Agriculture and Farmers Welfare (India) in 2021, the per capita gross availability of vegetables in India increased from 388.7 grams per day in 2017-2018 to 400 grams per day in 2020-2021. This indicates an expanding supply and consumption of vegetables, which is expected to further drive demand for the market studied in the coming years.
In India, exotic vegetables such as mushrooms, green olives, fresh broccoli, and many other items have been gaining popularity in recent times among urban populations and gourmet hotels, restaurants, and catering food services, owing to being rich in essential nutrients and vitamins while low in calories.
On the supply side, to cater to this growing demand, farmers are growing a wide range of vegetables because vegetables are short-duration crops that have multiple harvests, resulting in a better cash flow for the farmers. In the year 2021, the Nilgiris district in Tamil Nadu produced the finest quality of lettuce, while freshly grown avocados can be found in Himachal Pradesh.
In February 2022, the Punjab Agricultural University (PAU), Ludhiyana, developed several varieties of strawberries, figs, date palm, grapes, broccoli, Chinese cabbage, celery, lettuce, sweet pepper, and baby corn, which can be cultivated by farmers for consumers as well as domestic consumption. Among vegetables, Palam Samridhi and Punjab Broccoli I are the varieties developed by the PAU. Saag Sarson and Chini Sarson-I varieties are recommended by the university. | <urn:uuid:932f8a8c-34ec-4898-872c-2142ed50d07b> | CC-MAIN-2024-38 | https://www.mordorintelligence.com/industry-reports/agriculture-industry-in-india/market-trends | 2024-09-18T15:36:39Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651899.75/warc/CC-MAIN-20240918133146-20240918163146-00578.warc.gz | en | 0.936708 | 792 | 2.640625 | 3 |
A new research paper by the London School of Economics (LSE) has found “the probability of cost effectiveness was low” in a high-profile UK trial of telehealth.
Previous research into different aspects of the same trial, known as the whole system demonstrator (WSD), has also expressed doubts about the remote monitoring of patients in their own homes.
The paper found that the incremental cost per quality-adjusted life year (QALY), a standard measure in this kind of analysis, was £92,000 ($140,000) for telehealth compared with standard care.
This figure is in excess of the cost effectiveness threshold of £30,000 which is set by the UK National Institute for Health and Clinical Excellence (NICE). The probability of telehealth being cost effective was only 11 per cent, said the LSE.
NICE typically rejects drugs or treatments with a QALY of more than £20,000-£30,000.
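To show how a cost-per-QALY figure like the one above is derived, the sketch below computes an incremental cost-effectiveness ratio (ICER) from per-patient costs and QALYs. The input numbers are invented for illustration and are not the trial’s actual data.

```python
# Hedged sketch: incremental cost-effectiveness ratio (ICER).
# ICER = (cost_new - cost_old) / (QALY_new - QALY_old)
# The figures below are invented for illustration, not WSD trial data.

def icer(cost_new, qaly_new, cost_old, qaly_old):
    return (cost_new - cost_old) / (qaly_new - qaly_old)

cost_telehealth, qaly_telehealth = 9_600.0, 0.615   # assumed per-patient values
cost_usual,      qaly_usual      = 8_680.0, 0.605

ratio = icer(cost_telehealth, qaly_telehealth, cost_usual, qaly_usual)
print(f"ICER: £{ratio:,.0f} per QALY gained")                    # ICER: £92,000 per QALY gained
print("cost effective at £30,000 threshold:", ratio <= 30_000)   # False
```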
The study covered 965 patients with long-term conditions, of whom 534 received telehealth equipment and support while 431 received usual care. It was published in the British Medical Journal.
Even if equipment costs fell by 80 per cent, telehealth would still be slightly more expensive than conventional care, the study said.
The results are not such a surprise, given one of the LSE’s researchers outlined early findings last summer. The £92,000 figure was first mentioned at the time.
Martin Knapp (pictured), professor of social policy at the LSE, told Reuters that the research did not mean telehealth was a waste of time but it did need to be targeted better. Better technology and greater scale might help, he added.
“We have got to find ways of better adjusting the equipment to suit the circumstances of the individual patient,” said Knapp.
Previous research has looked into other aspects of the same WSD trial. The most recent paper, which was published earlier in March, was led by City University and focused on whether telehealth can deliver psychological benefits to patients with chronic conditions.
The effect on remote monitoring on patients’ health-related quality of life was “weak or non-existent”, said the City University-led study.
Another study by the Nuffield Trust last year did find benefits from telehealth but only “modest” cost savings.
The UK’s WSD trial, and its subsequent analysis, has attracted interest around the world because of its scale. It involved thousands of participants when typically such trials involve smaller numbers. | <urn:uuid:93f3ee39-a181-4e2b-91ae-e8ba218fd7f9> | CC-MAIN-2024-38 | https://www.mobileworldlive.com/old_latest-stories/uk-study-finds-low-chance-of-cost-saving-with-telehealth/ | 2024-09-21T02:59:54Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701427996.97/warc/CC-MAIN-20240921015054-20240921045054-00378.warc.gz | en | 0.97498 | 531 | 2.53125 | 3 |
If you feel like you have too many browser tabs open at any given time, then you'll be happy to know that you can sometimes save certain browser tabs as a standalone application on your device. This will give them their own icon and make accessing them much easier than constantly navigating to them through your web browser.
When someone mentions cookies, people start paying attention. Chocolate chip, oatmeal raisin, snickerdoodles… Browser? While browser cookies aren’t the most scrumptious, they do need some attention. Nowadays, many websites you visit have a popup asking whether you want to allow cookies for that site, and knowing what you are agreeing to is important. In today’s blog, we will describe what cookies are, how they work, and why they can sometimes be better than cookies with chocolate chips.
It’s important to keep the software on your computer updated. If your operating system or web browser or some other important application is out of date, it could lead to things not working properly while also leaving you susceptible to threats. However, hackers are disguising malware to look like important web browser updates.
With technology being an integral part of our lives and society at large, cyberthreats continue to evolve and pose significant risks. One such threat that is on the rise is browser hijacking attacks. Let’s explore the dangers of these attacks, including the techniques employed by hackers, and how small and medium-sized businesses can protect themselves.
Since its domain was first registered on September 15, 1997, Google has exploded from a relatively simple search engine to the massive assortment of platforms and services that fall under the Alphabet umbrella. That being said, most people tend to think of very specific aspects of Google’s Search function… like the amusing Easter Eggs that the platform has become somewhat famous for.
There is a lot of misinformation and misperceptions out there related to network security, especially where small businesses are concerned. In particular, browser security is one aspect where many individuals’ knowledge simply falls flat, and they buy into myths that put their data at risk. Let’s clear up some of these misconceptions so you can go about your day in a more secure way.
One of the reasons that information technology keeps changing is to make things more convenient for the user. However, if this convenience comes at the sacrifice of your business’ cybersecurity, it just isn’t worth it. This is the crux of why we always recommend that any organization seeking to use password management invest in a reputable password management software, rather than the built-in capabilities of modern browsers.
A few weeks ago, Microsoft presented several of their latest projects at a live event. As expected, there was a lot of focus put on the new Windows operating system, Windows 10, in addition to their in-development browser Spartan; but what we didn’t expect Microsoft to show off was a slew of brand spanking new consumer technologies.
If you’re looking to maximize your productivity, then having the web-based resources you browse to everyday is a must. You can take this idea one step further by assigning a shortcut icon to a specific web page you frequent. Let’s discuss the process for how you can do this.
It probably isn’t hard to think of a time when you’ve stumbled across something that would be useful for work while you were doing some personal browsing. What if I told you there was an easy way to send a website to your browser to view later? Thanks to Google Chrome, this is the case.
How often have you been browsing the web on your phone, only to find something that would be legitimately useful for your work—maybe it was a tip you wanted to try out, or a bit of information that would be helpful to know—so you wanted to be able to access it from your workstation? There’s actually a very easy way to make this happen, thanks to the multi-platform nature of the Google Chrome browser.
Google Chrome, used by about two-thirds of Internet users, is an infamous battery hog…or at least, that used to be the case. Google recently released Chrome 108, and with it, a feature called “Energy Saver.” Let’s talk about how to enable it.
It’s easy to open up far more tabs on your web browser than you need, especially when so many tools are cloud-based. If you find yourself in need of a quick way to close all other tabs besides a handful or so, we’ve got just the tip for you. You can close all open tabs to the right of your preferred window, or you can close all tabs outright.
If you want to optimize productivity, then you'll want to take a look at the startup page for your Google Chrome web browser. If you change this setting, you can shave off the countless minutes you'd otherwise spend every week fumbling around trying to find your favorite or most frequently visited page.
How often does this scenario happen to you? You’re going about your workday and are being quite productive, when all of a sudden you close the wrong tab in your web browser, putting an end to your productivity. This isn’t crippling downtime or anything, but it’s an inconvenience that we know you can do without. Thankfully, modern web browsers let you reopen closed tabs or windows to get back to where you left off.
Goodbyes are always painful, but we suspect that this one for Microsoft’s Internet Explorer will be more on the bittersweet side of things. Long a staple in the web browsing world, Internet Explorer has largely been removed from devices running Windows 10 and Windows 11. Let’s take a moment to discuss the approach Microsoft is using to slowly phase Internet Explorer out of the web browsing space.
When you go to a website you have never been to before, there is often a splash page that asks if you would like to accept cookies. It doesn't mean you are getting a care package; it just means that you accept a formal interaction with the website you're on. Let's take a look at browser cookies.
Bookmarks are an essential part of being productive with your Internet browser, but what happens when you switch to a different one, like Google Chrome? Do you have to manually add all of your bookmarks back to the browser? Nope! Let’s go over how you can import your bookmarks directly to Google Chrome and save some time.
The Internet browser is easily one of the most-used applications in this day of cloud-hosted resources and online content… but for all that use, is it also one of the most-secured applications? In some ways, yes… but there are always a few extra steps that can help you improve your protections.
Most accounts these days require a password of some sort, and as such, the average user has countless codes that need to be kept both secure and top-of-mind. Some web browsers have built-in password management tools to help make them more user-friendly, but with so much convenience involved, one has to ask whether or not these built-in management tools are as secure as they should be.
Privacy is a sensitive subject nowadays, especially online. Regardless of the browser you have elected to use, using it properly will have a large impact. Let's go over a few settings that you and your team can use to help secure your business and its resources.
We’ve not been shy about promoting the use of VPNs (virtual private networks) as a means of protecting your security while you are online. However, we wanted to take a bit of time to specify what a VPN can - and cannot - do to help you.
Data and cybersecurity are hard enough without vulnerabilities coming from one of your most utilized applications. That's the scenario now that a bug has been found in some of today's most popular Internet browsers, putting billions of people's data security at risk. Let's take a brief look at the vulnerability and how you can ensure that it won't be a problem for you or your company.
Everyone knows some keyboard shortcuts. The normal ones that allow you to copy, cut, paste, lock your computer, select all, and more. Today, we thought we would tell you about some browser-based shortcuts that can definitely save you time and effort.
If you consider it, it’s amazing how much trust people have in Internet-based companies. They not only believe that these companies will fulfill their expectations, but that they will work to provide protection for some of their most valuable and sensitive information. Let’s take a look at some of the data collection practices that companies use and what they do with that data.
Google Chrome is rolling out a neat little update for everyone over the next week (it may already be out for some users by the time this posts). It’s a feature that I know I’m personally going to love, and I didn’t even realize how badly I needed it until now.
Let’s take a look!
Have you ever been glued to the computer monitor while compulsively hitting your browser’s refresh button? You might have done this while waiting for an online sale to drop, or while waiting for someone to respond in an online forum to your witty comment. Did you know that you can set your Google Chrome browser to refresh itself automatically?
Google Chrome is the most-used browser in the world by a wide margin, which is part of the reason that it is so incredible that many people don’t know a lot about its built-in features. While we certainly can’t go through all of them in a single blog, we can offer a few tips describing the best of them.
Navigation is important for any computing system--particularly the Internet, where there are countless destinations. The Internet is made up of various web pages, images, videos, and many other valuable little bits of content that are all connected by a web of links. These links are the cornerstone of the Internet, and we'll explain the details of how they work and what they are.
Does it feel like your web browser is running slower than it should? Or is your browser prone to freezing up and crashing? If so, there’s one easy troubleshooting tip that you’ll want to try: clearing the cache.
The use of a browser’s tabs has become the default way that many people move around the web. In fact, there’s a good chance that this blog is just one of many browser tabs you have queued right now. However, there’s an equally good chance that you aren’t using browser tabs to their full potential. For this week’s tip, we’ll explore some of the features that browser tabs offer.
Everyone has accidentally closed an important web browser tab before they were finished with it. What can you really do about it, though? You might expect that you have to search for the page again, but there’s a much easier way to do it. In your Google Chrome browser on a PC or smartphone, you can reopen closed tabs relatively easily.
Whenever you download a file from the Internet, the file will, by default, go to an aptly-titled folder in Windows called Downloads. Unless you change the default settings, your files will always be saved here. But what if you want to make it so that your downloads go somewhere else? You can accomplish this pretty easily. We’ll walk you through how to do it for some of the most popular browsers, including Google Chrome, Microsoft Edge, and Mozilla Firefox.
If you use the Internet every single day, you’ll start to realize that you can use it more effectively for achieving your goals. In cases like this, it’s important to look at ways you can improve your overall use of the Internet, as it’s the key way you access important information, applications, and contacts. Here are some day-to-day tips that you can use to help improve your mastery of the Internet.
Your computer is mostly just a machine used to accomplish specific tasks. This doesn’t mean that you shouldn’t know all of the advanced tips that help you get the most out of it, though. Here are some of the best shortcuts that you can use to take full advantage of your workstation.
It’s no secret that, if given the choice, many users would elect to use Google Chrome over Microsoft Edge. To remedy this, Microsoft has adjusted Edge to be more customizable to the user’s preferences. For this week’s tip, we’ll look at how these features and settings can be set up.
For most users the Internet browser is one of the most utilized applications on their computer or mobile device. With the influx of aggressive online threats, it is mighty useful to know which Internet browser is the best for keeping your data, identity, and network secure. Today, we will take a look at the five most popular Internet browsers found on desktop and laptop computers and decipher which are the most reliable.
Browser cookies might not sound delicious, but they are a particularly important part of your browser’s technology. Do you actually know what they do, though? Today’s tech term will explain just what these cookies are, as well as the purpose they serve for your organization.
The Internet is home to a vast amount of knowledge. Undoubtedly you’ll find yourself revisiting certain sites more often than others to take advantage of the information contained within. Thankfully, the bookmark system is a great way to make this happen, giving users an easy and efficient way to navigate back to frequently-visited websites.
When you are surfing the web, do you know if you are secure? Typically, your browser will tell you when a site is secure or not. This is especially important if you are putting in sensitive information, like passwords or credit card information. Google Chrome is stepping up its game to keep users safe.
Let’s be honest - not all of us have the best memories. This makes the ability for many browsers to remember our passwords seem like a godsend. However, is this capability actually a good thing for your cybersecurity? The answer may not surprise you.
In the fairy tale of Hansel and Gretel, the titular characters decided to leave a trail of breadcrumbs behind them, so they could find their way back home. While this strategy didn’t work out very well for the siblings, the same concept is used in computing today. We even refer to it as breadcrumb navigation in honor of the German fairy tale.
How to Anonymously Browse the Internet
Some places on the Internet are only suitable for secret browsing. Maybe you're shopping for a present and don't want your links to show in your browsing history, or maybe you don't want the customized ads to reflect a private interest. Whatever your reason is for wanting to anonymously browse the web, here's how you do it.

A good business practices extreme caution when using the Internet, thanks to hackers using any means possible to unleash threats against organizations of all sizes. You teach your employees how to avoid threats and to avoid suspicious websites, but what if that's not enough to keep hackers out of your network infrastructure?
The Internet can be a dangerous place thanks to the anonymity it provides. Yet, this anonymity is limited, especially if you take part in questionable Internet browsing activities. Take, for instance, the hack of Ashley Madison, a website dedicated to cheating on one's spouse. This July, a hacker group called the “Impact Team” infiltrated the site and is now threatening to expose these cheaters.
Agent Chrome is a pretty well-known guy in Google City. In fact, you could say he was the top of his class at Browser University, and everybody who is anybody knows who he is. When on the job, Agent Chrome sometimes needs to lay low and avoid the prying eyes of the masses around him. And this particular operation, rightfully dubbed “Incognito,” is one such occasion.
Google is the world’s most popular search engine, and it’s grown so outrageously popular that there’s even a verb named after it; “to google” is to search for something on the Internet using Google. However, to this day, Google continues to surprise the public with fun Easter eggs, hidden functions which provide a level of entertainment to the user.
In addition to Microsoft's upcoming new operating system, Windows 10, the software company has revealed that there is a new web browser in production. This new browser, code-named “Spartan,” is expected to have similar functionality to Mozilla's Firefox and Google Chrome, and will be released alongside Windows 10.
It’s a good feeling to have your workstation’s web browser set up exactly how you like it. With all of your favorite websites bookmarked and your most-visited sites quickly appearing in a drop down menu as soon as you type in a letter or two, you’re able to efficiently navigate the Internet and quickly find exactly what you’re looking for. But what happens to your bookmarks when your hard drive crashes?
Sometimes, you just need to take a closer look at the Internet. For this article, we are not referring to a search engine; instead, we're talking about literally taking a closer look by using your Web browser's zoom feature, and it's as easy as giving your mouse scroll wheel a spin.
Is your web browser's home page looking drab and boring? Do you wish that you could change it to something fresh and exciting but you just don't know how to do it? Introducing Home Page Settings! By using Home Page Settings, you can give your web browser that fresh look you have always wanted, and the best part is that it's free!
Superintelligence promises incredible advancements and solutions to the world’s biggest challenges, yet it also presents an ominous threat to society. As the lines between innovation and catastrophe blur, understanding the risks of AI is crucial. Read on for recommendations for moving forward in this uncharted territory.
Generative Artificial Intelligence’s emergence has led enterprises, tech vendors, and entrepreneurs to explore many different use cases for this disruptive technology while regulators seek to comprehend its wide-ranging implications and ensure its responsible use. Learn how enterprises can leverage GAI in our webinar, Welcoming the AI Summer: How Generative AI is Transforming Experiences.
Concerns persist as tech visionaries warn that AI might surpass human intelligence by the end of the decade. Humans are still far from fully grasping its potential ramifications and understanding how to collaborate with the technology and effectively mitigate its risks.
In a groundbreaking announcement in July, OpenAI unveiled that it has tasked a dedicated team with creating technologies and frameworks to control AI that surpasses human intelligence. It also committed to dedicating 20% of its computing resources to address this critical issue.
While initially exciting, the prospect of superintelligence also brings numerous challenges and risks. As we venture into this uncharted territory, understanding AI’s evolution and its potential implications on society becomes essential. Let’s explore this further.
Types of AI
AI can be broadly categorized into the following three types:
- Narrow AI
Narrow AI systems are designed to excel at specific tasks, such as language translation, playing chess, or driving autonomous vehicles. Operating within well-defined boundaries, they cannot transfer knowledge or skills to other domains. Common examples include virtual assistants like Siri and Alexa, recommendation algorithms on streaming platforms, and image recognition software.
- General AI
General AI possesses human-like cognitive abilities and can perform a wide range of intellectual tasks across domains. Unlike narrow AI, general AI has the potential to learn from experiences and apply knowledge to different scenarios.
- Super AI (Superintelligence)
Super AI represents a hypothetical AI that surpasses human cognitive abilities in all domains. It holds the promise to solve complex global challenges, such as climate change and disease eradication.
Tech thinkers across the globe have raised an alarm
Amidst growing concerns about the risks of superintelligence, the departure of Geoffrey Hinton, known as “the Godfather of AI,” from Google was one of the most significant developments in the AI realm. Hinton is not alone in his concern about AI risks. More than 1,000 tech leaders and researchers have signed an open letter urging a pause in AI development to give the world a chance to adapt and understand the current developments.
These leaders emphasized that development should not continue until we are certain that the outcomes will be beneficial and that the AI risks are fully known and can be managed.
In the letter, they highlighted the following five key AI risks:
- Machines surpassing human intelligence: The prospect of machines becoming more intelligent than humans raises ethical questions and fears of losing control over these systems. Ensuring that superintelligence remains beneficial and aligned with human values becomes crucial
- Risks of “bad actors” exploiting AI chatbots: As AI technologies evolve, malicious actors can potentially exploit AI chatbots to disseminate misinformation, conduct social engineering attacks, or perpetrate scams
- Few-shot learning capabilities: Superintelligent AI might possess the ability to learn and adapt rapidly, presenting challenges for security and containment. Ensuring safe and controlled learning environments becomes essential
- Existential risk posed by AI systems: A significant concern is that superintelligent AI could have unintended consequences or make decisions that could jeopardize humanity’s existence
- Impact on job markets: AI’s rapid advancement, especially superintelligence, might disrupt job markets and lead to widespread unemployment in certain sectors, necessitating measures to address this societal shift
As we have already seen some of the risks associated with this technology materialize, its continued advancement needs to be approached cautiously.
Recommendations for moving forward
To mitigate AI risks and the risk of superintelligence while promoting its development for positive societal outcomes, we recommend enterprises take the following actions:
- Create dedicated teams to monitor the development – The government needs to appoint relevant stakeholders in regulatory positions to monitor and control these developments, particularly to protect the large population that does not understand the technology from its potential consequences
- Limit the current development – As the letter suggested, the government should implement an immediate moratorium on developing and using certain types of AI. This pause would give everyone enough time to understand the technology and associated risks better. While Italy has used its legal architecture to temporarily ban ChatGPT, efforts like this will not have a significant impact if carried out individually
- Define policies – Regulatory agencies should start working on developing policies that direct researchers on how to develop the technology and define key levels for alerting regulatory agencies and others
- Promote public awareness and engagement – Promoting awareness about AI and superintelligence is crucial to facilitate informed debates and ensure the technology aligns with societal values
- Form international collaborations – Isolated initiatives won’t help the world. Larger collaboration among governments to define regulations and share knowledge is needed
While new technologies have always brought changes to the existing norms, disrupted established industries, and transformed societal dynamics, ensuring these advancements are beneficial to a larger audience is essential.
Enterprise Switches: Everything You Should Know
Enterprise switches, as the name implies, are typically deployed in networks with a large number of switches and connections; they are also called campus LAN switches. The term has nothing to do with a particular type of switch but rather describes the network environment for which the switches are designed. This article will look at the three tiers of enterprise switches and the differences among enterprise switches, data center switches, and switches for home network use.
Enterprise Switches in Hierarchical Internetworking Design
A three-tier hierarchical internetworking design is commonly used in today's enterprise networks, in which the LAN is divided into three layers: the core layer, the distribution layer, and the access layer. Here is a hierarchical model example using FS network switches.
Figure 1: Three-tier enterprise network model
Each layer has its own features and functionality, so the devices deployed at each of the three layers differ as well.
Core switches are high-capacity backbone switches generally positioned in the center of the network core layer, serving as the gateway to a wide area network (WAN) or the Internet.
Distribution switches, as the bridge and link between the core layer switches and the access layer switches shown in the figure above, are also called aggregation switches. They ensure that packets are properly routed between subnets and VLANs.
Access switches, also referred to as edge switches, sit at the lowest and most fundamental layer of the three-layer hierarchical internetworking model. They facilitate the connection of end-node devices such as APs and wired devices to the network.
Switch Comparison Items | Core Switch | Distribution Switch | Access Switch |
Working layer | Core layer | Distribution layer | Access layer |
Features | Layer 3 switches, highest reliability, functionality and throughput | Layer 3 switches, higher reliability, functionality and throughput | Layer 2 switches, relatively lower reliability, functionality and throughput |
Numbers of switches in a network | The least (normally one or two) | Usually between the number of the other two | The most |
Main functions supported | Very high forwarding rate, QoS, redundant components, etc. | Packet filtering, QoS, and application gateways, etc. | Port security, VLANs, Fast Ethernet/Gigabit Ethernet, PoE, etc. |
Cost | Highest | Higher | Relatively Lower |
Example | 25G switch, 40G switch, 100G switch | 10G switch | Gigabit switch |
Data Center Switch vs Enterprise Switch vs Home Network Switch
The text above gives an overview of enterprise switches. This part introduces them further by comparing them with data center switches and home network switches. Switch vendors provide network switches designed for different network environments, such as FS N series switches designed for high-performance data center environments, FS S3910 series switches ideal for SMBs, enterprise, and campus networks, and others for home network use. The following comparison will help you learn more about enterprise switches.
Data Center Switch
As today's data center architecture moves from a hierarchical model to a leaf-spine model, in which spine switches serve as the core of the network and leaf switches deliver connection points for servers, data center switches typically feature the high port density and high bandwidth required to handle both north-south traffic (traffic between users outside a data center and the data center servers, or traffic from data center servers to the Internet) and east-west traffic (traffic between servers within a data center).
Unlike data center switches, enterprise switches connect end users to the network regardless of the devices they use, such as PCs, laptops, and printers. Enterprise switches are thus required to track and monitor users and endpoint devices to protect every connection point from security issues. To suit specific network environments, some enterprise switches have particular capabilities such as PoE. With PoE technology, enterprise network switches are able to manage the energy consumption of the many end devices connected to them. To fully grasp the intricacies and the transformative journey of enterprise switches, I recommend delving into the article "The Evolution of Enterprise Switches: A Historical Perspective."
Home Network Switch
Compared with data center and enterprise networks, the amount of traffic in home networks is not high, even though the scale of home networks varies, which means the switch requirements are much lower. In most cases, the switch is only responsible for expanding network connections and transferring data from one device to another, without the need to handle data congestion. Unmanaged plug-and-play switches are typically used for home networks as a perfect solution due to their simple, setup-free management and lower cost than managed switches. For a SOHO office with fewer than 10 users, a single 16-port Ethernet switch is generally enough. But for those tech geeks who love to build fast and secure home networks, managed switches are often preferred.
Should I Use Data Center Switches or Enterprise Switches in the Enterprise Network?
If you are managing a medium to large enterprise network, you may face the dilemma of whether to use data center switches in it. In fact, for a large enterprise network, apart from the basic access layer connections, the redundancy at uplink levels such as the distribution and core layers should be much higher than at the access layer, which means high availability is the first thing you should consider when designing an enterprise network. To cope with a large amount of traffic while keeping the risk of failure minimal, two or more aggregation or core layer switches can be deployed in each layer so that a failure of one switch will not disrupt the others.
When an enterprise has a complex network with a large number of servers to manage, network virtualization is needed to optimize the speed and reliability of the network. Compared with traditional LAN enterprise switches, data center switches with richer feature sets will help deploy high-density virtual machine environments successfully and better handle the east-west traffic that grows with virtualization.
String data type (LotusScript® Language)
Specifies a variable used to store text strings, using the character set of the IBM® software application that started LotusScript®. All strings are stored internally as Unicode characters. Strings are translated between platform-specific characters and Unicode characters during I/O operations.
The String suffix character for implicit data type declaration is the dollar sign ($).
The declaration of a string variable uses this syntax:
Dim varName As String [* num]
The optional num argument specifies that varName is a fixed-length string variable of num characters. A fixed-length string variable is initialized to a string of null characters (the character Chr(0)).
When you assign a string to a fixed-length string variable, LotusScript® truncates a longer string to fit into the declared length. It pads a shorter string to the declared length with trailing spaces.
Fixed-length strings are often used in declaring data structures for use in file I/O or C access.
An implicitly declared String variable is always a variable-length string variable.
Variable-length strings are initialized to the empty string ("").
LotusScript® aligns variable-length String data on a 4-byte boundary. In user-defined data types, declaring variables in order from highest to lowest alignment boundaries makes the most efficient use of data storage space. Fixed-length strings are not aligned on any boundary.
You have a photo of a red car on a mountain road. You need to select the car to use with a different background.
Which tool is the quickest and most effective one for selecting the car?
A. The Quick Selection tool
B. The Magic Wand tool
C. The Selective Color command
D. Quick Mask Mode
Correct Answer: A
You're using a photo of a leafy green tree against a light blue sky as an illustration for a magazine article.
You want to make the sky more blue.
Which is the best way to select the sky, including the sky between the leaves?
A. Use the Magic Wand tool to select most of the sky, and then choose Select > Similar.
B. Use the Quick Selection tool to select most of the sky, and then choose Select > Grow.
C. In Quick Mask Mode, paint the sky between the leaves.
D. Use the Magic Wand tool to select most of the sky, and then use the Patch tool to select the sky between the leaves.
Correct Answer: A
You've spent a lot of time making a complex selection.
How can you retain the selection for future use? (Choose two.)
A. Choose Select > Save Selection.
B. Choose Select > Similar Layers.
C. In the Channels panel, click Save selection as channel.
D. In the Channels panel, click Load channel as selection.
E. In the Paths panel, click Make work path from selection.
Correct Answer: AC
You made a selection with the Elliptical Marquee tool. You want to move the selection boundary to a different part of the image.
How can you move the selection without moving or changing the image? (Choose two.)
A. With the Elliptical Marquee tool, click and drag inside the selection boundary.
B. With the Polygonal Lasso tool, click and drag inside the selection boundary.
C. With the Move tool, click and drag inside the selection boundary.
D. With the Move tool selected, press the arrow keys on your keyboard.
E. With the Move tool selected, hold down the Shift key and drag.
Correct Answer: AB
You've made an initial selection around an egg in an image, using the Elliptical Marquee tool. Which command should you use to reshape the selection so it better fits the egg?
A. Select > Transform Selection
B. Edit > Free Transform
C. Edit > Transform > Distort
D. Edit > Transform > Skew
E. Select > Modify > Border
Correct Answer: A
QUESTION 6
You have a photo of a woman in front of a green background. You've made a good selection around her
hair, but you still see a fringe of green from the background at the edges of the hair.
Which is the best way to minimize that fringe?
A. Select Decontaminate Colors in the Refine Edge dialog box.
B. Use a Hue/Saturation Adjustment layer to reduce saturation.
C. Choose Select > Transform Selection, and contract the bounding box.
D. Choose Image > Adjustments > Replace Color.
Correct Answer: A
QUESTION 7
You have a photo of a woman against a blue sky. You want to select the woman to use against another background.
Which feature is most likely to help you create a selection that captures both the wispy strands of her hair
and the smooth edge of her skin?
A. The Refine Edge dialog box
B. Soft Light blending mode
C. The Magic Wand tool
D. Blend If sliders
Correct Answer: A
QUESTION 8
You want to print a digital photograph on a professional desktop inkjet printer. You have a custom ICC profile for your printer, paper, and ink combination. In the Color Management area of the Print dialog box, which option will allow you to choose your custom ICC profile for use in printing?
A. Photoshop Manages Colors
B. Printer Manages Colors
D. Proof Setup
Correct Answer: A
QUESTION 9
You want to print a photograph from Photoshop to an inkjet printer so that the print simulates how it will be
reproduced on a commercial printing press.
Which option in the Print dialog box should you select?
C. Match Print Colors
D. Show Paper White
Correct Answer: A
QUESTION 10
Which statement best describes the Proof Colors feature in Photoshop?
A. It displays an on-screen preview of how your document s colors will look when reproduced on a particular output device.
B. It can print a contact sheet that includes your currently open images or the currently selected colors in the Swatches panel.
C. When active, it displays a gray overlay indicating colors that are out of gamut.
D. It is used for visually calibrating and testing the accuracy of a monitor and monitor profile.
Correct Answer: A
The Morpheus processor has caught the attention of the cybersecurity community thanks to its advanced security measures, which have allowed it to withstand more than 500 hacking attempts, all of which proved useless. This processor is able to constantly rewrite its architecture, making it impossible for threat actors to exploit vulnerabilities typical of conventional x86 processors, including the Spectre and Meltdown flaws.
Morpheus was developed as part of a DARPA-funded project and has undergone some 580 security tests in which researchers try to hack a database by injecting code into the underlying system. Researchers have spent more than 13 thousand hours trying to hack this system without success.
Todd Austin, a professor of computer science at the University of Michigan, says: “The approach to addressing each security flaw individually is a lost battle, developers create code at incredible speed, and as long as this process continues to advance new vulnerabilities will continue to appear frequently.” Morpheus addresses a novel approach, because even if a threat actor finds a security flaw, the information needed for its exploitation could disappear within seconds.
As for its technical specifications, Morpheus uses the gem5 simulator on a Xilinx FPGA and simulates a 4-stage in-order MinorCPU core running at 2.5 GHz with 32 KB L1i and 32 KB L1d caches. The L2 cache was 256 KB.
Austin mentions that Morpheus was developed with the aim of creating an implementation that is difficult to hack with any exploit focused on known vulnerabilities, which made it necessary to hide critical information from attackers while maintaining the integrity of the programmer's tasks. But how was this goal achieved?
Developers chose to hide a data class known as “undefined semantics,” which are pieces of information that the end user or developer does not need to know to operate a system. Austin compared this process to the action of driving a car: “To drive a vehicle we just need to know how to handle the steering wheel, gear lever and pedals; we shouldn’t even know the horsepower or the slightest details of the engine processes.”
Morpheus can encrypt memory pointers every 100 milliseconds, as many times as needed. This constant re-encryption of information prevents threat actors from having the time to launch successful attacks, because by the time they can inject an exploit, Morpheus's configuration will have already changed.
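To make the idea of constantly re-keyed data more concrete, here is a minimal, purely illustrative Python sketch of a "moving target" lookup table. It is not how the Morpheus hardware works, since the real defense operates in silicon beneath the instruction set, but it shows why a value leaked by an attacker quickly goes stale once the key rotates. Only the 100 ms period is taken from the description above; the class and method names are hypothetical.

```python
import os
import threading
import time

class MovingTargetTable:
    """Toy illustration: values are stored XOR-masked with a key that is
    re-drawn on a fixed period, so a leaked masked value quickly stops
    matching what the running system actually uses."""

    def __init__(self, period_s=0.1):  # 0.1 s mirrors the 100 ms churn described above
        self._key = int.from_bytes(os.urandom(8), "big")
        self._table = {}  # name -> masked value
        self._lock = threading.Lock()
        self._period = period_s

    def store(self, name, value):
        with self._lock:
            self._table[name] = value ^ self._key

    def load(self, name):
        with self._lock:
            return self._table[name] ^ self._key

    def _churn_once(self):
        # Re-mask every entry under a freshly drawn key.
        with self._lock:
            new_key = int.from_bytes(os.urandom(8), "big")
            self._table = {k: (v ^ self._key) ^ new_key for k, v in self._table.items()}
            self._key = new_key

    def start_churn(self):
        def loop():
            while True:
                time.sleep(self._period)
                self._churn_once()
        threading.Thread(target=loop, daemon=True).start()
```

In this toy model, an attacker who copies a masked entry cannot reuse it after the next churn, because the stored values no longer correspond to the key that was in force when the copy was made.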
Developers believe that Morpheus has already passed all the tests needed to demonstrate its anti-hacking capability, so they already plan to turn it into a business effort for the benefit of individual organizations and users, although this could take months or even years. To learn more about information security risks, malware variants, vulnerabilities and information technologies, feel free to access the International Institute of Cyber Security (IICS) websites.
He is a cyber security and malware researcher. He studied Computer Science and started working as a cyber security analyst in 2006. He is actively working as a cyber security investigator. He has also worked for different security companies. His everyday job includes researching new cyber security incidents, and he has a deep level of knowledge in enterprise security implementation.
For organizations that want to survive and thrive in the era of big data, data integration is a critical component for success. And as the number of data sources multiplies each year, the volume of data organizations are dealing with gets bigger, too. In order to capture value from all these sources, the data needs to be integrated.
To understand how to begin a large-scale data integration initiative, it is necessary to understand what big data is and what it means to integrate big data.
What Is Big Data?
Big data refers to a data set that is too large and complex to be useful without being processed by software. It is usually characterized by the three V’s: volume, variety, and velocity.
Volume refers not only to the amount of data that has been collected, but also to the number of data sources.
Velocity refers to the rate at which data is captured by the many data sources. For big data, this means data that is being captured at a rate too fast for it to be useful without automated analysis.
Variety refers to the variable and inconsistent formats in which the data is being captured. When data is captured in multiple formats, it has to be normalized in order to be useful in conjunction with the rest of the data that has been collected.
What Is Big Data Integration?
Big data integration is the process of connecting every data source to a central platform that provides users with a single, streamlined visualization of relevant data, trends, and other insights. Data sources that feed into a big data integration platform can include CRMs, ERPs, marketing automation software, factory floor sensors, and IoT devices.
Another key aspect of big data integration is the automation of processes. By removing the need for human intervention, the central platform through which all data sources are integrated can be used to automate actions in one application or device based on data received from another.
Big data integration also enables a dismantling of silos within an organization. The centralized nature of the data and the ability for any system to access the organization’s full data set means every department has immediate access to whatever information they need, whenever they need it. Rather than submitting requests for data that will likely be outdated by the time it is received, a user can access the data directly.
Big Data Integration Best Practices
Boomi Master Data Hub (MDH) ensures data quality by creating golden records, which are single versions of truth for data entities, consolidating and cleansing data from multiple sources. Four MDH tools enable high-quality big data integration:
- Security and Access Control are enforced through role-based access to sensitive data and strict authentication mechanisms with robust administration.
- Data Compatibility is maintained by the platform’s ability to handle various data formats and structures, making it easy to integrate disparate systems.
- Staging Areas in Master Data Hub provide a space to test and validate source data before applying it to your model.
- Data Governance is enabled through tools for data stewardship, ensuring compliance with internal and external policies and regulations.
How to Integrate Big Data with iPaaS
Integration is the cornerstone of generating the efficiencies and insights organizations need to excel as information proliferates around them. Integration using an iPaaS solution provides a flexible and high-performance infrastructure that can tackle immediate needs and also grow as the business builds its big data capabilities.
The iPaaS integration process works like this:
- The entire collection of data sources is fed into a master server.
- The data is transformed into a uniform data set so it is readable by all other endpoints.
- The data set is stored in a centralized location.
- Additional data sources are added without the need to create custom connectors.
Once this process is complete, users can access whatever data they need through a customizable dashboard. The big data stored within the iPaaS system can be analyzed and acted on automatically by applications integrated into the platform or manually by users via the dashboard or by other means.
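As a rough illustration of the "transformed into a uniform data set" step, the sketch below maps records from two hypothetical sources onto a single canonical schema. The field names and mappings are invented for this example and are not taken from Boomi or any other product.

```python
# Hypothetical canonical schema and per-source field mappings.
CANONICAL_FIELDS = ["customer_id", "email", "amount_usd", "timestamp"]

FIELD_MAPS = {
    "crm": {"customer_id": "contact_id", "email": "email_address",
            "amount_usd": "deal_value", "timestamp": "modified_at"},
    "erp": {"customer_id": "acct_no", "email": "contact_email",
            "amount_usd": "invoice_total", "timestamp": "posted"},
}

def normalize(record, source):
    """Map one source record onto the shared schema used by every endpoint."""
    mapping = FIELD_MAPS[source]
    return {field: record.get(mapping[field]) for field in CANONICAL_FIELDS}

crm_row = {"contact_id": "C-1001", "email_address": "ana@example.com",
           "deal_value": 250.0, "modified_at": "2021-06-01T12:00:00Z"}
print(normalize(crm_row, "crm"))
# {'customer_id': 'C-1001', 'email': 'ana@example.com',
#  'amount_usd': 250.0, 'timestamp': '2021-06-01T12:00:00Z'}
```

Once every source is expressed in the same shape, downstream applications and dashboards can consume the data without knowing which system it originally came from.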
Boomi Powers Big Data Integration
With Boomi’s iPaaS solution, you can quickly and easily meet any data integration need, from the smallest data challenge to the largest big data initiative.
To learn how Boomi can help your organization capitalize on big data, get our eBook: The State of DataOps
Most of us have heard of phishing: we may get an email supposedly from the CEO of the company we work for demanding we transfer some money. As it's the boss, and we are human and don't always react calmly when our boss aggressively demands we do something, we may well comply. But these days, more people are aware of the danger — and are likely to check the authenticity of the email. Suppose, however, we get a phone call apparently from the boss, complete with the cadences of the boss's voice that we are familiar with — we are far less likely to be suspicious. But now, in a variation on the deepfake, it has been reported that AI has been used to scam an organisation out of money by impersonating the voice of a company's chief executive. So what is the answer? Can AI or blockchain be used in the battle against deepfake? Or does it boil down to staff training? Information Age spoke to three experts.
According to a report in the Washington Post, criminals used software applying AI to scam a UK energy company out of $220,000 or £194,000. The CEO of the company thought he recognised the voice of the chief executive of the parent company — and duly transferred the money he was asked to transfer.
Another recent example of deepfake was less serious in itself, but had more serious implications. A YouTube creator going by the name of Ctrl Shift deepfaked a scene from the AMC TV series Better Call Saul with the voices of Donald Trump and his son-in-law Jared Kushner.
How can organisations respond to the threat of deepfake?
Dr Alexander Adam explained: “The human ear is sensitive to sound waves extending over an impressively large spectrum of frequencies, so generating human-quality speech requires the algorithm to correctly predict the sound wave thousands of times per second. By comparison, the human eye can only perceive data at around 30 frames per second. This means in general, small incorrectness in video deepfake is less noticeable than in its audio counterpart.”
AI ethics and how adversarial algorithms might be the answer
Training staff for Deepfake
He said: “We will see a huge rise in machine-learned cybercrimes in the near future. We have already seen Deepfake videos imitating celebrities and public figures, but to create convincing materials, cyber-criminals use footage that is already available in the public domain. As computing power increases, we are starting to see this become even easier to create, which paints a scary picture ahead.
“To help reduce these types of risks, companies should start by raising awareness and educating their employees, then introduce a second layer of protection and verification, one that would be hard to spoof, like a single-use password generator (OTP devices). Two-factor authentication is a powerful, inexpensive and simple technique to add an extra layer of security to protect your money from going into a rogue account.
“Before you know it, deepfake will be more convincing than ever, therefore companies need to consider investing in deepfake detecting software sooner rather than later. However, counter software is never developed that fast, so companies should focus on training their employees rather than just rely on software.”
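For readers curious about what a "single-use password generator" actually produces, here is a minimal sketch of a time-based one-time password (TOTP, as standardized in RFC 6238) using only the Python standard library. The shared secret shown is a made-up example value; in a real deployment the secret is provisioned on the employee's hardware token or authenticator app and the server verifies the submitted code.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, digits=6, period=30):
    """Time-based one-time password (RFC 6238) with HMAC-SHA1."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period            # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Example with a made-up shared secret.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code changes every 30 seconds and is derived from a secret that never travels with the request, a convincing voice on the phone is not enough on its own to authorize a transfer.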
AI in cyber security: a help or a hindrance?
Blockchain response to deepfake
Blockchain expert Kevin Gannon, blockchain tech lead and solutions architect at PwC, said: “When it comes to the area of deepfake, emerging technology like blockchain can come to the fore to provide some levels of security, approval and validation. Blockchain has typically been touted as a visibility and transparency play, where once something is done, the who and when becomes apparent; but it can go further.
“When a user who has a digital identity wants to do something — they could be prompted for proof of their identity before access to something (like funds) can be granted. From another angle, the actual authenticity of video, audio files can be proven via a blockchain application where the hash of certain files (supposed proofs) can be compared against the originals. Though, it is not a silver bullet, and as always, the adoption and applicability of the technology in the right way is key. From a security perspective, more open data mechanisms (like a public ledger) have an increased attack surface, so inherent protection can not just be assumed.
“But enhancing security protocols around the approvals process, where smart contracts could also come into play, can strengthen such processes. In addition, at a more technical level, by applying multi-sig (multiple signature) transactions in the processes can mean that even if one identity is compromised, there is more than one identity needed to provide ultimate approval.”
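As a simple illustration of comparing "the hash of certain files" against the originals, the snippet below fingerprints a media file and checks it against a previously registered hash, which is the value that would be anchored on a ledger at publication time. The function names are illustrative and not drawn from any particular blockchain platform.

```python
import hashlib

def file_fingerprint(path, algo="sha256"):
    """Hash a media file in chunks so large videos don't need to fit in memory."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_registered_proof(path, registered_hash):
    """Compare a received clip against the fingerprint recorded at publication time."""
    return file_fingerprint(path) == registered_hash
```

The ledger itself only guarantees that the registered hash has not been tampered with after the fact; deciding which recordings get registered in the first place remains an organizational question.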
Blockchain: single source of truth and digital twins
AI and Deepfake
As for how AI can be used to combat deepfake, we return to Dr Alexander Adam. He said: “Machine learning algorithms are great at recognising patterns in large amounts of data. ML can provide a way to detect fake audio from real audio by using classification techniques that work by showing an algorithm large amounts of deepfake and real audio and teaching it to distinguish the difference in (for example) the frequency composition between the two. For example, by using image classification on the audio spectrograms you can teach an ML model to ‘spot the difference’. However, as far as I am aware no out-of-the-box solution exists yet.”
“In part, this may be because audio deepfake hasn’t been regarded as being as much of a threat as video deepfake. Audio deepfake are not pitch perfect and you should be able to tell the difference if it’s tailored to a specific person that you know. That said, interference across phone lines or staging general ‘outside’ background noise could probably be used to mask a lot of this. And as there has been so much high profile media attention on deepfake videos, the public are perhaps less aware of the potential risks of audio deepfake. So, if you have a reason to be suspicious, you should always validate it’s who you think it might be.
“However, we expect that the creation and use of audio deepfake for malicious purposes will increase in the coming years and become more sophisticated. This is because there is a better understanding of machine learning models and how to transfer what was used on one model to another person and train it quickly. But, it’s worth noting that as the generation of deepfake content gets better, typically so do the detection methods.”
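To give a feel for the classification approach described above, here is a deliberately simple sketch: it converts audio clips into log-magnitude spectrograms with NumPy and fits a basic scikit-learn classifier to separate real from fake audio. The clip variables are placeholders, and a working detector would need large labelled datasets and a far stronger model than this.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def log_spectrogram(audio, frame=512, hop=256):
    """Log-magnitude short-time Fourier transform of a 1-D audio signal."""
    windows = [audio[i:i + frame] * np.hanning(frame)
               for i in range(0, len(audio) - frame, hop)]
    spec = np.abs(np.fft.rfft(np.stack(windows), axis=1))
    return np.log1p(spec)

def features(audio):
    """Average spectrum over time; a crude but serviceable feature vector."""
    return log_spectrogram(audio).mean(axis=0)

# real_clips and fake_clips are placeholder lists of 1-D NumPy arrays
# (same sample rate), labelled 0 for genuine audio and 1 for deepfake audio.
def train_detector(real_clips, fake_clips):
    X = np.stack([features(a) for a in real_clips + fake_clips])
    y = np.array([0] * len(real_clips) + [1] * len(fake_clips))
    return LogisticRegression(max_iter=1000).fit(X, y)
```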
Identity theft and identity fraud: What to do if your identity is stolen
What is identity theft?
According to the FBI, "identity theft occurs when someone unlawfully obtains another's personal information and uses it to commit theft or fraud." The type of personal information could be anything from general data, like your name or address, to more specific data like hospital records, tax return details, or banking information. Identity theft is sometimes also referred to as identity fraud.
How does identity theft happen?
Identity theft or ID theft takes place in various ways:
Data breaches can be either accidental or intentional:
- An accidental data breach might occur when an organization's employee leaves a work computer—containing personally identifiable information (PII) or a way to access it—in a vulnerable place, allowing someone to steal it.
- An intentional breach typically involves criminals finding a way to access an organization's computer network to steal sensitive data. The criminals might deploy a sophisticated technical attack or simply trick an employee into clicking on a link that creates an attack opening to be exploited.
Regardless of how it happens, a data breach can expose the PII of millions of unwitting victims.
Unsafe social media use:
Social media encourages sharing personal information, but reckless oversharing can endanger your personal safety and financial records. For example, it’s easy to disclose your date of birth, your location, where you went to school, your pet’s name, your phone number, and other personal details on social networks. If cybercriminals are watching, they can use this data to piece together information about you to commit identity fraud.
Weak or reused passwords:
If you don't regularly change your email password, you're increasing the risk of being hacked. And if you use the same password for multiple sites, such as banking or shopping sites, hackers could obtain access to all your accounts, then lock you out and go on a spending spree.
Mail theft and dumpster diving:
Even though a lot of our communication has moved online, interested parties can still find out a great deal about you by going through your trash. Since long before the internet, identity thieves have been combing through the mail to find documents that contain personal information. Bank and credit card statements, pre-approved credit card offers, tax information, and other personal documents sent through the postal system can be intercepted and used to access your data. Always keep financial and other personal documents for at least seven years, and shred all personally identifiable information before throwing it away.
Unsafe browsing:
By sticking to well-known websites and websites which have an up-to-date security certificate, you can browse the internet safely. But if you share any information on an unsecured website or a website that hackers have compromised, you could be putting your sensitive information directly in the hands of a thief. Some browsers may alert you if you try to access a risky website.
Dark web marketplaces:
Once your personally identifying information has been stolen, it can often end up on the dark web. Hackers may not necessarily be stealing your information to use it for themselves – often, they choose to sell it to others who have potentially malicious intentions.
The dark web is a hidden network of websites that aren't accessible by normal browsers. People who visit the dark web use special software to mask their identities and activity, making it a haven for fraudsters. If your information ends up on a dark web marketplace, anybody could buy it, putting your identity in more danger.
Phishing and spam attacks:
Phishing is a form of social engineering. Phishing occurs when an attacker masquerades as a trusted entity to dupe a victim into opening an email, text message, or instant message. Users falling for phishing attacks is a common cause of data theft.
Unsecured public Wi-Fi:
If you use your computer or phone on a public Wi-Fi network— perhaps in an airport or coffee shop —hackers may be able to spy on your connection. This means that if you type in a password, bank account or credit card number, Social Security number, or anything else, a criminal could intercept it and use it for their own purposes.
Mobile phone theft:
Smartphones contain a treasure trove of information for identity thieves, especially if your apps allow you to log in automatically without a password or fingerprint. If someone manages to steal and unlock your phone, it could enable them to view the information found in your apps, as well as in your emails, text messages, notes, and more. That's why it is essential to ensure that your phone locks with a secure passcode, biometric screening is set up correctly and that your passwords aren't stored in plain text anywhere on your phone.
Card skimming:
Some thieves use a skimming device placed over a card reader on an ATM to skim information from that ATM. The skimming device can steal the data stored on a credit or debit card's magnetic strip and then store or transmit it.
Identity theft statistics
According to the 2021 Identity Fraud Study by Javelin Strategy & Research:
- Identity fraud cost Americans a total of about $56 billion in 2020, with about 49 million consumers falling victim.
- About $13 billion in losses were due to what Javelin calls “traditional identity fraud,” where cybercriminals steal personally identifiable information and use it for their own gains, such as through data breaches.
- But the bulk of the losses, $43 billion, stemmed from identity theft scams where criminals interact directly with consumers to steal their information through methods such as robocalls and phishing emails. Victims of these scams lost $1,100 on average, according to Javelin.
- Because the Covid-19 pandemic changed the way people shopped and transferred money, criminals are increasingly targeting digital wallets and peer-to-peer payment methods such as Apple Pay and Zelle. About 18 million victims in the US fell prey to scams through these digital payment methods in 2020.
Who is stealing your identity?
Identity thieves are a diverse group, and many come from quite unexpected places. Many victims know their attackers – it could be a co-worker, friend, employee, neighbor, or even a family member. Tech-savvy children may see benefits in stealing Mom or Dad's credit card and Amazon login to buy a few items, assuming there's no real ‘victim’ if they eventually come clean and apologize. Work acquaintances may see an opportunity too good to pass up if you leave your computer unlocked or your wallet sitting out.
Petty criminals are getting in on the action since it's possible to download turnkey malware programs for little or no cost. Organized crime gangs using trained computer science graduates are also out looking for large quantities of personal data. These groups are often responsible for significant retail attacks and health care breaches. The sheer volume of this data is worth a great deal on the black market.
What do thieves do with your identity?
There are two timescales at play: immediate use and holding for sale:
- Criminals who want to use your data right now will try everything, all at once. They will try to hack email, smartphones, and retail sites to access bank accounts—all while calling credit card companies to create new user profiles. Although these attacks are short-lived, they can be financially ruinous.
- Other criminals will hold on to your data and either try to sell it or open a single new credit card that they'll use until the limit is reached and you start getting calls from the collection agency. These attacks are harder to detect and can add up to greater losses over time.
Anyone can be a target for identity thieves. If any of your data is online—personal info, credit card data, address, phone number—you are at risk of being compromised. Criminals don't discriminate: the more information you have online, the greater your risk.
How can you protect yourself from identity theft?
So, how to prevent identity theft and protect your identity online? Here are some precautions you can take to avoid a stolen identity:
Keep data on a 'need-to-know' basis:
If someone is asking for your personal information – such as your Social Security number, credit card number, passport number, date of birth, work history or credit status, etc. – ask why they need it and how they will use it. What security measures do they have in place to ensure your private information remains private?
Use social media sparingly:
Familiarize yourself with each social networking platform’s security settings and ensure these are set to a level you are comfortable with. Avoid disclosing personal information like your address or date of birth in your social media bios, and be careful about the information you provide to any dating or meet-up sites. Criminals can use this data to build up a picture of you.
Keep your computer up to date:
Many hackers use malware to steal your information. Keeping your computer up to date with security patches and antivirus software helps protect against existing vulnerabilities and detect new attacks.
To limit the chance of a malware infection, avoid opening unknown email attachments or browsing suspicious websites.
Destroy private records and statements:
Shred credit card and bank statements and other documents that contain private financial or sensitive information. Minimize your paper trail by not leaving ATM, credit card, or gas station receipts behind when you’re out and about.
Secure your mail:
Empty your mailbox quickly, lock it or get a PO box, so criminals don't have a chance to steal sensitive mail.
Safeguard your Social Security number:
In the US, your Social Security number is the master key to your personal data. Guard it as best you can. When asked for your number, ask why it is needed and how it will be protected. Don't carry your card with you. Securely store or shred paperwork containing your Social Security number.
Never let your credit card out of your sight:
Always keep an eye on your credit or bank card, and don’t let retailers or others take it out of your sight. Also, be vigilant for card skimming devices at ATMs.
Review your credit cards statements carefully:
Read financial statements. Make sure you recognize every transaction. Know due dates and call to investigate if you do not receive an expected bill. Review ‘explanation of benefits’ statements to make sure you recognize the services provided to guard against health care fraud.
Ensure that you only ever log into banking websites using a secure connection. Don't save your credit card information online.
Know who you’re dealing with:
If someone contacts you requesting your personal or financial information, find out who they are, what company or organization they represent, and the reason for their call. If you think the request is legitimate, contact the company yourself and confirm what you were told before disclosing any of your personal data.
Remove your name from marketing lists:
Unsubscribe yourself from unwanted marketing lists. In the US, you can also add yourself to the national Do-Not-Call registry (1-888-382-1222).
Monitor your credit report:
Obtain and thoroughly review your credit report at least once a year to check for suspicious activity. If you find something, alert your card company or the creditor immediately. You may also investigate credit protection services, which alert you any time a change takes place with your credit report.
What to do if your identity is stolen
Identity fraud is on the rise and can cause significant damage, yet many people aren't sure what to do when they become a victim of this crime. Follow this step-by-step guide on what to do if your identity is stolen:
Discover the source:
Before you can correct the problem and get identity theft help, it's important to know the attack's origin. While traditional identity theft involved criminals ‘dumpster diving’ to obtain personal information such as receipts or credit card bills, thieves increasingly target popular online services. Banking websites, online retailers, and dating sites hold a wealth of consumer information.
Many signs can indicate you may have been a victim of identity theft, e.g., if new credit accounts have been opened in your name, purchases have been made without your consent, or your contact information with government agencies has been altered. As soon as you realize you've been victimized, think about your recent online activity:
- Did you respond to any emails that appeared to be from financial institutions claiming that your account was suspended or under review?
- Did you download any video players or media files as attachments from senders you didn't know?
- Have any e-commerce sites you regularly use recently sustained a cyberattack?
Any one of these could create a vulnerability to hacking.
Notify affected creditors or banks:
Once you've discovered the theft, start making calls. Begin with any companies where the fraud occurred, such as your credit card issuer or bank. Ask them to close or freeze your accounts and change all your login and password information.
Most credit cards have zero-liability policies and other protections for cardholders affected by identity theft. In the US, victims of credit card fraud are also protected under the Fair Credit Billing Act, which specifies that the maximum liability for unauthorized charges is just $50. On the other hand, ATM or debit cards and electronic transfers from your bank account fall under the Electronic Fund Transfer Act. Under the terms of this law, consumers must act quickly.
Reporting a lost or stolen ATM or debit card before any fraudulent transactions will ensure you are not responsible for any changes made after that. This means it is in your best interest to report suspicious activity as soon as possible. Once you have filed an identity theft report and a police report, you should share them with your creditor as well.
Place a fraud alert on your credit report:
Fraud can negatively impact your credit score — leaving long-lasting effects — which means protecting your credit from further damage should be high on the list of priorities if you’re affected. Contact one of the main credit bureaus, which in the US are:
Ask for a credit report and have a fraud alert placed on your accounts for 90 days. Once you have contacted one of these agencies, they are obligated to inform the other two.
Fraud alerts are free and, once placed, remain on your report for one year. If you want to keep the alert longer, you can get a new one after the first year. An alert makes it difficult for fraudsters to open accounts in your name since businesses must contact you before issuing any credit when a fraud alert is on your report.
If you are a victim of identity theft, you can place an extended fraud alert on your report, lasting seven years. Before placing the extended alert in the US, you need to complete an Identity Theft Report.
Review your credit reports:
Once you have set up a fraud alert on your credit file, you will automatically receive access to one free credit report from each of the three agencies.
Read through each of your reports for signs of identity theft — for example, new accounts you didn’t open, payment history or inquiries you don’t recognize, an employer you never worked for, and any personal information which is unfamiliar.
It is also advisable to review each of your credit reports again at least once over the next year to check for any continued signs of identity theft.
Freeze your credit:
Freezing your credit is free and prevents credit reporting agencies from releasing your credit report to new creditors. Contact the main credit bureaus and request it.
For the most robust defense against identity fraud, experts recommend placing both a fraud alert and credit freeze on your report. There is no time limit to a freeze; it will remain until you decide to lift it, which you may do temporarily or permanently.
When you place the freeze on your report, the bureaus will issue a PIN or password, which you will need when you decide to lift the freeze. Losing track of your PIN may delay or hinder your ability to unfreeze your credit, so keep it in a safe place while the freeze is active.
How do I report identity theft?
Different jurisdictions worldwide will have their own agencies to whom you can report identity theft and receive assistance with identity theft recovery. For example:
- In the United States: report your identity theft to the FTC by completing the online form at IdentityTheft.gov or by calling 877-438-4338, providing as many details as possible. Reporting the theft to the FTC will ensure you receive a recovery plan and an Identity Theft Report, proving that your identity has been stolen.
- In the UK, you can contact Action Fraud on 0300 123 2040 or at the Action Fraud website.
- In Australia, you can report identity fraud to Scam Watch.
Contact the police:
You may also want to alert your local police department. If you do contact the police, take a copy of your Identity Theft Report, a government-issued photo ID, proof of your current address, and any proof that your identity has been used for identity theft — such as collections notices. Remember to ask for a copy of the police report in case you need it. Make a note of your police investigator’s phone number for future reference.
Remove fraudulent info from your credit report:
Once you have reviewed your credit report, contact each of the leading credit bureaus to have any fraudulent information you find removed. In the US, you can use this sample letter suggested by the FTC as a template.
Along with the letter, include a copy of your Identity Theft Report and identifying information, along with details about which information is fraudulent. This allows you to remove, or block, the information from your report so it won’t appear and you won’t be contacted to pay any of the debts. Continue to keep a close eye on your credit report in case any additional fraudulent accounts are subsequently added.
Change all affected account passwords:
Change all your passwords on any account that was affected by fraud. If one of your existing accounts doesn’t have a password, now is the time to create a strong password. A strong password is at least 12 characters or longer and comprises a mix of upper- and lower-case letters plus symbols and numbers. The shorter and less complex your password is, the easier it is for cybercriminals to crack. You should avoid choosing something obvious – such as sequential numbers (“1234”) or personal information that someone who knows you might guess, such as your date of birth or a pet’s name.
To make your passwords more complex, you could consider creating a 'passphrase' instead. Passphrases involve picking a meaningful phrase that is easy to remember and then making the first letter of every word the password.
Avoid using the same password for multiple accounts and never write passwords down. If you have too many passwords to remember, consider using a password manager to help you keep track. Remember to change your passwords regularly – every six months or so.
Contact your telephone and utility companies:
It’s a good idea to contact your utility providers and telephone carriers if an identity thief tries to open a new account in your name, using a utility bill as proof of residence. If an account was opened in your name, explain what happened to the service provider and ask for the account to be closed.
Protect yourself with antivirus:
While this may sound overwhelming, it pays to know what to do if your identity is stolen. The tips above can help mitigate the damage and help you get your life back on track. You can maximize your online safety by using a comprehensive antivirus. Kaspersky Total Security works 24/7 to protect your devices and data, blocking common and complex threats like viruses, malware, ransomware, spy apps, and all the latest hacker tricks. | <urn:uuid:2ed1a3be-89b0-4038-afec-6059fcff1e28> | CC-MAIN-2024-38 | https://www.kaspersky.com.au/resource-center/threats/what-to-do-if-your-identity-is-stolen-a-step-by-step-guide | 2024-09-07T20:32:40Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650920.0/warc/CC-MAIN-20240907193650-20240907223650-00778.warc.gz | en | 0.937107 | 4,087 | 2.90625 | 3 |
A long time ago in a galaxy far, far away, hacking was born. Over the years, hacking has evolved into a complex system characterized by advanced technologies, attackers and vulnerable networks. Cybercriminals are like the Sith, lurking in the shadows and plotting their evil attacks. On the other side, our cybersecurity and IT professionals are the Jedi, peacekeepers of the cyber world. With new hacking strategies and challenges emerging daily, we must learn how to defend ourselves against the dark side.
There are many factors standing in the way of winning this battle. From the lack of skilled professionals to sophisticated attacks and poor cyber practices, cybercriminals have access to various entry points. While it may sound like the light side is at a disadvantage, the force is with us. Advances in technology as well as new services and solutions are entering the market at a faster rate than ever before. As we continue to improve our defenses, it is important that we keep an eye out for new hacking strategies that enter the cyberverse.
Today, low-skilled attackers know how to conduct hacking techniques that were once designed for specialists. Spear phishing, according to McAfee Labs Threat Report, has become widely used to access a network as human error is the easiest entry point. And with the rise of social media, hackers are now able to pull more information about individuals and use it against them. This technique has been coined ‘rose phishing.’
McAfee also states that ransomware attacks have grown 118 percent in the first quarter of 2019. While hackers still heavily rely on human interaction and social engineering as attack vectors, new vulnerabilities within connected devices, including smart locks and coffee machines, have been detected.
So what’s the best way to prevent phishing scams, ransomware and other infections from invading your system? The same methods used to prevent the corruption of the galactic democracy: weapons. While light sabers won’t help here, security tools and software will:
- Only download software, applications and updates from official sources. Suspicious websites and “free” offers should raise red flags. If it seems too good to be true, it probably is.
- When surfing the web, keep ad blockers and virus detection software turned on. Safety warnings will pop up and alert you to any potential problems and where your system may be vulnerable. .
- Practice good cybersecurity hygiene. Beware of opening emails from unknown senders or senders outside of your organization, and never share, write down or duplicate your passwords. Create strong passwords that include letters, numbers and special characters.
- Install a reliable anti-malware program and constantly update and conduct scans to ensure your system is functioning properly.
In the first quarter of 2019, ransomware attacks became more targeted with government, manufacturing and healthcare industries facing the brunt of these attacks. In the second quarter of 2019, we expect hackers to target industries with critical systems that can be compromised. This includes, but is not limited to, finance, energy and legal businesses. With the proper training and tools, we can create an army equipped to detect and respond to the actions of our ancient enemies, protecting our galaxy from ransomware and its high dollar costs. | <urn:uuid:5f5b6216-7328-4ac7-83ac-a63252cdd7d4> | CC-MAIN-2024-38 | https://cadinc.com/defeating-the-dark-side-how-to-defend-against-emerging-attacks/ | 2024-09-11T12:50:22Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651387.63/warc/CC-MAIN-20240911120037-20240911150037-00478.warc.gz | en | 0.944797 | 646 | 2.921875 | 3 |
Password security has never been more critical than it is today. From personal identification to account access and data storage, passwords are used in almost all digital transactions.
Unfortunately, due to the sheer number of apps, devices, websites, and services that require passwords, it is easy for cybercriminals to compromise passwords and confidential data.
To protect your digital information, it’s essential for anyone using passwords to make sure they’re as strong as possible.
While there’s a lot that goes into personal cybersecurity and password management, there are some best practices and tips to use if you want to avoid having your password stolen and to improve your overall web security.
Always Use a Strong Password
A robust password is one that can’t be guessed easily, either by hacking algorithms or manually, and has strong password security.
The first step to creating a strong password is making sure that it doesn’t contain any obvious or personal information such as your name, date of birth, or address.
Avoid using any numbers or letter sequences such as ‘123456’, ‘123456789’, ‘qwerty,’ or obvious phrases for your passwords such as ‘password’ or ‘password123’. These kinds of sequences and terms are commonplace and don’t do enough to provide users with adequate security.
Aim to make your passwords as long and difficult to guess as you can. If possible, try and use more than 15 characters for each password you create. Within these 15 characters, try and use a combination of uppercase and lowercase letters alongside symbols and numbers. Doing this makes your passwords unique and challenging for others to guess.
A great idea is to use song lyrics or literary quotes to form a password phrase, capitalizing each word.
Never Use the Same Password More than Once
If a hacker targets an unsecured website containing your only password and gains access to it, they suddenly have a potential way into every website or online service you use.
It can be tempting to use the same password multiple times, but this should be avoided.
Using the same password multiple times harms your password security and makes it easier for hackers to gain access to sensitive and confidential information.
While you may think that no one will be able to guess your favorite childhood pet’s name, hackers don’t typically sit around using a bunch of different passwords to see what works and what doesn’t. Instead, they target vulnerable websites, apps, devices, and services and gain access to a multitude of different passwords all at once.
If a hacker targets an unsecured website containing your only password and gains access to it, they suddenly have a potential way into every website or online service you use. The first they are going to do is try that password for another website or online service to see if it works.
To prevent this from happening, use a different password for each website, device, app, or service you use that requires a password. While it can be difficult to remember multiple passwords, it’s better than your one and only password falling into unwanted hands.
If you’re using different passwords, at least you know that the other accounts you use are safe should one of your passwords ever be compromised.
The best way to remember all your passwords and keep them secure is to use password management software. These programs allow you to store all your passwords in a secure location and generate encrypted passwords for high-security sites such as financial institutions.
Never Share Your Password With Anyone
Even if you trust the person, you still can’t be 100% sure what they will do with your password or access details.
When you give your password to someone else, you’re seriously compromising the security of the account the password provides access to. Even if you trust the person, you still can’t be 100% sure what they will do with your password or access details.
For example, the person you share your password with may write the password down and store it in an unsafe location where it could be stolen; they might store the password on an unsecured device that can be hacked or communicate the password over an insecure messaging platform. Any of these scenarios open up the possibility of others gaining access to your password.
Even if the person you share your password with means no harm, sharing your password with them only increases the chances of your password and data being stolen.
Be Wary of Password Phishing Attempts
Never follow a link that asks you to change your password unless you prompted it.
You should only ever click on or open any password reset links that you have requested.
Hackers often pose as an official organization, service, or website to trick users into either sharing their passwords or resetting their passwords through a link that the hacker has sent and has access to.
To prevent this from happening, never follow a link that asks you to change your password unless you prompted it. If you ever receive an email, SMS, or any other form of communication asking you to change your password that you didn’t request, make sure you delete it and report it as spam.
It’s also worth contacting the organization/company that the communication has come from to let them know you were the target of a phishing attack.
For example, if you get an SMS message or email from your bank asking you to change your password, but you haven’t requested to reset it, contact your bank to let them know of the fraudulent phishing attempt so they’re aware of it and can act upon this information.
Protect Your Devices
If you’re looking to keep a particular device secure but you’re unsure how to contact the device’s manufacturer.
While all the electronic devices we use today bring a lot of convenience, they also offer ample opportunities for hackers to gain access to private and sensitive data.
To stop this from happening, it’s important to try and make sure any device you’re using is as secure as possible.
For tablets and cell phones, make sure you use a secure login PIN/Password, keep your apps and operating system up to date, only download apps that you trust, and avoid clicking or sharing any suspicious-looking links.
If you’re looking to keep a particular device secure but you’re unsure how to contact the device’s manufacturer. The manufacturer should be able to give you tips and recommendations or point you in the direction of resources and information you can follow to make sure the device is secure.
It’s impossible to keep your passwords 100% safe and secure all the time, but by using common sense and password security best practices, you can reduce the chances of any of your passwords being compromised.
If you’re currently using any weak passwords or using the same password more than once, spend some time changing or strengthening your passwords to make sure every website, app, service, or device you’re using is as secure as possible.
For more interesting articles, you may check out LifeLock Identity Theft Protection Reviews.
Related Article About Protecting Your Accounts: | <urn:uuid:4a46749b-beb0-41e8-834b-bd127782a307> | CC-MAIN-2024-38 | https://staging.homesecurityheroes.com/how-prevent-password-stolen/ | 2024-09-11T13:36:46Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651387.63/warc/CC-MAIN-20240911120037-20240911150037-00478.warc.gz | en | 0.931951 | 1,481 | 3.0625 | 3 |
By the U.S. Fire Administration
Across the country, firefighters are responding to fewer fires but are increasingly called upon to provide Emergency Medical Services (EMS), perform search and rescue, and react to hazardous materials incidents and natural disasters.
They come across a wide variety of tragic situations that play out in or around their homes, along highways, and in every other conceivable part of their communities.
RET — the cumulative effect of regularly caring for the broken bodies and wounded minds of victims and their families — is thought to have a negative psychological impact on firefighters’ own mental health.
(Powerful Message. First responders must face the unknown every day. They never get warnings of what they’ll face. Courtesy of BrandHealthCanada and YouTube. Posted on Aug 18, 2015)
Previous studies have looked at firefighter mental health challenges in the context of post-traumatic stress syndrome (PTSD), which relies on assessment instruments attuned to one particular traumatic event.
Takeaways from Previous Studies
- Evidence shows that rates of depression among fire and EMS personnel are higher than in the general population.
- Firefighters have higher rates of alcohol use and binge drinking compared to the general population.
- There is a possible connection between risky drinking behaviors and PTSD.
- Firefighters experience “secondary trauma” or “compassion fatigue” from repeated exposure to trauma.
- They may not be diagnosed with PTSD, but clearly suffer from symptoms such as sleep disorders, avoidance behaviors, and feelings of helplessness that are associated with PTSD.
(“SIGNS” is a short film made to bring awareness to the importance of mental health in the first responder community. Originally made for internal usage, it was decided to share to spread awareness. Made in collaboration with Marysville Fire Peer Support Group and Seattle Fire Peer Support Group. Courtesy of Andrew Stebliy and YouTube. Posted on Oct 20, 2017)
Takeaways from This Study
FIREFIGHTING AND MENTAL HEALTH: EXPERIENCES OF REPEATED EXPOSURE TO TRAUMA
- It is more common for firefighters to experience a negative mental health impact from a series of traumatic events rather than from one single event.
- Symptoms of RET for most firefighters include desensitization, irritability, cynicism and intrusive flashbacks.
- Many firefighters appear to effectively manage their emotional response to trauma.
- Future research should explore their protective coping methods and resiliency.
(More than a quarter of people who work in our emergency services have contemplated suicide. Roger Moore served as a firefighter in Coventry, witnessing death and destruction – and by the end, wanted to take his own life. Now he’s urging others not to suffer in silence. Courtesy of 5 News and YouTube. Posted on Aug 17, 2017)
Learn more about this research
Many research studies have focused on firefighter mental health challenges due to a single traumatic event.
But what about repeated exposure to such events?
The article details findings from a research project1 that studied the impact of repeated exposure trauma (RET) on firefighters.
(Firefighters Hidden Dangers. When the danger of PTSD follows them for a lifetime. Courtesy of WPTV News | West Palm Beach Florida and YouTube. Posted on Jun 4, 2012)
The research article is available through the U.S. Fire Administration library by contacting email@example.com.
Interested readers may be able to access the article through their local library or through the publisher’s website.
1 Jahnke, S. A., Poston, W. S., Haddock, C. K., & Murphy, B. (2016). Firefighting and mental health: Experiences of repeated exposure to trauma. Work, 53(4), 737-744. doi:10.3233/wor-162255
This summary is for informational purposes only. Learn More + | <urn:uuid:c2cedc64-7a32-4d8e-8737-20151f8cbc44> | CC-MAIN-2024-38 | https://americansecuritytoday.com/effect-repeated-exposure-trauma-firefighters-multi-video/ | 2024-09-12T17:59:21Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651491.39/warc/CC-MAIN-20240912174615-20240912204615-00378.warc.gz | en | 0.935639 | 804 | 3.109375 | 3 |
Finance is game of numbers, it’s a powerful game of numbers and understanding the potential of these numbers gives root cause to the success of an organization.
Obtaining well-defined and explicit information from the financial facts and figures is always remained most demanding in the financial industry, from that a view of financial analysis and its elements we learn through this conversation.
“It’s far better to buy a wonderful company at a fair price than a fair company at a wonderful price.” – Warren Buffett
This blog covers
1. What is financial analysis?
2. Role of tools and techniques
3. Why is it important?
4. Key elements of financial analysis.
So, let's make our knowledge more interesting.
What is Financial Analysis?
“Financial analysis”, the process of accumulating, envisioning, controlling, deciphering, and anticipating financial data and following the aim to evaluate the financial accomplishment of a particular department inside a company or of a company itself. It helps in making more reliable decision making.
Financial analysis is adopted for examining economical patters, fixing financial policy, making long-term plans for various business activities, and analyzing projects or groups for investment. This can be perfected with the fusion of financial numbers and data.
The financial statements that include the income statement, balance sheet and cash flow statement of an organization are analyzed to obtain actionable interferences, also financial analysis can be conducted in both modes of corporate finance and investment finance frameworks.
If you want to learn more about these terms, follow our previous blog “An Introduction to Financial Analysis”
However, in order to evaluate and analyze the financial data, one of the most general methods is “measure the ratio or degree from the data available in financial statements and compare it with those of alternatives companies”, “even it can be compared with own past performance of the company.”
Consider the example of Return On Assets (ROA), it is a well-known degree/asset that is used to determine how capable an organization is while implementing its assets and also in the context of a ratio of profitability.
This ratio can be measured, within the same industry, for multiple companies and can be correlated to each other as a portion of the massive analysis.
(Also check: What is Inflation? Demand-pull and Cost-push)
Role of Tools and Techniques
There are multiple finance analytics tools available that comprise numerous features for delivering accurate and undiscovered substantial information.
It eliminates most of the intricacies of the data. Also, it assists in controlling the cash flows inclusive of revenue and expenditures across the organizations.
With the combined data of CRM, ERP, etc, and other systems, they give a consolidated aspect of all the data touching the entire organization. This helps in evading uncertainties and clutching better opportunities.
Apart from that, most of the latest financial analytics tools create the individual reports for example reports as per department and stakeholders and a financial dashboard that is simple to understand.
(Must check: What is FinTech? Examples and Applications)
Questions (or Example) Financial Analysis can Answer
From the past few years, companies have uncovered the value that finance can produce many perspectives on business. Most of the business leaders are leveraging finance to get actionable insights.
Through the linking of organic financial and operational data with external data like social media, demographics, and big data, finance analysis enables us to address decisive business queries with unparalleled ease, speed, and accuracy.
According to the report; below are the lists of questions that can be answered using finance analysis;
What is the uncertainty exposure with some particular customers, and how the relationship with customers is going to influence working capital?
How to streamline and intensify various business processes in order to make them extra dynamic?
Does a company is investing in the appropriate opportunities based on capital-value or revenue?
Is each product and service across sale modes are profitable for customers?
What would be possible future events that might affect stock prices?
(Most related: COVID-19 Impacts on Financial Markets)
Why is it important?
In the present time when each business has become digitalized, it entails up-to-date information for the process of decision-making.
It delivers in-depth information about the financial position of an organization and augments profitability, business value, and cash flow.
It assists to revamp the eventual purposes of a business that also fix the decision making approaches.
“The best financial models are simple enough for anyone to understand, yet dynamic enough to handle complex situations.” – Tim Vipond
Broad demand for prudent financial planning and forecasting is essential for each organization.
“Cash is King” is a familiar mantra, understand what? Financial analysis majorly focuses on estimating and operating on the substantial equities of the business, for example, cash and physical pieces of equipment.
For triumphant financial analysis, the assorted necessity of conventional financial department and augmentation in technologies are crucial factors.
(Recommended blog: Introduction to Technical Analysis)
Key elements of Financial Analysis
Key elements(components) of the Financial Analysis
Accounted as the core origin of cash, Revenues is crucial for long-term achievement;
Revenue growth: No former revenue can be added while calculating revenue growth as it leads to distorting the analysis.
Revenue concentration: It is assured that no single client can make more than 10% of total revenue, as a customer is generating high revenues now, but what if he stops purchasing, one can encounter financial difficulty.
Revenue per employee: It calculates the productivity of the business, the higher the value, the better it is.
(Refereing blog: Largest Stock Exchanges in the World)
Profit is the return investment that a business derives from the invested amount on the business. Multiple factors such as price, market trends, assets, obligations, costs, etc, can affect the profit of the business. It can be measured on the basis of;
Gross profit margin: It enables us to handle revenues or the cost of goods that are sold out without suffering the capability to pay off for continuous expenses.
Operating profit margin: It incorporates no interest or taxes, although it measures the strength of generating profits for a company despite how an individual manages finance services.
Net profit margin: Pure value that is left for reinvestment into the business, also the redistributed amount to be divided amid owners.
In order to determine how adequately the company’s resources are utilized, operational efficiency is implemented and its scarcity leads to shorter profits and more delicate growth.
(Related blog: What is Credit Rating?)
Capital Efficiency and Solvency
The core aspect of interest of capitalists and bestowers, basically;
Return on equity: It is used to depicts the return that is generated by lenders, coming out from the business.
Debt to equity: In generalized terms, it symbolizes how much leverage is practiced to work that can’t be more than what is justifiable to business.
The term Liquidity signifies the availability of a sufficient amount of cash and other assets to satisfy cash expenses like debts, bills.
Every business demands for a sufficient amount of liquidity to meet its expenses. Therefore, a low level of Liquidity implies the company needs extra capital and its performance is underprivileged. Liquidity can be measured by;
Current ratio: It calculates the worth amount to be paid for short-term debts from the available cash. If the value of the current ratio is less than the one, then the company needs extra amount due to inadequate liquidity, however, the current ratio’s value above two is considered as beneficial.
Interest covered: The measurement to pay interest expenditure from the available cash, and the value of 1.5 leads to meet bestowers. (Source)
While coming so far with the discussion, it can be concluded that Financial analysis is an extremely worthy tool for each organization either on small and large-scale. It should be adopted to maintain and regulate its progress. It lets the organization acclimate the trends that transform its operations.
(Most important: Fundamental Analysis Guide)
Financial analysis will yield highly reliable and convenient financial reports of an organization that is the major asset for calculating its achievement form the aspect of investors, analysts, and capitalists.
"Financial peace isn't the acquisition of stuff. It's learning to live on less than you make, so you can give money back and have money to invest. You can't win until you do this." - Dave Ramsey
In fact, if the financial analysis is handled internally, it could assist in executing substantial business decisions, also inspect past trends for consistent profits. And if it is handled externally, it leads the venture capitalists for selecting the exceptional-achievable investment opportunities. | <urn:uuid:8b9de7ec-b453-4020-ad53-390c48c71f9b> | CC-MAIN-2024-38 | https://www.analyticssteps.com/blogs/5-key-elements-financial-analysis | 2024-09-16T13:29:04Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651697.32/warc/CC-MAIN-20240916112213-20240916142213-00078.warc.gz | en | 0.935592 | 1,821 | 3.0625 | 3 |
A set of profile folders stored in the file system. User profile files are stored in the Profiles directory, on a folder per-user basis. The user-profile folder is a container for applications and other system components to populate with sub-folders, and per-user data such as documents and configuration files. Windows Explorer uses the user profile folders extensively for such items as the user's Desktop, Start menu and Documents folder. A local user profile is created the first time that a user logs on to a computer.
User profiles provide the following advantages:
• When the user logs on to a computer, the system uses the same settings that were in use when the user last logged off.
• When sharing a computer with other users, each user receives their customized desktop after logging on.
• Settings in the user profile are unique to each user. The settings cannot be accessed by other users. Changes made to one user's profile do not affect other users or other users' profiles.
Let's have a look at the user profiles stored in the C:\ directory on local hard disk of the computer.
We see a lot of folders such as Desktop, Documents, Favourites, Contacts and etc. in the user's profile folder as they are seen in the following;
Information and settings in User Profiles, Readable (readable) and Writable (writable) properties are stored in the database of NTUSER.DAT (.DAT = DATABASE).
They are created automatically by referring to the Default Profile when creating User Profiles. There is a blank profile prepared by Microsoft in Default Profile. Although this Default Profile is taken as a reference, each profile setting that will be made by the user later will be written to the NTUSER.DAT profile database and saved while creating User Profiles. In summary, we can say that the Default Profile is another starting point for the profile.
User Profiles are kept on the
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList |
registry path (directory path) with User SID information.
The Concepts of Romaing Profile, Mandatory Profile, Default Domain Profile
There are some other types of profiles used for other purposes apart from the local user profiles mentioned in the content of my essay. They are
● Roaming Profile
A roaming user profile is a copy of the local profile that is copied to, and stored on, a server share. This profile is downloaded to any computer that a user logs onto on a network. Changes made to a roaming user profile are synchronized with the server copy of the profile when the user logs off. The advantage of roaming user profiles is that users do not need to create a profile on each computer they use on a network.
● Mandatory Profile
A mandatory user profile is a type of profile that administrators can use to specify settings for users. Only system administrators can make changes to mandatory user profiles. Changes made by users to desktop settings are lost when the user logs off.
● Default Domain Profile
When computer users logon to their computers for the first time, they log into their computers with the Default Profile with reference to the Default folder in the Users folder under the C:\ directory, and the user data is saved in the NTUSER.DAT database in the user profile folder.
I hope it benefits...
You may submit your any kind of opinion and suggestion and ask anything you wonder by using the below comment form. | <urn:uuid:533999e3-c391-4e10-94bf-b5fa9df733d6> | CC-MAIN-2024-38 | https://www.firatboyan.com/en/local-user-profiles-in-windows-10.aspx | 2024-09-16T13:27:17Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651697.32/warc/CC-MAIN-20240916112213-20240916142213-00078.warc.gz | en | 0.922283 | 716 | 3.03125 | 3 |
Business and Enterprise
Protect your company from cybercriminals.
Start Free TrialAuthorisation is the process of determining whether to grant or deny users the right to access resources. Authorisation operates by following a set of predefined rules and policies. These rules are typically managed by an access control system that establishes permissions based on the organisation's compliance requirements. When a user is attempting to access a resource, the authorisation system will evaluate their permissions and the organisation’s predefined policies before permitting the user to access the resource.
Authentication is the process of verifying that a user is who they say they are. Authorisation, on the other hand, is the process of granting access to resources and what actions the user can perform with those resources. After a user is authenticated using their credentials, the system then goes through the authorisation process.
Both authorisation and authentication work together to ensure that users have access to the resources they need while maintaining the organisation’s security and integrity.
Without strong authorisation processes, organisations have poor governance, resulting in a lack of visibility and control over employees' activities and an increased risk of unauthorised user access. Let's see how authorisation addresses these concerns.
Here are five types of authorisation models organisations use to secure access to resources.
Role-based access control is a type of access control that defines permissions based on the user's role and functions within the organisation. For instance, lower-level employees will not have access to highly sensitive information or systems that privileged users would. When a user tries gaining access to a resource, the system will inspect the user's role to determine if the resource is associated with their job responsibilities.
Relationship-based access control is a type of access control that focuses on the relationship between the user and the resource. Think of Google Drive – an owner of a document has access to view, edit and share the document. A member of the same team may only have permission to view the document while another member may be authorised to view and edit the document.
Attribute-based access control is a type of access control that evaluates the attributes associated with a user to determine if they can access resources. This authorisation model is a more detailed form of access control because it assesses the subject, resource, action and environment. ABAC will authorise access to specific resources associated with these characteristics.
Discretionary access control is a type of access control in which resource owners take responsibility for deciding how their resources will be shared. Let's say that a user wants to access a specific document. It is ultimately up to the discretion of the document’s owner to authorise the user and set up their permissions. In some cases, resource owners will grant certain users higher privileges. These privileges might include the ability to manage or modify access rights for other users.
Mandatory access control is a type of access control that manages access permissions based on the sensitivity of the resource and the user's security level. When a user is attempting to access a resource, the system will compare the user’s security level to the resources’s security classification. If the user’s security level is equal to or greater than the resources’s classification, they will be authorised to access it. MAC is mainly used in government or military environments that require top-notch security. | <urn:uuid:e6ffac30-b3f2-4051-a4f3-a8dd506ea630> | CC-MAIN-2024-38 | https://www.keepersecurity.com/en_GB/resources/glossary/what-is-authorization/ | 2024-09-17T14:40:30Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651800.83/warc/CC-MAIN-20240917140525-20240917170525-00878.warc.gz | en | 0.933511 | 680 | 3.375 | 3 |
In a world increasingly reliant on technology, the importance of cybersecurity cannot be overstated. The ever-evolving threat landscape poses a significant challenge to organizations, individuals, and even governments. To combat these threats effectively, ethical hacking, also known as penetration testing or white hat hacking, has emerged as a critical practice in fortifying digital defenses. In this blog, we will delve into the world of ethical hacking, exploring its significance, methodologies, and how it strengthens security.
Understanding Ethical Hacking
Ethical hacking is the process of probing systems, networks, and applications for vulnerabilities with the permission of their owners. Unlike malicious hackers, ethical hackers use their skills for good, aiming to uncover weaknesses before cybercriminals can exploit them. This practice has become an essential component of modern cybersecurity strategies.
The Role of Ethical Hackers
Ethical hackers, often referred to as “white hat hackers,” work diligently to uncover vulnerabilities in systems and networks. One of their primary objectives is to identify weaknesses. Ethical hackers will systematically search for vulnerabilities, including software bugs, misconfigurations, and weak passwords, to identify potential entry points for cyberattacks. Once vulnerabilities are identified, ethical hackers assess the potential risks associated with them. They prioritize vulnerabilities based on their severity and potential impact on the organization. Ethical hackers not only identify problems but also recommend solutions. They work closely with organizations to patch vulnerabilities and enhance security measures. Lastly, ethical hackers simulate real-world attacks to test an organization’s incident response capabilities. The goal is to help organizations improve their ability to detect, respond to, and recover from cyberattacks.
Methodologies of Ethical Hacking
Ethical hacking follows a structured approach to uncover vulnerabilities and strengthen security. Some common methodologies include:
Reconnaissance: Ethical hackers gather information about the target, such as IP addresses, domain names, and employee details, to better understand the organization’s digital footprint.
Scanning and Enumeration: This phase involves actively scanning the target’s systems and services for vulnerabilities and open ports. Enumeration helps identify potential weaknesses.
Vulnerability Assessment: Ethical hackers use specialized tools to identify vulnerabilities within the target systems, including outdated software, weak configurations, and unpatched security flaws.
Exploitation: Once vulnerabilities are identified, ethical hackers attempt to exploit them to gain access to the target system. This step is crucial in assessing the potential impact of an actual cyberattack.
Post-Exploitation: After gaining access, ethical hackers assess the level of control they have over the system and attempt to escalate privileges, mimicking the actions of malicious hackers.
Reporting and Remediation: Ethical hackers provide a detailed report to the organization, outlining their findings and recommendations for improving security. Organizations then prioritize and implement necessary fixes.
The Benefits of Ethical Hacking
The benefits of ethical hacking are vast. Ethical hacking allows organizations to identify vulnerabilities before cybercriminals can exploit them, reducing the risk of data breaches and financial losses. Many industries have strict regulatory requirements for data protection. Ethical hacking helps organizations ensure compliance with these regulations. Organizations can increase their reputation by demonstrating a commitment to their security by engaging with ethical hackers thus building trust with their customers and partners. Another important benefit of ethical hacking is the ability to identify and address vulnerabilities early saving an organization the cost of dealing with the fallout of a successful cyberattack. Lastly, it’s important to note that ethical hacking is not a one-time event but an ongoing process. Regular assessments help organizations stay ahead of evolving threats.
In a world where cyber threats are constantly evolving, ethical hacking has become an indispensable tool for bolstering security. By leveraging the skills of white hat hackers, organizations can identify vulnerabilities, assess risks, and proactively fortify their digital defenses. Ethical hacking is not just a practice; it’s a mindset that embraces the proactive pursuit of cybersecurity excellence in an ever-changing digital landscape. | <urn:uuid:c2d9fddb-33a0-498d-bb4f-c8b5608417b1> | CC-MAIN-2024-38 | https://agileblue.com/ethical-hacking-unveiled-leveraging-white-hat-practices-to-strengthen-security/ | 2024-09-09T06:27:48Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651072.23/warc/CC-MAIN-20240909040201-20240909070201-00778.warc.gz | en | 0.919687 | 810 | 3.5 | 4 |
BIND is open source software that enables you to publish your Domain Name System (DNS) information on the Internet, and to resolve DNS queries for your users.
BIND implements the DNS protocols. The DNS protocols are part of the core Internet standards. They specify the process by which one computer can find another computer on the basis of its name. The BIND software distribution contains all of the software necessary for asking and answering name service questions.
The BIND software distribution has three parts:
- Domain Name Resolver: A resolver is a program that resolves questions about names by sending those questions to appropriate servers and responding appropriately to the servers’ replies.
- Domain Name Authority server: An authoritative DNS server answers requests from resolvers, using information about the domain names it is authoritative for. You can provide DNS services on the Internet by installing this software on a server and giving it information about your domain names.
- Tools: We include a number of diagnostic and operational tools. Some of them, such as the popular DIG tool, are not specific to BIND and can be used with any DNS server.
BIND is by far the most widely used DNS software on the Internet, as it is a transparent open source and is a flexible, full-featured DNS system. This means that if too many DNS queries to BIND fail or are dropped / rejected, your users will experience serious accessibility issues, which in turn may impact their productivity.
To avoid this, administrators should monitor incoming and outgoing queries of BIND DNS and proactively capture errors and failures in name resolution, well before users complain.
eG Enterprise provides 100% web-based monitoring of BIND DNS. Using a specialized monitoring model, eG Enterprise continuously monitors requests to and responses of BIND DNS, and promptly alerts administrators to error responses, failures, and rejections.
To know how eG Enterprise monitors BIND DNS and what statistics it reports, | <urn:uuid:449935d7-815d-4286-82ec-14aaa772fcd6> | CC-MAIN-2024-38 | https://www.eginnovations.com/documentation/Bind-DNS/Introduction-to-Bind-DNS-Monitoring.htm | 2024-09-09T05:49:23Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651072.23/warc/CC-MAIN-20240909040201-20240909070201-00778.warc.gz | en | 0.913404 | 397 | 2.578125 | 3 |
Remove Malware from Your Android Device
If you’ve ever used a desktop computer before, you’ve likely dealt with viruses and malware that can infect your computer and create myriad problems. Some viruses are relatively easy to get rid of and will only cause a slowdown on your computer. However, other types of viruses and malware can cause significant damage to a computer and steal your data.
The best way to keep these problems at bay is to use reliable antivirus software and take some precautionary measures. But once the virus gets into your device, your main goal should be to remove it as soon as possible.
Just like desktop computers, Android devices can be infected by viruses and other types of malware. This guide takes you through the steps involved in removing malware from Android devices.
What is Malware?
Malware refers to any kind of malicious software that gets into a computer, network, or server. It is used as a blanket term for worms, viruses, and other harmful computer programs. The purpose of malware is to damage computing devices or gain access to sensitive information, which could include anything from your credit card details to the passwords you use for your bank and social media accounts. While all viruses are malware, not every piece of malware is a virus. The three main types of malware that may infect your Android device are worms, viruses, and Trojans.
What Is a Worm?
A worm is a piece of malware that spreads from one device to another by copying itself. Worms are particularly dangerous because they operate autonomously and don't need a host file, or to hijack existing code, in order to spread.
What Is a Virus?
A virus is a piece of computer code that inserts itself into a device's programs and forces them to take malicious actions that can either damage the device or steal information. Many modern viruses carry a "logic bomb", meaning the virus won't execute until specific conditions have been met. Some viruses are sophisticated enough that, without expert help, they are difficult to detect before it's too late.
What Is a Trojan?
A Trojan is a type of malicious software that can only be activated by the user of the Android device. These programs can't reproduce themselves, but they mimic legitimate functions that the user will want to tap on. Once the Trojan is activated, it spreads and starts to damage the device. Just like a regular application, a Trojan typically requests administrator access. If you tap the "agree" button, the Trojan gains extensive access to your device.
What Malware Can Do to Android Phones
There are many things that malware can do after infecting Android phones. After all, the point of malware is to generate some revenue for cybercriminals. Malware on Android devices can download malicious applications, open unsafe web pages, send expensive SMS text messages, and steal information. This information can include your passwords, personal information, location, and contact list.
Once a hacker has access to your Android device, they can either use your information themselves or sell it on the dark web. More sophisticated malware takes the form of ransomware, which can lock your phone and encrypt your data and documents. You are then given a deadline to pay a ransom if you want your files and data restored.
How Do I Know If My Android Phone Has Malware on It?
While external damage to a phone is easy to identify, the internal damage malware inflicts is more challenging to detect. In many cases, malware takes up significant resources on your Android phone, creating slowdowns and other symptoms that hint at its presence, so it pays to know what to look for. Here are some signs that malware has infected your phone:
- Your phone has slowed down significantly without an apparent reason
- Your battery drains at a quicker rate than normal
- Applications are taking a long time to load
- The phone is using much more data than it should be
- Pop-up ads are in abundance
- You notice applications on your phone that you don’t recall downloading
- Your phone bills are higher than they should be
How Do I Check for Malware on My Android?
You can do several things to detect the presence of malware on your Android device, the most important of which is to run an antivirus scan. There are many antivirus programs, both free and paid, that you can add to your phone. Keep in mind that the most expensive antivirus program isn't always the best, so make sure you select one that offers complete functionality rather than just a quick-scan feature.
Quick scans check the most common areas of your device for viruses. Full scans, however, are necessary if you want the program to check every facet of your Android phone. A quick scan alone may also give you a false sense of security that your phone is free from viruses and other malware.
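For technically inclined readers, a scan isn't the only way to review what is on a device. The Kotlin sketch below is a minimal, illustrative helper (the function name is made up, and it assumes it runs inside an Android app with access to a Context); it lists user-installed apps together with the permissions they request, so unfamiliar packages or odd permission combinations — such as a flashlight app requesting SMS access — stand out.

```kotlin
import android.content.Context
import android.content.pm.ApplicationInfo
import android.content.pm.PackageManager

// Lists user-installed (non-system) apps and the permissions they request.
// Unfamiliar packages or suspicious permission combinations are worth investigating.
// Note: on Android 11+ full visibility of other packages may require the
// QUERY_ALL_PACKAGES permission.
fun listUserAppsWithPermissions(context: Context) {
    val pm = context.packageManager
    val userApps = pm.getInstalledApplications(PackageManager.GET_META_DATA)
        .filter { (it.flags and ApplicationInfo.FLAG_SYSTEM) == 0 } // skip preinstalled system apps

    for (app in userApps) {
        val label = pm.getApplicationLabel(app)
        val permissions = try {
            pm.getPackageInfo(app.packageName, PackageManager.GET_PERMISSIONS)
                .requestedPermissions?.toList() ?: emptyList()
        } catch (e: PackageManager.NameNotFoundException) {
            emptyList<String>()
        }
        println("$label (${app.packageName})")
        permissions.forEach { println("    requests: $it") }
    }
}
```

The same review can be done by hand under Settings and then Apps; the code merely automates the listing for anyone comfortable with development tools.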
How Do I Completely Remove Malware from Android?
Once you have detected malware on your Android phone, you can remove it by following six simple steps.
Step 1: Immediately Turn Your Phone Off Before Performing Some Research
Upon detecting malware, you should turn the device off entirely while you perform some research. Turning the device off should keep the problem from worsening and may stop the malware from spreading to other networks in the vicinity.
If you know the name of the application or program that contains malware, you should take this time to research the program’s name to learn more about what it could be doing to your phone. If you don’t know the name, consider looking up the symptoms you’ve noticed on another computer. The only way to eliminate malware on an Android phone is to identify the app that’s infected with malware.
Step 2: Turn the Phone On in Safe Mode or Emergency Mode
Once you know which application needs to be uninstalled and deleted, you should turn your phone on in safe mode or emergency mode. Switching to safe mode is relatively simple for most Android devices and can occur by turning your device on, holding the power button down for several seconds, and tapping the power-off button. From here, you should be given “power” options like reboot and safe mode. Once you activate the safe-mode option, your phone will restart. Being in safe mode is necessary to keep the malware from spreading as you uninstall the program with malware in it.
Step 3: Go to Device Settings to Locate the Malicious App
When you’re in safe mode, enter the “settings” section on your Android phone. You can access this mode by clicking the gear-shaped icon on the screen. You can also search for the “settings” section on your device. In settings, scroll down until you see the “apps” option, which you should click on. You’ll then be given a list of the applications that are present on your phone. Look through this list until you notice the app that’s infected and needs to be uninstalled. If the application is a core app, you may be unable to delete it. Instead, you would have the option of disabling the app. However, it’s unlikely that a core app is the source of a virus or malware.
Step 4: Uninstall the Infected Application
Uninstalling an application is easy and begins with selecting the app, which will provide you with options like "force stop", "force close", or "uninstall". Select the uninstall option to get rid of the application that has been causing problems on your phone. There are times when you won't be able to properly delete the application that contains the virus. This scenario can happen if your phone has been hijacked by ransomware. In this situation, the ransomware can get into your administrative settings, ensuring that the app can't be deleted. You can fix this problem by going to the main settings menu and selecting the "security" section. From here, search for the "phone device administrators" area. In this area, you should be able to change your administrator settings, which will then allow you to delete the app.
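For readers who are comfortable with a command line, the same listing and removal can also be done over USB with Android's adb tool, which avoids tapping through menus on a possibly compromised screen. The sketch below is only an illustration -- the package name is hypothetical, and you should verify what a package actually is before uninstalling it:

```python
import subprocess

def adb(*args: str) -> str:
    """Run an adb command against the connected phone and return its output."""
    result = subprocess.run(["adb", *args], capture_output=True, text=True, check=True)
    return result.stdout

# List third-party (user-installed) packages; malware almost always lives here.
for line in adb("shell", "pm", "list", "packages", "-3").splitlines():
    print(line.removeprefix("package:"))

# Uninstall a suspicious package for the current user (hypothetical name).
suspicious = "com.example.fakecleaner"
print(adb("shell", "pm", "uninstall", "--user", "0", suspicious))
```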
Step 5: Opt for a Factory Reset
If you're willing to say goodbye to the current media and content on your Android phone, a factory reset is an excellent option to eliminate the malware. This process does remove viruses and malware, but more potent malware may survive, so follow the reset with a deep antivirus scan to detect anything that remains.
Step 6: Download Malware Protection
When you successfully eliminate the malware, you should focus on downloading malware protection and increasing your knowledge on how to remove malware from Android devices. Ensure that you use a program that will delete unnecessary files, protect your information, and scan for viruses. You should also check for updates regularly to keep the antivirus program up-to-date with the protection it offers.
Tips for Keeping Malware Off of Your Android Phone
It pays to know how to remove malware from Android, but it's even better to keep it off your phone in the first place. Simple steps you can take to keep viruses and other malware off your phone include:
- Make sure that you invest in reputable and robust security software.
- Don't click on links in text messages or emails from senders you aren't familiar with.
- Keep your software and operating system up-to-date.
- Use complex passwords.
- Make sure that you don’t use an unsecured WiFi connection. A VPN may be necessary when you’re accessing public network connections.
- Only install applications from the Google Play Store and other sources you can trust.
Malware can damage your phone and potentially steal your information if you aren't proactive about getting rid of the malware once it has been detected. You can avoid these problems altogether by using robust antivirus software like McAfee Mobile Security, seeking help from experts, and staying informed about modern cyber threats and the risks they pose.
January 27, 2016
5G will be much more than a simple technological evolution of current mobile infrastructures. It will be a change of paradigm for all telecommunications and ICT ecosystems. In fact, 5G will assume the form of a dense and distributed "fabric" of computational, storage and communication functions, entering deeply into our socio-economic reality. In the same way that, metaphorically, a computer has an operating system -- dictating the way it works and providing services for developing applications -- 5G will have a global operating system (GOS) capable of operating converged fixed-mobile infrastructures.
As we may remember, in computing, the adoption of operating systems facilitated program and application development by providing controlled access to high-level abstractions of hardware resources -- such as CPUs, memory, storage and communication -- and of information, such as files and directories. It was a radical change; before that, it was rather hard and complicated to design, write, port and debug programs and applications.
Similarly, the main concept at the basis of the GOS is the definition of the “service,” representing a unifying abstraction across 5G physical resources such as nodes and IT servers for processing, storage and networking across multiple infrastructure domains (DC, WAN, access/edge, terminals/fog) and across different service levels like IaaS, PaaS, and SaaS.
A 5G service, in fact, can be seen as a software entity that provides a network function -- anywhere from ISO-OSI L2 to L7, so it could also be a classic network function or a middle-box -- exports APIs, and is available anywhere and anytime (location- and time-independent). It should be scalable, elastic and resilient, and composable with other existing software components in order, for example, to create a service chain. Services are executed in 5G infrastructure slices, which are made up of sets of logical resources, such as virtual machines connected through virtual networks.
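As a toy illustration of this abstraction -- not any real GOS interface -- a service can be modelled as a named software entity exporting a handler, and a service chain as an ordered composition of such entities running in a slice:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Service:
    """A software entity providing a network function and exporting an API."""
    name: str
    layer: str                        # e.g. "L3 NAT", "L7 firewall"
    handler: Callable[[bytes], bytes]

@dataclass
class ServiceChain:
    """An ordered composition of services executed inside an infrastructure slice."""
    slice_id: str
    services: List[Service] = field(default_factory=list)

    def process(self, packet: bytes) -> bytes:
        for svc in self.services:
            packet = svc.handler(packet)
        return packet

# Compose a hypothetical chain: address translation followed by a crude filter.
nat = Service("nat", "L3 NAT", lambda p: p.replace(b"10.0.0.1", b"203.0.113.7"))
fw = Service("firewall", "L7 firewall", lambda p: b"" if b"DROP" in p else p)
chain = ServiceChain(slice_id="slice-42", services=[nat, fw])
print(chain.process(b"from 10.0.0.1: hello"))
```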
Another driver for the GOS is the need to achieve a high degree of automation in 5G operational processes, thus limiting manual intervention in the infrastructure. Scalability requirements will dictate the adoption of new technical solutions, such as publish-and-subscribe mechanisms and real-time configuration of nodes equipped with standard interfaces such as NETCONF/YANG. The GOS should also provide a certain degree of independence between 5G and the business support systems that make use of it, and will offer a spectrum of orchestration capabilities.
The term orchestration has long been used in the IT domain to reference the automated tasks involved with arranging, managing and coordinating services deployed across different applications and enterprises. In these contexts, orchestration also involves optimized management of IT resource pools to guarantee adequate performance during service provisioning. In the 5G infrastructure, the scope of orchestration has been enlarged to cover network services, with comprehensive operations of both IT and network resource pools.
Eventually 5G will "reduce" the space-time dimensions of our society and economy with radically faster access, larger bandwidth -- an increase of two to three orders of magnitude -- and latency as low as 1 millisecond. Provisioning services will become equivalent to installing and executing software apps onto 5G's GOS. This will pave the way to new paradigms such as "anything as a service": a wider and wider set of ICT services offered by means of new 5G terminals, even far beyond our imagination. Think 5G-enabled intelligent machines, robots, drones and smart things. Imagine, for example, 5G services for improving industrial and agricultural efficiency, enabling decentralized micro-manufacturing, improving efficiency in private-public processes, and creating and maintaining smart environments.
File-based cyberattacks, in which attackers send files, such as Microsoft Word documents or Adobe Acrobat PDFs, poisoned with embedded malicious code that executes once an intended target opens the file, are among the most common ways that hackers compromise computers. Of course, file-based attacks are not new – they have been around for almost as long as personal computers themselves; over 20 years ago, for example, the Melissa worm inflicted over a billion dollars of damage by exploiting a relatively simple malicious macro stored within a Microsoft Word document.
There are several reasons that file-based attacks remain a serious problem after so many years, despite the advent of many technologies intended to block malware-bearing files from ever reaching users. I recently discussed several of them with Dr. Oren Eytan, former head of the Israeli military’s cyber-defense unit, and currently CEO of the cybersecurity product firm, Odix. (I met Dr. Eytan after seeing an announcement that the European Union’s Horizon 2020 Research and Innovation program granted Odix $2 Million to optimize Content Disarming and Reconstruction technology (discussed below) – which typically can be found in use only in larger enterprises – for the SMB market.)
Here are some highlights from our discussion about why file-based attacks continue to plague cyberdefenders around the globe:
1. Files are a common attack vector.
While apps, software, and operating systems vary quite a bit between platforms, essentially every modern hackable computer utilizes files of some sort. Likewise, a large portion of the data utilized by both individuals and modern-day corporations resides within files. As such, as Dr. Eytan pointed out, files offer tremendous potential for reaching targets.
2. Files continue to grow larger with time.
As Dr. Eytan pointed out, it is significantly easier to hide malware (from both humans and cybersecurity countermeasures) within large files, than within small files whose sizes appear suspicious when malware payloads are added. As such, the tremendous growth in the average size of files over the past couple decades has helped enhance the attractiveness of files to cyberattackers looking for distribution mechanisms for malware. In one generation, for example, we have migrated from text files on 1.44-Megabyte floppy disks to high-definition-video files on 4-Terabyte portable hard drives – opening the door for the file-based hiding of malware that is many times larger and more sophisticated than was previously possible.
3. Signature-based anti-malware systems often miss new malware variants.
Hackers create many new variants of malware every day. Yet, at least historically speaking, many anti-malware technologies block malware by looking for only known strains. As I have discussed before, by definition, such a reactive approach is deficient – as it will always miss the most recently released malware, about which the security software obviously has no knowledge, and which often represents the most potentially damaging malware in the wild.
4. Heuristic analysis is imperfect.
Various advanced anti-malware technologies that look for suspicious behavior patterns exhibited by code as it runs may catch some as-of-yet unknown strains of malware, but, are likely never going to be 100% effective; as is also the case with the aforementioned signature-based defenses, heuristics often miss both the most advanced variants of malware and the most targeted forms of malware, which are also among the most likely variants to inflict the worst damage. As Dr. Eytan pointed out, sandboxing and analyzing software also utilizes significant computing resources – meaning that heuristic analysis technologies can slow down systems, introduce unwanted latency, and otherwise adversely impact performance – all while delivering inadequate results. Such systems are also prone to occasionally raising false alarms – which can unnecessarily disrupt various business functions.
5. People are curious – and social engineering works.
Attackers often socially engineer people in order to get the latter to open poisoned files. Whether through mass-emailed salacious-subject emails, or targeted messages bearing alarming subjects and impersonating known senders, criminals know how to pique human curiosity so as to incentivize people to open a file. As I have mentioned many times before, the ease with which cyberattackers can exploit human vulnerabilities is not readily addressable: security software has advanced by many generations over the past 30 years, but it takes the human brain many millennia to evolve significant improvements.
6. USB ports are ubiquitous.
It is not a secret that because people are curious, they are prone to inserting untrusted USB devices into computers; it is easy to ship malware-infected USB devices to targets in the mail as “giveaways,” leave poisoned USB thumb-drives in the parking lots of corporate office buildings, or utilize various other means of ensuring that innocent-looking attack-vehicles reach the hands of employees within a target organization.
7. Email is the number one way of targeting smaller businesses.
Dr. Eytan also mentioned that because email is one of the most commonly used forms of communication, and, because, by default, it allows anyone to send untrusted attachments to anyone else, it is the primary method by which attackers send malware into smaller businesses, which are often not equipped with security software to stop email-borne attack-code from reaching unsuspecting users.
8. Hackers innovate.
Creative hackers constantly find new ways to hide malware within files, and develop mechanisms that help their cyberweapons avoid detection by anti-malware technologies. From embedding their cyberbombs within password-protected and encrypted documents, to splitting malware into multiple files, to embedding poisoned files within non-poisoned files (e.g., poisoned PDF within a DOCX), to using stealthy malware that self-encrypts into various different forms before spreading, to any one or more of many other advanced techniques, attackers find ways to help ensure that dangerous files reach users. Remember, attackers have a huge advantage over defenders in this regard – the former can design malware and test it against anti-malware security products available on the market, allowing them to refine and improve their code until it reaches the level of non-detectability that they wish it to have.
A promising approach to defense
One interesting and promising approach to addressing the file-borne attack conundrum is known as Content Disarming and Reconstruction, a technology that protects against poisoned files, not by looking for malware embedded within them, but by sanitizing all files; files are deconstructed and reassembled as new files that do not include any types of unfamiliar file content that may have been present in the originals. Of course, different reassembly rules may apply to different organizations, or even to people within organizations: A firm may ban all Excel Macros in general, for example, but allow the CFO and her staff to use specific authorized Macros; a CDR implementation in such a case would rebuild Excel files without Macros for general users, and rebuild Excel files with only approved Macros for the accounting team. As mentioned above, I met Dr. Eytan after seeing an announcement about the EU granting his firm $2 Million to build CDR technology for smaller businesses. In fact, while many large security vendors have begun to add various CDR capabilities to their existing product suites, startups focusing on the space are already making names for themselves with innovative offerings, and are also actively spreading the technology downstream – which will hopefully soon allow small and mid-sized businesses, as well as individuals, to join their larger counterparts in taking advantage of CDR capabilities, and thereby reduce their exposure to file-based attacks.
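To make the principle concrete, here is a deliberately simplified sketch of CDR-style sanitization for Office documents, which are ZIP archives under the hood. Rather than scanning for malware, it rebuilds a copy of the file that simply omits macro and embedded-object parts. This is only an illustration of the concept -- the policy list is hypothetical, and a real CDR product does far more:

```python
import zipfile

# Parts that this (hypothetical) policy refuses to carry over into the rebuilt file.
DISALLOWED_PARTS = ("vbaProject.bin", "oleObject", "activeX")

def rebuild_without_active_content(src_path: str, dst_path: str) -> list[str]:
    """Rebuild a ZIP-based Office document, dropping disallowed parts."""
    dropped = []
    with zipfile.ZipFile(src_path) as src, \
         zipfile.ZipFile(dst_path, "w", zipfile.ZIP_DEFLATED) as dst:
        for item in src.infolist():
            if any(marker in item.filename for marker in DISALLOWED_PARTS):
                dropped.append(item.filename)   # do not copy macros or embedded objects
                continue
            dst.writestr(item, src.read(item.filename))
    return dropped

# Example: sanitize an incoming attachment before it ever reaches the user.
# removed = rebuild_without_active_content("invoice.docm", "invoice_clean.docm")
# print("Stripped parts:", removed)
```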
Just when you think MIT has developed everything achievable with modern technology, they announce yet another innovation to change the face of medicine and healthcare.
Engineers from MIT have developed a tiny robot which can move through narrow pathways. The wormlike guidewire, or Robo-thread, is made from a nickel-titanium alloy and is magnetically steerable.
The engineers tested the threadlike robot in a life-size replica of the human brain. With pinpoint accuracy, they were able to remotely guide it through the circuitous, winding vasculature of the brain model using large magnets.
The robot is tipped for use in endovascular procedures alongside existing tech. The scientists and engineers behind the development, led by Xuanhe Zhao, envisage its potential uses in the future to include clearing blood-clots and administering medicines.
Speaking of a potential use treating strokes, Zhao said: “If acute stroke can be treated within the first 90 minutes or so, patients’ survival rates could increase significantly…If we could design a device to reverse blood vessel blockage within this ‘golden hour,’ we could potentially avoid permanent brain damage. That’s our hope.”
Guidewires like this are currently in use to treat blockages and lesions in blood vessels, but they are operated manually and thus leave doctors open to the risk of radiation from fluoroscopy.
The team at MIT understood that their development could mitigate this risk, by allowing the soft robot to be manipulated remotely, rather than a surgeon manually manoeuvring a guidewire through the blood vessels.
Yoonho Kim, the lead author of the paper detailing their development, said: “Existing platforms could apply magnetic field and do the fluoroscopy procedure at the same time to the patient, and the doctor could be in the other room, or even in a different city, controlling the magnetic field with a joystick.”
MIT’s thread-like robot becomes the latest in a string of innovations which aim to take surgery and medical procedures remote.
Shadow Robot Company have been working on a robotic hand which could potentially allow surgeons to operate from afar. The Tactile Telerobot has already been used to move chess pieces from over 5,000 miles away.
Could we see a future in which operating theatres are staffed entirely by robots and remote surgeons?
For those in the security space or at C-level, you’ve likely seen a recommendation about how to manage encryption and corresponding keys. Or at least something about encryption needing further consideration.
Chances are, if you're reading this you have at least an interest in the topic and are researching relevant products. In this post, I'll run through the role of encryption for your organisation, and how (when it's implemented effectively) it can remove a significant amount of risk.
How do public/private encryption keys work?
Stepping back slightly, let's establish what encryption is. Encryption is a process of encoding information so that only those who should see it (those with a key) are able to. It's important to differentiate this from cryptography, which is a field of study usually based on math and algorithms (which may or may not be mathematical).
When encryption is used, typically it will be asymmetric. This means two keys (a public and a private key) are used for encryption/decryption. To explain this, let's use the metaphor of a postbox that any member of the public can drop mail into. The public key is the slot: it is only wide enough to drop a letter in, not to retrieve it once it is through. Only the postal employee has a private key to open the mailbox and retrieve its contents for onward delivery.
This is basic public/private key exchange in practice. We’ll come back to physical proximity shortly, because of its relevance with jurisdictional governance. What should be obvious from this metaphor is that if the mail/postal employee loses or divulges the key, then unauthorised people could see this confidential information. Besides the customer having implicit trust in their post arriving at its destination in a timely manner, there is an expectation it has not been tampered with.
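For developers, the same postbox idea looks roughly like this in code. A minimal sketch using the widely used Python cryptography package (the message and key size are arbitrary choices for illustration):

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# The recipient generates a key pair and publishes only the public key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Anyone can "drop a letter in the postbox" using the public key...
message = b"Transfer details: account 12-34-56, amount 1000"
ciphertext = public_key.encrypt(
    message,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

# ...but only the holder of the private key can open the box and read it.
plaintext = private_key.decrypt(
    ciphertext,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
assert plaintext == message
```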
Why encryption removes risk for auditors
What if, for argument’s sake, the letter in our example involved mortgage details with bank account information instructing a transfer of funds? Extending this hypothetical scenario, the customer would have no way to trace their mail once it is 'in the system' and anyone with the inclination and access *could* carry out tampering. It is this possibility of tampering that creates risk.
Encryption when implemented well removes tampering risk entirely. If the letter itself is encrypted and only the recipient has the key to decrypt it, the concern of tampering goes away. However, this is not the only solution in our scenario. This is exactly the sort of thing an auditor would recommend you put in place for sharing confidential information via email.
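In practice, the tamper-evidence an auditor is looking for usually comes from a digital signature (or authenticated encryption) used alongside the encryption itself: the sender signs the message with their private key, and any alteration in transit causes verification to fail. A minimal sketch, again using the Python cryptography package:

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"Please transfer 1000 to account 12-34-56"

# The sender signs with their *private* key.
signature = sender_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# The recipient verifies with the sender's *public* key; tampering raises an error.
tampered = b"Please transfer 9000 to account 99-99-99"
try:
    sender_key.public_key().verify(
        signature, tampered,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
except InvalidSignature:
    print("Message was altered in transit -- reject it.")
```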
The role of an auditor is to help an organisation reduce their risk and in theory, reduce the likelihood of losing their business revenue. For example, does the hypothetical possibility of a government compelling a company to hand over keys exist? In some countries, certainly. This is where auditors can find value in choosing the physical location of encryption keys, and why Egress offers jurisdictional control over the use of our services.
What encryption regulations are in place?
Behind the scenes, there is surprisingly very little in the way of legislation or widely adopted standards, such as ISO 27001 or NIST 800-53, telling us how to implement encryption. Only that it should be documented.
One reason for this is that as technology advances, so does the ability to break encryption. We have seen certain algorithms such as SHA-1 fall to the heap of simplicity like a potato battery. Instead, standards bodies focus on implementation of various types of encryption. The NIST 800-38 series is a good example often referred to in the FIPS 140-2 draft Annex A.
ISO 27701, alternatively, focuses on the type of information that should be encrypted. This is a helpful bridge for data protection legislation where encryption is mentioned, such as UK/EU GDPR Article 32. With this basic background, it becomes apparent that an auditor needs to consider cryptography to give a good assessment of encryption.
Sometimes, the theoretical possibility of a risk can be heightened beyond what is necessary for most organisations. It is an organisation's responsibility to determine the level of action to take from an auditors’ recommendations.
Looking for an email encryption solution that is easy to use, keeps information safe, and enables a full chain of custody audit? Egress Protect delivers all of this and more – learn more here.
At the turn of the century, an in-orbit satellite would have cost more than $17m. These same capabilities can now be put into the same orbit for $200,000.
Satellites rotate around a gravitational body – such as the Earth – at a distance determined by their speed, but the orbit may not be circular, and the exact path of the satellite can provide advantages in terms of capability and availability.
Some orbits will provide better coverage for specific countries, times of day or antenna designs. They may be suited to relaying data between other space assets, avoiding known belts of electromagnetic radiation or reducing the revisit time to specific areas.
Developing such orbits requires a detailed understanding of gravitational fields in four-dimensional space – including time – and several orbits are protected by patent in their most useful applications.
Many new constellations of satellites are being launched into low Earth orbit (LEO), less than 1,000km above the Earth’s surface. This provides good observation and communication services. But these satellites are also skimming our planet’s atmosphere, and can only survive for about five years before de-orbiting and burning into dust.
These constellations therefore require a constant replacement cycle – in contrast to satellites in geostationary orbit, which might use the same technology for 20 years. New services must be judged on the upgrades they plan to deploy – and their financial ability to support the constant refreshes – rather than the capabilities they are already deploying.
Communications satellites in LEO offer low-latency communications, while commercial Earth observation satellites are photographing every spot daily to provide time-based analysis. Much of the technology used in these “new space” projects is borrowed from the smartphone industry – efficient batteries, compact antennas and impact-resistant electronics – but much has been developed to address the problems specific to working in space.
A circular orbit – around the equator of a planet – offers considerable utility, but as scientists gained experience working in space, it became clear that alternatives could open up new opportunities. Different altitudes – low Earth orbit (LEO), medium Earth orbit (MEO), geostationary Earth orbit (GEO) – provide one variable, but elliptical (stretched out) orbits can support new use cases. Combinations of orbit can be used to maximise service availability and minimise idle time – a critical factor in the profitability of a satellite constellation.
Several well-known orbits are already in common use. Sun synchronous is an orbit, generally used within LEO altitudes, which places the satellite over specific spots on the Earth at the same time of day, every few days. This is used heavily by Earth observation (and spy) satellites to provide consistent shadowing of an observed region.
An orbit in which the satellite passes over both poles is known as a polar orbit. This allows the planet to circle beneath it, enabling the satellite to pass over every point on the surface, and to pass over each pole every 100 minutes or so to collect data from static sensors or photograph specific locations.
Molniya is a highly elliptical orbit, which increases the time that the satellite spends over a particular latitude. This is generally used to increase availability for communications systems covering northern latitudes, such as Canada and Russia.
A geostationary orbit matches the rotation speed of the Earth and traces the equator, providing satellites that seem to hover above a specific point on the surface. Such an orbit requires an altitude of 35,786km to match the speed of the Earth, resulting in high latency, but such an orbit enables dish antennas to be pointed at the satellite.
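That 35,786km figure falls straight out of Kepler's third law: an orbit whose period matches one sidereal day must sit at a particular radius. A quick back-of-the-envelope check:

```python
import math

GM = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
T = 86_164.1             # one sidereal day, in seconds
EARTH_RADIUS = 6_378e3   # equatorial radius, in metres

# Kepler's third law: T^2 = 4*pi^2 * a^3 / GM  ->  a = (GM * T^2 / (4*pi^2))^(1/3)
a = (GM * T**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (a - EARTH_RADIUS) / 1000
print(f"Geostationary altitude ~ {altitude_km:,.0f} km")   # ~ 35,786 km
```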
Different orbits and orbital altitudes offer different features. Hybrid services combine LEO, MEO and GEO orbits to optimise the coverage, bandwidth and latency of a connection. GEO, for example, offers wide-area coverage, while LEO can provide low-latency communications.
A communications protocol, such as TCP/IP, may be broken down into layers that can be handled by satellites in different orbits – the channel being reassembled at the receiving equipment.
Management of data traffic, which is sensitive to latency, can be handled by satellites in LEO, while bulk traffic flows are delivered from satellites in MEO or GEO orbits. This enables the use of legacy assets, as well as reducing the need for high-capacity satellites in dense LEO orbits.
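The latency trade-off is easy to quantify. Even at the speed of light, a round trip through a GEO satellite adds roughly half a second, while a LEO hop adds only a few milliseconds. An illustrative calculation, assuming typical altitudes and a straight up-and-down path (real-world figures also include processing and ground-segment delays):

```python
C = 299_792.458  # speed of light, km/s

def round_trip_ms(altitude_km: float) -> float:
    """Propagation delay for user -> satellite -> ground -> satellite -> user."""
    one_hop = altitude_km / C          # user to satellite, in seconds
    return 4 * one_hop * 1000          # up and down in each direction, in ms

for name, altitude in [("LEO (550 km)", 550), ("MEO (8,000 km)", 8_000),
                       ("GEO (35,786 km)", 35_786)]:
    print(f"{name:>16}: ~ {round_trip_ms(altitude):.0f} ms round trip")
```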
Network protocols for satellite communications
The TCP/IP protocol used for internet communications requires that received packets are acknowledged within a specific timeframe and with consistent timing. This makes TCP/IP very difficult to use with MEO constellations and almost impossible to use over GEO without modification.
By providing the acknowledgements over LEO while the bulk data is transmitted over MEO, a combined constellation can deliver data using standard TCP/IP communications. This enables, for example, interaction with a video service to be performed over LEO, while the selected video is delivered from a satellite in GEO.
Similar methods can be applied to other network protocols, providing the benefits of both orbits without the drawbacks of either. Such a combined constellation offers:
- Simplified application development: The ability to use standard internet protocols without going through a required proxy or protocol conversion simplifies the development of applications, as well as removing potential areas of incompatibility that can slow deployment times.
- Reduced constellation density: MEO (and GEO) satellites have a much larger footprint, being able to provide services across wide areas without customers needing low-angled antennas, which are more prone to interruption from buildings and trees. This results in a smaller, and more economical, constellation being able to provide a comparable service.
- Greater availability of spectrum: Splitting the control plane from the data plane allows more efficient use of the radio spectrum, because these two components of the network stream can use different radio frequencies.
- Broader coverage: The larger footprint from higher altitude allows coverage to be extended into areas without receiving ground equipment, and without needing satellite-to-satellite communications systems.
- Exploitation of existing assets: Many companies have existing assets in GEO orbits. Complementing these with LEO services can enable these assets to compete with wholly LEO constellations.
The potential for satellite comms
Various companies are seeking ways to increase the value of existing (deployed) orbital assets. The value of satellite data is falling rapidly, as broadcast television is increasingly supplanted by on-demand services, and LEO constellations threaten the market for rural internet access.
These companies may be hopeful that hybrid systems will have a long-term future, but much will depend on the development of low-cost endpoint equipment capable of supporting hybrid networks. Given that they have a 15-year life, and few new communications systems are being deployed in GEO, the existing assets may be operational for another decade. This means the greatest opportunity is in the next five years.
As a final point, satellites in orbit often require large reflectors or antennas, and almost always need extensive solar arrays to power the mission. Such objects are difficult to launch, so various technologies and techniques are needed to reduce the size during launch.
Techniques being used include folding arms, springs, origami techniques and memory metals. The best approaches offer a significant reduction in size, while keeping weight low, but, most critically, can offer very high reliability.
Because reliability is so paramount, the technology behind such systems is rarely complicated, so interesting innovation is mostly in the ways in which the components are folded and unfolded, and the method used to push them into deployment.
This article is an excerpt from the Gartner report, “Emerging technologies: emergence cycle for satellite systems”. Bill Ray is a vice-president analyst at Gartner.
In recognition of Stress Awareness Month, we delve into the seldom-discussed realm of mental health within the data-intensive workplaces that characterize the modern age. As corporations and professionals grapple with the incessant flow of data, analytics, and artificial intelligence, the pressure to remain perpetually informed and make data-driven decisions heightens. Naturally, this relentless influx of information can become a double-edged sword, sharpening the competitive edge of businesses while simultaneously dulling the well-being of their employees.
The rise of big data and AI has revolutionized how decisions are made, risks are assessed, and opportunities are seized. However, less attention is paid to the hidden psychological costs of such an environment. How does the constant pressure to analyze and interpret complex data affect mental health? The cognitive load is substantial, and employees are often expected to sift through this data with precision, speed, and the understanding that the results may sway pivotal business decisions. This can lead to a phenomenon known as decision fatigue, which not only impairs an individual’s ability to make choices but also can contribute to heightened stress levels and burnout.
Organizations, however, are not powerless in the face of this challenge. There are strategies that can be implemented to mitigate the stress associated with data overload. For instance, introducing more robust data management systems can help streamline the flow of information, ensuring that employees have access to relevant, high-quality data rather than getting swamped by the enormity of irrelevant data points. Providing training on how to efficiently handle data and leverage AI tools can also alleviate the cognitive burden on workers.
Industry leaders are tasked with fostering a work culture that prioritizes mental well-being, balancing the need for insightful data with the health of their employees. To achieve this, companies can establish clear guidelines on when to disconnect, encourage regular breaks, and create spaces for relaxation and social interaction. Moreover, integrating mindfulness practices and stress management programs into the workday can significantly improve mental resilience.
Expert opinions from psychologists highlight the importance of a proactive approach to mental health at work. Data scientists can offer insights on creating algorithms that not only process data efficiently but also consider the user experience to minimize stress. Corporate wellness program coordinators can share successful case studies where employee well-being has been placed at the forefront, demonstrating that it is possible to maintain high productivity without sacrificing mental health.
In conclusion, as we navigate the data-dense landscape of the modern workplace, it is crucial that mental health becomes a central component of the conversation. Only when businesses acknowledge and address the mental toll of data overload can they truly cultivate an environment that is both intellectually stimulating and psychologically supportive. In doing so, they not only enhance the well-being of their employees but also secure the long-term sustainability of their operations in an age increasingly governed by data and AI.
One of the biggest challenges of collecting and organizing data starts before you even have any data. This of course is deciding how to best organize the information you plan to track. Excel is a powerful spreadsheet tool that tracks all kinds of data. The reason Excel is so incredibly powerful is that it has built-in tools like filters which allow you to narrow the focus of data, sorting which helps find specific information, formulas to make computations, and charts and tables to track trends and much more.
Unfortunately, if you are not careful how you set the data up for collection, or you are exporting the data from another application, it might not be in the best format. For example, when multiple pieces of data are in the same column, it can be much more challenging to effectively and easily manipulate the data.
This post discusses how to separate items in a single column into multiple columns so you can format the data to your unique needs.
How to Separate Items in One Column into Multiple Columns
One of the most common mistakes people make when setting up data in Excel is including too many pieces of information in the same column or row. There are many examples where consolidating more than one piece of data into the same column can turn out to be less than ideal. Two of the most common examples of where too much data is placed in a single column are:
- Names - first and last names should always be separated. This makes it much easier to sort later when you have more than one person with the same last name, etc.
- Addresses - always keep the city and zip code in their own columns independently of the street and number. There are so many reasons why you might want to sort by zip code or drill down futher by city when looking at your data and this is far easier to do from the beginning.
However, sometimes you are not in control of how the data is available to you, most commonly because you exported the data from an application or program. When this is how you acquired the data, the format of the data is usually far more limited. Luckily, the data does not need to remain this way because there is a quick and easy way to separate the data out so that it is much more usable for you.
To split the data in a single column into several columns:
- Open the spreadsheet with the data.
NOTE: In this example there is no data to the right of the column that has information that needs to be broken out. However, if there is data directly to the right of the data you are splitting out, you will first need to add blank columns to make room for the data you are splitting. If you don't, you can accidentally replace existing content.
- Click on the Data tab.
- Highlight the rows with the data to extract.
- Click on "Text to Columns" in the Data Tools section of the Data tab.
- In the first step of the convert text to columns wizard, check the radio button for the option that most closely resembles the format of your data. The preview at the bottom shows how your data can be split. Most of the time delimited works best for what I am doing, but fixed width can also be used.
- Click "Next" to move to step 2.
- Check the boxes next to each delimiter that your data has. Delimiters are used to separate the data, in essence defining what is considered a break in the data so the data can be split.
- Once all your delimiters have been selected, and the data preview of how the data will be split looks correct, click "Next".
- In step 3, choose the data format.
- Again, look at the data preview at the bottom of the box to verify it looks correct, then click "Finish".
This will split the data into columns just as it showed in the preview window at the bottom of each step.
The data is being split into several columns to the right of the original data. This is why it is so important to add empty columns before starting the wizard which will prevent existing data from being replaced. Whatever data you are splitting, be sure to add more than enough columns for the data just in case you miscounted the number of columns as this ensures other data is not overwritten.
Keep in mind, once the data has been split, you can remove any items you do not want or need. For instance, in the example above, you might not care about middle names or middle initials, so we could easily remove that column. This is true of any information that you split into separate columns.
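If the same split has to happen repeatedly -- for example, every time a report is exported -- it can also be scripted outside Excel. A minimal sketch using the pandas library (the file name and the "Full Name" column are hypothetical):

```python
import pandas as pd

df = pd.read_excel("contacts.xlsx")

# Split "Full Name" on the first space into two new columns,
# mirroring a delimited Text to Columns run.
df[["First Name", "Last Name"]] = df["Full Name"].str.split(" ", n=1, expand=True)

# Drop anything you no longer need, then save a copy of the reshaped data.
df = df.drop(columns=["Full Name"])
df.to_excel("contacts_split.xlsx", index=False)
```

The n=1 argument splits only on the first space, so everything after it lands in the second column.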
When you have multiple pieces of data in a single column in Excel, it can seem overwhelming to try and get useful statistics and information from it. Luckily, there is an easy way to split up content so it is in a more usable format. Excel can separate information into different columns so you can better use it as well as allowing you to remove information you do not need. Using the text to columns utility is quick and easy and can save tons of time that might otherwise be spent re-entering data.
As always, any shortcuts that make it faster to get information into a usable format are helpful!