Reinforcement learning (RL) is a category of machine learning that uses a trial-and-error approach. RL is more goal-directed than either supervised or unsupervised machine learning. Because it uses a dynamic model with rewards and penalties, reinforcement learning is a powerful means of solving business problems that lack a large historical dataset for training. Reinforcement learning models learn from interaction, an entirely different approach from supervised and unsupervised techniques, which learn from history to predict the future.

Reinforcement learning models use a reward mechanism to update model actions (outputs) based on feedback (rewards or penalties) from previous actions. The model is not told what actions to take; rather, it discovers which actions yield the most reward by trying different options. A reinforcement learning model (the "agent") interacts with its environment to choose an action and then moves to a new state in the environment. In the transition to the new state, the model receives a reward (or punishment) associated with its previous action. The objective of the model is to maximize its cumulative reward, allowing it to improve continually with each new action and observation.

For example, if you want to train a machine learning model to play checkers, you are unlikely to have a game tree that models all possible moves in a game or a comprehensive historical dataset of past moves (checkers has roughly 10^20 possible positions). Instead, reinforcement learning models can learn game strategy through rewards and punishments. To test this approach, a team from the software company DeepMind trained a reinforcement learning model to play the strategy board game Go. With a game tree of about 10^360 possible combinations of moves, Go is hundreds of orders of magnitude more complex than checkers. The DeepMind team's model successfully defeated reigning Go professional world champion Lee Sedol.
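The reward-update loop described above can be made concrete with a small sketch. The following is a minimal, illustrative example of tabular Q-learning on a toy five-state environment; the environment, hyperparameters, and variable names are assumptions chosen for illustration and are not from the article or from DeepMind's work.

```python
import random
from collections import defaultdict

# Toy environment: states 0..4 on a line; reaching state 4 yields reward +1.
def step(state, action):
    next_state = max(0, min(4, state + (1 if action == 1 else -1)))
    reward = 1.0 if next_state == 4 else 0.0
    done = next_state == 4
    return next_state, reward, done

q = defaultdict(float)            # Q[(state, action)] -> estimated value
alpha, gamma, epsilon = 0.1, 0.9, 0.2

for episode in range(500):
    state, done = 0, False
    while not done:
        # Explore occasionally; otherwise exploit the best-known action.
        if random.random() < epsilon:
            action = random.choice([0, 1])
        else:
            action = max((0, 1), key=lambda a: q[(state, a)])
        next_state, reward, done = step(state, action)
        # Reward feedback updates the value of the action just taken.
        best_next = max(q[(next_state, a)] for a in (0, 1))
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# Learned policy: every state should prefer moving right (action 1).
print({s: max((0, 1), key=lambda a: q[(s, a)]) for s in range(4)})
```

The key point mirrors the article: the agent is never told the correct move; repeated reward feedback alone shifts its action values until the rewarding behavior dominates.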
Source: https://c3.ai/introduction-what-is-machine-learning/reinforcement-learning/
IMPLEMENTING A VIRTUALIZED LEARNING ENVIRONMENT IN WEIßENBURG

In the schools of Weißenburg, Germany, terminal service-based virtual learning environments with "thin clients" from NComputing have been rolling out to classrooms since 2019. Saddled with outdated operating systems, computers, and networks from the previous "cooked their own digital soup" approach, an IT teacher asked NComputing for a pilot solution to the schools' many woes. NComputing partnered with the school's IT administrators and teachers to solve longstanding problems with their digital learning environment.

"At the time, the students were still working on computers with the Windows 7 operating system, which was already outdated," says NComputing specialist Mirko Lasarz. "Internet security, vandalism, and a high level of support were required for the many desktop computers."

The forward-thinking staff didn't just want to solve the pains of the old system. The new system would need to be somewhat future-proof: keep up with the fast pace of technology, integrate modern and existing learning tools, incorporate sustainable solutions, reduce operating costs, and be easy to deploy at other locations.

The pilot focused on sustainability: making what the schools already had work better while adding technology that enhances everything else. The PCs running Windows 7 were outdated, but the hardware was still serviceable. By wiping the operating systems off these devices and installing LEAF OS, NComputing's repurposing software, the team converted these computers to thin clients, moving all future data from the devices to the servers. LEAF OS is a Linux-based operating system that provides the basics for virtualized computing. It communicates with the virtualization servers and displays a unique Windows desktop for each user.

Once the solution was verified during the pilot, the first deployments were planned for a vocational high school and the Altmühlfranken School in Weißenburg-Gunzenhausen. Converting the desktop PCs with LEAF OS allowed them to access virtualized Windows instances via RDS on terminal servers in the school basement. From a student's perspective, there is no way to tell they are running a virtual computer. They still use the same physical devices (keyboard, mouse, display, and PC) and have access to the latest Windows operating system and software. Because servers in the basement do the actual computing, the PCs stay cool and quiet.

Teachers get a bump in productivity, too: no more waiting for the PC systems to boot. "The teachers now have more time for actual teaching. We save on electricity and maintenance, and we can add new workstations much more easily when necessary," says Norbert Wörlein, the digitization officer at the Weißenburg district office. It often took five to ten minutes to start up all the desktop computers and launch the software needed for teaching. The new thin clients are ready for operation in moments. With a unified digital setup, "Teachers can now help each other more easily if a problem arises, exchange concepts once they have found them, and take them from classroom to classroom."

The Altmühlfranken-Schule Weißenburg-Gunzenhausen uses whiteboards, projectors, and document cameras in everyday teaching. NComputing has integrated these and other modern digital visual aids. The infrastructure controls these devices, processes their image and video data, sends the output to the thin clients, and displays it smoothly without needing expensive, power-hungry local graphics cards.

When teachers change classrooms, they can take their entire teaching environment with them and continue precisely where the last session ended. The computing power has moved out of the classroom and into a climate-controlled, protected basement. NComputing software running there manages the operating system and software that each thin client taps into. It manages software licenses, provides access controls, and remotely administers any maintenance or updates the thin clients need. The number of people with physical access to the system's core is restricted, increasing overall security.

The new architecture protects the school and its IT infrastructure against vandalism by boisterous students, introduced malware, and theft better than the previous Windows PCs did. "A thin client separated from its network is virtually useless and unsellable, so it is not a worthwhile target for thieves," explains Mirko Lasarz. The Linux-based operating system running on the thin clients is also far less susceptible to the malware that otherwise tends to infect Windows-based computers through contaminated data media, phishing, or direct Internet attacks.

With the elimination of power-hungry PCs, a reduced need for air conditioning in the classrooms, and the consolidation of computing power in the basement, the schools have seen a significant reduction in electricity usage. IT admins can largely stay out of the classrooms, orchestrating all updates and maintenance remotely.

At the Altmühlfranken school alone, 36 LEAF OS thin clients are in use along with the supporting IT infrastructure. Across all the projects in Weißenburg, NComputing has already set up around 120 thin clients at the participating schools. "In view of the good experience, we want to convert more schools together with NComputing in the future," said Norbert Wörlein. "The partnership has always been very trusting, and the support at NComputing is great, but above all, the whole concept is convincing in practice. These solutions help us save significant resources and make IT at our schools more secure."
Source: https://www.ncomputing.com/resources/customer-success/implementing-virtualized-learning-environment-wei%C3%9Fenburg
Crowdsourcing generally espouses openness and broad-based cooperation, but it also brings out people's worst competitive instincts.

Crowdsourcing competitions have fundamentally changed the way idea-sharing takes place online. Famous contests, such as Coca-Cola's 2012 crowdsourced campaign for a new logo and the Chicago History Museum's crowdsourced project for a new exhibit last year, have created buzz around the practice. By tapping into the collective intelligence of the internet masses, information and ideas can be generated, edited, verified, and published without a middleman (and, as some critics of the practice have suggested, without a competent professional). Now, a new study says that the same open-source platforms that make crowdsourcing contests possible also make them vulnerable to malicious behavior.

The study, conducted by researchers from the University of Southampton in the UK and National Information and Communications Technology Australia (NICTA), looked at several recent online crowdsourcing competitions and analyzed participants' behavior through the "Prisoner's Dilemma" scenario. This analysis, often used in game theory, shows that two people might not cooperate with each other even when it is in their common interest to do so. Crowdsourcing generally espouses openness and broad-based cooperation, but the researchers explained that it also brings out people's worst competitive instincts.

"[T]he openness makes crowdsourcing solutions vulnerable to malicious behaviour of other interested parties," said one of the study's authors, Victor Naroditskiy of the University of Southampton, in a release on the study. "Malicious behaviour can take many forms, ranging from sabotaging problem progress to submitting misinformation. This comes to the front in crowdsourcing contests where a single winner takes the prize."

One competition the researchers examined was the US Defense Advanced Research Projects Agency's (DARPA) Shredder Challenge, which consisted of five separate puzzles involving a number of destroyed documents from war zones. Participants had to identify the document subject matter and provide the answer to a puzzle embedded in the content of the reconstructed document. The number of documents, the document subject matter, and the method of shredding were varied randomly. The team from the University of California at San Diego (UCSD) had a lead and was on track to win, the researchers explained, but fell victim to a "relentless number of coordinated overnight attacks." The fact that each team's progress was publicly known on an open-source platform did not deter this behavior: though it was in other participants' interest to let the UCSD team's clues inform their own search, competing teams impeded its progress with malicious hacking attacks.

The researchers said their findings showed that "despite crowdsourcing being a more efficient way of accomplishing many tasks, it's also a less secure approach."
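The Prisoner's Dilemma logic the researchers applied can be illustrated with a minimal sketch. The payoff values below are the classic textbook ones, chosen only for illustration; they are not taken from the study.

```python
# Illustrative Prisoner's Dilemma payoffs (higher is better for each player).
# These are the standard textbook values, not figures from the study.
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

# Whatever the other player does, defecting pays more for you individually,
# even though mutual cooperation beats mutual defection for both players.
for other in ("cooperate", "defect"):
    mine_coop = payoffs[("cooperate", other)][0]
    mine_defect = payoffs[("defect", other)][0]
    print(f"other={other}: cooperate -> {mine_coop}, defect -> {mine_defect}")
```

Defection dominates in both cases, which mirrors the contest behavior the study observed: sabotage paid individually even though cooperation would have served the common interest.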
Source: https://www.nextgov.com/digital-government/2014/09/crowdsourcing-competitions-encourage-malicious-behavior-study-finds/93410/
Researchers for centuries have relied on observational and theoretical astronomy to study the stars, using telescopes and mathematical calculations to view planets and other objects, determine how they relate to each other, delve into mysteries like black holes and dark matter, and put into perspective the Earth's place in the universe. With the advances in compute technology over the past few decades, researchers can now more easily view outer space through the lens of ones and zeroes encapsulated in simulations, seeing things far away and large that would otherwise be invisible.

The National Astronomical Observatory of Japan (NAOJ) for much of this decade has relied on supercomputers from Cray to power its computational astronomy efforts. Now the research organization has a much more powerful system at its disposal. Cray has installed the next-generation supercomputer, the NS-05 "Aterui II," which delivers three times the performance of its Aterui predecessor and is the most powerful system in the world dedicated to astrophysical calculations. The system, with a peak performance of just over 3 petaflops, is based on Cray's XC50 supercomputer and is powered by 40,200 Intel Xeon Gold 6148 processor cores, with each 20-core chip running at 2.4 GHz. The system came online in early June.

It's the third generation of Cray-based Aterui systems to power research at the NAOJ's Center for Computational Astrophysics. From 2013 to mid-2014, the center used the initial Aterui, a system based on Cray's XC30 supercomputer that had a peak performance of 502 teraflops, 24,192 cores across eight-core Xeon E5-2670 processors, and 94.25 terabytes of main memory. As demand for the system grew, the NAOJ revamped Aterui. Researchers swapped out older CPUs for newer ones, upgrading to 25,440 cores of the 12-core Xeon E5-2690 v3 chips and dropping the number of cabinets from eight to six. Main memory was increased to 135.6 TB, and peak performance jumped to 1.058 petaflops. The revamped system was launched in October 2014.

The new massively parallel Aterui II dwarfs those systems, running 2,010 of the "Skylake" Xeon SP processors (40,200 cores at 20 cores per chip) and providing 385.9 TB of main memory. Cray launched the XC50 family of supercomputers in 2016, aiming it at high-performance simulation, analytics, and machine learning workloads. The air-cooled system is designed for flexibility: it can run not only Intel's Xeon Scalable processors but also the Arm-based ThunderX2 from Cavium (now owned by Marvell) and Nvidia's Tesla P100 GPU accelerators, and it includes high-performance communication libraries and Cray's custom Aries interconnect to drive communication between the processors.

The Aterui II supercomputer will enable the NAOJ to perform highly compute-intensive calculations and run high-resolution simulations of such models as the formation and evolution of the Milky Way galaxy and three-dimensional simulations of a supernova explosion. About 150 researchers will use the new system for jobs such as calculating the gravitational forces among 200 billion stars in the Milky Way, whereas earlier systems had to aggregate the stars into groups to make such simulations tractable. "Our new Cray XC50 gives us the computational capability required to solve challenges that previous systems could not address and calculate astrophysical simulations in a more realistic way," said Eiichiro Kokubo, project director for the Center for Computational Astrophysics at NAOJ.
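The quoted peak of just over 3 petaflops follows directly from the core count and clock rate given above. Here is a minimal sketch of the arithmetic, assuming 32 double-precision FLOPs per cycle per core, the theoretical AVX-512 peak with two FMA units on Skylake-SP parts like the Gold 6148; that per-cycle figure is an assumption, not something stated in the article.

```python
# Back-of-envelope check of the quoted ~3 PF peak figure.
# Assumes 32 double-precision FLOPs/cycle/core (AVX-512, two FMA units).
cores = 40_200
clock_hz = 2.4e9
flops_per_cycle = 32

peak_flops = cores * clock_hz * flops_per_cycle
print(f"{peak_flops / 1e15:.2f} petaflops")  # ~3.09 PF, i.e. "just over 3 petaflops"
```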
Cray has seen its high-end supercomputer business ebb and flow over the past several quarters. The company lost $25 million in the latest financial quarter but saw revenue jump from $59 million in the previous three months to $79.6 million. In a conference call in May, CEO Peter Ungaro said the overall market had contracted due to a number of factors, including uncertainty in government budgets with new administrations in multiple countries, a downturn in purchases in such verticals as energy, and a slowdown in decision-making by customers. However, those areas are improving. "The government budget landscape is beginning to settle and decision makers are better able to solidify their plans both over the short and longer term," Ungaro said. Prospects are improving in the United States, United Kingdom, and European Union, he said, adding that the market in Asia is turning around quickly. The CEO noted that the company will build an XC50 supercomputer for the National Institutes for Quantum and Radiological Science and Technology in Japan. At more than 4 petaflops, the new system will be more than twice as powerful as the existing Helios Bullx cluster, which tops out at 1.5 petaflops. Cray has also spent the past few years looking to expand its reach beyond the highly competitive supercomputer space and into the commercial market.
Source: https://www.nextplatform.com/2018/07/26/cray-xc50-accelerates-astrophysics-in-japan/
Desktop as a Service (DaaS) is a cloud-based subscription service that delivers managed desktops to users and has evolved to be compatible with multiple devices. With DaaS, users can access their desktops from anywhere, on any device, as long as they have an internet connection. This eliminates the need for physical hardware and reduces the costs associated with traditional desktop computing.

Two kinds of desktops are available in DaaS: persistent and non-persistent. A persistent desktop is a virtual desktop that users can customize and save so that it looks the same each time a particular user logs on. These desktops require more storage than non-persistent desktops, which can make them more expensive. However, the benefit of a persistent desktop is that users can install software, save files, and personalize their desktop environment to their liking. This makes them ideal for power users who need a specific set of tools and applications. Recommended Reading: Persistent VDI vs. Non-Persistent VDI

A non-persistent desktop is a virtual desktop that is wiped each time the user logs out. These desktops are merely a way to access shared cloud services, so users cannot install software or save files locally. However, non-persistent desktops are more cost-effective and easier to manage, making them ideal for organizations that need to provide a standardized desktop experience to many users.

How Does DaaS Work?

A Desktop as a Service (DaaS) solution provides virtual desktops to users through the cloud. Instead of running a desktop on a local computer, the user accesses a virtual desktop on a remote server. The virtual desktop looks and functions like a traditional desktop, with a graphical user interface (GUI), desktop applications, and a file system. When a user logs in to their virtual desktop, the DaaS provider allocates a portion of the server's processing power, memory, and storage resources to the virtual machine (VM) running the desktop environment. This VM is then assigned to the user, who can use it as if it were their own desktop. The user can access the virtual desktop from any device with an internet connection, including laptops, tablets, and smartphones.

Because the virtual desktop is hosted in the cloud, the DaaS provider is responsible for managing and maintaining the underlying infrastructure, including hardware, software, and security. This makes DaaS an attractive option for organizations that want to reduce their IT infrastructure costs and simplify desktop management. DaaS providers typically offer a range of service levels, from basic desktop hosting to fully managed desktops that include software licensing, patching, and updates. Some DaaS providers, including Ace Cloud Hosting, also offer integration with other cloud services, such as storage, backup, and disaster recovery. In this scenario, the end user's data is safely stored in the cloud whenever they log off from the system. Every time they work on virtual desktops, users get access to their data, no matter which device they use or where they are located.

Benefits of Desktop as a Service (DaaS): How It Empowers Your Business

Here are some of the most compelling advantages of DaaS:

DaaS providers use high-performance computing servers far more advanced than typical physical desktops.
As a result, organizations can expect top-notch performance, even when running complex applications like 3D design or engineering software. This increased performance can lead to greater productivity and user satisfaction.

Simplified Application Deployment and Configuration

Installing and configuring applications on physical desktops can be time-consuming and costly. DaaS streamlines this process by handling all the technical details for you. This can save your organization time and money, as you won't need to invest in expensive application deployment technologies or a team of IT experts to manage the process.

Easy Administration and Support

With DaaS, many of the technical issues and performance constraints plaguing on-premises VDI solutions are shifted to the provider. Your organization won't have to worry about infrastructure maintenance or technical support issues. Many DaaS providers also offer 24/7 support to ensure that any problems are resolved quickly and efficiently.

Enhanced Security

DaaS provides an extra layer of security by storing data in a secure, hosted environment rather than on vulnerable physical devices. DaaS providers also offer advanced security safeguards such as multi-factor authentication, intrusion detection and prevention systems, multiple firewalls, and data encryption. This protects your organization's data from potential cyber threats.

Improved Business Agility

With DaaS, your organization can quickly and easily adapt to changing business needs. DaaS is scalable and provides a pay-as-you-go consumption model, which means you only pay for what you need. This helps your organization stay agile and competitive in today's fast-paced business environment.

DaaS vs. VDI: Differences Between DaaS and VDI

Desktop as a Service (DaaS) and Virtual Desktop Infrastructure (VDI) are different approaches to delivering virtual desktops to end users. While both technologies allow remote access to desktop environments, they have significant differences. One major difference is that DaaS is essentially VDI delivered as a subscription model.

Ownership and Control

When it comes to ownership and control, VDI requires a significant investment in hardware, software, and maintenance. The organization is responsible for setting up and managing the entire VDI infrastructure, including servers, storage, networking, and virtualization software. This can be a costly and time-consuming process requiring specialized IT skills. DaaS, on the other hand, eliminates the need for organizations to purchase, manage, and maintain hardware and software infrastructure. The DaaS provider manages the infrastructure and virtual desktop environment, allowing the organization to focus on its core business operations. It also means the organization has less direct control over the infrastructure and virtual desktop environment. However, this can also be seen as an advantage, as the organization can benefit from the DaaS provider's expertise in managing the infrastructure and ensuring security and compliance.

Total Cost of Ownership

When it comes to Total Cost of Ownership (TCO), DaaS often comes out as the more cost-effective option. VDI requires a dedicated IT infrastructure, including hardware and software, which can be costly to acquire and maintain. DaaS, by contrast, offers a pay-as-you-go model, where users only pay for what they use.
This can be particularly advantageous for businesses with fluctuating workforce needs, as they can quickly scale up or down as required without incurring significant expenses.

Scalability

VDI requires significant planning and infrastructure investment to scale up or down to meet changing business needs. DaaS providers, however, can scale resources up or down as needed, allowing organizations to meet demand fluctuations without the hassle of managing and maintaining their own infrastructure.

Performance

While VDI can offer better performance due to its dedicated infrastructure, DaaS has made significant strides in recent years, making it a viable option for most organizations. DaaS providers now utilize advanced virtualization technologies and high-performance hardware to ensure customers can access resources that meet their requirements. VDI may retain an edge in performance, but DaaS providers have closed the gap significantly, providing high-performance virtual desktops that meet the needs of most organizations.

DaaS vs. VDI: Which Is Better?

VDI is managed by in-house IT staff, while DaaS relies on a third-party provider to deliver a managed virtual desktop solution to users. VDI is deployed in an on-premises data center, unlike DaaS, which provides virtual desktops remotely. A VDI solution typically requires an upfront investment in deployment and configuration; DaaS exists in part to simplify those implementation struggles. VDI is often regarded as the more challenging infrastructure from a technical standpoint. DaaS is therefore preferred by small and medium enterprises, which gain significant advantages from it in cost, security, and management, to name a few. To learn more about which one suits your business needs, read this: Difference Between DaaS vs. VDI

What Are the Use Cases of Desktop as a Service?

Desktop as a Service (DaaS) has revolutionized how businesses operate by offering an efficient and cost-effective solution for their IT needs. With DaaS, companies can access their desktop environment and applications securely from anywhere, at any time. This flexibility has opened up a world of possibilities for organizations, enabling them to streamline operations and enhance productivity. Here are some of the most common use cases of DaaS:

Bring Your Own Device (BYOD): Enabling Secure Access to Resources Anywhere, Anytime

The current BYOD scenario is the culmination of two developments: users can access official resources from any device, powered by the cloud. As firms of all sizes embrace this untethered computing model, DaaS ensures employees can be productive regardless of the device they choose. It allows firms to give the workforce, especially traveling employees, access to the resources they require while keeping security and support consistent. Recommended Reading: Why BYOD (Bring Your Own Device) is essential for a Remote Workforce?

Mobile Workforce: Enhancing the Productivity of a Mobile Workforce

Businesses that depend on full-time productivity from remote employees feel the greatest demand for DaaS. Without it, the workforce's productivity is tied to physical desks and personal devices such as laptops, mobile phones, and tablets. With DaaS, data resides in the cloud and can be accessed from any secure device. The aim is to enable secure access to software, applications, and data on virtual desktops hosted in the cloud rather than on-premises.
DaaS uses advanced cloud-centric storage systems to ensure sensitive information is stored on a secure cloud server, adding further layers of protection.

Digital Security: Securing User Access and End-to-End Encryption

The 2022 Verizon Data Breach Investigations Report found that 82% of breaches involved the human element, including social attacks, errors, and misuse, and this number continues to grow. The flexible work environment is becoming the new normal as organizations let employees work anywhere, anytime. IT leaders are deploying Desktop as a Service (DaaS) to support the hybrid environment, counting on secure sharing to keep up with the distributed workforce. The open question is how to achieve a DaaS model that remains secure over the long term. Virtual desktops illustrate how a robust access management control system and encryption can limit a firm's exposure to security threats.

Business Continuity: Ensuring Quick Restoration of Data During Disasters

A DaaS solution has built-in redundancy to maintain business continuity during and after a local disaster. Data is backed up and replicated across multiple data centers so it can be restored quickly in case of mishaps, helping organizations stay productive even out of the office. Recommended Reading: How Does Virtualization Help Disaster Recovery

How DaaS Eliminates Surprise Costs: The Pricing Benefits

The Desktop as a Service (DaaS) model is a fully managed deployment of virtual desktops, from migration to security. The desktops are created and maintained on a third-party cloud-hosted server and delivered as a fully managed VDI (Virtual Desktop Infrastructure) that can quickly replace on-premises VDI, the more traditional method of hosting desktops. The DaaS provider takes on the deployment, management, and upgrading of virtual desktops, eliminating the need to invest in high-end hardware, servers, and expensive desktops to meet growing demands. DaaS comes to the rescue with reduced capital expenses and effortless IT expenditure across RAM, storage, and memory. While DaaS pricing varies from vendor to vendor, DaaS costs are much lower than those of traditional VDI. Recommended Reading: Understanding how VDI pricing works

Factors Influencing Desktop as a Service Pricing

At ACE, we offer several DaaS pricing models for your convenience, and these are not the only configurations available. We offer custom packages to suit business needs, configuring each cloud desktop's CPU, RAM, and storage to deliver the required performance.

| Plan | User type |
| MICRO | Ideal for light workloads like data entry tasks |
| SMB | Ideal for medium workloads like web browsing and productivity apps |
| ENTERPRISE | Suitable for heavy users like software developers, testing engineers, and business analysts |
| DEVELOPERS | Specialized to meet developers' needs |

How Does a DaaS Solution Fit Every Industry?

Desktop as a Service is here to stay, as more and more industries embrace the virtual world. Let's look at the sectors that stand to gain from adopting the DaaS model.

Centralized desktops make it feasible for ITES firms to simplify and reduce the cost of administration and maintenance while making room for flexibility in resource utilization. ACE offers managed virtual desktops and apps that can be integrated rapidly to support remote work.
This solution sustains data security and backup in case of disruption, ensuring business continuity.

Education never stops; even during the pandemic, schools and institutions offered classes online. Desktop as a Service lays better ground for research and collaboration from anywhere in the world. Moreover, educational institutions cannot provide devices to every student, so multi-device compatibility enables users to study from remote locations. Download e-Book: Virtual Desktops for Education to Supercharge E-learning

Healthcare is considered the most sensitive industry, where every patient's data is essential. DaaS offers better data security by storing data on highly secure servers. These security needs make healthcare a perfect candidate for Desktop as a Service. Doctors can access patients' medical reports and records even during emergency visits. Recommended Reading: Driving Pharma's Digital Success With The Power Of Virtual Desktops

In this hybrid-cloud world, the law industry is always on the move. Cloud desktops give law firms a cloud-ready environment to collaborate with clients and access many different types of applications customizable on the cloud, such as Abacus Law, Amicus Attorney, and more.

DaaS comes with a silver lining for the BPO industry. The security, performance, user experience, and workspace management features of cloud desktops make them quite lucrative for BPOs. They are far more practical than unmanageable hardware and complex infrastructure that result in downtime.

Future hybrid needs are all about fully managed infrastructure. Virtual desktops power financial institutions with a fully secure and managed digital workspace. Financial services companies are among the most prone to cyberattacks; with DaaS, banks and insurers gain centralized security controls and instant access to apps, data, and communication on any device, network, and cloud.

Leverage Secure, High-Performing Desktop as a Service for Your Firm with ACE

ACE Desktop as a Service is an all-in-one cloud solution that integrates with the Citrix platform and supports both on-premises and cloud-based deployments. Our virtual desktops and applications benefit from cloud features such as scalability, enterprise-grade security, and minimal overhead costs. ACE virtual desktops have an extremely low total cost of ownership (TCO), making them suitable for any enterprise, regardless of size and industry. Besides addressing the mobility and performance issues of VDI, desktop virtualization solutions also reduce costs and enhance the user experience to meet integrated digital needs. Be ready to work seamlessly on thin clients without worrying about performance. If you want to know more about how DaaS supports your business in the long run, consult our experts, who will guide you through every step.

DaaS Frequently Asked Questions

How much does Desktop as a Service cost?

The pricing of Desktop as a Service ranges from $40 to $100, depending on the configuration you opt for and the customization required.

What is SaaS vs. DaaS?

Software as a Service (SaaS) is focused on delivering software applications to users. Desktop as a Service (DaaS), on the other hand, delivers a comprehensive desktop experience that is accessible from anywhere. SaaS is application-focused; DaaS is virtual-desktop-focused.

Is DaaS software?

No, DaaS is not software. It is a secure environment created by hosting desktops on a centralized server.
DaaS is delivered as a hosted solution in which the DaaS provider takes care of all IT management tasks.

Is Citrix a DaaS?

Citrix is a top DaaS provider that offers cloud applications to users without complicating IT or compromising security.
Source: https://www.acecloudhosting.com/blog/what-is-desktop-as-a-service-daas/
One solution to the digital divide here on Earth may actually be in the stars, or more accurately, in orbit. Alongside approaches using Fixed Wireless Access (FWA), Low-Earth Orbit (LEO) satellites are poised to improve internet access in rural areas in the coming years, but each approach has its strengths and drawbacks.

Addressing LEO Latency Concerns

The digital divide, the difference in access to knowledge, services, opportunity, and income based on access to connectivity, isn't going away any time soon, according to experts. A solution is needed that disrupts traditional cost models for network deployments, and GSMA Intelligence Principal Economist Kalvin Bahia alluded to this in a Mobile World Live webinar entitled "Advancing Toward a Connected World: The Role of Non-Terrestrial Innovations."

"We forecast that by around 2025, while internet connectivity will still grow, we're still forecasting that around 40% of the world's population will be unconnected," he said. "We're going to need to see an acceleration in connectivity and mobile internet access if we're going to achieve a number of international targets that have been set to connect the next half of the world to internet."

In an interview with 6GWorld, Jack Burton, Principal at consultancy Broadband Success Partners, said LEOs at the very least have potential in that regard. "Universal coverage is the [hypothetical] big benefit [of LEOs]. There's no specific ground-based infrastructure required other than at the customer's home," he said. He conceded that there may be additional latency, which is a sector-wide concern regarding the technology.

In fact, the Elon Musk-founded SpaceX recently applied to become the first LEO provider to benefit from the U.S. government's Rural Digital Opportunity Fund, eventually winning $885 million in subsidies in the auction's first phase. First, however, SpaceX's Starlink satellite constellation had to disprove the Federal Communications Commission (FCC)'s reported skepticism that it would meet the 100ms latency standard. Starlink reportedly came in under 20ms, which is consistent with ground-based broadband.

In principle, this was no surprise to Michele Zorzi of the University of Padova's Department of Information Engineering. "The main issue with satellite communications in terms of latency is that the time of travel from Earth to satellite and back may be very large. If the satellite is at 36,000 km [with geostationary satellites], that's going to be an issue, but if LEOs are at 300 km or 1,000 km, then it's like going coast to coast in the United States," said Zorzi in an interview alongside University of Padova colleague Marco Giordani. Zorzi and Giordani spoke to 6GWorld about a recent paper of theirs entitled Non-Terrestrial Networks in the 6G Era: Challenges and Opportunities.

Zorzi added that one potential drawback of LEOs is their high-mobility pattern. Even so, he explained, it's being addressed. "You can place a High-Altitude Platform in a given position and it stays there, whereas satellites inherently move unless they're geostationary, but that's very high altitude. LEO is closer to the Earth, but these will have to orbit, so they will have a high-mobility pattern," he said. "So, a satellite would go away and then a new satellite would come in, but that's actually a normal problem. LEO satellites are in existence, so people know how to handle this issue."
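Zorzi's altitude argument is easy to quantify. Here is a minimal sketch of the propagation arithmetic, using representative altitudes from the article; the minimum-delay figures ignore ground-segment and processing overhead, so real latencies are higher.

```python
# Rough minimum round-trip propagation delay: 2 * altitude / speed of light.
# Altitudes are representative values from the article (GEO ~36,000 km,
# LEO 300-1,000 km); real latency adds routing and processing overhead.
C_KM_PER_S = 299_792.458

for name, altitude_km in [("GEO", 36_000), ("LEO (300 km)", 300), ("LEO (1,000 km)", 1_000)]:
    rtt_ms = 2 * altitude_km / C_KM_PER_S * 1000  # up to the satellite and back
    print(f"{name}: ~{rtt_ms:.1f} ms minimum round trip")
```

The output (roughly 240 ms for GEO versus 2 to 7 ms for LEO) is consistent with Starlink's reported sub-20ms figures once overhead is added.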
The Benefits of Non-Terrestrial Networks

Giordani, meanwhile, elaborated on the lack of terrestrial infrastructure as a benefit. According to him, it's not just a question of the infrastructure itself but of the terrain on which it would be deployed. "This is very expensive, particularly [...] in the rural areas because of the very difficult degrees of terrain that may be encountered," he said. "At the same time, [Non-Terrestrial Networks] can provide 100% availability. So, they are very robust to external events like natural disasters or terrorist attacks [...] Also, these elements can be deployed everywhere, above oceans, above deserts, in those areas where installation of terrestrial infrastructure is not even possible."

FWA has been used to address rural coverage constraints in countries including the United Kingdom. One concern with FWA is a lack of range, especially over mmWave frequencies, although U.S. Cellular recently completed an extended-range 5G mmWave data call over 3.1 miles. Burton was optimistic about the development but still cited the inability of mmWaves to penetrate so much as walls as a challenge.

In that sense, the terrain itself could present an issue, Zorzi said. "You may provide 3.1 miles in visibility, but as soon as you have a rural environment with hills or geographical obstructions or rain that obstruct propagation, then it becomes hard to provide that coverage over that distance," he said. "Of course, this problem is also there with satellites. You have trees, but at least from the point of view of hills or canyons, covering from above will not be blocked. In fact, I don't know if what we're talking about here is 'either/or,' if we're to go for a purely Fixed Wireless Access solution or a pure satellite-based solution. Most likely, it's going to be a combination so that each can leverage its own strengths."

Achieving 100% Connectivity Won't Be Easy

According to Ericsson, FWA investments have payback times of less than two years. In comparison, Sue Marek argued in her Fierce Wireless column that financial instability due to an unclear business model is holding back LEO companies, citing OneWeb's rescue from bankruptcy as one example. She also made the point that Amazon, which is investing $10 billion in a satellite broadband plan called Project Kuiper, has more to gain than simply helping to bridge the digital divide, namely more retail customers.

According to Giordani, though, there are plenty more use cases to explore. For one, satellites can simultaneously provide umbrella coverage to all required sensors for Internet of Things applications. For another, Zorzi said, terrestrial installations can offload traffic to satellites to increase network capacity at times of high demand.

While Intelsat, a satellite services firm, does not deal in LEOs, its Director of Innovation and Strategy Ken Takagi nevertheless said all networks, both terrestrial and non-terrestrial, will likely figure into the solution. "When we discuss the issue of connecting the unconnected and connecting everyone in the world, we're not going to get there with a single solution. So, it's going to be a combination of different technologies, different platforms, different concepts," he said, speaking at the previously mentioned Mobile World Live webinar.
"It's going to be innovations from terrestrial networks and expansion of reach," he continued. "There's going to be different types of satellites, LEO, [Medium Earth Orbit; MEO], GEO, that each bring their respective benefits and value to the table and... there's probably going to be newer things that we have yet to identify."
Source: https://www.6gworld.com/exclusives/digital-divide-set-to-narrow-thanks-to-leos/
The effectiveness of an Internet investigation depends to a large extent on the ability to collect pieces of information about people or groups and then combine that data into a more complete picture. Pivoting between email addresses, usernames, sites, and account activity helps you create profiles of the people or organizations that interest you. Such profiles can later be used in the investigation.

Many tools used in modern investigations are designed for only one task: finding phone numbers, email addresses, usernames, and so on. But there is a growing group of tools designed to search dozens, or even hundreds, of different sources simultaneously. This can significantly reduce the initial search period in any project.

Conducting online investigations offers something else as well: it is easy to find information not only about other people but also about yourself. Use the tools listed below to search for your own name and other data, and you will see what information about you is available online. In today's hyper-connected world, there aren't many ways to keep your personal information completely private, but one thing that can make your life a little more secure is a password manager.

Another useful investigative technique is to search for a username and related information. Searching by username can, and often does, surface other websites and online services associated with the same user. And even if you don't have an exact username, testing a few combinations can yield interesting results.

The collection includes, among others:

- An online service that tries to recover lost passwords (hashes and encrypted Office files, legally obtained).
- Tools that help you securely edit document metadata and use ImageMagick to analyze PDF files.
- A table listing the search operators that work with each Google search service.
- An open-source tool that allows anyone to create visually rich interactive timelines. (A very fast and useful tool.)
- The world's largest online catalog of video surveillance cameras, where you can watch live street, traffic, parking, and road cameras.
- A database that collects and analyzes legal requests to remove online material, helping you know your rights and the applicable laws.
- A list of data brokers, people finders, civil records, and criminal background check sites.
- Tools for tracking real estate you are interested in, to help you make the right decision about moving.
- A platform for intercepting, sharing, and analyzing the radio traffic of police, fire, and public safety services.
- A database of stolen works of art, including items exported to the USA and other countries.
- A Python application called xeuledoc, the easiest way to query this kind of information from public Google documents (see the sketch after this list).
- A comprehensive tool for graphical link analysis that offers intelligent data analysis and real-time information gathering.
- A graffiti database containing photos of works, aiming to be the largest and most extensive online graffiti archive.
- A map that pinpoints the locations of ransomware attacks in the US where possible, including the ransom amount.
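A minimal usage sketch for xeuledoc, assuming it is installed (for example via pip) and invoked as a command-line tool against a public Google document link; the URL below is a placeholder, not a real document, and the exact output format may vary between versions.

```python
import subprocess

# Placeholder link -- substitute a real, publicly shared Google document URL.
doc_url = "https://docs.google.com/document/d/<document-id>/edit"

# xeuledoc is a command-line tool; calling it from Python keeps the lookup
# scriptable alongside other investigation steps.
result = subprocess.run(["xeuledoc", doc_url], capture_output=True, text=True)
print(result.stdout)  # typically creation/modification dates and owner details
```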
Source: https://hackyourmom.com/en/kibervijna/zbir-informacziyi-pro-suprotyvnyka/osint-akademiya/nabir-onlajn-rozsliduvan-bellingcat-rizne/
While police are cracking down on cybercrime, there are still billions of dollars to be made from ransomware and other common cyber attacks every year. Combined with the fact that many attackers live in countries not readily accessible to US law enforcement, the threats remain very real and growing. Fortunately, cybersecurity basics, including multifactor authentication, are still sufficient to prevent the most common forms of cyber attack. But too many organizations remain ill-informed or improperly equipped.

How To Implement Cybersecurity Best Practices

The first step businesses can take to protect themselves from a cyber threat is to arm all employees with information and maintain heightened awareness of potential threats. That's why proper security awareness training is a vital part of any organization's cybersecurity checklist.

Business Security Do's And Don'ts

Protecting your business has a lot to do with tech-heavy network security best practices: patches, firewalls, and antivirus software. But just as often, keeping a business safe from threats comes down to arming employees with the tools and information they need. It's been my experience that employees truly want to help protect their company, but they need to be shown how. This is why I'm sharing the following infographic. It covers important do's and don'ts that can be applied to just about any business. The information is accessible to even the least technical employees, and I recommend sharing it with everyone in your network.

As you can see, addressing a handful of areas can help increase small- and medium-business cybersecurity throughout your organization. Giving your employees the know-how to navigate computer usage, passwords, email, internet, and portable media will empower them to take steps that keep your business safe.

Additional Cybersecurity Help for SMBs

If you'd like to verify that your internal policies and physical and network security follow current best practices as outlined by the National Institute of Standards and Technology (NIST), check out our interactive cybersecurity checklist. Of course, no two organizations are entirely the same, and that is also true of your risk, which is why we offer a variety of comprehensive tech assessments. These services are especially popular with organizations that understand they need to make a big leap forward on a tight budget. Our experts can assess your current cybersecurity posture and identify which updates are critical and which can wait, so your dollars go where they're needed most.
Source: https://www.marconet.com/blog/business-security-best-practices
Is Python Used in ERP?

Python is an object-oriented programming language that is widely used for web development, software engineering, and data science. In recent years, it has become increasingly popular for businesses to use Python for enterprise resource planning (ERP). This article discusses how Python is being used in the ERP landscape and the potential advantages and disadvantages of using it for ERP.

What is ERP?

Enterprise Resource Planning (ERP) is a software system designed to help businesses manage their resources, operations, and customer relationships. It enables companies to manage all aspects of their business, including finance, inventory, sales, and customer service. ERP systems are used to streamline business processes and improve decision-making.

What is Python?

Python is an interpreted, high-level, general-purpose programming language. It is an open-source language used for web development, software engineering, and data science, and it is known for its ease of use and readability.

How is Python Used in ERP?

Python is used in ERP in a variety of ways. It can be used to develop custom applications and modules for ERP systems, to build machine learning models for predictive analytics, and to automate business processes. Additionally, Python can be used to access and analyze ERP data.

Advantages of Using Python for ERP

The main advantage of using Python for ERP is that it is easy to learn and use. It is an open-source language, which means it is free to use and can be modified to fit the needs of a business. Additionally, Python is an object-oriented language, which makes it easier to create efficient, maintainable, and reusable code. Python can also be used to create custom applications and modules for ERP systems quickly and efficiently, which can save businesses time and money compared to building applications or modules from scratch. Finally, Python can be used to access and analyze ERP data, helping businesses make better decisions based on real-time data.

Disadvantages of Using Python for ERP

The main disadvantage of using Python for ERP is that it is not as widely adopted in this space as other programming languages. This means there is a smaller pool of developers who are knowledgeable in Python and familiar with ERP development. Additionally, Python is not suitable for every type of ERP development; for example, it is often not the first choice for large, performance-critical, enterprise-level applications.

Python is an increasingly popular language for ERP development. It is easy to learn and use, it can be used to create custom applications and modules quickly and efficiently, and it can be used to access and analyze ERP data. However, there are drawbacks to using Python for ERP, such as its limited adoption in the ERP space and its unsuitability for some types of ERP development. Ultimately, businesses must weigh the pros and cons of using Python for ERP before making a decision.
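As an illustration of the "access and analyze ERP data" use case, here is a minimal sketch using pandas; the file name and column names (order_date, product_line, order_total) are hypothetical stand-ins for whatever export or database view a real ERP system provides.

```python
import pandas as pd

# Hypothetical example: analyzing an ERP data export with Python.
# The file and column names below are illustrative, not from any
# specific ERP product.
orders = pd.read_csv("erp_sales_orders.csv", parse_dates=["order_date"])

# Monthly revenue by product line -- the kind of ad hoc analysis the
# article describes Python being used for on top of ERP data.
monthly = (
    orders
    .assign(month=orders["order_date"].dt.to_period("M"))
    .groupby(["month", "product_line"])["order_total"]
    .sum()
    .unstack(fill_value=0)
)
print(monthly.tail())
```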
Source: https://codedwap.co/is-python-used-in-erp/
Adults Text While Driving More Than Teens

Almost half of all texting adults text while driving, according to survey findings that indicate only a third of texting teens sent or read a text message while behind the wheel.

Adults are more likely than teenagers to text while driving, a practice that greatly increases motorists' chances of getting into an accident, a study shows. Nearly half of all texting adults say they have sent or read a text message on their mobile phone while driving, compared with one in three texting teens ages 16 and 17, the Pew Internet & American Life Project found in a survey released Friday. Overall, Pew found that 27% of all U.S. adults say they have sent or read text messages behind the wheel.

Pew also found that 49% of adults say they have been passengers in a car when the driver was sending or reading text messages. Overall, 44% of adults say they have been passengers of drivers who used a mobile phone in a way that put themselves or others in danger. The numbers were about equal to those of teens.

Besides motorists, pedestrians can also get into trouble while texting. The study found that one in six cell phone-toting adults have physically bumped into another person or an object while talking or texting on their phone.

Beyond texting, adults were also more likely than teens to talk on a cell phone while driving. Three in four cell phone-owning adults say they have talked on a phone while driving, compared with a little more than half of phone-owning teens.

The findings for adults 18 or older are based on a nationwide phone survey of 2,252 people conducted between April 29 and May 30. The numbers for teens ages 16 and 17 were taken from a separate study Pew conducted in 2009.

A study conducted last year by the University of Utah found that texting while driving can be up to six times more dangerous than talking on a cell phone while driving. Researchers found that texting was more dangerous because it requires drivers to switch their attention from one task to another. By contrast, motorists just talking on a mobile phone attempt to divide their attention between the conversation and driving, adjusting the priority of the two activities depending on task demands, researchers said. Nevertheless, safety advocates have condemned both practices, and many states have laws banning both.
Source: https://www.darkreading.com/application-security/adults-text-while-driving-more-than-teens
Here at ecoINSITE, it's more about green data centers and cleantech innovations; connecting with kids... not so much. Worry not: National Geographic has it covered when it comes to getting your kids excited about the awesomeness that is planet Earth. If you're looking for some online resources to share with younger minds for Earth Day, point your browser to these fine destinations, courtesy of the National Geographic Kids website:

- An extremely fun game where you must attach three bubbles of the same color in order to get them to pop.
- Nokapaka: The Shallow Tail. Control Cosmo's movements as he surfs along a big wave and dodges obstacles along the way.
- A fascinating look into how non-native plants are negatively impacting domestic ecosystems.
- Drinking Water: Bottled or From the Tap? Water is good for you, so keep drinking it. But think about how often you use water bottles and whether you can make a change.

And from one friend of the High Line (thank you!) comes this:

- Edward Norton: Bag the Bag. Video: Plastic bags have become "insidious global tumbleweeds." Edward Norton encourages you to bring your own bags to the grocery store.

Plus, if you've already lost ownership of your iPad to your offspring (don't even pretend you haven't), check out National Geographic's Build It Green: Back to the Beach, which makes the leap from the PC to the tablet realm for a mere $3.99. Players take on the role of a mayor who's trying to keep his tropical paradise from becoming an environmental disaster. Fortunately, there's no shortage of clean technology to help turn the tide toward sustainability. Happy Earth Day!
<urn:uuid:ad4d3140-bfc1-44fc-a84c-354b270acaf1>
CC-MAIN-2024-38
https://www.ecoinsite.com/2011/04/this-earth-day-think-of-the-children.html
2024-09-15T10:48:49Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651622.79/warc/CC-MAIN-20240915084859-20240915114859-00830.warc.gz
en
0.910735
374
2.8125
3
As any student in design school can tell you, the first design principle you come across is “KISS” (Keep It Simple, Stupid). This acronym, attributed to the late Lockheed Skunk Works lead engineer Kelly Johnson, is best understood if you remember that Lockheed’s products were often designed to be used in the theater of war. Kelly’s acronym would remind the designers at Lockheed that whatever they designed and built had to be simple enough that it could be maintained and repaired in the field using basic training and simple tools. As Lockheed products, their use case would not allow for more complexity than that. In other words, if your products were not simple and easy to understand, they would quickly become obsolete and essentially worthless in combat conditions.

Decades later, this axiom still applies, whether it’s conceptual physics, elaborate engineering, or consumer products. The end user doesn’t care how clever the creator is; they care about being able to use the output of this creativity and make it useful to their own application. The simpler the product or execution, the more likely it is that this output will be useful to the user.

What’s true for fighter planes and mobile applications is especially true for Artificial Intelligence (AI-) and Machine Learning (ML-) powered features. When you think about it, AI and ML algorithms are an extreme example of the importance of the KISS principle. Highly complex in nature, AI and ML are perceived as a complete and untouchable black box by most users. In order to use them properly, the widest possible audience must be able to understand their output and effect on the user’s task. And even the most complex intelligent systems must still feel simple.

As Informatica products are built with the CLAIRE engine at their heart, it becomes a top priority for us to simplify the design of features that are expanding in back-end complexity. We simplify by following a few simple guidelines:

- Favor text over visual explanations
A picture is worth a thousand words, and that’s exactly the reason you want to avoid using images to explain a complex task. We use simple, one-sentence explanations to help our users quickly understand and decide on any actions related to AI-based features.

- Use the right vocabulary
When explaining AI decisions and rationale, we try to avoid using highly technical or scientific terms related to Artificial Intelligence or Machine Learning, as those terms would require prior knowledge and education. Instead, we present explanations in simple language that everyone can quickly understand.

- Break down the complexity
Any complex idea can become a lot simpler when it’s broken down into smaller steps. We apply this stepped approach, using clear, understandable language, whenever we’re trying to explain tasks that cannot be briefly summarized.

We apply these principles consistently throughout our design. Here’s an example of how we handled algorithm-driven recommendations for tags or stakeholder assignment in input fields: Recommended stakeholders or tags are shown inside the input field and visually branded to indicate that they are driven by the CLAIRE engine. The reasoning behind the recommendation is provided in a short tooltip. This is a perfect example of KISS in action: the user has the option to apply or dismiss the recommendation through a simple UI control, with no prior knowledge required of the intricate technical details of the algorithm working behind the scenes.

Branding these “smart” inputs with a distinct CLAIRE color and appearance in various places further reinforces users’ awareness of these unique items. We simplify the user experience whenever we can to allow the user to carry out their tasks with our AI-based products. By maintaining our other design principles of Trust, Clarify, Control, and Humanize, our users are able to leverage the intelligent engine for more areas of their work.
<urn:uuid:0bfafb20-e7f6-4e96-b79e-69c246a73e73>
CC-MAIN-2024-38
https://www.informatica.com/blogs/keep-it-simple-stupid-kiss-guidelines.html
2024-09-15T10:45:42Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651622.79/warc/CC-MAIN-20240915084859-20240915114859-00830.warc.gz
en
0.937856
796
3.296875
3
In this Cisco CCNA training tutorial, you’ll learn about the different Spanning Tree versions. There have been a few versions over time, each improving on the previous ones. Scroll down for the video and also the text tutorial.

Cisco STP Spanning Tree Versions Video Tutorial

When I was first learning about this from other sources, it was super confusing, but there is actually a simple way to explain it: break it down into the open standards and the Cisco proprietary versions.

Starting off with the open standards, the original implementation of Spanning Tree was 802.1D. That uses one Spanning Tree for all of the different VLANs in the LAN, just one instance for everything. That was improved with version 802.1w, which is Rapid Spanning Tree. It improved Spanning Tree by significantly improving the convergence time. With 802.1D, it can take up to 50 seconds for an interface to make sure that there are no loops and transition into the forwarding state. With Rapid Spanning Tree, that gets down to typically a few seconds. Rapid Spanning Tree also uses one Spanning Tree instance for all VLANs in the LAN.

The latest of the industry standards is 802.1s, which is Multiple Spanning Tree. It enables grouping and mapping VLANs into different Spanning Tree instances, which allows you to do load balancing.

To summarise: 802.1D, the original implementation, has a very slow convergence time and doesn't support any load balancing. 802.1w came out after that, which improved the convergence time but still did not support load balancing. The latest one, 802.1s, builds on Rapid Spanning Tree by keeping the improved convergence time, and it enables load balancing as well.

MSTP Load Balancing Example

Let's have a look and see how the load balancing works. The Access Layer switches in our example have PCs attached in multiple different VLANs. We're going to make CD1, Core Distribution switch one, the Root Bridge for VLANs 10 to 19. The traffic for those VLANs is going to be forwarded on the link to CD1 and blocked on the link to CD2. We're looking at it from the point of view of our Access Layer switch, Access 3. When we configure this, traffic for VLANs 10 to 19 is going to go up the uplink to CD1.

CD2 is going to be made the Root Bridge for VLANs 20 to 29. The traffic for those VLANs is going to go up the link to CD2, and it will be blocked on CD1. Half of my traffic goes up the uplink to CD1, half the traffic goes up the uplink to CD2. If either one of those uplinks fails, then all traffic will flow over to the one remaining link. With MSTP Multiple Spanning Tree, we're going to have two Spanning Tree instances running, one for each group of VLANs. That's how it allows us to do load balancing.

Next up, we'll look at the Cisco proprietary versions. The first one is PVST+. This came out around the same time as 802.1D, but it included Cisco's enhancements. The main enhancement is that it uses a separate Spanning Tree instance for every VLAN. Per VLAN Spanning Tree+ allows you to do load balancing the same as Multiple Spanning Tree does. But because this came out at about the same time as the original 802.1D, it's got the same issues with having a very long convergence time. PVST+ is the default on Cisco switches. Therefore, you've got a separate Spanning Tree instance for every single VLAN, and it's got slow convergence time.

The next Cisco version was Rapid Per VLAN Spanning Tree+. This came out at around the same time as 802.1w which, if you remember from the open standards, was the second implementation with a faster convergence time. RPVST+ also significantly improves the convergence time over PVST+. Like PVST+, it uses a separate Spanning Tree instance for every VLAN. With MST, the industry standard, you can group multiple VLANs into the same Spanning Tree instance. But the Cisco versions, PVST+ and RPVST+, use a separate Spanning Tree instance for every single individual VLAN.

PVST+ and RPVST+ Load Balancing Example

Looking at the load balancing with PVST+ or Rapid PVST+ using the same example, CD1 is going to be made the Root Bridge for VLANs 10 to 19. CD2 is the Root Bridge for VLANs 20 to 29. VLANs 10 to 19 go over the left-hand path up to CD1, and VLANs 20 to 29 will go over the right-hand path to CD2. So far, it's looking exactly the same as MST.

The difference is that with MST, we grouped the VLANs. We had one group going up the left-hand side and another group going up the right-hand side, so we had two Spanning Tree instances. With PVST+ and Rapid PVST+, you can't group the VLANs. You have a separate instance for each one. Rather than having two total instances like we had with MST, here we're going to have 20 separate instances, one for each individual VLAN. The Cisco versions PVST+ and Rapid PVST+ put a bit more load on the switch because it has to calculate Spanning Tree instances at the VLAN level rather than being able to do it at the group level.

So those are the different versions of Spanning Tree. Which versions are supported on your switch depends on the particular model of switch that you're using. PVST+ will always be supported; that will be the default. It will usually also support Rapid PVST+ and, depending on the model of switch, it may also support MST, the open standard Multiple Spanning Tree.

PVST+ Port Roles

PVST+, which is the default on Cisco switches, will assign the Root, Designated, or Alternate role to ports. The Alternate ports are your Blocking ports with PVST+.

Spanning-Tree Protocol Types: https://www.learncisco.net/courses/icnd-2/vlans-and-spanning-tree/stp-protocol-types.html
Understanding and Configuring Spanning Tree Protocol (STP) on Catalyst Switches: https://www.cisco.com/c/en/us/support/docs/lan-switching/spanning-tree-protocol/5234-5.html
Types of Spanning Tree Protocol (STP): https://www.geeksforgeeks.org/types-of-spanning-tree-protocol-stp/

Text by Libby Teofilo, Technical Writer at www.flackbox.com. With a mission to spread network awareness through writing, Libby consistently immerses herself in the unrelenting process of knowledge acquisition and dissemination. If not engrossed in technology, you might see her with a book in one hand and a coffee in the other.
<urn:uuid:e8375862-2c3a-4dc5-a0af-b8420bead816>
CC-MAIN-2024-38
https://www.flackbox.com/cisco-stp-spanning-tree-versions
2024-09-19T04:32:14Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651981.99/warc/CC-MAIN-20240919025412-20240919055412-00530.warc.gz
en
0.927284
1,564
2.625
3
In this AI Podcast, astronomers Olivier Guyon and Damien Gratadour discuss how they’re using GPU-powered extreme adaptive optics in very large telescopes to image nearby habitable planets. Imagine staring into the high-beams of an oncoming car. Now imagine trying to pick out a speck of dust in the glare of the headlights. That’s the challenge Olivier Guyon and Damien Gratadour face as they try to find the dull glint of an exoplanet — a planet orbiting a star outside our solar system — beside the bright light of its star.
- Olivier Guyon is an associate professor at The University of Arizona’s College of Optical Sciences and SCExAO Project Scientist at the Subaru Telescope.
- Damien Gratadour is an Instrument Scientist at ANU.
<urn:uuid:4ba399ad-9f6e-4f9c-a118-f21ea6e5dabd>
CC-MAIN-2024-38
https://insidehpc.com/2019/10/podcast-how-ai-is-helping-astronomers-scour-the-skies-for-habitable-planets/
2024-09-20T09:19:08Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652246.93/warc/CC-MAIN-20240920090502-20240920120502-00430.warc.gz
en
0.871676
167
3
3
This is the second course in the Java as a Second Language Specialization. In this course, we'll take a look at Java data types, discuss what primitive data types are, and explain data classes. We'll also explore characters and strings, and you'll add a new class in the lab. Next, we'll take a look at Java control structures. We'll explain IF statements, loops, and arrays, and will discuss switch statements and the Java programming environment.

After that, we'll define inheritance and explore how methods and properties are inherited in Java. We'll also discuss polymorphism and overloading functions before completing a lab and quiz. The final module discusses how all of the things we've learned in the previous lessons will come together for our final lab.

The labs in this course require you to download and install the Java environment. The instructor walks you through the installation of the environment in course 1 of this Specialization. It is recommended that you take these courses in order because the knowledge is cumulative.
<urn:uuid:0ee6f888-8b2c-4416-98c9-71a14e2119ae>
CC-MAIN-2024-38
https://datafloq.com/course/the-java-language/
2024-09-08T07:10:57Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650960.90/warc/CC-MAIN-20240908052321-20240908082321-00630.warc.gz
en
0.902163
258
3.578125
4
If you’ve used Microsoft Windows for any length of time, then a dialog box like this will be no stranger to you:

You may think that it’s harmless enough to click on the “Send Error Report” button and send details of the crash to Microsoft, but recent revelations about NSA surveillance underline that there are risks. For instance, did you realise that by default Windows crash reports are sent unencrypted, potentially exposing information about the setup of your computers?

Indeed, according to a leaked presentation seen by Der Spiegel, the NSA’s TAO (Tailored Access Operations) division can be automatically notified whenever a targeted computer sends a crash report. The automated crash reports are a “neat way” to gain “passive access” to a machine, the presentation continues. Passive access means that, initially, only data the computer sends out onto the Internet is captured and saved, but the computer itself is not yet manipulated. Still, even this passive access to error messages provides valuable insights into problems with a targeted person’s computer and, thus, information on security holes that might be exploitable for planting malware or spyware on the unwitting victim’s computer.

To understand more about the threat, check out this investigation from the researchers at Websense.

Bizarrely, whoever created the NSA presentation found the interception of the Windows crash error reports so amusing that they mocked up a version of the familiar dialog with their own wording.

If (unlike the NSA) you fail to see the funny side of this, and want to prevent computers in your organisation from sending Windows error reports to Microsoft (and potential snoopers), you may wish to make a group policy setting change. And maybe it would be good if Microsoft made some changes at its end too, ensuring that future crash reports are sent properly encrypted.
<urn:uuid:00322908-f161-46e7-be9e-eaa205ddf5ea>
CC-MAIN-2024-38
https://grahamcluley.com/nsa-spying-windows-crash-error-report/
2024-09-08T07:44:39Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650960.90/warc/CC-MAIN-20240908052321-20240908082321-00630.warc.gz
en
0.894223
385
2.515625
3
Fujitsu Ltd. has detailed research that could make digital signal processing (DSP) chips for coherent detection significantly smaller, lower cost, and less power hungry. The new chip design can compensate not only for linear distortions of the optical signal like chromatic dispersion, but also for complex waveform distortion caused by nonlinear effects – something that the current generation of DSP chips does not do. Coherent detection, paired with dual-polarization quadrature phase-shift keying (DP-QPSK) modulation, is the optical industry’s preferred approach to 100-Gbps long-haul transmission and beyond. Fujitsu says it is also exploring this technology for high-capacity, short-range applications such as data centers and access networks. Using conventional methods, the implementation of nonlinear compensation technology would require massive circuits with more than 100 million logic gates, and chips of this size are only expected to become feasible around 2020, according to Fujitsu. Reducing the scale required of such circuits, therefore, has been a pressing issue. In September of last year, Fujitsu Ltd., Fujitsu Laboratories, and Fujitsu Research and Development Center revealed that they had developed a technology that would dramatically simplify and reduce the size of these circuits by 70%, making them commercially viable as early as 2015 (see "Fujitsu details DSP algorithm and circuitry for transmission beyond 100 Gbps"). Now the researchers have improved their design further with a new signal-processing algorithm that, while retaining the distortion-correction performance of the technology developed last year, slashes the number of circuit stages required to about one-seventh of current typical levels – and about half the level of Fujitsu's previous technology. In a transmission test at 112 Gbps over 1,500 km, Fujitsu showed that a three-stage circuit based on the new technology achieved the same signal quality as 20-stage circuit using conventional technology. The work was reported at ECOC 2011 in Geneva. Some of the research was conducted as part of the "Universal Link Project R&D" sponsored by the National Institute of Information and Communications Technology (NICT) in Japan. For more information on communications ICs and suppliers, visit the Lightwave Buyer’s Guide.
<urn:uuid:0e492b96-40de-452d-a0d8-3acccb65ac11>
CC-MAIN-2024-38
https://www.lightwaveonline.com/optical-tech/article/16661226/fujitsu-claims-advance-in-coherent-dsp-algorithms
2024-09-08T06:59:10Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650960.90/warc/CC-MAIN-20240908052321-20240908082321-00630.warc.gz
en
0.945721
460
2.796875
3
What Is a Replay Attack? A replay attack occurs when a cybercriminal eavesdrops on a secure network communication, intercepts it, and then fraudulently delays or resends it to misdirect the receiver into doing what the hacker wants. The added danger of replay attacks is that a hacker doesn't even need advanced skills to decrypt a message after capturing it from the network. The attack could be successful simply by resending the whole thing. How It Works Consider this real-world example of an attack. A staff member at a company asks for a financial transfer by sending an encrypted message to the company's financial administrator. An attacker eavesdrops on this message, captures it, and is now in a position to resend it. Because it's an authentic message that has simply been resent, the message is already correctly encrypted and looks legitimate to the financial administrator. In this scenario, the financial administrator is likely to respond to this new request unless he or she has a good reason to be suspicious. That response could include sending a large sum of money to the attacker's bank account. Stopping a Replay Attack Preventing such an attack is all about having the right method of encryption. Encrypted messages carry "keys" within them, and when they're decoded at the end of the transmission, they open the message. In a replay attack, it doesn't matter if the attacker who intercepted the original message can read or decipher the key. All he or she has to do is capture and resend the entire thing — message and key — together. To counter this possibility, both sender and receiver should establish a completely random session key, which is a type of code that is only valid for one transaction and can't be used again. Another preventative measure for this type of attack is using timestamps on all messages. This prevents hackers from resending messages sent longer ago than a certain length of time, thus reducing the window of opportunity for an attacker to eavesdrop, siphon off the message, and resend it. Another method to avoid becoming a victim is to have a password for each transaction that's only used once and discarded. That ensures that even if the message is recorded and resent by an attacker, the encryption code has expired and no longer works.
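To make these defenses concrete, here is a minimal Python sketch of a receiver that combines a shared per-session key, a timestamp freshness window, and single-use nonces. All names and the 30-second window are assumptions chosen for illustration; this is a sketch of the idea, not a production protocol.

```python
import hmac, hashlib, time, secrets

SHARED_KEY = b"per-session-key"   # hypothetical key agreed for this session only
MAX_AGE_SECONDS = 30              # freshness window: older messages are rejected
seen_nonces = set()               # nonces the receiver has already accepted

def sign(message: bytes) -> dict:
    """Sender side: attach a fresh nonce, a timestamp, and a MAC over all three."""
    nonce = secrets.token_hex(16)
    ts = str(int(time.time()))
    mac = hmac.new(SHARED_KEY, message + nonce.encode() + ts.encode(),
                   hashlib.sha256).hexdigest()
    return {"message": message, "nonce": nonce, "ts": ts, "mac": mac}

def verify(packet: dict) -> bool:
    """Receiver side: a replayed packet fails even though its MAC is still valid."""
    expected = hmac.new(SHARED_KEY,
                        packet["message"] + packet["nonce"].encode() + packet["ts"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, packet["mac"]):
        return False                                  # tampered or wrong key
    if time.time() - int(packet["ts"]) > MAX_AGE_SECONDS:
        return False                                  # stale: outside the window
    if packet["nonce"] in seen_nonces:
        return False                                  # replay: already accepted once
    seen_nonces.add(packet["nonce"])
    return True

p = sign(b"transfer $500 to account 42")
assert verify(p) is True    # first delivery is accepted
assert verify(p) is False   # the verbatim resend is rejected by the nonce check
```

The key point is that the attacker's copy is byte-for-byte authentic, so the MAC check alone cannot stop it; only the nonce and timestamp checks, which bind each message to a single use, defeat the replay.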
<urn:uuid:cb8947d9-6f46-4def-a509-4da047775286>
CC-MAIN-2024-38
https://www.kaspersky.com/resource-center/definitions/replay-attack
2024-09-10T18:14:50Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651303.70/warc/CC-MAIN-20240910161250-20240910191250-00430.warc.gz
en
0.951801
464
3.03125
3
15 May

Sailing Safely: 10 Impacts and Risks of What is AUP in Cyber Security

Ahoy, cyber sailors! As you navigate the digital seas, it’s crucial to have a sturdy ship and a reliable map. One such map in the world of cybersecurity is the Acceptable Use Policy (AUP). In this article, we’ll explore 10 impacts and risks of AUP in cyber security, helping you steer clear of cyber storms and navigate safely through the waters of cyber threats.

- Impact on User Behavior
An AUP, or Acceptable Use Policy, sets the tone for user behavior, encouraging responsible use of IT resources and promoting cybersecurity awareness among employees. It outlines the acceptable ways in which employees can use company systems and networks, helping to prevent security breaches and data loss. By promoting a culture of cybersecurity awareness, an AUP educates employees about the importance of protecting sensitive information and following best practices for IT security. Ultimately, an AUP helps create a safer and more secure cyber environment for organizations and their employees.

- Risk of Non-Compliance
Failure to comply with an AUP can lead to security breaches, data loss, and legal consequences for the organization. An AUP sets the standards for acceptable use of IT resources, including guidelines for maintaining security and protecting sensitive information. Non-compliance can result in unauthorized access to systems, exposure of sensitive data, and violations of privacy regulations. To mitigate these risks, organizations must ensure that employees are aware of and adhere to AUP guidelines. This includes providing regular training on cybersecurity best practices and enforcing consequences for violations. By prioritizing compliance, organizations can reduce the likelihood of security incidents and protect their data and reputation.

- Impact on Data Security
An AUP helps protect sensitive data by outlining security measures that users must follow, such as using strong passwords and encrypting data. These measures ensure that only authorized users have access to sensitive information and that data remains secure both in transit and at rest. By enforcing these security measures, an AUP helps mitigate the risk of data breaches and unauthorized access to sensitive information. Additionally, an AUP promotes a culture of data security within an organization, ensuring that employees understand the importance of protecting sensitive data and are aware of the steps they need to take to maintain it.

- Risk of Malware and Phishing Attacks
Non-compliance with an AUP can increase the risk of malware and phishing attacks, compromising the organization’s IT infrastructure. An AUP sets guidelines for using IT resources responsibly and securely, including avoiding actions that could expose the organization to such risks. By following AUP guidelines, employees can help protect the organization from cyber threats and maintain a secure cyber environment. Regular training and awareness programs can also help employees understand the importance of compliance and the role they play in protecting the organization from cyber-attacks.

- Impact on Productivity
An AUP can impact productivity by restricting access to certain websites or applications deemed inappropriate or insecure. While these restrictions are intended to protect the organization’s IT infrastructure and prevent security breaches, they can also limit employees’ access to resources they need to perform their jobs effectively. To mitigate the impact on productivity, organizations should carefully consider the balance between security and accessibility when drafting and enforcing an AUP. Additionally, providing employees with alternative solutions or workarounds for accessing necessary resources can help maintain productivity while ensuring compliance.

- Risk of Insider Threats
An AUP helps mitigate the risk of insider threats by defining acceptable use and prohibiting unauthorized access to sensitive information.

- Impact on Network Performance
An AUP can impact network performance by limiting bandwidth usage or restricting certain activities that consume excessive resources. These restrictions are put in place to ensure fair and efficient use of network resources for all users. However, they can also lead to slower network speeds and decreased productivity if not implemented carefully. To minimize the impact on network performance, organizations should regularly monitor network usage, adjust policies as needed, and educate employees about the responsible use of network resources. Additionally, implementing network management tools and prioritizing critical applications can help optimize network performance while ensuring compliance.

- Risk of Data Breaches
Non-compliance with an AUP can increase the risk of data breaches, resulting in the loss or theft of sensitive information.

- Impact on Employee Morale
An AUP can impact employee morale if perceived as overly restrictive or intrusive, leading to dissatisfaction and reduced productivity. Employees may feel frustrated if they perceive the AUP as limiting their ability to perform their jobs effectively, or if they perceive monitoring of their online activities as intrusive. To avoid negative impacts on morale, organizations should ensure that the AUP is clearly communicated to employees and that they understand the reasons behind its policies. Additionally, organizations should strive to find a balance between security needs and employee productivity, ensuring that the AUP is reasonable and aligns with the organization’s culture and values.

- Risk of Reputation Damage
Failure to enforce an AUP can damage the organization’s reputation, leading to loss of trust from customers and partners.

In conclusion, what is an AUP in cybersecurity? An AUP, or Acceptable Use Policy, plays a crucial role in maintaining a secure cyber environment and mitigating the risks of cyber threats. By defining acceptable use and prohibited activities, an AUP helps protect an organization’s IT resources from misuse and ensures the integrity and confidentiality of its data. An AUP also promotes a culture of cybersecurity awareness within the organization, educating users about best practices for safe cyber behavior.

However, an AUP is not without its challenges. It can impact employee morale if perceived as overly restrictive or intrusive, leading to dissatisfaction and reduced productivity. It can also impact network performance by limiting bandwidth usage or restricting certain activities that consume excessive resources. To mitigate these challenges, organizations should ensure that the AUP is reasonable, clearly communicated, and aligned with the organization’s culture and values.

Overall, an AUP is a vital tool in the cybersecurity arsenal, helping organizations navigate the complex waters of cyber threats and ensuring that their digital voyage is safe and secure. Sail on, cyber sailors, and may your digital voyage be smooth and secure!

Bytagig is dedicated to providing reliable, full-scale cyber security and IT support for businesses, entrepreneurs, and startups in a variety of industries. Bytagig works remotely, with on-site support in Portland, San Diego, and Boston. Acting as internal IT staff, Bytagig handles employee desktop setup and support, comprehensive IT systems analysis, IT project management, website design, and more.
<urn:uuid:fead59aa-c240-4693-8fcb-ff724d5458b2>
CC-MAIN-2024-38
https://www.bytagig.com/articles/sailing-safely-10-impacts-and-risks-of-what-is-aup-in-cyber-security/
2024-09-16T20:27:41Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651710.86/warc/CC-MAIN-20240916180320-20240916210320-00830.warc.gz
en
0.929586
1,397
2.953125
3
Where Can Kids Zap Aliens and Bust Ghosts? By: Meg Frainey, Employee Communications How do you show kids that science, technology, engineering and math (STEM) can be fun? The answer is simple. Learning by doing. Let them come with you to work and code apps where they blast things on a screen. We recently held a coding camp for employees' children ages 6 to 12 at our Big Data office in Plano, Texas. While the kids were zapping aliens, bursting hot dogs and busting some ghosts, they learned more about the technology their parents work with every day. "The idea was to promote STEM by inviting our employees and their children to sit side by side, have fun and build real apps," said Randy Garza, Big Data team member and event organizer. Big Data joined the AT&T Aspire team to host the event. Campers learned to code and develop apps. "We believe one of the most important investments we can make is in our kids' education," said Nicole Anderson, AVP, social innovation, AT&T. "Coding is a language every kid needs to know. It's the language behind what we do at AT&T and it's the language behind so many jobs of the future." Campers got to hear from a data scientist panel and tested their knowledge in a mini hackathon. They even presented their newly designed apps. We're hoping to inspire new generations of data scientists and other technology professionals—starting with our own.
<urn:uuid:fc342baf-a6bc-4463-b29d-eedf6a81c81f>
CC-MAIN-2024-38
https://about.att.com/newsroom/att_kids_coding_camp.html
2024-09-18T02:22:45Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651835.68/warc/CC-MAIN-20240918000844-20240918030844-00730.warc.gz
en
0.974749
312
3.0625
3
In recent years, agile development has taken the world by storm - with approximately 71% of organisations adopting agile methodologies for their product vision. Furthermore, research shows that 90% of agile projects have demonstrated a faster time to market than conventional project management techniques. What makes agile so unique and successful?

What is Agile Planning?

In essence, agile planning focuses on finding answers to simple questions such as: what are we building, how long will it take to complete, how much will it cost, and who should be involved? Since it is a project planning method, it estimates work using units called sprints and iterations. Sprints are periods of typically one to three weeks in which the focus is on small tasks that the team must complete. Agile works by identifying which items are completed in each sprint and creates a repeatable process to support teams and discover how much they can achieve.

Does agile differentiate between the estimation of duration and estimation of size? Unlike other project planning tools, agile delivery works in a different way. It essentially breaks down projects into small, self-contained units which can deliver value to customers. Multiple teams plan for what they want to achieve and how to satisfy customers in a short timeframe. This blog will focus on a step-by-step guide on properly breaking down your projects and planning small iterations that your team can reliably deliver every time.

Four key components of agile planning:

1. A standard agile plan gets split into releases and sprints
Agile planners help define a release, which involves developing a new product or updating an existing product. Each release gets split into more than one iteration, known as sprints. Each sprint has a fixed length, typically one to two weeks, and the team has a predefined list of work items to focus on in each sprint. The work items are known as user stories. The release plan, then, gets broken down into several iterations (sprints) that include user stories (items).

2. Planning revolves around user stories
What makes agile planning dissimilar from other conventional project management methodologies such as PRINCE2? In traditional software development, teams had very detailed and technical specifications of precisely what they would need to complete their tasks. In agile planning, the team documents what the user needs. Using agile planning, the team can use sprints to figure out how to address that specific need in the most convenient way possible.

3. Planning is incremental and iterative
Unlike other project planning tools, agile focuses on the idea of iteration. All sprints are equal in length, and an agile team essentially repeats the same process in every sprint. Each sprint results in working software features that organisations can deploy to end users. An iterative process enables the team to learn and estimate how many stories they can complete in a given timeframe and to learn about the types of problems that could impede their progress.

4. Team members do the estimation
While agile planning has many advantages, it is equally important that development teams participate in the estimation and planning process and do not have the work scope controlled by management. Agile planning changes how teams work and manage projects. It allows teams to assign story points to user stories in the final release plan.

How do you define an agile story point?

Within the agile methodology, a story point is a number that reflects the amount of work involved in creating a user story and its complexity. For instance, a team can assign 1 point to a simple user story, 2-3 points for a moderately complex one, and 4-5 points for a huge story, based on the team's understanding of the work involved. Agile also has an alternate estimation unit for user stories, known as ideal time, which shows how long a user story should take to complete without interruptions.

Agile planning poker is an estimation game for agile teams. Team members estimate a user story by drawing a playing card with a number of story points and placing it face down on the table. Much like in poker, the cards then get turned face up. Should there be any discrepancies, such as one or two team members estimating 1 point and others estimating 4 or 5, they can discuss and reach a consensus.

Agile planning: how the process works and the steps to take

Release Plan Process

Your product development plan must show the release goal: this encompasses how teams should solve problems or how teams will improve the user experience. Based on these goals, here are some steps to properly plan the release:

1. Discuss with teams the features needed to accurately address the goals.
2. Discuss the details involved in each feature and factors that can affect delivery. This must encompass the infrastructure required, risk, and external dependencies. Features with higher risk and highest value should be early in the release.
3. Decide amongst teams how much each member can commit to finishing in each sprint. Usually, this gets compared to the team's velocity in previous sprints. It is best to consider existing work on infrastructure or tools and known interruptions such as support work.
4. List the stories and epics for the release in priority order with their respective sizes. An epic is a significant development task that gets broken down into more than one user story.
5. After step four, add an iteration to the plan.
6. Add stories to the iteration until the maximum capacity is reached.
7. Ensure that lower-priority user stories get removed to fit the required time frame for release, or add more iterations until the user stories are covered.
8. Share the plan using your agile management software of choice (or Asana) and request feedback to get commitment from all team members, product owners, and other known stakeholders.

Sprint Planning Process

After the release plan process, the sprint planning process will help an agile team plan at the beginning of a new sprint, as part of an existing release plan:

1. Hold a retrospective meeting to discuss the previous sprints completed and lessons learned.
2. Run a sprint planning meeting to analyse the release plan and update it according to velocity in recent sprints, changes to priorities, or idle time that the team did not plan for in the release. Another reason to run a sprint planning meeting is to identify new features.
3. Ensure that user stories are detailed enough to use. Be technical and elaborate on tasks that are not well defined to avoid confusion or unwanted surprises that might impede progress.
4. To make the progress of sprint planning easy, break down user stories into specific tasks. For instance, a user story can get divided into UX design, back-end implementation, and front-end development of the interface. Make sure to keep the size of tasks small, requiring no more than one workday.
5. Make sure to assign tasks to each team member and confirm that they are committed to performing them, which is essential. In the Agile/Scrum framework, this responsibility lies with the Scrum Master.
6. For each task, write it on a physical sticky card and place it on a large board visible to the entire team. All user stories in the current sprint should be up on the board.
7. Keep track of all tasks on a grid by recording the responsibility for completing each task, monitoring the remaining hours and actual hours used, and the estimated time to complete each task.
8. Keep track of velocity using a burndown chart. When each sprint starts, use the team's time tracking to calculate a graph showing the number of tasks or hours remaining vs the plan. The burndown chart's slope will indicate if the team is on schedule, ahead, or behind schedule. (See the sketch at the end of this guide for a minimal example.)

Implement a Daily Standup Meeting

Daily meetings are crucial to communicating progress and to identifying and solving issues during a sprint. Each day, gather the entire team and have every team member report on their status:

- Make sure that the maximum duration of the meeting is no longer than 15 minutes.
- Daily agile planning meetings are usually stand-up meetings to foster courage and bravery amongst team members.
- The Scrum Master or release manager is responsible for coordinating and helping team members overcome obstacles.
- For each meeting, give each member no more than one minute to report what they did yesterday, what they will be doing today, and what is in their way: the things preventing them from finishing a task on time.
- Each task's status should only be marked as ''done'' or ''not done'', and if a task is marked "not done", it is best to know how many hours remain for that task.

Agile Planning Template

Asana has various templated options to suit most business needs if you require a structured template for agile planning purposes.

Using the best team management tool for agile planning

Use an agile planning tool to help define the user stories in the release, assign them to team members, organise them into sprints, and track progress daily. A team can manage its sprint iteration planning with Asana templates. We help all kinds of teams that use Asana as their primary agile project management tool. They even use the tool to:

- See clear ownership of features and bugs
- Plan sprints realistically
- Know at a glance if something is amiss or if a team member is falling behind on work
- Set your team a dedicated and aligned timeline
- Get a proper understanding of priorities and estimates.
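As promised in step 8 of the sprint planning process, here is a minimal Python sketch of the burndown calculation. The sprint length, planned hours, and daily figures are invented for illustration; in practice the numbers would come from your time-tracking tool.

```python
# Hypothetical 10-day sprint with 80 planned hours of task work.
sprint_days = 10
planned_hours = 80

# Hours of work actually remaining at the end of each day (day 0 = sprint start).
actual_remaining = [80, 74, 70, 66, 58, 52, 47, 40, 31, 22, 12]

# The ideal line burns the backlog down evenly across the sprint.
ideal_remaining = [planned_hours - planned_hours * day / sprint_days
                   for day in range(sprint_days + 1)]

for day, (ideal, actual) in enumerate(zip(ideal_remaining, actual_remaining)):
    status = "behind" if actual > ideal else "on track"
    print(f"day {day:2d}: ideal {ideal:5.1f}h  actual {actual:3d}h  -> {status}")
```

Plotting the two series gives the familiar burndown chart: whenever the actual line sits above the ideal line, the slope tells you the team is behind schedule and the release plan may need adjusting.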
<urn:uuid:e51a3f5c-40bc-4534-9ad2-51e4b805342c>
CC-MAIN-2024-38
https://www.gend.co/blog/a-complete-guide-on-agile-planning-and-management
2024-09-14T13:36:39Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651579.38/warc/CC-MAIN-20240914125424-20240914155424-00230.warc.gz
en
0.944003
1,978
2.609375
3
This article describes the process to configure Word Breaker. Word Breaking is the breaking down of text into individual text tokens or words. Many languages, especially those with Roman alphabets, have an array of word separators (e.g., blank space) and punctuation used to distinguish words, phrases, and sentences. Word breakers must rely on accurate language heuristics to provide reliable and accurate results. Word breaking is more complex for character-based systems of writing or script-based alphabets, where the meaning of individual characters is determined from context. A Word Breaker is vital for the proper indexing of most of the Asian languages (e.g., Japanese, Chinese, and Arabic) and other languages.

To configure the Word Breaker, you have to set up the Language Analyzer as described below:

- Open GFI Archiver.
- Navigate to the Configuration tab and click Archive Stores.
- Click Index Management.
- Configure one of the language analyzing options:

Option | Description |
Enable built-in word breaker | The GFI Archiver language analyzer is enabled by default. It is highly recommended to enable this option for optimal indexing performance. |
Enable Microsoft Windows word breaker | Choose this option to disable the GFI Archiver built-in word breaker and use the word breaker of your Windows operating system. Use the Default Language drop-down list to specify the language to be used to index archived data. NOTE: If the required language is not listed in the Default Language drop-down list, add the required language from the Regional settings option within the Windows® control panel. Alternatively, check the Enable automatic language detection box to let Windows detect the language automatically. |
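To illustrate why a language-aware word breaker matters for indexing, here is a minimal Python sketch (not part of GFI Archiver). A naive separator-based tokenizer handles a Roman-alphabet sentence well, but returns a Japanese sentence as one giant token because the text contains no spaces between words.

```python
import re

def naive_word_break(text: str) -> list[str]:
    """Split on whitespace and punctuation: adequate for many Roman-alphabet languages."""
    return [t for t in re.split(r"[\s\.,;:!?\"()]+", text) if t]

print(naive_word_break("The quick brown fox jumps over the lazy dog."))
# ['The', 'quick', 'brown', 'fox', 'jumps', 'over', 'the', 'lazy', 'dog']

# Japanese has no spaces between words, so the same heuristic yields a single token.
# This is why indexing CJK text needs a word breaker with language-specific heuristics.
print(naive_word_break("私は東京に住んでいます"))
# ['私は東京に住んでいます']
```

A search index built from the second result could only match the whole sentence, never the individual words inside it, which is exactly the problem the language analyzer options above are designed to solve.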
<urn:uuid:7ccd0bdd-a098-4967-8bf0-be6223419b23>
CC-MAIN-2024-38
https://support.archiver.gfi.com/hc/en-us/articles/360015213180-Configuring-the-Word-Breaker
2024-09-15T20:42:46Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651647.78/warc/CC-MAIN-20240915184230-20240915214230-00130.warc.gz
en
0.811027
355
3.03125
3
We use BGP on the Internet to exchange routing information between autonomous systems (AS). A route leak happens when one or more routes are advertised and accepted by ASes that shouldn't have these routes. This can have nasty side effects like traffic delays, drops, and security issues. In this lesson, we'll dive into BGP roles, what we should advertise, what a route leak is, why route leaks are bad, how to detect one, and how to deal with it. Let's get started!

Routing between ASes

When studying BGP, chances are that you only look at it from a technical perspective. The Internet might look like a huge spiderweb of ASes. It looks like a spiderweb, but there is some hierarchy. ASes don't simply connect to every other AS they can reach and let BGP path selection figure out how to get everywhere. Each AS is, in some form, one of two things:

- Customers requiring Internet access who pay a provider to get access to the Internet.
- ISPs that make money by selling Internet connectivity to customers.

An end user, such as an enterprise business requiring Internet access, is a customer. An ISP can be both a provider and a customer. ISPs can offer Internet to their customers but, depending on their size, require higher-tier providers to connect to the rest of the Internet. In a nutshell, each AS acts as a customer, a provider, or a peer (or some combination). Here's a visualization:

Let me explain this overview. We have ASes that pay for access and ASes that earn money. The black arrows tell who is paying whom. For example:

- AS 4, 5, 6, and 7 are customers requiring Internet access. They pay AS 2 and 3 for their services.
- AS 2 and AS 3 have two roles:
  - They are providers for customers AS 4, 5, 6, and 7.
  - They are customers of transit provider AS 1 for access to other parts of the Internet.
- AS 2 and AS 3 are also peers:
  - AS 2 can reach AS 6 and AS 7 directly through AS 3.
  - AS 3 can reach AS 4 and AS 5 directly through AS 2.
  - AS 2 and AS 3 peer with each other, so they don't have to use transit provider AS 1 to reach each other's networks.
  - AS 2 and AS 3 might not charge each other for traffic they send to each other.

Advertising between roles

Now that we understand the roles, let's talk about what we normally advertise.

Provider to Customer
A provider advertises Internet routes to customers. This can be a full or partial Internet routing table or a default route. The customer advertises its network and perhaps routes from its customers if it has any.

Peer to Peer
The ASes exchange routes to each other's networks and their customers' networks, but nothing else.

Route Leak Definition

You now know about the different roles and what each AS usually advertises to other ASes. When do we consider something a route leak? RFC 7908 does a great job of explaining what a BGP route leak is. Its definition: a route leak is the propagation of routing announcements beyond their intended scope. The scope is defined by the BGP import and export policies of an AS, which state what routes should be exchanged between the local AS and the remote AS. This is an agreement on paper between two ASes, and then you need to configure your router(s) to advertise and filter specific routes.

Route Leak Causes

We use a lot of network automation nowadays, but in the end, humans configure routers and advertise routes. A route leak can happen when someone advertises a route they are not supposed to. This can be an accident or malicious. Most of the time, it's because of misconfiguration. You want to advertise something to one neighbor but forget to filter something outbound.
It’s also possible you forgot to filter routes you receive from your neighbor. The way BGP was designed, nothing checks whether our configuration matches the configuration of a remote router. Nothing checks our peering relationship (customer, provider, peer).

Route Leak Types

There are different types of route leaks, called classifications or types. We'll walk through them. Types 1, 2, 3, and 4 are similar; the route leak is a violation of policy. The difference is which source AS leaks and to which destination AS (their roles).

Type 1: Hairpin Turn with Full Prefix
A multihomed AS learns a route from an upstream AS and propagates it to another upstream AS. The route hasn't changed; it's the same prefix, and the AS path did not change. The multihomed AS shouldn't advertise this prefix, and the second upstream AS should have filtered it, but that didn't happen. It's called a hairpin because the traffic goes to AS 4 and turns around the other way. This route leak often succeeds because ASes are often configured to prefer customer routes over peer routes. Traffic will be forwarded as long as AS 4 can keep up.

Type 2: Lateral ISP-ISP-ISP Leak
This route leak typically happens when you have three peers in a row. Lateral is the same thing as "peer-to-peer" or "non-transit". Here's an example. This is what happens:

- AS 2 learns a prefix from its customer in AS 5.
- AS 2 forwards the prefix to:
  - AS 1 (transit provider)
  - AS 3 (peer)
- AS 3 forwards the prefix to AS 4, while it should only advertise its own prefixes.
- AS 4 might forward traffic destined for AS 5 through AS 3.

Traffic will get there, but it's not using the intended path. Traffic from AS 4 to AS 2 should use transit provider AS 1.

Type 3: Leak of Transit-Provider Prefixes to Peer
This route leak happens when an AS leaks routes from a transit provider to a peer. Here's an example: AS 3 learns prefixes from its transit provider AS 1. These are forwarded to AS 2, which can now use its peer link to AS 3 to reach destinations behind AS 1. Traffic is forwarded, but this will cost AS 3 money and might overburden the link between AS 2 and AS 3.

Type 4: Leak of Peer Prefixes to Transit Provider
This route leak happens when an AS forwards routes from a peer to its transit provider. Here is an example: AS 2 forwards a prefix from its customer in AS 4 to its transit provider AS 1 and to its peer AS 3. AS 3, however, leaks this prefix to AS 1. AS 1 can now use AS 3 to get to AS 4.

Type 5: Prefix Re-origination with Data Path to Legitimate Origin
This type is similar to type 1: a multihomed AS learns a route from upstream AS 1 and advertises it to upstream AS 2. In this scenario, however, the route is advertised as if the AS originated the route. This means the AS path is removed. This is also known as re-origination.

Type 6: Accidental Leak of Internal Prefixes and More-Specific Prefixes
This route leak happens when an AS leaks internal prefixes to one or more transit provider ASes and/or ISP peers. The leaked route is more specific than an already advertised summary route. The specific route should not have been advertised, and the receiving AS failed to filter it. Here's an example: AS 2 advertises a summary route for the routes from AS 4 and AS 5. It also advertises a specific route to its peer, AS 3. Specific routes are preferred over summary routes.
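To see why the Type 6 leak works, here is a minimal Python sketch of longest-prefix-match route selection. The prefixes and AS labels are hypothetical; the point is that a leaked, more-specific route always wins over the legitimate summary.

```python
import ipaddress

# Hypothetical routing table: the legitimate summary plus a leaked, more-specific route.
routes = {
    ipaddress.ip_network("203.0.113.0/24"): "via AS 2 (legitimate summary)",
    ipaddress.ip_network("203.0.113.128/25"): "via AS 3 (leaked specific)",
}

def best_route(dst: str) -> str:
    """Routers always prefer the longest (most specific) matching prefix."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routes if addr in net]
    return routes[max(matches, key=lambda net: net.prefixlen)]

print(best_route("203.0.113.200"))  # via AS 3 (leaked specific): traffic is diverted
print(best_route("203.0.113.10"))   # via AS 2 (legitimate summary)
```

Any destination covered by the leaked /25 is pulled toward the leaking AS, no matter how good the legitimate summary path is, which is what makes this type of leak so disruptive.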
Route Leak Consequences

The AS that performs the route leak attracts traffic to itself. There are several issues with route leaking:

- Violation of policies
- Traffic forwarding through unintended paths
- Suboptimal routing
- Increased latency
- Traffic blackholes
- Security issues:
  - Packet sniffing

ASes agree on what routes they advertise, so leaking routes violates policies. Because of a route leak, traffic is forwarded through unintended paths. This can cause suboptimal routing and additional delays. The leaking AS might not have enough capacity to forward the traffic. If the leaking AS advertises something it doesn't own, it may drop the traffic, causing black holes and denial of service (DoS). There are also security issues. The leaking AS will receive traffic, which it can inspect. It can also set up servers using IP addresses that belong to the leaked route. For example, you can spoof DNS servers, making users resolve hostnames to IP addresses you own.

Route Leak Detection

Some companies detect global BGP route leaks by analyzing BGP messages. Two examples are:

On your local AS, the things you can monitor are:

- Route monitoring to see any changes in received routes.
- Spikes in traffic volume.
- Increase in round-trip time (RTT).
- Number of packet drops.

Route Leak Mitigation

What can we do to prevent route leaking? You need to configure your routers with the correct outbound and inbound filters and not advertise anything you aren't supposed to. There is no check between BGP neighbors to see whether your configuration matches the configuration on the other end. There is no enforcement of the relationship between the two neighbors. There are some things you can do, however. I'll give you a short overview.

Prefix List Filtering
You can configure prefix lists with ge or le operators to filter prefixes of a certain length. This prevents you from advertising or accepting specific prefixes.

Communities
Communities are also used. For example, no-export tells a neighbor AS not to propagate a prefix, and no-advertise tells the remote router not to advertise it anywhere. ASes can use their own communities as well. The important thing with communities is that this is something you have to do both inbound and outbound.

AS-Set
An AS-Set is used by ASes to group AS numbers they provide transit for. This can be used to define the filters you need to configure. Don't confuse this with the aggregate as-set.

BGP Roles and OTC
RFC 9234 describes a solution where eBGP neighbors must agree on a BGP role and peering relationship when they establish a neighbor adjacency. This is implemented with a new BGP role capability in the OPEN message. BGP roles can be:

- Provider
- Customer
- Peer
- Route Server
- Route Server Client

The following role pairs are permitted:

- Provider – Customer
- Customer – Provider
- Route Server – Route Server Client
- Route Server Client – Route Server
- Peer – Peer

When there is a mismatch, the BGP neighbor adjacency fails. Once two BGP routers agree on their BGP roles and the neighbor adjacency is established, they have to figure out whether to propagate a route or not. This is done with a new optional transitive BGP path attribute called only-to-customer (OTC). When a router advertises a route, it sets the OTC attribute with the value of its own AS. When it is not set, the router on the receiving AS sets the OTC with the value of the originating AS. It doesn't matter who sets it as long as the originating AS value is added. Other routers in other ASes use the OTC attribute and compare it with their peering relationship to decide whether to propagate a route to other ASes or not.
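As an illustration of the RFC 9234 logic described above, here is a minimal Python sketch of the OTC ingress and egress checks. It is a simplification for clarity (the route server roles are omitted), not a full implementation of the RFC.

```python
# Roles as defined in RFC 9234; OTC = only-to-customer path attribute.
PROVIDER, CUSTOMER, PEER = "provider", "customer", "peer"

def on_receive(route: dict, neighbor_role: str, neighbor_as: int) -> bool:
    """Ingress check: reject routes whose OTC marking proves they already went down."""
    if neighbor_role == CUSTOMER and route.get("otc") is not None:
        return False                # a customer must never send an OTC-marked route: leak
    if neighbor_role in (PROVIDER, PEER) and route.get("otc") is None:
        route["otc"] = neighbor_as  # mark on ingress if the sender did not set it
    return True

def on_advertise(route: dict, neighbor_role: str, local_as: int) -> bool:
    """Egress check: an OTC-marked route may only flow downhill, to customers."""
    if route.get("otc") is not None and neighbor_role != CUSTOMER:
        return False                # propagating it to a provider or peer would be a leak
    if neighbor_role in (CUSTOMER, PEER) and route.get("otc") is None:
        route["otc"] = local_as     # set OTC when advertising to a customer or peer
    return True

# A route learned from a peer gets OTC stamped on ingress...
r = {"prefix": "203.0.113.0/24"}
assert on_receive(r, PEER, neighbor_as=65001) is True
# ...so trying to pass it on to a provider (a Type 4 leak) is now blocked.
assert on_advertise(r, PROVIDER, local_as=65002) is False
```

The design is deliberately simple: once a route has travelled "down" or "sideways" in the hierarchy, the OTC stamp makes any attempt to send it back "up" mechanically detectable at either end of the session.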
Resource Public Key Infrastructure (RPKI)

BGP assumes that the routes we exchange are genuine and trustworthy. That's not the case on the Internet, though. Everyone can advertise whatever routes they want. RPKI is a digital certificate system where we associate BGP route announcements with the correct originating AS. This allows ASes to verify that the advertising AS is the legitimate owner of the routes they advertise. RPKI is defined in RFC 6480. We have an RPKI lesson and an example of implementing it on Cisco IOS XE: BGP Prefix Origin AS Validation with RPKI.

AS Provider Authorization (ASPA)

ASPA is a security mechanism that helps to validate the relationship between an AS and its upstream ASes. An ASPA record states which ASes are allowed to propagate routes.

You have now learned what BGP route leaking is, how it happens, the different types, and how to prevent it from happening. To learn more about BGP route leaks, look at RFC 7908. This RFC also has some real-life examples of global route leaks. There have been many global route leaks. Here are four major global BGP route leaks that are interesting to dive into:

- Vodafone India BGP Leak (2021): This leak occurred in Vodafone's autonomous network (AS55410) based in India and significantly impacted U.S. companies, including Google. The network mistakenly advertised over 30,000 BGP prefixes or routes, leading to a 13 times spike in inbound traffic and effectively causing a self-inflicted DDoS attack. This incident affected over 20,000 prefixes from global autonomous networks and lasted for about 10 minutes.
- SafeHost BGP Leak (2019): In this incident, SafeHost (AS21217) announced over forty-thousand IPv4 routes learned from other peers and providers to its provider China Telecom (AS 4134). China Telecom then propagated these routes globally, affecting a vast number of networks. The leak was so extensive that almost every Full Routing Table (FRT) peer announced at least one leaked route to a route collector.
- Nigerian ISP BGP Leak: A Nigerian ISP, while peering with Google, accidentally leaked its route to its provider AS4809, creating a type 4 route leak. This overwhelmed the small ISP and led to Google services being down for over an hour.
- Cloudflare and AWS Route Leak (2019): This leak involved networks such as Cloudflare and AWS. It originated when ATI propagated thousands of routes received from its provider DQE communications to another provider, Verizon. This provider-to-provider leak significantly impacted global internet traffic, involving more than 4,000 different origins and over 65,000 subnets. Verizon was notably affected, with the leak lasting almost two hours.

If you want to see BGP route leaks in action, you can check out Cisco's BGP Stream. It's a free resource that analyses BGP messages to detect BGP hijacks and route leaks. I hope you enjoyed this lesson. If you have any questions, feel free to leave a comment!
<urn:uuid:3ba596cd-3b5c-432e-a23e-b4e2fb438555>
CC-MAIN-2024-38
https://networklessons.com/bgp/bgp-route-leaking
2024-09-17T01:07:06Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651722.42/warc/CC-MAIN-20240917004428-20240917034428-00030.warc.gz
en
0.931899
3,129
2.71875
3
The potential risks to sensitive corporate data range from something as narrow as the failure of a few sectors on a disk drive to something as broad as the failure of an entire data center. When planning data protection as part of an IT project, an organization has to weigh multiple considerations beyond selecting which backup and recovery solution it will use.

Data has penetrated every facet of our lives. It has evolved from an imperative procedural function into an intrinsic component of modern society. This transformation has placed a responsibility on data processors, data subjects, and data controllers to respect the inherent values of data protection law. As privacy rights continually evolve, regulators face the challenge of identifying how best to protect data in the future.

While data protection and privacy are closely interconnected, there are distinct differences between the two. To sum it up: data protection is about securing data from unauthorized access, while data privacy is about authorized access, namely who defines it and who has it. Essentially, data protection is a technical issue, whereas data privacy is a legal one. For industries that are required to meet compliance standards, there are significant legal implications associated with privacy laws, and guaranteeing data protection alone may not satisfy every stipulated compliance standard.

Data protection law has undergone its own evolution. Instituted in the 1960s and 70s in response to the rising use of computing, and revised in the 90s to handle the trade of personal information, data protection is becoming more complex. In the present age, the relative influence and importance of information privacy to cultural utility can't be overstated. New challenges are constantly emerging in the form of new business models, technologies, services, and systems that increasingly rely on 'Big Data', analytics, AI, and profiling. The environments and spaces we occupy and pass through generate and collect data.

Technology teams have been adopting data management techniques such as ETL (Extract, Transform, and Load). ETL is a data warehousing process that uses batch processing and helps business users analyze data that is relevant to their business objectives. There are many ETL tools that manage large volumes of data from multiple data sources, handle migration between multiple databases, and easily load data to and from data marts and data warehouses. ETL tools can also be used to convert (transform) large databases from one format or type to another.

The Limitations of Traditional DLP

Dated DLP solutions offer little value. Most traditional DLP implementations consist mainly of network appliances designed to look at gateway egress and ingress points. The corporate network has evolved; the perimeter has largely dissolved, leaving network-only solutions full of gaps. Couple that with the dawn of the cloud and the reality that most threats emanate at the endpoint, and you understand why traditional, network-appliance-only DLP is limited in its effectiveness. DLP solutions are useful for identifying properly defined content but usually fall short when an administrator is trying to identify other sensitive data, such as intellectual property that might contain schematics, formulas, or designs. The data protection criterion has to transform to include a focus on understanding threats irrespective of their source.
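To show what "properly defined content" means in practice, here is a minimal Python sketch of pattern-based DLP detection. The patterns and sample text are illustrative assumptions; production DLP engines add validation (for example, Luhn checksums on card numbers) and contextual rules to cut false positives.

```python
import re

# Simple detectors for well-defined content; real DLP adds validation and context.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn":      re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan(text):
    """Return every pattern name with the matches found in the text."""
    hits = {}
    for name, rx in PATTERNS.items():
        found = rx.findall(text)
        if found:
            hits[name] = found
    return hits

sample = "Ship to jane@example.com, card 4111 1111 1111 1111, SSN 123-45-6789."
print(scan(sample))
```

Note how none of these patterns would catch a leaked schematic or formula, which is exactly the gap the paragraph above describes.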
Demand for data protection within the enterprise is rising, as is the variety of threats taxing today's IT security admins. This transformation demands advanced analytics and enhanced visibility to conclusively identify what the threat is, plus versatile controls to respond appropriately based on business processes and risk tolerance.

Factors Driving the Evolution of Data Protection

Current data protection frameworks have their limitations, and new regulatory policies may have to be developed to address emerging data-intensive systems. Protecting privacy in this modern era is crucial to good and effective democratic governance. Some of the factors driving this shift in attitude include:

Regulatory Compliance: Organizations are subject to obligatory compliance standards imposed by governments. These standards typically specify how businesses should secure Personally Identifiable Information (PII) and other sensitive information.

Intellectual Property: Modern enterprises typically have intangible assets, trade secrets, or other proprietary information like business strategies, customer lists, and so on. Losing this type of data can be acutely damaging. DLP solutions should be capable of identifying and safeguarding critical information assets.

Data Visibility: In order to secure sensitive data, organizations must first be aware that it exists, where it exists, who is using it, and for what purposes.

Data Protection in the Modern Enterprise

As technology continues to evolve and IoT devices become more and more prevalent, several new privacy regulations are being ratified to protect us. In the modern enterprise, you need to keep your data protected, you have to be compliant, and you have to constantly worry about a myriad of threats: malicious attacks, accidental data leakage, BYOD, and much more. Data protection has become essential to the success of the enterprise. Privacy by Design, or incorporating data privacy and protection into every IT initiative and project, has become the norm.

It's not enough to 'just' protect your data; you also have to choose the best way to secure it. The best way to accomplish this in a modern enterprise is to find a solution that delivers intelligent, person-centric, fine-grained, data-centric protection in an economical and rapidly recoverable way.
<urn:uuid:e66946c0-9d38-4e24-a8ed-1e11fbb6b527>
CC-MAIN-2024-38
https://www.filecloud.com/blog/2019/04/the-evolution-of-data-protection/
2024-09-17T01:20:23Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651722.42/warc/CC-MAIN-20240917004428-20240917034428-00030.warc.gz
en
0.94237
1,138
2.734375
3
UK Female Computing Students on the Rise, but Not Fast Enough

We all know that today's IT workforce is facing a massive skills shortage. Tech in the UK is booming, with the UK digital technology sector worth nearly £184 billion and the turnover of companies within it growing by 4.5% between 2016 and 2017 (compared with UK GDP, which only grew by 1.7%). And yet in spite of this growth, we find ourselves increasingly reliant on imported talent: about a fifth (180,000) of technology jobs in London are occupied by EU citizens. Worryingly, we have already seen a 10% downturn in job applications from the continent, and that was before Brexit had even happened.

Solution to the Skills Gap: More Women in Technology

Evidently we need to fill this gap fast, and arguably a key way of doing this would be to get more women into the tech industry. Women make up over 50% of our population, but only 17% of those working in technology in the UK are female.

The statistics from today's A-level results show a positive trend for girls in computing; however, the subject is still overwhelmingly dominated by boys. Boys make up 88% of computing students, with only 12% being girls. That is a step up from 90% for boys and 10% for girls last year, but still staggeringly weighted to one side. This is also evident at the university level: only 15% (3,015) of computer science graduates in 2016/17 were women.

A prevailing stigma is that computing "isn't for girls", but the statistics show that, despite being a minority, girls are outperforming boys in the subject at A-level. 4.2% of girls achieved an A* and 20.1% achieved an A or A* (up from 2.3% and 14.7% respectively in 2017), while just 3.2% of boys achieved an A* and 17.9% achieved an A or A* (up from 3.1% and 17.2% respectively in 2017).

Removing the Gender Biases

Despite this, the heavily weighted gender gap in Computing A-level and other STEM subjects shows that changes need to be made at a governmental level to remove the stigma around girls getting into technology. The education system needs to empower females to get into STEM subjects from a younger age. Once they leave education, the technology industry is responsible for encouraging women into its workforce. At this time, traditional recruitment continues to fail to target and attract women due to gender biases.

The digital skills gap is a massive issue in the UK and globally, as technology (including malevolent technology, such as the tools used by cybercriminals) evolves at a rapid pace. Bring the number of women working in computing up to parity with men and you've doubled the talent pool. It sounds simple in theory, but in practice it requires businesses and governments to invest in programs and schemes that break down the barriers stopping young women from viewing a career in computing, and technology more widely, as viable.

The future must be female in order to bridge the digital skills gap! Join the conversation!
<urn:uuid:65c453e0-bec7-4e72-8e14-1ab0bf737de7>
CC-MAIN-2024-38
https://www.ivanti.com/blog/uk-female-computing-students-on-the-rise-but-not-fast-enough
2024-09-09T19:28:52Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651133.92/warc/CC-MAIN-20240909170505-20240909200505-00794.warc.gz
en
0.959491
654
2.53125
3
The thought of information management can sound complicated or overly broad, but many aspects of information management are used in our daily personal lives; we just don't think of them in such a formal way. I'll provide you with an example that's easy to understand and even apply to your own life. First, here are a couple of clear and simple definitions for information management and information integrity. In general terms, the definitions of the two concepts are as follows:

- Information management: a process that leads to an interpretation of data to convey meaning
- Information integrity: the accuracy and trustworthiness of the information

Ultimately, an easy way to think about information management is as the process of knowing where to get information and then using it appropriately to solve a problem or answer a question. If the inner workings of this process fail, then the information isn't being managed well.

For instance, let's say you have a group of five friends with whom you are trying to coordinate a dinner. You ask everyone their restaurant preference and store each of their answers in a shared spreadsheet with specific fields and an organizational structure. Based on the answers, you decide where you should go to eat. You do this every time you go out to eat as a group. At the end of the year, one of your friends wonders aloud, "Where was our favorite place to eat last year?" As long as you were accurately storing this information in a safe, organized place, they will be able to find the answer. They can access your shared Excel spreadsheet, review the yearly information, and then confidently say which restaurant was the favorite of the prior year. If you hadn't been diligent with your information gathering and storing, the answer would have been estimated and potentially inaccurate, which would mean that the information integrity had been lost. I hope this real-life example helped you understand information management from an everyday perspective.
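If you wanted to automate the dinner example above, a few lines of Python could answer the "favorite place last year" question from the shared spreadsheet exported as CSV. The file name and column names here are assumptions made up for the sketch.

```python
import csv
from collections import Counter

def favorite_restaurant(path, year):
    """Return the most frequently chosen restaurant for a given year."""
    votes = Counter()
    with open(path, newline="") as f:
        # Hypothetical columns: 'date' (ISO format, e.g. 2019-05-02) and 'restaurant'
        for row in csv.DictReader(f):
            if row["date"].startswith(str(year)):
                votes[row["restaurant"]] += 1
    choice, count = votes.most_common(1)[0]
    return choice, count

print(favorite_restaurant("dinners.csv", 2019))
```

The answer is only as trustworthy as the records, which is the information-integrity point of the example.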
<urn:uuid:1fc97c4e-a51c-41e8-a196-d3be53944a37>
CC-MAIN-2024-38
https://netlogx.com/blog/2020/06/19/information-management-simplified-by-alec-mitchell/
2024-09-11T00:44:39Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651323.1/warc/CC-MAIN-20240910224659-20240911014659-00694.warc.gz
en
0.966768
399
3.609375
4
What is a BEC Scam?

Business email compromise (BEC), also known as email account compromise (EAC), is one of the most financially damaging online crimes. It exploits the fact that so many of us rely on email to conduct business, both personal and professional.

In a BEC scam, criminals send an email message that appears to come from a known source making a legitimate request, like in these examples:
- A vendor your company regularly deals with sends an invoice with an updated mailing address.
- A company CEO asks her assistant to purchase dozens of gift cards to send out as employee rewards. She asks for the serial numbers so she can email them out right away.
- A homebuyer receives a message from his title company with instructions on how to wire his down payment.

Versions of these scenarios happened to real victims. All the messages were fake. And in each case, thousands, or even hundreds of thousands, of dollars were sent to criminals instead.

How Criminals Carry Out BEC Scams

A scammer might:
- Spoof an email account or website. Slight variations on legitimate addresses (for example, a single swapped or added character in the domain) fool victims into thinking fake accounts are authentic.
- Send spear-phishing emails. These messages look like they're from a trusted sender to trick victims into revealing confidential information. That information lets criminals access company accounts, calendars, and data that give them the details they need to carry out the BEC schemes.

How to Protect Yourself
- Be careful with what information you share online or on social media. By openly sharing things like pet names, schools you attended, links to family members, and your birthday, you can give a scammer all the information they need to guess your password or answer your security questions.
- Don't click on anything in an unsolicited email or text message asking you to update or verify account information. Look up the company's phone number on your own (don't use the one a potential scammer is providing), and call the company to ask if the request is legitimate.
- Carefully examine the email address, URL, and spelling used in any correspondence. Scammers use slight differences to trick your eye and gain your trust.
- Be careful what you download. Never open an email attachment from someone you don't know, and be wary of email attachments forwarded to you.
- Set up two-factor (or multi-factor) authentication on any account that allows it, and never disable it.
- Verify any payment changes and transactions in person or via a known telephone number.
- Be especially wary if the requestor is pressing you to act quickly.
- Remember: Gift cards are for gifts, not payments.

How to Report

If you or your company fall victim to a BEC scam, it's important to act quickly. Notify management, who will institute the company's incident response plan. Such a plan typically includes:
- Contacting the financial institution immediately and requesting that it contact the financial institution where the transfer was sent.
- Contacting the local FBI field office to report the crime.
- Filing a complaint with the FBI's Internet Crime Complaint Center (IC3).
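Returning to the advice above about examining sender addresses: lookalike-domain detection can be automated. Below is a minimal Python sketch that flags sender domains resembling, but not matching, a trusted list. The domain names and similarity threshold are assumptions for illustration; mail gateways use more robust techniques, and this should not be treated as a complete defense.

```python
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"abc-company.com", "titleco.com"}  # hypothetical allow list

def similarity(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def check_sender(address, threshold=0.85):
    """Classify a sender address as known, suspicious lookalike, or unknown."""
    domain = address.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return "known domain"
    for trusted in TRUSTED_DOMAINS:
        if similarity(domain, trusted) >= threshold:
            return f"suspicious: resembles {trusted}"
    return "unknown domain"

# 'rn' standing in for 'm' is a classic lookalike trick:
print(check_sender("billing@abc-cornpany.com"))  # suspicious: resembles abc-company.com
```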
<urn:uuid:effbbdcf-a93e-458b-a6a2-eceda06fd118>
CC-MAIN-2024-38
https://www.icssnj.com/blog-what-is-a-bec-scam.html
2024-09-10T23:53:52Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651323.1/warc/CC-MAIN-20240910224659-20240911014659-00694.warc.gz
en
0.930862
665
2.546875
3
Cryptojacking a Growing Threat to Government Sites

After Researchers Report on Issue, Security Experts Offer Mitigation Advice

Researchers say hackers are increasingly using Indian government websites to mine cryptocurrencies. Security experts urge government authorities to take steps to mitigate the risks of cryptojacking.

Cryptocurrency-mining programs use a computer's processing power to generate hashes. Proof-of-work cryptocurrencies rely on crowdsourced hashes to complete blocks of transactions on a blockchain. If a correct hash is submitted, a share of cryptocurrency is paid back to miners as a reward. The process of mining isn't necessarily harmful to a computer, but it does consume extra electricity and in some cases can cause performance problems or even cripple a system by monopolizing its processing power.

According to new research conducted by Shakil Ahmed, Anisha Sarma and Indrajeet Bhuyan, computer science students at Assam Don Bosco University, hackers have illegally gained access to government websites in India and planted cryptocurrency-mining malware to mine digital currencies. The researchers first discovered cryptomining script on the Andhra Pradesh government's municipal websites and then found similar vulnerabilities in many other government websites as well.

"The IT adviser to the Chief Minister of Andhra Pradesh, JA Chowdary, was immediately notified of the same, but as of September 16, 2018, the websites were still running the cryptojacking scripts," Bhuyan says. "The website was, however, down as of September 18, 2018."

Sachin Raste, security researcher at eScan, an internet security solution provider, notes: "Government websites are highly lucrative targets for cryptojacking criminals due to the sheer fact that the volume of traffic on these sites is substantially high. Furthermore, most government-owned websites are handled by third-party vendors and understandably the maintenance is not very high."

Pune-based Rohan Vibhandik, a cybersecurity practitioner and researcher at a large IT organization, says government sites handle so much traffic that it can be difficult to identify the illegitimate traffic. "Cryptojackers use the system resources of government sites to execute the intended operations for generating the cryptocurrencies," he says. "At some level, corporate or private websites can restrict the ingress communication based on their business interests, unlike government websites, which have a wide and diversified user base."

Some researchers claim that government websites not only run outdated software and CMSes but also have very poor security disclosure policies, so reporting flaws is a challenge as well.

The Modus Operandi

Ever since Coinhive, a cryptocurrency-mining service company, launched its service in September 2017, there has been an increase in the number of cryptojacking incidents. Cryptojacking is spreading fast because it's profitable and many organizations don't know how to prevent it. In a blog post, Bhuyan shares step-by-step details of how the team discovered vulnerabilities in Indian government websites. "Our first aim was to make a list of all the government websites of India and see if they are infected by cryptominers," he writes. "We searched online for a list of government websites but did not get any, so we headed over to the website goidirectory.nic.in, which lists all government websites."

The private sector in India, too, is vulnerable to cryptojacking.
In May, Aditya Birla Group, one of the nation's largest conglomerates, was cryptojacked, with more than 2,000 computers across various companies within the group affected.

Surge in Cryptojacking

A Fortinet report states that cryptojacking malware impacted 13 percent of companies globally in Q4 2017. The figure grew to 28 percent as of Q1 2018. The report further states that cryptojacking may prove more harmful in the long run than ransomware, because cryptomining is tougher to detect and takes control of a computer, which could then potentially be used to carry out other attacks.

"Ransomware and cryptojacking are fairly similar in terms of how they need to penetrate and spread between systems," says Rajesh Maurya, regional vice president, India & SAARC, Fortinet. "Ransomware has some inherent limitations, such as poor long-term strategy for leveraging existing victims for additional revenue. Cryptojacking, if done properly, can leverage the processing power of hijacked systems to mine for cryptocurrencies for a longer time. So it's a long-term profitable venture," Maurya says.

Fortinet's Threat Landscape Report Q2 2018 reveals cybercriminals have added IoT devices to their arsenal of tools used for mining cryptocurrencies.

Is There a Solution?

The main reason government websites are attacked is that they lack continuous and rigorous traffic monitoring, some security experts say. "They should have strong firewalls coupled with IDS and IPS systems. Apart from that, system-level scanners would help them to check the usage of memory resources going beyond threshold," Vibhandik says.

To mitigate cryptojacking risks, eScan's Raste says all government websites should:
- Have standardized IT security policies for all web services and IT-enabled public services;
- Be managed by a central authority;
- Have a centralized SOC and NOC.
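The "resource usage beyond a threshold" check Vibhandik describes can be sketched in a few lines of Python. This is a crude heuristic for illustration, assuming the third-party psutil library; the threshold and sample counts are invented values you would tune against your own baseline, and sustained CPU load has many legitimate causes besides cryptomining.

```python
import psutil  # third-party: pip install psutil

CPU_THRESHOLD = 85.0   # percent; tune to your system's normal baseline
ALERT_AFTER = 3        # consecutive high samples before alerting

def watch(interval=5):
    """Flag sustained CPU saturation, one possible cryptojacking symptom."""
    strikes = 0
    while True:
        usage = psutil.cpu_percent(interval=interval)  # blocks for `interval` seconds
        strikes = strikes + 1 if usage > CPU_THRESHOLD else 0
        if strikes >= ALERT_AFTER:
            print(f"ALERT: sustained CPU load {usage:.0f}% - investigate for mining")
            strikes = 0

if __name__ == "__main__":
    watch()
```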
<urn:uuid:322fad5b-8d44-4f63-abc3-9ebb2bb27082>
CC-MAIN-2024-38
https://www.inforisktoday.asia/cryptojacking-growing-threat-to-government-sites-a-11531
2024-09-10T23:58:13Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651323.1/warc/CC-MAIN-20240910224659-20240911014659-00694.warc.gz
en
0.950753
1,094
2.671875
3
Preventing And Overcoming Browser Hijacking Malware

Browser hijacking refers to malware that's capable of changing your browser's settings without your knowledge. Often, your homepage or default search engine will be changed, or new bookmarks or pop-ups added. Spotting the effects of browser hijacking malware is usually easy, but it's best to avoid infection altogether. Mary Alleyne of Jupiter Support published a list of ways to avoid becoming a victim of hijackware.

- Effective Antivirus Programs

As with any malware, an up-to-date, trusted antivirus program is the key to stopping most infections. Anything you download, even if it's from a seemingly trustworthy site, should be scanned before you open it. Many antivirus programs also offer constant scanning in the background that will alert you immediately if malware, viruses or trojans have infected your system.

- Disaster Recovery

Unfortunately, malware is updated and new pieces are released at a rate too fast for antivirus programs to keep up with. This means that even the best antivirus programs can't be relied on to catch every piece of malware. Since there's always a chance that your computer will be infected with a browser hijacker or other malware, take precautions and make a plan for how you'll recover. Back up important data and look into other security software that will aid your antivirus program.

- Change Security Settings

Most popular web browsers offer higher security if you're willing to sacrifice some functionality. In Internet Explorer, these settings are available under 'Internet Options' on the 'Security' tab. While setting the security level to 'High' will prevent your browser from automatically executing some code, including the ActiveX instructions that allow most browser hijackers to function, it will also prevent some websites from working properly. For trusted sites, however, you'll be able to add them to an exceptions list that restores full functionality to only those sites.

- Change Browsers

Almost all browser hijacking malware is specifically coded for one browser. This means that malware that works on IE won't work on Firefox or Chrome, and vice versa. The simplest way to avoid the problem if you're infected with hijackware is to use a different browser. But the underlying problem won't be fixed and shouldn't be ignored. Switching browsers is a simple way to end the hijacking, but you'll still want to get rid of the malware causing it.

More in-depth fixes, like editing the 'Hosts' file for malicious entries and searching the registry for specific websites, also help overcome browser hijacking malware, but require a little more expertise.

If your computer is infected with malware, Geek Rescue fixes it. Bring your device to us, or call us at 918-369-4335.

January 7th, 2014
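As a starting point for the 'Hosts' file check mentioned above, here is a small Python sketch that flags watched domains redirected to unexpected addresses. The file path, watched domains, and allowed addresses are illustrative assumptions; hijackers use many other tricks, so treat this as a first-pass inspection aid, not a cleaner.

```python
from pathlib import Path

# Typical locations; adjust for your system.
HOSTS = Path(r"C:\Windows\System32\drivers\etc\hosts")  # or Path("/etc/hosts")
WATCHED = {"google.com", "www.google.com", "bing.com"}  # domains hijackers often redirect
LOCAL = {"127.0.0.1", "::1", "0.0.0.0"}                 # addresses that stay on the machine

def suspicious_entries(path=HOSTS):
    """Yield (ip, hostname) pairs where a watched domain points somewhere unexpected."""
    for line in path.read_text().splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blank lines
        if not line:
            continue
        ip, *names = line.split()
        for name in names:
            if name.lower() in WATCHED and ip not in LOCAL:
                yield ip, name

for ip, name in suspicious_entries():
    print(f"Check this entry: {name} -> {ip}")
```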
<urn:uuid:5a092a66-b020-48f3-9749-ba33428f6350>
CC-MAIN-2024-38
https://www.geekrescue.com/blog/2014/01/07/preventing-and-overcoming-browser-hijacking-malware/
2024-09-12T05:21:47Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651422.16/warc/CC-MAIN-20240912043139-20240912073139-00594.warc.gz
en
0.866839
606
2.546875
3
Although one might think of a computer as being a single piece of hardware, this isn't quite accurate. A standalone laptop, for example, comes packed with a screen, touchpad, keyboard, camera, and more, each of which might be considered a device in and of itself. This is to say nothing of the many potential peripherals that people can add on via Bluetooth, Wi-Fi and USB, including computer mice, headphones, external monitors, and much more.

This means that even a standalone laptop has quite a few "devices" to manage. This is where Windows Device Manager comes in; it is a very powerful tool, and if you know when to use it, you can save yourself headaches in a number of circumstances, no matter how many peripherals are at play. As long as you have admin-level credentials for a given device, you can use Windows Device Manager in Windows 10 to troubleshoot device issues, modify device functionality, improve privacy, and more.

Let's take a look at ways that you can capitalize on Windows Device Manager. (As a disclaimer, please make sure that you know what you intend to do in Windows Device Manager before accessing it. Windows Device Manager is useful precisely because its components get to the very core of how your computer functions, but this means that mistakes can have major consequences.)

Keep your devices updated

One of the most well-known features of Windows Device Manager is the ability to update device drivers. Microsoft defines a driver as "any software component that observes or participates in the communication between the operating system and a device." Practically speaking, a driver sends messages between devices and operating systems; having an outdated driver can therefore hinder the functionality of any component of your computer.

Opening up Windows Device Manager reveals a long list of devices. For any of these devices, you can right-click to see what drivers are installed, and check for any available driver updates online; this can be helpful if any computer components are showing erratic behavior. If Windows detects issues with any devices, it will display caution marks next to those devices, letting you find problems with just a quick glance. If a quick search does not show any new available drivers, you can search device manufacturers' websites for newer drivers.

Modern devices usually feature "plug and play" functionality, which is to say that connecting them to your computer automatically installs a driver and requires minimal intervention on the part of the user. If you have a much older device you need to connect, you can instead use Windows Device Manager's "Add Hardware Wizard" feature to manually install the necessary drivers.

Gain the Upper Hand in Securing Device Privacy

Especially given the public's increasing vigilance regarding online security, the thought of having a microphone or camera that can always observe someone is disturbing. Like any hardware component, a computer's built-in microphones and cameras rely on drivers to function, and this means they can be managed using Windows Device Manager.

Windows Device Manager can deactivate any given device, provided it is not essential to the core functionality of your computer. This effectively removes your operating system's ability to interface with cameras and microphones, meaning that no one can intrude on your privacy.
What makes this useful is that it is easily reversible: simply right-clicking on the device and choosing to re-enable it will bring it back online.

If you are interested in determining the history of what devices have been connected to your computer in the past, Windows Device Manager also enables you to do this. By selecting "Show Hidden Devices" in the "View" menu, you can see devices that were once, but are no longer, attached to the computer (such as USB drives).

Access from Anywhere

Another benefit of knowing how to use Windows Device Manager is that you can access it in all sorts of ways, even if one or several parts of the computer are malfunctioning. Whether you only have access to a keyboard, or a mouse, or Cortana voice dictation, you will still be able to open up and utilize Windows Device Manager. If you have device issues that require you to reboot a device in Safe Mode, you can even access it there.

Taking Device Management to the Next Level

For an individual user with admin-level credentials, Windows Device Manager is enormously useful. However, a business's IT admins need to manage anywhere from dozens to thousands of devices, and because the consequences of misusing Device Manager can be disastrous, they need to orchestrate how and when employees use it. This also assumes that Windows Device Manager can be found on every device, which is far from accurate in workplaces filled with iPhones, Macbooks, Android tablets, and other non-Windows devices.

Unified endpoint management technology, like SureMDM, lets you manage devices whether or not they run Windows. UEM technology does for a business what Windows Device Manager does for an individual device; you can quickly assess and rectify any device health issues, or shut devices down if they pose urgent security threats to an organization.
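For admins who want to script the "quick glance for caution marks" across a fleet before reaching for full UEM, the device list Windows Device Manager shows can also be queried programmatically. Here is one possible sketch in Python that shells out to PowerShell's Get-PnpDevice cmdlet (available on Windows 10); the filtering and field names are a reasonable assumption, but verify the cmdlet's output on your own systems before relying on it.

```python
import json
import subprocess

def list_problem_devices():
    """Ask PowerShell for devices whose status is anything other than 'OK'."""
    cmd = [
        "powershell", "-NoProfile", "-Command",
        "Get-PnpDevice | Where-Object Status -ne 'OK' | "
        "Select-Object FriendlyName, Status, Class | ConvertTo-Json",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    devices = json.loads(out) if out.strip() else []
    if isinstance(devices, dict):  # a single result is emitted as one object
        devices = [devices]
    return devices

for dev in list_problem_devices():
    print(dev.get("FriendlyName"), "-", dev.get("Status"))
```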
<urn:uuid:db67d31c-169d-4d0b-8248-712e4d48ebc9>
CC-MAIN-2024-38
https://www.42gears.com/nl/blog/ways-windows-device-manager-can-help-you/
2024-09-13T11:02:23Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651513.89/warc/CC-MAIN-20240913101949-20240913131949-00494.warc.gz
en
0.931946
1,068
2.703125
3
Permissions, Privileges, and Access Controls

Sun Java Web Start in JDK and JRE 5.0 Update 10 and earlier, and Java Web Start in SDK and JRE 1.4.2_13 and earlier, allows remote attackers to perform unauthorized actions via an application that grants privileges to itself, related to "Incorrect Use of System Classes" and probably related to support for JNLP files.

CWE-264 - Permissions, Privileges, and Access Controls

CWE-264 (permissions, privileges, and access controls) is not a weakness in and of itself; rather, it is a category of weaknesses related to the management of permissions, privileges, and other security features used to perform access control. If not addressed, the weaknesses in this category allow attackers to gain privileges for an unintended sphere of control, access sensitive information, and execute arbitrary commands.
<urn:uuid:ac4f568f-8628-4f89-9736-c27036c69699>
CC-MAIN-2024-38
https://devhub.checkmarx.com/cve-details/cve-2007-2435/
2024-09-16T00:27:21Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651668.26/warc/CC-MAIN-20240915220324-20240916010324-00294.warc.gz
en
0.864577
178
2.515625
3
Android is the most widely used operating system. It is written in Java, C, C++, XML, assembly language, Python, shell script, and other languages. An Android system is very much prone to virus attacks, so for security purposes there are now many Android antivirus programs available. Before proceeding, you should know what an antivirus actually is: software used to detect and remove computer viruses is called antivirus software, or anti-malware. The basic purpose of this type of software is to remove the viruses that infect a computer or your Android system. We are now in an era of serious cyberspace security concerns and increasing hacking attempts. This makes us vulnerable, which is why we are all searching for good, affordable antivirus software. You can see precisesecurity.com's list of Android antivirus software to choose the best option available to you.

A virus spreads the way germs pass from one person to another. It moves from host to host and has the potential to duplicate itself. Android viruses cannot replicate and spread without a host, such as a file or document. A virus can harm the files on your system, or it can affect only a few parts of your system; it depends on the strength of the virus. If a virus attacks an Android system, it cannot be removed without using an antivirus program or anti-malware software. A virus can also remain dormant without showing any clear signs. Usually, the symptoms after a virus attack are not clearly visible to us, but the virus can still damage internal files or documents.

What Does a Virus Do?

Many people ask what a virus actually does after attacking. There are many things a virus can do, some of which are given below:
- Stealing passwords
- Stealing data
- Logging keystrokes
- Corrupting files
- Spamming your email contacts
- Taking over your machine
- Erasing your data
- Causing permanent damage to a hard disk

Viruses can be spread through:
- Text message attachments
- Internet file downloads
- Social media scam links

Signs of an Android Virus:
- Frequent pop-up windows
- Changes to your homepage
- A massive amount of email being sent from your email account
- Frequent crashes, which can indicate serious damage to your hard drive
- Unusually slow performance
- Unknown programs that start up when you turn on your device
- Unusual activities, like password changes

Types of Android Viruses:
- Android Installer Hijacking
- Malware Hidden in Downloaded Apps
- Universal Cross-Site Scripting Attack (UXSS)
- Polymorphic virus
- Macro virus
- Multipartite virus
- Boot sector virus

Removing a Virus Needs an Antivirus

We are always searching for antivirus software to remove viruses from our systems. In the world of Android, we need suitable, reliable antivirus software so that we can protect our sensitive systems.

Why Use Antivirus Software?

There are many popular antivirus companies like Bitdefender, Eset, Avast, Avira, Kaspersky, Norton, etc. However, make sure that you do proper research before choosing your solution. Going for a free antivirus solution also poses risks, as there have been multiple occasions where free antivirus providers engaged in selling user data. Therefore, it is better to go for a premium solution. You can always find reasonable prices throughout the year, like Kaspersky Total Security deals and others.
Hence, spend some time on research. Antivirus software protects your device from viruses, spyware, and other types of malware. It performs regular scans to detect threats and vulnerabilities and provides protection against malware-infected links on the web. Many solutions also verify the reliability of Wi-Fi networks and include anti-theft tools to protect devices and data. Some of this software even uses machine learning to combat new threats.

Not having antivirus on a computer is like inviting a criminal into your home: an uninvited guest who causes trouble or steals information from the owner. Today's internet provides countless avenues for virus attacks, and there are thousands of threats. To be safe from these, it is vital to monitor the computer and keep it protected at all times. The importance of antivirus software cannot be overstated. That's why we recommend taking precautions.

Article by: Sarah T. Brodie
<urn:uuid:cd742365-c53b-4d04-8ad2-fd606bb0078f>
CC-MAIN-2024-38
https://latesthackingnews.com/2019/11/19/your-ultimate-antivirus-software-guide/amp/
2024-09-17T06:01:38Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651739.72/warc/CC-MAIN-20240917040428-20240917070428-00194.warc.gz
en
0.918738
984
3.265625
3
By now, nearly everyone knows about GDPR and how it completely changed the way the world views privacy rights. Considered to be one of the world's strongest data protection laws, GDPR is both comprehensive and wide-reaching. Since 2018, it has served as the global standard for data protection rules and has inspired similar legislation, like the California Consumer Privacy Act.

But what is GDPR compliance? Despite its popularity, there are misconceptions about compliance. Perhaps you understand the importance of GDPR but are unsure of how to adhere to its principles. Let's take a closer look at parts of the GDPR and how they relate to your security strategy. But first, you need to understand the reason behind this law.

GDPR stands for the General Data Protection Regulation. It's an EU law that took effect in May 2018. It governs privacy, data collection, and data protection within the European Union and the European Economic Area (EEA). GDPR's primary purpose is to protect private information and standardize data protection laws across the EU. But more than that, it protects the individual's fundamental rights and freedoms, with the right to privacy clearly stated in Article 8 of the European Convention on Human Rights. In other words, if your organization does business in the EU and EEA, you must follow the GDPR regulations. Failure to do so comes with stiff fines and penalties.

It goes without saying that GDPR compliance is good for customers, but it's good for businesses as well. Its strict privacy regulations:
- Require organizations to strengthen their cybersecurity
- Promote better policies for handling and processing data
- Help strengthen the trust between customers and businesses

GDPR isn't without challenges. The consequences of non-compliance with the GDPR are significant, and maintaining compliance isn't easy due to its vague language and the dynamic nature of technology. Following are some challenges organizations face when adhering to the privacy laws laid forth in the GDPR.

Lack of readiness

Organizations of all sizes struggle with becoming GDPR compliant. Sometimes it's due to complacency or a lack of understanding. Other times, it's because consolidating years of data and training employees to follow new data security laws is a long and complex process. Many companies have addressed this challenge by hiring experts who specialize in helping companies with compliance-related challenges.

Managing external parties

GDPR requires external parties like vendors and contractors ("data processors") to follow the same legal compliance standards as you, the data controller. This also means your organization is responsible for ensuring all third parties you collaborate with follow protection measures that align with the GDPR, because your organization could be held liable if an external party suffers a data breach.

If your organization uses third parties to process data, evaluate their processing activities to ensure they're GDPR compliant. You need to know:
- How third parties manage and protect data
- Their protocol for reporting breaches
- Whether their company policies and cybersecurity strategy align with GDPR standards

Meeting your security obligations

While the GDPR doesn't focus specifically on cybersecurity, the privacy law certainly influences it. Along with requiring protections like identity and access management (IDAM) and encryption, GDPR compliance requires organizations to have an incident response plan ready in the event of a cyberattack.
Meeting these obligations can be a challenge. Though there's an abundance of tools to strengthen your security posture, that's not enough to keep you protected. Additionally, you need a security strategy that includes 24/7 monitoring to quickly detect and mitigate threats. And this requires hiring security experts with the skills to monitor and protect your IT systems.

Vague and ambiguous wording

One of the most frequently vocalized challenges of the GDPR is its ambiguity. Much of the GDPR is written to be vague and open-ended, providing little clarity on the roles and responsibilities of the data controller. For example, the law states organizations can only process data when it's "necessary," but offers little guidance on what is and isn't deemed necessary.

Another example is the broad and confusing definition of personal data. The GDPR defines this as any information relating to an individual's private, public, or professional life. Personal data can be anything from medical records and financial information to pictures and posts taken from social media.

GDPR Compliance Requirements

As one of the most comprehensive laws passed recently, the GDPR covers a wide range of security and privacy requirements. Following is a GDPR overview and some best practices for maintaining compliance. Keep in mind this information is for educational purposes only. It is not intended to be legal advice. Always consult a lawyer who specializes in GDPR compliance to assist you with following compliance regulations for your specific circumstances.

Organizations that process personal data are required to follow these seven GDPR principles:

1. Lawfulness, fairness, and transparency: Data must be processed lawfully, fairly, and in a transparent manner.
2. Purpose limitation: When collecting data, it must be for a specific and legitimate purpose and only used for the reason cited.
3. Data minimization: Organizations must only collect as much personal data as needed for the purposes explicitly specified to the customer.
4. Accuracy: Personal data must be accurate and kept up to date. Organizations must take all reasonable measures to correct inaccurate data.
5. Storage limitation: Organizations may only store personal data as long as necessary for its intended purpose.
6. Integrity and confidentiality: Data must be processed securely, in a way that protects the confidentiality of personal information.
7. Accountability: The data controller is responsible for ensuring GDPR compliance.

Unfortunately, much of the GDPR was written to be vague. The reasoning behind this is that technology constantly changes, which means the practices organizations take to protect data must also change. Because of this, understanding how to follow GDPR requirements can be challenging.

Meeting GDPR's Lawfulness and Transparency Requirements

The GDPR prohibits organizations from processing data without justification. GDPR Article 6 stipulates that "data controllers" can lawfully process data if the "data subject" gives explicit consent to use their personal data for one or more specific purposes.
In addition, data processing is considered lawful in any of the following scenarios:
- To ensure the performance of a contract between the data subject and the data controller
- If the data controller must process data to comply with a legal obligation
- To protect the vital interests of the data subject or another person
- To carry out a task in the interest of the public, or if an official authority has been vested in the data controller
- To protect the legitimate interests of the controller or a third party, without violating the rights and freedoms of the data subject

If your justification for collecting data is consent, you'll need to make sure people have the ability to revoke that consent anytime they want.

If your organization has 250 or more employees or conducts high-risk data processing, you must maintain an up-to-date list of your processing activities as laid out by GDPR Article 30. These records include, but are not limited to:
- Name and contact information of all data controllers
- The reason why you processed the data
- Description of the data subject categories and categories of personal data collected
- All recipients who received or will receive the data collected, including international recipients
- Time limits for erasing the collected data, when possible
- Description of the security measures protecting the data

Organizations required to keep these records must hand them over to regulators upon request. Organizations with fewer than 250 employees should follow the same guidelines, as doing so can help them maintain GDPR compliance.

Meeting GDPR's Data Security Requirements

GDPR Article 32 is a very important section for IT security and cybersecurity professionals. This section lays out the steps organizations should follow to secure private data. These steps include:
- Pseudonymizing personal data and protecting it with encryption
- Making data readily available upon request
- Ensuring provisions are in place to prevent data from being accessed or tampered with by unauthorized persons, whether accidentally or deliberately
- Implementing emergency measures (such as offsite backup) to quickly restore access to personal data in the event of an incident
- Implementing a process for regularly testing and evaluating your organization's data security measures

This means your organization is responsible for protecting private data and keeping it out of the hands of unauthorized parties. Yes, it's a significant responsibility, and one requiring fundamental changes in how you think about private data. Both data protection and how you collect and manage data must be a priority. To that end:
- Limit the data collected from users to only what you need
- Delete data once you have no more use for it

Data protection needs to become an integral part of your organization's culture. Everyone from C-level executives to employees must be on board for data protection.

Create an internal security policy

Remember, GDPR compliance is about data protection and privacy; cybersecurity is only a portion of that. A robust security strategy is an important part of maintaining compliance, but you also need to protect yourself from internal threats. When we speak of internal threats, it's not just malicious insiders who deliberately steal private information. It's also employees who mishandle data and/or practice poor security hygiene. That's why you need to create a policy that ensures everyone within your organization knows how to protect and manage data.
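As an illustration of the pseudonymization measure listed under Article 32 above, here is a minimal Python sketch that replaces a direct identifier with a keyed, repeatable pseudonym. The key and record are placeholders; note that because the mapping is reversible by anyone holding the key, the GDPR still treats pseudonymized output as personal data, so the key must be stored separately (for example, in a secrets manager).

```python
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-managed-secret"  # placeholder: keep apart from the data

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed, repeatable pseudonym."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"email": "alice@example.com", "plan": "premium"}
record["email"] = pseudonymize(record["email"])  # same input always maps to same token
print(record)
```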
Educate your employees on topics like:
- Email security
- Using strong passwords and multi-factor authentication
- Encrypting devices, and other good practices for internal security

Consider giving extra training to employees who handle personal data, to lessen the human error that leaves you open to threats.

Conduct impact assessments

GDPR Article 35 requires organizations to conduct a data protection impact assessment (DPIA) when processing data in a way that could "result in a high risk" to the freedoms and rights of the person. Unfortunately, GDPR does not define high-risk data. However, many organizations use the guidelines laid forth by the European Data Protection Board on DPIAs to determine what is high-risk data. This data includes, but is not limited to:
- Innovative technology
- Decisions surrounding credit checks, mortgage applications, and other screening processes related to products, services, opportunities, or benefits
- Large-scale data profiling
- Biometric data
- Personal data pulled from multiple sources
- Personal data not obtained from the subject, when the data controller has difficulty proving (or cannot prove) compliance with Article 14
- Tracking data that looks at an individual's geolocation and behavior

There's nothing new about DPIAs; a DPIA is essentially a business impact analysis (BIA) under a different name. While GDPR compliance only requires companies processing high-risk data to perform these assessments, it's a good idea for everyone to do them as a way to minimize risk.

Article 35 sets some guidelines for performing a DPIA. These guidelines include:
- Consulting with a data protection officer
- Providing a description of processing operations, including the interests pursued by the data controller
- An assessment of the necessity of the data being collected
- An assessment of the risks to the freedoms and rights of the data subject
- Safeguards and security measures put in place to minimize risks and protect the data subject

Understanding the GDPR Notification Requirements

The GDPR requires organizations to notify the authorities within 72 hours of experiencing a data breach. While this seems like a straightforward process, there are a few things to take note of. Here's what you need to know.

GDPR meaning of a data breach

The conventional definition of a data breach is pretty cut and dried: sensitive and/or private data has been compromised by an external threat. Data commonly accessed in data breaches includes:
- Email addresses and passwords
- Social security numbers
- Financial information, like credit card numbers and banking details

GDPR expands this definition to include the scenario mentioned above, in addition to a broader range of accidental and deliberate circumstances. The law broadly defines a data breach as a cybersecurity incident that affects the integrity, confidentiality, or availability of personal data. This means data breaches aren't simply cybersecurity incidents where private data is stolen. Here are some examples of personal data breaches, as defined by the GDPR:
- When data is accessed by an unauthorized party
- Accidental and deliberate actions (and inactions) by a data controller or data processor
- Sending personal data to the wrong recipient
- Personal data altered without permission
- When computing devices that contain personal data are lost or stolen
- Any personal data that becomes unavailable

Even though the WannaCry ransomware attack of 2017 didn't result in stolen data, it would qualify as a personal data breach under the GDPR.
The reason for this is that the ransomware attack used encryption to make personal data inaccessible to organizations. In other words, any incident involving personal data that could risk the rights and freedoms of a person should be treated like a data breach under the GDPR requirements. And it should be reported to the relevant authorities within 72 hours.

A key point to note is that GDPR will not save you from a ransomware attack; it's a mechanism to reduce risk and protect data. Data residency and GDPR are linked, but they aren't the same thing. Applying GDPR principles is important regardless of where your data resides. Furthermore, know where your most important data is and how to secure it; a blanket approach cannot always be achieved.

[Related Reading: What Is Ransomware?]

Who do you report to?

Now that you know what a data breach is, who do you notify? There isn't a straightforward answer for this, either. While you can find a list of official National Data Protection Authorities on the European Union website, the law doesn't specify which public authority you should notify if your organization isn't based in the EU. If your organization is based in an English-speaking country outside of the EU, consider reporting your data breach to the Office of the Data Protection Commissioner in Ireland. You can find more information about reporting in GDPR Article 33.

Following are some points to keep in mind in the event of a data breach:
- Data processors are required to notify data controllers without undue delay
- Data controllers are required to notify the authorities within 72 hours

What's more, GDPR Article 34 requires data controllers to notify individuals in the event of a high-risk breach. This is only required if:
- The private data isn't unintelligible to the unauthorized party (not anonymized, encrypted, etc.)
- The controller hasn't taken measures to prevent the compromised data from becoming a risk to the individuals affected
- Public notifications wouldn't be effective

What to include in your report

Article 33 outlines the information your organization should include in its incident report:
- Description of the data compromised. If possible, include categories, the approximate number of data subjects, and the approximate number of personal data records
- The name and contact information of the data protection officer who can be reached for additional information
- Description of the likely consequences of the breach
- Description of measures proposed or taken to address the breach and mitigate its effects

If you're unable to provide all this information at once, you can report it in phases without undue further delay.

Strengthening GDPR Compliance with Cybersecurity

British Airways initially faced fines of $238 million for a 2018 data breach that compromised 430,000 customers' personal data. While its final fine was $28 million to account for the economic impact of COVID-19, one thing is certain: the penalty for noncompliance is strict. Similarly, Swedish clothing retailer H&M received a fine of approximately $41 million for violating the GDPR.

Clearly, the need for GDPR compliance and good cybersecurity is more important than ever. But if it's impossible to prevent 100% of attacks, how do you protect your organization from data breaches that could turn into compliance nightmares? Your response time is important. The quicker you respond to a data breach, the easier it is to mitigate the damage.
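Because the 72-hour clock starts when the breach is detected, incident response teams often track the deadline explicitly. Here is a minimal Python sketch of that arithmetic; the detection timestamp is an invented example, and a real workflow would feed this from your incident ticketing system.

```python
from datetime import datetime, timedelta, timezone

NOTIFY_WINDOW = timedelta(hours=72)  # GDPR Article 33 notification window

def notification_deadline(detected_at):
    """Latest time by which the supervisory authority must be notified."""
    return detected_at + NOTIFY_WINDOW

detected = datetime(2024, 3, 4, 9, 30, tzinfo=timezone.utc)  # example timestamp
deadline = notification_deadline(detected)
remaining = deadline - datetime.now(timezone.utc)

print(f"Notify supervisory authority by {deadline:%Y-%m-%d %H:%M UTC}")
print(f"Time remaining: {max(remaining, timedelta(0))}")
```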
Unfortunately, IBM found the average time it takes for an organization to identify and contain a breach is 277 days. You don't want to be an organization that takes months to contain a data breach; it will cost your company a lot of money in penalties, fines, and lost sales. That's why you should protect your data with a managed detection and response (MDR) solution that gives you 24/7 monitoring. A good MDR provider will notify you of potential breaches or suspicious activity within minutes. This means you can investigate and address the incident, minimizing the severity of the impact or possibly avoiding damage altogether.

Reach GDPR Security Objectives with Fortra's Alert Logic

Fortra's Alert Logic MDR and XDR solutions will help you strengthen your security posture to GDPR-compliance levels. Alert Logic provides:
- 24/7 monitoring and response by security professionals for your on-premises and cloud environments
- Assessment, detection, and alerting capabilities designed to ensure you maintain necessary security measures
- Intrusion detection systems (IDS) that identify potential threats, like brute-force attacks, command-and-control exploits, and privilege escalations
- Automated log management, web application monitoring, and other security tools to minimize threats and reduce your response time

Suffering a data breach can be catastrophic. Even if you can pay the fines and penalties, the damage to your reputation may be beyond repair. It's a situation you and your customers never want to be in. Schedule a demo today and see why many organizations trust Alert Logic with their security and GDPR compliance needs.
<urn:uuid:891b38af-50a8-495b-bebd-4332bfbfc499>
CC-MAIN-2024-38
https://www.alertlogic.com/blog/what-is-gdpr-compliance/
2024-09-18T11:21:34Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651895.12/warc/CC-MAIN-20240918100941-20240918130941-00094.warc.gz
en
0.918067
3,701
2.703125
3
The Federal Communications Commission encouraged web users to prepare for the upcoming transition to Internet Protocol version 6 as part of World IPv6 Launch Day, scheduled for June 6, by checking their computers and network equipment for readiness. IPv6 is an upgrade from the current IPv4 system and will expand the number of available IP addresses from more than four billion to more than 340 undecillion.

"As IPv4 addresses run out, most new websites and online services will have IPv6 addresses that can only be accessed if your computer or network equipment is prepared," wrote FCC Chief Technology Officer Henning Schulzrinne on the FCC blog. "In addition, the performance, reliability and security of the Internet and online services (such as video streaming and IP telephony) may slowly degrade unless most web traffic and network equipment is made compatible with IPv6. Finally, as new IPv4 addresses become harder and harder to get, new businesses, from providers of content and services to new ISPs, will find it more difficult to get started and compete with established firms unless they utilize IPv6."

The FCC has created an IPv6 Consumers Guide to help users prepare for the transition.
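A quick way to check one aspect of the readiness the FCC describes is to test whether a machine can actually open an IPv6 connection. Below is a best-effort Python sketch; the hostname is just an example of an IPv6-reachable host (any host with a AAAA record works), and a failure may reflect your ISP or router rather than the computer itself.

```python
import socket

def has_ipv6_route(host="ipv6.google.com", port=80, timeout=5):
    """Best-effort check: can this machine open an IPv6 TCP connection?"""
    if not socket.has_ipv6:
        return False          # the OS/network stack has no IPv6 support at all
    try:
        infos = socket.getaddrinfo(host, port, socket.AF_INET6, socket.SOCK_STREAM)
        family, socktype, proto, _, sockaddr = infos[0]
        with socket.socket(family, socktype, proto) as s:
            s.settimeout(timeout)
            s.connect(sockaddr)
        return True
    except OSError:
        return False          # no AAAA record resolved, or no IPv6 path to the host

print("IPv6 ready" if has_ipv6_route() else "IPv4 only")
```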
<urn:uuid:42ccabaf-dc46-4f0b-ba7c-04872fc6844f>
CC-MAIN-2024-38
https://preprod.fedscoop.com/fcc-readies-for-world-ipv6-launch-day/
2024-09-19T14:24:53Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652031.71/warc/CC-MAIN-20240919125821-20240919155821-00894.warc.gz
en
0.946245
239
2.5625
3
CATV technology has matured steadily over the past several years and has expanded into diverse applications. However, with the rapid expansion of technology and services, it's important to improve the performance of CATV network components for higher-quality video and audio signal transmission. The optical amplifier is the key element in such transmission for CATV applications. This post gives a clear introduction to the optical CATV amplifier and its application in CATV transmission.

A CATV amplifier is a type of EDFA (Erbium Doped Fiber Amplifier), the most popular optical amplifier in optical network communications. It is mainly used to amplify attenuated TV signals (compensating for loss) to improve signal quality before sending them to each subscriber. CATV amplifiers not only amplify the signal but also amplify the noise on the line, and they introduce some return loss. That's why a quality CATV amplifier costs a little more: it provides better performance for the whole network.

As we all know, a CATV network is a multi-channel TV system that transmits high-quality video and sound signals from a large number of digital or analog broadcast television and radio channels via fiber optic cable or coaxial cable. The CATV amplifier often acts as a booster amplifier in this system to achieve satisfactory transmission. The following picture illustrates a basic long-haul CATV transmission system using an EDFA amplifier.

In most cases, satellite providers deliver high-quality digital video and audio to users' homes, depending on the users' equipment. However, the incoming cable feed is often connected to more than one piece of equipment using optical splitters, and if the incoming signal gets fragmented and rerouted, the overall speed and quality will suffer. Under this condition, an optical amplifier can be used to boost the signal power and help users get better service.

As mentioned above, a basic long-haul CATV communication link consists of a head end, a transmitter, a receiver, and an optical amplifier; sometimes a fiber splitter is also needed in this type of transmission network. The head end receives TV signals off the air or from satellite feeds and supplies them to the transmission system. The optical splitters are often utilized in a point-to-multipoint configuration. Here are two CATV fiber network cases using a CATV booster amplifier.

The first is a point-to-multipoint, medium-sized private CATV network. In the head end, the transmitter receives signals from the RF combiner on the 1310nm or 1550nm wavelength. The signals are then split into several parts and received by the CATV receivers. Finally, all the signals are amplified by the CATV amplifier and sent to the subscribers.

In the first application case, the optical amplifier sits behind the CATV receiver, but in this second case it's a little different. As we can see from the graph, the CATV amplifier sits in front of the receiver to extend the transmission distance. In addition, this transmission network deploys two DWDM Mux/Demux units to multiplex eight different wavelengths onto one fiber for better transmission. Please note that this graph illustrates only part of the long-haul CATV system.

CATV amplifiers are used to boost the quality of optical signals and improve the speed and reliability of the services that users get. FS.COM offers various CATV amplifiers with different gain values, as well as CATV optical transmitters. All of them are high quality.
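To see why a booster amplifier matters on a long-haul link, it helps to work the power budget in decibels. Here is a small Python sketch; the figures (0.2 dB/km fiber loss at 1550nm, a 10.5 dB loss for a 1x8 split, 20 dB of EDFA gain) are typical illustrative values only, so consult your component datasheets for real designs.

```python
def received_power_dbm(tx_power_dbm, fiber_km, edfa_gain_db,
                       fiber_loss_db_per_km=0.2,   # typical single-mode loss at 1550 nm
                       splitter_loss_db=10.5):     # e.g. a 1x8 optical split
    """Simple link budget: launch power minus losses plus amplifier gain."""
    return (tx_power_dbm
            - fiber_km * fiber_loss_db_per_km
            - splitter_loss_db
            + edfa_gain_db)

rx = received_power_dbm(tx_power_dbm=7, fiber_km=60, edfa_gain_db=20)
print(f"Power at receiver: {rx:.1f} dBm")   # 7 - 12 - 10.5 + 20 = 4.5 dBm
```

Without the 20 dB of gain, the same link lands at -15.5 dBm, which shows how the amplifier buys back the distance and split losses.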
If you are interested, please contact us via email@example.com.
<urn:uuid:4f1d50fb-4131-40e3-b87a-dd153049cc2a>
CC-MAIN-2024-38
https://www.fiber-optic-components.com/tag/optical-amplifier-2
2024-09-07T15:26:59Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650883.10/warc/CC-MAIN-20240907131200-20240907161200-00194.warc.gz
en
0.924221
724
2.5625
3
Definition: Loopback Interface A loopback interface is a virtual network interface primarily used in networking for testing, managing, and troubleshooting purposes. It is not associated with any physical hardware and is often used to ensure that a network stack is properly implemented and functioning. Overview of Loopback Interface The loopback interface, typically associated with the IP address 127.0.0.1 in IPv4 and ::1 in IPv6, plays a crucial role in network communications. By design, packets sent to the loopback interface are immediately received by the same device, allowing for self-communication without leaving the host. The loopback interface’s significance extends across various areas of networking and computing. This includes verifying software functionality, facilitating secure internal communications, and serving as a critical tool for network engineers and system administrators. Key Features of Loopback Interface - Virtual Nature: The loopback interface does not correspond to any physical hardware. Instead, it is a software construct that mimics a network interface. - Self-Communication: It allows a host to send and receive data to itself, which is essential for testing and development purposes. - Default Configuration: Most operating systems automatically configure the loopback interface with the IP address 127.0.0.1 (IPv4) and ::1 (IPv6). - Internal Network Testing: It is used to test the IP stack and internal network functions without the need for an actual network. - High Priority: Loopback traffic is often given a higher priority within the network stack, ensuring reliable and fast processing. Benefits of Using Loopback Interface The loopback interface provides several advantages, especially in network testing and development: - Testing and Development: Developers use the loopback interface to test network applications locally. By sending data packets to the loopback address, they can ensure the application is working correctly without needing an external network. - Security: The loopback interface can be used to run internal services securely, isolated from external network threats. - Network Management: Network administrators use the loopback interface for managing and troubleshooting devices. It helps verify that the network stack is functioning as expected. - Performance Monitoring: The loopback interface allows for monitoring the performance of the local network stack and identifying potential bottlenecks or issues. Use Cases of Loopback Interface The loopback interface finds its applications in numerous scenarios: - Local Application Testing: Developers can test web servers, databases, and other networked applications by directing traffic to the loopback address. - Service Isolation: Services that need to communicate internally without exposing themselves to the external network can use the loopback interface. - Network Diagnostics: Administrators can perform diagnostics and network configuration tests to ensure that devices are properly set up. - Security Mechanisms: The loopback interface is used in implementing certain security features, such as local firewalls and access control. How to Configure Loopback Interface Configuring the loopback interface typically involves minimal steps, as most operating systems automatically set it up. However, understanding its configuration can be essential for advanced network setups. On Linux Systems On Linux systems, the loopback interface is usually configured during the boot process. 
You can verify and interact with it using standard network commands:

# Display loopback interface information
ifconfig lo

# Bring the loopback interface up
sudo ifconfig lo up

# Assign an IP address to the loopback interface (if needed)
sudo ifconfig lo 127.0.0.1

On Windows Systems
Windows also sets up the loopback interface automatically. You can use the following commands to interact with it:

# Display loopback interface information
ipconfig /all

# Enable the loopback interface (it is usually enabled by default)
netsh interface set interface "Loopback" enabled

In certain cases, you may need to configure additional loopback addresses for specialized applications. This is done by adding alias addresses to the loopback interface.

Adding Alias Addresses

# Add an alias IP address to the loopback interface (Linux)
sudo ifconfig lo:1 127.0.0.2 netmask 255.0.0.0 up

# Use the netsh command to add an additional IP address (Windows)
netsh interface ipv4 add address "Loopback" 127.0.0.2 255.0.0.0

Common Commands and Tools
Several commands and tools are commonly used to manage and troubleshoot the loopback interface:
- ping: Used to test connectivity to the loopback address:
  ping 127.0.0.1
- traceroute (Linux)/tracert (Windows): Traces the path to the loopback address (though typically it is a single hop):
  traceroute 127.0.0.1
  tracert 127.0.0.1
- netstat: Displays network connections, including those using the loopback interface:
  netstat -an | grep 127.0.0.1

Importance in Network Security
The loopback interface also plays a role in network security. Services bound to the loopback address are not exposed to external networks, reducing the attack surface. This practice is common in environments where certain services need to be accessed only internally, such as databases or configuration management tools.

Examples of Loopback Interface in Action
- Local Web Development: Web developers often use 127.0.0.1 to run local servers during development and testing. This allows them to develop and test their applications without affecting the live environment.
- Database Access: Databases like MySQL or PostgreSQL can be bound to the loopback interface, ensuring they are only accessible from the local machine.
- Internal APIs: Microservices architectures might use the loopback interface to allow services on the same host to communicate securely and efficiently.

Troubleshooting with Loopback Interface
When troubleshooting network issues, the loopback interface can be a valuable tool. It helps in isolating problems to determine if they are related to the network stack or other components.
- Testing Connectivity: By pinging the loopback address, you can quickly verify if the network stack is operational.
- Verifying Application Binding: Ensuring that applications are correctly binding to the loopback address can help in diagnosing configuration issues.
- Performance Testing: The loopback interface can be used to measure the performance of network applications in a controlled environment.

Frequently Asked Questions Related to Loopback Interface
What is a Loopback Interface?
A loopback interface is a virtual network interface used primarily for testing and managing network communications within a single device. It allows the device to send and receive data packets to itself, facilitating internal communication and diagnostics.

How is the Loopback Interface used in testing and development?
The loopback interface is used in testing and development to ensure that network applications and configurations are functioning correctly. Developers can direct traffic to the loopback address (127.0.0.1) to test their applications locally without affecting the external network. What are the key features of the Loopback Interface? The loopback interface is virtual, allows self-communication, is configured by default with IP addresses 127.0.0.1 (IPv4) and ::1 (IPv6), and is used for internal network testing. It often has high priority within the network stack to ensure reliable performance. How do you configure the Loopback Interface on Linux systems? On Linux systems, the loopback interface is usually configured during the boot process. You can use commands like ifconfig lo to display its information, sudo ifconfig lo up to bring it up, and sudo ifconfig lo 127.0.0.1 to assign an IP address. What are the common uses of the Loopback Interface? Common uses of the loopback interface include local application testing, service isolation, network diagnostics, and implementing security mechanisms. It allows for secure and efficient internal communications and troubleshooting within the same host.
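To make the service-isolation idea above concrete, here is a minimal sketch (plain JDK, no external libraries; the port number 9090 is an arbitrary choice) of a TCP echo service bound to the loopback address, so it is reachable only from the local machine:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class LoopbackEchoServer {
    public static void main(String[] args) throws IOException {
        InetAddress loopback = InetAddress.getLoopbackAddress(); // 127.0.0.1
        try (ServerSocket server = new ServerSocket(9090, 50, loopback)) {
            System.out.println("Listening on " + server.getLocalSocketAddress());
            try (Socket client = server.accept();
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()));
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                String line;
                while ((line = in.readLine()) != null) {
                    out.println(line); // echo back; external hosts cannot connect
                }
            }
        }
    }
}

Running it and then connecting with telnet 127.0.0.1 9090 from the same host works, while a connection attempt from another machine is refused, because the socket was never bound to an externally visible address.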
<urn:uuid:adbfc32e-7c74-4189-8380-c67ba3cda983>
CC-MAIN-2024-38
https://www.ituonline.com/tech-definitions/what-is-loopback-interface/
2024-09-07T14:51:35Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650883.10/warc/CC-MAIN-20240907131200-20240907161200-00194.warc.gz
en
0.854384
1,687
3.859375
4
Sweden’s public health authority, Folkhälsomyndigheten, has released updated guidelines stressing that children under two years old should avoid screen time entirely. This recommendation, announced at the start of the new school year, is part of a broader initiative to address the growing concerns over the impact of excessive digital media on young people. The guidelines also suggest that teenagers should limit their screen time to no more than three hours per day. This move aims to reduce the adverse effects of prolonged screen use, such as diminished physical activity and exposure to potentially harmful content. According to Helena Frielingsdorf, a doctor and investigator at Folkhälsomyndigheten, screen time can interfere with crucial activities such as sleep, exercise, and personal interactions. The authority recommends basic digital hygiene practices, including avoiding screens before bedtime and keeping smartphones and tablets out of bedrooms. These measures are designed to help children develop healthier habits by prioritizing physical activity, personal relationships, and academic work over screen time. The guidelines also outline specific limits based on age. For children aged 2 to 5 years, screen time should not exceed one hour per day. Children aged 6 to 12 years should have no more than 1 to 2 hours of screen time, while teenagers aged 13 to 18 years should restrict their use to a maximum of 2 to 3 hours daily. The recommendations emphasize the importance of parents setting a good example by managing their own screen time and creating a balanced digital environment for their children. This initiative reflects a growing global trend toward regulating digital media use among students. In response to mounting evidence linking excessive screen time to issues like poor sleep quality, depression, and dissatisfaction with one’s body, several countries and educational institutions have begun implementing similar measures. At least 13 U.S. states have enacted laws to limit smartphone use in schools, and schools across Europe, including Belgium, the UK, France, Norway, and the Netherlands, are adopting or trialing smartphone bans. Sweden’s guidelines align with these international efforts to promote healthier digital habits and improve overall well-being in children and adolescents.
<urn:uuid:504dba90-6b74-44b9-aa48-df6704356bda>
CC-MAIN-2024-38
https://cybermaterial.com/sweden-introduces-screen-bans-for-toddlers/
2024-09-11T04:22:15Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651343.80/warc/CC-MAIN-20240911020451-20240911050451-00794.warc.gz
en
0.947691
428
3.40625
3
U.S. government scientists are asking private sector and academic cryptographers for help in writing new encryption algorithms that are complex and powerful enough to withstand cracking attempts by quantum computers. Because of their immense computing power, mathematicians believe quantum computers will eventually be able to crack existing encryption algorithms. In a Federal Register notice Tuesday, the National Institute of Standards and Technology announced that it would be accepting candidate algorithms until November next year. “With the public’s participation,” the agency’s Cryptographic Technology Group says in a blog post, “NIST intends to spend the next few years gathering, testing and ultimately recommending new algorithms that would be less susceptible” to cracking by quantum computers. NIST publishes the minimum standards for cryptographic technologies used by the U.S. government in a series of documents called the Federal Information Processing Standards (FIPS). These include recommended algorithms for various kinds of encryption used to secure data, communications and identity. Despite a controversy when documents in the Edward Snowden leaks revealed that the NSA had tried to insert vulnerabilities in cryptography standards, NIST-approved algorithms are still considered the gold standard for cryptography and are widely used outside of government. The current appeal is the second step of finding quantum-proofed algorithms. In August, NIST issued a draft document laying out the procedure for submitting and evaluating proposed algorithms and asked for comments. Now, after revisions, that process is in train. After the submission period closes Nov. 30 next year, NIST will review the proposals, and anyone whose submission qualifies will be invited to present their algorithms at a workshop in early 2018. The evaluation phase which follows will take another three to five years, the blog post says. Although still theoretical, quantum computers will be orders of magnitude faster and more powerful than current supercomputers. That’s bad news for encryption — a process which scrambles data according to a massively complex mathematical code. In theory, that can be broken: Computers can crack the code by “guessing” it over and over — a form of cracking known as brute force. The current NIST-approved algorithms would take hundreds of years to brute force with today’s computers — but are expected to be much more vulnerable to the advanced power of quantum machines.
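To get a feel for those numbers, here is a rough, illustrative calculation. The guess rate is an arbitrary assumption, and the quadratic Grover speedup sketched below applies to symmetric-key search; public-key schemes face the far more drastic Shor's algorithm, which is the main driver of NIST's effort:

public class BruteForceEstimate {
    public static void main(String[] args) {
        // Illustrative assumptions, not a real-world benchmark.
        double guessesPerSecond = 1e12;          // hypothetical machine: 10^12 guesses/s
        double secondsPerYear = 3.156e7;

        // Classical brute force must, on average, try a large fraction of the keyspace.
        double classicalOps = Math.pow(2, 128);  // exhaust a 128-bit keyspace
        System.out.printf("Classical brute force: ~%.1e years%n",
                classicalOps / guessesPerSecond / secondsPerYear);

        // Grover's algorithm needs only ~sqrt(N) evaluations: 2^64 for 128 bits.
        double groverOps = Math.pow(2, 64);
        System.out.printf("Grover-style quantum search: ~%.1e years%n",
                groverOps / guessesPerSecond / secondsPerYear);
    }
}

With these made-up figures, exhausting a 128-bit keyspace classically takes on the order of 10^19 years, while a Grover-style search collapses to under a year at the same (very optimistic) rate, which is why both key sizes and the algorithms themselves are being rethought.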
<urn:uuid:21954a60-2d30-45e9-b8d3-01ec2e20188c>
CC-MAIN-2024-38
https://develop.cyberscoop.com/nist-encryption-quantum-computing/
2024-09-11T03:15:50Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651343.80/warc/CC-MAIN-20240911020451-20240911050451-00794.warc.gz
en
0.94123
468
3.15625
3
Nearly half (46%) of websites contain a ‘high security’ vulnerability such as XSS or SQL Injection, new research has revealed. The study was performed on over 1.9 million files across 15,000 websites belonging to 5,500 companies. Eighty-seven percent of the websites were affected by at least a ‘medium security’ vulnerability. Many of the scans also found that the main superbugs of 2014 had not been patched, especially POODLE. The research, which saw security vendor Acunetix collect data over a period of one year ending March 2015, shows that the high-profile data breaches reported in the media are not confined to an unlucky few – most companies are leaving themselves vulnerable to attacks. The company defined ‘high security vulnerability’ as something an attacker can easily exploit to compromise the integrity and availability of the target application, gain access to backend systems and databases, deface the target site, and trick users into phishing attacks. Web apps that have a high security vulnerability would fail to comply with the financial industry’s PCI Data Security Standards. Hackers continue to concentrate their efforts on web-based applications since they often have direct access to back-end data such as customer databases. The nature of cyber-attacks is also diversifying as criminals target not only financial data but personal data for use in identity theft and confidential intelligence to carry out cyber espionage. When it comes to network vulnerabilities, administrators are performing better; however, the stats are still not reassuring. Ten percent of the servers scanned were found to be vulnerable to high security risks, and 50% had a medium security vulnerability. Keeping in mind most of these servers are perimeter servers, having a network vulnerability on these internet-facing servers could spell disaster, as this could easily lead to server compromise and access to other servers on the network. “These are worrying stats, showing businesses are failing in some basic web security areas,” said Nick Galea, CEO at Acunetix. “It’s just like leaving your wallet or unlocked phone lying around in a public place. It’s more a question of how long it takes, rather than if at all, before you are compromised.”
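To illustrate what one of those 'high security' flaws looks like at the code level, here is a minimal, hypothetical example of the SQL injection class the study counts. The table and column names are invented, and a real application should also hash passwords rather than compare them in plain text:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class LoginDao {
    // VULNERABLE: user input is concatenated straight into the SQL string,
    // so input such as  ' OR '1'='1  bypasses the password check entirely.
    boolean unsafeLogin(Connection db, String user, String pass) throws SQLException {
        String sql = "SELECT 1 FROM users WHERE name = '" + user
                   + "' AND password = '" + pass + "'";
        try (Statement st = db.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            return rs.next();
        }
    }

    // SAFER: a parameterized query keeps data and SQL structure separate,
    // so attacker-supplied quotes are treated as literal characters.
    boolean safeLogin(Connection db, String user, String pass) throws SQLException {
        String sql = "SELECT 1 FROM users WHERE name = ? AND password = ?";
        try (PreparedStatement ps = db.prepareStatement(sql)) {
            ps.setString(1, user);
            ps.setString(2, pass);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next();
            }
        }
    }
}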
<urn:uuid:f209a171-c8af-4db8-9b97-7bb964c1e279>
CC-MAIN-2024-38
https://www.information-age.com/almost-all-websites-have-serious-security-vulnerabilities-study-shows-32330/
2024-09-13T13:56:51Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651523.40/warc/CC-MAIN-20240913133933-20240913163933-00594.warc.gz
en
0.965151
473
2.671875
3
Data safety and security has never been more critical than now. Businesses across the globe invest in firewalls, antivirus, and antimalware tools to keep their own data — and their clients’ data — safe. There are, however, other types of threats to your technology, so (of course) there are other essential data security best practices and devices to use against commonly forgotten threats like electrical surges or power outages. Using an Uninterruptible Power Supply (UPS) can save you from very real technological dangers. Using the best UPS for computers minimizes the risk of losing data resulting from electrical issues — a leading healthcare IT challenge in 2019. To understand the real value of using this handy device, check out this list of the five real dangers of not using a UPS. What Is UPS and What Is the Best UPS for My Computer? UPS stands for Uninterruptible Power Supply. It is a device that goes between the power outlet and the power supply unit in a PC or server. Its central role is to provide a battery backup in the event of power failure or when electrical power goes below an acceptable voltage level. There are two types of UPS devices. A standby UPS uses power from the power outlet to charge the battery. It automatically switches to the battery when it detects a power failure, allowing the device to keep working. The other type — and what many experts consider the best UPS for computer devices — is a line-interactive UPS. A line-interactive UPS corrects the rise and fall of the voltage and smooths out waveforms to protect devices in various scenarios. It can protect PC and laptop computers equally well and save you a lot of cash over time. Let’s look further into how a UPS can protect your computer. 1. A Power Surge Can Burn Your Power Supply Unit One of the most vital parts of the PC case is the power supply unit (PSU). It is the power distribution hub in a computer, powering all essential components: the motherboard, processors, graphic cards, the hard drive, etc. Top-notch PSU components come with several failsafe mechanisms to protect other components from power-related problems. Unfortunately, sometimes the power surge can be too big for a PSU to cushion its blow. The motherboard is the first piece to suffer. Electricity can fry its circuits and render it useless. Most warranties do not apply in these events, and your only option will be to replace it. 2. Your Hard Disk Drive Can Suffer A hard disk drive is where all data is stored. This PC component is susceptible to fluctuations in power. Most HDD manufacturers have improved hard disk drives and made them more resilient to power surges or sudden power failures. HDDs can still fail if an unacceptable level of power goes through their circuits. HDD failure is the most common power-related problem. In a worst-case scenario, all data on the drive is lost. Permanently. There will be no way you can retrieve it. This is a significant reason to address your backup policies and acknowledge the importance of performing regular backups. In a better outcome, the drive survives the surge with a few bad sectors. The data stored in these sectors is lost, but everything else can still be restored. A line-interactive UPS can easily protect delicate components like motherboards, PSUs, and HDDs by suppressing the surges and regulating voltage. In the healthcare vertical, the best UPS for computers is a valuable asset.
It facilitates the meaningful use of electronic health records while keeping those records safe from any electricity-related problems. 3. Unsaved Work is Forever Lost Imagine: After hours of hard work, you are almost done with an important project on your PC. You go for a lunch break, during which time a random power failure occurs. You come back to your workstation to find all those hours of work are lost. Forever. This is a pretty common scenario, and many people have experienced the incredible frustration of it. Data security best practices do not revolve exclusively around fancy cybersecurity protections and solutions. A UPS can be a vital asset in your data security strategy and help prevent lost data and unsaved work. Both standby and line-interactive UPS devices can minimize the risk of data lost because of unsaved work. After a power failure, you or your employees will have more than enough time to save your projects or documents and secure them on portable devices. 4. You Can Lose All Your Valuable Data Data has become increasingly important across verticals, including the healthcare industry. Medical health records and information technology go hand in hand. This synergy is even more critical today as total patient care becomes a trend in the industry. Staying on top of data security best practices and ensuring data availability across devices has become imperative. Healthcare IT experts have another challenge to overcome: Data loss. A UPS is just one of many solutions to help experts build a reliable network with several safety measures. With big networks, such as those found in hospitals, a UPS can support servers, computers at MD offices, and front desk clients. When the entire system is rendered safe from power failure and power spikes, the risk of data loss is minimal. 5. It Can Be A Devastating Blow to Your Budget Finally, consider the dangers of not using a UPS when you have a huge multi-computer system essential to your business. When we talk about businesses, there is not just a single motherboard, a solitary HDD, or one lonely PSU in question. Rather, we are talking about hundreds of devices on a network. One single power surge can wreak absolute havoc, leaving you with no option but to purchase new parts and service all your computers. Without a UPS, all the data access security best practices in the world will not help much when the power goes out with no warning. For More Information Perhaps the biggest advantage of all the benefits UPS systems bring to the table is data loss prevention. Everything else is usually pretty manageable, but if you lose sensitive data — data vital to your business — your options are incredibly limited. Make the investment before the problem occurs. For more questions about which UPS is best for your business computers or how you can engage in the best data access security practices, contact Scale today! Call us at 501-213-1732, or fill out our online form. Let Scale show you why we are the managed IT service provider for you.
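One practical way to turn a UPS's battery window into real data protection is to automate a graceful shutdown. The sketch below assumes a Network UPS Tools (NUT) setup with a UPS named "myups"; the variable names ups.status and battery.charge are standard NUT fields, but the names, thresholds, and shutdown action should be adapted to your own installation:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.HashMap;
import java.util.Map;

public class UpsWatchdog {
    public static void main(String[] args) throws Exception {
        // Query the NUT daemon; output lines look like "battery.charge: 42"
        Map<String, String> vars = new HashMap<>();
        Process p = new ProcessBuilder("upsc", "myups@localhost").start();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                String[] kv = line.split(":", 2);
                if (kv.length == 2) vars.put(kv[0].trim(), kv[1].trim());
            }
        }
        String status = vars.getOrDefault("ups.status", "");
        int charge = Integer.parseInt(vars.getOrDefault("battery.charge", "100"));
        // "OB" means on battery; act before the battery window closes
        if (status.contains("OB") && charge < 20) {
            System.out.println("Low battery: flush work and shut down now");
            // e.g. trigger backups, then:
            // new ProcessBuilder("shutdown", "-h", "now").start();
        }
    }
}

Run on a schedule (or use NUT's own upsmon daemon, which provides this logic out of the box), this turns the battery runtime into guaranteed time to save data rather than a race against the clock.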
<urn:uuid:edaa5b1d-7ceb-4bd6-8c2b-4fbcb2d1f5ed>
CC-MAIN-2024-38
https://www.letscale.com/5-real-dangers-of-not-using-a-ups/
2024-09-13T14:31:19Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651523.40/warc/CC-MAIN-20240913133933-20240913163933-00594.warc.gz
en
0.935787
1,317
3.15625
3
Archiving systems have two major requirements. First, the information is stored on media that lasts for over 50 years. Second, once the information is stored, it cannot be changed. Paper and microfiche were replaced by optical discs in the 1980s. Optical discs are certified as archival media because they last over 50 years. Write Once Read Many (WORM) media can’t be modified: once written, the data cannot be changed. We now have more alternatives for archiving our information. There are four types of systems that use Write-Once Storage: Optical Jukebox libraries: store the information on discs that can be easily retrieved. Archive Appliance: Transfers data to a stack of optical discs and labels each one so they can be placed in a cabinet or drawer. This system requires manual disc handling. Cloud Archiving: Uses similar archiving software to optical jukebox libraries, except the information is sent to a remote server. Because the data is redundantly stored, it is considered to be archival. Malware Protection Using Write-Once Storage: This malware protection system protects your computer system from viruses and other attacks. It uses modified hard drives that provide write-once storage. It is not considered archiving but does share the write-once characteristic of optical WORM storage. ArcPoint-Blu Storage Appliance This archiving system provides policy-based recording to Write Once Read Many (WORM) Blu-ray media. Your data is kept safe for over 50 years. - Policy-Based File Archive: Storage Manager – Archive Edition offers policy-based file archiving. This is fully automated archiving directly to an optical storage device such as an Olympus Series Blu-Ray Disc Archiving System – no user action is needed. - WORM File System with CIFS Access: Most applications do not support archive storage systems (like tape and optical) natively and require a standard storage access interface like CIFS. The VFS (Virtual File System) is an integral module of Storage Manager – Archive Edition and implements a native Windows file system. It provides standard CIFS file system access to archive storage systems. This means applications can make use of the benefits of the Olympus Archive Systems without adaptations. - User-Defined Archiving by Web Client: Besides the automatic process of file archiving, Storage Manager – Archive Edition offers user-defined archiving supported by a Web Client. The Web Client is aimed at environments in which particular users need to be able to perform archiving operations by themselves. Usage of the Web Client is protected by a user authentication mechanism. - Archiving Methods: Storage Manager – Archive Edition provides multiple archiving methods. This comprises copying of files (Copy Mode), moving of files (Data Mover Mode), and stubbing of files (HSM Mode). - Use of Standards: Storage Manager – Archive Edition strictly adheres to standards for storing data on secondary storage systems. This ensures independence from a specific hardware vendor and protects customer investments. Furthermore, access to all archived data is provided by standard operating system methods. MTF (Microsoft Tape Format), LTFS (Linear Tape File System), and UDF (Universal Disk Format) are supported as standard formats for tape and optical. This software controls all aspects of your archiving system. It will adapt to your requirements and fit well into an existing networked environment.
The software consists of the server component, which controls the actual recording process, as well as a set of applications for the definition of the disc contents. The Olympus server controls the hardware and administers the job list. Each job represents one production series and contains important information like the number of copies, label print file, and additional parameters. The job attributes and the job list can also be modified by network clients under control of the integrated user management. A job is easily created and hundreds of copies produced by just dragging & dropping the Disc Image to the job list. The user interface provides all important hardware and job status information even over the network and internet. Contact us for more details about this system. Call 800-431-1658 in the USA, and 914-944-3425 everywhere else. Or, just use our contact form. More Archiving Systems The archiving appliances take advantage of the same robotic mechanisms used in automatic duplicators. With these systems, you can archive general data to a stack of Blu-ray discs. This is a batch process where a set of blank discs are placed on a spindle, written one at a time, and placed on an output spindle. The discs are stored off-line and can be accessed by reading them back in using the same robotic mechanism. The jukebox or library systems are the only devices that provide easy on-line retrieval of the discs. Archiving is now the law of the land. Sarbanes-Oxley and other regulations require archiving of your emails and other critical files. Archiving means that the information is placed on permanent media. Tape is not an archival medium; only optical discs are considered archival media. Select the system that’s right for your application. The Archiving Appliance makes it easy to archive data. If you also need fast retrieval of archived data, select from our optical libraries and jukeboxes. Take a look at the following comparison. On-line Data Archiving and Retrieval: Optical jukeboxes and libraries provide online data archiving and retrieval. This is a more sophisticated solution that provides both archiving and retrieval functions. Appliance Data Archiving System: This batch-type archiving system uses the small network-attached appliance described above. It provides network-wide, filter-based archiving to Blu-ray media. This automated Blu-ray writing system utilizes PoINT Archiving software. This powerful Archiving Compliance Disaster Recovery system is a complete Archiving, Management, and Restoration Solution.
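A conceptual complement to write-once media, and not a feature claim about the PoINT or Olympus software, is a fixity check: record a cryptographic hash when a file is archived and re-verify it on every retrieval, so a reader can prove the archived copy is byte-for-byte unchanged. A minimal sketch with hypothetical file paths:

import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;

public class FixityCheck {
    // SHA-256 hex digest of a file's contents
    static String sha256(Path file) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest(Files.readAllBytes(file)))
            hex.append(String.format("%02x", b));
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        Path original = Path.of("invoice-2024.pdf");            // hypothetical source file
        Path archived = Path.of("/mnt/worm/invoice-2024.pdf");  // hypothetical WORM copy
        String manifestHash = sha256(original);  // stored in a manifest at archive time
        boolean intact = manifestHash.equals(sha256(archived));
        System.out.println(intact ? "Archived copy verified" : "Integrity failure");
    }
}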
<urn:uuid:8183b139-f262-48cf-ab48-bd3bdce08013>
CC-MAIN-2024-38
https://kintronics.com/solutions/archiving-systems/
2024-09-14T22:01:37Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651580.74/warc/CC-MAIN-20240914193334-20240914223334-00494.warc.gz
en
0.881281
1,214
2.875
3
Today is Safer Internet Day. While Learning Tree International is neither an official sponsor nor a national partner, we support the effort. As part of that support, here are some ways you can help foster their 2020 mission of "Together for a better internet". While Safer Internet Day is focused on younger net users, most readers of this blog have younger family or friends who can benefit from these suggestions. We can also model being a good netizen by doing these ourselves. - First and foremost, social media safety is critical. For many younger users, this is their primary use of the 'net. And let's face it, many adults use these sites a lot, too. - First, watch what you share. Sharing images of your vacation or business trip while you are away can let bad actors know that your home may be unattended at least part of the day. Burglars and porch pirates thrive on this information. - Second, don't play the quizzes on popular sites like Facebook. The answers can be used by attackers to help them guess passwords. Questions such as "Do you remember the name of your third-grade teacher?", "What was your first phone number?", and "What was your first dog's name?" are all found as password reset questions. - Third, check, and if necessary, update your privacy settings. If you don't know how to find them, your favorite search engine can help point you in the right direction. The media companies are frequently changing what can be changed and how to do it, so check these at least a couple of times a year. I choose New Year's Eve and, since I live in the US, the time around our Independence Day. - Finally, don't insult, bully, or attack others. Your comments may boomerang back to you. My mom frequently said, "never put anything in writing you don't want the whole world to read." That was long before social media or even personal computers, but the advice is still important today. Even private messages sometimes get leaked: accidentally or intentionally. - Keep your browsers, apps, systems, and other software current. Attackers take advantage of software bugs. By keeping your software current, you significantly reduce the opportunities for compromise. Even the software on home routers has to be updated to protect from attacks. - Be sure you use anti-virus software and keep it current. Anti-virus software isn't perfect, but it is essential. Most such software protects from far more than viruses. For instance, some tools scan downloaded documents to ensure that there is no malware hidden in the files. - Don't chat with or "friend" strangers. While we may connect with people we do not know on business networking sites, connecting with those we don't know on Facebook and other personal sites may lead to unintentionally exposing personal information to ill-intended individuals. - Set a good example. This is true for people of all ages. No matter who you are, you are an example for someone. It is best to be a good one. The biggest thing parents, teachers, and adult friends can do for younger users is to talk about these points. We cannot expect those younger users to know these safety rules. Safer Internet Day is a great time to start that conversation. To your safe computing,
<urn:uuid:f2b7569d-9b27-4725-9218-a551684ea0b9>
CC-MAIN-2024-38
https://www.learningtree.com/blog/happy-safer-internet-day/
2024-09-14T21:53:41Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651580.74/warc/CC-MAIN-20240914193334-20240914223334-00494.warc.gz
en
0.959437
676
2.6875
3
Java Garbage Collection: Basics What is Garbage Collection in Java? In Java applications, objects are stored in a memory area called the “heap”, which is dedicated to dynamically allocated objects. If left unattended, these objects can accumulate and deplete the available memory in the heap – eventually leading to an OutOfMemoryError. The Java Virtual Machine (JVM) employs an automatic Garbage Collection (GC) mechanism. This mechanism handles the release of memory occupied by unused objects and reallocates that memory space for new objects. Get in-depth information on Java Garbage Collection: automated memory management, heap memory, mark-and-sweep algorithm, JVM generations, garbage collectors and more. The Garbage Collection (GC) feature in the Java Virtual Machine (JVM) is truly remarkable. It automatically identifies and cleans up unused Java objects without burdening developers with manual allocation and deallocation of memory. As an SRE or Java administrator, you need a strong understanding of the Java Garbage Collection mechanism to ensure optimal performance and stability of your Java applications. If you are looking for a basic general overview on the principles of “What is Garbage Collection in Programming?” – you may like to start here: What is garbage collection (GC) in programming? (techtarget.com). In this educational post, we will explain what Java Garbage Collection is, why it is important, and how to make it easy for Java SREs and administrators to deal with it. In related posts, we will look at how to get detailed visibility into your Java memory and GC performance. Why is Java Garbage Collection Important? Java Garbage Collection is essential for several reasons: - Automation and simplification: Automatic garbage collection in Java takes the burden off developers. In contrast, languages like C or C++ require explicit memory allocation and deallocation, which can be error-prone and lead to crashes if not handled properly. - Increased Developer Productivity: With automatic Java memory management, developers can focus on writing application logic, leading to faster development cycles. - Preventing OutOfMemoryError: By automatically tracking and removing unused objects, garbage collection prevents memory-related errors like OutOfMemoryError. - Eliminates dangling pointer bugs: These are bugs that occur when a piece of memory is freed while there are still pointers to it, and one of those pointers is dereferenced. By then the memory may have been reassigned to another use with unpredictable results. - Eliminates double-free bugs: These happen when the program tries to free a region of memory that has already been freed and perhaps already been allocated again. Memory Heap Generations in Java Garbage Collection Understanding the memory heap generations is crucial for Java garbage collection efficiency. The generations are: - Eden: Where objects are created; GC removes unused objects or moves them to Survivor space if still referenced. - Survivor: Comprises survivor zero and survivor one spaces in the young generation. - Tenured (old generation): Holds long-lived objects; GC checks this space less frequently due to its larger size. Garbage collection occurs more often in Eden, while Tenured is checked less, optimizing the process. Minor garbage collection takes place in the young generation, while major garbage collection occurs in the old generation and takes longer but happens less frequently. The permanent generation (PermGen) was removed in Java 8 and replaced by Metaspace.
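To make "unreachable" concrete, here is a minimal, self-contained sketch. Note that System.gc() is only a hint, so the exact behavior can vary by JVM:

import java.lang.ref.WeakReference;

public class UnreachableDemo {
    public static void main(String[] args) {
        Object obj = new Object();
        // A weak reference observes the object without keeping it alive
        WeakReference<Object> ref = new WeakReference<>(obj);
        obj = null;    // no strong references remain: the object is unreachable
        System.gc();   // a request to collect, not a guarantee
        System.out.println(ref.get() == null
                ? "Object was collected" : "Object still reachable");
    }
}

On a typical HotSpot JVM this prints "Object was collected": once the only strong reference is cleared, the object is garbage even though the program is still running, and the collector is free to reclaim its memory.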
How does Garbage Collection work in Java? Java Garbage Collection explained. In Java, objects are created dynamically using the “new” keyword. Once an object is created, it occupies memory space on the heap. As a program executes, objects that are no longer referenced or accessible need to be removed to free up memory and prevent memory leaks. Thus, the Java heap memory contains a collection of live and dead objects – live objects are still in use and dead objects are no longer needed. The Garbage Collection in Java operation is based on the premise that most objects used in the Java code are short-lived and can be reclaimed shortly after their creation. As a result of garbage collection in Java, unreferenced objects are automatically removed from the heap memory, which makes Java memory-efficient. In general, all Java garbage collectors have two main objectives: - Identify all objects that are still in use or “alive.” - Remove all other objects that are considered dead or unused (i.e., unreachable). The Java garbage collector performs this task by periodically identifying and reclaiming memory that is no longer in use. The most commonly used Java Garbage Collection algorithm is called the mark-and-sweep algorithm, which follows these steps: - Marking phase: The garbage collector starts with a root set of objects that are known to be in use (root references such as local variables, static variables, thread stacks, and CPU registers). It recursively traverses from these roots, marking each object it encounters as “live” or reachable. - Sweeping phase: The garbage collector scans the entire heap and reclaims the memory occupied by objects that were not marked during the marking phase. These unmarked objects are considered unreachable garbage, and the memory they occupied is freed up for future allocations. Two types of garbage collection activity usually happen in Java: - A minor or incremental Java garbage collection is said to have occurred when unreachable objects in the young generation heap memory are removed. - A major or full Java garbage collection is said to have occurred when the objects that survived the minor garbage collection and were then copied into the old generation or permanent generation heap memory are removed. Compared to the young generation, garbage collection happens less frequently in the old generation. To free up memory, the JVM must stop the application from running for at least a short time and execute the GC process. This process is called “stop-the-world.” This means all the threads, except for the GC threads, will stop executing until the GC threads finish and objects are freed up by the garbage collector. Modern Java GC implementations try to minimize blocking “stop-the-world” stalls by doing as much work as possible in the background (i.e., using a separate thread), for example marking unreachable garbage instances while the application process continues to run. Java Garbage Collection – Impact on Performance Garbage collection in the JVM consumes CPU resources when deciding which memory to free. Stopping the program or consuming high levels of CPU resources will have a negative impact on the end-user experience, with users complaining that the application is slow.
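These pauses are straightforward to observe on a running JVM by enabling GC logging at startup; the application jar name below is a placeholder:

# JDK 9+ unified logging: one line per GC event, including pause durations
java -Xlog:gc*:file=gc.log:time,uptime,level,tags -jar myapp.jar

# Pre-JDK 9 equivalent flags
java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -jar myapp.jar

The resulting log shows each minor and full collection, how much of each generation was reclaimed, and how long the stop-the-world portion lasted.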
Various Java garbage collectors have been developed over time to reduce the application pauses that occur during garbage collection and, at the same time, to reduce the performance hit associated with garbage collection. Modern JVMs have multiple collectors (alternative garbage collection algorithms) for performing the GC activity: - Serial Garbage Collector: Single-threaded GC execution. Enable with -XX:+UseSerialGC. - Parallel Garbage Collector: Multiple threads executing GC in parallel. Enable with -XX:+UseParallelGC. - Concurrent Mark Sweep (CMS): Performs most of its collection work concurrently with the application threads, reducing the frequency of stop-the-world pauses. Enable with -XX:+UseConcMarkSweepGC. However, note that CMS was deprecated in JDK 9 and removed in JDK 14. - G1 Garbage Collector: Designed for big workloads, concurrent, minimizes pauses, adapts to machine conditions; its string de-duplication feature reduces the memory overhead of duplicate strings. You can explicitly enable it using the JVM option -XX:+UseG1GC. - Epsilon Garbage Collector: Do-nothing GC for ultra-latency-sensitive or garbage-free applications. Use the following flags: -XX:+UnlockExperimentalVMOptions and -XX:+UseEpsilonGC - Shenandoah Garbage Collector: Concurrent GC with compaction and memory release while the application is running. Enable with -XX:+UseShenandoahGC; the experimental generational mode additionally requires -XX:+UnlockExperimentalVMOptions -XX:ShenandoahGCMode=generational - ZGC (Z Garbage Collector): Experimental initially, designed for large heaps, concurrent, low pause times (<10ms), supports small to massive heap sizes. ZGC can be enabled using the -XX:+UseZGC JVM option. Many JVMs, such as Oracle HotSpot, JRockit, OpenJDK, IBM J9, and SAP JVM, use stop-the-world GC techniques – however, recent collectors such as G1GC and ZGC are changing this situation. Modern JVMs like Azul Platform Prime (formerly Zing) use the Continuously Concurrent Compacting Collector (C4), which eliminates the stop-the-world GC pauses that limit scalability in the case of conventional JVMs. Why is Monitoring Java Garbage Collection Important? Garbage collection can impact the performance of Java applications in unpredictable ways. When there is frequent GC activity, it adds a lot of CPU load and slows down application processing. In turn, this leads to slow execution of business transactions and ultimately affects the user experience of end-users accessing the Java application. Excessive garbage collection activity can occur due to a memory leak in the Java application. Insufficient memory allocation to the JVM can also result in increased garbage collection activity. And when excessive garbage collection activity happens, it often manifests as increased CPU usage of the JVM. For optimal Java application performance, it is critical to monitor a JVM’s GC activity. For good performance, full GCs should be few and far between. The time spent on GC should be low – typically less than 5% – and the percentage of CPU spent on garbage collection should also be very low (this allows application threads to use almost all the available CPU resources). What are the Key Java Garbage Collection Metrics to Monitor?
To know if garbage collection is creating Java performance problems, you need to track all aspects of the garbage collection activity in the JVM: - When garbage collection happened - How often garbage collection is happening in the JVM - How much memory is being collected each time - How long garbage collection runs in the JVM - Percentage of time spent by the JVM on garbage collection - What type of garbage collection happened – minor or full GC? - JVM heap and non-heap memory usage - CPU utilization of the JVM This allows you to identify when Java garbage collection is taking too long and impacting performance, which will help you to determine the optimal settings for each application based on historical patterns and trends. Troubleshooting Java Garbage Collection Issues When Java GC activity is excessive, one way to troubleshoot whether it is impacting application performance is to take heap dumps of the JVM’s memory and analyze the top memory-consuming objects. Any unusually large objects are an indicator of memory leaks in the application code. On the other hand, if no object is occupying an unusually large amount of memory and if the percentage of memory used by any of the JVM’s memory pools is close to 100%, this is an indicator that the JVM’s memory configuration may be insufficient. In this case, you may need to increase the corresponding JVM memory pool for improved application performance. Now that we have a fair understanding of Java garbage collection, let’s summarize by answering some of the key questions SREs and Java admins may have: - Is garbage collection in Java good or bad? Definitely good. But, as the adage goes, too much of anything is a bad thing. So, you need to make sure Java heap memory is properly configured and managed so that the GC activity is optimized. - When is Java GC needed? It is needed when there are unreachable objects to be cleared out. Since it is not a manual activity, the JVM will automatically take care of this for you. From all the information above, you would have learned why GC is needed and when. - How to tune Java garbage collection? There are two common ways to do this: - Keep the number of objects passed to the old generation area to a minimum - Configure the major (or full) GC time to be low - Some critical JVM parameters to configure for right-sizing the JVM’s memory are -Xms, -Xmx, and -XX:NewRatio (the ratio between the old and new generation sizes) - How to know when Java GC is not operating as expected? JVM monitoring is key. Make sure to track vital JVM metrics and be alerted when GC activity is deviating from the norm. Monitoring Java application performance with eG Enterprise With eG Enterprise, you can optimize JVM and Java application performance: - Set up monitoring quickly with prebuilt Java dashboards. - Visualize metrics like garbage collection CPU time, CPU utilization, memory heap usage, and more. - Identify and triage issues, detect memory leaks and performance bottlenecks. - Fine-tune memory heap and garbage collector configurations for optimal performance. - Leverage prebuilt alerts for high CPU usage, memory, transaction errors, and Apdex score. - Notify teams via Slack and PagerDuty for prompt issue resolution.
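Outside of a dedicated monitoring product, the JDK's own tools expose many of the same metrics discussed above; for example (PIDs and file names are placeholders):

# Print GC utilization and collection counts/times every second
jstat -gcutil <pid> 1000

# Capture a heap dump of live objects for memory-leak analysis
jmap -dump:live,format=b,file=heap.hprof <pid>

# Example of the right-sizing flags mentioned above
java -Xms1g -Xmx1g -XX:NewRatio=2 -jar myapp.jar

The jstat output shows per-generation occupancy alongside cumulative GC counts and times, and the jmap heap dump can be opened in a heap analyzer to find the unusually large objects described in the troubleshooting section.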
<urn:uuid:8f815daf-9805-43e8-9ad9-18855e48e2ca>
CC-MAIN-2024-38
https://www.eginnovations.com/blog/what-is-garbage-collection-java/
2024-09-16T03:50:20Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651668.29/warc/CC-MAIN-20240916012328-20240916042328-00394.warc.gz
en
0.903079
2,853
3.765625
4
Blockchain Identity Management: A Complete Guide Traditional identity verification methods show their age, often proving susceptible to data breaches and inefficiencies. Blockchain emerges as a beacon of hope in this scenario, heralding a new era of enhanced data security, transparency, and user-centric control for managing digital identities. This article delves deep into blockchain’s transformative potential in identity verification, highlighting its advantages and the challenges it adeptly addresses. What is Blockchain? Blockchain technology is a digital ledger of transactions stored in a decentralized way. Distributed across a network of computers, the ledger ensures that every transaction gets recorded in multiple places. The decentralized nature of blockchain technology ensures that no single entity controls the entire blockchain, and all transactions are transparent to every user. Types of Blockchains: Public vs. Private Blockchain technology can be categorized into two primary types: public and private. Public blockchains are open networks where anyone can participate and view transactions. This transparency ensures security and trust but can raise privacy concerns. In contrast, private blockchains are controlled by specific organizations or consortia and restrict access to approved members only. This restricted access offers enhanced privacy and control, making private blockchains suitable for businesses that require confidentiality and secure data management. Brief history and definition The concept of distributed ledger technology, the blockchain, was first introduced in 2008 by an anonymous entity known as Satoshi Nakamoto. Initially, it was the underlying technology for the cryptocurrency Bitcoin. The primary goal was to create a decentralized currency, independent of any central authority, that could be transferred electronically in a secure, verifiable, and immutable way. Over time, the potential applications of blockchain have expanded far beyond cryptocurrency. Today, it is the backbone for various applications, from supply chain tracking and identity management solutions to voting systems. Blockchain operates on a few core principles. Firstly, it’s decentralized, meaning no single entity or organization controls the entire chain. Instead, multiple participants (nodes) hold copies of the whole blockchain. Secondly, transactions are transparent. Every transaction is visible to anyone who has access to the system. Lastly, once data is recorded on a blockchain, it becomes immutable. This means that it cannot be altered without altering all subsequent blocks, which requires the consensus of most of the blockchain network. The Need for Improved Identity Verification Identity verification is a cornerstone for many online processes, from banking to online shopping. However, traditional methods of identity verification have serious shortcomings. They often rely on centralized databases of sensitive information, making them vulnerable to data breaches. Moreover, to prove identity, these methods often require users to share personal details repeatedly, increasing the risk of data theft or misuse. Current challenges in digital identity Digital credentials and identity systems today face multiple challenges. Centralized systems are prime targets for hackers. A single breach can expose the personal data of millions of users.
Additionally, users often need to manage multiple usernames and passwords across various platforms, leading to password fatigue and increased vulnerability. There’s also the issue of privacy. Centralized digital identity and credential systems often share user data with third parties, sometimes without the user’s explicit consent. Cost of identity theft and fraud The implications of identity theft and fraud are vast. For individuals, they can lead to financial loss, credit damage, and a long recovery process. For businesses, a breach of sensitive information can result in significant financial losses, reputational damage, and loss of customer trust. According to reports, the annual cost of identity theft and fraud runs into billions of dollars globally, affecting individuals and corporations. How Blockchain Addresses Identity Verification Blockchain offers a fresh approach to identity verification. By using digital signatures and leveraging its decentralized, transparent, and immutable nature, blockchain technology can provide a more secure and efficient way to verify identity without traditional methods’ pitfalls. Decentralized identity systems on the blockchain give users complete control over their identity data. Users can provide proof of their identity directly from a blockchain instead of relying on a central authority to hold their records and verify identity. This reduces the risk of a centralized data breach and gives users autonomy over their identities and personal data. Transparency and Trust Blockchain technology fosters trust through transparency, but the scope of this transparency varies significantly between public and private blockchains. Public blockchains allow an unparalleled level of openness, where every transaction is visible to all, promoting trust through verifiable openness. On the other hand, private blockchains offer a selective transparency that is accessible only to their participants. This feature maintains trust among authorized users and ensures that sensitive information remains protected from the public eye, aligning with privacy and corporate security requirements. Once identity data is recorded on a blockchain, it cannot be altered without consensus. This immutability of sensitive, personally identifiable information ensures that identity data remains consistent and trustworthy. It also prevents malicious actors from changing identity data for fraudulent purposes. Smart contracts automate processes on the blockchain. In identity verification, smart contracts can automatically verify a user’s identity when certain conditions are met, eliminating the need for manual verification and reducing both the time required and the potential for human error. Benefits of Blockchain Identity Verification Blockchain’s unique attributes offer a transformative approach to identity verification, addressing many of the challenges faced by traditional verification methods. Traditional identity verification systems, being centralized, are vulnerable to single points of failure. If a hacker gains access, the entire system can be compromised. Blockchain, with its decentralized nature, eliminates this single point of failure. Each transaction is cryptographically hashed and linked to the previous one. This cryptographic linkage ensures that even if one block is tampered with, it would be immediately evident, making unauthorized alterations nearly impossible.
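A toy sketch makes this tamper-evidence tangible; this is an illustration of hash chaining, not a production blockchain, and the transaction strings are invented:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class HashChainDemo {
    // SHA-256 hex digest of a string
    static String sha256(String s) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest(s.getBytes(StandardCharsets.UTF_8)))
            hex.append(String.format("%02x", b));
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        String[] transactions = {"alice->bob:5", "bob->carol:2", "carol->dave:1"};
        String prevHash = "0";                       // genesis placeholder
        for (String tx : transactions) {
            // Each block's hash covers the previous block's hash
            String blockHash = sha256(prevHash + "|" + tx);
            System.out.println(tx + " -> " + blockHash);
            prevHash = blockHash;                    // link the next block
        }
        // Changing "alice->bob:5" to "alice->bob:50" would change the first
        // hash and, with it, every hash that follows: instantly detectable.
    }
}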
Centralized identity systems often store user data in silos, giving organizations control over individual data. Blockchain shifts this control back to users. With decentralized identity solutions, individuals can choose when, how, and with whom they share their personal information. This not only enhances data security and privacy but also reduces the risk of data being mishandled or misused by third parties. Identity verification, especially in sectors like finance, can be costly. Manual verification processes, paperwork, and the infrastructure needed to support centralized databases contribute to these costs. Blockchain can automate many of these processes using smart contracts, reducing the need for intermediaries and manual interventions and leading to significant cost savings. In today’s digital landscape, individuals often have their digital identities and personal data scattered across various platforms, each with its own verification process. Blockchain can create a unified, interoperable system where one’s digital identity documents can be used across multiple platforms once verified on one platform. This not only enhances user convenience but also streamlines processes for businesses. The Mechanics Behind Blockchain Identity Verification Understanding the underlying mechanics is crucial to appreciating how blockchain enables identity verification. How cryptographic hashing works Cryptographic hashing is at the heart of blockchain security. When a transaction occurs, it’s converted into a fixed-size string of numbers and letters using a hash function. This unique hash is nearly impossible to reverse-engineer. When a new block is created, it contains the previous block’s hash, creating a blockchain, as illustrated in the sketch above. Any alteration in a block changes its hash, breaking the chain and alerting the system to potential tampering. Public and private keys in identity verification Blockchain uses a combination of public and private keys to ensure secure transactions. A public key is a user’s address on the blockchain, while a private key is secret information that allows them to sign and initiate transactions. Only individuals with the correct private key can access and share their data for identity verification, ensuring their data integrity and security. The role of consensus algorithms Consensus algorithms are protocols that consider a transaction valid based on the agreement of the majority of participants in the network. They play a crucial role in maintaining the trustworthiness of the blockchain. In identity verification, consensus algorithms ensure that once a user’s identity data is added to the blockchain, it’s accepted and recognized by the majority, ensuring data accuracy and trustworthiness. Challenges and Concerns While blockchain offers transformative potential for identity verification, it’s essential to understand the challenges and concerns associated with its adoption. One of the primary challenges facing blockchain technology is scalability. As the number of transactions on a blockchain increases, so does the time required to process and validate them. This could mean delays in identity verification, especially if the system is adopted on a large scale. Solutions like off-chain transactions and layer two protocols are being developed to address this, but it remains a concern. While blockchain offers enhanced security, the level of privacy depends on whether the blockchain is public or private.
In public blockchains, the transparency of transactions means that every action is visible to anyone on the network, which can compromise user privacy. Conversely, private blockchains restrict access to and visibility of transactions to authorized participants only, significantly mitigating privacy risks. This controlled transparency is important in environments where confidentiality is paramount, leveraging blockchain’s security benefits without exposing sensitive data to the public.

Regulatory and Legal Issues
The decentralized nature of blockchain challenges traditional regulatory frameworks. Different countries have varying stances on blockchain and its applications, leading to a fragmented regulatory landscape. For businesses looking to adopt blockchain for identity verification and online services, navigating this complex regulatory environment can take time and effort. Despite its benefits and technological advances, blockchain also faces skepticism. Many businesses hesitate to adopt a relatively new technology, especially when it challenges established processes. Additionally, the lack of a standardized framework for blockchain identity verification, and the scale of the ecosystem changes required, can deter adoption.

Blockchain Identity Verification Standards and Protocols
For blockchain-based identity verification to gain widespread acceptance, there is a need for standardized protocols and frameworks.

Decentralized Identity Foundation (DIF)
The Decentralized Identity Foundation (DIF) is an alliance of companies, financial institutions, educational institutions, and other organizations working together to develop a unified, interoperable ecosystem for decentralized identity solutions. Their work includes creating specifications, protocols, and tools to ensure that blockchain-based identity solutions are consistent, reliable, and trustworthy.

Self-sovereign identity principles
Self-sovereign identity is a concept in which individuals own and control their data without relying on centralized databases or authorities to verify their identities. The principles of self-sovereign identity emphasize user control, transparency, interoperability, and consent. Blockchain’s inherent attributes align well with these principles, making it an ideal technology for realizing self-sovereign digital identity.

Popular blockchain identity protocols
Several protocols aim to standardize blockchain identity verification. Notable ones include DIDs (Decentralized Identifiers), a new type of identifier created, owned, and controlled by the subject of the digital identity, and Verifiable Credentials, which allow individuals to share proofs about personal data without revealing the data itself. Through its unique attributes, blockchain presents a compelling and transformative alternative to the pitfalls of conventional identity verification systems. By championing security, decentralization, and user empowerment, it sets a new standard for the future of digital identity and access management. To understand how this can redefine your identity verification processes, book a call with us today and embark on a journey toward a stronger security posture.
<urn:uuid:a2a92c00-62cf-4a22-a226-856d5d7027e2>
CC-MAIN-2024-38
https://www.1kosmos.com/blockchain/blockchain-identity-management-a-complete-guide/
2024-09-17T08:36:22Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651750.27/warc/CC-MAIN-20240917072424-20240917102424-00294.warc.gz
en
0.908035
2,379
2.6875
3
Learn how to build a home lab for VMware vSphere 6.x to reproduce the training environment and run course labs.

Before virtualization, I had many computers around my house that required maintenance, upgrading, replacement, etc., as well as the power to run all of the equipment. This was very time-consuming and expensive. In 1999, I began using VMware Workstation 2.0 to create virtual machines (VMs) to study NetWare, NT 4.0, Windows 2000, etc. Since that time, I have used it in all of my studies and reduced my lab equipment to two computers: a server in the office and a laptop I use when traveling. Originally, ESX/ESXi didn’t run in a VM, requiring more hardware to study and learn ESX(i). Beginning with ESX 3.5 and Workstation 6.5.2, it is possible to virtualize ESX(i) in a Workstation VM (or inside a vSphere server, for that matter, but we won’t be discussing that in this white paper), although this is not supported. It is possible to run ESXi 6.x inside of ESXi 6.x, Fusion 7, or VMware Workstation 11 or higher. In fact, VMware and Global Knowledge teach their vSphere 6 courses in this manner: the ESXi servers needed for class run as VMs on ESXi hosts, which works well but requires a dedicated machine. This is often possible in a business setting, but may be difficult for the small business or others where spare hardware is not available. Hence, this white paper will discuss how to use Workstation 14 or Fusion 10 (or higher) to create the hosted environment. I often get asked by my students how to (relatively) inexpensively set up this kind of lab for study after class, and the result is this white paper. When specific vendors are mentioned, it is not an endorsement, but rather an example of something that meets the recommended specifications.

This white paper is broken down into three major sections: the first and most detailed is about the hardware required, the second is about the VMware Workstation configuration, and the third is about installing vSphere 6.x and vCenter (vC) 6.x. Note that this white paper is not intended to be an in-depth review of how to install and configure vSphere, as that is taught in the VMware classes, and a VMware class is required for certification.

The biggest question is whether to build your lab at a stationary location, such as your home or on a spare server at work, or whether it needs to be portable. In many cases, a stationary configuration is sufficient, so the desktop/server route works well and is usually less expensive. If you need to do demonstrations for customers, study at multiple locations, etc., then a laptop configuration may work better for you, though it will probably cost more. As far as minimum CPU requirements are concerned, you’ll need at least two cores (or CPUs) to be able to install ESXi and/or VC, but this will be very slow. I suggest a minimum of four cores (or CPUs, preferably hyperthreaded) so there is enough CPU power to run the VMs and the host operating system (OS). Eight or more cores work well. If you’re planning on creating and using I/O-intensive VMs, and/or running many VMs, and/or doing a lot on the host OS while VMs are running, you should consider at least 12 cores. Remember that ESXi 6 (vSphere 6) requires 64-bit-capable CPUs to run, so be sure to purchase 64-bit-capable CPUs with either Intel VT or AMD-V support (both physically on the CPU and enabled in the BIOS).
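As a quick sanity check before buying or repurposing hardware, the virtualization flags can be read directly from the host OS. The snippet below is a hypothetical convenience helper for Linux hosts only, not part of any VMware tooling; on Windows, the CPU vendors’ own detection utilities serve the same purpose. It simply looks for the vmx (Intel VT) or svm (AMD-V) flag in /proc/cpuinfo.

```python
from pathlib import Path

def virtualization_support() -> str:
    """Report whether the CPU advertises Intel VT-x or AMD-V (Linux only)."""
    flags = Path("/proc/cpuinfo").read_text()
    if " vmx" in flags:
        return "Intel VT-x present (also confirm it is enabled in the BIOS)"
    if " svm" in flags:
        return "AMD-V present (also confirm it is enabled in the BIOS)"
    return "No hardware virtualization flags found"

if __name__ == "__main__":
    print(virtualization_support())
```

Note that the flag only shows what the CPU supports; the feature may still be disabled in firmware, which is why the messages above say to confirm the BIOS setting as well.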
<urn:uuid:9763c183-c933-46a3-8f44-4c38131f2b6b>
CC-MAIN-2024-38
https://www.globalknowledge.com/us-en/resources/resource-library/white-papers/how-to-build-a-vmware-vsphere-6x-home-lab/
2024-09-17T07:43:19Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651750.27/warc/CC-MAIN-20240917072424-20240917102424-00294.warc.gz
en
0.943753
787
2.546875
3
The many benefits of connected vehicles touted in the press all sound great, but do we really want to put our autonomy, and therefore our safety, in the hands of a wilfully minded machine? The experts at SGS help us to decide. Growth in the market for connected automobiles has snowballed. It is estimated 76.3 million cars on our roads will be connected by the year 2023, up from 28.5 million in 2019. By 2025, and even following the effects of the Covid-19 pandemic, the market is predicted to be worth 166 billion USD. Autonomous road transport is the goal for many, but for consumers this technology is already appearing on our roads in the form of connected vehicles.

What is a connected vehicle?
A connected vehicle is one that is cognisant of the world around it via access to the internet and, often, a wireless local area network (WLAN). It can access data, send data, download software and patches, and communicate with other Internet of Things (IoT) devices. It can also give Wi-Fi access to onboard passengers – that’ll please any teenage passengers. Connective technology enhances the driving experience by improving safety, security, navigation, infotainment options and onboard diagnostics. By establishing bi-directional communication between vehicles, mobile devices and infrastructure networks, the vehicle can receive triggered communications from other users on the network. This might include other vehicles, traffic and intersection monitoring systems and remote payment systems (e.g., tolls). Through communication with these systems, the connected vehicle can build up a picture of the environment that surrounds it, thereby allowing it to interact and react in a safe and efficient manner. Connective systems include:
- Adaptive cruise control
- Route planning systems to avoid congestion/accidents/roadworks
- Automatic braking systems
- Smartphone connectivity
- Diagnostic data sharing to remind the owner about servicing, etc.
- Location identification if the vehicle is stolen or misplaced
- Automatic payment systems
Connectivity is achieved through embedding or tethering technology to the car. Embedded technologies will often operate without the knowledge of the driver, who will only recognise their impact when they are triggered. For example, dedicated short-range communications (DSRC) or Cellular Vehicle-to-Everything (C-V2X) radios with very low latency can be used in safety-critical features to trigger braking before a collision. Tethered technologies, however, require the operator to connect an additional piece of equipment to the vehicle – e.g., a smartphone. The driver then has access to the functionality of the smartphone through the vehicle, allowing them to make telephone calls, listen to music, etc. Beyond expanding the infotainment options for occupants, the primary reason for many consumers to buy a connected vehicle is its enhanced safety. The US Department of Transportation (DOT) has said that its efforts had previously been aimed at helping people to survive crashes but, “connected vehicle technology will change that paradigm by giving people the tools to avoid crashes.” Allowing the vehicle to interact with and respond to the world around it will give the driver 360-degree awareness of hazards and situations that may be invisible to them. The technology can alert them to imminent crash situations, thereby speeding up their responses or, in some cases, responding for them. However, the safety benefits of this technology will be lost if it becomes a distraction.
Instead, much of the connective technology in our vehicles must be hidden and only become apparent to the driver when it is needed. Some people are distracted by their air freshener, so that is probably for the best. The next step for connected vehicles is the introduction and incorporation of 5G technology. This will allow cars to ‘talk’ to one another in near real time. The advantages of this will be profound, making it the next step towards fully autonomous vehicles. For example, it will allow:
- Cars travelling in opposite directions to share road condition data
- Cars to communicate their position to enable safe driving at higher speeds
- Cars to determine which has the right of way at stoplights, etc.
- Real-time network communication to find parking spaces and addresses, avoid congestion, etc.
- Safer travel through a reduction in accidents
This technology is currently in development. Today, it can cope with simplified highways, but the complexity of real-life situations means workable systems for safe navigation are still a few years away. Perhaps more than a few years. Overall, I think being able to locate a vehicle if it’s stolen, great; potential extra safety features, great. But is there a danger that those who don’t need to might start to rely a little too heavily on their vehicle’s “judgement”? There is such a thing as human error, but with ‘technical issues’ being a phrase I hear daily, machines certainly aren’t infallible either. We’re definitely on the right road, pun intended, but we still have quite the journey ahead of us.
<urn:uuid:2fd6371e-9e32-4da1-a998-db713df90e4b>
CC-MAIN-2024-38
https://datacentrereview.com/2021/05/connected-vehicles-a-help-or-a-hindrance/
2024-09-18T15:51:03Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651899.75/warc/CC-MAIN-20240918133146-20240918163146-00194.warc.gz
en
0.953932
1,080
2.671875
3
Microsoft researchers have developed an artificial intelligence (AI) system that has taught itself the intricacies of Mahjong and can now match the skills of some of the world’s top players. The complex board game of chance, bluff, and strategy was invented in China thousands of years ago and remains a passionate pastime for millions of Asians today, with many dedicated competitors playing online. Computers have learned to play Chess and another ancient Chinese game, Go, amid much fanfare in the past. But scientists at Microsoft Research (MSR) Asia see their achievement as far more than just a case of technology mastering yet another game. The researchers – who named their system Super Phoenix, or Suphx for short – developed a series of AI algorithmic breakthroughs to navigate the uncertain nature of Mahjong. With more work, these could potentially be applied in real situations to solve problems thrown up by unknown factors and random events. “For as long as researchers have studied AI, they have worked to build agents capable of accomplishing game missions,” says Dr. Hsiao-Wuen Hon, Corporate Vice President, Microsoft Asia Pacific R&D Group and MSR Asia. The MSR Asia team designed Suphx to self-learn Mahjong’s strategies, tactics, and subtleties through the experience of playing against thousands of people on Tenhou – a Japan-based global online Mahjong competition platform with more than 300,000 members. With constant machine learning, Suphx went from being a novice to an expert after more than 5,000 games over four months. The more it played, the more it learned at an ever-increasing pace. It has now honed its own playing style and can balance attack and defense moves, strategically weigh short-term losses against long-term gains, and make quick hand calculations and decisions with unclear information. Suphx has become the first AI system to compete at Tenhou’s prestigious “10th dan” ranking – something that just 180 people have ever done. Only a handful of professionals now play at a higher level in a private room for human players only. Mahjong is played socially and professionally across East and Southeast Asia and was recently featured in the Hollywood movie, “Crazy Rich Asians.” It is very different from Chess and Go, which are “perfect information games” in game theory parlance – as both allow players to see everything on a board that can impact the outcome. Mahjong is an “imperfect information game” because many factors are unknown. For instance, players must account for their opponents’ unseen tiles. This can lead to bluffs and unpredictable outcomes as they decide what to discard and whether to meld or fold. “Mahjong is more complex than other board games. So playing becomes an art as well as a science,” says Dr. Hon. “Good Mahjong players rely on a combination of observation, intuition, strategy, calculation, and chance that presents unique challenges for an AI system.”
<urn:uuid:932926be-56cd-4db7-bc23-737ab7253c2a>
CC-MAIN-2024-38
https://www.glocomp.com/more-than-a-game-mastering-mahjong-with-ai-and-machine-learning/
2024-09-18T15:02:34Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651899.75/warc/CC-MAIN-20240918133146-20240918163146-00194.warc.gz
en
0.968894
653
2.75
3
An Intrusion Detection and Prevention System (IDPS), also known as an Intrusion Prevention System (IPS), is a device or software that screens network traffic to avert security incidents. IDPS solutions can take different forms, but they generally work across their setting to scan for malicious activity, log and analyze data, identify anomalies, prevent harm or its continuation, and report abnormalities. An IDPS can be network-based to protect a computer network, wireless network-based to protect Wi-Fi, it can monitor network behavior, and it can take the form of a software download on a single device. McAfee NSP, Trend Micro TippingPoint, and Hillstone NIPS are all examples of an IDPS. "If our firewall catches some malware delivery, our IDPS will note that so we have some basis for ongoing or additional prevention."
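As a toy illustration of the anomaly-identification side of an IDPS, the sketch below flags source IPs whose connection count in a time window exceeds a simple threshold. The log data and threshold are made up for the example; this is a conceptual aid only, not how any of the commercial products named above work.

```python
from collections import Counter

# Hypothetical log of (source_ip, destination_port) events in one time window.
events = [("10.0.0.5", 443)] * 3 + [("10.0.0.9", 22)] * 120

THRESHOLD = 100  # connections per window considered anomalous

counts = Counter(src for src, _ in events)
for src, n in counts.items():
    if n > THRESHOLD:
        print(f"ALERT: {src} opened {n} connections this window")  # 10.0.0.9 fires
```

A real IDPS layers signature matching, protocol analysis, and automated blocking on top of simple statistics like this.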
<urn:uuid:c55d3d20-bab3-424c-9e67-55b3d4596437>
CC-MAIN-2024-38
https://www.hypr.com/security-encyclopedia/intrusion-detection-prevention-system
2024-09-21T00:38:50Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701425385.95/warc/CC-MAIN-20240920222945-20240921012945-00894.warc.gz
en
0.913381
171
2.625
3
Lessons learned from a ransomware infection
Since October, Datto has been conducting testing designed to quickly detect ransomware in backup data sets. Here’s why: it has become a major threat to individuals and businesses over the past few years, and the cyber extortionists behind these attacks operate with increasing sophistication. SMBs can be particularly vulnerable to attacks and are more likely to pay a ransom to get their data back than large businesses. In many cases, these attacks are conducted by large criminal organizations using wide-reaching botnets to spread malware via phishing campaigns. Victims are tricked into downloading an e-mail attachment or clicking a link using some form of social engineering. Fake email messages might appear to be a note from a friend or colleague asking a user to check out an attached file. Or, email might come from a trusted institution (such as a bank) asking you to perform a routine task. Sometimes, ransomware uses scare tactics, such as claiming that the computer has been used for illegal activities, to coerce victims. When the malware is executed, it encrypts files and demands a ransom to unlock them. Antivirus software is obviously essential, but on its own it isn’t enough. Many attacks still get through. So, a proper ransomware protection strategy also requires employee education and backup. It’s also critical to keep applications patched and up to date to minimize vulnerabilities. Education, antivirus, and patch management can help you avoid attacks to begin with. Backup allows you to recover if those measures fail. Also, many people assume that ransomware only locks the files on a single device. While this was the case in the early days, today’s ransomware is designed to spread itself out across entire networks. So, the sooner you can detect the attacks that do slip by security measures, the better. Recovering files for a single machine is obviously much easier than recovering files for infected machines across an entire network -- stopping the infection at Patient Zero, if you will.

Two test types
Backup presents an opportunity for early detection, because each time a backup is performed, it can be compared against previous backups to look for changes. Not all ransomware operates the same way, but there are a number of common themes. For example, ransomware always encrypts user documents and directories (e.g., photos, files stored in the "My Documents" folder, etc.). It also encrypts "work"-related files (e.g., docx, xlsx, etc.). Also, ransomware is constantly changing to avoid detection, which is why antivirus software is not always capable of blocking the malware. Antivirus software relies on a virus signature database that must be constantly updated. Since Datto is not an antivirus provider and does not maintain such a database, testing focused on detecting known ransomware characteristics. Our team devised two types of tests to identify these characteristics. Both were designed to run fast enough to keep up with frequent backups, rely only on information captured in snapshots, and not boot the box or risk further infection. The first, known as file upheaval testing, looks for whether files have changed between backups. For example, about 80 percent of the ransomware tested changed file names when encrypting files. Upheaval testing is designed to look for batches of changes to files that could indicate that ransomware is present. The remaining 20 percent of ransomware tested did not change file names when encrypting data.
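A rough sketch of what an upheaval-style comparison between two backup snapshots might look like is below. The directory paths and the 50 percent threshold are invented for illustration; this captures the general idea, not Datto’s actual implementation.

```python
from pathlib import Path

def snapshot_names(root: str) -> set:
    """Collect the relative file paths present in a backup snapshot."""
    base = Path(root)
    return {str(p.relative_to(base)) for p in base.rglob("*") if p.is_file()}

def upheaval_ratio(prev_root: str, curr_root: str) -> float:
    """Fraction of files from the previous snapshot whose names vanished."""
    prev, curr = snapshot_names(prev_root), snapshot_names(curr_root)
    return len(prev - curr) / max(len(prev), 1)

# Flag the backup if an unusually large batch of file names changed at once.
if upheaval_ratio("/backups/monday", "/backups/tuesday") > 0.5:
    print("Possible ransomware: mass file-name upheaval detected")
```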
The second type of test, known as entropy testing, looks for specific conditions that indicate that files have been encrypted. All files, including images, have some degree of organization and structure. Encrypted data, however, is completely randomized. High levels of entropy in backup data can also indicate the presence of ransomware. Based on the information gathered during the months of ransomware testing, we were able to develop a new ransomware detection feature. When ransomware is detected, an alert is sent allowing businesses and other users to diagnose the issue and restore data quickly to a point in time before the infection. There is a growing trend to develop similar technologies that are capable of combating the ransomware epidemic via backups. This is vital for those occasions when ransomware gets through firewalls and antivirus protections. Unfortunately, the popularity of ransomware among cyber criminals does not appear to be waning. Recently, Datto surveyed more than 1,000 IT service providers located across the world about the current state of ransomware and found that a staggering 97 percent of respondents said ransomware attacks on small businesses are becoming more frequent, a trend that will continue over the next two years. The survey found that 91 percent of respondents reported their clients were victimized by ransomware, 40 percent of whom had experienced six or more attacks in the last year. Nine out of ten IT service providers reported ransomware attacks among their small business customers. The number one cause of ransomware infection? Almost half, 46 percent of respondents, said that phishing emails were to blame. The survey found that the average ransom requested was typically between £400 and £1,600, but ten percent of respondents reported the ransom average to be greater than £4,000. However, the ransom is just a fraction of the losses businesses can incur from a ransomware attack. The downtime following the attack can be crippling. According to the survey, 63 percent of respondents mentioned that a ransomware attack led to business-threatening downtime. Finally, there is a disconnect between IT service providers and their small business customers when it comes to how they perceive the threat of ransomware. The majority of IT service providers are "highly concerned" about ransomware but indicated that their customers are generally not, likely due to lack of awareness. The most important lesson we learned from infecting ourselves with ransomware is that early detection matters. This allows IT professionals to:
- remotely diagnose the extent of damage
- contain and minimize infection
- identify the "last good" backup quickly
- differentially update production machines to restore known good versions of compromised files
If an infection is addressed before it spreads to other systems, recovery is considerably faster. For IT service providers, early detection reduces the time and effort required to perform complex recoveries of data and applications and allows them to better serve their customers. Robert is responsible for managing Datto’s development and infrastructure initiatives in support of its comprehensive data backup and protection platform, which is specifically designed to meet the needs of SMBs. Published under license from ITProPortal.com, a Future plc Publication. All rights reserved.
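Returning to the entropy test described above, a minimal Shannon-entropy check looks like the sketch below. The 7.5 bits-per-byte threshold and the sample file name are illustrative assumptions rather than Datto’s actual parameters, and since compressed formats also score high, a real product would combine this signal with others.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (8.0 = random; near 0 = highly regular)."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((n / total) * math.log2(n / total)
                for n in Counter(data).values())

def looks_encrypted(path: str, threshold: float = 7.5) -> bool:
    """Heuristic: near-maximal entropy suggests encrypted (or compressed) content."""
    with open(path, "rb") as f:
        sample = f.read(1 << 20)  # inspect the first 1 MiB
    return shannon_entropy(sample) > threshold

print(looks_encrypted("report.docx"))  # hypothetical file from a snapshot
```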
<urn:uuid:2183e657-8892-406b-8aea-a78804774295>
CC-MAIN-2024-38
https://betanews.com/2017/03/14/ransomware-infection-lessons/
2024-09-11T06:41:24Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651344.44/warc/CC-MAIN-20240911052223-20240911082223-00894.warc.gz
en
0.959732
1,301
2.578125
3
When we go online shopping or banking, for security, we expect the website to have both the “HTTPS” prefix and the secured lock icon in the address bar. But what do this “HTTPS” and lock icon actually mean? To answer these questions we need to understand HTTPS, the SSL protocol, and SSL certificates.

On HTTPS, SSL, and SSL certificates
Hypertext Transfer Protocol Secure (HTTPS) is the secured version of HTTP, which encrypts communications between computer networks. In HTTPS, the communication is encrypted using Secure Sockets Layer (SSL), now known as Transport Layer Security (TLS). Hence, HTTPS is also referred to as HTTP over SSL (or HTTP over TLS). Any website with an HTTPS web address uses SSL. Specifically, SSL (or TLS) is a protocol that creates secured connections between communicating devices by implementing SSL certificates. An SSL certificate is a web server’s digital certificate, issued by a certificate authority (CA), hosted on the web server, and presented to connecting web browsers. A CA is a trusted third-party organization that generates and gives out SSL certificates to website owners. The SSL certificate plays a critical role in building trust between the browser and the web server, and it does this by performing two functions: 1) authenticating the identity of the webserver and the website; and 2) enabling encryption of the data transferred between the web browser and the webserver. In essence, SSL certificates enable websites to use HTTPS, which is a more secure protocol. This allows a private conversation between just two parties, securing sensitive user data (e.g., usernames, passwords, email addresses, banking information, etc.) and reducing the risk of sensitive data being stolen or tampered with by fake versions of websites. Here is the information included in an SSL certificate:
- The domain name that is certified
- The associated subdomains
- The person, organization, or device that owns the domain
- The certificate authority
- The digital signature of the certificate authority
- Issue date of the certificate
- Expiration date of the certificate
- The public key (the matching private key is not part of the certificate; it is kept secret on the server)

Types Of SSL Certificates
There are several types of SSL certificates, and they can be classified based on their level of identity validation and the number of domains they cover.

A. Level of validation
- Domain validated (DV) certificate – an X.509 public key certificate issued after the applicant has proven some control over the domain. This is the most common type of certificate. The sole criterion for a DV certificate is proof of control over the whois records, DNS records, email, or web hosting account of a domain. Basically, DV certificates can be issued without any human intervention, which gives them the following advantages:
  - They are often cheap (10 USD per year) or even free, e.g. Let’s Encrypt.
  - They can be generated and validated without any documentation.
  - Most of them can be issued in a minute or so, via special tools which automate the issuing process.
  Web browsers will display the secured lock icon but will not show any legal entity. Clicking the icon will only show “This website does not supply ownership information”.
- Organization validated (OV) certificate – an X.509 public key certificate issued when the applicant satisfies these criteria:
  - control of the domain (similar to a DV certificate);
  - physical presence of the website owner;
  - fee between 50 and 100 USD per year.
  Web browsers will display the secured lock icon but will not show any legal entity, similar to a DV certificate.
- Extended validation (EV) certificate – an X.509 public key certificate issued after a CA verifies the legal organization that controls the domain. This is the most trustworthy type of certificate. The verification includes:
  - control of the domain (similar to a DV certificate);
  - physical, operational, and legal presence of the website owner;
  - government business records, to make sure the company is registered and active;
  - independent business directories, such as Yellow Pages, Dunn and Bradstreet, Salesforce’s data.com, etc.;
  - inspection of all domain names in the certificate;
  - fee between 150 and 300 USD per year.
  Web browsers will display the secured lock icon and will have menus that show the EV status of the certificate and the name of the validated legal identity, i.e., the registered company of the website. Clicking the icon will show details about the organization, such as name and address.

B. Number of domains covered
- Single domain – This is the most common type of certificate. It secures one valid domain or subdomain name, such as example.com or www.example.com.
- Multiple domains (UCC/SAN) – This type of certificate is also known as a Unified Communications Certificate (UCC) or Subject Alternative Names (SAN) certificate. It is not limited to a single domain, and you can cover multiple domains up to a certain number. You can mix different domains and subdomains as long as they are related websites.
- Wildcard domain – This type of certificate covers the main domain as well as an unlimited number of subdomains that fall within the wildcard pattern, e.g. *.example.com covers example.com, www.example.com, mail.example.com, neo.example.com, etc.

How does an SSL certificate work?
Given a scenario of a user wanting to connect to the Mlytics webserver, the following happens when the user inputs https://www.mlytics.com and hits Enter.
- The browser initiates a TCP handshake and then requests secure pages (HTTPS) from the Mlytics webserver.
- The Mlytics server sends an SSL certificate (digitally signed by a CA). Devices attempting to communicate with the webserver need the SSL certificate to verify the server’s identity and to obtain the webserver’s public key. The private key is kept secret and secure on the webserver.
- Once the browser gets the SSL certificate, it checks the digital signature of the certificate to make sure that it is valid and comes from the correct webserver. A digital signature is created with the CA’s private key, and browsers refer to their installed CA public keys to verify the digital signatures of SSL certificates.
- Once the SSL certificate’s signature is verified, the browser obtains the webserver’s public key. At this point, the secured lock icon will appear on the browser’s address bar. The lock icon indicates that the certificate can be trusted and that the browser is indeed communicating with the correct webserver, not an impostor.
- The next step is for the browser to share a secret. The browser creates a pair of symmetric keys, or shared secret. It keeps one key and gives the other key to the webserver. However, it is not safe for the browser to send the secret in plain text, and this is where the webserver’s public key comes into play. The webserver’s public key is a long string of characters used for encrypting the secret from plain text to cipher text.
  Once the copy of the secret is encrypted, the browser sends this encrypted secret to the webserver.
- When the webserver gets the encrypted key, it uses its private key to decrypt it. The webserver’s private key is a long string of characters used for decrypting the secret from cipher text back to plain text. Data encrypted with the public key can only be decrypted with the private key. After decrypting, the webserver and the browser have now obtained the same copy of the shared secret (symmetric keys).
- From now on, all traffic between the browser and the webserver is encrypted and decrypted using the symmetric keys.
This example also shows how the asymmetric and symmetric key algorithms work. The asymmetric key algorithm (public key and private key) is used to verify the identity of the webserver and to build trust between the browser and the webserver. Once the connection is established, the symmetric key algorithm (shared keys) is used to encrypt and decrypt all traffic between the browser and the webserver.
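The verification half of this exchange can be observed with a few lines of Python using only the standard library. Here, create_default_context() loads the system’s trusted CA certificates and performs certificate-chain and hostname verification during the handshake, after which the negotiated protocol version and the validated certificate’s fields can be printed. This is an illustrative sketch; any HTTPS host can be substituted.

```python
import socket
import ssl

HOST = "www.mlytics.com"  # any HTTPS site works here

context = ssl.create_default_context()  # loads the trusted CA certificates
with socket.create_connection((HOST, 443)) as sock:
    # wrap_socket performs the TLS handshake, including signature
    # verification of the server's certificate chain and a hostname check.
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        print("Negotiated protocol:", tls.version())  # e.g. 'TLSv1.3'
        cert = tls.getpeercert()  # parsed fields of the validated certificate

print("Subject:   ", cert["subject"])
print("Issuer:    ", cert["issuer"])
print("Expires on:", cert["notAfter"])
```

If verification fails (for example, an expired or self-signed certificate), wrap_socket raises ssl.SSLCertVerificationError instead of returning a connection, which is the programmatic equivalent of the browser refusing to show the lock icon.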
<urn:uuid:bf018b78-0b4e-4d2b-ae12-72c9d72f0b43>
CC-MAIN-2024-38
https://learning.mlytics.com/the-internet/what-is-an-ssl-certificate/
2024-09-12T13:08:13Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651457.35/warc/CC-MAIN-20240912110742-20240912140742-00794.warc.gz
en
0.908571
1,738
3.75
4
Advances in technology have seen the process go from an extremely expensive and risky venture to something which could one day fill everything from our art galleries to our hospitals.

The beginnings of 3D printing
In the 1980s, engineer Chuck Hull produced the first examples of what 3D printing could potentially do using a process he named stereolithography. Using a vat of liquid polymer and a raised platform, he was able to instruct a UV light to harden a layer of plastic, then sink the platform and harden each layer above in certain formations and designs until he had a solid, tangible product. Stereolithography was first used to create prototype parts and has made its creator a very rich man, but such processes were difficult and extremely expensive to implement en masse.

The current applications of 3D printing
At the moment 3D printers are used only to aid the manufacturing process, rather than to its full extent. They’re used to create prototype parts, reducing the risk of wasting manpower and materials if a project fails. But in tech labs and universities like Michigan Technological University, the technology is already paying dividends, with MTU overseeing the production of the Recyclebot. However, experts have predicted that as 3D printers become simultaneously more affordable and more innovative, this could lead to a drastic workforce reduction in many areas of manufacturing. Hiring and training workers to produce parts would undoubtedly prove costlier on an ongoing basis, in terms of both money and productivity, whereas an initial heavy investment in 3D printers – once their effectiveness and potential has been fully realised – would seem to be a cost-effective solution. We’re already seeing what the products created by 3D printers can do – Staples stocks the affordable ST3Di Pro 200 3D Printer at stores throughout Europe, and is also offering in-store customers the opportunity to print out their own more ambitious designs via a new website. At this point we are only a step away from a 3D printer in every home, and soon we could be able to reproduce parts for anything that requires repair. For example, if our printers require a new tray, we could soon have one ready-made in the home, reducing the need for a factory which produces printer parts. It could even extend as far as the military. Although we’re at risk of losing a lot of jobs to this technological advance, an article on Bloomberg calls for a different approach. “As customised mass production becomes more common, a more flexible approach would focus on processes instead of products,” it wrote, “that is, approve any product made with certified equipment according to transparent manufacturing guidelines.” Training a new generation of employees to efficiently operate 3D printers rather than standing on an assembly line will not only nurture promising talent but make everyone’s job more fulfilling. Done early enough, the integration of 3D printing into mass production could secure a bright future for the manufacturing industry and keep the world’s economy ticking along, with supply keeping pace with the growing demand for new techniques.
<urn:uuid:269990a6-47f1-4ceb-8421-87b10b3d94c0>
CC-MAIN-2024-38
https://www.information-age.com/will-3d-printing-affect-economy-31653/
2024-09-12T12:06:20Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651457.35/warc/CC-MAIN-20240912110742-20240912140742-00794.warc.gz
en
0.957013
644
3.359375
3
Kirsten Bay, CEO and President of Cyber adAPT, outlines the limitations of AI in cyber security and why the human brain remains our greatest asset in the battle against attacks. Let’s start by stating the obvious, shall we? Cyber security is a huge issue. According to official statistics, 90 per cent of all large organisations have reported suffering a security breach[i]. In fact, it is no longer a matter of “if” you suffer a breach, but “when”. There’s been a 144 per cent increase in successful cyber-attacks on businesses[ii] and a charted 267 per cent increase in ransomware attacks in 2016[iii]. And the average cost of a data breach is now estimated at $4 million[iv]. None of this should come as a shock. We have been fed these stats over and over again by an industry that was estimated to reach $170 billion by 2020[v]. The enormity of the challenge and the complexity of the solution are mind-boggling. As an industry, we are scrabbling for solutions. How can we survive this tidal wave of threats? One solution is to automate it. To get robots – artificial intelligence or AI – to do it for you. Sure – sounds good. But let’s dig a little deeper. Can AI really help? First of all, let’s consider how smart AI actually is. The answer is “pretty smart”. Plenty of machines can do impressive things. Many of us will remember chess master Garry Kasparov being trounced by IBM’s Deep Blue nearly 20 years ago. Even more of us will recall IBM’s Watson beating the human contestants of TV quiz show Jeopardy in 2011. But it is not all about supercomputers. Many of us experience AI every day when we talk to Siri or Cortana on our smartphones. Some of us even allow AI to do the cleaning. Amazon recently sold 23,000 robotic vacuum cleaners in a single day[vi] before they were let loose to learn how to spruce up our living rooms. Even Tesla’s autopilot is a form of AI. When it comes to commercial deployments, AI is doing entry-level jobs like offering holiday shoppers travel ideas and developing personalised marketing[vii]. AI is smart and will rapidly get smarter. So smart, in fact, that some believe in something called the “Singularity” – the point at which AI becomes as powerful as the human brain. This, if it does happen, will do so sometime around 2045[viii]. The point is this: AI’s good. In fact, it’s amazing. But it’s got a long way to go. At the moment, the common theme in the use of AI is a narrow scope of application: play chess, answer general knowledge, clean the floor, and drive a car. While impressive, AI’s still in its infancy – quite literally. A team from the Massachusetts Institute of Technology developed an AI system able to take an IQ test designed for a young child. The results showed it had the intelligence of a four-year-old[ix]. Some take the view that AI will never trump the human brain. Danko Nikolic, a neuroscientist at the Max Planck Institute for Brain Research in Frankfurt, recently stood up in front of an audience of AI researchers and made a bold claim: we will never make a machine that is smarter than we are[x]. He says, “You cannot exceed human intelligence, ever. You can asymptotically approach it, but you cannot exceed it.” Even if we could, implicit in the prophecy of the Singularity is the idea that AI is currently nowhere near as clever as a fully developed human and will not be for nearly 30 years. As a result, we, as humans, continue to run rings around our computer friends in most respects. And cyber security is no exception. There are extremely successful hackers out there now.
Collectively, they steal billions of dollars, with groups such as the Carbanak gang pulling off one of the greatest heists of all time without the slightest bit of tunnelling into gold-laden vaults. With their human criminal minds they stole more than $1bn from more than 100 institutions in 30 countries over a period of two years[xi]. These people are smart and they do not just rely on malware to do the job for them. Yes, they need to know how to code and deploy malware, but they also need to be brilliant at social engineering; they need to have an understanding of finances and law enforcement; and they need to be one step ahead of security teams. To achieve what they have requires emotional and technical intelligence as well as an automated army of bots doing the dirty work. With criminal brilliance like this ready and willing to strike, will you be happy putting your defenses in the hands of AI? Think very carefully. Considering the potential disaster that can be unleashed in the event of a breach, will you be happy putting the equivalent of a four-year-old on the front line of your defenses? No. Me neither. Sure, that seasoned criminal is using machines and malware to infiltrate networks, which are arguably less smart than the AI, but behind every piece of malware is a person with a specific and very human intent: to steal credentials, to undertake reconnaissance, to shut something down or to embarrass someone. AI cannot beat this. It is not a machine vs. machine battle, and treating it as such is to misunderstand the nature of cyber-security. This begs the question: how do we deal with the tsunami of cyber threats we now face if we can’t use AI? Until the Singularity happens – if it does – the answer lies in a human approach. Hackers are human, with human intentions. It stands to reason that they need to be fought with human insight. This is why the best defense combines the smartest minds with the best software. In looking for a security partner, organisations keen to defend their networks need to find vendors who have real practitioners from both the security and hacking worlds. Combined with the expertise of network and mobile technology specialists, these practitioners need the space to monitor millions of packets of real-world traffic so that statisticians can develop models that make a difference. Only by doing so can they focus on codifying patterns of behavior that will find attacks others won’t. In conclusion, remember: you would not put Siri in charge of the White House, you would not allow a robot vacuum to manage hygiene in a hospital, and you would not get the office junior to chair board meetings. So you wouldn’t put AI in charge of your security either. AI is just not up to the job. Yet. The opinions expressed in this article belong to the individual contributors and do not necessarily reflect the views of Information Security Buzz.
<urn:uuid:57584a2d-fd35-4085-9251-f9c8636def2d>
CC-MAIN-2024-38
https://informationsecuritybuzz.com/wouldnt-put-four-year-old-charge-security/
2024-09-13T17:46:06Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651535.66/warc/CC-MAIN-20240913165920-20240913195920-00694.warc.gz
en
0.948865
1,485
2.859375
3
The Internet of Things (IoT) is a boom that has accompanied ongoing industrial progress, offering something inexpensive and accessible to everyone, yet still quite unreliable and insecure. This has triggered a new set of concerns about the wellbeing of those who use such a grid and thereby expose themselves to serious threats. From an engineering point of view, developing an IoT project is quite attainable, and even communities across the globe with a modest assembly industry can build such technical systems while remaining competitive at a national, regional, or even international level. In other words, the technology can be mastered today, and many businesses can expect a good return on investment (ROI) if they fund an IoT project. Even though some fear of IoT solutions exists, the corporate race for profit will not be shaken: those on the marketplace will do whatever it takes to stay competitive, paying little attention to warnings from those who advocate safety and security. The fact is that IoT technologies rely on web connectivity to govern interconnected devices that communicate with each other, and, as is well known, the internet is a critical infrastructure that provides access to many actors, literally exposing devices and their networks to all kinds of high-tech operations. When facing such a security challenge, the logical question is how the risk can be avoided or mitigated. That question imposes new requirements on engineering teams worldwide to better assure the consumers of such products and services, and it opens a broader one: how to transition from unsafe to safe use of emerging technological ideas and their underlying concepts. At the beginning of any project, it is important to define initial requirements that contribute to the quality, reliability, and security of the final solution, capturing the internal, external, and combined challenges that are part of any technical system. The IoT is a paradigm that still needs to be used wisely. Its inventors were searching for an answer to an earlier engineering challenge: a shortage of resources across the world. At that time, humankind sought a response that was also economically suitable and marketable, since cost-effectiveness combined with functionality are the main demands of an optimal approach. A couple of decades ago, when the IoT began, the well-developed Wi-Fi communication of the day could do a great job, and it did; but only a few years later, many became aware that this digital transformation has plenty of weaknesses that could affect the lives and businesses of people across the globe.
The current tendency suggests that the IoT is a well-accepted concept nowadays, even though it still carries a heap of vulnerabilities. With new innovations, those barriers could be removed: if the information exchange between devices is sensitive to hacker attacks, then high-tech defense techniques and tactics can certainly be applied to mitigate the threat. This need not be alarming, as the wheel of history keeps turning, and even modern technologies are just one phase in the overall progress and evolution of human beings across the planet. Apparently, one engineering challenge has been well tackled; the IoT truly offers plenty of options and should not be dismissed outright. It will bring many false positives, and its false negatives should be treated as expected concerns in a world of math and science. Undoubtedly, the clever minds that resolved one engineering challenge will be capable of proceeding to the next, perhaps not in the current generation of technological development, but more likely through a future collection of talents in science and technology. Mindful individuals throughout history have always made rational decisions, providing their environments with true knowledge and, in the case of engineering, with solutions that literally work because they rest on rigorous scientific findings and evidence. Genuine scientific thought does not hesitate to be accurate, and those who have left a mark in math and science have always avoided the excessive speculation that can destroy the beauties the human mind has created over time. Only by coping with pragmatic facts can dangerous mistakes be prevented and the world kept in safe hands, which should be an imperative of any prospective action. Indeed, a typical scientist spends a lot of time thinking hard; once all angles have been observed, pieces of knowledge can be taken into consideration. Despite those who would ruin it, science will keep putting blocks into their right places, building the world we know today, and community members led by ethical principles will always attempt to neutralize destruction by thinking in a fully positive and constructive manner. Finally, regarding IoT products and services, there are still many open questions about the course the world will take in the future. Even if there are engineering challenges at present, those concerns will confidently be overcome tomorrow, helping entire societies deal with less irrationality and apply real reason, so that decision-makers can keep their direction as wise leaders who cope with facts, not beliefs.

About The Author
Milica D. Djekic is an Independent Researcher from Subotica, the Republic of Serbia. She received her engineering background from the Faculty of Mechanical Engineering, University of Belgrade. She writes for domestic and overseas presses, and she is the author of the books “The Internet of Things: Concept, Applications and Security” and “The Insider’s Threats: Operational, Tactical and Strategic Perspective,” published in 2017 and 2021 respectively by Lambert Academic Publishing. Milica is also a speaker on the BrightTALK experts’ channel.
She has been a member of ASIS International since 2017 and a contributor to the Australian Cyber Security Magazine since 2018. Milica’s research efforts are recognized by the Computer Emergency Response Team for the European Union (CERT-EU), Censys Press, BU-CERT UK, and the EASA European Centre for Cybersecurity in Aviation (ECCSA). Her fields of interest are cyber defense, technology, and business. Milica is a person with a disability.
<urn:uuid:15868504-426f-493e-8a01-f48c797847e9>
CC-MAIN-2024-38
https://www.cyberdefensemagazine.com/the-internet-of-things-technological-perspective/
2024-09-13T19:05:44Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651535.66/warc/CC-MAIN-20240913165920-20240913195920-00694.warc.gz
en
0.967003
1,349
2.78125
3
This decade is critical in the fight against climate change. We’re starting to experience its repercussions in our daily lives, and many governments now understand that scientists’ projections mean they must enact more stringent sustainability measures. With decarbonization policy, green energy overhauls and pollution taxes agreed as the best solution, government and industry must develop affordable, workable sustainability technologies. While energy, conservation and transport technologies get much airtime, we rarely discuss the role space technology like satellites can play in avoiding climate catastrophe.

An eye on the Arctic
European Space Agency (ESA)’s Mark Drinkwater explains, “Throughout the satellite era, polar scientists pointed to the Arctic as a harbinger of more widespread global impacts of climate change. As the events of 2020 make their marks in the climate record, it’s evident a ‘green’ low-carbon Europe alone is insufficient to combat the effects of climate change.” Our acute understanding of the scale of the problem we face is thanks in part to climate-monitoring satellites. While weather satellites have been in our skies since the Space Race days, observation platforms for environmental monitoring achieved lift-off in the 2000s. These satellites track disasters like wildfires, hurricanes and oil spills, and longer-term shifts in temperature, loss of polar ice and changes in the jet streams. ESA’s Copernicus Program delivers expansive climate monitoring services using their Sentinel satellites. Copernicus recently reported July 2020 as the third warmest July on record and several heatwaves and wildfires in the Arctic Circle.

Multiplying like wildfire
Arctic permafrost contains some of the most carbon-rich soils of any environment, safely frozen for centuries. As it thaws, it releases stores of CO2 and particles, causing an exponential warming cycle, particularly when subjected to wildfires – common in the Arctic as they are elsewhere. Multiplier events like these have scientists worried that politicians’ optimistic predictions are far-fetched. The next wave of Sentinel systems is gearing up as Copernicus 2.0 gains momentum. One candidate mission is the Copernicus Imaging Microwave Radiometer (CIMR) platform, a microwave radiometer giving all-weather, high-resolution estimation of ocean and sea-ice features like temperature and salinity. Another is the CRISTAL mission, measuring sea ice thickness and snow depth with a dual-frequency radar altimeter and microwave radiometer. ESA’s Director for Earth Observation, Josef Aschbacher, explains, “These missions will provide new year-round monitoring throughout the Arctic and CO2 emissions data to support the European Green Deal.”

Early warning of disasters could save lives
The sharp end of environmentally turbulent times is the increasing frequency and severity of natural disasters. In April 2021, ESA awarded a three-year contract to develop a natural disaster early warning system to a consortium including a maritime satellite communications provider, an independent research group and a meteorological tech developer. Aiming to help governments with emergency response, it will use a reliable, mobile satellite network and earth-based internet of things (IoT) infrastructure to alert at-risk populations to danger.
“We aim to demonstrate using a secure satellite communication‑IoT solution to help civil government reduce risks from geohazards like avalanches, debris flow and floods,” said senior researcher Ivan Depina. The project will be trialed in Trøndelag, Norway, a region whose geology and climate make it especially susceptible to natural disasters like landslides and floods.

Sounding the alarm on commodity-driven deforestation
Constant monitoring at satellite scale could prove a powerful defense for vital natural ecosystems like forests and peatland, under threat from irresponsible agriculture, natural disasters and climate change impacts. One satellite sustainability project making a stir in this arena is Ecometrica’s Forests 2020 project, funded by the UK Space Agency’s International Partnerships Programme (IPP). The satellite sustainability reporting service worked with the Ghana Forestry Commission and the Kwame Nkrumah University of Science and Technology to map land use across Ghana and segregate cocoa agriculture from natural forests. The Ecometrica platform, the new map and other satellite-based data like deforestation alerts mean commodity trading companies can have their environmental impact more accurately assessed to help stamp out deforestation.

“The map is the latest in a series of initiatives to enhance sustainability for Ghana’s agricultural commodities, such as cocoa. It aims to end deforestation while promoting forest restoration.”
Paula McGregor, Space Program Manager, Ecometrica

Ecometrica also works on soy, oil palm and tobacco projects, producing data commodity trading companies can use to regulate their activities and inform environmental protection investment. “To make an impact, satellite-derived insights must be delivered in ways decision-makers in government and the private sector understand. Forests 2020 made clear governments and companies can use this kind of information to understand their impacts and climate risks, and take action,” says Ecometrica’s McGregor. “Satellite-based monitoring is an affordable way to monitor ecosystems at scale, guiding priority interventions, investment needed to protect threatened ecosystems and future risks to these regions from climate change.”

NASA leading the charge
[Video: NASA on their climate research and monitoring goals for 2022 and beyond]
In 2021 NASA asked the US government to increase funding for their work, including climate monitoring. The administration had already established a Senior Climate Advisor at NASA to prioritize environmental objectives. A large part of NASA’s new climate work includes the Earth System Observatory, connecting satellite data to produce a “3D, holistic view of Earth, from bedrock to atmosphere,” useful for tracking climate change, improving conservation and disaster mitigation. “I’ve seen first-hand the impact of hurricanes made more intense and destructive by climate change,” explains NASA Administrator Senator Bill Nelson. “[Our] response to climate change matches the magnitude of the threat: A whole of government, all-hands-on-deck approach.”
NASA's new Earth System Observatory will expand that work, providing the world with an unprecedented understanding of Earth's climate system, arming us with next-generation data critical to mitigating climate change, and protecting our communities in the face of natural disasters." Scientists know well the extent of climate change, and national organizations are starting to act. With corporations following suit, we could have the power to avert climate change's worst-case scenario: a genuinely apocalyptic four-degrees-warmer world. With the best tools identifying the options to move forward, we cannot afford to ignore their observations. Our eyes in orbit see the bigger picture unfolding.
<urn:uuid:c746c620-6505-4765-9c05-6c28b3b29c67>
CC-MAIN-2024-38
https://www.kaspersky.com/blog/secure-futures-magazine/climate-change-satellite-data/41083/
2024-09-13T17:58:51Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651535.66/warc/CC-MAIN-20240913165920-20240913195920-00694.warc.gz
en
0.906088
1,488
3.765625
4
BGP is a complex routing protocol used to exchange routing information between autonomous systems. Deploying Anycast using BGP is most common among Internet Service Providers (ISPs), but it can also be used by large enterprise customers needing to interconnect networks across disparate geographical or administrative locations. Anycast BGP with DNS Servers Deploying Anycast BGP on a managed DNS Server turns it into a fully-fledged BGP router in the network, capable of establishing connections with a BGP peer, participating in BGP routing processes, accepting and distributing dynamic routing information through BGP, and so forth. Anycast BGP on a managed DNS Server provides functionality in both IPv4 and IPv6 address families. The DNS Server can communicate with an IPv4 BGP router and exchange IPv4 routing information, or communicate with an IPv6 BGP router and exchange IPv6 routing information. One instance of BGP on the DNS Server can run simultaneously and independently in both IPv4 and IPv6 address families. MD5 authentication with Anycast BGP MD5 authentication requires a case-sensitive alphanumeric password of up to a maximum of 25 characters, with no spaces. The following special characters are permitted: @ - . : _ [ ]. If MD5 authentication passwords are configured incorrectly, the DNS Server won't be able to establish the BGP peering session. BlueCat recommends verifying that the BGP peering session is established after configuring MD5 authentication. Prefix Lists in Anycast BGP Up to four prefix lists can be defined: - one prefix list to filter INPUT IPv4 routing information - one prefix list to filter OUTPUT IPv4 routing information - one prefix list to filter INPUT IPv6 routing information - one prefix list to filter OUTPUT IPv6 routing information These lists are independent of each other; you can define any one of them, several, or all four. Each deployed prefix list is automatically bound to its related BGP peer.
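Because a mistyped MD5 password silently prevents the peering session from coming up, it can help to validate candidate passwords before deployment. The following Python sketch is illustrative only; the function name is ours, not part of any BlueCat API. It simply encodes the constraints stated above (case-sensitive, 1-25 characters, letters, digits, and @ - . : _ [ ], no spaces):

```python
import re

# Permitted characters per the rules above: letters, digits, @ - . : _ [ ]
# (case-sensitive, no spaces, 1-25 characters total).
_ALLOWED = re.compile(r"^[A-Za-z0-9@\-.:_\[\]]{1,25}$")

def is_valid_bgp_md5_password(password: str) -> bool:
    """Return True if the password satisfies the stated constraints."""
    return bool(_ALLOWED.match(password))

if __name__ == "__main__":
    for candidate in ["Peer1:rack[3]", "has space", "x" * 26]:
        print(repr(candidate), "->", is_valid_bgp_md5_password(candidate))
```

Running the sketch prints True for the first candidate and False for the other two, catching the embedded space and the over-length password before they ever reach a peering configuration.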
<urn:uuid:b3a41385-0000-4b2f-a6a0-0e877b39a383>
CC-MAIN-2024-38
https://docs.bluecatnetworks.com/r/Address-Manager-Administration-Guide/Anycast-BGP/9.5.0
2024-09-17T11:17:25Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651773.64/warc/CC-MAIN-20240917104423-20240917134423-00394.warc.gz
en
0.857463
409
2.671875
3
Are you ready to revolutionize the way we power our communities and data centers? Picture a future where electricity isn't just distributed from centralized grids but generated and managed locally. Welcome to the world of microgrids, battery energy storage systems, and electronic isolation and controls. While it is fun to use these buzzwords and speak about the possibilities the future holds, why does this matter? Simply put, resources. Whether it is capital, space, power, water, or talent, we live in a resource-constrained world. As our technology becomes more advanced, its demands for power and cooling will increase. This puts a large strain on our already fully loaded power grids, with the states most at risk being Texas, Michigan, Ohio, New York, and California. Texas is not interconnected to the national grid, which puts it at risk for downtime due to a lack of redundancy. New York and California, on the other hand, are strained due to their large populations and the decommissioning of traditional power plants. Additionally, with an increase in legislation supporting electric vehicles (EVs), the strain on the grid can become too large, especially in extreme weather (both hot and cold), increasing the risk of downtime. Like it or not, we will soon have to deploy supplemental power and storage solutions that are smart and reliable enough to be treated as decentralized grid assets. Let us dive deeper into the realm of microgrids. What is a microgrid? Microgrids represent a paradigm shift in how we think about energy distribution. These localized grids can operate independently or in conjunction with the main grid, offering resilience and flexibility in the face of outages and disruptions. So, what are some of the basic components that we'd expect to see in a microgrid? Renewable energy, most commonly solar (PV), wind, or, in some cases, hydropower. Next, we would expect to see an inverter to convert the energy from the renewables to a usable form for the loads that are connected. After that, a BESS (Battery Energy Storage System), isolation with controls, a fuel cell, and/or a hydrogen electrolyzer. While any one of these components alone could not carry a facility through an outage, when deployed together, the sky is the limit for "islanding" yourself from the utility. These assets could be on a commercial site, outside of a housing community, at a data center, and beyond. These are the building blocks for these locally deployed decentralized grids. Imagine a community powered by its own microgrid, seamlessly integrating renewable energy sources, like solar panels and battery storage systems, into its infrastructure. These technologies not only reduce reliance on fossil fuels but also pave the way for a more sustainable future. Outside of the communities integrating renewables into their energy portfolios, there are mission-critical operators who look to add redundancy to their utility connection and further control their uptime parameters. Mission-critical operations are businesses that cannot suffer an outage even for a second. These customers are mostly data centers, healthcare providers, departments of transportation, utilities, etc. Furthering the point of living in a resource-constrained environment, these providers are seeing that the addition of high-compute applications is driving their energy consumption higher every year.
To combat the risk associated with simply relying on the utility, they deploy uninterruptible power supplies, generators, and, now, renewables and BESS systems to allow them even more flexibility during utility loss. As AI and other high-performance compute practices become the norm in the market, the utilities won't be able to adapt quickly enough. Standard per-rack power density in hyperscale and colocation data centers ranges from 10-20 kW of consumption. And, in the next 3-5 years, market analysis predicts this will shoot to 50-300 kW per rack. While this can increase revenue per square foot tremendously in colocation data halls, it also introduces challenges in cooling and power requirements. Liquid cooling, active rear door heat exchangers, and cold plates are poised to address these challenges on the heat rejection side. However, the power requirements are an entirely different beast to deal with. Enter the need to BYOP (Bring Your Own Power). This is a facility-level strategy of creating and managing your own distribution, generation, and energy asset deployment. This can be accomplished through a variety of solutions. Utilizing DERs (Distributed Energy Resources), a fancy term for the energy-generating and storage assets that comprise a microgrid, facilities can manage peak demand, add layers of redundancy to their systems, and, ultimately, completely island themselves from the grid. While a completely renewable and stand-alone data center is not happening in the next 1-2 years, it is just over the horizon, and it is critical to start having important conversations now, as these systems require large intellectual investment, planning, and capital to get them off the drawing boards and into the real world. While the matters mentioned above mainly concern data center providers, an energy-intensive activity that more and more consumers participate in every day is electric vehicle (EV) charging. Subsequently, never before have we seen parking garages and multifamily home developments requiring the addition of new transformers to support 1,000-amp and above services. Superchargers and 220V standard EV chargers require a large amount of power to charge vehicles quickly. Understandably, this strains the utility provider, especially considering that most charging occurs simultaneously. What this looks like is a large group of EV users who commute to work and charge during the day, and another group of users who charge exclusively at home during the night. As adoption increases, these routinely popular charging times become more and more problematic for utility providers. So, as the US continues to push automakers to electrify their fleets, the grid and surrounding infrastructure cannot keep up with demand. Critical equipment necessary to install these new services has lead times measured in years, while the cost to retrofit existing parking structures to support charging can add up quickly, pricing many providers out of the market. The need for more readily available power is here, and we are just barely knocking on the door of what is possible, as we will need, as a society, to further expand the utilization of already existing technologies. And, as mentioned, a BESS and a PV farm separately will not achieve much, but the value lies in linking them together into a smart controllable system.
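To make "linking them together into a smart controllable system" concrete, here is a deliberately simplified Python sketch of one such control behavior: peak shaving, where a battery discharges whenever facility load exceeds a demand threshold. All numbers and the function name are our own illustrations; a real BESS controller would also model round-trip efficiency, degradation, recharge windows, and the utility tariff.

```python
def peak_shave(load_kw, threshold_kw, batt_kwh, batt_kw, dt_hours=1.0):
    """Toy dispatch: discharge the battery whenever load exceeds the threshold."""
    soc = batt_kwh                      # state of charge; start fully charged
    shaved = []
    for load in load_kw:
        excess = max(0.0, load - threshold_kw)
        discharge = min(excess, batt_kw, soc / dt_hours)  # power & energy limits
        soc -= discharge * dt_hours
        shaved.append(load - discharge)
    return shaved

hourly_load = [120, 150, 310, 420, 380, 200]   # example facility load, kW
print(peak_shave(hourly_load, threshold_kw=300, batt_kwh=250, batt_kw=150))
# Peaks are clamped to 300 kW until the battery runs out of stored energy.
```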
As we continue to be creative in implementing these existing solutions together, we can iterate and create more efficient systems, allowing for more mainstream adoption across the industry. Plain and simple, for most operations these solutions are currently cost prohibitive. However, let's keep in mind a key learning from the ramp-up of the solar industry: utilities and governments are willing to subsidize and incentivize companies that choose to implement these solutions ahead of the curve. Currently, in Utah, Rocky Mountain Power (RMP) is rolling out an incentive program that is either per kWh or a one-time upfront incentive for the installation of a BESS. These are not small sums either, with some programs covering up to 75% of the cost of the BESS. One may ask, what is the angle for RMP? In short, the more DERs that are connected to the grid, the more redundancy is built into the utility framework. In the case of a contingency, these assets can all be controlled as one spinning reserve for RMP. During normal operation, owners can enjoy peak shaving benefits, as well as outage protection. A truly rare "win-win" scenario. As peak demand charges continue to increase, ROI numbers start to make sense on 12- and 24-month timelines. Additionally, RMP is utilizing "Make-Ready" incentives to support the adoption and installation of EV charging. These incentives could cover up to 100% of the cost associated with powering EV chargers in commercial and residential applications. To further this discussion of the future, we can start to think of more abstract solutions such as on-site hydrogen generation using natural gas. We can replace diesel gensets with hydrogen fuel cells, as hydrogen is roughly three times more energy-dense per kilogram than diesel. We are even close to the deployment of small, self-contained nuclear reactors in the 300-500 MW range that can be placed in remote environments and are designed to run for decades with minimal servicing. So, when it comes to reliability and cost savings, all signs point to BYOP. While the adoption of microgrid solutions may currently pose financial challenges, the tide is turning as incentives and awareness grow. Just as the solar industry witnessed exponential growth fueled by supportive policies, the trajectory of microgrids and BESS suggests a similar transformation in the energy landscape. As we stand on the cusp of this paradigm shift, it is necessary to initiate conversations and investments today for a more sustainable and resilient tomorrow. The journey towards decentralized, renewable energy is not merely an option; it's a strategic imperative for businesses and communities alike. If you enjoyed this high-level overview of the current market of microgrids, please join us for part two of this blog series, which will be released the last week of March. We'll do a deep dive on use cases and applications, and we'll expand upon DVL's current product offerings that support this infrastructure and qualify for utility incentives. Additionally, we will provide real-life applications of this equipment. Have a question or comment about this blog? Reach out to blog author Alexander "D'Angelo" D'Angelo, Power Systems Sales Engineer (based out of our Salt Lake City office), at firstname.lastname@example.org.
<urn:uuid:39198438-34f1-4c47-a4a2-fa7d68589983>
CC-MAIN-2024-38
https://www.dvlnet.com/blog/topic/sustainability
2024-09-20T01:23:47Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652073.91/warc/CC-MAIN-20240919230146-20240920020146-00194.warc.gz
en
0.946596
2,008
3.03125
3
Generative AI is an artificial intelligence system that can generate content and data, including images and other media, in seconds using only natural language input. The advent of generative AI has revolutionized various industries by enabling the creation of realistic image, audio, and video content. Unfortunately, these same AI tools introduce advanced risks to establishing and proving identity. In this post, we discuss the risks of generative AI to legacy identity systems and explore how NextgenID's Supervised Remote Identity Proofing (SRIP) solution mitigates these risks. With the increasing sophistication of generative AI algorithms, there is a rising concern surrounding the potential exploitation and misuse of these technologies for malicious intent. This has serious implications for the identity ecosystem, as it allows bad actors to create exceptionally advanced, AI-enhanced credentials, enabling unauthorized access to services, facilities, and systems. Below are some of the risks and implications introduced by AI to the identity ecosystem: 1. Deepfakes: Generative AI can create realistic deepfake images and videos from publicly available content. These deceptive media can be utilized by individuals to gain unauthorized access to someone's identity or to manipulate and misrepresent a person's actions or speech. This not only leads to the creation of counterfeit credentials but also increases the risk of data breaches, fraudulent activities, misinformation campaigns, reputational damage, and other potential harms. 2. Synthetic Identities: Cybercriminals are using generative AI to create fake profiles with synthetic identities that appear real. These fake digital profiles can be used to perpetrate identity theft or fraud. Moreover, because the tools utilize natural language inputs, bad actors with few technical skills can launch sophisticated attacks with minimal effort. 3. Personal Data Leaks: With the increasing capabilities of AI, data mining, and bots, it is becoming easier to discover, infer, alter, or create someone's personal data through a combination of existing and generated content. This potentially exposes sensitive information. 4. Misrepresentation: Generative AI can easily manipulate or misrepresent a person's identity or activities, leading to defamation or miscommunication. NextgenID's SRIP solution incorporates multi-factor enrollment and authentication, which includes a fusion of biometric modalities, government-issued credentials, and third-party validation. Combining biometric and biographic data collection with real-time video interaction by trained agents offers an enhanced verification process. This synergy ensures the individuals undergoing enrollment are indeed who they claim to be, are physically present, and have their identity bound to the biometrics and credentials provided under the watchful supervision of the trained agent. 1. Real-Time Supervision and Constant Monitoring: SRIP uniquely enables real-time supervision during the identity-proofing process, ensuring a live agent monitors the entire transaction. SRIP provides an extra layer of human validation to further dissuade fraudulent activities. The identity-proofing session can be optionally recorded (depending on the requirements of the agency/organization), facilitating swift detection and response to any fraudulent attempts. 2.
Advanced Security Features: SRIP employs tamper-proof technology with advanced detection algorithms and presentation attack defenses to protect against fraudulent identity documents, fake fingerprints, and spoofed facial images or video feeds. These advanced security features ensure that deepfakes or synthetic identities do not bypass the verification process. 3. Secure and Efficient Data Communication: The SRIP solution transfers sensitive data securely, employing mutual authentication and high-grade encryption services that align with industry standards and comply with data protection regulations. This results in a time-efficient solution, allowing users to complete the verification process swiftly while maintaining the highest levels of security and privacy. Moreover, using high-grade encryption, SRIP protects user information, making it exceedingly difficult for cybercriminals to steal or leak personal data. 4. Transparency and Consent: An integral part of NextgenID's SRIP solution is its commitment to transparency and consent. It always seeks user consent before conducting any identity verification, ensuring users know how their data is used and offering them greater control over their information. The advent of generative AI has undeniably created exciting new opportunities, from streamlining business operations to enabling unique new creative journeys in entertainment, education, and more. These advancements are paralleled by emerging threats to personal identities, giving rise to an urgent need for countermeasures. This is where NextgenID's Supervised Remote Identity Proofing (SRIP) solution takes center stage. SRIP is a robust and agile solution that does more than just react to threats; it proactively anticipates them, and we constantly evolve and adapt the solution in response to an ever-shifting digital landscape. Now is the ideal time to leverage NextgenID's state-of-the-art technology to protect identity, creating a more secure world and safer future.
<urn:uuid:141d1ac4-cc2f-403a-a61d-9f254d50db0a>
CC-MAIN-2024-38
https://www.nextgenid.com/blog-risks-of-generative-artificial-intelligence-ai-to-identity-and-benefits-of-ngid-srip-technology.php
2024-09-20T00:07:54Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652073.91/warc/CC-MAIN-20240919230146-20240920020146-00194.warc.gz
en
0.892467
1,064
2.6875
3
Let's start with some quick definitions of terms. A trade school is a specialized educational institution that allows you to pursue a course of technical education related to a specific skill. Common courses of study include mechanical, electrical, automotive, carpentry & plumbing. But trade schools may also include fields as diverse as culinary arts, music production, broadcasting, graphic design, computer programming, fashion design, cosmetology, & filmmaking. In the field of healthcare alone, from nursing to the administration of an ever-growing array of medical technologies, trade skills are in demand nationally. According to The Atlantic, The National Center for Education Statistics cites a rise in trade school enrollment, from 9.6 million students in 1999 to 16 million in 2014. This jump may be attributable in part to a rising emphasis on the technical, mechanical, and engineering skills surrounding computer science and computer programming. Our rising dependence on technology continues to create excellent career opportunities & earning potential for professionals with trade school backgrounds. More schools mean more need for copiers, printers and scanners, and a higher demand for products like PaperCut MF. So why wouldn't trade schools benefit from using PaperCut MF? The answer is they would, and they can. PaperCut MF has already helped the public-school sector, from Little Rock's school system all the way up to higher-education institutions like UAMS. Vocational or trade schools will need to manage and track printing, scanning, and copying just like any other educational facility. Every school has a bottom line; introduce PaperCut MF and show them what it can do for their organization. According to the Consumer Price Index, the cost of college climbed 74.5% between 2000 and 2016, an era notably intersected both by a crippling economic recession and a rise in trade school enrollments. Today, Americans collectively owe more than $1.5 trillion in student loan debt. And with an estimated 40% of college graduates enduring sustained underemployment in the so-called "gig economy," largely working in roles that don't require a college education (think Uber, Starbucks and GrubHub), the ROI for a four-year bachelor's degree is earning new scrutiny. If trade schools have filled in the college enrollment slump, then that means they have filled in the potential demand for more copiers & printers, which is a HUGE opportunity for more PaperCut MF licenses.
<urn:uuid:92a966df-48f4-491e-b70d-f70438956893>
CC-MAIN-2024-38
https://acd-inc.com/blog/america-is-in-the-middle-of-a-serious-trade-school-shift-want-to-learn-how-to-benefit-from-it/
2024-09-07T20:01:38Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650920.0/warc/CC-MAIN-20240907193650-20240907223650-00394.warc.gz
en
0.951394
491
2.640625
3
To provide a reliable connection between electronic devices, choosing a proper shielded twisted pair cable is essential for any network built on copper cabling. EMI (electromagnetic interference) is a disturbance that affects twisted pair cables. It degrades the performance of an electrical circuit by electromagnetic induction, electrostatic coupling, or conduction. But with the help of cable shielding, cables can resist the disturbance and keep a stable connection. This article presents some basics of cable shielding. We hope you find it useful. Before getting to know cable shielding, you may wonder about the real difference between shielded twisted pair (STP) and unshielded twisted pair (UTP). As their names suggest, STP has a shield that works as a guard and drains induced current surges to earth, whereas UTP has no shield with such a function. The shortcoming of STP cables is the extra shielding cost added to an installation. Typically, STP cables are more expensive than UTP cables. And due to the stiffer and heavier shielding, handling STP cables is more difficult. But if you pursue higher performance, STP will be the preferable choice. There are mainly two types of shields: the braided shield and the foil shield. A braided shield is made up of a woven mesh of bare or tinned copper wires. It has better conductivity than aluminum and more bulk for conducting noise. The braid can easily be attached to connectors by crimping or soldering. However, a braided shield does not provide 100% coverage. It usually provides 70% to 95% coverage, depending on the tightness of the weave. In practice, 70% coverage is usually sufficient for fixed cable runs. The other type is the foil shield. This type of shielding uses a thin layer of aluminum and provides 100% coverage around the conductors. The drawback is that its conductivity is lower than that of a copper braid. Today, acronyms are used to name different shielding constructions. Take U/FTP as an example: the first letter "U" represents the outer or overall shield of the cable, and the following letter "F" represents the individual shield, under the overall shield, around each twisted pair or quad. Here are some commonly used shielding constructions: U/FTP is the typical individual shielding using aluminum foil. This kind of construction has one shield for each twisted pair or quad, above the conductor and insulation. An individual shield especially protects neighboring pairs from crosstalk. F/UTP, S/UTP, and SF/UTP are overall shielding with different shield materials. An overall shield refers to the entire coverage around the whole cable. This type of shielding helps prevent EMI from entering or exiting the cable. F/FTP, S/FTP, and SF/FTP combine individual and overall shields. This type of construction has both layers of shielding, and its immunity to EMI disturbance is greatly improved. Meanings of the abbreviated letters: U = unshielded F = foil shielding S = braided shielding TP = twisted pair As for application in 10GBASE-T Ethernet, UTP, U/FTP, F/UTP, F/FTP and S/FTP are often used, but their applicable cable categories vary from Cat 6/6a to Cat 7/7a. When twisted pair cable is deployed for 40GBASE-T Ethernet, U/FTP, F/UTP, F/FTP, and S/FTP are used with Cat 8/8.1/8.2. Adopting shielded twisted pair cable is an effective method to prevent EMI from interfering with signal transmission, and there are different shielding constructions to choose from.
Of course, using twisted pair cable without shielding is also feasible if your budget is limited. We hope you find the most suitable shielded twisted pair cable for your project!
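The naming convention above is regular enough to decode mechanically. The following Python sketch is purely illustrative (it is not part of any cabling standard or vendor tool); it splits a designation such as U/FTP into its overall-shield and pair-shield components using the letter meanings listed earlier:

```python
# Decode shielding designations like "U/FTP" or "SF/UTP" per the article:
# the part before "/" is the overall shield; the letter(s) before "TP"
# describe the shield around each individual twisted pair.
SHIELD_NAMES = {"U": "unshielded", "F": "foil shield", "S": "braided shield",
                "SF": "braid plus foil shield"}

def decode_shielding(code: str) -> dict:
    overall, pairs = code.upper().split("/")
    if not pairs.endswith("TP"):
        raise ValueError(f"expected a twisted-pair code, got {code!r}")
    pair_shield = pairs[:-2]            # e.g. "F" in "FTP", "U" in "UTP"
    return {"overall shield": SHIELD_NAMES[overall],
            "pair shield": SHIELD_NAMES[pair_shield]}

for code in ["U/FTP", "F/UTP", "SF/UTP", "S/FTP", "F/FTP"]:
    print(code, "->", decode_shielding(code))
```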
<urn:uuid:133a04f6-f9a7-4ef2-b923-e6ea26d778d2>
CC-MAIN-2024-38
https://www.chinacablesbuy.com/tag/stp
2024-09-09T00:55:41Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651053.52/warc/CC-MAIN-20240909004517-20240909034517-00294.warc.gz
en
0.92968
815
3.1875
3
They say good things come in small packages, and in the nanotechnology field, this holds especially true. I find it fascinating that mankind not only has the ability to view something as small as a single atom, but also understands that within that atom, there are things even more minuscule to learn about. After all, the entire universe is made up of these tiny building blocks – our food, our bodies, our clothes and more – and thanks to nanotechnology, we are learning more about atoms every day. So, what exactly is nanotechnology? Nanotechnology is typically defined as "the branch of technology that deals with dimensions and tolerances of less than 100 nanometers, especially the manipulation of individual atoms and molecules." Nanotechnology didn't really become a defined field until the 1950s – more precisely, 1959, when Richard Feynman of the California Institute of Technology gave what many consider to be the first lecture on technology at the atomic scale, "There's Plenty of Room at the Bottom." However, it wasn't until 1974 that the term "nanotechnology" first appeared, and the field didn't truly take off until the 1980s. It was then that we saw developments like the atomic force microscope and precise atom manipulation, famously demonstrated when IBM researchers spelled out the company's logo in individual atoms. (You can check out a full historical timeline here). Despite its very short history as a defined field, nanotechnology has existed for centuries. Have you ever looked at a stained glass window? Brightly colored medieval stained glass windows use gold and silver particles of varying sizes to create their colors, but the artists had no idea their process actually changed the composition of the materials they were working with! Stained glass windows are a prime example of early nanotechnology at its finest (and prettiest!). Fast-forward to more modern times and you'll find that nanotechnology has contributed to some of the most life-altering scientific and technological advances of the last 60 years – not to mention a slew of pretty cool consumer products. Some highlights include: - Wrinkle- and stain-resistant clothing - Golf balls that fly straighter - Scratch-resistant glass coatings - And of course, for any tech aficionado, improved television, cellphone and digital camera displays Aside from everyday innovation, what I find really dazzling about the field of nanotechnology are the strides it's made in the medical field. Thanks to the ability to compact technology to the nanoscale, researchers can now create tiny mechanical devices that travel to targeted locations within the human body and perform specific functions, all while being controlled from the outside. This is incredibly important for cancer patients who, while undergoing chemotherapy, can suffer severe side effects when the toxic chemicals affect healthy cells as well as cancerous ones. Nanotechnology allows scientists to target specific areas and deliver the toxins directly to the cancerous cells only. Nanotechnology is, in my humble opinion, one of the coolest sciences out there, not to mention a field with incredible influence over how our society will continue to evolve. If you're interested in learning more, this YouTube video really blew my mind. The purpose of this blog is to answer the questions you ask! To learn more about Nico and the rest of the MyITpros staff, check out our team page! If you're interested in learning more about MyITpros and managed services (or hey, just want to talk about nanotechnology), contact us today!
<urn:uuid:afd5cae7-2f78-41f0-8abc-c365ab7947ce>
CC-MAIN-2024-38
https://integrisit.com/nanotechnology-small-science-big-impact/
2024-09-10T07:55:07Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651224.59/warc/CC-MAIN-20240910061537-20240910091537-00194.warc.gz
en
0.948521
711
3.34375
3
Ransomware and malware are tools used for a myriad of purposes by cybercriminals, with devastating results. That's a big reason why ransomware and malware are the go-to moves of nation-state cybercriminals. Unfortunately, malware and ransomware can evolve to strike quickly, as illustrated by the data wipers targeting Ukrainian computers as a component of Russia's invasion. Experts have been warning for years that ransomware and malware can easily be wielded as weapons of war. Right now, many of those experts fear that Russia-aligned threat actors are pointing that weapon toward industrial targets and critical infrastructure in Ukraine and other nations that support it. Digital Weapons of War Are Here Experts around the world have asserted for years that modern wars will carry a heavy component of cyberattack and hacking activity, and they were right. Nation-state threat actors are targeting infrastructure components using malware and ransomware in the Russia/Ukraine conflict. CISA cautions that attacks and damage from the cyberwar component of this conflict may spread beyond Ukraine, saying in an advisory: "Russia's unprovoked attack on Ukraine, which has involved cyber-attacks on Ukrainian government and critical infrastructure organizations, may impact organizations both within and beyond the region." Microsoft, which has announced its corporate support of Ukraine after Russia's unprovoked invasion, has stepped up to offer assistance in guarding against cyberattacks on Ukraine's first responders and infrastructure. The company disclosed that it had discovered a new malware package at work in Ukraine, likely dispatched by Russian threat actors. The malware, dubbed FoxBlade, specifically targeted key infrastructure points and was discovered on February 24 by Microsoft's Threat Intelligence Center (MSTIC). Microsoft asserts that it immediately made the Ukrainian government aware of the situation, providing technical advice on steps to prevent the malware's success. Ukraine is no stranger to Russian hacking impacting its critical systems and infrastructure. In 2015, suspected Russia-aligned hackers temporarily cut off the power in parts of Ukraine, then did it again to Kyiv in December 2016. Russian hackers were also behind the notorious NotPetya malware that was originally dispatched in an attempt to knock out government and infrastructure targets in Ukraine in 2017 before spreading widely throughout the world. Ukrainian officials and operators of potentially targeted infrastructure are well aware of the danger of further attacks, resulting in a higher level of preparedness than Moscow may have been counting on. Ukraine is the second most cyberattacked country in the world (the US is #1) and recently became a member of NATO's malware information-sharing network. Industrial & Infrastructure Ransomware Are Growing Last year's major incidents at Colonial Pipeline and JBS served as notices that cyberattacks can do major damage to a country's infrastructure and essential manufacturing operations. Those examples also struck fear into governments around the world, who grew deeply concerned about protecting their essentials from cyberattacks in both times of peace and times of war.
The impact of ransomware and malware attacks like those rippled far into the mainstream, drawing additional awareness of the need for industrial and infrastructure targets to maintain strong security. Critical infrastructure is firmly in cybercriminals' sights. A report from Claroty shows that a whopping 80% of critical infrastructure organizations experienced a ransomware attack in the last year. Of the 80% of respondents who experienced a ransomware attack, 47% reported an impact to their industrial control system (ICS) environment. That may not seem like a big deal at first, but critical infrastructure operators losing control of their ICS is potentially catastrophic. It also makes ransomware an even more powerful weapon for nation-state threat actors. More Information Gives APTs Better Chances for Success Bad actors will only get better at hijacking operational technology. Stealing information about operational technology (OT) and industrial controls will help them architect ransomware attacks that are even more effective, and they're getting their hands on that data at an alarming rate. In a study on the dangers that cyberattacks like ransomware could pose to operational technology, Mandiant analysts discovered that one in seven attacks exposed sensitive information about operational technology. Out of 3,000 data leaks originating from ransomware attacks, the study identified at least 1,300 exposures from critical infrastructure and industrial production organizations that use OT. Advanced Persistent Threat (APT) groups could seriously benefit from that information. The information that researchers found exposed in dark web data dumps from OT information snatching includes usernames and passwords, IP addresses, remote services, asset tags, original equipment manufacturer (OEM) information, operator panels, and network diagrams: exactly the kind of data that APTs need to plan effective cyberattacks on industrial and critical infrastructure targets. Manufacturing Was the Top Industry Attacked in 2021 IBM's X-Force Threat Intelligence Index 2021 drilled deeper into the industrial and infrastructure cybersecurity space to determine which industries came under siege the most in 2021. Their researchers determined that the manufacturing sector replaced financial services as the top attacked industry in 2021, victimized in 23.2% of the attacks X-Force remediated last year. Of course, just like everyone else, those sectors faced ransomware threats more than any other kind. Ransomware was the top attack type, accounting for 23% of attacks on manufacturing companies.
OT Industries Targeted, 2021
Industry | % of Total
Manufacturing | 61%
Oil & Gas | 11%
Transportation | 10%
Utilities | 10%
Mining | 7%
Heavy & Civil Engineering | 1%
Operational technology was the root of much of the trouble. More than 60% of incidents at OT-connected organizations last year were in the manufacturing industry. In addition, 36% of attacks on OT-connected organizations were ransomware. Overall, analysts determined that for all industries with OT networks that they'd observed in 2021, including operations in engineering, mining, utilities, oil and gas, transportation and manufacturing, ransomware was the primary attack type by a large margin, the vehicle for 36% of all attacks on the sector.
Attack Types on OT, 2021
Attack Type | % of Total
Ransomware | 36%
Server access | 18%
DDoS | 11%
Credential harvesting | 9%
Insider | 9%
RAT | 9%
Botnet | 4%
Webshell | 2%
Worm | 2%
Cybercrime is a Business Too Why is attacking industrial targets in fashion for cybercriminals right now? IBM speculates that it is because cybercriminals know that manufacturers and similar organizations have a very low tolerance for downtime, meaning they're more likely to pay. They're right – more than 60% of industrial organizations that were hit by ransomware last year paid the ransom, which for more than half of the impacted companies ran to $500,000 or more. In a breakdown of ransom amounts, researchers determined that 45% of industrial victims faced a ransom in the $500,000 to $5,000,000 range, and 48% were hit with a ransom demand below $500,000. But for about 7% of impacted organizations, the cybercriminals aimed high, and those companies were looking at a ransom in excess of $5,000,000. Unfortunately, the organizations that were hit by ransomware were faced with a complex decision. Many of them did the math and found paying the extortionists more affordable than the shutdown that a recovery might require – the majority of industrial and manufacturing targets estimated their organization's loss in revenue per hour of downtime to be equal to or greater than the amount the bad guys were demanding. Between the high chance of scoring a big payout and the damage that can be done in a nation-state capacity, organizations in the critical infrastructure and manufacturing sectors need to devote significant resources to improving their defenses to withstand the tide of trouble. Stop Ransomware from Hitting Your Organization by Eliminating Its Most Likely Path to Your Door: Phishing Messages Stop phishing with Graphus – the simplest, most automated & affordable phishing defense available. TrustGraph is the star of the show, keeping potentially dangerous emails away from staffers. - Your first layer of defense against phishing, TrustGraph uses more than 50 separate data points to analyze incoming messages completely before allowing them to pass into employee inboxes. - Machine learning enables the TrustGraph AI to learn from each analysis it completes, adding that information to its knowledge base to continually refine your protection and spot new threats without human intervention. Graphus makes it easy for users to report suspicious messages and get help in case of trouble. - EmployeeShield adds a bright, noticeable box to messages that could be dangerous, empowering staffers to report that message with one click for administrator inspection. - Phish911 makes it a snap for users to report any suspicious message that they receive. When an employee reports a problem, the email in question isn't just removed from that employee's inbox; it is removed from everyone's inbox and automatically quarantined for administrator review.
<urn:uuid:7c90f4ca-5be9-483f-aff7-b6f95da382e3>
CC-MAIN-2024-38
https://www.graphus.ai/blog/cyberattacks-on-infrastructure-industrial-targets-are-a-weapon-of-war-for-russia/
2024-09-10T07:51:05Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651224.59/warc/CC-MAIN-20240910061537-20240910091537-00194.warc.gz
en
0.944266
1,987
2.578125
3
Mobile money: an industry primer Mobile money is driving payments and money transfers across the developing world, bringing financial inclusion to billions of new customers. How does it differ from mobile banking, and what are its limitations? Mobile money is a digital wallet allowing users to send and receive money with a secure account linked to their mobile phone number. Available to both prepaid and postpaid subscribers, it is an easy-to-use, secure alternative to cash, providing instantaneous payments and peer-to-peer transfers, without going through middlemen or banks. The service is widely popular in Africa, Asia and Central & South America, where there are large unbanked populations and bank branches tend to be few and far-between outside of major cities. Different from mobile banking, which involves accessing banking services via a mobile app, mobile money allows customers to store and manage money in a mobile account, send money to other users, pay bills, and make purchases, including calls, texts and data. Users can also withdraw cash without needing an ATM and deposit money via a network of mobile money agents, who service their local communities. In fact, mobile money has spawned a whole new generation of entrepreneurs setting up their own micro-businesses for this very purpose. Usually offered by a mobile service provider, mobile money services are compatible with basic phones and smartphones without the need for an internet connection or additional apps. Every transaction requires entry of a secure PIN, and the service provider or agent must verify the user’s identity. Mobile money began in Kenya in 2007 with the launch of M-PESA, developed by Safaricom in partnership with Vodafone Group. The latter received funding from the UK Department for International Development’s Financial Deepening Challenge Fund, established to encourage private sector projects for generating economic growth and reducing poverty in developing economies. The GSMA Mobile Money programme defines mobile money services based on a few key criteria, including: - The service must be readily available to “unbanked” people – those without access to services from a typical bank or other financial institution. As of 2020, the unbanked population is estimated to be some two billion people worldwide; though the majority of these live in the developing world, there remains an estimated seven million unbanked in the US and over a million in the UK. - The service must offer alternatives to ATMs and bank branches in the form of agents, who can onboard new customers, and process physical withdrawals and deposits. The GSMA Mobile Money dashboard reports 9.12 million agents worldwide as of 2020. - Mobile banking services as another channel in a traditional banking product are not considered to be part of the mobile money ecosystem – this includes mobile payment apps such as Venmo and Cash App, and digital wallet services connected to bank accounts such as Google Pay, Apple Pay and Samsung Pay. The number of registered mobile money accounts grew by 13% in 2020 to reach a total of more than 1.2 billion users worldwide, according to the GSMA’s annual State of the Industry Report on Mobile Money. The report also identifies over 300 providers of mobile money services. Meanwhile, the value of daily mobile money transactions has increased to $2 billion, fuelled by international remittances, despite the overall drop in such payments through other means. 
The increase in transactions has no doubt been driven by the COVID-19 pandemic, with lockdown closures restricting access to cash and in-person banking facilities. More than just enabling customers to make easy payments, mobile money services are increasingly providing access to more complex financial products too; Kilimo Salama (or "Safe Agriculture") is an insurance scheme for farmers in Kenya to protect themselves against excessive rain or drought. In exchange for a 5% premium on their agricultural purchases made through mobile money, farmers can obtain coverage from insurance provider UAP Old Mutual. However, mobile money hasn't been a success in every market. Vodafone launched M-PESA in Romania in 2014, only to withdraw the service three years later. Despite being one of the world's most unbanked countries – with almost 42% of citizens going without an account – uptake of the service remained low, owing to the country's cash-oriented culture, which sees 70% of transactions still made in cash, with even those who have bank accounts opting to withdraw their entire monthly salary to pay for all goods and services in cash. M-PESA operations in India and Albania were also closed down. Even where mobile money is successful, customers can be confronted with liquidity or access issues from agents, who are often little more than a single person running a kiosk or operating out of another business, and who can face the same difficulty accessing financial services in isolated rural areas as their customers. These agents have also found themselves targeted by criminals, with a spate of robberies and killings in Uganda highlighting their vulnerability. Nevertheless, developing economies and the private sector can make immeasurable improvements to people's lives and their own businesses by widening access to the financial system. Each market has its own challenges, but with the world increasingly turning away from cash payments and transactions, mobile money ensures that digital financial services are still accessible to those outside of the traditional banking system, while lowering the costs of handling cash for small businesses and enterprises.
<urn:uuid:c4e4298e-c7a9-41f1-b579-33c3e3b2db63>
CC-MAIN-2024-38
https://www.cerillion.com/blog/mobile-money-an-industry-primer/
2024-09-11T13:37:35Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651387.63/warc/CC-MAIN-20240911120037-20240911150037-00094.warc.gz
en
0.953546
1,084
2.546875
3
Figuring out how to pedal a bike and memorizing the rules of chess require two different types of learning, and now for the first time, researchers have been able to distinguish each type of learning by the brain-wave patterns it produces. These distinct neural signatures could guide scientists as they study the underlying neurobiology of how we both learn motor skills and work through complex cognitive tasks, says Earl K. Miller, the Picower Professor of Neuroscience at the Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences, and senior author of a paper describing the findings in the Oct. 11 edition of Neuron. When neurons fire, they produce electrical signals that combine to form brain waves that oscillate at different frequencies. “Our ultimate goal is to help people with learning and memory deficits,” notes Miller. “We might find a way to stimulate the human brain or optimize training techniques to mitigate those deficits.” The neural signatures could help identify changes in learning strategies that occur in diseases such as Alzheimer’s, with an eye to diagnosing these diseases earlier or enhancing certain types of learning to help patients cope with the disorder, says Roman F. Loonis, a graduate student in the Miller Lab and first author of the paper. Picower Institute research scientist Scott L. Brincat and former MIT postdoc Evan G. Antzoulatos, now at the University of California at Davis, are co-authors. Explicit versus implicit learning Scientists used to think all learning was the same, Miller explains, until they learned about patients such as the famous Henry Molaison or “H.M.,” who developed severe amnesia in 1953 after having part of his brain removed in an operation to control his epileptic seizures. Molaison couldn’t remember eating breakfast a few minutes after the meal, but he was able to learn and retain motor skills that he learned, such as tracing objects like a five-pointed star in a mirror. “H.M. and other amnesiacs got better at these skills over time, even though they had no memory of doing these things before,” Miller says. The divide revealed that the brain engages in two types of learning and memory — explicit and implicit. Explicit learning “is learning that you have conscious awareness of, when you think about what you’re learning and you can articulate what you’ve learned, like memorizing a long passage in a book or learning the steps of a complex game like chess,” Miller explains. “Implicit learning is the opposite. You might call it motor skill learning or muscle memory, the kind of learning that you don’t have conscious access to, like learning to ride a bike or to juggle,” he adds. “By doing it you get better and better at it, but you can’t really articulate what you’re learning.” Many tasks, like learning to play a new piece of music, require both kinds of learning, he notes. Brain waves from earlier studies When the MIT researchers studied the behavior of animals learning different tasks, they found signs that different tasks might require either explicit or implicit learning. In tasks that required comparing and matching two things, for instance, the animals appeared to use both correct and incorrect answers to improve their next matches, indicating an explicit form of learning. But in a task where the animals learned to move their gaze one direction or another in response to different visual patterns, they only improved their performance in response to correct answers, suggesting implicit learning. 
What's more, the researchers found, these different types of behavior are accompanied by different patterns of brain waves. During explicit learning tasks, there was an increase in alpha2-beta brain waves (oscillating at 10-30 hertz) following a correct choice, and an increase in delta-theta waves (3-7 hertz) after an incorrect choice. The alpha2-beta waves increased with learning during explicit tasks, then decreased as learning progressed. The researchers also saw signs of a neural spike in activity that occurs in response to behavioral errors, called event-related negativity, only in the tasks that were thought to require explicit learning. The increase in alpha2-beta brain waves during explicit learning "could reflect the building of a model of the task," Miller explains. "And then after the animal learns the task, the alpha-beta rhythms then drop off, because the model is already built." By contrast, delta-theta rhythms only increased with correct answers during an implicit learning task, and they decreased as learning progressed. Miller says this pattern could reflect neural "rewiring" that encodes the motor skill during learning. "This showed us that there are different mechanisms at play during explicit versus implicit learning," he notes. Future Boost to Learning Loonis says the brain wave signatures might be especially useful in shaping how we teach or train a person as they learn a specific task. "If we can detect the kind of learning that's going on, then we may be able to enhance or provide better feedback for that individual," he says. "For instance, if they are using implicit learning more, that means they're more likely relying on positive feedback, and we could modify their learning to take advantage of that." The neural signatures could also help detect disorders such as Alzheimer's disease at an earlier stage, Loonis says. "In Alzheimer's, a kind of explicit fact learning disappears with dementia, and there can be a reversion to a different kind of implicit learning," he explains. "Because the one learning system is down, you have to rely on another one." Earlier studies have shown that certain parts of the brain such as the hippocampus are more closely related to explicit learning, while areas such as the basal ganglia are more involved in implicit learning. But Miller says that the brain wave study indicates "a lot of overlap in these two systems. They share a lot of the same neural networks." - Roman F. Loonis, Scott L. Brincat, Evan G. Antzoulatos, Earl K. Miller. A Meta-Analysis Suggests Different Neural Correlates for Implicit and Explicit Learning. Neuron, October 2017. DOI: 10.1016/j.neuron.2017.09.032
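As an illustrative aside on the frequency bands reported above: alpha2-beta (10-30 hertz) and delta-theta (3-7 hertz) rhythms are typically quantified by band-pass filtering a recorded signal and measuring the remaining power. The Python sketch below shows the general idea; the sampling rate, filter order, and demo signal are our assumptions, not details from the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250  # assumed sampling rate in Hz

def band_power(signal, low_hz, high_hz, fs=FS, order=4):
    """Band-pass filter the signal and return its mean power in that band."""
    b, a = butter(order, [low_hz, high_hz], btype="band", fs=fs)
    return float(np.mean(filtfilt(b, a, signal) ** 2))

# Demo signal: a 5 Hz component (delta-theta range) plus a weaker 20 Hz
# component (alpha2-beta range), sampled for two seconds.
t = np.arange(0, 2, 1 / FS)
demo = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)

print("delta-theta (3-7 Hz) power:  ", band_power(demo, 3, 7))
print("alpha2-beta (10-30 Hz) power:", band_power(demo, 10, 30))
```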
<urn:uuid:ff5f871d-6f93-45ea-bf0d-93e556e38f6e>
CC-MAIN-2024-38
https://debuglies.com/2017/10/14/brain-waves-reflect-different-types-of-learning/
2024-09-13T21:44:09Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651540.48/warc/CC-MAIN-20240913201909-20240913231909-00794.warc.gz
en
0.95318
1,355
3.203125
3
The Internet of Things (IoT) is everywhere. These convenient devices are in our homes and offices as well as in our pockets. Along with the convenience they provide, there are some security risks associated with using these devices. There have been a number of known security breaches reported in the news regarding this topic, and those breaches included massive distributed denial-of-service (DDoS) attacks and botnet hijacking attacks, which have caused major disruption to organizations. What is potentially affected? All those devices that communicate and can be accessed via the Internet based upon their IP addresses. That would include traditional office equipment such as copiers, printers, video projectors, and even televisions in reception areas. Some of the less obvious devices would be climate control, motion detection and security lighting systems, which, when equipped with remote access, can be controlled over the Internet. And don't forget the smartphones and smartwatches – these personal devices play a role in a company's security. These devices create access points, and the best way to be secure is to define a policy that puts protections into place. Many IoT devices are produced with very basic software, which often can't be updated. As people become more aware of the risk, some IoT devices are being brought up to current security standards with periodic firmware updates. It's a good start, but the majority of internet-ready devices cannot be integrated into the conventional IT hardware or software protections with which companies defend themselves against internet-based attacks. The variety of new internet-ready devices brings a mass of new data traffic to the network that must be managed and secured by IT departments. But it's complicated by the variety of network protocols used by all of these device types. These devices are being used for personal and business purposes, and sometimes the lines of use will cross. The integration of personal devices poses a security risk simply because more and more attacks on companies are started against individual employees. As an example, if a device is infected with malware or a virus, it can be used to gain a foothold and then wreak havoc when it connects to the company's network. The tricky part is defining who should be responsible for IoT security; however, it is an important step. The first consideration you need to make is whether or not connecting a particular device will be a large enough benefit to be worth the inherent risks. Depending on the device, an IoT device could be used to spy on you, steal your data, and track your whereabouts. If the device in question directly offers you a helpful, worthwhile utility, it may be worth the risk. If the connected device serves little purpose beyond its novelty, or its purpose could just as easily be managed by a staff member, it is probably best to leave it disconnected. By taking inventory, you establish a baseline of all the devices that will connect to the Internet. An organization should evaluate every single device that is added to the network. Desktops, laptops and servers are generally tested extensively, but mobile devices should also be added to the list. Oftentimes devices are ignored even though they actively communicate over the network, and strict attention should be given to those devices that send data. It's very important to set guidelines for the use of IoT devices.
Be sure to define which devices are permitted on the company network and what data exchange with the network or Internet is desired. The proper security technology will prevent unwanted traffic. IoT introduces additional complexity for security. Organizations are advised to monitor the data traffic to and from IoT devices in their network. Perimeter-based solutions are not adequate in today's IT environment because users and apps can no longer be contained inside an organization's network, behind a clearly defined protective wall. Organizations need to evaluate new security concepts that have already proven reliable as workplace tools for mobile employees and remote offices. For example, a protective shield from the cloud can scan all incoming and outgoing data traffic for malicious code, regardless of the device used. With cloud solutions, organizations gain control of all internet-based traffic and can actively manage which communications are permitted or should be blocked. This can include preventing the printer from automatically ordering toner and restricting all other devices to a minimum amount of communication on the web. You should also make sure that the environment in which you are using an IoT device is as secure as possible. Making sure that your firmware is updated will ensure that you have the latest security patches and fixes for the various exploits and vulnerabilities that IoT devices may present. If possible, this process should be automated so that your IoT devices, as well as your router, are fully updated. It may also be a good idea to check whether your router supports guest networking. With guest networking, you can keep potentially risky IoT devices off of your main business network, protecting its contents. Organizations should always make sure that passwords are in line with best practices, and that passwords are not reused between devices and accounts. Following these guidelines means that even if one account is compromised, the rest of your accounts are safe behind a different set of credentials. Ultimately, the best way to keep your organization safe from IoT issues is to establish rules regarding the use of these devices and monitor their permissions. Extending the consideration of whether or not a device needs to be connected, you need to establish if it even needs to be in the office. After all, a smartwatch can offer some business utility, whereas a smart trash can (which does in fact exist) does not. Monitoring your organization's network can help you identify if any unapproved devices have made a connection.
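To make that last point concrete, here is a minimal Python sketch of one way to spot unapproved devices: run a ping sweep of the office subnet and compare the responding hosts against an approved inventory. This is our own illustration, not a specific product's feature; it assumes the nmap command-line tool is installed, and the subnet and inventory values are placeholders you would replace with your own.

```python
import re
import subprocess

APPROVED = {"192.168.1.1", "192.168.1.10", "192.168.1.20"}  # example inventory

def discover_hosts(subnet: str) -> set:
    """Ping-sweep the subnet with `nmap -sn` and return the responding IPs."""
    result = subprocess.run(["nmap", "-sn", subnet],
                            capture_output=True, text=True, check=True)
    return set(re.findall(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", result.stdout))

if __name__ == "__main__":
    unapproved = discover_hosts("192.168.1.0/24") - APPROVED
    for ip in sorted(unapproved):
        print("Unapproved device responding at", ip)
```

Run on a schedule, a check like this turns the inventory you established earlier into a living baseline rather than a one-time audit.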
Despite the growing popularity of the "green economy" and other ecologically friendly forms of production in industrialised countries, oil remains the backbone of the global economy, and Russia has a prominent role in global energy markets. It is one of the top three crude producers in the world, competing with Saudi Arabia and the United States for first place. Russia is heavily reliant on oil and natural gas income, which accounted for 45 per cent of its federal budget in 2021. Russian crude and condensate output peaked at 10.5 million barrels per day (bpd) in 2021, accounting for 14% of global supply. It has oil and gas production facilities across the country, but the majority are in western and eastern Siberia. Russia exported an estimated 4.7 million bpd of petroleum worldwide in 2021, selling a significant amount of crude to European consumers (2.4 million bpd) and accounting for 25% of all EU oil imports as of October 2021. In recent weeks, the policies and conditions that led to this dependency have come under close scrutiny.

What made Europe so reliant on Russian gas?

Reducing Europe's reliance on Russian gas has been a hot topic in Brussels for the past 15 years. Despite warnings from a January 2009 incident that disrupted Russian gas transit via Ukraine for two weeks, and from Russia's invasion of Crimea in 2014, Europe has continued to buy Russian natural gas. In fact, Russian pipeline gas deliveries to Europe surged after 2014, and Europe also began importing Russian LNG from the Yamal LNG project, which started up in 2017. There are three main reasons for Europe's continued reliance on Russian gas.

After 2010, Europe's own natural gas production began to fall dramatically. In the 2000s, between 50 and 60 per cent of Europe's demand was met by European supply, with the Netherlands, Norway, and the United Kingdom as the top three producers. Over the last 10 to 15 years, however, UK natural gas output has declined, while earthquakes linked to gas production in the Netherlands hastened the loss of output from the Groningen field, once Europe's largest.

Alternative sources of supply are insufficient to bridge the rising import gap. Europe's net imports have climbed by more than 80 billion cubic metres since 2016 as production has fallen. While Europe raised its LNG regasification capacity to roughly 250 billion cubic metres per year by 2020 (a 40% increase from 2010), LNG imports have not been enough to meet growing demand. Furthermore, Europe serves as the balancing market for global LNG supply, absorbing excess cargoes when the market is oversupplied and losing them when the market is tight, and it competes with Asia for LNG purchases. Imports from North Africa (Algeria and Libya) have declined, while deliveries from Azerbaijan were modest in 2021, at 15 billion cubic metres.

Oil and Gas – Patent Analysis

RU Patents (Russian Patents)

Russian patent trends have been dynamic over the past several years. Patent filings fell during the Great Recession of 2008, both for oil and gas patents and for patents overall. Both then experienced a period of rapid intellectual property acquisition that lasted until 2012. Filings are expected to decline further as the world moves toward cleaner and more environmentally friendly energy sources. Russia holds about 5,893 oil and gas patents, of which 35% are held by the top 10 players.
Russia's largest gas producers are Gazprom and Novatek, though numerous Russian oil corporations, including Rosneft, also produce gas. State-owned Gazprom is the largest producer, but its share of output has dropped in recent years as Novatek and Rosneft have increased capacity; in 2021 Gazprom still accounted for 68 per cent of Russian gas output. Production was traditionally concentrated in West Siberia, but over the last decade investment has shifted to Yamal, Eastern Siberia, the Far East, and the offshore Arctic. The availability of highly skilled employees, the overall high quality and intensity of research and development (R&D), and corporate policies of placing technological centres in major cities all contribute to the concentration of patenting activity in metropolitan locations.

EP Patents (European Patents)

The European Union holds only 4,667 patents in the oil and gas domain, with 29% held by the top 10 players. The patent trend has little room to rise given prevailing geographical conditions. The volume of oil output in the European Union has decreased dramatically since the turn of the century: production peaked at 168 million metric tonnes in 2000 and had dropped to 19 million metric tonnes by 2020. The United Kingdom was once a major oil producer in the region, but its reserves have shrunk by nearly half since 1995.

Recent Developments in the wake of the Russian invasion of Ukraine

In response to Russia's invasion of Ukraine in 2022, the United States and the European Union implemented harsh sanctions intended to cripple Russia's economy. These ambitious moves carry potentially nasty consequences: Russia is not just one of the world's major energy exporters but also Europe's main source of these fuels. The International Energy Agency has proposed several measures to help the EU reduce its dependency on Russian natural gas. The recommendations include, among other things, signing no new gas supply contracts with Russia, replacing Russian supplies with gas from alternative sources, speeding up renewable energy deployment, and expanding power generation from bio-energy and nuclear reactors. The proposal also calls for hastening the replacement of gas boilers with heat pumps and boosting energy efficiency improvements in buildings and industry.

What will be the impact on Russia?

While Europe relies on Russia for a considerable portion of its energy needs, it is also Russia's largest customer, consuming over three-quarters of the gas Russia produces and generating substantial revenue for Moscow. Russia also sells oil to Europe, and oil and gas exports account for a significant share of Russia's state budget, so a longer-term trade blockade of Europe would be damaging for Russia.

To lessen its reliance on Russian gas, the EU will need a concerted and sustained policy effort across many sectors, as well as strong international engagement on energy markets and security. European policy decisions and global market equilibrium are linked in several ways. Improving international coordination with alternative pipeline and LNG suppliers, as well as with other significant gas importers and users, will be critical, and clear communication between governments, industry, and consumers is necessary for successful implementation. As the world's foremost energy authority, the IEA will continue to serve as a focal point for the global conversation on attaining a secure and sustainable energy future.
The Department of Homeland Security and the Chief Data Officers Council recently put out calls for products and insight on synthetic data generation. Government agencies are on the hunt for vendors and best practices that can help them make use of artificially generated data, produced through synthetic data generation, to build or test artificial intelligence applications and machine learning models.

The Department of Homeland Security's Science and Technology Directorate released a solicitation Dec. 15 for synthetic data solutions that can "generate synthetic data that models and replicates the shape and patterns of real data, while safeguarding privacy." The technique has the potential to help the department train machine learning models when no real-world data is available, or when using real data would pose a privacy, civil rights and liberties, or security risk. The agency's Silicon Valley Innovation Program, which invests in startup companies whose technology could meet operational needs for DHS, calls out the potential for synthetic data generators to be of particular use to the Cybersecurity and Infrastructure Security Agency in developing realistic training and exercise scenarios or modeling cyber and physical environments in real time.

A National Strategy on Privacy-Preserving Data Sharing and Analytics, issued by two subcommittees of the National Science and Technology Council in 2023, notes that the vast amounts of data existing today have great potential but are often restricted by the challenges of sharing and using sensitive information. The strategy lists synthetic data as a type of privacy-preserving data sharing and analytics technology that could "unlock the beneficial power of data analysis while protecting privacy." Adoption of synthetic data has been slow, the report notes, because of limited awareness, a lack of standards, varying stages of maturity, and more. The report's authors call out the need for verification and validation techniques to address accuracy and data quality issues in synthetic data, as well as for research on the effectiveness of those techniques.

At DHS, "the ability to generate and use synthetic data would be a gamechanger in the department's use of complex and rapidly evolving technologies to meet its critical mission while protecting privacy," DHS Chief Privacy Officer Mason Clutter said in a statement. The solicitation notes that although DHS generates a lot of data, it is "highly challenging to utilize or share that data across organizational boundaries" because of its sensitive nature. The department's solicitation is open through April 10, and participating companies are eligible for up to $1.7 million in funding to develop the technology for homeland security use cases.

The Chief Data Officers Council is also asking for input on synthetic data as it works to establish best practices for synthetic data generation. A request for information published in the Federal Register on Friday seeks input on a more formalized definition of synthetic data as well as answers to questions about its applications, challenges, and limitations, including how synthetic data can be used, the challenges associated with it, and what best practices should be considered for ethics and equity. That RFI is open through Feb. 5.
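For readers unfamiliar with the technique the solicitations describe, here is a deliberately naive sketch of one flavor of synthetic data generation: fit each column of a real table independently and sample new rows from those fits. The column names and values are invented, and this toy version preserves no cross-column correlations and offers no formal privacy guarantee; production approaches use far richer models.

```python
# Naive synthetic-data sketch: sample each column independently so the
# synthetic table mimics per-column shape without copying any real row.
# Column names and data are invented for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=0)


def synthesize(real: pd.DataFrame, n_rows: int) -> pd.DataFrame:
    """Generate n_rows of synthetic data matching each column's marginal."""
    out = {}
    for col in real.columns:
        s = real[col]
        if pd.api.types.is_numeric_dtype(s):
            # Model numeric columns as a normal fit to the observed data.
            out[col] = rng.normal(s.mean(), s.std(ddof=0), size=n_rows)
        else:
            # Sample categories in proportion to their observed frequencies.
            freqs = s.value_counts(normalize=True)
            out[col] = rng.choice(freqs.index, p=freqs.values, size=n_rows)
    return pd.DataFrame(out)


real = pd.DataFrame({
    "age": [34, 29, 51, 46, 38],
    "case_type": ["B1", "F1", "B1", "H1B", "B1"],
})
print(synthesize(real, n_rows=3))
```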
Values and ethics define an individual, as well as families, societies, and culture in general. Everyone puts a stake in the ground as to what is important to them and what is not, and we interact with others based on our values, which behave much like magnets: with the right polarity two magnets attract each other, and with the wrong polarity they repel.

Corporations have values and ethics as well, which are either formally defined and managed or left to be shaped by a variety of pressures and influences. From a legal perspective a corporation is an entity: it can be interacted with, sued in court, and even taxed (depending on the type of corporation), just as an individual can. Who defines the corporation's values and ethics? The answer really stems from the corporation's overall culture, but that too has to be modeled and defined somewhere. There are several places where a corporation can have its values and ethics molded:

- Directors and executive management. Ultimately the board and management have a key stake in establishing the culture, ethics, and values of the organization. It is at this level that a code of conduct should be defined and enforced from the top down. The board also plays a key role in establishing the risk appetite and tolerance levels that shape how an organization takes and manages risk. This is what is meant by tone at the top.
- Employees. If executives fail to define and communicate an organization's culture, ethics, and values, employees are left to define them. Even when executives have defined and communicated values, it is employees who mold and shape them and make them reality or fiction. People tend to hire and relate well to those with similar interests: political, religious, social, and so on. The discussion in break rooms, meetings, and even interviews often acts like a magnet, attracting similar systems of belief and value.
- Business partners. An organization is no longer an entity unto itself; it is impossible to define where its culture and boundaries start and stop. The extended enterprise of business partners, supply chain, outsourcers, service providers, contractors, consultants, temporary staffing, and customers all influence and mold the values of an organization. Organizations, particularly in this era of corporate social responsibility, want to make sure they are doing business with other businesses that share the same values. No organization wants to be in the media spotlight for partnering with unethical businesses, such as those that engage in child labor or corrupt practices.
- Customers. Ultimately an organization exists to provide value. For commercial organizations this is financial value, not just ethical value, and achieving financial value means attracting customers. Customers obviously want quality and service from the organization, but they are also becoming more selective about doing business with organizations that share their ethical and social values.
- Governments. Through regulation, legal liability, and plain old pressure, governments are able to exert great influence on the culture and values of the organization. The current economic crisis has given us many examples of government influence and control over entire industries as well as practices within those industries (e.g., salaries and bonuses).
- Non-government organizations.
Non-profits, lobbyists, and associations all exert influence over an organization and how it defines its culture, values, and ethics. NGOs are quick to wield political, social, and media pressure to bend organizations toward the purposes they value.

The net result of all of this is that an organization is going to have its values defined somewhere. Either management leads this charge or other pressures will shape it. Where values and ethics are not centrally defined and communicated as part of corporate culture, the organization risks going in a direction it never intended. An ad hoc approach to defining corporate values also leaves the door wide open for corruption. Values and culture further influence risk management through how the organization and its employees take risk and stay within the boundaries of risk tolerance and appetite. Without sound values, the organization can, and most often will, slide into reckless risk taking with poorly defined boundaries between acceptable and unacceptable risks (the financial crisis of the past few years is a great example of reckless risk taking and a willingness to set aside defined boundaries of risk tolerance and appetite).

The area of corporate values and ethics is very real to me. I left a former employer because of a significant difference in values. Management allowed one group in the organization to move forward with a conference that included a keynote speaker from an organization branded for adult entertainment (I do not want to use the specific words that I feel better describe this, so this post is not blocked by filters). I spoke up, stating this was a slap in the face to the women of the organization. I also expressed that many people within the organization have had families devastated by this industry, something I can speak to personally from my extended family. My voice fell on deaf ears and I was brushed aside. Management ignored the issue and allowed this group to further define the culture and direction of what was acceptable. Though a top performer (I had recently received an award for this), I resigned.

Organizations need to define their values from the top down. In this day and age you are not going to appease everyone. The pressures of conservative, liberal, environmental, social, and other factors are real and significant upon the organization, and can even be in conflict with stakeholders.

If this topic interests you, and you want to know how to make culture, values, and ethics defined, managed, and monitored in your organization, I would point you to the Open Compliance & Ethics Group (OCEG) Red Book 2 and the GRC Capability Model™. This is the only full framework I am aware of that drives an organization toward Principled Performance™. Later in August I am delivering a multi-day bootcamp on this topic, the GRC Strategy & Red Book 2 Bootcamp. It is directly followed by another bootcamp aimed at using technology to enable a culture of ethics, compliance, and risk management: Developing Your GRC Technology Improvement Bootcamp.

Please reply with your feedback and thoughts. How do you recommend an organization define and communicate its values, culture, and ethics? In today's complex business environment, a failure to take an enterprise perspective on this is a recipe for disaster.

"To understand the religion of a people is to understand the people.
For their religion expresses what they take to be the ultimate values of human life, underlying their whole attitude to everything else.” J. Geddes MacGregor (1909 – 1998)
As a US business owner, understanding the North American Industry Classification System (NAICS) is crucial for unlocking your business's growth potential. The NAICS code system provides a standardized classification for industries, allowing for easier data collection, analysis, and comparison. With the correct NAICS code, you can identify growth opportunities, leverage data insights, and develop effective strategies within your specific market niche. Whether you run a startup or an established business, correct NAICS classification helps you stay ahead of the competition, access relevant data, and make informed decisions. In this article, we will explore the importance of the NAICS code system, the benefits of using it, how to find the right code, and how to utilize it effectively for your business success.

- The NAICS code system is a standardized classification system used in the US to categorize businesses by industry.
- Having the correct NAICS code helps businesses identify growth opportunities, leverage data insights, and develop effective strategies.
- Accurate NAICS classification is crucial for accurate data reporting, market analysis, and benchmarking.
- Online resources such as the NAICS code search tool or the NAICS code lookup database help businesses find the appropriate NAICS code.
- The NAICS code is used for various government reporting requirements, including tax filings, statistical reporting, and industry regulations.

Understanding the NAICS Code System

The North American Industry Classification System (NAICS) is a standardized classification system used in the United States to categorize businesses by industry. It assigns a numerical code to each industry, allowing for easier data collection, analysis, and comparison across sectors. NAICS was developed collaboratively by the United States, Canada, and Mexico to create a common classification system for North American businesses.

How the NAICS Code Works

The NAICS system is hierarchical: the first two digits of the code represent the sector, the third digit the subsector, the fourth digit the industry group, the fifth digit the NAICS industry, and the sixth digit the national industry. Each NAICS code is a unique identifier for a specific industry, providing a standardized method for collecting and analyzing data on economic activity in the United States.

Why the NAICS Code is Important

The NAICS code is essential for businesses seeking to understand their industry landscape and make informed decisions. It enables them to:

- Identify competitors
- Analyze market trends
- Access relevant industry data
- Develop effective strategies within their specific market niche

A standardized classification system for industries is critical for comparing economic activity across different sectors and regions, providing a foundation for economic research and policy development.

"The NAICS code is critical for understanding the business environment in the United States, providing a standardized method for collecting and analyzing data on economic activity." – US Bureau of Labor Statistics
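A small sketch makes the digit hierarchy described above concrete. The helper below simply slices a six-digit code into its levels; 541511 (Custom Computer Programming Services) is a real NAICS code used here as the example.

```python
# Decompose a 6-digit NAICS code into the hierarchy described above.
def naics_levels(code: str) -> dict[str, str]:
    """Return each classification level as a prefix of the full code."""
    if not (code.isdigit() and len(code) == 6):
        raise ValueError("NAICS codes are six digits")
    return {
        "sector": code[:2],          # first two digits
        "subsector": code[:3],       # third digit refines the sector
        "industry_group": code[:4],  # fourth digit
        "naics_industry": code[:5],  # fifth digit
        "national_industry": code,   # sixth digit is country-specific
    }


# 541511 is Custom Computer Programming Services.
print(naics_levels("541511"))
# {'sector': '54', 'subsector': '541', 'industry_group': '5415',
#  'naics_industry': '54151', 'national_industry': '541511'}
```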
Benefits of Using the NAICS Code

The NAICS code is a numerical representation of each industry, allowing for easier and more accurate data collection and analysis across sectors. By utilizing the NAICS code, businesses can gain a better understanding of their industry landscape and access relevant industry data.

Identifying Competitors

The NAICS code enables businesses to identify the competitors within their industry. By examining the code, businesses can determine the size and distribution of their market and the number of businesses competing for the same customers.

Analyzing Market Trends

NAICS industry codes include sub-sectors and specialized segments of each industry, allowing businesses to gain a more comprehensive understanding of their specific markets. This classification helps businesses identify market trends and evaluate industry outlooks, providing valuable insights for strategic planning.

Accessing Relevant Industry Data

The NAICS code also serves as a gateway to relevant industry data, including financial, operational, and consumer behavior information that can inform business decisions and enhance performance. By utilizing the NAICS database, businesses can access up-to-date industry information and stay ahead of the competition.

Assisting in Strategic Planning

NAICS codes provide a valuable tool for strategic planning. By aligning business activities and strategies with their specific industry code, businesses can position themselves to compete and succeed within their market niche. The NAICS code aids in identifying growth opportunities, developing targeted marketing campaigns, and benchmarking performance against industry averages.

Finding the Right NAICS Code

Identifying the correct NAICS code for your business is crucial for accurate data collection and useful industry insights. Luckily, several online resources can help. The first to consider is the NAICS Search Tool, found on the official website of the United States Census Bureau, which lets you search for the appropriate NAICS code by entering keywords related to your business activities. Another useful resource is the NAICS Lookup Database, which provides a comprehensive list of all NAICS codes and their corresponding industries; it can be accessed via various online platforms and searched by keyword, industry, or description. When searching for the right NAICS code, consider all relevant aspects of your business, including the products or services you offer, your company's primary focus, and the industry in which you operate.

Importance of Accurate NAICS Classification

Accurate classification of your business according to the North American Industry Classification System (NAICS) is crucial for effective market analysis, benchmarking, and data reporting. The NAICS code provides a standardized method of categorizing businesses based on their primary activities, with each code representing a specific industry sector. By using an accurate NAICS code, businesses can unlock valuable data insights related to their industry, such as market trends, consumer behavior, and financial performance metrics, enabling them to make informed decisions about their operations. The NAICS code also ensures that businesses are correctly classified for statistical reporting purposes.
This accurate data reporting enables policymakers to make informed decisions about industry regulations, economic development initiatives, and other government programs that affect businesses in different sectors.

NAICS Industry Codes Provide Better Comparisons

Accurate NAICS classification allows for better comparisons between businesses in the same industry sector. When businesses are accurately classified, the data collected is more reliable, enabling meaningful benchmarking against comparable firms. For example, if a business is classified under an incorrect NAICS code, it may not be possible to compare its financial performance accurately against other businesses in the same industry. That inaccuracy could lead to incorrect conclusions about market trends, growth opportunities, and other factors affecting the business's success.

Access Relevant Resources and Support

The NAICS code also provides access to relevant resources and support for businesses operating in specific industry sectors. Industry associations, government agencies, and other organizations offer resources and assistance to businesses seeking to better understand the NAICS classification system and to utilize it effectively. By connecting with these resources, businesses can gain access to training, networking opportunities, and other support services that help them grow and succeed within their industry sector.

Benefits of accurate NAICS classification:

- Better market analysis
- Accurate data reporting
- Meaningful benchmarking
- Access to relevant resources

Accurate NAICS classification is essential for unlocking business potential. By utilizing the NAICS code, businesses can access valuable data insights, make informed decisions, and reach relevant resources and support services.

Leveraging Data Insights with the NAICS Code

The NAICS code serves as a powerful tool for businesses to gain access to industry-specific data insights. Leveraging these insights enables businesses to make informed decisions, identify growth opportunities, and tailor their strategies to meet the needs of their target market. The NAICS database provides businesses with a goldmine of industry data, ranging from demographics to financials. By identifying the appropriate NAICS code for your business, you can access relevant data insights and use them to inform your business decisions. For example, a business in the retail industry can use the NAICS classification system to gain crucial insights into consumer behavior: by analyzing consumer data, it can identify trends, preferences, and pain points and tailor its sales strategies to better meet consumers' needs.

NAICS Code and Financial Performance

| Industry              | Annual Revenue | Growth Rate |
| Retail                | $5.2 trillion  | 5.3%        |
| Finance and Insurance | $1.4 trillion  | 3.1%        |
| Healthcare            | $2.9 trillion  | 4.1%        |
| Construction          | $1.3 trillion  | 2.3%        |

This table highlights the diverse financial performance of different industries in the US, providing businesses with valuable insights into market trends and opportunities. By comparing and analyzing financial data across industries, businesses can identify areas of growth, investment, and potential partnerships.

In addition to financial insights, the NAICS code gives businesses access to consumer behavior data. For example, a business in the tourism industry can leverage NAICS data to identify popular tourist destinations, demographic trends among travelers, and preferred activities.
Armed with this data, it can develop targeted marketing campaigns, enhance customer experience, and drive revenue growth. In short, businesses can leverage the NAICS code to gain valuable data insights, including financial performance and consumer behavior; by analyzing and interpreting this data, they can develop effective strategies, identify growth opportunities, and stay ahead in their industry.

Strategic Planning with NAICS Code

By aligning your business strategies with your NAICS code, you can gain a competitive edge within your industry. Understanding your market niche and the competitive landscape helps you develop targeted marketing campaigns and tailor your product and service offerings to meet the needs of your customers.

Identifying Your Market Niche

First, use your NAICS code to identify your market niche. Research competitors within your industry, analyze their strengths and weaknesses, and identify gaps in the market that your business can fill. Use this information to develop a unique value proposition that sets your business apart from others in your industry.

Analyzing the Competitive Landscape

Next, analyze the competitive landscape within your industry. Utilize industry-specific data insights to understand market trends, consumer behavior, and your competitors' strategies. Use this information to develop targeted marketing campaigns that effectively reach your target audience and differentiate your business from your competitors.

Tailoring Your Strategies

Finally, tailor your strategies to meet the needs of your market niche. Use your NAICS code to identify relevant industry resources, support, and networks that can assist you in developing and executing effective strategies. Continuously monitor the industry landscape and adjust your strategies accordingly to maximize your growth potential.

Updating Your NAICS Code

As your business evolves, it's essential to review your NAICS code and update it if necessary. Changes in your products, services, or operations may require a different NAICS code to accurately reflect your business activities, and staying current with the latest NAICS code list ensures accurate classification. The NAICS code list is updated every few years to accommodate changes in industry trends and new business activities. Review your NAICS code annually to confirm it still accurately reflects your business's activities; if your business has expanded into areas not captured by your current code, you may need to update it using the same online resources used to find your initial code. Updating your NAICS code can have ramifications for your business: it may affect how you're classified for government reporting requirements, industry regulations, and statistical reporting, so review it carefully to ensure it accurately reflects your current and future business activities. Keeping your NAICS code up to date is an essential part of maintaining accurate data reporting, market analysis, and benchmarking; it ensures your business is categorized correctly within its industry, allowing for better comparisons, accurate market research, and access to relevant resources and support.

NAICS Code and Government Reporting

The NAICS code is a crucial element in government reporting requirements for businesses in the United States. It is required for various forms of statistical reporting, tax filings, and industry regulations.
Inaccurate or outdated NAICS codes can lead to fines, delays, and other legal ramifications. For example, the Internal Revenue Service (IRS) and other government agencies use the NAICS code for tax filings and to classify businesses for statistical purposes, and the Small Business Administration (SBA) uses it to identify small businesses eligible for its programs and services. Providing the correct NAICS code is therefore vital for complying with government regulations and requirements. Government agencies also use the NAICS code to track industry trends and economic growth: accurate data reporting through NAICS codes helps policymakers and other stakeholders identify areas for improvement, allocate resources, and provide support to businesses in need. It's essential to keep your NAICS code up to date to ensure compliance with government regulations and to take advantage of the support and resources available to your business. You can update your NAICS code by using an online NAICS search tool or by consulting industry associations or government agencies for guidance.

Examples of government forms that require NAICS codes:

| Form                                         | Purpose                                                       |
| IRS Form 1120                                | Corporate tax return                                          |
| SBA Form 7(a)                                | Loan application for small businesses                         |
| BLS Quarterly Census of Employment and Wages | Employment and wage data collection for statistical analysis  |

Make sure you provide the correct NAICS code when filling out these forms and other government documents. It's also a good idea to keep a record of your NAICS code for future reference and to ensure you are using the most current version of the NAICS code list.

NAICS Code Resources and Support

Proper NAICS classification is essential for the success of US businesses across all industries. However, navigating the NAICS system can be a daunting task for many business owners, especially those who are just starting out. Fortunately, numerous resources and support options are available.

Industry associations are a great place to start when seeking assistance with your NAICS code. Many provide resources, workshops, and consultations to help businesses identify the appropriate code for their industry and access industry-specific data. For example, the National Association of Manufacturers (NAM) offers a comprehensive database of industry codes and guidance on using the NAICS code for data analysis and industry benchmarking, while the National Restaurant Association (NRA) offers resources and tools for restaurant owners seeking to identify their NAICS code and access industry trends and research.

US government agencies also offer a wide range of resources and support for businesses using the NAICS code system. The US Census Bureau provides detailed information on the NAICS system, including a searchable database and industry-specific reports and data. The Small Business Administration (SBA) assists small businesses in identifying the appropriate NAICS code and offers online training and webinars on using the code for strategic planning and decision-making.

Various online platforms offer free tools and databases to help businesses find the right NAICS code for their operations. One such platform is the NAICS Association, which provides a searchable NAICS code database and offers a range of resources and support for businesses.
Another online platform, the North American Industry Classification System Center, provides detailed information on the NAICS system and offers guidance on using the code for data analysis, market research, and strategic planning.

Frequently Asked Questions

What is a NAICS code?
A NAICS code is a numerical code used in the United States to categorize businesses by industry. It allows for easier data collection, analysis, and comparison across sectors.

Why is it important to have the correct NAICS code for my business?
Having the correct NAICS code helps you identify growth opportunities, leverage data insights, and develop effective strategies within your specific market niche.

How can I find the right NAICS code for my business?
You can use online resources such as the NAICS code search tool or the NAICS code lookup database to search by keyword, industry, or description and find the code that best matches your business activities.

What are the benefits of using the NAICS code?
By using the NAICS code, you can gain a better understanding of your industry landscape, identify competitors, analyze market trends, access relevant industry data, and support strategic planning and decision-making.

Why is accurate NAICS classification important?
Accurate NAICS classification ensures accurate data reporting, market analysis, benchmarking, and access to relevant resources and support within your industry.

How can I leverage data insights with the NAICS code?
The NAICS code provides access to industry-specific data, including market trends, financial performance, and consumer behavior. By leveraging these insights, you can make informed decisions and identify growth opportunities.

How can the NAICS code help with strategic planning?
The NAICS code helps you identify your market niche, understand the competitive landscape, and develop targeted marketing campaigns. By aligning your strategies with your NAICS code, you can maximize your growth potential.

Do I need to update my NAICS code as my business evolves?
Yes, it is essential to review and update your NAICS code if necessary. Changes in your products, services, or operations may require a different NAICS code to accurately reflect your business activities.

Why is the NAICS code important for government reporting?
The NAICS code is used for various government reporting requirements, including tax filings, statistical reporting, and industry regulations. Providing the correct NAICS code ensures compliance and accurate representation of your business.

Are there resources available to help with the NAICS code?
Yes, numerous resources and support options are available, such as industry associations, government agencies, and online platforms that offer guidance, tools, and databases to assist you in navigating the NAICS classification system effectively.
IoT, or the Internet of Things, is the new buzzword everywhere. It refers to a networked computing environment that enables devices to monitor, record, and report data, and that lets users interact with devices, perform actions remotely, or draw on a stream of useful information when performing tasks. However, not enough attention has been paid to the security of these so-called "smart" devices. This course will help anyone interested get started with IoT security and penetration testing of smart devices. To assess the security of IoT devices, we must first understand the various components involved, then identify what kinds of security issues could affect each component, and then examine each of them in turn. That is exactly the approach candidates will learn in this course, which covers and discusses IoT protocols, potential risks, vulnerabilities, exploitation, and data breaches.
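As a flavor of the first step the course describes, component and service discovery, here is a minimal sketch of a TCP connect scan against a device you own or are explicitly authorized to test. The target address and port list are assumptions for a lab setup, not course material.

```python
# First-pass recon on a lab IoT device: which common service ports answer?
# Assumptions: a device you are authorized to test, at an example address.
import socket

TARGET = "192.168.0.20"  # illustrative lab device
COMMON_PORTS = {
    22: "ssh", 23: "telnet", 80: "http", 443: "https",
    1883: "mqtt", 8080: "http-alt", 8883: "mqtt over tls",
}


def open_ports(host: str, ports: dict[int, str], timeout: float = 0.5):
    """TCP connect scan: a completed handshake means the port is open."""
    found = []
    for port, name in ports.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                found.append((port, name))
    return found


if __name__ == "__main__":
    for port, name in open_ports(TARGET, COMMON_PORTS):
        print(f"{port}/tcp open ({name})")
```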
The ongoing COVID-19 pandemic has sparked an unprecedented crisis for governments, healthcare providers, and industries across the world. Biopharmaceutical companies were especially affected, left struggling to develop solutions that would ultimately help diagnose and treat those suffering from COVID-19. As the virus swept through country after country, unprecedented action became necessary: pharmaceutical companies around the world joined forces to prove that true innovation is possible when the global scientific community comes together in a spirit of collaboration and trust. With new variants of the virus still emerging, however, the fight is far from over. The industry needs to step up again and create new, innovative solutions to fight the new variants, as well as many other diseases along the way.

Using New Science may prove the best way of dealing with the issues still emerging from the COVID-19 pandemic, on top of the usual demands of the healthcare industry. New Science involves new mechanisms and new technologies to meet the needs of patients in ways previously unattainable. That may require adjusting the strategies used to develop innovative treatments, but it can also lead to better access to treatments at more affordable prices. With the biopharma industry now facing compressive disruption that might jeopardize conventional approaches, New Science seems to be the safest bet.

From Compressive Disruption to Innovation

According to a recent Accenture report, multiple signs now show that the biopharma industry is dealing with compressive disruption that threatens conventional approaches. Unlike the disruption a business faces when, say, a new competitor enters the market, compressive disruption takes place at a much slower pace. It can be linked to innovations on the market, as well as to macroeconomic factors and other transformations that cause profits to erode over a period of years. The COVID-19 pandemic, a crisis that demands both innovation and access to it, may be one of the factors that exacerbates this compressive disruption, but other signs, such as a decline in future value as a percentage of enterprise value from 2015 to 2018, were already easy to spot.

Given this context, pharmaceutical companies around the world have relied on New Science to offer a more accurate, productive approach to treating and caring for people that was not previously achievable. This approach opens up new possibilities to drive growth across multiple therapeutic areas while also promoting innovation. With the help of technology, new treatments can be developed and delivered to the patients who need them most. New Science is not only boosting innovation but also fueling growth: according to the report, 54% of sales between 2017 and 2022 are expected to be driven by it.

Pharmaceutical Strategy Around the World

Although pharmaceutical companies seem to be relying on New Science and innovation to boost sales during this global pandemic, some countries also see the pandemic as the right moment to develop new pharmaceutical strategies. Members of the European Parliament are discussing the new Pharmaceutical Strategy for Europe, a proposal that would encourage companies to invest more of their profits in research and development (R&D).
The new strategy focuses on the discovery and development of new treatments inside the European Union (EU), including new antibiotics and potential cures for rare diseases. The project is also designed to prevent shortages of medical supplies like those the EU faced during the first weeks of the COVID-19 crisis. The debate has not centered only on R&D, however; some members of the European Parliament remarked that pharmaceutical companies should also become more transparent.

Transparency is a problem in the US market as well, where many Americans say they trust their healthcare providers more than health insurance companies, technology companies, and pharmaceutical companies. According to a new survey, just 15% of the people questioned claimed to trust pharmaceutical companies more now than they did before the start of the pandemic. Even though many companies are developing and delivering vaccines and treatments across the country while also adopting New Science, people remain hesitant to trust them. According to the study, more people would trust the biopharma industry if its companies became more transparent about R&D and about the effectiveness and possible side effects of their treatments.

New Science is probably the best way to provide people with better treatments and with better access to those treatments. When it comes to growth, however, pharmaceutical companies may need to pair New Science with better communication and more transparency. The US is the largest market for biopharmaceuticals, accounting for more than 30% of the global market, which means it is often seen as responsible for putting the industry on the right track. Only time will tell whether New Science and transparency will be enough to boost the market.
How AI is Transforming Cybersecurity in the Era of Automation

Security professionals who leverage AI with the right strategy will gain a distinct competitive advantage over adversaries.

The Rise of AI-Driven Cybersecurity

As cyberattacks proliferate, legacy security tools are no longer enough. AI, with its ability to automatically detect patterns, analyze massive data sets, and adapt to new threats, has become essential for defense. According to recent research by Capgemini, over 56% of organizations now utilize AI in cybersecurity. By 2022, the AI cybersecurity market was projected to reach $46 billion.

AI enables security teams to:

- Identify new threats and anomalies faster. AI analyzes huge volumes of activity to pinpoint outliers.
- Augment human analytics. Handling enormous volumes of alerts and false positives is unrealistic for staff alone; AI prioritizes exceptions for human review.
- Shorten response time. Machine learning models can take automated action against known threats in milliseconds, versus hours for manual processes.
- Continuously strengthen defenses. With each new attack, algorithms expand detection rules and threat intelligence.

These capabilities create integrated, agile systems that stay ahead of attackers. Deployed well, AI lets security professionals spend less time on routine tasks and more on higher-value analysis. Next, we'll explore leading-edge applications of AI for cybersecurity.

4 Ways AI is Powering Cyber Defenses

Detecting Phishing Attempts

One of the top vectors for malware and ransomware is phishing email. These socially engineered messages are designed to trick users into clicking malicious links or attachments by impersonating trusted entities. New AI solutions can now detect phishing emails with over 99% accuracy. Tools like Vade Secure apply machine learning to features such as IP addresses, header anomalies, and embedded URLs to instantly determine email legitimacy. The AI analyzes past patterns to develop robust statistical models and updates itself continuously. Such capabilities allow organizations to catch phishing attempts before employees are compromised.

Malware Analysis and Classification

Traditional anti-virus software relies on rules and signatures to catch malware, but new strains appear constantly, rendering those defenses inadequate. AI-based malware analysis solutions can rapidly classify and assess new samples. Deep learning algorithms are trained on malware features and behaviors to accurately categorize specimens. By studying code instructions, file properties, source relationships, and execution actions, AI provides granular assessments in seconds, giving security teams valuable foresight into potential impact and required containment responses.

Insider Threat Detection

Malicious insiders with authorized access represent a key hidden risk. Whether through data theft, sabotage, or collusion with external parties, insider attacks can cause severe damage. AI behavioral analytics uncover anomalies in access patterns and activity that indicate insider threats. By profiling each user's normal behavior from past activity, algorithms identify highly abnormal actions in real time, such as unauthorized data transfers or downloads. The machine learning models also get smarter over time as they ingest more use-case data.

Automating Threat Hunting

Threat hunting typically requires skilled staff manually sifting through massive data sets to surface hard-to-detect threats. AI is now automating this process for stronger defenses, as the sketch below illustrates.
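To make the unsupervised approach in the last two sections concrete, here is a toy sketch using scikit-learn's IsolationForest on invented per-user activity features. The features, data, and contamination setting are illustrative assumptions; this is not how any named vendor's product actually works.

```python
# Toy insider-threat / threat-hunting sketch: score per-user activity
# vectors and surface the outliers for analyst review.
# Features and data are invented; real systems use far richer telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Columns: logins/day, MB downloaded/day, after-hours sessions/week
normal = rng.normal(loc=[8, 120, 1], scale=[2, 30, 1], size=(500, 3))
suspect = np.array([[9, 4200, 14]])  # huge transfers, lots of night work
activity = np.vstack([normal, suspect])

model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(activity)  # -1 marks an anomaly

for row in np.where(labels == -1)[0]:
    print(f"user {row}: flag for analyst review -> {activity[row].round(1)}")
```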
Technologies like Darktrace and IBM Security QRadar use unsupervised learning algorithms to comb through network activity logs, endpoint behavior, email data, and more. Anomalies and incidents identified by AI become threat leads for analysts to investigate, amplifying human threat-hunting capabilities.

Overcoming Challenges: AI Implementation Best Practices

To leverage AI effectively, organizations must invest in foundational elements beyond just purchasing an AI product. Key requirements include:

- Clean, rich datasets. Models are only as good as the data used to train them; prioritize quality of data over quantity.
- Dedicated AI talent. Having in-house ML experts is ideal for fine-tuning solutions and understanding detections.
- Cloud infrastructure. Scalable computing power is crucial for intensive training and inference.
- Integrated security stacks. Workflows between AI tools and downstream response systems must be seamless.
- Ongoing model validation. Continuously measure model performance to ensure accuracy remains high.

With reliable data, strong infrastructure, and human-machine collaboration, AI cybersecurity platforms can thrive. Implementation missteps, however, will severely limit their value. Adopting AI requires holistic upgrades across capability building, processes, and personnel.

The Future of AI for Cybersecurity

Looking ahead, AI will become integral to all layers of cyber defense. According to Juniper Research, by 2025, 60% of cybersecurity technology will utilize AI. As algorithms become more advanced, attacks launched at machine speed will be autonomously prevented in real time. AI will continue expanding both protection and detection: for proactive protection, techniques like adversarial machine learning will allow systems to simulate threats before they occur; on detection, AI will help analysts cut through the noise by connecting disparate signals and providing insights at scale.

With ever-evolving threats, AI generates hope for cybersecurity teams, but it is ultimately only one component of an integrated defense strategy. As long as defenders maintain robust data pipelines, good cyber hygiene, and a skilled workforce to interpret AI, they can turn the tide against attackers. The future of cybersecurity will rely heavily on artificial and human intelligence working symbiotically.

Reference: "AI in Cybersecurity: The Future of Fighting Advanced Threats." Juniper Research. https://www.juniperresearch.com/whitepapers/ai-in-cybersecurity-the-future-of-fighting
How to Measure Security Behavior

Technology isn't enough to protect your business from modern cyber threats. Cybercriminals exploit the human element at every opportunity, so making people more secure is core to any meaningful cyber security strategy.

Over recent years security teams have brought the human element of security more into focus. You've put in place regular training and information sharing. It has been a significant shift, and a great start. But as cyber threats get more advanced, so should our understanding of the people who play a role in security.

People are complex, which means the reasons people aren't cyber secure are also complex. The reason someone reuses passwords is very different from the reason someone falls for a phishing scam. To change those actions, we have to address each in the way that works best. Understanding why people do things in a certain way, and how to effect real change, is well worth the effort. Don't worry, it's interesting stuff! And at the end of this eBook you'll have some great tools to take to your team.

At CybSafe, behavioral science is at the heart of everything we do. With our dedicated behavioral science team, we help organizations reshape how they make people more secure.

Let's get stuck in to "measuring behavior"

As security professionals we often talk about "behaviors", but how do we measure them? And how do we get data-driven insights and proof for the decision-makers? There are two things to bear in mind here: measuring behavior isn't just about understanding which behaviors are happening; we need to know why they are happening, too.

We want to show you six ways of measuring behavior. They each help you analyse, benchmark, and understand why people do what they do. It's important to remember that we're all different. So what works for Alison in accounting won't work for Matt in logistics. Or Risha, or Fiona. But once you have the secret sauce, the important thing is to continually evaluate. Enough talk, let's look at how we get this done.

Chapter 1: Objective measures

Objectively speaking, objective measures are a great place to start. Not to sound cold, but they remove all emotion from the situation. These measures aren't based on feelings or on what people think; here we evaluate the way in which people undertake tasks. For example, computer logs or a password strength "calculator" can collect objective data. Importantly, this method is purely focused on what people are doing.

There are a couple of points to take into account with this approach. It may need technical integrations and access to data sources, so it could have a cost implication. And teams must be made aware of the data being gathered and how it's being used. There's so much more to learn on this subject. Get yourself a coffee and read "On Defining Subjective and Objective Measurements" by J. M. Rothstein for a deeper dive.

Chapter 2: Self-report

Next up, it makes sense to talk about self-reporting. As it says on the tin, it's asking people to report on their own behavior, views, or opinions, whether by survey, diary, or interview. Pros: it's easy to conduct and low-cost. Cons: when assessing themselves, people may answer to make themselves look good (there's a cracking piece of research on this in the Journal of Business and Psychology).

To encourage people to answer honestly, rather than saying what they think a colleague or manager wants to hear, start with good questions. This is not easy.
But it’s critical if we’re to get the most accurate data. An entire book can be (and has been) written on the creation of research criteria for self-reporting. A great resource can be found at the Pew Research Centre. But we will focus on three things for creating effective reporting surveys. 1. Allow anonymity. Probably the most important consideration for creating self-report surveys. Making reports anonymous has been shown to increase the honesty of a response. People are more likely to feel they can reveal true behaviors, and the reasons for them, if they won’t be held directly accountable. An honest evaluation of practices is the most effective way to enable planning and processes that can address behaviors. 2. Be specific. When asking about particular behaviors, be specific with time periods. Open options such as, “do you install updates on your computer”, allow room for interpretation. The respondent probably has done this at some point, but unlikely once a week or even in the last six months. So framing it with a time period focuses the answer to be more accurate. The question then becomes “In the last month did you install updates on your computer?” You might be more likely to get a no – but it’s honest! 3. Be concise. Avoid jargon. Avoid double-barrelled questions. Avoid long scenarios. Avoid…you get it. Keep it simple to keep respondents engaged. What’s great about self-reporting is that it gives a quick temperature check or snapshot of the current situation within the organization. It also gives you a good baseline so you can start measuring behavior change over time. And that’s something we can work with. Chapter 3: Proximate measures Have you ever wanted to see the future? No crystal balls needed! Proximate measures show us behavioral intent in a person. It’s a measure of a person’s motivation or desire to perform a behavior, which is a predictor of future behavior. But it’s not a measure of actual behavior. This is the first measure where we’re looking more at the why of the actions in the team. This measure provides insight into how to motivate people to follow through with improving habits and behaviors. To support behavioral intent and transform it into actual behavior we need to support it with goal setting and planning. It helps people build better habits by aligning them to goals. People tend to be motivated to do something by meeting a goal or gaining a reward. We could go all the way back to Pavlov (if you’ve ever had a dog, you’ll know what we mean). But bringing it back to humans and the modern day, there’s a lot of scientific evidence to back this up. Gollweitzer and Sheeran produced a great study which is worth a look. Or we could go back to the forefather of the theory of planned behavior and look at Ajzen work which has stimulated the research into human behaviors. Chapter 4: Scenarios Bringing risk factors to life in an interactive way engages people. It could be a detailed story, or a live simulation. People are asked to show how they would respond to different situations. It can spice up otherwise mundane training. However, scenarios can be open to interpretation. Opt for simplicity and clarity in the setup. Because scenarios are fictional, people are more likely to respond as they would in real life, rather than as they think they should. They should still reflect real life though. Creating a situation that wouldn’t happen in real life, no matter how entertaining, is pretty pointless. 
For greatest impact, why not create fictional identities for participants? It's not just David Bowie who wants to be Ziggy Stardust sometimes. When creating scenarios, making them relevant to your organization will increase engagement. Spend time getting the scenarios right. The results will take care of themselves.

There are over 70 specific security behaviors. How many are you measuring? Contact one of our team to find out more.

Chapter 5: Observational data

As you can probably guess, observation involves watching people in their natural environment. Think David Attenborough. To understand why people do things in a certain way, the best thing is to see them do it in their own space.

[voiceover] Piotr has received the phishing test email. He's looking at it intently. Perhaps his distant Aunt Magda really does need help managing her wealth. He clicks. The trap is sprung.

The person running the observation has two options: be hands-off, or be involved. Purely observing people in their own workplace can give a great view of true behavior. Alternatively, they might want to get hands-on and talk to the people to get deeper insights.

Getting back to Sir David Attenborough, he has the most incredibly in-depth knowledge of the natural world he observes. To get the best insights from observation, the person doing the observing has to be highly trained. They need to know what they are looking at and looking for. It's important to make accurate recordings of the information gathered. And also to get a second opinion to make sure the observation isn't biased.

Chapter 6: 360 feedback

As with performance reviews, 360 feedback relies on feedback about a person provided by those they work with. The more input you get, the more reliable the complete picture of a person's visible security behaviors, such as locking their computer screen. 360 feedback isn't about finger pointing. It should always be focused on observed actions, not guesswork or hearsay. And it should always be confidential. Observing behaviors works best in a physical workplace and in team environments.

What's important is the way 360 feedback is designed. The focus here is how to design a process that is effective across a large number of people. There are three elements in an effective 360-degree design process:

1. Relevant content. First, make sure content is fit for purpose. That means using specific, relevant questions. Second, make sure that cultural differences are taken into consideration.

2. Accountability. The person overseeing 360 feedback plays an important role in how people will answer. It can be useful to get a third party to facilitate collating feedback and making it anonymous. People are more likely to be open if the feedback can't be attributed to them.

3. Census. Including everyone within the business lends itself to successful 360 feedback.

This paper – When does 360-degree feedback create behavior change? And how would we know when it does? – is an excellent resource.

Chapter 7: Measuring for future change

So that's a whistle-stop tour of six ways to measure behavior. But, as mentioned at the beginning of this eBook, the two most important factors in any measurement strategy are engagement and continual evaluation. Continuous evaluation should never be an afterthought. Building it into the programme makes sure it can be used to understand actual changes in behavior.

Chapter 8: Evaluate and influence to stay secure

Businesses are taking security very seriously – it isn't optional.
The ones who really understand how to make people the strongest link will be the ones who are better protected. They will be the ones to win. A single workshop or leaflet isn't going to effect change. Evaluating and influencing behavior in your team brings measurable impact and change for your organization.

Measuring behavior is possible, and there is a range of measurement options available. It doesn't need to be complicated. Simple can be just as effective, as long as it is tailored for your people and what they need. Doing so will help your organization be all the more secure.

Chapter 9: Summary – six ways to measure behavior

Objective measures
- Measures the way people complete tasks
- Emotion free
- Fact based
- Technical considerations/costs
- Data management/privacy considerations

Self-report
- People give answers based on self-reflection
- Easy to conduct
- Gives a good snapshot of the situation
- People may answer as they feel they should
- People avoid extreme opinions
- Good survey design requires time and care

Proximate measures
- Measures intent and motivation
- Useful for goal setting
- Focuses on positive habits
- Not a measure of behavior
- Needs focus on ensuring follow-through from intent to action

Scenarios
- Engages people with active participation
- Anonymity through fictional setups
- Honest actions more likely
- Scenarios are open to interpretation
- Time investment needed for best results

Observational data
- Observation in the natural work environment
- Option to engage with participants for deeper insights
- Training needed to understand what should be observed
- Only applicable to visible security practices

360 feedback
- Wide range of feedback means higher reliability
- Confidential/anonymized answers call for openness and honesty
- Difficult to implement in virtual working environments
- Good survey design requires time and care

There are over 70 specific security behaviors. How many are you measuring? Contact one of our team to find out more.

Traditional security awareness training is a relic of the past. Learn how you can quantify human cyber risk and change security behaviors.

Anyone can be phished, and simulated phishing is not enough to protect your people. Learn four steps to an effective Agile Phishing Strategy.

What you – and your people – do need to understand about ransomware (and any malware!) is how to spot it, and stop it. And importantly, not to be afraid of it. If you are ready to ditch the fear and find an approach that works, this Ransomware is Boring eBook is what you need! Download today.
<urn:uuid:8e99b204-04a0-49b8-95dc-6f8a7cb8fd19>
CC-MAIN-2024-38
https://www.cybsafe.com/blog/how-to-measure-behavior-long-read/
2024-09-09T07:33:51Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651092.31/warc/CC-MAIN-20240909071529-20240909101529-00494.warc.gz
en
0.944395
2,805
2.671875
3
In the modern digital age, the role of Database Management Systems (DBMS) has become indispensable. These systems offer a plethora of advantages that significantly enhance the efficiency and effectiveness of data handling. From streamlined data organization to robust security measures, the benefits of DBMS are abundant and impactful.

- 1 Efficient Data Organization and Retrieval
- 2 Data Integrity and Consistency
- 3 Enhanced Data Security
- 4 Scalability and Performance Optimization
- 5 Data Redundancy and Space Efficiency
- 6 Concurrency and Transaction Management
- 7 Ease of Data Maintenance
- 8 Data Analytics and Decision Support
- 9 Backup and Disaster Recovery
- 10 Centralized Data Management

Efficient Data Organization and Retrieval

DBMS revolutionizes the way data is organized and retrieved. By employing well-structured tables and indexes, DBMS ensures that data can be accessed swiftly and accurately. This facilitates efficient data management, allowing businesses to promptly respond to queries and obtain valuable insights without unnecessary delays.

Data Integrity and Consistency

Maintaining data integrity and consistency is of paramount importance for any organization. DBMS enforces data validation rules, ensuring that only accurate and valid information enters the system. This prevents the occurrence of data anomalies and discrepancies, fostering a reliable foundation for decision-making processes.

Enhanced Data Security

The security of sensitive data is a top concern for businesses across industries. DBMS offers a robust security framework, allowing for access controls, user authentication, and data encryption. This multi-layered approach safeguards data from unauthorized access, reducing the risk of data breaches and ensuring compliance with data protection regulations.

Scalability and Performance Optimization

As businesses grow, so does their data volume. DBMS provides scalability features that enable seamless expansion of the database infrastructure. Additionally, performance optimization tools, such as query optimization and caching mechanisms, ensure that data retrieval remains efficient even as the database size increases.

Data Redundancy and Space Efficiency

DBMS efficiently handles data redundancy, a common issue in traditional file-based systems. Through normalization techniques, redundant data is minimized, optimizing storage space. This not only reduces storage costs but also enhances data accuracy by eliminating the chance of inconsistent duplicate records.

Concurrency and Transaction Management

In a dynamic environment where multiple users access and manipulate data simultaneously, DBMS excels. Transaction management features ensure that database operations occur in a controlled and consistent manner. ACID (Atomicity, Consistency, Isolation, Durability) compliance guarantees that even in the event of system failures, data remains consistent and reliable.

Ease of Data Maintenance

Maintaining and updating data becomes a streamlined process with DBMS. Whether it's adding new data, modifying existing records, or deleting outdated information, DBMS simplifies these tasks while preserving data integrity. This reduces the risk of errors during data manipulation and contributes to the accuracy of the stored information.

Data Analytics and Decision Support

Data-driven decision-making is a cornerstone of modern business strategies.
DBMS supports data analytics by offering tools for complex querying, data mining, and reporting. This empowers organizations to extract valuable insights from their data, leading to more informed decisions and a competitive edge in the market.

Backup and Disaster Recovery

Unforeseen events such as system crashes or natural disasters can lead to data loss. DBMS provides mechanisms for regular data backups and efficient disaster recovery. These features ensure that data can be restored to a consistent state, minimizing downtime and preserving critical information.

Centralized Data Management

In organizations with multiple departments and functions, DBMS centralizes data management. This creates a single source of truth, eliminating data silos and promoting collaboration. Employees across different teams can access accurate and up-to-date information, fostering better communication and more coordinated efforts.

In conclusion, the advantages of Database Management Systems are manifold and have transformed the way businesses handle and utilize data. From efficient data organization and security to scalability and decision support, DBMS has proven to be an indispensable tool in the modern technological landscape. Its role in enhancing data management practices cannot be overstated, making it a fundamental component of successful enterprises.
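As a concrete illustration of two of the advantages above (validation rules and ACID transactions), here is a minimal Python sketch using the standard library's sqlite3 module. The table, column names, and amounts are invented for the example; any relational DBMS would behave similarly.

```python
import sqlite3

# Validation rule: the CHECK constraint rejects negative balances.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE accounts (
        id      INTEGER PRIMARY KEY,
        owner   TEXT NOT NULL,
        balance REAL NOT NULL CHECK (balance >= 0)
    )
""")
conn.execute("INSERT INTO accounts (owner, balance) VALUES ('alice', 100.0)")
conn.execute("INSERT INTO accounts (owner, balance) VALUES ('bob', 50.0)")
conn.commit()

# Atomicity: both updates succeed together or neither is applied.
try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 200 WHERE owner = 'alice'")
        conn.execute("UPDATE accounts SET balance = balance + 200 WHERE owner = 'bob'")
except sqlite3.IntegrityError:
    print("Transfer rejected; the CHECK constraint kept the data consistent")

# Both rows still show their original balances after the rollback.
print(conn.execute("SELECT owner, balance FROM accounts").fetchall())
```

Because the over-large transfer violates the CHECK constraint mid-transaction, the whole transfer is rolled back: exactly the combination of integrity and atomicity described above.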
<urn:uuid:b22934ac-93a6-4718-b025-4e988a30d9b5>
CC-MAIN-2024-38
https://generaltonytoy.com/unveiling-the-advantages-of-database-management-systems/
2024-09-13T01:07:21Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651506.7/warc/CC-MAIN-20240913002450-20240913032450-00194.warc.gz
en
0.879261
874
2.890625
3
You've likely already been the target of a neighbor spoofing call — you just might not know it. This clever little trick has been on the rise since early 2016.

What is neighbor spoofing?

Neighbor spoofing is the method of masking a phone number with a local area code so victims will believe they recognize (or should recognize) the number and feel safer in picking up the call. There's legal spoofing — for example, when a company contracts with a contact center and allows it to use the company's name and phone number on the caller ID — and there's illegal spoofing, when fraudsters purposely disguise their real numbers to fool the call recipient and prey on unsuspecting victims. Once these bad actors have your customers on the line, they'll start in on their same old, tired attempts to trick them out of their hard-earned dollars.

This is a strategic shift in traditional spoofing. In classic spoofing, a scammer will copy the first 5 or 6 digits of a company's phone number with the hope of successfully posing as a local business. This change in strategy suggests that spam blocking providers like Hiya are successfully predicting neighbor scam calls. In response to effective anti-spam solutions, the scammers have attempted to continue the scam by switching to a less targeted strategy; by generalizing their approach, scammers hope they can go undetected and, therefore, keep the scam viable. Unfortunately, the scammers are right — predicting whether a call is a neighbor spoof is much harder with three matching digits than with the traditional six — but Hiya is up for the challenge.

Hiya has aggregated and anonymized data to create algorithms that effectively, and efficiently, identify area code-based neighbor scams. Our model instantly recognizes if the number in question belongs to a scammer and blocks calls (or marks them as spam) to protect consumers. Conversely, the model also identifies calls from legitimate businesses and ensures that calls from those numbers aren't mistakenly flagged as spam.

It's not just consumers — and Hiya — that recognize neighbor spoofing as a problem. The Federal Trade Commission (FTC) is also aware and has taken steps to resolve it. In 2019, it began a major crackdown on robocalls and spoofing through a variety of regulations, which led to a major decline in neighbor spoofing. Although it is impossible to completely stop phone spoofing, the decline shows that the FTC's regulations initially made a substantial impact. However, in the last year, the number of neighbor calls has begun to rebound and has even surpassed its pre-crackdown peak, as scammers have found ways around the new regulations, indicating that there is still lots of work to be done.

How to stop neighbor spoofing

Although it is difficult to completely block neighbor spoofing and prevent your company's phone numbers from being spoofed, there are a few steps you can take to minimize risk for your company.

- Find a secure voice performance platform that provides visibility and control over any of your numbers that have been spoofed.
- Display a branded caller ID to consumers to give them the confidence to answer calls from you.
- Get a free reputation analysis report from Hiya to see if any of your numbers are being spoofed.

Hiya's network is 170 million users strong, due in part to strategic partnerships with AT&T, Samsung, Cricket Wireless, and other national providers.
Hiya allows you to see how many times your number has been spoofed on our network, so you can make sure that your customers won’t fall victim to neighbor spoofing. If you make more than 20,000 calls a month, see if any of your call center numbers have received negative (spam!) labels with a free Hiya Connect reputation analysis. Get additional information on how to stop your numbers from being spoofed with our How to Stop Spoofing eBook.
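To illustrate the kind of heuristic described above, here is a toy Python sketch. It is not Hiya's model (which is proprietary and combines far more signals); it simply shows how matching digit prefixes can separate classic six-digit spoofing from the looser area-code-only neighbor variant. The scores and function name are invented for the example.

```python
def neighbor_spoof_score(caller: str, recipient: str) -> float:
    """Both numbers as 10-digit strings, e.g. '2065551234'."""
    if caller == recipient:
        return 1.0   # a call "from your own number" is a strong spoof signal
    if caller[:6] == recipient[:6]:
        return 0.8   # classic spoof: matching area code + exchange prefix
    if caller[:3] == recipient[:3]:
        return 0.4   # neighbor spoof: matching area code only (weaker signal)
    return 0.0

print(neighbor_spoof_score("2065551234", "2065559876"))  # 0.8
print(neighbor_spoof_score("2061234567", "2065559876"))  # 0.4
```

The three-digit case shows why the generalized scam is harder to call: an area-code match alone is common among legitimate local calls, so a real system must weigh it against reputation data and call-volume patterns.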
<urn:uuid:a9c00534-da7f-4b5d-ad3e-e1811d6052f9>
CC-MAIN-2024-38
https://blog.hiya.com/the-evolution-of-the-neighbor-scam
2024-09-15T09:18:32Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651622.79/warc/CC-MAIN-20240915084859-20240915114859-00894.warc.gz
en
0.960603
821
2.671875
3
What is DMARC?

DMARC stands for Domain-based Message Authentication, Reporting & Conformance. DMARC is an email authentication protocol that helps recipient domains verify that an email sender is who they say they are and not a cybercriminal spoofing a domain name. Essentially, DMARC determines the authenticity of an email message to protect organizations from malicious email attacks like phishing.

DMARC was built around two existing email authentication technologies: SPF (Sender Policy Framework) and DKIM (DomainKeys Identified Mail). These technologies had been in use before DMARC was developed, but as email security threats evolved, they had become less effective ways to authenticate email senders on their own. DMARC was designed to be a more collaborative way to genuinely improve mail authentication and enable recipient organizations to detect and reject unauthenticated emails.

Unlike SPF and DKIM, DMARC offers reporting functionality that can be used to determine whether a domain is being used by cybercriminals to send emails. Domain owners are able to publish a DMARC record in the DNS (Domain Name System) and receive reports on who is sending emails on behalf of their domain name. This information enables the domain owner to understand and control the emails being sent using their email channel, to prevent fraudulent emails that rely on domain-based spoofing.

The types of attacks DMARC protects against include:

- Phishing attacks targeting customers or third parties
- Large-scale spear phishing, whaling and CEO fraud
- Malware and ransomware attacks
- Brand abuse and online scams

How does DMARC work?

DMARC is designed to integrate into the recipient organization's inbound email authentication process – but it requires both the sender and the recipient to have the DMARC protocol in place. To explain how DMARC works, let's take two companies: Company A (the sender's organization) and Company B (the recipient's organization). For the sake of this example, let's assume both have DKIM, SPF and DMARC protocols in place.

When the sender at Company A sends the email, their email server will insert a DKIM header and then send the email to the recipient in Company B. This DKIM header indicates that the message is protected by SPF and/or DKIM.

As the email arrives at Company B, their mail server will carry out standard validation checks, such as whether the sender's IP is blocklisted or they have a poor domain reputation. If the email passes these checks, the recipient's mail server will then validate and apply the sender's DMARC policy. This involves retrieving the verified DKIM signature from the header, checking the "envelope from" and return-path addresses against the valid domain names listed in the SPF record, and then applying the appropriate DMARC policy depending on whether the email is perceived as legitimate or not.

If the email passes through this stage, it will then go through standard email filtering processes (such as anti-spam filters) and ultimately be delivered to the recipient. If the email fails (i.e. because it is spoofing the domain name), this information is updated in the sender's organization's Aggregate DMARC Report (in this instance, the Aggregate DMARC Report for Company A). The record in the Aggregate DMARC Report will include the sender's IP address, which can be used to determine whether an email is legitimate or not.
In this way, the domain owner at Company A can monitor and report on the emails that fail to pass through the DKIM and SPF stages to detect spoofing and fraudulent usage of their domain using the DMARC report. These reports can be shared with the domain owner on a daily basis for ongoing assurance about their domain's usage.

Why is DMARC important?

Email is the most popular communication tool used by organizations today. Everyone within your organization will have access to email, and so email message exchanges have become a routine way of working. This makes email an attractive attack vector for cybercriminals, who can exploit complacency around everyday usage to trick people into doing things like clicking on fraudulent links, downloading malware, and replying to spear phishing emails. This makes verifying the authenticity of emails using protocols like DMARC, DKIM and SPF incredibly important, so you can monitor how your domain is being used to quickly detect fraudulent activity.

In its 2019 Internet Crime Report, the Federal Bureau of Investigation (FBI) stated it received 467,361 complaints of internet crime over the 12-month period (on average, 1,300 per day) and recorded more than $3.5bn in financial losses to the individuals and businesses that fell victim to these attacks. The most financially costly complaints included business email compromise and spoofing, which require cybercriminals to impersonate a legitimate organization via email to trick victims into carrying out actions such as transferring money, opening malicious attachments or clicking on malicious links.

In some cases, cybercriminals are looking for a quick payday – for example, by spoofing a supplier's domain to trick someone in your finance department into paying a fake invoice. Or by impersonating your CEO to obtain pre-paid gift cards that they can then spend. Where this involves large sums of money, this can obviously have significant impacts for your organization's bottom line.

But email attacks can also put your personal data at risk, as well as that of your employees and clients, meaning you won't be complying with data privacy laws. For example, some malicious email attacks are designed to trick you into entering your system credentials into fraudulent websites – for example, by pretending your password needs resetting or, as in the case of some COVID-19 scams, by pretending you need to log into a website to access training materials or education resources. Once they have your log-in credentials, attackers can then use them to access data stored on your company network, sell them to others who might try this, or use them to see whether they will unlock any of your other online accounts. If personal data is put at risk, this constitutes a data breach and must be reported to the relevant authorities.

Using DMARC means you will be able to monitor whether cybercriminals are using your domain for fraudulent email attacks, like phishing and spear phishing. When domain owners at other organizations also have DMARC enabled, organizations are able to act as a community to protect each other and, ultimately, protect sensitive data.

Unfortunately, cybercriminals will continue to use email as a top attack vector – again, mostly because everyone has access to it, so the likelihood of being able to trick people increases. It's important, therefore, that you take every step you can to prevent instances of phishing, spear phishing and CEO fraud, and understand how DMARC helps.

Is DMARC enough?
While DMARC is something every organization should enable, on its own, it's not enough to protect you from every type of threat.

DMARC is a proactive security technology. As a domain owner, you would use DMARC to monitor instances where your domain is being spoofed to prevent instances of phishing, spear phishing and CEO fraud, etc. Consequently, you are also relying on other organizations to use DMARC so they can protect you from similar attacks. Unfortunately, not everyone will do this, so you will need other technologies to help detect and prevent inbound email attacks. (You should continue using DMARC even if not everyone else is, to protect your domain authority and brand's reputation by reducing the likelihood that a cybercriminal can successfully spoof your company's domain to attack another organization.) In addition, not every cybercriminal spoofs a domain as part of their attack.

As well as DMARC, you should invest in inbound email filtering systems. These technologies will scan incoming emails for suspicious links and malware, as well as detect spam emails, which can be reported to internet service providers who can blocklist the spammer to reduce the number of spam emails. Email filtering systems can also protect you from DDoS and zero-hour attacks.

In addition to DMARC and inbound filtering systems, you must also consider outbound email protection – because, as we've already established, everyone has access to email and, consequently, they use it to share sensitive data and privileged information. At Egress, we provide Intelligent Email Security, which can prevent emails being sent to incorrect recipients – whether that's in response to a spear phishing attack or simply because you've added the wrong 'Bob'. And because just getting an email to the right person isn't enough to keep sensitive data safe, we also provide powerful encryption technology and detailed reporting functionality.

Only by implementing holistic controls, such as DMARC to protect your name, filtering to remove unwanted or malicious emails, and outbound security like Egress Intelligent Email Security, will you be able to truly keep sensitive data secure.
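For readers who want to check what a domain currently publishes, here is a minimal sketch that fetches and parses a DMARC record from DNS. It assumes the third-party dnspython package (version 2.0 or later) is installed; example.com is a placeholder domain. This only reads the published policy, it does not validate individual messages.

```python
import dns.resolver

def get_dmarc_policy(domain: str) -> dict:
    """Fetch the _dmarc TXT record and split it into tag/value pairs."""
    answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    for rdata in answers:
        record = b"".join(rdata.strings).decode()
        if record.lower().startswith("v=dmarc1"):
            # e.g. "v=DMARC1; p=reject; rua=mailto:reports@example.com"
            return dict(
                tag.strip().split("=", 1)
                for tag in record.split(";")
                if "=" in tag
            )
    raise ValueError(f"No DMARC record found for {domain}")

policy = get_dmarc_policy("example.com")  # placeholder domain
print(policy.get("p"))    # policy: none / quarantine / reject
print(policy.get("rua"))  # where aggregate reports are sent
```

The p tag is the policy recipients apply to failing mail, and rua is the address that receives the aggregate reports discussed above.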
<urn:uuid:5428f3dd-7095-4c0f-9489-715bd4a0664c>
CC-MAIN-2024-38
https://www.egress.com/blog/phishing/guide-to-dmarc
2024-09-16T15:58:10Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651697.45/warc/CC-MAIN-20240916144317-20240916174317-00794.warc.gz
en
0.942976
1,864
3.671875
4
By now, most of us have experienced the advantages of emerging technologies like artificial intelligence (AI) and machine learning in our daily lives through our personal and business technology. These technologies – and many more – will become even more deeply entrenched in society as governments, local business partners, and technology providers look to create smart cities.

Smart cities are becoming increasingly important as localities look to lower crime rates, ensure citizens' safety and reduce congestion. Each individual citizen will benefit from smart cities, and places that adopt this technology will become more efficient, reducing their impact on the environment and creating better places to live. Smart cities are also more attractive to businesses that are looking to grow their employee base more efficiently.

While AI and machine learning are necessary for powering many of the technologies within a smart city, the three most important technologies that make up smart cities are IoT, video and LIDAR, along with the network, data center and storage capabilities that sit behind them to harness the power of the data collected.

IoT devices and sensors dispersed throughout smart cities are especially helpful for understanding where people congregate and how they move, in order to lower congestion and keep people safe. For example, adding IoT sensors to bridges can help determine when they need to be de-iced to ensure safe crossing during busier times.

Video technology helps to both deter crime and catch criminals as quickly as possible to keep the streets safe. Video technology is extremely effective at identifying individuals who have committed a crime, but can also be used to identify high-crime-risk individuals before they get the chance to do something illegal. While some have cited the potential for misuse of this technology, proponents argue that the benefits outweigh the drawbacks.

LIDAR will deeply benefit factory and manufacturing workers in smart cities, as it will make them and their businesses more efficient. LIDAR allows businesses to determine where their employees are, which gives them the unique opportunity to guide workers as they complete a task and ensure the right amount of manpower is allotted for a particular task.

All of this technology sounds like a great investment for cities; however, one major investment has to be made before any of it can work efficiently. If you think of the technologies mentioned above as a car, then a resilient data infrastructure is the gas that powers it. All of these services require robust networking, beyond what legacy cabling was capable of delivering. Telecom companies are already touting their 5G capabilities, and while this technology will certainly benefit smart cities of the future, edge computing and 6G will ultimately be the two technologies that move smart cities forward.

Smart cities require "always-on" connectivity with as little latency as possible, which is especially important when you're using the technology to make real-time decisions. This is where it becomes so critical to bring connectivity closer to the source as opposed to pinging data centers far away. These "edge" data centers will be a major technical enabler in the near future and include a mix of compute, storage, and network services to meet demand.

Over the next few years, those regions that are looking to transform into smart cities will need to create a roadmap.
Though the features of a smart city like IoT and video technology are exciting, it's important to start by taking a step back and designing and implementing a data infrastructure that can support these technologies. Laying the right groundwork will be the difference between a smart city that is genuinely more efficient and one that can't "think" fast enough to be useful.

Jason leads a team focused on defining, assessing and providing direction on the changing technology landscape of Flexential's business and customers. He and his team are responsible for developing insights on what is next on the horizon to further position Flexential as a hybrid IT and data center leader. He joined the company in 2011 and has held various roles in product, operations and technical management. Jason has over 25 years of experience in leadership positions in product architecture, software engineering, and technical sales and support across a variety of companies, including Sun Microsystems, where Jason was honored as a Distinguished Engineer, VMware and the Mayo Clinic. Jason was the lead author of "Building N1 Grid Solutions," one of the first books highlighting the combined use of virtualization and automation. He also has several patents in networking, data center resource management, virtualization, and security. He attended Luther College and Western Governors University.
<urn:uuid:8cde7154-6c1f-4dcf-858b-1abd31f19322>
CC-MAIN-2024-38
https://www.mytechmag.com/bringing-smart-cities-to-life/
2024-09-17T21:22:14Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651835.53/warc/CC-MAIN-20240917204739-20240917234739-00694.warc.gz
en
0.961668
911
2.921875
3
High-profile data leaks and security breaches have been commonplace in the past few years, with breaches of large tech companies often making the news. However, it's not just tech giants who are at risk of having their business' or their customers' data accessed by outside entities; small and medium-sized organizations across all industries can be at risk as well. Recently, such an example manifested at Georgia Tech, where an unauthorized user of a university web application exposed information like names, birth dates, and social security numbers for up to 1.3 million people. The implications of these leaks can range from an outside actor simply viewing the data to find anything of use, to using the information they extract to discover perceived weaknesses at your firm, or even demanding a ransom for disposing of the data.

How Breaches Occur

There are countless ways by which a potential hacker could infiltrate your IT systems. It's not uncommon for a hacker to make multiple attempts using different methods to find a successful way to breach any potential defenses. Email is a well-known route, with techniques such as phishing widely known to have the potential to allow a hacker into your systems, but it's important to watch out for more than just suspicious addresses. Whaling is another deceptive technique hackers may use: a form of phishing aimed at high-value targets such as executives, in which the attacker sends a message appearing to come from a person or entity the victim knows or trusts, with the intent of obtaining some form of information.

By analyzing data – like information sent from your computer while browsing the web, which can include details about your browser, operating system, whether the transmission was encrypted, and more – a potential unauthorized user can begin formulating a strategy to exploit security vulnerabilities that may exist in your system. For example, if the version of software your machine is running is out of date and has an unpatched security issue, a hacker may be able to infer that by obtaining HTTP-based data, and breach the system that way.

What You Can Do to Prevent a Breach

It's important to realize that while there are lots of steps everyone can take to protect their digital assets, there is no one-size-fits-all solution. However, general practices such as keeping your system and application software as up to date as possible will help safeguard you from known threats, as well as avoiding suspicious-looking emails, even if the address seems familiar at first glance. Other approaches can be tailored to your needs; for example, if you run a business in which you only use a web browser and email when connecting to the internet, it may be prudent to invest in a hardware firewall, which can limit the internet traffic on your network to ports and services you permit. Updating passwords frequently or implementing multi-factor authentication can also help thwart or complicate any attacks from hackers.

Even with prudent measures, risks can still remain. Therefore, it's important to invest in measures to protect your data by means other than upgrading security too. Investing in a backup solution can ensure that even if your business falls victim to an intrusion, you will at least retain a copy of the data that was lost. Keeping backups up to date and maintaining multiple instances of them also protects you from other risks, such as data corruption, by allowing you to revert to a previous version of a file.
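As a rough illustration of the backup advice above, here is a minimal Python sketch that keeps several timestamped copies of a directory and prunes the oldest ones. The paths and retention count are examples only; production backup tools add encryption, deduplication, and off-site replication on top of this basic pattern.

```python
import shutil
import time
from pathlib import Path

def backup(source: Path, dest_dir: Path, keep: int = 5) -> Path:
    """Copy `source` to a timestamped snapshot and keep only the newest `keep`."""
    dest_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    snapshot = dest_dir / f"{source.name}-{stamp}"
    shutil.copytree(source, snapshot)  # full copy; real tools deduplicate

    # Retention: timestamped names sort chronologically, so delete the oldest.
    snapshots = sorted(dest_dir.glob(f"{source.name}-*"))
    for old in snapshots[:-keep]:
        shutil.rmtree(old)
    return snapshot

# Example invocation with placeholder paths:
backup(Path("/var/data"), Path("/mnt/backups"), keep=5)
```

Keeping multiple dated snapshots is what lets you roll back to a version from before an intrusion or a silent corruption, rather than restoring a backup that already contains the damage.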
The Bottom Line

It's important to take steps proactively to prevent an intrusion into your IT systems. If you're looking for help in safeguarding your digital assets, Cyber Sainik has many offerings in areas such as Security as a Service, Backup as a Service, and more. Get in touch with us today to see how we may be able to support your IT security strategy.
<urn:uuid:d5515302-dba1-4101-b6b8-3f26102f3f36>
CC-MAIN-2024-38
https://cybersainik.com/what-you-need-to-know-about-preventing-data-breaches/
2024-09-13T05:40:26Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651507.67/warc/CC-MAIN-20240913034233-20240913064233-00294.warc.gz
en
0.96333
766
2.78125
3
Encryption of information has become part of our modern digital life. However, classical cryptography poses a problem: encrypted output no longer matches the structure of the original records. In this article, we look at innovative approaches to protecting information without violating its structure and format.

Format-preserving encryption (FPE) is a method that allows you to securely encrypt data while preserving its original structure and format. Traditional encryption algorithms convert data into an unreadable form, which can make it difficult to further process and use, especially with legacy applications. However, with the advent of new technologies and the development of specialized encryption algorithms, it has become possible to secure data deterministically without violating its format. This is important in many areas, including communications, information storage and data processing, when retrofitting security onto existing systems.

Format-preserving encryption methods are usually built on specialized modes of operation of standard block ciphers, such as Feistel-based constructions. They allow you to protect information by converting it into a ciphertext that remains meaningful only to authorized users holding the appropriate key. This approach to format-preserving encryption provides usability and processing advantages without making significant changes to the structure of the data. This makes it especially useful in cases where it is necessary to ensure the security of information without violating its original format and functionality, for example to secure legacy applications.

With the development of cloud technologies, more and more organizations and users are turning to cloud storage for convenient and flexible storage of their data. However, the security and privacy of information remain priorities when using cloud services. Format-preserving encryption in the cloud becomes the solution to these problems. Cloud format-preserving encryption provides the ability to encrypt data before sending it to cloud storage while preserving its original format. This makes it possible to ensure the confidentiality of information, even if it is stored in an insecure cloud or transmitted over untrusted communication channels.

One approach to format-preserving encryption in the cloud is to use client-side encryption. The user's FPE tooling encrypts the data on their device before sending it to the cloud, and only they have access to the decryption. This provides reliable data protection, even if the cloud provider suffers an attack or unauthorized access. This approach combines the convenience of cloud services with the strength of encryption, ensuring that information is stored and transmitted securely in the cloud. This is especially important for organizations that process sensitive data and strive to comply with appropriate security standards.

Format-preserving encryption is an innovative approach to data protection that offers a number of significant benefits. Here are some of the main benefits that this approach provides:

Preserve data structure: Format-preserving encryption preserves the original structure and format of the plaintext, making it convenient and easy to use. Users can continue to work with data without wasting time decrypting it or restoring its original format.

Security and privacy: Format-preserving encryption provides a high level of data security and privacy.
Even in the event of unauthorized access or information leakage, it will be difficult for attackers to gain access to readable data without the appropriate key or authorization.

Flexibility and compatibility: Format-preserving encryption is usually based on standard algorithms and protocols, which ensures compatibility with various platforms and applications. This allows you to effectively use secure data in various environments and interact with other systems.

Security compliance: Format-preserving encryption enables organizations and users to comply with legal or industry-standard data security requirements. This is important to protect sensitive information, including personal customer data and trade secrets.

Format-preserving encryption combines the benefits of protecting data and preserving its original format, providing security, usability, and interoperability across systems and platforms. It is widely used in many areas where data security and structure preservation play an important role. Here are some examples of using this approach:

Protecting sensitive corporate information: Companies can use format-preserving encryption to protect sensitive data such as financial statements, client lists, internal documents, and other sensitive information. This allows you to maintain the confidentiality of information without violating its availability to authorized employees.

Secure cloud storage: Cloud users can take advantage of format-preserving encryption to protect their data while it is in transit and stored in the cloud. This approach guarantees confidentiality and protection from unauthorized access, while maintaining the convenience of working with data through cloud applications and services.

Secure network transmission: Format-preserving encryption can be used to protect data as it travels over a network, especially when the format of the data is an important aspect of its use. Examples would be the transmission of medical records, financial transactions or other sensitive data where format retention is an integral part of their subsequent processing and analysis.

Many vendors offer FPE in their products and services, including Entrust, Thales, HashiCorp, Futurex and others. Their FPE implementations are based on NIST Special Publication 800-38G, Recommendation for Block Cipher Modes of Operation: Methods for Format-Preserving Encryption. Three methods were specified in the original draft of this publication: FF1, FF2, and FF3; the finalized standard approved FF1 and FF3, and FF3 was later revised as FF3-1. Each is a format-preserving, Feistel-based mode of operation of the AES block cipher. Details of these, as well as patent and test kit information, can be found on the NIST Block Cipher Modes Development website.

Format-preserving encryption and tokenization are two different approaches to data protection that have their own characteristics and advantages. However, they complement each other and can be used in a mixed approach called format-preserving tokenization. For example, if the system is dealing with strictly formatted 16-digit numbers like a credit card number, FPE preserves the 16-digit length and numeric character set of the input for legacy system support. However, if strict formatting isn't required, as in a cardholder's name column, then tokenization is preferred because it offers stronger protection: the token has no mathematical relationship to the original value.
Here are a few key differences between FPE and tokenization: FPE is reversible with the encryption key and preserves strict formats, making it suitable for legacy systems; tokenization replaces values with randomly generated tokens that have no mathematical relationship to the original data and relies on a secure token vault for lookup.

As technology advances and data security requirements increase, modern format-preserving encryption techniques have become more powerful and efficient. Here are a few current NIST publications relevant to this area:

NIST SP 800-38G: This standard defines the approved format-preserving encryption methods, FF1 and FF3, which preserve the format of data when it is encrypted. It is recommended for protecting sensitive data by preserving its original format while ensuring strong encryption.

Draft NIST SP 800-38G Revision 1: This revision updates the format-preserving encryption methods, replacing FF3 with FF3-1 in response to published cryptanalysis. It refines the algorithms and parameters that allow the original data structure to be preserved when data is encrypted and decrypted.

NIST SP 800-38F: This related standard describes methods for key wrapping. It is not a format-preserving mode itself, but it is relevant to the key management that underpins any FPE deployment.

These NIST publications help ensure format-preserving encryption is reliable, secure, and interoperable. They are the basis for the development and implementation of encryption solutions that preserve the original data format while protecting it, and they support compliance with security requirements.

Format-preserving encryption can be more computationally intensive than conventional encryption. Processing data in its original format requires additional computing resources, which can lead to an increase in time and costs for encryption and decryption operations.

Preserving the data format during encryption can introduce additional difficulties in ensuring security. Incorrectly implemented or vulnerable data formats can become the target of attacks that threaten the confidentiality and integrity of information.

Format-preserving encryption requires appropriate support and integration into existing systems and applications. This can lead to difficulties when implementing new solutions and interacting with different platforms and devices.

Encryption key management is an important aspect of data security. Format-preserving encryption introduces additional complexities in key management to ensure data access and security.

One important aspect is conducting regular security audits to evaluate the effectiveness of the format-preserving technologies in place. These audits should encompass a thorough review of FPE algorithms, key management practices, and overall security controls to identify any potential vulnerabilities or areas for improvement.

Staying up to date with the latest standards and best practices in format-preserving encryption is crucial. It is essential to remain informed about updates from reputable organizations such as NIST and industry forums to ensure that encryption methods align with the most current guidelines and recommendations.

Another important consideration is the classification of data based on sensitivity. By categorizing data and establishing appropriate encryption policies, organizations can tailor their encryption approaches to meet the regulatory and compliance requirements specific to each data category.

Investing in employee training and education about format-preserving encryption is vital. Ensuring that employees have a solid understanding of encryption protocols, key management practices, and the significance of safeguarding sensitive data can significantly contribute to a robust security posture.
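To make the format-preserving idea tangible, here is a toy Python sketch of a Feistel network over digit strings: ciphertexts keep the same length and numeric alphabet as plaintexts. This is emphatically not FF1 or FF3-1 and must never be used to protect real data; it only shows why Feistel constructions suit FPE. The key, round count, and even-length restriction are simplifications for the example.

```python
import hashlib

def _round(value: int, key: bytes, i: int, mod: int) -> int:
    """Deterministic round function derived from a hash (toy PRF)."""
    digest = hashlib.sha256(key + bytes([i]) + value.to_bytes(8, "big")).digest()
    return int.from_bytes(digest[:4], "big") % mod

def fpe_encrypt(digits: str, key: bytes, rounds: int = 4) -> str:
    assert len(digits) % 2 == 0, "toy version handles even lengths only"
    half = len(digits) // 2
    mod = 10 ** half
    left, right = int(digits[:half]), int(digits[half:])
    for i in range(rounds):
        left, right = right, (left + _round(right, key, i, mod)) % mod
    return str(left).zfill(half) + str(right).zfill(half)

def fpe_decrypt(digits: str, key: bytes, rounds: int = 4) -> str:
    half = len(digits) // 2
    mod = 10 ** half
    left, right = int(digits[:half]), int(digits[half:])
    for i in reversed(range(rounds)):
        left, right = (right - _round(left, key, i, mod)) % mod, left
    return str(left).zfill(half) + str(right).zfill(half)

c = fpe_encrypt("4111111111111111", b"demo-key")
print(c)                                              # still 16 digits
print(fpe_decrypt(c, b"demo-key") == "4111111111111111")  # True
```

Because each round only adds a keyed value modulo 10^half, every intermediate state stays inside the digit alphabet, which is exactly the property the standardized FF1/FF3-1 modes achieve with AES as the round function.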
Is format-preserving encryption safe? Format-preserving encryption is designed to provide security for sensitive data while preserving its original format. When implemented correctly and combined with strong key management practices, format-preserving encryption can be considered safe.

How does it differ from conventional encryption? The main difference lies in the preservation of the original format. While conventional encryption transforms data into an unreadable format, format-preserving encryption ensures that the encrypted data retains the same format, allowing for seamless integration and processing.

What is an example of format-preserving encryption? An example is the FFX family of encryption modes, which allows for the encryption of data while maintaining its original format. It is commonly used in scenarios where preserving the format of data is essential, such as credit card or social security number encryption.
<urn:uuid:59a18500-5c20-4e20-a413-4b2ed3c1fb26>
CC-MAIN-2024-38
https://helenix.com/blog/format-preserving-encryption/
2024-09-13T04:55:35Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651507.67/warc/CC-MAIN-20240913034233-20240913064233-00294.warc.gz
en
0.907266
1,958
3.53125
4
NG Firewall is made up of two general classes of programs - the Untangle Virtual Machine and the rack applications themselves.

The Untangle Virtual Machine (or UVM) is a collection of Java classes that runs entirely inside a Java Virtual Machine (JVM). Memory used by the UVM is represented as memory used by the Java process on the system. Memory released by the UVM must be processed by the JVM garbage collector before being released to the operating system.

The other class of programs are the ancillary daemons that operate on network packets - examples include Spam Blocker (spamassassin), Application Control (classd) or Virus Blocker Lite (clamav). These processes all use their own memory, which is directly acquired from the operating system.

Linux also uses any free memory for temporary storage of data as it moves it around the system. This greatly speeds up I/O operations.

With all that being said, we can take a look at a few graphs from a real NG Firewall system:

The Memory Usage report displays the actual usage of the real or physical RAM. This graph can give you an indication of an issue occurring over a time period. Green indicates usage, white indicates free memory.

The Swap Usage report, naturally, displays the amount of memory swapping that is occurring on the appliance. Swapping is the process whereby a page of memory is copied to the pre-configured space on the hard disk, called swap space, to free up that page (chunk) of memory. A little bit of swap usage is okay. (More on that in the conclusion below.)

Here is the output of the free -m command on the system (numbers are in MB):

                     total    used    free    shared    buffers    cached
Mem:                  3903    3721     182         0         14       522
-/+ buffers/cache:             3183     720
Swap:                10291      583    9708

The first line shows "free memory" that doesn't include buffers (the temporary cache). To get the real free memory, you want the second line, which counts buffers as free memory. As you can see, the system has 720MB free.

A little bit of swap usage is okay, as it's mostly "idle" memory that can't be accessed, or duplicated memory that isn't pulled into real memory unless it's being written to. Generally speaking, we don't want to see swap growing. If your NG Firewall starts swapping, bad things can start happening. If a system like the NG Firewall (which is mostly performing I/O operations) starts swapping, things start to go downhill very quickly. NG Firewall will likely not be accessible through the web GUI and will appear to stop passing traffic; however, it's really just busy working on packets that were sent to it previously. As it gets further behind, traffic actually stops getting sent (people stop trying to use the network because it's down) and after a few minutes it recovers.

In situations such as the one laid out above, adding RAM should help.
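As a rough sketch of how you might watch for the swap growth described above, the following Python snippet reads the Linux /proc/meminfo counters that free itself reports. The warning threshold is illustrative only and should be tuned per appliance.

```python
def read_meminfo() -> dict:
    """Parse /proc/meminfo into a dict of counters (values in kB)."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.strip().split()[0])
    return info

m = read_meminfo()

# Mirror the free -m arithmetic: count buffers/cache as reclaimable.
mem_free = (m["MemFree"] + m.get("Buffers", 0) + m.get("Cached", 0)) // 1024
swap_used = (m["SwapTotal"] - m["SwapFree"]) // 1024

print(f"Real free memory (incl. buffers/cache): {mem_free} MB")
print(f"Swap in use: {swap_used} MB")
if swap_used > 512:  # example threshold; tune for your appliance
    print("Warning: sustained swap usage -- consider adding RAM")
```

Running something like this periodically (from cron, for example) gives you the same "second line of free" view over time, so you can catch swap creeping upward before the appliance starts falling behind on packets.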
<urn:uuid:4d90461c-7d98-4904-a31a-f76912178f1a>
CC-MAIN-2024-38
https://support.edge.arista.com/hc/en-us/articles/200683518-NG-Firewall-and-Memory-RAM-Use
2024-09-19T08:03:29Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651995.50/warc/CC-MAIN-20240919061514-20240919091514-00694.warc.gz
en
0.948333
644
2.734375
3
Generative AI and Power BI: A Powerful Duo for Data Analysis

What Is Generative AI?

Generative AI is an umbrella term for a range of powerful models capable of producing original outputs based on the provided information. Hence the moniker "generative". This "generative" nature allows it to create anything from fresh texts (like ChatGPT), never-before-seen visuals (like DALL·E 2), and even new audio pieces (like AudioLM) or code snippets (like GitHub Copilot).

In the simplest terms, generative AI is trained on massive data samples. The goal of the training is to teach the model to classify various inputs based on the labels provided by researchers: a model learns that an apple is "red", "round", "juicy", etc. The scale of the data sets needs to be substantial. For example, GPT-3 devoured 45 terabytes of text data and has 175 billion parameters – and it is not even the largest model there is.

Neural networks lie behind the impressive capabilities of modern generative AI models. This machine learning technique mimics the human brain's structure by utilizing interconnected processing units (neurons) arranged in layers. Essentially, these neural networks give generative AI models the "intelligence" to produce creative outputs, make data-backed decisions, and perform a variety of other tasks.

The big boon of generative AI models is their user-friendliness. Unlike traditional programming that requires complex code, generative AI can be instructed using natural language – simply type in your request! This accessibility, combined with the lightning-fast generation of outputs, translates to a quantum boost in worker productivity and delivers substantial economic benefits.

Generative AI models can take over a number of routine, low-value tasks. For example, in the business intelligence domain, generative AI models can help with data querying, analysis, and visualization. In software engineering, generative tools can help with code reviews and refactoring, plus a wide range of infrastructure management tasks. McKinsey estimates that Gen AI can automate 60%-70% of repetitive tasks currently consuming employees' time.

Generative AI can also make data analytics more accessible to a wider audience by bridging the data skills gap. Users with any background can interact with data using natural language commands to receive personalized results. Likewise, generative AI models can be tasked to build data visualizations and dashboards with clear commentary on the data, sources, and statistical methods used. By utilizing explainability features (XAI), users can understand the model's reasoning to mitigate the risks of biases or inconsistencies.

Trained on massive datasets, generative AI excels at identifying unique patterns and correlations that humans might miss. This creative "shtick" can enhance the innovative capabilities of your business. For example, Siemens' Simcenter includes a generative AI tool that helps discover optimal system architectures. The model scans through thousands of possibilities based on the input product characteristics and then suggests the best-fit system architecture pattern.

Microsoft has a well-established reputation for innovation in AI. Their Cloud AI developer services have consistently ranked among the best in Gartner's Magic Quadrant for five years running. These services empower developers to build, deploy, and manage custom AI models in production environments.
Here you can read in detail about practical examples of how AI in Power BI is being leveraged across various industries. The following section will explore four key options for deploying AI within Power BI for enhanced data exploration and insights.

Power BI employs Microsoft's advanced generative AI model to give users a seamless, natural language interface. Introduced in May 2023, Copilot within Power BI lets users interact with data using natural language commands for tasks like data retrieval, editing Data Analysis Expressions (DAX) calculations, and even report or visual dashboard generation. Beyond data manipulation, Copilot acts as a conversational guide, offering insightful answers to user queries and generating data summaries that enhance data storytelling.

Developers can also make a direct integration between ChatGPT and Power BI to get assistance from the famous OpenAI model. Firstly, ChatGPT can help construct complex calculations and advanced queries within Power BI models. Secondly, its analytical capabilities can be leveraged to troubleshoot errors encountered during development. Finally, ChatGPT holds promise for optimizing report generation by automating repetitive tasks and refining report structures.

Power Query M functions offer an efficient language for data manipulation within Power BI. Tools like ChatGPT could potentially provide "Power Query M function support" in the form of:

- error correction
- code generation based on user instructions
- development through natural language code generation.

For advanced AI capabilities within Power BI, developers can leverage Microsoft Azure Cognitive Services – pre-trained, customizable AI models, packaged as application programming interfaces (APIs). Deployable to any cloud or edge application with containers, Cognitive Services provide advanced analytical capabilities to enhance applications. Power BI offers an option to enrich existing dataflows with available Cognitive Services models via a graphical interface. Current Power BI integration with Cognitive Services supports:

- Automatic language detection and text recognition in 120 languages
- Keyword and phrase extraction from unstructured texts
- Sentiment analysis for smaller text documents
- Image tagging, capable of identifying over 2,000 objects.

Azure Cognitive Services help data analysts handle large datasets more effectively by reducing time spent on data cleansing, labeling, and preparation for self-service analytics usage.

Power BI also lets you build fully custom machine learning models to run against your data using automated machine learning (AutoML) tools from the Azure Machine Learning service. AutoML provides tools for deploying supervised learning techniques such as binary prediction, classification, and regression models. You do not need an Azure subscription to use AutoML in Power BI, since the tool entirely manages the process of training and hosting ML models.

Generative AI Use Cases in Data Analytics and BI

Whether you are using Power BI or other self-service BI tools, generative AI models have a lot to offer in terms of streamlined workflows and superpowered analytical capabilities. Here is how you can uncover value buried beneath your data.

Generative AI tackles a major pain point in data analysis: data silos. By automating data classification, tagging, anonymization, and segmentation, it streamlines data organization and accessibility.
Tools like Microsoft Fabric, integrated with Power BI's Copilot mode, support the potential of generative AI for improving data lineage and governance within data management platforms.

Data analysis is fundamentally driven by the pursuit of new intel. The problem, however, is that traditional models often inherit the "thinking process" of their creators. For example, your domain experts may have preconceived notions and biases that get incorporated into the model. Likewise, some users might struggle to formulate the right questions or approach the data from an unconventional angle. Well-trained generative AI models can uncover new data dimensions and correlations and present them to business users for consideration. Generative AI models can ideate at fast speeds by building thousands of associations within seconds, generating various novel concepts for further human evaluation.

Generative AI models are able to extract key findings and generate concise summaries from lengthy reports. These models can leverage analyzed data to automatically generate entire reports, including narrative text that contextualizes the findings. In this way, data analysts shift their focus towards higher-level tasks like interpretation, recommendation, and strategic data storytelling. Who wouldn't benefit from clear-cut reports that effectively communicate complex findings in a timely manner?

The latest generation of Gen AI models is capable of scenario modeling and prescriptive analytics. Using historical data and domain knowledge, these models can assess the feasibility of proposed actions by juggling multiple variables and evaluating different scenarios. Based on these forecasts and feasibility assessments, the models can provide prescriptive recommendations for optimal decision-making.

How to Use Generative AI in Power BI

Gen AI can bring game-changing performance gains to data teams. However, despite all the hype and surging interest, many IT leaders are also wary of the potential security and bias risks.

Main Concerns with Generative AI

To capitalize on the full spectrum of generative AI capabilities, both present and future, organizations need to implement these best practices:

Focus on Data Security

Many commercial generative AI models use the input data for model training purposes, which may not be ideal for privacy-focused industries. Likewise, analysts may include sensitive data in proprietary models by mistake. To mitigate the data privacy and security risks of GenAI usage, organizations need to:

- Establish auditable trails on GenAI data collection, storage, and processing practices.
- Use data anonymization and/or aggregation to mask sensitive data that is shared with AI models.
- Implement granular access controls to restrict data access to authorized users or processes.
- Employ secure user authentication methods and role-based authorization for different types of data manipulation.

In other words, organizations need to create a strong data governance process — one that allows full data traceability across the organization. Each created dataset needs to have a clear owner and a list of users with access/modification permissions. It should also be validated against the organization's compliance rules. Modern solutions like Microsoft Purview and Microsoft Fabric help establish a clear data ownership structure, paired with scalable, secure data-sharing practices. These tools can help control which data is consumed by GenAI models and prevent accidental disclosures.
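As a small illustration of the anonymization bullet above, here is a hedged Python sketch that masks obvious identifiers before any text is handed to a generative model. The regex patterns and labels are simplistic examples; production systems should use vetted PII-detection services rather than a handful of regexes.

```python
import re

# Ordered patterns: more specific formats (SSN) run before broader ones.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def mask_pii(text: str) -> str:
    """Replace matched identifiers with bracketed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarize: Jane (jane@example.com, SSN 123-45-6789) churned in Q2."
print(mask_pii(prompt))
# -> "Summarize: Jane ([EMAIL], SSN [SSN]) churned in Q2."
```

Masking happens on the client side, before the prompt leaves your environment, which is exactly the point: the model still sees enough context to summarize, but never the raw identifiers.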
Implement Quality Assurance Processes for Developed Models

Generative AI models gain their "knowledge" from their creators. Issues at the design level lead to subpar model performance and biased results, and without extensive quality assurance and model observability, unconscious biases make their way into new models. Amazon once built an AI resume-rating tool that ranked female candidates unfavorably because of their gender. An early version of a Google Photos image-recognition algorithm discriminated against Black people. In data analytics, such flaws can result in plainly wrong calculations or data interpretations, as happened when Bing AI presented inaccurate analyses of earnings reports for selected companies.

To avoid biases in AI model design, it is good practice to:
- Select the best-fit learning approach for the use case. GenAI models can be built with unsupervised or semi-supervised learning techniques, and each has pros and cons for the quality and accuracy of the produced outputs.
- Train models on datasets representative of your organization and industry. The data provided must be comprehensive and balanced, and it is often better to train models on proprietary data than on public datasets.
- Implement model observability to analyze the model's behavior, data, and performance across its lifecycle. Observability helps detect and investigate performance drift and anomalies in a timely fashion (a minimal drift-check sketch closes this article).

Build a Corporate Culture of AI Usage

AI makes some people uncomfortable for one reason or another, and without a clear understanding of the benefits and use cases of GenAI, adoption will remain an uphill battle. At the technology level, organizations need to establish a better data management infrastructure for continuous dataset creation and self-service access to insights; this step alone requires major changes to both supporting infrastructure and supporting processes. At the process level, leaders need to identify the problems that can be effectively solved with GenAI: the adoption effort should center on solving actual business challenges, not on treating AI as an end in itself. At the people level, employees need to be educated on the purpose, benefits, constraints, and risks of the available AI solutions, and briefed on security and privacy best practices.

Generative AI and Power BI form a powerful duo, bringing a true transformation to data analysis. AI streamlines workflows, unearths hidden insights, and generates clear reports, while Power BI's user-friendly interface and Azure integration provide a platform for these AI capabilities. Security, model quality, and an AI-aware culture are integral parts of this tech duo. As AI evolves, Power BI will adapt, and data teams will gain the opportunity to unlock their information's full potential.
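As a concrete illustration of the model-observability practice recommended in the quality-assurance section above, here is a minimal drift check using the Population Stability Index. It is a sketch under simple assumptions (a single numeric feature, fixed bins, synthetic data); production monitoring would track many features and alert on thresholds.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a reference sample and new data.
    A common rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift,
    > 0.25 significant drift (a convention, not a law). Values outside
    the reference range are ignored in this simplified sketch."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid log(0) on empty buckets.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)  # training-time feature distribution
today = rng.normal(0.4, 1.0, 10_000)     # shifted production data
print(f"PSI: {psi(baseline, today):.3f}")  # noticeably above the 0.1 threshold
```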
CAPTCHAs, those squiggly and frustrating puzzles that many Web sites require users to solve before registering or leaving comments, are designed to block automated activity and deter spammers. But for some Russian-language forums that cater to spammers and other miscreants, CAPTCHAs may also be part of a vetting process designed to frustrate foreigners and outsiders.

“Verified,” one of the longest-running Russian-language forums dedicated to Internet scammers of all stripes, uses various methods to check that users aren’t just casual lurkers or law enforcement. It recently began using CAPTCHAs that quiz users about random bits of Russian culture when they register or log in.

Consider this CAPTCHA, from Verified: “Введите пропущенное число ‘… мгнoвeний вeсны.'” That translates to, “Enter the missing number ‘__ moments of spring.'” But it may not be so simple to decipher “мгнoвeний вeсны,” the “moments of spring” bit. One use of a cultural CAPTCHA is to frustrate non-native speakers who are trying to browse forums using tools like Google Translate. For example, Google translates мгнoвeний вeсны to the transliteration “mgnoveny vesny.” The answer to this CAPTCHA is “17,” as in Seventeen Moments of Spring, a 1973 Russian television mini-series that was enormously popular during the Soviet era but is probably unknown to most Westerners.

Although cultural CAPTCHAs may not stop those determined to break them, they are an interesting approach to blocking unwanted users. Most CAPTCHA systems can be trivially broken because they merely require users to repeat numbers and letters. Some CAPTCHAs ask the visitor to solve math or logic puzzles, but these questions can be answered by anyone with a grade-school grasp of math. Spammers tend to rely on commercial, human-powered CAPTCHA-solving services, which automate the solving of CAPTCHAs with the help of low-paid workers in China, India and Eastern Europe who earn pennies per hour deciphering the puzzles. CAPTCHAs that bombard workers at these facilities with a range of cultural questions might frustrate those low-paid workers, but the challenges would likely be even more frustrating (not to mention alienating and offensive) to legitimate users who are unfamiliar with the targeted culture.

In many ways, cultural CAPTCHAs seem uniquely suited to small, homogeneous, and restricted online communities. I would not be surprised to see their use, variety and complexity increase throughout the criminal underground, which is constantly trying to combat the leakage of forum data that results when authorized members have their passwords lost or stolen.
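To make the mechanism concrete, here is a toy sketch of a quiz-style cultural CAPTCHA. The first question comes from the article itself and the second is an invented stand-in; a real forum would serve challenges from a much larger randomized bank, render them server-side, and rate-limit attempts.

```python
import random

# Toy illustration only. The first question is the example from the article;
# the second is an invented stand-in. A real forum would serve challenges
# from a large randomized bank, render them server-side, and rate-limit tries.
CHALLENGES = [
    ("Enter the missing number: '... moments of spring'", "17"),
    ("In what year did Gagarin first fly to space?", "1961"),
]

def cultural_captcha() -> bool:
    question, answer = random.choice(CHALLENGES)
    reply = input(question + " ")
    return reply.strip() == answer

if __name__ == "__main__":
    print("Access granted" if cultural_captcha() else "Access denied")
```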
Importance of Social Engineering Awareness Training

In today's digital age, cybersecurity is more important than ever. But while companies invest heavily in technological defenses, the weakest link in security is often the human element. Social engineering attacks, which exploit human psychology rather than technical vulnerabilities, are on the rise. This is why social engineering awareness training is not just beneficial but essential for organizations of all sizes. By educating employees on the risks and signs of social engineering, businesses can better safeguard their sensitive data against malicious actors. In this article, we explore the value of social engineering awareness training and provide actionable insights for protecting your company from the inside out.

Understanding Social Engineering

Social engineering is a form of manipulation that tricks individuals into divulging confidential or personal information that may be used for fraudulent purposes. It is a threat that transcends the digital world and can occur in person, over the phone, or through any form of virtual communication.

Common Social Engineering Techniques

- Phishing: Phishing is the most common social engineering technique. It involves fake communications that appear to come from a trusted source. Attackers use emails, messages, or websites to trick individuals into providing sensitive information, such as login credentials or financial details. Real-world incident: In 2020, a major tech company fell victim to a phishing attack in which employees received emails that seemed to come from the IT department, asking them to reset their passwords. Many employees complied, leading to a significant data breach.
- Pretexting: In pretexting, attackers create a fake scenario to obtain information, often by pretending to be someone in a position of authority or trust. Real-world incident: In a well-known case, attackers posed as police officers to convince a bank employee to reveal customer account information, leading to significant financial losses.
- Baiting: Baiting exploits curiosity or greed by offering something tempting, like free software or a USB drive, that contains malware. Real-world incident: Attackers left USB drives labeled "Confidential" around a company's parking lot. Employees who picked them up and plugged them into their computers inadvertently installed malware on their systems.
- Tailgating: Also known as "piggybacking," tailgating occurs when an unauthorized person follows an authorized individual into a secure area.
- Vishing: Voice phishing, which uses the same tactics as phishing but is carried out over the phone.
- Spear Phishing: Highly targeted phishing attempts directed at specific individuals or companies.

Impact of Social Engineering

The effects of successful social engineering attacks can be severe and far-reaching, causing significant financial, operational, and reputational damage.

- Data Breaches: Social engineering can lead to unauthorized access to sensitive information, resulting in data breaches. Compromised data can be used for identity theft or financial fraud, or sold on the dark web.
- Financial Losses: Organizations may suffer substantial financial losses due to stolen funds, fraudulent transactions, or ransomware attacks demanding payments.
- Reputational Damage: A security breach can severely damage an organization's reputation. Customers and clients may lose trust, leading to a decline in business.
- Operational Disruption: Social engineering attacks can disrupt business operations, causing downtime, loss of productivity, and additional costs for recovery.

Case Studies: Impact of Social Engineering Awareness Training

Real-world examples can illustrate the transformative impact of effective training.

Company A: Before and After Training
Before implementing social engineering awareness training, Company A experienced frequent phishing attacks, and several employees fell victim. After a comprehensive training program, the frequency of successful attacks dropped significantly.

Company B: The Cost of Neglect
In contrast, Company B neglected to invest in awareness training. When targeted by a spear-phishing attack, an employee unwittingly compromised sensitive customer data, resulting in financial and reputational damage.

Why Social Engineering Awareness Training is Crucial

Social engineering awareness training is not just an add-on to your cybersecurity strategy; it's a vital component of your defense mechanism.

Defense Against Sophisticated Scams
As attackers become more sophisticated, so too must our defenses. Training empowers employees to recognize and respond to advanced tactics that traditional security software may not catch.

Establishing a Culture of Security
Regular training sessions signal to your staff that security is a priority, fostering a culture of awareness and vigilance that can be the best line of defense against social engineering attacks.

Legal and Compliance Implications
In many industries, there are legal and regulatory requirements to provide security awareness training. Neglecting this responsibility can lead to fines, penalties, and a damaged reputation.

Key Training Components

An effective social engineering awareness training program should include several essential components to ensure comprehensive coverage and lasting impact.

- Education on Threats: Training should start with detailed information on various social engineering techniques, providing real-life examples to illustrate each type.
- Scenario-based Training: Practical exercises, such as simulated phishing attacks, help employees recognize and respond to real-life scenarios. These simulations provide hands-on experience in identifying and mitigating threats.
- Incident Response Training: Employees should be trained on the steps to take if they suspect or identify a social engineering attack. This includes knowing who to report to, immediate actions to minimize damage, and proper documentation procedures.
- Regular Updates: Social engineering tactics continually evolve. Regular training updates ensure employees stay informed about the latest threats and defenses.
- Evaluation and Feedback: Conduct regular assessments to measure the effectiveness of the training. Use surveys, quizzes, and feedback sessions to identify areas for improvement and adapt the training program accordingly.

Implementing Effective Training

To maximize the effectiveness of a social engineering awareness training program, consider the following best practices:

- Management Support: Secure endorsement and active participation from top-level management to emphasize the importance of the training. Leadership involvement demonstrates commitment and encourages employee engagement.
- Tailored Content: Customize the training content to address specific threats relevant to the organization's industry, size, and operational context.
- Interactive Modules: Use interactive training modules, such as videos, quizzes, and hands-on exercises, to enhance engagement and retention. Interactive content helps employees better understand and remember key concepts.
- Periodic Reinforcement: Reinforce training with regular refresher sessions and ongoing communication about emerging threats. Continuous education helps maintain high levels of awareness and readiness.
- Feedback Mechanisms: Establish channels for employees to provide feedback on the training. Use this feedback to continuously improve and adapt the training program to meet the organization's evolving needs.

Advanced Social Engineering Penetration Testing

To further enhance security, organizations should consider conducting social engineering penetration testing. This involves simulated attacks carried out by ethical hackers to identify weaknesses in human factors and organizational processes.

Benefits of Social Engineering Penetration Testing:
- Identify Weaknesses: Penetration testing helps reveal potential security gaps and weaknesses in employee awareness and organizational processes.
- Real-world Simulation: These tests provide a realistic assessment of how employees would respond to actual social engineering attacks, offering valuable insights for improving defenses.
- Customized Reports: Penetration testing providers deliver detailed reports with specific recommendations tailored to the organization's needs, helping to strengthen overall security posture.

Best Social Engineering Service Provider in India

Selecting a reputable provider for social engineering testing services is crucial for ensuring effective and comprehensive assessments. When choosing a service provider in India, consider the following criteria:
- Proven Track Record: Look for providers with a history of successful social engineering tests and satisfied clients.
- Certified Professionals: Ensure the provider employs ethical hackers with relevant certifications, such as CEH (Certified Ethical Hacker) or CISSP (Certified Information Systems Security Professional).
- Comprehensive Services: Choose a provider that offers a wide range of services, from phishing simulations to full-scale penetration testing, tailored to meet the organization's specific needs.

Social engineering awareness training is a critical component of a robust cybersecurity strategy. By educating employees about the dangers of social engineering and teaching them how to recognize and respond to attacks, organizations can significantly reduce their risk of a security breach. To truly protect your company, invest in comprehensive training, foster a culture of security awareness, and implement best practices for fraud prevention. With these steps, you can create a human firewall that complements your technological defenses and keeps your organization safe. In the fight against cyber threats, knowledge is power. Equip your team with the awareness and tools they need to defend against social engineering attacks, and ensure the security of your business for the future.
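To ground the "recognize and respond" theme in something concrete, here is a toy heuristic of the kind a training demo might walk through: a crude phishing score based on a few classic tells. The keyword list, domains, and weights are invented for illustration, and nothing here substitutes for a real email security gateway.

```python
import re

URGENCY = ("urgent", "immediately", "verify your account", "password expires")

def phishing_score(sender: str, reply_to: str, body: str) -> int:
    """Crude scoring heuristic for training demos; not a substitute
    for a real email security gateway."""
    score = 0
    sender_domain = sender.rsplit("@", 1)[-1].lower()
    reply_domain = reply_to.rsplit("@", 1)[-1].lower()
    if sender_domain != reply_domain:
        score += 2  # mismatched Reply-To is a classic tell
    if any(keyword in body.lower() for keyword in URGENCY):
        score += 1  # manufactured urgency
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", body):
        score += 2  # links to raw IP addresses rarely appear in real mail
    return score

message = "URGENT: verify your account at http://203.0.113.7/login"
print(phishing_score("it@company.com", "help@mailbox.ru", message))  # -> 5
```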
In 2018 the Leadership Computing Facility at Oak Ridge National Laboratory (ORNL) in Tennessee installed Summit, then the world's most powerful supercomputer, to help it break new ground in scientific research. The US Department of Energy built Summit using nearly 28,000 NVIDIA Volta GPUs and high-speed NVLink interconnect technologies. Capable of delivering a peak 200 petaflops of double-precision computing, it is ten times faster than its predecessor, Titan, the system that set ORNL on its pioneering path of GPU-accelerated computing.

Summit's huge performance boost has already begun powering scientific research into areas ranging from fusion energy to advanced materials and human diseases. In an accompanying video, Bronson Messer, the facility's senior scientist, talks about one of ORNL's most ambitious projects: exploring the mystery of what happens when stars end their lives in supernova explosions.

"This is the birthplace of neutron stars, black holes and, most importantly, the place where we are born," says Messer. "Everything on the periodic table, the iron in our blood, the gold around your neck, is made in stars, or in their deaths. We want to understand how the elements are made and how they are disseminated in interstellar space."

Building a computational simulation model of a supernova explosion involves a previously unmanageable panoply of calculations. "We can solve all those equations simultaneously in parallel on Summit's GPUs," says Messer. Eventually, the team of scientists will compare the precise isotopic data they have obtained with Summit to observations others have gleaned from optical telescopes, gamma-ray telescopes in orbit and from meteorites carrying evidence of the solar system's birth. The aim is nothing less than knowing the Earth's origins.

"We want to understand where the story goes when we trace it back to the individual stellar explosions that we simulate on very large computers like Summit," says Messer.
There's been a notable increase in demand for legislation to guide the future of generative AI technology. This renewed interest in the role government plays in technology raises the question: does technology influence legislation, or vice versa?

I recently read an article that caught my attention, not because it called for the US government to regulate technological innovation, but because it called for leveraging technology to shape the legislative process. US lawmakers are now pushing to establish a bipartisan commission that calls on experts to collect, review, analyze, and make recommendations to Congress using data to drive evidence-based policymaking.

As a proponent of truth in data, I applaud the lawmakers advocating for evidence-based policies. As a technologist who has worked with the public sector for nearly two decades, I'm passionate about the role technology can play in helping lawmakers deliver on their evidence-based promise.

The promise of evidence-based policymaking

The new initiative would not only create more transparency but also enable lawmakers to harness the power of data to gain deep insights for future policymaking and to prioritize the services government delivers to its citizens. But data can be challenging to find across different systems and organizations. It can be even harder to clean and normalize its format, all of which creates huge barriers for lawmakers hoping to leverage data for their policymaking. Technology can change that. Here are just a few examples of how a modern, robust technology foundation can provide the necessary tools to leverage existing data for effective policymaking (a small data-preparation sketch follows the list):

- Extraction: Data lives in diverse formats across different organizations and systems. Data integration technology has matured significantly, making it easier to extract data from these diverse sources.
- Normalization: Data in these systems may be in varying structured and unstructured formats. Advances in machine learning can help normalize the structure and bring uniformity.
- Cleansing: Data quality varies widely due to differences in validation rules, or the lack thereof. Various algorithms and statistical methods can cleanse the data by identifying missing values, duplicate records, and inconsistent formats; removing duplicates; enriching existing data; and converting it to a standardized format or structure.
- Analysis: Data science has emerged as a multi-disciplinary field that integrates domain knowledge with statistics and computer science to provide deep insights not only from structured data but also from noisy and unstructured data.
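To make the normalization and cleansing steps tangible, here is a small pandas sketch over a hypothetical extract. The agency names, dates, and amounts are invented, and format="mixed" assumes pandas 2.x.

```python
import pandas as pd

# Hypothetical extract pulled from two agency systems with clashing formats.
raw = pd.DataFrame({
    "agency":   ["HUD", "hud ", "DOT", "DOT", "EPA"],
    "reported": ["2023-01-15", "01/15/2023", "2023-02-01", "2023-02-01", "2023-03-10"],
    "amount":   ["1,200", "1200", "980", "980", None],
})

clean = (
    raw.assign(
        agency=raw["agency"].str.strip().str.upper(),              # normalize labels
        reported=pd.to_datetime(raw["reported"], format="mixed"),  # unify date formats
        amount=pd.to_numeric(raw["amount"].str.replace(",", "")),  # strip separators
    )
    .drop_duplicates()  # the HUD and DOT rows collapse once normalized
)
clean["amount"] = clean["amount"].fillna(clean["amount"].median())  # impute gaps
print(clean)
```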
What is a Firewall? The Different Firewall Types & Architectures

One of the major challenges that companies face when trying to secure their sensitive data is finding the right tools for the job. Even for a common tool such as a firewall, many businesses might not know how to find the right firewall (or firewalls) for their needs, how to configure those firewalls, or even why firewalls are necessary.

What is a Firewall?

Firewalls are the first line of defense for your network security. A firewall is a type of cybersecurity tool used to monitor and filter incoming and outgoing network traffic – from external sources, internal sources, and even specific applications. The primary goal of a firewall is to block malicious traffic requests and data packets while letting legitimate traffic through. There are many types of firewall deployment architectures, including network-based (software), host-based (hardware), and cloud-based. Every firewall operates based on predetermined rules to determine which outside networks and applications can be trusted. As such, firewalls are a key component of any network security architecture.

How Does a Firewall Function?

So, how do firewalls work? Simply put, a firewall shields your network from suspicious data by inspecting incoming data packets for threats. Firewalls analyze network traffic for data content, which firewall ports (or entry points) the data is trying to use, and where the data originated. Different types of firewalls use different methods – or combinations of methods – to assess potentially malicious sources. These methods include packet filtering, TCP verification, deep-layer inspection, and proxy checkpoints. Next-generation firewalls (NGFWs) go even further by employing preventative measures, such as using machine learning to detect unusual data behavior.

8 Types of Firewalls and Deployment Architectures

Firewall types can be divided into several categories based on their general structure, method of operation, and whether they offer basic or advanced threat protection (ATP). Examples of firewalls can be found below.

Firewall Types:
- Packet-filtering firewalls
- Circuit-level gateways
- Stateful inspection firewalls
- Application-level gateways (a.k.a. proxy firewalls)
- Next-gen firewalls

Firewall Delivery Methods:
- Software firewalls
- Hardware firewalls
- Cloud firewalls

To determine which firewall is best for your business's cybersecurity needs, here are some detailed explanations:

Type 1: Packet-Filtering Firewalls

Packet-filtering firewalls are the most basic and oldest type of firewall. Packet filtering involves creating a checkpoint at a traffic router or switch. The firewall performs a simple check of the data packets coming through the router – inspecting information such as the destination and origination IP addresses, packet type, port number, and other surface-level details without opening the packet to examine its contents. It then drops the packet if the information doesn't pass inspection. (A toy sketch of this rule-matching logic appears later in this article.) The good thing about these firewalls is that they are not very resource-intensive. Using fewer resources means they are relatively simple and don't meaningfully impact system performance. However, they are also relatively easy to bypass compared to firewalls with more robust inspection capabilities.

Type 2: Circuit-Level Gateways

Circuit-level gateways are another simple firewall type meant to quickly and easily approve or deny traffic without consuming considerable computing resources.
Circuit-level gateways work by verifying the transmission control protocol (TCP) handshake. This TCP handshake check is designed to ensure the requested packet session is legitimate. While extremely resource-efficient, these firewalls do not check the packet itself. So, if a packet carried malware but had the proper TCP handshake, it would pass through easily. Vulnerabilities like this are why circuit-level gateways are not enough to protect your business by themselves.

Type 3: Stateful Inspection Firewalls

Stateful inspection firewalls combine packet inspection technology and TCP handshake verification to offer more serious protection than either of the two architectures could provide alone. They can also keep a contextual database of vetted connections and draw on historical traffic records to decide how much scrutiny each packet warrants. However, these firewalls put more strain on computing resources, which may slow the transfer of legitimate packets compared to the other solutions.

Type 4: Proxy Firewalls (Application-Level Gateways/Cloud Firewalls)

Proxy firewalls (also known as application-level gateways or cloud firewalls) operate at the application layer to filter incoming traffic between your network and the traffic source. These firewalls are delivered via a cloud-based solution or another proxy device. Rather than letting traffic connect directly, the proxy firewall first establishes a connection to the source of the traffic and inspects the incoming data packet. This check assesses both the packet and the TCP handshake protocol, similar to the stateful inspection firewall. Proxy firewalls may also perform deep-layer packet inspections, checking the actual contents of the information packet to verify that it does not contain malware. Once the check is complete and the packet is approved to connect to the destination, the proxy sends it off. This creates an extra layer of separation between the "client" – the system where the packet originated – and the individual devices on your network, providing additional anonymity and network protection. The one drawback to proxy firewalls is that they can create significant slowdowns because of the extra steps in the data packet transfer process.

Type 5: Next-Generation Firewalls

Many recently released firewall products are touted as "next-generation" architectures. However, there is no consensus on what makes a firewall genuinely next-gen. Next-generation firewall architectures typically include the same core features as other firewall iterations – deep-packet inspection, TCP handshake checks, and surface-level packet inspection. They can also include other technologies, such as intrusion prevention systems (IPSs) that automatically stop application-level and malware attacks against your network. Since there is no single definition of a next-generation firewall, it is essential to verify what specific capabilities such firewalls have before investing.

Firewall Deployment Architecture 1: Software Firewalls

Software firewalls include any type of firewall that is installed on a local device rather than a separate piece of hardware or cloud server. The big benefit of a software firewall is that it is highly useful for providing in-depth security by isolating individual network endpoints from one another. However, maintaining individual software firewalls on different devices can be difficult and time-consuming.
Furthermore, not every device on a network may be compatible with a single software firewall, which may mean having to use several different software firewalls to cover every asset.

Firewall Deployment Architecture 2: Hardware Firewalls

Hardware firewalls use a physical appliance that acts like a traffic router to intercept data packets and traffic requests before they're connected to the network's servers. Physical appliance-based firewalls like this excel at perimeter security by ensuring malicious traffic from outside the network is intercepted before the company's network endpoints are exposed to risk. However, the major weakness of a hardware-based firewall is that it is often easier for insider attacks to bypass it. In addition, the actual capabilities of a hardware firewall may vary depending on the manufacturer – for example, some may have a more limited capacity to handle simultaneous connections than others.

Firewall Deployment Architecture 3: Cloud Firewalls

Cloud firewall – also called firewall-as-a-service or FWaaS – refers to any firewall delivery architecture that uses a cloud solution. Many consider cloud firewalls synonymous with proxy firewalls, since a cloud server is often used in a proxy firewall setup (although the proxy does not necessarily have to be on the cloud, it frequently is). The primary benefit of cloud-based firewalls is that they are straightforward to scale with your organization. As your needs grow, you can add additional capacity to the cloud server to filter larger traffic loads. Cloud firewalls, like hardware firewalls, excel at perimeter security.

State of Firewalls in 2024

While numerous iterations of firewalls have emerged in the past decades, the continuous tenacity and adaptability of firewall technology consistently demonstrate that organizations with a resilient firewall infrastructure maintain a cybersecurity edge over those without one. Here are some trends to watch out for in 2024:

- Next-generation firewalls are trending towards increased use of artificial intelligence (AI) and machine learning (ML) to automate security tasks and predict likely sources of anomalous traffic patterns.
- Cloud firewalls are being increasingly adopted by security-conscious businesses, and as a result, cloud-based threats are similarly on the rise.
- Hybridized cybersecurity architectures have become the norm, as companies layer multiple firewall types and coordinate their firewall infrastructure with other network security tools.
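As promised earlier, here is a toy sketch of the first-match, rule-based evaluation that packet-filtering architectures rely on. It models only the source address and destination port; the rules and addresses are invented, and a real firewall evaluates far more fields, in kernel space, at line rate.

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network
from typing import Optional

@dataclass
class Rule:
    action: str           # "allow" or "deny"
    src: str              # CIDR the source address must fall within
    dport: Optional[int]  # destination port, or None for any port

# Invented ruleset: allow internal clients out to HTTPS, block telnet,
# and fall through to a default deny.
RULES = [
    Rule("allow", "10.0.0.0/8", 443),
    Rule("deny", "0.0.0.0/0", 23),
    Rule("deny", "0.0.0.0/0", None),
]

def filter_packet(src_ip: str, dport: int) -> str:
    """First-match rule evaluation, the core loop of a classic packet filter."""
    for rule in RULES:
        if ip_address(src_ip) in ip_network(rule.src) and rule.dport in (None, dport):
            return rule.action
    return "deny"  # implicit default deny if no rule matches

print(filter_packet("10.1.2.3", 443))    # -> allow
print(filter_packet("203.0.113.9", 23))  # -> deny
```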
Additional firewalls help make your network tougher to crack by creating additional defense-in-depth (DiD) that isolates different assets. This acts both as a deterrent and gives you more time to respond, as it forces attackers to perform extra work to reach all of your most sensitive information. The particular firewalls you want to use will depend on your network's capabilities, relevant compliance requirements for your industry, and the resources you have to manage these firewalls.

Need help finding the ideal firewall architecture for your business needs? See our comprehensive guide on how to accelerate your firewall monitoring and management to keep your network exceptionally secure.
Reliable home internet connectivity is more essential than ever for K-12 student success, yet many households still lack adequate broadband access. This "homework gap" is especially detrimental for elementary school children, who need online access to participate fully in education. Without the ability to get online from home, younger students can fall behind academically and struggle to complete assignments requiring technology or internet research.

Elementary schools, at the heart of their communities, are ideal centers from which to extend high-speed wireless coverage to the surrounding neighborhoods. By leveraging innovative technologies like private LTE (Long Term Evolution, the 4G mobile telecommunication standard) and 5G (fifth-generation wireless) networks, schools can provide much-needed broadband access to students' homes to bridge connectivity gaps.

In their effort to provide neighborhood students with high-speed broadband at home, Alef has teamed with Frontera Consulting to deploy Alef's CBRS-based Private Mobile Networks Platform for families living near the Brooks-Quinn-Jones Elementary School in the Nacogdoches (TX) Independent School District. Alef is providing the platform, while Frontera brings its expertise as a connectivity provider and consultancy specializing in community data connectivity, engineering, and deployment solutions.

To learn more, go to: Alef Teams Up with Frontera to Fast-Track Private Wireless to School Neighborhoods
Complete Guide on Information Records Management - by Justice Levine

Many growing companies are undergoing a journey of digital transformation, making creating and managing documents digitally an essential step in the process, one that can increase productivity, improve client service, and reduce operating costs. Another benefit of doing business online is enhanced cybersecurity, which is often placed on the back burner despite the massive volumes of data businesses generate daily. That said, it's critical to remember that the growth of your organization correlates with a growing risk of a cybersecurity disaster. All it takes is a single vulnerability to put an organization's future in harm's way. This article will discuss the benefits of a system for managing your records and why organizations need to integrate cybersecurity controls so that regulatory requirements are met efficiently.

What is Records Management?

Records management is the process of controlling the creation, maintenance, receipt, and disposal of information, regardless of format. Electronic records management (or information records management) is the field of management responsible for the efficient and systematic control of the creation, receipt, maintenance, use, and disposition of records, including the processes for capturing and maintaining evidence of and information about business activities and transactions in the form of records. A record can be any information maintained as evidence of or used in business transactions, including final reports, budget documents, company balance sheets, emails referring to an action, maps of field missions, and more.

Why Records Management Is Important

Records management helps to control the growth of records as well as extract useful information from them. Organizations can easily be overwhelmed by the cost and lost productivity of managing everything electronically without a system. Without records management, businesses face potential losses, and poor management can result in costly compliance penalties, unnecessary audits, lost productivity, data overload, and more. Records management addresses these issues and helps maintain compliance with data privacy regulations.

Some other benefits of records management include:
- Lower storage costs: your organization may have many files, emails, and business reports, but only a small portion of them are valuable. With proper management, you can dispose of unnecessary documents and reduce the cost of storing information.
- Regulatory compliance: non-compliance with regulatory laws can result in severe legal action and penalties. Having proper records management in place helps organizations comply with laws and avoid penalties.
- Efficient retrieval of records: with a powerful document management platform, you can easily store and retrieve information, and better access to information helps organizations make better business decisions.
- Easy workflow automation: a lack of organization can cause your business to spend unnecessary time storing and searching for records, whereas a records management system makes the process efficient and automates the workflow.

What is the purpose of Records Management?

There are a number of different reasons why an organization might choose to implement records management.
Typically, records management is used to ensure compliance with legal or regulatory requirements, to improve efficiency or productivity, or to protect sensitive or confidential information. In some cases, records management may also be used as a tool for risk management.

Organizations subject to certain laws and regulations, such as the Sarbanes-Oxley Act or the Gramm-Leach-Bliley Act, may be required to maintain specific records in order to comply with these requirements. In other cases, an organization may simply want to improve its efficiency by streamlining its record-keeping processes. By implementing records management, an organization can more easily find and retrieve the records it needs, when it needs them.

In some cases, records management may also be used to protect sensitive or confidential information. For example, an organization might choose to store certain types of records, such as client files or employee records, in a secure location. By doing so, the organization can help ensure that this information is not accessed or tampered with without authorization.

Finally, records management can also be used as a tool for risk management. By keeping accurate and up-to-date records, an organization can help minimize its exposure to potential legal liabilities. In addition, by identifying and tracking key records, an organization can more easily detect and investigate instances of fraud or misconduct.

What is Email Records Management?

Email management assists organizations in properly capturing, retaining, and managing the emails sent and received by employees. Classification schemes, retention periods, and access controls can all be used to help manage emails. By capturing email metadata, employees with the proper permissions are able to access and maintain the information. Email is the preferred method of communication in business, so it is crucial that email records are retained in line with management best practices.

When high volumes of unmanaged emails accumulate in inboxes, sent folders, and deleted-items folders, certain risks arise. These include:
- Loss of critical business data because individual employees manage their own inboxes.
- Inefficient discovery of, or access to, records.
- Failure to meet legal preservation requirements.

Benefits of implementing an email management system include:
- Retaining a history of communication.
- Incident tracking: all emails relating to a specific incident or user are automatically tracked and can be viewed as a single correspondence.
- Reporting systems that provide additional insights into an organization's communication trends.

Essential Records Management Capabilities

Having a compliant records and information management (RIM) program is crucial for all organizations managing both physical and electronic records throughout their life cycle. In today's ever-changing regulatory environment, volumes of information continue to rise, so it is a necessity that companies enforce essential records management practices. By creating a well-structured records management plan, your organization will meet regulatory compliance, improve workflow, and limit its exposure to risk.
Essential records management procedures that will set your RIM program up for success include the following:

- Records retention schedule: a records retention schedule defines how long records should be kept from an operational and legal standpoint, and ensures that outdated records are disposed of in a timely fashion.
- Policies and procedures: policies and practices should be communicated clearly and applied consistently throughout your organization. When properly delivered, your policies and procedures work in tandem with your business continuity plan and disaster recovery program.
- Accessibility, indexing, and storage: a successful records management program requires accessible information, indexing parameters (including date, subject matter, creator, and location of the record), and an online document management system (DMS) for storage and retrieval.
- Compliance auditing: thorough auditing policies ensure that historical records are routinely maintained and destroyed.
- Disposal of obsolete records: incorporating the destruction of records at the end of their life cycle into your management system reduces the likelihood of audits, legal risks, and storage costs.

Improved Security With Records Management

Implementing a management system makes it easy for an organization to integrate cybersecurity controls while ensuring that regulatory requirements are met efficiently. The right solution should have specific functions that protect your data and prevent attacks. You should consider a platform that addresses the most common concerns, such as:
- Physical security of data, i.e. the security of the physical servers and data centers where the data is hosted.
- Operational security, as in access control, workflow approvals, and audit conformance.
- Which encryption technologies are used, and whether they are up to date.
- The retention period, which identifies the records to be managed and communicates how long the records are to be retained before they are disposed of.
- Monitoring and notification of system incidents, including flagging issues as well as detailed report generation.
- Vulnerability testing of web applications and of remote access to document tools.
- Backups: what policy is applied, and how robust is the system?

Preventing a cyberattack is far cheaper than repairing the damage afterward. And while it is possible to quantify the cost of litigation, assessing the reputational damage of a data breach is difficult. The proper solution provides data security and protection in the following ways:

A cloud-based system gives small businesses access to reliable, top-of-the-line data security services; data moved to the cloud can be controlled and viewed only by authorized employees, and can be hidden, protected, and removed using cloud services.

Persistent Data Backups

In the event of a security breach, loss, corruption, sabotage, and manipulation of data are all possible. All documents stored on the management platform can be backed up to another secure database so that access can be restored quickly. The system can be automated to back up the data at regular intervals, seamlessly in the background, without any intervention (a minimal scheduled-backup sketch follows this section). Records management servers can be located wherever needed: data centers on organization premises provide greater control over the physical security of data, while even remotely hosted servers can be sealed in hardened data centers and protected by biometrics-based access systems.
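As a small illustration of the automated interval backup described above, here is a Python sketch. The paths and interval are hypothetical, and a production system would schedule the job with cron or a systemd timer and verify archive integrity rather than run a sleep loop.

```python
import shutil
import time
from datetime import datetime
from pathlib import Path

DOCS = Path("/srv/records")      # hypothetical document store
BACKUPS = Path("/srv/backups")   # hypothetical backup target
INTERVAL = 24 * 60 * 60          # one snapshot per day, in seconds

def backup_once() -> Path:
    """Snapshot the document store into a timestamped zip archive."""
    BACKUPS.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    archive = shutil.make_archive(str(BACKUPS / f"records-{stamp}"), "zip", DOCS)
    return Path(archive)

if __name__ == "__main__":
    # A production deployment would use cron or a systemd timer instead
    # of a sleep loop, and would verify and prune old archives.
    while True:
        print("Backup written to", backup_once())
        time.sleep(INTERVAL)
```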
Implement a Disaster Recovery Plan

The value of disaster recovery is that it creates the ability to react to a threat quickly and efficiently. This is achieved with the help of a department that has informed staff, disaster supplies, and planned procedures. Planning is a senior-level function that requires top-level support to succeed, and this need must be recognized at an early stage. Developing a disaster recovery plan involves stockpiling emergency supplies and arranging services, establishing a disaster recovery team, developing disaster recovery and records salvage procedures, and contingency planning. The key to having comprehensive disaster prevention and recovery plans is to draw on every resource at your disposal, including records management.

The Bottom Line

Advances in technology drive down costs, including the cost of perpetrating cyberattacks. As the skills and tools needed to hack, infiltrate, and sabotage become easier to acquire, threats are to be expected. Security threats in records management can range from malware to data breaches, making it essential to integrate a strong security strategy into records management. Vulnerabilities are not always discovered quickly, and an organization on a digital growth trajectory must be able to mitigate risks.
The text discusses the significance and key aspects of FDA 21 CFR Part 606, a critical regulation that governs the safety and quality of food, dietary supplements, drugs, and clinical trials in the United States. This regulation is divided into two parts, Part A and Part B, with Part A addressing general requirements, including Good Manufacturing Practices (GMPs), and Part B covering specific areas such as record-keeping and labeling. One crucial aspect highlighted in the text is the complex terminology used throughout the regulation. Understanding terms like “adulteration” and “labeling” is essential for compliance. The regulation also encompasses various elements, including product storage, reporting requirements, and procedures for recalls. Meeting the requirements of 21 CFR Part 606 can be challenging for organizations, particularly those with limited resources. Compliance involves documentation, regular reviews of protocols, and effective record-keeping. Non-compliance can result in penalties, including fines and product recalls. The text emphasizes the importance of resources provided by the FDA, such as guidance documents and training materials, to help organizations navigate the regulation effectively. Despite its challenges, 21 CFR Part 606 is crucial for ensuring the safety and effectiveness of products in the food, dietary supplement, drug, and clinical trial industries. It sparks debates about balancing product safety with regulatory burdens, indicating ongoing discussions about potential updates to the regulation. In summary, FDA 21 CFR Part 606 is a vital regulation that organizations must take seriously to ensure consumer safety. It involves complex terminology, compliance challenges, and potential penalties for non-compliance. Staying informed about updates and utilizing available resources is essential for adherence to this important FDA standard. Introduction to FDA 21 CFR Part 606 The FDA 21 CFR Part 606 regulations are designed to regulate the safety of both food and drug products. This federal regulation, implemented by the Food and Drug Administration (FDA), outlines the requirements that must be followed to ensure that consumers are not exposed to unsafe products. It applies to a variety of products ranging from dietary supplements, prescription drugs, medical devices, and even food. 21 CFR Part 606 lays out the framework of guidelines for manufacturers, distributors, and researchers. It ensures that products that reach the general public are safe and do not pose any health threats. FDA 21 CFR Part 606 The Food and Drug Administration (FDA) 21 CFR Part 606 is a regulation aimed to ensure the safety of food, dietary supplements, and clinical trials. It is composed of numerous sections which cover different areas of regulation. 21 CFR Part 606 applies to several specific areas in the food and drug industry. It is primarily enforced in the manufacturing, packaging, labeling, storage and distribution of food products including those subject to recall. Additionally, it applies to dietary supplements, including pre-market authorization, adverse events reporting, and label elements. Lastly, it covers clinical trials involving drugs, biologics, and devices. The structure of 21 CFR Part 606 is divided into several separate parts and appendixes. The structure covers all aspects of food safety, such as production, processing, distribution, laboratory testing, and traceability. It also provides guidance for product recalls, labeling requirements, and other pertinent topics. 
Throughout the rule there are various technical terms used which organizations must be aware of when attempting to meet compliance. These terms and the nuances associated with them can significantly impact compliance. 21 CFR Part 606 Structure and Sections FDA 21 CFR Part 606 is arranged in a six-part structure with various subsections outlining its scope. This regulation covers topics such as food safety, dietary supplements, clinical trials, and more. It is important to understand the different areas this regulation covers so organizations can meet compliance. The six parts of 21 CFR Part 606 are as follows: - Scope and Definitions - Controls on Food Hazards - Food and Color Additive Controls - Food Packaging Requirements - Labeling Requirements - Miscellaneous Requirements The first part, “Scope and Definitions”, covers the policy underlying the regulation and what it sets out to achieve. This part also outlines certain terms and aspects related to the rule. For instance, the term “biological hazard” is defined here. The second part, “Controls on Food Hazards”, describes the various activities that government agencies must take to prevent potential risks from entering the food supply chain. This includes activities such as product checks, testing and verification of suppliers. The third part, “Food and Color Additive Controls”, is concerned with regulating food dyes and other ingredients that fall under the scope of this regulation. This part also covers labeling requirements for food ingredients. The fourth part, “Food Packaging Requirements”, outlines the rules that govern how food products should be packaged in order to reduce the risk of contamination and spoilage. The fifth part, “Labeling Requirements”, deals with the necessary information that must be included on food product labels. This includes warnings about allergens, nutrition facts, and so on. The sixth and final part, “Miscellaneous Requirements”, covers other elements of the regulation, such as record-keeping requirements and guidelines on how to handle food recalls. Terminology Used in 21 CFR Part 606 FDA 21 CFR Part 606 is a complex regulation that may be difficult to understand for those not experienced in the field. It includes a range of terminology that is important to understand in order to comply with the regulation. For example, the term ‘adulterated’ is used throughout the regulation and refers to any product or ingredient that falls short of the minimum standards of quality and purity required by the law. The term ‘misbranded’ is also regularly used in 21 CFR Part 606 and applies to any product or ingredient that is labeled incorrectly or fails to provide information on safety or ingredient listing. Furthermore, the term ‘investigational use’ is used to refer to any product or process still undergoing research and trial. It is important to understand nuances of the terms used in 21 CFR Part 606 as misuse of these terms can result in non-compliance. Additionally, using the wrong term could result in incorrect labeling or other failures to meet legal requirements. Thus, it is essential to correctly understand the terminology used in 21 CFR Part 606. Exploring Essential Elements of 21 CFR Part 606 21 CFR part 606 is a law created to regulate the safety of food, supplements, and medications, among other products. This section of the FDA regulation is set up with general requirements applicable to this class of products. 
All affected organizations must comply with the requirements laid out in 21 CFR part 606 to ensure the safety of their products. 21 CFR part 606 covers a broad range of topics, from personnel qualifications and training programs to record-keeping requirements and labeling regulations. It also sets standards for product testing, quality control procedures, and good manufacturing practices (GMPs). Organizations must understand and adhere to these regulations to remain in compliance. The terminology used throughout 21 CFR part 606 is important to note as it can have significant impacts on how compliant products are. Knowing which terms refer to which regulations is essential to understanding the entirety of the rule. For example, terms such as “batch,” “lot,” and “specifications” are used throughout the regulation, and they all reinforce different aspects of production. Additionally, many requirements found within the rule must be met in order for an organization to remain in compliance. These include matters such as personnel qualifications, written procedures, record-keeping, inspections, product testing, corrective actions, and others. Non-compliance carries certain penalties which may be enforced by the FDA or other governing bodies. Organizations that fail to meet the requirements of 21 CFR part 606 may be subject to fines or other punitive measures. They may also face restrictions on the production and sale of their products. Luckily, the FDA provides resources to help organizations comply with 21 CFR part 606. These include guidance documents, videos, webinars, and other materials to help affected organizations understand and implement the rule. Ultimately, 21 CFR part 606 is a necessary regulation that ensures the safety of products. Its provisions help ensure that food, drugs, devices, and supplements are safe for consumers. By understanding and adhering to the requirements of 21 CFR part 606, organizations can remain in compliance and uphold standards of safety. Challenges with Compliance 21 CFR Part 606 regulations require a lot of detail and close attention. Organizations have to dedicate resources to ensuring they remain in compliance, which may be difficult for smaller organizations with limited budgets. Furthermore, 21 CFR Part 606 regulations are constantly evolving as new developments happen. Keeping up with all the changes can be a challenge, as it requires regularly checking the FDA website for updates. Organizations must also have strong internal processes to ensure compliance. This includes updating policies and procedures with the latest regulations published by the FDA, as well as implementing tools like online databases or software that helps store and analyze data related to the regulations. Fortunately, the FDA offers guidance documents as well as free training sessions and webinars, which can help organizations better understand and address the challenges of 21 CFR Part 606 compliance. Complying with 21 CFR Part 606 Organizations must meet certain standards to be compliant with 21 CFR Part 606. Not doing so can lead to serious penalties and enforcement action. Compliance is achieved through a combination of procedural implementation, adequate record-keeping and diligent monitoring. 
Some of the key steps organizations need to take to ensure compliance include:
- Documenting the training of personnel in FDA regulations and related policies
- Regularly revisiting existing protocols and revising them as necessary
- Properly documenting changes to protocols and records when necessary
- Implementing measures to ensure processes remain effective
- Keeping accurate and timely records of all compliance-related activities

It is essential for organizations to carefully monitor their compliance processes to ensure they remain compliant. This includes periodic reviews of all related activities and a strong understanding of the requirements of 21 CFR Part 606.

Outlining Penalties for Non-Compliance

Failing to comply with 21 CFR Part 606 can have serious consequences. The FDA can issue warning letters, fines, and recall orders, and can even suspend or revoke approval of products or shut down a facility. In extreme cases, companies may face criminal prosecution for non-compliance. Warning letters alert companies to potential violations of food safety regulations that the FDA has identified; companies must respond to warning letters and detail how they will address the violation. Fines for non-compliance range from $117 to $11,744 per violation, depending on the severity of the offense, and accrue for each day the business operates in violation of the law. The FDA can also order recalls of products that do not meet regulatory requirements; this could mean a company needs to recall contaminated food items or products manufactured using non-compliant processes. Finally, if severe violations are not adequately addressed, the FDA has the authority to shut down a company and to pursue criminal prosecution of individuals.

Organizations can find a wide range of resources to help them comply with 21 CFR Part 606. The US Food and Drug Administration (FDA) provides documents such as guidance for industry, compliance policies, and other educational materials, which can be accessed online. Additionally, organizations may engage professional services to assist with their understanding of the regulation; experienced consultants stay up to date with the latest changes to 21 CFR Part 606, making the compliance process easier to navigate. Companies can also find helpful information on websites such as the Center for Food Safety and Applied Nutrition, as well as resources related to the Dietary Supplement Health and Education Act and many other government and private-sector sites. Finally, companies should seek legal advice if they feel uncertain about meeting the requirements of the regulation.

FDA 21 CFR Part 606 is an important regulation that provides guidelines to ensure the safety and effectiveness of food, dietary supplements, drugs, and medical devices. The regulation also helps protect clinical trial participants from hazards or harm. By implementing the regulation, organizations demonstrate their commitment to keeping people safe from potential dangers in the production and use of food, dietary supplements, and drugs. 21 CFR Part 606 requires organizations to comply with certain standards to assure consumer safety, including proper labeling and documentation of products, following standard protocols for laboratory testing, and providing a safe environment for clinical trials. Compliance with the regulation helps ensure consumers’ access to safe products.
Ongoing Debates Surrounding 21 CFR Part 606

The implementation of FDA 21 CFR Part 606 has triggered intense debate in various industries about the efficacy and impact of the regulation. For example, the pharmaceutical industry has raised concerns about the increased cost of the additional testing the regulation requires, as well as the burden of maintaining extensive records. Advocates of the regulation, on the other hand, cite the potential benefits of reducing the risk of drug contamination and increasing product safety; they argue that the cost of compliance must be weighed against the risks posed by not acting to ensure safe products. It is clear that 21 CFR Part 606 has sparked much debate, and there is considerable interest in how the regulation may be updated in the future. As it continues to develop, many stakeholders are looking for a balance between ensuring product safety and reducing regulatory burdens.

The U.S. Food and Drug Administration’s (FDA) 21 CFR Part 606 regulation is a set of rules designed to ensure the safety of food, dietary supplements, and clinical trials. It applies to organizations that manufacture or market these products in the United States. The regulation addresses both general requirements for safety and quality control, including Good Manufacturing Practices (GMPs), and specific areas such as record-keeping requirements and labeling. Terminology used throughout the regulation can be complex, and it is important to properly understand key terms such as “adulteration” and “labeling.” Organizations subject to 21 CFR Part 606 must have a comprehensive understanding of the regulation in order to meet compliance. Furthermore, essential elements of 21 CFR Part 606 include adequate storage and handling of products, reporting requirements, and recalls. Organizations may find it challenging to meet all the requirements outlined in the rule; they are expected to keep records of their compliance, and there are penalties for non-compliance. Resources such as guidance documents and online training are available to help organizations comply effectively. Overall, 21 CFR Part 606 is an important regulation that must be taken seriously by organizations involved in manufacturing or marketing products, particularly those related to food, dietary supplements, and clinical trials. Ongoing debates surrounding the regulatory system mean that organizations need to stay abreast of changes in order to remain compliant with FDA standards.

FAQs about FDA 21 CFR Part 606

1. What is FDA 21 CFR Part 606?
FDA 21 CFR Part 606 is a regulation issued by the US Food and Drug Administration (FDA) that sets forth guidelines and standards for the production, labeling, advertising, and general safety of food, dietary supplements, pharmaceutical drugs, medical devices, cosmetics, and other products regulated by the FDA.

2. What areas does 21 CFR Part 606 apply to?
21 CFR Part 606 applies to virtually all businesses that manufacture, process, package, transport, store, or sell any of the items regulated by the FDA, such as food, dietary supplements, drugs, medical devices, and cosmetics.

3. What is the general structure of 21 CFR Part 606?
21 CFR Part 606 consists of multiple sections, each of which provides detailed information about specific aspects of the regulation.
These include guidelines for hazardous materials, sanitation, product design and testing, record-keeping requirements, and so on.

4. What are the penalties for non-compliance with 21 CFR Part 606?
Penalties for non-compliance vary depending on the severity of the violation and can include fines, criminal prosecution, suspension or revocation of registrations, or seizure of products.

5. What steps should organizations take to meet compliance with 21 CFR Part 606?
To meet compliance, organizations should review the regulation closely to ensure they understand and can follow all requirements, document every compliance-related activity, and regularly audit their processes to identify potential problems.

6. What resources are available to help organizations comply with 21 CFR Part 606?
Numerous resources are available, including fact sheets and regulatory information from the FDA, online training courses, and FDA guidance documents.

7. What is the importance of 21 CFR Part 606?
This regulation is essential for protecting public safety: it establishes guidelines to ensure that food, dietary supplements, pharmaceutical drugs, medical devices, cosmetics, and other items regulated by the FDA are produced safely and in accordance with laws and regulations.
<urn:uuid:1d86b9e8-aff4-40ca-b70c-0c039f51d3a3>
CC-MAIN-2024-38
https://msbdocs.com/security-compliance/21-cfr-part-606-guide/
2024-09-18T05:19:55Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651836.76/warc/CC-MAIN-20240918032902-20240918062902-00894.warc.gz
en
0.943712
3,526
2.78125
3
The following page provides information on the HTTPS protocol. HTTPS is the secure web protocol used by websites and modern web-driven technologies; HTTP is the insecure version of the same protocol.

Hypertext Transfer Protocol Secure

IP Protocol | Flow Percent
TCP 443     | 100%

Port Reference (RFC)
- TCP 443: HTTP Protocol Over TLS/SSL

Do you know how much HTTPS traffic flows through your network? Netify's protocol detection engine and reporting provides insights to help manage your network. What gets measured, gets managed.
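To see the protocol in action, a quick way to confirm that a site speaks HTTPS on TCP 443 is to open a TLS connection yourself. The short Python sketch below is illustrative only; the hostname is a placeholder, and any TLS-enabled site would do:

import socket
import ssl

host = "www.example.com"  # placeholder host; substitute any HTTPS site

context = ssl.create_default_context()

# HTTPS is simply HTTP carried inside a TLS session on TCP port 443.
with socket.create_connection((host, 443), timeout=5) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
        print("Negotiated:", tls_sock.version())  # e.g. 'TLSv1.3'
        tls_sock.sendall(
            b"GET / HTTP/1.1\r\nHost: " + host.encode() + b"\r\nConnection: close\r\n\r\n"
        )
        print(tls_sock.recv(64))  # first bytes of the HTTP response

Everything after the TLS handshake is encrypted on the wire, which is why protocol detection engines classify HTTPS by its handshake and flow characteristics rather than by payload inspection.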
<urn:uuid:2b52f085-3255-4b30-879e-9240bf0a576c>
CC-MAIN-2024-38
https://www.netify.ai/resources/protocols/https
2024-09-20T18:03:50Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701419169.94/warc/CC-MAIN-20240920154713-20240920184713-00694.warc.gz
en
0.737893
114
2.640625
3
November 12, 2019

Few incidents have so dramatically and terrifyingly confirmed the need for meticulous monitoring of the transportation of volatile fuels as the devastating 2010 gas explosion in San Bruno, California. The rupture and explosion of a Pacific Gas & Electric 30-inch natural gas pipeline left a path of destruction that took the lives of eight people and destroyed or rendered uninhabitable 38 homes in San Bruno's Crestmoor neighborhood. Any instrumentation that might have been in place failed to detect the flaws in the pipeline, and controls were so inadequate that it took more than an hour for emergency personnel to shut down the flow of gas that fed the inferno. During the California Public Utilities Commission's ensuing investigation, it was revealed that PG&E not only failed to have appropriate detection equipment in place, but the utility was also negligent with regard to periodic testing of its transmission lines.

Sensors Failed to Detect a Problem

The San Bruno incident actually didn't reflect a lack of transmission and transportation sensors and systems, but rather it showed how a fairly mundane occurrence (a power loss) could cripple a sensor-based system. A detailed report published by National Public Radio station KQED as part of its California Report programming provided details about the San Bruno explosion. The report notes that an electrical outage at a transmission terminal made the regulatory system go haywire, sending the gas pressure in the pipeline up to dangerous levels. But, the report continues, sensors were also disabled as a result of the outage, so technicians couldn't see how high the pressure was building.

IoT-Based Monitoring of Transportation Greatly Improved

While the San Bruno explosion occurred just a decade ago, today's state-of-the-art transportation and transmission systems are far more sophisticated, equipped with battery backup and other failover systems that ensure they will remain operative at perhaps the most critical moments. These IoT-based monitoring networks are also designed to track the movements of gas and oil through a variety of transportation and transmission elements. That level of sophistication is, in fact, required, as gas and oil distribution systems in the United States are vast and complex. In a 2018 paper published by the American Geosciences Institute, authors Edith Allison and Ben Mandler note that there are millions of miles of oil and gas pipeline in the U.S. In addition to the network of pipelines, oil and gas is also transported by approximately 100,000 trucks and thousands of railroad tank cars. All of those means of conveyance need to be outfitted with sensors that can detect the volume of gas or oil being moved and its location, as well as the health and safety of the pipes or vehicles that are carrying the products. Sensors on rail cars and tank trucks can be tracked via LTE broadband networks, with the data folded into the data collected from similar sensors mounted on pipelines. Being able to scrutinize, via real-time information, the location and status of oil and gas, whether it's traveling through a pipeline or barreling down a highway or rails, is a powerful combination that provides an effective level of oversight. John Hetherington, principal of John Hetherington Consulting, an oil and gas industry advisory service, notes that predictive maintenance also plays a big part in oil and gas transportation, ensuring that all forms of conveyance are in appropriate working order.
“The industry is moving to exception-based monitoring, getting alerts with information about the alert that helps personnel respond,” he said. In a published presentation, Sanjeev Verma, chief executive officer of BizIntellia, an IoT platform and sensor provider, notes: “Tracking the live location of a vehicle was always a key challenge in the oil and gas industry.” But it's a challenge that has been met. “Sensors read the pertinent data, like flow rate, temperature, pressure, and then gateways at the site connect these sensors to wireless networks.” The results of these technical developments have been profound, providing unprecedented supervision and management of oil and gas transport. “Actual monitoring of pipeline oil movement has been around for a while, but the data analysis is getting much more sophisticated,” Hetherington notes. “With the present technology of IoT in oil and gas, tracking the real-time location and health of the vehicle has become an easy game,” wrote Verma. For some companies, the level of instrumentation of their truck fleets goes beyond location and status detection. “Some even have driverless trucks on their own sites,” noted Hetherington, adding that companies can track off-site deliveries in detail too. “Time to market is everything.” While keeping tabs on the commodities in transit is the biggest and most dynamic part of the monitoring effort, an alert eye must be focused on storage facilities too, as they play a key part in the overall transportation picture. These storage facilities may be way stations where products are transferred to different distribution networks or delivery vehicles, but they are typically integral parts of the transportation grid.

Early Detection of Leaks and Spills

Beyond the obvious benefits of being able to constantly monitor their commodities' transportation facilities, oil and gas producers and distributors also benefit from early detection of leaks and spills, which not only saves money but also helps avoid ecological disasters. Allison and Mandler's paper points out that there are hundreds of leak and spill incidents every year, resulting in the loss of tens of thousands of gallons of crude and refined oil products. These losses occur across all forms of conveyance: trucks, trains, and pipes. Clearly, more work needs to be done to ensure that more pipelines are protected, although some spills and leaks may be unavoidable. Oil and gas aren't the only commodities that have to be moved and monitored, Allison and Mandler point out. Water may need to be transported too. Water is used in many phases of oil and gas extraction, in a volume that the authors estimate at “a few billion gallons per day.” The water needs to be transported to the extraction sites, a process that requires monitoring. The post-process water will then have to be transported to disposal or treatment facilities. Because the water is likely to be unfriendly to the environment, its movements need to be monitored closely.

All-In-One or À La Carte Transportation Monitoring

How a natural resource company implements a grid-based transportation monitoring system depends on its currently installed production monitoring systems and the maturity of those monitoring operations. Some of the larger vendors may provide all-inclusive IoT packages that include transportation and transmission monitoring, while some application suppliers cater to smaller outfits and may be able to provide only the transportation modules.
Hetherington sees GE and Schneider as two of the foremost providers of IoT-based oil and gas transportation management systems. A lack of standards and common protocols is an issue for this product category, as it is with many other energy industry applications that need to traverse private and public grids. “That's one of the big challenges,” said Hetherington. “Getting data from a different system is always challenging.”
<urn:uuid:02018a1b-de5b-4d83-83ba-ff093fbd5a73>
CC-MAIN-2024-38
https://www.iotworldtoday.com/iiot/iot-based-monitoring-networks-role-in-oil-and-gas-industries
2024-09-07T10:38:57Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650826.4/warc/CC-MAIN-20240907095856-20240907125856-00158.warc.gz
en
0.959005
1,501
2.984375
3
Another day – another gosh!… The tortoise. Hmmm. Not the sharpest tool in the shed – even among reptiles, which aren't known for their intellectual prowess. Probably the world's slowest animal too. And when it comes to sweetness and honey and good manners and good looks – the tortoise is also toward the back of the line. Poor things. BUT!… But… there's still something about these creatures that charms, enchants, enraptures and enthralls. Maybe it's something in our genes that says that despite their outward appearance the tortoise is wholly… tasty… But more on that later. For now: giant tortoise pics!… Only a handful of species in the Galápagos tortoise complex remain today – the rest have become extinct. But, again – more on that later… …First – some fascinating Galápagos tortoise facts, starting with how long they live: from around 200 to 250 years! While still young and growing physically (up to 80 years old), one can determine their age by the rings on their shell: The tortoises here have really long necks, which seemed to me to be the only thing that set them apart from the Aldabra tortoises on the Seychelles; that and the different shape of the shell. But later, upon closer inspection, it turned out the Seychelles tortoises also have long necks. Here are some pics of both all mixed up. Can you discern the Seychellois from the Galápagos tortoises? Our guide told us how the DNA of both Seychellois and Galápagos tortoises is practically the same; i.e., they belong to the same species family. That means that they could crossbreed (though no one's tried such an experiment!) and have offspring. But these two types of tortoise got me thinking. Both are endemic to their habitats, yet they're close relatives. How come?! If you look at a map of the world, the nearest mainland to the Galápagos Islands is South America – 1000km to the east. To get to the Seychelles there's another 3500km to cover to get to the Atlantic, then another 6500 to get to Africa, then another 3000 to finally get to the Seychelles. That makes a whopping 15,500km between the Galápagos and the Seychelles. That's going east. Going the other way it's even further: 15,000km to Indonesia, 4000 to the Indian Ocean, and another 5000 to the Seychelles – 24,000km. In short – there aren't many places on the planet further from each other! Yet still these tortoises are relatives?!! Here's a theory of mine: A long time ago, the Aldabra giant tortoise wasn't endemic to the Seychelles; it roamed everywhere along the equator where the climate was tropical – right around the globe. And they lived happily and long – as they do today, since their shells put most predators well off them as a dinner dish. But then along came… Homo sapiens and various other humans… They started to migrate all over the world, destroying the ecosystems around them wherever they went. First they went after anything tasty and nutritious – especially if it couldn't run fast, and that of course meant tortoises (humans being better able to get at the meat with their bigger brains, hands, and later – utensils, etc.). And nearly every single tortoise on the planet got gobbled up by hungry humans without a care. But the few that were left happened to be located in the Seychelles and on the Galápagos Islands. The more observant observer may at this point ask: ‘So where are the bones?’ After all, archeologists often find remains of extinct animals wherever they find the remains of ancient man. So where are the tortoise remains? Let me explain. Tortoises breed along coastal zones.
Eggs laid are buried by the mother under the sand of a beach. This means that tortoise remains should be looked for along the coast, not up in mountain caves (where ancient remains are normally found). Also, since man settled across the world tens of thousands of years ago, the sea level then was much lower – by more than a hundred meters. So it doesn't take Sherlock to work out that ancient remains of tortoises should be searched for underwater! One could counter this with the fact that the Polynesian islands were settled upon by man much later – some one or two thousand years ago. But those are islands! If there were any tortoises there, the settlers ate them all up and threw the bones into the sea. Any that remained on land were blown away by hurricanes, of which there are many in those parts. And that's why there are no ancient remains of giant tortoises: no bones, and no drawings on cave walls. But there is one exception! You'll remember how the flat planet stands upon the backs of four elephants, right? But do you recall how those same elephants stood on the back of… a giant turtle?! So, why did a few survive on both the Seychelles and the Galápagos Islands? Easy. The Galápagos Islands were settled upon not long ago at all, relatively speaking, so Homo sapiens simply didn't get the chance to destroy everything on the islands. And today, the islands' tortoises are protected and cherished. And the Seychelles – they too were settled upon later than the norm: only after the mid-18th century. Before then only passing pirates and expeditions would visit the islands briefly. Thank goodness! Only because of that have giant tortoises survived to this day. Alas, it's not such a happy ending. Still today subspecies of these tortoises are becoming extinct. In 2012 the last tortoise of Pinta Island – Lonesome George – passed away. He now stands – stuffed – in the local national park: These days the tortoises are kept under close observation, records are kept of their numbers, and their eggs are often taken from the beach-burrows and put in incubators to guarantee survival and hatching. Freshly-hatched tortoises graze under the watchful eye of biologists at the tortoise conservation centers – into which tourists are even allowed: So the future looks so bright for the giant tortoises here, they've got to wear shades ). One condition for entrance into the center – no touching the tortoises! Ahhh, such a shame; in the Seychelles the tortoises loved their necks being scratched! Want to feel what it's like to be a tortoise – literally? Knock yourself out!… Even Midori Kuma was here at the tortoise center! He does get around… All the photos from Ecuadorian-Gray and the Oh-my-Galápagos are here.
<urn:uuid:b3fdecc2-05e6-4de8-809a-f8d5344d73ff>
CC-MAIN-2024-38
https://eugene.kaspersky.com/2019/03/11/galapa-gosh-pt-5-if-you-can-survive-humans-likes-giant-tortoises-can/
2024-09-09T19:04:12Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651133.92/warc/CC-MAIN-20240909170505-20240909200505-00858.warc.gz
en
0.954975
1,515
2.75
3
Every computing device capable of using a wireless or cable network has an Internet Protocol address (usually called an IP address). It differs from one device to another and gives every device used to surf the internet, all around the world, a unique identity. It is like a CNIC number or a phone number – unique to each individual entity. This IP address is responsible for directing the user's data to its specific destination. It is your router's job to assign your computing device a new, unique IP address whenever a device is connected to the network for the first time. This IP address must be specific to that particular device and different from that of any other device. There are different types of IP addresses. You might have come across an issue where you have a static IP address but no internet connection. To solve this issue, let's get to know more about static IP addresses.

Summarized Solutions for the “Static IP No Internet Connection” Problem

A static IP address is a unique set of numbers that does not change. It is also known as a fixed IP address and is assigned to your device by your router or your ISP. Your internet service provider is usually responsible for assigning a public IP address to your router, and your router then assigns an internal IP address to each device that is connected to it. The first computing device you plug into the new internet router sends out a network request asking for an IP address. Say your first device is assigned the IP address xxx.xxx.x.1; the next device you connect will then get the IP address xxx.xxx.x.2. This means that within a single home, or behind a single router, all the assigned IP addresses are similar and related to one another.

Things You Should Know About Your IP Address

Your public IP address is given to you by your ISP via your router, so it is not something you can choose or change. It is provided to your device automatically at the time of your first network request. If you wish to have a static public IP address, you can use a specialist VPN service, although this can be rather expensive. In rare circumstances, you may be able to request the IP address of your choice, but this service is typically reserved for business customers.

Disadvantages of Having a Static IP Address

Fixed or static IP addresses must be manually configured, so you would need to make some changes to your router's configuration. An administration database is also required to keep track of all the settings. In home networks this is usually not an issue, but in businesses these administration problems can be a headache, because incorrect configurations can lead to conflicts and IP address errors. For example, imagine one of your machines is given the IP address xxx.xxx.x.9, while your router continues to hand out IP addresses to other computing devices. At some point, another device will be given the same IP address, causing a clash, or IP address error. Static IP addresses can therefore become quite problematic on a large scale.

Troubleshooting Ways to Solve the “Static IP But No Internet Connection” Problem

Most internet issues with static IP addresses are not simple to solve, as they require detailed knowledge of network and addressing settings. But there are still some basic settings that can be changed to try to resolve the loss of connectivity.
1. DNS Settings

When setting your computing device up with a static IP address, you must also configure the DNS servers yourself; when using DHCP, your router provides not only an IP address but its own DNS settings too. To check whether basic connectivity works, try pinging a well-known public address – for example, Google's public DNS server at 8.8.8.8 – or, even better, run a traceroute to it. If the internet works when you use raw IP addresses but not when you use domain names, the DNS settings are the problem, and you can then configure the DNS servers to your preference (see the command-line sketch at the end of this article).

2. Type Everything Yourself

Sometimes when you are setting the static IP address, the default gateway looks as if it is prefilled correctly and doesn't need changing, so most people move ahead without taking the time to retype it manually. But it must be filled in by the user: if you click “OK” without typing the value yourself, the gateway is treated as though it had been left blank.

3. Reboot the System

Sometimes the system stops working, loses internet service, and reports that no internet connection is available. This is not necessarily related to your static IP address; sometimes all it needs to get back to work is a good old-fashioned restart. Set the system to its usual static IP address, then restart and reboot the system. If the system now works properly, it was just a hardware glitch and the problem is fixed; you can carry on using the internet as usual and browse the LAN. (Note: this is a somewhat unofficial method that is not guaranteed to work every single time.)

4. Ghost Adapter Issue

Sometimes the culprit is the known ghost adapter issue in VMware environments. To resolve it, follow these steps:
- Log in to the servers
- Run CMD
- Enter: set devmgr_show_nonpresent_devices=1 (this makes hidden, “ghost” adapters visible in Device Manager)
- Now open the Control Panel
- Go to Network & Internet
- Open the Network and Sharing Center. Here you should see at least two adapters.
- One of them is related to the IP settings, so change that adapter to DHCP.
- Change the semi-working adapter to the static IP address and you will have internet access.
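As referenced in step 1 above, here is a minimal Windows command-line sketch for assigning a static IP address together with working DNS servers. The adapter name “Ethernet” and all addresses are example values – substitute your own network's details:

rem Assign a static address, subnet mask, and default gateway (example values)
netsh interface ip set address name="Ethernet" static 192.168.1.50 255.255.255.0 192.168.1.1

rem Point DNS at a public resolver so name lookups work (example: Google DNS)
netsh interface ip set dns name="Ethernet" static 8.8.8.8

rem Verify raw connectivity first, then name resolution
ping 8.8.8.8
ping www.google.com

If the first ping succeeds but the second fails, the connection itself is fine and the DNS configuration is the problem, which is exactly the situation described in step 1.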
<urn:uuid:5c57ad64-5765-4678-8f9c-cafe178c0ed8>
CC-MAIN-2024-38
https://internet-access-guide.com/static-ip-no-internet/
2024-09-12T05:17:55Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651422.16/warc/CC-MAIN-20240912043139-20240912073139-00658.warc.gz
en
0.940513
1,177
3.21875
3
In this chapter, you learn about the following topics:
- Technologies and protocols for DSL, cable, and Ethernet broadband networks
- Bridged and PPP access mechanisms, with an evaluation of how well they solve the requirements for broadband, such as quality of service, address assignment, service selection, and so on

A VPN is a service that can carry pure data or multiservice traffic. When you design or implement a VPN for broadband access, you need to understand how the different access architectures can impact design decisions and can actually have some interesting repercussions on the VPN service itself, because of quality of service (QoS) or security trade-offs, to name but two examples. This chapter reviews the two principal Layer 2 access architectures in use today: bridging and PPP. It looks at how each is implemented on different Layer 1 broadband media, such as digital subscriber line (DSL) and cable. Then it describes how each architecture solves some of the basic requirements of a network service, such as security, QoS support, routing, and address assignment. There are lots of permutations to go through: for example, security on a bridged cable broadband network is different from security on a bridged Ethernet broadband network, so it is worthwhile to look at each case. If you sometimes feel lost going through all of these different scenarios, remind yourself that you are looking at how common problems are solved on different types of broadband networks. The set of problems is important, because you will want to make sure that a broadband VPN service solves it too.

The major topics that are covered are as follows:
- Bridged access architectures
  - Bridging on DSL using Routed Bridge Encapsulation (RBE), including setup, routing, and address assignment
  - Bridging on cable
  - Bridging on Ethernet
  - Security for bridged access, with a look at different scenarios for DSL, cable, and Ethernet
  - Authentication and accounting for bridged access
- PPP access architectures
  - PPP over Ethernet (PPPoE), including setup, routing, and address assignment
  - PPP over ATM (PPPoA)
  - PPP address assignment
  - PPP authentication, accounting, and security

Bear in mind that this is a review chapter. If you are comfortable with PPP and bridging, then you can safely skip ahead to the next chapter.

Architecture 1: Bridged Access Networks

Bridged access networks are so named because they transport Ethernet frames transparently across a network. Ethernet is the most successful LAN protocol ever. It has basically replaced all other forms of Layer 2 encapsulation in enterprise networks and is arguably in the process of doing the same thing in residential networks. Not so very long ago, subscribers connected to the Internet directly from their PC using a modem. Today, home networks use Ethernet. For example, laptops have built-in Gigabit Ethernet ports, and wireless LAN is very quickly proving to be an alternative to running cables between rooms the world over. All these scenarios use Ethernet framing, and the most cost-effective broadband service will be Ethernet-centric: Ethernet ports are cheaper, Ethernet cards are cheaper, and Ethernet equipment is cheaper, too. If Ethernet is the user-to-network interface of choice, broadband access networks need some way to carry Ethernet traffic from the subscriber premises to their destination.
The easiest way to do this is to simply bridge the traffic: after all, bridging was invented to connect Ethernet LAN segments together, so it should be a pretty useful way to carry Ethernet over WAN connections, too. However, most broadband networks are not Ethernet based, so the Ethernet frames transmitted by a device on a home network must be converted to some other form before being carried over the native transport medium. For that very reason, bridging in a DSL environment is tricky, because today's DSL uses ATM as the modulation layer of choice. To carry Ethernet, you have to do a form of RFC 2684 bridging. Cable networks are easier because they can natively encapsulate Ethernet frames directly over Data Over Cable Service Interface Specification (DOCSIS). Of course, the simplest scenario of all is one where the access network is Ethernet based, using either standard Ethernet Layer 1 or some form of optical transport over longer distances.

The advantage of bridged architectures is their simplicity. The customer premises equipment (CPE) has no difficult tasks to perform, so it can be very cheap. The overall simplicity has one significant cost, however: namely security. Unless the router enforces some form of security mechanism, all bridged subscribers are in the same broadcast domain, and everyone in that domain can see sensitive traffic such as ARP requests, Windows neighbor discovery packets, and so forth. This is a situation best avoided. Although DSL, cable, and residential Ethernet networks each use radically different transport mechanisms, the issues and design considerations of bridged access are common across all the different media. The next section looks at the details of bridging in DSL networks. This is probably the most complicated scenario, because of the conversion back and forth between ATM cells. Fortunately, bridging over ATM is well standardized in RFC 2684, and the focus of the following discussion is on how bridging over ATM works on Cisco aggregation routers.

Bridging in DSL Using RFC 2684

RBE is a Cisco implementation of bridged Ethernet over ATM with a separate broadcast domain for every ATM circuit. The CPE is a simple bridge that encapsulates Ethernet frames into ATM cells using the RFC 2684 bridging standard. Figure 2-1 shows a typical RBE architecture.

Figure 2-1 RBE Architecture

Figure 2-2 illustrates the packet encapsulations used at different points in the network.

Figure 2-2 RBE Network Cross Section

The flow of packets in Figure 2-2 works as follows. For upstream traffic:

1. The subscriber PC is configured with the aggregation router's IP address as the default gateway. Just as on any Ethernet network, the PC sends an ARP request for the router MAC address and, once it learns it, transmits the Ethernet frame.
2. The bridged CPE encapsulates the Ethernet frame in an AAL5 bridge protocol data unit (BPDU), then segments the BPDU into ATM cells and sends it across the DSL network. Figure 2-3 shows the protocol encapsulations used at different points of the network.
3. The router reassembles the ATM cells, removes the AAL5 information and the Ethernet frame information, and routes the packet to its destination. Note how the router behaves: this is the behavior you would expect to see on a routed interface.

For downstream traffic:

1. A server sends a packet to the subscriber PC that is routed to the aggregation router.
2. The aggregation router has a static route that identifies the interface to use to reach the subscriber's IP address.
If necessary, the aggregator issues an ARP request to discover the subscriber PC MAC address. Then it encapsulates the Ethernet frame in an AAL5 bridged-format BPDU, segments everything into ATM cells, and transmits it.
3. The CPE reassembles the ATM cells into AAL5 PDUs, removes the AAL5 information, and transmits the frame on its Ethernet port.
4. The PC receives the data.

Figure 2-3 Payload Format for Bridged Ethernet/802.3 PDUs (source: RFC 2684)

For neighbor-to-neighbor traffic, note that if a subscriber PC sends a packet to another subscriber connected to the same aggregation router, the flow of packets is identical to the upstream and downstream flows described here. It is important to understand that there is no direct Layer 2 path between subscribers and that all traffic must be routed, even when subscribers' circuits are terminated on the same physical port on the router.

In Cisco IOS Software terms, the router in Figure 2-2 uses a logical point-to-point subinterface for each subscriber and treats each of these interfaces as a separate IP network. The default requirement of such a topology is, of course, to have a different IP subnet on every link. But in broadband, you have to manage very large numbers of connections, and there can be thousands of RBE subscribers connected to a single router. In such a case, IP addresses can run out very quickly. To get around this, you use unnumbered interfaces. When using unnumbered interfaces, as in Frame Relay networks, the router can no longer know which interface to use to send traffic to a particular subscriber just by looking at the destination IP address, because no IP address space is associated with a subinterface. Additionally, to save IP address space, the subscriber IP addresses belong to the same subnet. Therefore, there must be an explicit route statement that maps the subscriber virtual circuit to its IP address. This is why step 2 for downstream traffic mentions a route: because of the use of unnumbered links.

Before learning about RBE configuration, you should understand the alternative to RBE, called integrated routing and bridging (IRB), because both RBE and IRB are used (although there is less and less use of IRB). IRB is a multipoint topology in which all the subscribers are terminated on a point-to-multipoint interface. The architectural problem with IRB is that all of the subscribers are on the same Layer 2 network and are thus part of the same broadcast domain, which leaves the network open both to performance degradation from broadcast storms and to security issues. RBE is a superior, more secure implementation. For example, ARP spoofing is not possible with RBE because an ARP request for a particular address is sent only on the subinterface for that address. With IRB, the request would be flooded to all interfaces in the bridge group. RBE also prevents MAC address spoofing, again because there is a distinct subnet for each subinterface. If a hostile user tries to hijack someone else's address by injecting a gratuitous ARP packet (using their MAC and the victim's IP address), Cisco IOS will detect a subnet mismatch and generate a "Wrong Cable" error. Note that, from a subscriber's point of view, RBE and IRB both look exactly the same.

Now that you understand the theory and architecture, you are ready to look at some configuration scenarios:
- Basic RBE configuration
- RBE IP address assignment

These sections all get into the details of Cisco IOS commands. RBE router configuration in Example 2-1 is straightforward.
Example 2-1 Basic RBE Configuration

interface Loopback0
 ip address 192.168.1.1 255.255.255.0
 no ip directed-broadcast
!
interface ATM0/0/0.132 point-to-point
 ip unnumbered Loopback0
 no ip directed-broadcast
 atm route-bridged ip
 pvc 1/32
  encapsulation aal5snap
!
interface ATM0/0/0.133 point-to-point
 ip unnumbered Loopback0
 no ip directed-broadcast
 atm route-bridged ip
 pvc 1/33
  encapsulation aal5snap

The configuration is very similar to regular IP over ATM on a Cisco router, with only the addition of atm route-bridged ip to enable RBE. The subscribers' hosts must be configured to use the aggregator interface as their default gateway, in this case 192.168.1.1.

RBE Quality of Service

RBE has a full range of QoS options. Because it runs over an ATM PVC, you can fully exploit all the capabilities of the ATM layer to offer different QoS profiles to subscribers. You need to remember that ATM class of service (CoS) is applied to any and all traffic on the circuit; you can't restrict it to an individual application or destination. Additionally, you can also use IP QoS. Cisco IOS has numerous bells and whistles that let you apply policies to combinations of application flows, IP destinations, and so forth. You can classify packets, police their rate, queue them, and prioritize them: whatever you need to do to make different levels of service available to user applications. You enable IP QoS by applying a service policy to a PVC. Example 2-2 adds a PREMIUM policy to output IP traffic on PVC 1/33. The PREMIUM policy is not included here, but is defined using standard Cisco IOS Modular QoS CLI (MQC) syntax. (A sketch of one possible PREMIUM policy appears at the end of this RBE discussion.)

Example 2-2 RBE with IP QoS

interface ATM0/0/0.133 point-to-point
 ip unnumbered Loopback0
 no ip directed-broadcast
 atm route-bridged ip
 pvc 1/33
  encapsulation aal5snap
  service-policy output PREMIUM

RBE supports Weighted Random Early Detection (WRED), low-latency queuing (LLQ), and policing. As previously mentioned, RBE uses unnumbered point-to-point subinterfaces with a route to each subscriber IP device. Example 2-3 shows the Cisco IOS routing commands, with the required static route for each subscriber.

Example 2-3 RBE Static Routes

! network routes
router ospf 100
 redistribute static
 network 192.168.14.0 0.0.0.255 area 0
! subscriber routes
ip route 192.168.1.2 255.255.255.255 ATM0/0/0.132
ip route 192.168.1.3 255.255.255.255 ATM0/0/0.133

Example 2-3 has only a couple of subscriber static routes, but imagine a configuration with 10,000 subscribers connected to the same router. If you announce all these host routes across an IP backbone, you can run into trouble with route table sizes, because each individual entry in a route table consumes memory. Figure 2-4 shows a simple network with RBE subscribers connected to an aggregation router. Consider the flow when the subscriber PC in this network pings a server out on the Internet, at 198.51.100.10.

Figure 2-4 RBE Packet Flow

In Figure 2-4, the following happens:

1. The subscriber PC uses the aggregation router's address as the default gateway, so the ICMP ECHO request packet is sent to R0.
2. The aggregation router, R0, also has a simple default route that points to R1. R0 forwards the ICMP packet received from 192.168.1.2 to 192.168.15.1, which is the address of R1.
3. R1 forwards the packet out its egress interface across the Internet. Assuming that IP routing is functioning correctly, the ICMP packet will eventually reach its destination at 198.51.100.10.

Now, consider how the ICMP reply is routed back to 192.168.1.2.
1. R1 has announced the 192.168.0.0/16 network to the Internet, so data sent to 192.168.1.2 will reach it.
2. R0 has announced /32 routes for all the RBE subscribers, so R1 will find a route entry for 192.168.1.2/32. (Of course, the next hop will be some intermediary router between R1 and R0.)
3. R1 sends the ICMP packet, which is routed to R0.
4. R0 now looks up 192.168.1.2 in its routing table and finds the static route that points to PVC 1/32 on ATM interface 0/0/0.

Multiply this scenario by several thousand RBE interfaces on several hundred aggregation routers, and the routing tables throughout the 192.168.0.0 network quickly grow to unmanageable sizes. It is a much better design to aggregate the routes as soon as possible, and a simple static route to Null0 is one way to do this. (Plenty of other ways exist, such as configuring OSPF to announce subnet addresses, but traffic to an interface, even Null0, is processed very quickly on a router, so performance is quite good using this method.) In the example shown in Figure 2-4, the ISP 192.168.0.0 backbone routers now need to carry only a single announcement for network 192.168.1.0/24, as indicated in Example 2-4.

Example 2-4 RBE Static Routes on R0 with Null0 Route

! subscriber routes
ip route 192.168.1.2 255.255.255.255 ATM0/0/0.132
ip route 192.168.1.3 255.255.255.255 ATM0/0/0.133
ip route 192.168.1.0 255.255.255.0 Null0
! default route
ip route 0.0.0.0 0.0.0.0 ATM1/0/0.100

Now, suppose host 192.168.1.2 again sends a ping to the server at 198.51.100.10. The following would happen:

1. On the path from 192.168.1.2 to 198.51.100.10, everything happens as before.
2. On the return path, the server at 198.51.100.10 replies with an ICMP REPLY, which finds its way to R1.
3. R1 has a route for 192.168.1.0/24 that was originally announced by R0. The ICMP REPLY packet will be forwarded to R0.
4. R0 has the same static route to PVC 1/32 on ATM interface 0/0/0. Any traffic received by R0 that is for the 192.168.1.0/24 subnet, and for which there is no static RBE route, will be forwarded to Null0 (that is, dropped).

Although the use of route aggregation is well understood in large IP networks, it has not been widely used in DSL wholesale scenarios, where traffic is tunneled, not routed, to the ISP. Route aggregation is one of the challenges that reappears with IP VPNs and will be discussed further in later chapters. In the next section, you will see how to create the subscriber routes automatically, instead of statically as in this section. Keep in mind, however, the importance of being able to aggregate as early as possible: you don't want tons of /32 routes wandering around your network.

RBE Address Assignment

You have seen RBE subscribers in all the examples so far with addresses already configured. How did they get them? How do you scale address assignment methods for broadband networks? Because the preceding sections are all about bridging, DHCP is the logical choice for dynamic address assignment. Statically configuring all the end-station addresses obviously is impossible; the headaches this would create would completely outweigh any employment-protection advantages for network operations staff. When using DHCP on a DSL network, you have two basic options:
- Configure a DHCP server on the aggregation router. This configuration is less common, but entirely possible. (You will see configuration examples of Cisco IOS DHCP servers in the "Cable CMTS" section, later in this chapter, and a brief sketch at the end of this section.)
- Use a central DHCP server to which the aggregation router forwards DHCP requests.
In this case, the router behaves as a DHCP relay agent. To do DHCP relay, add the ip helper-address command to every subscriber interface. The ip helper-address command gives the address of the DHCP server, as shown in Example 2-5.

Example 2-5 RBE Configuration with DHCP Relay

interface Loopback0
 ip address 192.168.1.1 255.255.255.0
 no ip directed-broadcast
!
interface ATM0/0/0.132 point-to-point
 ip unnumbered Loopback0
 ip helper-address 192.168.2.100
 no ip directed-broadcast
 atm route-bridged ip
 pvc 1/32
  encapsulation aal5snap

The sequence of events when using DHCP relay is as follows:

1. When the subscriber host starts up, it broadcasts a DHCP Discover packet. This is carried in a BPDU to the aggregation router.
2. The aggregation router recognizes this as a DHCP packet and knows that it needs to forward it to a DHCP server because of the ip helper-address on the subinterface. In this case, the aggregator is behaving as a DHCP relay agent. The relay agent actually converts the DHCP broadcast into a unicast IP packet to the DHCP server located at 192.168.2.100. The relay agent puts its own address in the giaddr field of the DHCP packet and puts the subscriber VPI/VCI in the Option 82 field. (This data also includes the receiving interface name, so it is unique per device. The combination of the giaddr IP address and Option 82 yields a globally unique circuit ID.) You can use the global rbe nasip command to set the interface address the router puts in the giaddr field. You can configure multiple DHCP servers by entering additional ip helper-address commands.
3. The DHCP server returns a DHCP Offer packet.
4. The PC chooses from the different servers that replied to its Discover message and sends a DHCP Request to one of them. Remember, the PC still does not have an IP address at this point, so it broadcasts. The DHCP relay agent again forwards the packet to the DHCP server. The relay agent does have an IP address, so it unicasts the packet to the server using UDP.
5. The DHCP server selects an address from an appropriate pool of IP addresses (it can use either the requesting MAC address or giaddr and Option 82 to select the scope) and returns a DHCP reply to the PC, which now has its own IP address and, with it, the default gateway address.
6. Crucially, the router dynamically creates a host route for the new IP address. As DHCP replies are sent by the DHCP server, the aggregation router looks at the address being assigned and creates a host route to that address using the interface on which the request was originally received. This is one of the bits of magic needed for large-scale deployment. These routes are marked as static in the output of the show ip route command.

You should still use the summarization technique discussed earlier with dynamic addresses too: announce in your favorite dynamic routing protocol the subnet of addresses that you know will be allocated to DHCP requests originating from a particular aggregation router. As new hosts connect to the network, host routes for them are created automatically as soon as they are assigned addresses. The aggregator will then have an aggregate route to make sure that packets are sent to it for the group of potential subscribers, and specific host routes for hosts that are actually active. This way you have the best of both worlds: an aggregate route is announced to peer routers, but per-subscriber routes are dynamically created as IP addresses are assigned.
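Before moving on to cable, here is the promised sketch of what the PREMIUM policy attached in Example 2-2 might look like. The book leaves the policy undefined, so the class names, match criteria, and rates below are purely illustrative assumptions, written in standard MQC syntax:

class-map match-any VOICE
 match ip dscp ef
class-map match-any BUSINESS
 match ip dscp af31
!
policy-map PREMIUM
 ! LLQ: strict-priority queue for voice, policed to 128 kbps
 class VOICE
  priority 128
 ! guaranteed 256 kbps for business-class traffic during congestion
 class BUSINESS
  bandwidth 256
 ! flow-based fair queuing plus WRED for everything else
 class class-default
  fair-queue
  random-detect

Once defined, such a policy is attached exactly as shown in Example 2-2, with service-policy output PREMIUM under the subscriber PVC.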
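Similarly, for the first DHCP option described earlier, running the DHCP server on the aggregation router itself, a minimal Cisco IOS sketch might look as follows. The pool name, addresses, and lease time are example values, not a configuration taken from this book:

ip dhcp excluded-address 192.168.1.1
!
ip dhcp pool RBE-SUBSCRIBERS
 network 192.168.1.0 255.255.255.0
 default-router 192.168.1.1
 dns-server 192.168.2.53
 lease 0 12

With a local pool like this, no ip helper-address is needed on the subscriber subinterfaces, because the router answers the DHCP Discover itself.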
More Bridged Access: Cable and DOCSIS

The worlds of cable and DSL have some major differences, but from an IP perspective they are very similar. If you ignore the many Layer 1 details on a cable headend router, the router configuration is similar to RBE, and thus many of the points already introduced for RBE also apply to cable access. Cable modems (CMs) communicate with headend routers, called cable modem termination systems (CMTSs), over the HFC plant using the DOCSIS standard: data is modulated and demodulated using the North American DOCSIS specifications, with downstream 6-MHz channels in the 54- to 860-MHz range and upstream ranges of 5 to 42 MHz. The cable interface supports NTSC channel operation, using standard (STD), Harmonic Related Carrier (HRC), or Incremental Related Carrier (IRC) frequency plans conforming to EIA-S542. NTSC uses a 6-MHz-wide modulated signal with an interlaced format of approximately 30 frames per second and 525 lines per frame, and is compatible with the Consultative Committee for International Radio (CCIR) Standard M. PAL is used in West Germany, England, Holland, Australia, and several other countries.

The DOCSIS radio frequency (RF) specification defines the RF communication paths between the CMTS and CMs (or CMs in set-top boxes). It defines the physical, link, and network layer aspects of the communication interfaces, and includes specifications for power level, frequency, modulation, coding, multiplexing, and contention control. The DOCSIS standard is extremely rich but, at a very high level, provides a TDM-like system of time slots on a shared infrastructure. In the downstream direction, Ethernet frames are carried in variable-length DOCSIS frames over MPEG-2 transport framing. In the upstream direction, fixed-length time slots are assigned to the cable modems by the CMTS. The downstream bandwidth is policed by the CMTS according to the QoS profile of each subscriber. Standard Ethernet 802 LLC is run on top of the DOCSIS layer. Figure 2-5 shows the encapsulation stack.

Figure 2-5 DOCSIS Protocol Stack

As part of the session negotiation process, a Service Identifier, or SID, which is part of the DOCSIS cable MAC layer, is allocated to each cable modem. The SID is used somewhat like the ATM circuit identifier in DSL networks: all traffic sent to and by a given cable modem uses the same SID. The DOCSIS 1.1 specification enhances this to allow cable modems to use several SIDs, each with a different QoS profile, so that voice or video can be run over the same infrastructure as data traffic. Again, the parallel with ATM PVCs is apparent. The cable modem bridges traffic from its LAN Ethernet port over the WAN DOCSIS interface to the CMTS. Subscriber hosts see a shared-access Ethernet network. For upstream traffic, they behave just as RBE clients do and need to ARP for the CMTS MAC address. The CMTS likewise ARPs for PC MAC addresses in the downstream case. Figure 2-6 shows the encapsulations used at different points in the network.

Figure 2-6 DOCSIS Network Cross Section

DOCSIS Cisco IOS Configuration

From a Cisco IOS perspective, there are commands specific to the cable plant (HFC) interfaces, cable-modem profiles, and so on. Unlike RBE, cable interfaces are natively point to multipoint, which is less secure than point to point, so the CMTS and CM rely on other techniques to compensate. Another difference between basic cable and RBE configuration, illustrated in Example 2-6, is the widespread use of secondary addressing (which is also supported with RBE, but is not used very much).
In Example 2-6, the primary subnet is for the cable modems; the secondary subnet is for the hosts.

Example 2-6 Basic Cable Router Interface Configuration

interface Cable4/0
 ip address 10.1.1.1 255.255.0.0
 ip address 172.17.1.1 255.255.0.0 secondary
 load-interval 30
 no ip directed-broadcast
 cable helper-address 172.16.1.2
 no keepalive
 cable downstream annex B
 cable downstream modulation 64qam
 cable downstream interleave-depth 32
 cable downstream frequency 525000000
 cable upstream 0 power-level 0
 no cable upstream 0 shutdown
 cable upstream 0 frequency 37008000
 cable upstream 1 shutdown
 cable upstream 2 shutdown
 cable upstream 3 shutdown
 cable upstream 4 shutdown
 cable upstream 5 shutdown

Cable-modem profiles are an important component of DOCSIS networks. These profiles contain configuration instructions, such as upstream and downstream bandwidth, the number of allowed hosts per connection, and so on. Example 2-7 shows four different profiles.

Example 2-7 Cable-Modem Profiles

!
cable config-file platinum.cm
 service-class 1 max-upstream 128
 service-class 1 guaranteed-upstream 10
 service-class 1 max-downstream 10000
 service-class 1 max-burst 1600
 cpe max 10
 timestamp
!
cable config-file gold.cm
 service-class 1 max-upstream 64
 service-class 1 max-downstream 5000
 service-class 1 max-burst 1600
 cpe max 3
 timestamp
!
cable config-file silver.cm
 service-class 1 max-upstream 64
 service-class 1 max-downstream 1000
 service-class 1 max-burst 1600
 cpe max 1
 timestamp
!
cable config-file disable.cm
 access-denied
 service-class 1 max-upstream 1
 service-class 1 max-downstream 1
 service-class 1 max-burst 1600
 cpe max 1
 timestamp

Cable Address Assignment

Given that cable broadband is a bridged environment, it shouldn't be surprising to learn that DHCP is used for address assignment. There is a small quirk, though. Even though it functions as an Ethernet bridge, the cable modem also needs an IP address so that it can be managed and can retrieve its configuration profile. Cable standards mandate the use of the Trivial File Transfer Protocol (TFTP) and time-of-day (ToD) protocols to retrieve configuration files. TFTP needs an IP address, so the modems use DHCP to get one. Subscriber hosts also use DHCP to get their addresses. However, for security reasons, the end stations are typically on a different IP subnet than the modems. The DHCP function on the CMTS router is quite sophisticated. You can either configure the CMTS to relay the DHCP requests to different servers, depending on whether the modem or host sends the packet, or configure different DHCP pools on the router itself: one pool for the modems, one pool for the subscribers. You can also use a mix of the two approaches. Because of this, the Cisco IOS commands on the CMTS are a little different from RBE, and you use cable helper-address instead of the standard ip helper-address command, as demonstrated in Example 2-8.

Example 2-8 Cable Router with Multiple DHCP Relay

interface Cable3/0
 ip address 10.1.1.1 255.0.0.0
 no ip directed-broadcast
 no keepalive
 cable insertion-interval 500
 cable downstream annex B
 cable downstream modulation 64qam
 cable downstream interleave-depth 32
 cable downstream frequency 128025000
 no cable downstream if-output
 cable upstream 0 frequency 28000000
 cable upstream 0 power-level 0
 no cable upstream 0 fec
 no cable upstream 0 scrambler
 cable upstream 0 data-backoff 5 12
 no cable upstream 0 shutdown
 cable helper-address 172.16.1.2 cable-modem
 cable helper-address 172.16.1.3 host

In this example, there are two DHCP servers.
ETTX Quality of Service

ETTX is often perceived to have the weakest QoS infrastructure of the three access network types under consideration. Although there is no standardized equivalent to ATM's service classes (CBR, VBR, and so on) or DOCSIS, Ethernet switches do offer relatively rich QoS capabilities, such as IP- and TCP-based classification, IP DSCP or 802.1p tag-based prioritization, and sophisticated scheduling and policing. Additionally, CoS-to-IP-DSCP mapping can be done automatically, or CoS can be set on a port basis depending on the trust that is ascribed to a user. Cisco IOS access lists can be used for classification, so different applications or hosts can be treated differently. Even quite low-cost switches can offer multiple queues per port, which is required for multiservice applications. As anecdotal evidence, remember that a lot of enterprises run voice over their switched infrastructure, which is a testimony to the level of QoS that Ethernet infrastructure can provide.

Apart from transporting multiservice traffic, IP QoS can also be used to help compensate for the fact that Ethernet does not offer many increments for service offerings, jumping from 10 to 100 to 1000 Mbps. Subinterfaces can be policed to lower or intermediate rates, such as 2 Mbps, 34 Mbps, and others, as demonstrated in Example 2-12. You can mark down nonconforming packets, or discard them, to enforce the particular service contract.

Example 2-12 ETTX QoS Configuration: Policing Subscriber Interfaces

policy-map option-128k
 class class-default
  police 128000 10000 10000 conform-action set-prec-transmit 0 exceed-action drop
policy-map option-512k
 class class-default
  police 512000 10000 10000 conform-action set-prec-transmit 0 exceed-action drop
policy-map option-1Meg
 class class-default
  police 1000000 10000 10000 conform-action set-prec-transmit 0 exceed-action drop
policy-map option-10Meg
 class class-default
  police 10000000 10000 10000 conform-action set-prec-transmit 0 exceed-action drop
interface GigabitEthernet1/0/0.15
 description VLAN connecting to customer1
 encapsulation dot1Q 15
 ip address x.x.x.x y.y.y.y
 service-policy input option-128k
 service-policy output option-128k
interface GigabitEthernet1/0/0.88
 description VLAN connecting to customer2
 encapsulation dot1Q 88
 ip address a.a.a.a b.b.b.b
 service-policy input option-1Meg
 service-policy output option-1Meg

ETTX Address Assignment

Unsurprisingly, ETTX uses DHCP for address assignment to simplify address management and distribution. If, as is common today, the network is owned and operated by a single entity, there are no new issues related to address assignment beyond those already discussed thus far in the chapter. Addresses are assigned using DHCP, and the Ethernet CPE in Figure 2-8 adds option 82 information if port identification is required (a DHCP snooping sketch follows). The role of the CPE is very important for security, as discussed in the next section.
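On switches that support DHCP snooping, the option 82 behavior described here can be sketched as follows. This is an assumed configuration with illustrative VLAN and port numbers, not one taken from the text.

! Build a DHCP binding table and insert option 82 (which
! identifies the subscriber circuit and port) into relayed
! DHCP requests
ip dhcp snooping
ip dhcp snooping vlan 2
ip dhcp snooping information option
!
! Only the uplink toward the aggregation router is trusted
! to carry DHCP server responses
interface GigabitEthernet0/1
 ip dhcp snooping trust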
Using IP addresses efficiently is just as important in ETTX networks as in any DSL or cable network. Consider the case in Figure 2-12 of an open access ETTX network in which each subscriber can belong to one of two ISPs, ISP A or ISP B, both of which use DHCP to assign addresses.

Figure 2-12 Open Access Architecture for Residential Ethernet

Ethernet can be delivered over point-to-point or ring topologies. Although Figure 2-12 shows just one ring, there can be multiple rings of CPE connected to every aggregation router. Each ring is terminated on a physical interface, with potentially many subinterfaces, each one corresponding to a different VLAN (and there are different policies for how VLANs are used, as previously discussed). As each VLAN corresponds to a different IP subnet, there must be as many subnets as there are VLANs. For an ISP, this can result in wasted address space. To understand the issue of address waste, consider the following sequence of steps:

1. ISP A has 60 subscribers in the metropolitan region and wishes to use the 192.168.1.0/26 subnet.
2. Subscriber A connects on the first ring and sends a DHCP request, which is relayed to ISP A's DHCP server. VLAN 20 is used.
3. The second subscriber connects, but this time in a different part of the city and on a different ring. This time, traffic is in VLAN 34.
4. ISP A would like to have the same subnet for all the subscribers in this metro area, but needs a different subnet for VLAN 34, or else the aggregation router could not route traffic correctly to subscribers on different subinterfaces.

The bad solution to this problem is to use a different subnet for every VLAN. Unfortunately, ISP A probably has no way of knowing how many subscribers will be on each VLAN and so would potentially need to use as many /26 subnets as there are VLANs in the network. This results in huge waste. The solution is for the metro service provider to use unnumbered interfaces. Each ISP has a loopback interface configured for its pool, and all the VLANs are unnumbered, as demonstrated in Example 2-13.

Example 2-13 Unnumbered Interfaces and Loopbacks for ETTX

interface Loopback0
 description ISP A
 ip address 192.168.1.1 255.255.255.0
interface Loopback1
 description ISP B
 ip address 192.168.2.1 255.255.255.0
interface GigabitEthernet1/0/0.15
 encapsulation dot1Q 15
 ip unnumbered Loopback0
interface GigabitEthernet1/0/0.88
 encapsulation dot1Q 88
 ip unnumbered Loopback0

Open access is already widely deployed for DSL, but it is still a relatively new concept for ETTX networks. The architecture will continue to evolve. In Example 2-13, the addresses used by the different ISPs can't overlap, because they are terminated on the same router. You will see how to lift this restriction in Chapter 7, "Implementing Network-Based Access VPNs Without MPLS."

Security Considerations for Bridged Broadband Architectures

Security is an important part of an Internet access service, whether it is sold to residential or business customers. Security at the transport and application layers is beyond the scope of this work and, indeed, is independent of the type of access used. However, lower-layer security is an important part of overall network design. If the lower layers are not secure, it is easy for an attacker to work up the stack and compromise application data such as usernames, passwords, credit card numbers, and so on. The common Layer 2 and Layer 3 risks are as follows:

Address spoofing: This category loosely encompasses all attempts to modify an end station address. It can be something as simple as changing the address on a Linux station NIC or manually changing your IP address. ARP-based attacks are more sophisticated and, because ARP is not an inherently secure protocol, spoofing is not as difficult as it should be. These attacks involve, for example, sending an ARP reply packet with a spoofed IP address. Most routers or switches simply overwrite an IP address in their ARP table with the IP address obtained from the most recent ARP response, or may record multiple IP addresses from responses, making it easier for the attacker. Slightly more sophisticated is the use of gratuitous ARP, whereby the attacker spontaneously advertises an ARP packet with its MAC address and someone else's IP address. This is completely RFC compliant, and all stations that receive the ARP packet happily install the spurious MAC/IP mapping in their ARP tables. Now, no ARP request will be sent when traffic is received for the IP destination. Gratuitous ARP is typically used for man-in-the-middle attacks. As you can gather from the preceding explanation, ARP attacks are limited to stations that are on the same broadcast domain as the offending attacker.

DoS attacks: DoS attacks can be mounted in many ways. Some simple examples include using gratuitous ARPs to fill the CAM table on an Ethernet switch, using DHCP requests to exhaust IP addresses, or sending a very large number of Layer 3 flows to the default router. DoS attacks often use some form of address spoofing, and they can be very hard to prevent. Simply changing Ethernet source addresses constantly can be very effective against a switch. Sophisticated techniques are under development to improve network security against DoS attacks. Today's routers already have source address checking, access lists, and NetFlow statistics.

Broadcast traffic and OS weaknesses: This is not really a category of network attacks, but more an observation that many host stations are inherently insecure "out of the box" and allow any neighbor machine on the same broadcast domain to browse disk contents. In remote access, the ability to broadcast is trouble.

Security in DSL Broadband Networks

RBE has two characteristics that contribute to network layer security: The router uses point-to-point interfaces, and the broadcast domain is limited to a single site. If you send a gratuitous ARP packet, the router may install it in its ARP table, but downstream traffic is always directed to the correct PVC because of the host routes used with RBE. When the router receives a packet for a host, it sends the ARP only on the PVC that matches the host route for that address. In other words, ARP is used only after the correct subscriber interface has been identified by the IP routing table. Layer 2 attacks are hard to do successfully in this case. Even if you do successfully spoof an IP address, the host routes again make sure traffic reaches the correct destination, as described in the following step sequence (the addresses are illustrative):

1. Host A sends a packet with a spoofed source address of 192.168.1.50 to 192.168.1.100. Its true address is 192.168.1.99.
2. The RBE router forwards the packet to 192.168.1.100.
3. On the return path, the router either routes the traffic to the correct interface for address 192.168.1.50 or drops it if no such address exists in the routing table. It is not returned to host A.

Before you conclude that this technique is a good way to send large and unsolicited streams of traffic to your neighbor, remember Unicast Reverse Path Forwarding (uRPF) checking. uRPF, if configured, will drop the spoofed packet in the first place, because a router will not accept a packet if the incoming interface is different from the interface that the routing table uses to reach the packet's source address.
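A minimal sketch of uRPF on an RBE subscriber subinterface follows. The interface numbering and PVC are illustrative, and on more recent IOS versions the equivalent command is ip verify unicast source reachable-via rx.

interface ATM0/0/0.132 point-to-point
 ip unnumbered Loopback0
 ! Drop packets whose source address does not route back out
 ! this interface
 ip verify unicast reverse-path
 atm route-bridged ip
 pvc 1/32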
Security in Cable Broadband Networks

Because security has long been an issue on cable networks, the Baseline Privacy Interface was added to the DOCSIS specifications to give a secure communication channel between each cable modem and the CMTS. Unlike RBE, the CMTS router uses a point-to-multipoint interface. Potentially, then, ARP-based attacks are possible because the CMTS sends ARP packets to all hosts on its physical interface. However, remember that the DOCSIS layer also offers protection. The cable modem can store the MAC address/SID mappings, which the network administrator can poll to troubleshoot security issues.

The cable source-verify command is important. It configures the CMTS to enforce address assignment by dropping packets with source addresses that it has not seen assigned by a DHCP server (using the cable source-verify dhcp option). For this to work properly, the DHCP server must support the DHCPLEASEQUERY message. The cable source-verify command prevents attacks based on theft of IP addresses (using a valid address that belongs to someone else) as well as attacks based on invented addresses (either addresses that are valid but not assigned or addresses that are made up).
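A minimal sketch of these protections on a cable interface appears below; it is illustrative only, with the other interface commands omitted.

interface Cable3/0
 ! Drop upstream packets whose source address was not assigned
 ! through DHCP; the CMTS can send DHCPLEASEQUERY messages to
 ! check addresses it has not seen assigned
 cable source-verify dhcp
 ! Rely on DHCP-learned bindings rather than dynamic ARP
 no cable arp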
Security in Ethernet Broadband Networks

Without VLANs, all subscribers in a switched network are in the same broadcast domain, which is an open invitation to trouble because of the risks that this scenario creates (some of which were discussed earlier in this chapter). Dedicating a VLAN to each customer is, in theory, possible, but it is impractical because of the 4096 global VLAN limit in the 802.1q protocol. It is more common to configure a single VLAN per switch. This still leaves everyone on the same switch in the same broadcast domain. To solve this issue, private VLANs are used. Private VLANs prevent traffic from a subscriber port from going to any other port on the switch, with the exception of the trunk port, which is defined as a promiscuous port. From the perspective of the subscriber, as represented in Figure 2-13, subscribers "see" a private point-to-point link to the router. The aggregating router, in turn, sees a switched Ethernet segment with multiple subscribers. Another alternative is to use double VLAN encapsulation, called QinQ, where traffic from each subscriber's port is mapped to a different 802.1q tag, and then that frame is tagged again when it leaves the switch with a tag that uniquely identifies the switch on the Ethernet network. The aggregation router has to be smart enough to handle this double layer of VLAN tags.

Figure 2-13 Private VLAN
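A minimal private-VLAN sketch on a Catalyst-style switch appears below. The VLAN and port numbers are illustrative, and some lower-end platforms implement the simpler protected-port variant (switchport protected) instead.

! Primary VLAN plus an isolated secondary VLAN
vlan 100
 private-vlan primary
 private-vlan association 101
vlan 101
 private-vlan isolated
!
! Subscriber port: can exchange traffic only with promiscuous
! ports
interface FastEthernet0/10
 switchport mode private-vlan host
 switchport private-vlan host-association 100 101
!
! Uplink toward the router: promiscuous, maps the secondary
! VLAN back to the primary
interface GigabitEthernet0/1
 switchport mode private-vlan promiscuous
 switchport private-vlan mapping 100 101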
Limiting the broadcast domain stops some of the simplest attacks, but does not prevent ARP or IP address spoofing. Remember that for cable, the router enforces DHCP assignments. If a host tries to change an address, or sends a gratuitous ARP packet, the CMTS ignores it, because the address was not assigned to the host by a known DHCP server. The Ethernet scenario is harder to manage because subscriber interfaces have been aggregated on a switch farther downstream in the network, which may not yet have the necessary mechanisms to enforce Layer 3 to Layer 2 bindings.

Port security is a useful feature that prevents the switch from allowing a MAC address learned on one port to be used on another. Port security can also allow static MAC addresses to be configured for each port. Admittedly, it is hard to scale a solution based on static addresses for a large number of subscribers without an excellent OSS system. Ethernet switches do have an increasingly large array of tools to deal with DoS attacks. These tools include broadcast suppression, ARP throttling, route processor rate limiting, security ACLs, and so on. On good-quality switches, these features are all implemented in hardware, so they are very efficient and are well worth some extra cost.
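As a sketch of how two of these tools combine on a subscriber port, port security plus broadcast suppression might look like the following; the thresholds and port numbers are illustrative, not from the text.

interface FastEthernet0/10
 switchport mode access
 ! Learn at most one MAC address on this port; drop frames
 ! from any other source MAC and count the violations
 switchport port-security
 switchport port-security maximum 1
 switchport port-security violation restrict
 ! Suppress broadcast floods above 10 percent of the port
 ! bandwidth
 storm-control broadcast level 10.00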
Identification and authorization should always be done as soon as possible in a remote-access network. There is a huge difference in effectiveness between being able to apply policies on a device where each subscriber is on a different port or Layer 2 circuit and one where this is not the case. Currently, the 802.1x standard is emerging as a possible solution to this problem. 802.1x, which is a port-based access mechanism, works as follows:

1. The client station, or supplicant, sends a request to the switch, or authenticator, which forwards it to a RADIUS authentication server.
2. The authentication server returns a challenge to the supplicant, which must respond correctly to be granted access to the network.
3. The authenticator provides access to the LAN.

This is reminiscent of Point-to-Point Protocol (PPP) authentication, and you can think of 802.1x as using a PPP-like authentication mechanism to provide port-based access control.
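A minimal 802.1x sketch on the authenticator switch follows. The RADIUS server address and key are hypothetical, and the command syntax varies with IOS version (newer releases use authentication port-control auto).

! Point the switch at the RADIUS authentication server
aaa new-model
aaa authentication dot1x default group radius
radius-server host 192.168.100.10 key s3cr3t
!
! Enable 802.1x globally, then require authentication on the
! subscriber port
dot1x system-auth-control
interface FastEthernet0/10
 switchport mode access
 dot1x port-control auto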
Authentication and Accounting in Bridged Broadband Architectures

One of the significant differences between Ethernet-based access and PPP-based solutions is how subscribers are authenticated on the network. With bridging, there is no authentication using a subscriber's name. Authenticating by name is always ideal, if you have the option, because you authenticate an individual and can then enforce policy with a fine level of granularity. In bridged architectures, the network-access control is Layer 2 based. If you have a valid MAC address on a valid port, you will receive a valid IP address. At no time does the user have to enter a name and password. So the user identity must always be tied back to the Layer 3 or Layer 2 addresses and, as you've just seen, these are not the most secure.

Billing is another weakness of the Ethernet solution if the service provider wants to offer a metered service. There is no standards-based way to retrieve usage statistics. Some switches have some useful data in MIBs, but some don't. On routers, if there is hardware for it, you can enable NetFlow accounting. However, NetFlow accounting on high-speed networks generates a considerable amount of data, and the OSS systems must do a lot of data crunching and cross-checking between systems to work out which flow belonged to which person at a given time.
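For completeness, the router-side NetFlow configuration is small; the heavy lifting is in the collection and mediation systems. A sketch, with a hypothetical collector address and port:

! Account for flows arriving on the subscriber-facing interface
interface GigabitEthernet1/0/0
 ip route-cache flow
!
! Export version 5 flow records to the collector
ip flow-export version 5
ip flow-export destination 192.168.100.5 9996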
Data Management Glossary

Power Usage Effectiveness (PUE)

Power Usage Effectiveness (PUE) is the metric used to measure the energy efficiency of a data center or computing facility. It is calculated by dividing the total amount of energy consumed by the data center (including IT equipment and supporting infrastructure) by the energy consumed by the IT equipment alone. (See Data Center Consolidation.)

The formula for calculating PUE

PUE = Total Facility Energy Consumption / IT Equipment Energy Consumption

"Total Facility Energy Consumption" refers to the combined energy consumed by the entire data center, including cooling systems, lighting, power distribution, backup generators, and other supporting infrastructure. "IT Equipment Energy Consumption" represents the energy used specifically by the IT servers, storage devices, networking equipment, and other computing hardware. For example (illustrative numbers), a facility that draws 2,000 kW in total while its IT equipment draws 1,600 kW has a PUE of 2,000 / 1,600 = 1.25; every watt delivered to IT requires an additional 0.25 watts of supporting overhead.

The purpose of the Power Usage Effectiveness metric

The purpose of PUE is to provide insight into the efficiency of a data center's power usage. A lower PUE value indicates higher energy efficiency because it means a larger proportion of the total energy consumption is used directly by the IT equipment rather than being allocated to supporting infrastructure. A PUE of 1.0 represents a hypothetical ideal state where all the energy consumed is used exclusively by the IT equipment, with no additional energy needed for cooling or other infrastructure. In practice, achieving a PUE of exactly 1.0 is extremely challenging, and most data centers typically have PUE values above 1.0.

Data center strategies to reduce PUE and improve efficiency

- Efficient Cooling Systems: Implementing energy-efficient cooling technologies, such as hot and cold aisle containment, precision cooling, or free cooling, to optimize cooling efficiency and reduce energy consumption.
- Virtualization and Consolidation: Using virtualization technologies to consolidate servers and optimize resource utilization, thereby reducing the overall power requirements of the IT equipment.
- Energy Management and Monitoring: Implementing energy management systems and monitoring tools to track and optimize energy usage, identify areas of inefficiency, and make data-driven decisions for improvement.
- Efficient Power Distribution: Employing efficient power distribution systems, such as uninterruptible power supplies (UPS) with high-efficiency ratings, to minimize power losses and increase energy efficiency.
- Renewable Energy Sources: Incorporating renewable energy sources, such as solar or wind power, into the data center's energy mix to reduce reliance on fossil fuels and lower the environmental impact.

PUE is just one metric to evaluate data center energy efficiency. Additional factors like water usage efficiency (WUE) and carbon usage effectiveness (CUE) may also be considered for a comprehensive assessment of environmental impact and resource efficiency.

Data Management and Sustainability

In early 2022, supply chain challenges and sustainability were grabbing headlines: see this post for coverage. The focus on improving energy efficiency and reducing PUE has become increasingly important as pressures mount not only to consolidate data centers, accelerate cloud migration, and reduce data storage costs, but also to reduce the overall carbon footprint and contribute to sustainable IT operations. Komprise cofounder and COO Krishna Subramanian published this article: Sustainable data management and the future of green business.
Here is how she summarized the importance of unstructured data management to sustainability in the enterprise:

A lesser-known concept relates to managing data itself more efficiently. Most organizations have hundreds of terabytes of data, if not petabytes, which can be managed more efficiently and even deleted but are hidden and/or not understood well enough to manage appropriately. In most businesses, 70% of the cost of data is not in storage but in data protection and management. Creating multiple backup and DR copies of rarely used cold data is inefficient and costly, not to mention its environmental impact. Furthermore, storing obsolete "zombie data" on expensive on-premises hardware (or even cloud file storage, which is the highest cost tier for cloud storage) doesn't make sound economic sense and consumes the most energy resources.

The recommendations for achieving sustainable data management in the article are:
- Understand your unstructured data
- Automate data actions by policy
- Work with data owners and key stakeholders
Digital Rights Management (DRM)

Digital Rights Management (DRM) is a set of access control technologies used to restrict the usage of digital content and devices. DRM systems are designed to protect the intellectual property rights of content creators and distributors by preventing unauthorized copying, sharing, and modification of digital media. As the digital landscape continues to evolve, DRM has become an essential tool for protecting various forms of digital content, including software, music, movies, e-books, and more.

Understanding Digital Rights Management

DRM encompasses a wide range of technologies and strategies aimed at controlling how digital content is used and distributed. These measures help content creators maintain control over their intellectual property, ensuring they receive proper compensation for their work.

Core Objectives of DRM:
- Prevent Unauthorized Access: DRM systems are designed to restrict access to digital content to authorized users only. This ensures that only those who have purchased or been granted permission can view or use the content.
- Control Distribution: DRM technology limits the ways in which digital content can be distributed. It prevents unauthorized copying and sharing, ensuring that content creators and distributors maintain control over how their work is disseminated.
- Protect Content Integrity: DRM systems ensure that digital content remains unchanged and unaltered. This is particularly important for preserving the integrity of software, e-books, and other digital media.
- Enforce Usage Rights: DRM enables content creators to specify how their content can be used. This includes limiting the number of devices on which content can be accessed, controlling playback options, and restricting printing capabilities.

Digital Rights Management Software

Digital Rights Management software is a key component of any DRM system. This software implements the various DRM technologies and policies that protect digital content. It can be embedded within digital files or operate as standalone applications.

Key Features of DRM Software:
- Encryption: DRM software often uses encryption to protect digital content. Encryption ensures that only authorized users with the correct decryption keys can access the content. This makes it difficult for unauthorized users to bypass DRM protections.
- Access Controls: DRM software includes features that manage user access. This can involve user authentication mechanisms, such as passwords or biometric verification, to ensure that only authorized users can access the content.
- License Management: DRM software manages licenses that define the terms and conditions of content usage. This includes specifying the duration of access, the number of devices allowed, and any other usage restrictions.
- Usage Monitoring: DRM software can track how digital content is used. This monitoring helps content creators and distributors understand how their content is consumed and identify any potential misuse or breaches of the DRM protections.

How Does Digital Rights Management Work?

Understanding how DRM works involves looking at the various technologies and processes that protect digital content from unauthorized access and usage.

The DRM Workflow:
- Content Creation: The process begins with the creation of digital content, such as music, movies, software, or e-books.
During this stage, content creators define the usage rights and restrictions they want to enforce.
- DRM Implementation: The content is then processed through DRM software, which encrypts the content and embeds the specified usage rights and restrictions. This step ensures that the content is protected from unauthorized access and distribution.
- Distribution: The protected content is distributed to users through various channels, such as online stores, streaming services, or physical media. During distribution, the DRM system ensures that only authorized users can access the content.
- User Access: When a user attempts to access the DRM-protected content, the DRM software verifies their credentials and usage rights. If the user is authorized, the content is decrypted, and access is granted according to the specified restrictions.
- Ongoing Protection: Throughout the lifecycle of the digital content, the DRM system continuously enforces the usage rights and monitors for any unauthorized access or breaches.

Digital Rights Management System

A Digital Rights Management system is an integrated framework that combines DRM software, hardware, policies, and processes to protect digital content. These systems are essential for organizations and content creators looking to safeguard their intellectual property in the digital age.

Components of a DRM System:
- DRM Software: As discussed, DRM software is the backbone of any DRM system. It handles encryption, access control, license management, and usage monitoring.
- DRM Hardware: In some cases, DRM systems include hardware components, such as secure chips or dedicated servers, that provide additional layers of protection. These hardware components are often used in high-security environments to prevent tampering and unauthorized access.
- Policies and Procedures: Effective DRM systems rely on clearly defined policies and procedures. These policies outline the terms of content usage, distribution, and enforcement, ensuring that all stakeholders understand their rights and responsibilities.
- User Management: A key aspect of DRM systems is managing users and their access rights. This involves authenticating users, assigning permissions, and tracking usage to prevent unauthorized access.

Digital Rights Management Technology

DRM technology encompasses a wide range of tools and techniques used to protect digital content. These technologies are constantly evolving to address new challenges and threats in the digital landscape.

Key DRM Technologies:
- Encryption: Encryption is a fundamental technology in DRM systems. It involves encoding digital content in such a way that only authorized users with the correct decryption keys can access it. This ensures that even if the content is intercepted or copied, it cannot be used without authorization.
- Digital Watermarking: Digital watermarking involves embedding hidden information within digital content that can be used to identify and track it. Watermarks are often used to trace the source of unauthorized copies and deter piracy.
- License Keys: DRM systems use license keys to control access to digital content. These keys are distributed to authorized users and must be entered to unlock and use the content. License keys help ensure that only legitimate users can access the protected content.
- Secure Containers: Secure containers are used to package digital content along with its DRM protections.
These containers ensure that the content cannot be separated from the DRM controls, providing an additional layer of security.

Digital Rights Management Encryption

Encryption is a critical component of DRM technology. It involves transforming digital content into an unreadable format that can only be decrypted by authorized users. This process ensures that even if the content is accessed or copied, it cannot be used without the correct decryption keys.

Types of Encryption Used in DRM:
- Symmetric Encryption: Symmetric encryption uses a single key for both encryption and decryption. While it is fast and efficient, the challenge lies in securely distributing the key to authorized users without it being intercepted.
- Asymmetric Encryption: Asymmetric encryption uses a pair of keys, a public key for encryption and a private key for decryption. This method is more secure for key distribution, as the private key is never shared.
- Hybrid Encryption: Hybrid encryption combines symmetric and asymmetric encryption to leverage the strengths of both methods. Typically, symmetric encryption is used for encrypting the content, while asymmetric encryption is used for securely transmitting the symmetric key.

Digital Rights Management (DRM) is an essential technology for protecting digital content in an increasingly digital world. By understanding how DRM works, including the role of DRM software, systems, and encryption, content creators and distributors can better safeguard their intellectual property. DRM technologies continue to evolve, addressing new challenges and ensuring that digital content remains secure and properly managed.
Over at TACC, Faith Singer-Villalobos writes that researchers are using supercomputers to better understand the lung development of premature babies. The insight derived from large datasets could help save lives.

In 2016, over a dozen scientists and engineers toured a neonatal intensive care unit, the section of the hospital that specializes in the care of ill or premature newborn infants. The researchers had come together from all around the country, and brought with them a wide variety of expertise. Visiting the newborns helped put into perspective the reason for this gathering of researchers—lung development—and for their collaboration over the coming years.

James Carson of the Texas Advanced Computing Center (TACC) was one of the research scientists in this group. He and his colleagues have been working on the Molecular Atlas of Lung Development Program, known as LungMAP, funded by the National Heart, Lung, and Blood Institute (NHLBI) of the National Institutes of Health. For the past five years, the LungMAP team has been building an open access data resource of the developing lungs in both laboratory mice and humans, in order to further our knowledge of how the lung begins to breathe. The resource contains highly detailed datasets of genes, proteins, lipids, and metabolites in the context of cell types and lung anatomy. Carson says LungMAP is now a uniquely comprehensive data resource on lung development.

“Thousands of babies are born prematurely every day,” says Carson, who is a co-principal investigator on the project. “With normal development, the lung has the shape and cell types to breathe plenty of air upon birth. However, a premature lung may not be able to breathe enough air at birth, and it is a challenge to help the lung develop normally those first few months. There can be health effects that continue into adulthood without proper lung development.”

Babies born before week 37 of pregnancy are considered preterm. Preterm babies face a higher risk for one or more complications after delivery, and in many cases these involve the lungs. A baby’s lungs are typically considered mature by week 36. However, not all babies develop at the same rate, so there can be exceptions. Breathing problems in premature babies are caused by an immature respiratory system. Immature lungs in premature babies often lack surfactant, a liquid that coats the inside of the lungs and helps keep them open. Without surfactant, a premature baby’s lungs can’t expand and contract normally.

“We’re gathering data that’s never been collected before,” Carson says. “In the past, how scientists described lung development was limited by the methods available for measuring and capturing pictures. However, with access to the latest technologies for detecting molecules, we’re learning about new types and subtypes of cells, and passing that information onto the whole community of lung researchers.”

The first 5-year phase of the project, which is now in its final months, focused on characterizing the details of healthy lung development in mice and humans. The researchers are hoping to be part of the second phase of the project, which will include a new focus on understanding diseases in the human lung.

Collaboration Across the Country

The large, collaborative project involves researchers at universities, medical schools, federal laboratories, and companies.
They are collectively organized into six separate centers, four providing data collection and research, one providing human tissue samples, and one serving as the data coordinating center. TACC is part of the Center of Lung Development Imaging and Omics, which also includes Pacific Northwest National Laboratory (PNNL), Baylor College of Medicine, and the University of Washington. TACC’s role is focused on providing data storage and curation of tens of thousands of images, most of which are larger than 100 megapixels.

Charles Ansong at PNNL is the principal investigator of this research center. He and the project team at PNNL use proteomics and lipidomics to determine how much of each protein and lipid are in a tissue sample.

“We’ve done an excellent job over the past five years in pushing technology development to make measurements in smaller and smaller tissue samples,” Ansong said. “Now we’re able to perform single cell proteomics—so that given a single cell, we can detect and measure quantity for hundreds of different proteins.”

The data in mouse is collected both before and after birth, in order to give insight into all the stages of lung development. A human baby born prematurely would have cells in the lung similar to those found in a mouse prior to its birth. This information helps researchers understand what cells in the lung need to do to get to the point where they can support breathing properly.

“We’re trying to figure out all the different cell types and where those cell types are,” Ansong said. “Sometimes cells start as one type and then change to another type, depending on what stage of development they’re in. The datasets from our center allow researchers to see where genes, proteins, lipids, and metabolites are located and in what quantities.”

Carson says that it’s not as useful to look at genes that every cell has in equal amounts. “We’re more interested in genes that are unique to a specific cell type and function. With help from our collaborators at other LungMAP centers, I think we succeeded in identifying the most important genes for understanding lung development.”

Cecilia Ljungberg at the Baylor College of Medicine collects the images from the donor mice. She also performs the tissue preparation, sectioning, and a technique called high-throughput in situ hybridization. This process is used to reveal the location of specific “messenger” ribonucleic acid sequences in tissues, a crucial step for understanding the organization, regulation, and function of genes. From there, Ljungberg and her colleagues use a high resolution microscope to take images of these tissue sections, images which can approach a gigapixel in size—many times the information captured by a 10 megapixel digital camera—and upload them into CyVerse’s BisQue, a powerful computational tool which provides life scientists the ability to handle huge datasets, perform analyses, and evaluate, curate, and share images. TACC is part of the advanced computing resources that are the foundation of the CyVerse infrastructure.

“The amount of data collected is pretty staggering,” Ljungberg said. “So far, we have looked at more than 700 different genes at four different developmental stages in mouse, and collected more than 20,000 images, with each image focused on a particular gene at a particular age.”

The LungMAP website provides access to a repository of data and metadata to support scientific explorations in lung development.
Images and other data types are standardized and organized within a common ontology, or set of concepts and categories that shows their properties and the relations between them.

“We want high quality images from each stage of lung development to be included on the website,” Carson says. “LungMAP.net contains all of the data from the different research centers. For any given molecule, one can access a summary page of activity across development, and you can begin to see trends in the different cell types.”

Data Collection Techniques

The PNNL- and TACC-led team leverage three types of data collection: 1) high-throughput in situ hybridization—a technique used here to detect RNA sequences in cells across a section of tissue; 2) nanospray desorption electrospray ionization (nano-DESI), a high-resolution technique for mass spectrometry imaging, to provide fundamental knowledge about where specific lipids and metabolites are found in the lungs; 3) and highly sensitive “omics” approaches at scales from whole tissue to region-specific to cell-type-specific.

Ansong says, “The integrative spatiotemporal data generated, from genes to proteins to lipids/metabolites, provides a complementary and comprehensive view of genotype to phenotype relationships that is unprecedented in understanding normal lung development.”

Researchers and doctors are able to explore this data to better understand what normal lung development looks like. This then allows them to understand what happens when a baby is born prematurely, its organs not fully formed, and what interventions may cultivate continued lung growth. At this point in the project, the focus is on human lungs. The team is wrapping up the processing of approximately 5,000 images representing normal lung development in humans. With mice, each tissue section consists of a cross-section of the entire lung or lung lobe. However, human lungs are a lot larger, so it’s not optimal to image cross-sections of the entire lung using these methods. “The cross-section of the entire human lung doesn’t fit on a standard glass slide, so we utilize sampling strategies instead,” Carson says.

In the first phase of LungMAP, the NHLBI sought large quantities of highly detailed data sets using high throughput imaging and omics technologies. “We delivered on that greatly, and did a really good job of pushing the technology development, too,” Ansong said. For Phase two, they’re interested in progressing to high resolution 3D imaging and single cell type technologies. “And they’re interested in taking out the mouse and focusing on human lungs and diseases, which is good because by studying the human we’re getting closer to direct impacts,” he said.

Carson notes that bronchopulmonary dysplasia (BPD) is a natural fit to study. It’s a form of chronic lung disease that affects newborns (mostly premature) and infants, resulting from damage to the lungs caused by respirators and long-term use of oxygen. Most infants recover from BPD, but some may have long-term breathing difficulty. LungMAP is laying the groundwork for these investigations. The researchers involved believe the primary benefits will be felt in the near future. “Our goal is to fully understand the lung before and after birth so that doctors can apply new strategies to increase positive health outcomes for premature babies,” Carson said.

The article, “Spatial distribution of marker gene activity in the mouse lung during alveolarization,” was published in February 2019 in the journal Data in Brief (Elsevier). The authors are M.
Cecilia Ljungberg, Mayce Sadi, Yunguan Wang, Bruce J. Aronow, Yan Xu, Rong J. Kao, Ying Liu, Nathan Gaddis, Maryanne E. Ardini-Poleske, Tipparat Umrod, Namasivayam Ambalavanan, Teodora Nicola, Naftali Kaminski, Farida Ahangari, Ryan Sontag, Richard A. Corley, Charles Ansong, and James P. Carson. This data was generated as a resource for the public research community through support of the National Heart, Lung and Blood Institute (NHLBI) LungMAP program funding (U01 HL122703) and by an NIH Shared Resource equipment grant (S10 OD016167).
Different Types & Causes of a Data Breach

When people think about the causes of a data breach, the first thing to come to mind will almost certainly be hacking. However, not all breaches are a direct result of illegal activities by external sources; some can happen accidentally. Therefore, in order to ensure your cyber security and data protection measures are as robust as possible, it’s important to first understand the different types of breaches that can occur.

Different causes of data breach
- Ransomware
- Hacking
- Malware & viruses
- Man-in-the-middle (MitM) & eavesdrop attacks
- Phishing
- Business email compromise (BEC)
- Password guessing
- Keystroke loggers
- Physical theft
- Human error
- Disgruntled employees & insider threats

Ransomware

Ransomware has become the most dangerous threat in the world of cyber security, with the UK National Cyber Security Centre listing it as the most common threat from external actors. It is a type of malicious software that encrypts files and demands payment from the victim to restore access to those files. This form of cyber attack has been around for over a decade, but it has grown more sophisticated with time. Ransomware can be delivered through various channels, including email attachments, social media messaging, or infected websites. Once it infects a victim’s system, it will encrypt their files and display a message demanding payment in exchange for the decryption key. Cyber criminals typically demand payment in cryptocurrency such as Bitcoin, which is difficult for authorities to trace. The consequences of ransomware attacks can be devastating, leading to data loss, financial damage and even reputational harm. Victims often face a difficult decision: either pay the ransom or lose access to their valuable data forever.

Hacking

Hacking looks for weaknesses in a system which can be exploited in order to gain access, and can either be automated or performed manually in a highly targeted campaign. Once they have gained access, hackers may have different purposes. In some cases, hackers (also referred to as “threat actors” within the context of cyber security) are after personal data as this can be sold on the “Dark Web”, earning criminals thousands and in some cases, millions of pounds. Others hack websites or systems to install malicious code for other purposes such as remote monitoring, remote access, or potentially even to market things such as Viagra via hidden links in the website code of an unwitting site. And it’s not just high profile companies who are at risk. If you have a Wordpress site, it’s highly likely that the site is receiving automated attacks on a daily basis. If you’re surprised (or concerned) by that and would like to check for yourself, install the “Wordfence” plugin. You’ll start to receive alerts for each automated attack. However, don’t panic. Adding Wordfence alone can help improve the security of your site, and the fact that you’ve put in place some measures to increase site security and monitor for any attacks may help you demonstrate you’ve taken reasonable steps to protect data, which could be invaluable in defending against a fine at a later date.

Malware & viruses

Similar to hacking, malware and viruses can either cause a catastrophic failure resulting in a very clear indication of a breach (e.g. a computer system completely crashes or has all of its data wiped), or could be invisible to the typical user with a goal of sitting quietly on the host system undetected.
Often, this type of attack will use the victim’s system to then perform various actions in such a way that if it’s ever discovered, it will be traced back to a victim’s machine, and not that of the hacker. Malware and viruses are often installed via opening an email with an infected attachment or clicking a link which automatically instals the malicious software onto the victim’s system(s).

Man-in-the-middle (MitM) & eavesdrop attacks

This type of attack relies on any vulnerabilities in data protection during transmission between two data sources. This could be as data is sent via an unsecured network, via email, or over VoIP (Voice over IP) telephone calls. An analogy would be to imagine you’re at a tea point at work chatting with a colleague about something you didn’t want anyone else to hear. However, because you hadn’t considered the risks, you didn’t realise that another colleague was able to overhear your conversation. They could then use what they overheard against you. MitM attacks work in a similar way, and by ensuring communication between multiple points is always encrypted you can help minimise the risks. This is why it’s recommended that websites use SSL (Secure Socket Layer), as this can help reduce some of the risks for users.

Phishing

Phishing has significantly increased in recent years and is now one of the biggest risks to data subjects as, unlike hacking, the user often unwittingly and unwillingly compromises their own data. In fact, results from the UK Government’s 2022 Cyber Security Breaches Survey revealed that 39% of surveyed businesses identified a cyber attack in 2021/22, with phishing attempts representing the most common risk vector at 83%. Phishing is a type of cybercrime where the criminal sends an email or text message, or makes a phone call to the victim, and pretends to be someone else… such as the victim’s bank, internet provider or even the police!

There are 5 main types of phishing attacks:
- Email Phishing - typically a generic email which looks convincing, but isn’t from the true sender. Here’s an example provided by Royal Mail
- Spear Phishing - A more targeted and convincing type of phishing attack where the criminal may be able to include personal data about the user to build trust
- Whaling - Aimed more at high value targets and senior personnel
- Smishing and Vishing - Text and phone communication is used as the chosen method of attack
- Angler Phishing - A fairly new type of attack which is more prominent for targeting victims on social media

The scammer then asks the victim to provide them with certain sensitive data such as passwords, date of birth, mother’s maiden name or credit card details, which criminals can then use and sell until such a time as the victim becomes aware, or another party, such as the victim’s bank, becomes aware and blocks their access. It sounds like it shouldn’t work… but in reality, the scammers can be extremely convincing, well practised, and also have ways to convince the victim of their legitimacy.

An example of a phishing scam

You receive a phone call from someone claiming to be from your bank. They say something to make you want to engage, such as their systems have detected unusual activity on your account and suspect your account may be compromised. However, in order for them to discuss it in more detail, they need to verify you are the correct person, and therefore need to take you through a few security questions. At this point, the victim may become suspicious or just be wary and want to protect their data. The scammers have a way to deal with this.
They will often encourage you to hang up, call the number on the back of your card, ask for them, and then a member of the team will transfer you back to continue the call. The victim duly does this, but what they don’t realise is that the scammer hasn’t hung up. Most of us don’t check for a dialling tone before calling another number (and on mobiles, we don’t have a dialling tone at all), and so the scammer simply stays on the line and passes it to a colleague. Once the victim has “called” the other number, scammer number two begins talking as if they’ve answered the call. They then offer to transfer the call. The victim feels they’ve verified the caller is from the bank as, to their mind, they called the number on the back of the card, whereas in reality, the original caller has been on the line all the way through.

Why phishing emails with typos aren’t as silly as they look!

As an interesting aside, phishing emails will often contain intentional errors such as typos and grammatical errors! This is another indication of how clever scammers are, as this is actually a rather ingenious “self-filtering” method. People who are less likely to be susceptible will spot these errors fairly quickly and realise it’s a scam, which more often than not simply results in them deleting the email. This means a greater volume of responses, etc. will be from more vulnerable and susceptible victims. Not only does this improve their efficiency and success rates, it also provides criminals with a type of scoring system which increases the value of the data when sold on the dark web. This in turn signals to other criminals which victims are more likely to be easy targets, often resulting in some people falling victim to not just one, but multiple scams. Sadly, it’s often the older generation or vulnerable people who fall victim to phishing scams, but they’re also becoming increasingly common for business attacks, especially via email.

How to report phishing scams

You can do your bit to help combat phishing scams and protect more vulnerable members of society by reporting any phishing scams you encounter to: https://www.ncsc.gov.uk/collection/phishing-scams

As of June 2023, over 21 million scams have been reported via the system.

Business email compromise (BEC)

Business Email Compromise is a certain type of phishing attack (specifically known as a “spear phishing attack”). It is a type of cyber attack that has been increasing in frequency and sophistication over the past few years. It occurs when a hacker gains access to an organisation’s email system and impersonates a legitimate executive, employee, or vendor to initiate fraudulent transactions or information theft. The goal of the attacker is typically financial gain, but it can also involve stealing sensitive data such as intellectual property, customer information, or confidential documents. The BEC attack relies heavily on social engineering tactics rather than technical exploits or malware. The attacker carefully researches their target organisation through public sources and phishing emails to identify potential victims and gather intelligence on their operations. They then craft convincing emails that trick the recipient into taking action such as wiring money to a fake supplier account or providing login credentials for company systems.

Password guessing

Password guessing is one of the most common methods used by hackers to gain unauthorised access to personal or corporate accounts.
Password guessing is one of the most common methods used by hackers to gain unauthorised access to personal or corporate accounts. It is a technique that involves systematically trying different combinations of characters until the correct password is found. Password guessing may seem like a simple and outdated method, but it remains an effective way for cyber criminals to breach systems and steal sensitive information. Social media has also aided this type of attack, as people willingly disclose a wealth of personal data; data such as birthdays, names, favourite places, football teams, etc., which many people still include as part of their passwords. How are threat actors obtaining this type of data so easily? Well, you've probably seen (but hopefully not taken part in!) the type of social media post that asks you to share details like these about yourself. Some of these posts are pretty brazen, which makes it easy to see how much valuable information you're giving away. However, others can be far more subtle and may just ask for one or two of the above; cumulatively, over a series of posts, you could still end up giving away far more personal information than you intend to. In fact, we've put together this interesting infographic which shows examples of how many accounts have been breached using a selection of football team names. We know what you're thinking; I'll never fall for that! Well, we imagine many of the people in this Jimmy Kimmel video probably would have said the same. The interesting thing is, if you make people feel comfortable enough, it's surprising what information people will willingly share! Password guessing attacks are often carried out using automated brute-force tools. These programs can generate millions of possible passwords in a short time by using commonly used words, phrases, numbers and symbols. In addition, some hackers use social engineering tactics such as phishing emails or phone calls that trick users into giving away their passwords or sharing personal information that could be used to guess their passwords. To help combat this type of attack, systems will often limit the number of login attempts allowed within a given timeframe (sketched below), or will require users to sign up for two-factor authentication, which requires a unique code to be sent to the registered user's email address or phone number. This code must then be entered as part of the login process; the logic being that even if someone (or an automated script) manages to correctly guess a password, they won't have access to the legitimate user's email address or phone.
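To make the first of those mitigations concrete, here's a minimal illustrative sketch of login throttling. The limits and the in-memory store are made-up assumptions for the example; a real system would persist attempt data and combine throttling with 2FA:

```python
import time

MAX_ATTEMPTS = 5            # illustrative: failed attempts allowed per window
WINDOW_SECONDS = 15 * 60    # illustrative: 15-minute window

# username -> timestamps of recent failed logins (in-memory for the sketch only)
failed: dict[str, list[float]] = {}

def may_attempt_login(username: str) -> bool:
    """True if the account hasn't exceeded the failure limit in the window."""
    now = time.time()
    # Keep only failures that are still inside the window.
    recent = [t for t in failed.get(username, []) if now - t < WINDOW_SECONDS]
    failed[username] = recent
    return len(recent) < MAX_ATTEMPTS

def record_failed_login(username: str) -> None:
    failed.setdefault(username, []).append(time.time())
```

Even this simple scheme changes the economics of an attack: a brute-force tool that could try millions of passwords unthrottled is reduced to a handful of guesses per quarter-hour per account.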
A keystroke logger, often called a keylogger, is a type of software or hardware device that monitors and records every keystroke made on a computer or mobile device. Cyber criminals can use these tools to steal personal information such as credit card numbers or login credentials for financial accounts. They can also use them to gain access to confidential company data and trade secrets. Keystroke loggers can be installed remotely through phishing emails or malware attacks, making them a popular choice for cybercriminals. The danger is compounded by the fact that many keystroke loggers are difficult to detect because they operate in stealth mode, making it possible for hackers to continue collecting data without being caught. Keystroke loggers are one of the reasons why many financial institutions such as banks switched from using "input" boxes where you could type numbers to a series of individual drop-down lists of numbers, where you need to use a mouse to select the relevant number. However, as safeguards evolve, so too do the malicious tools, and sadly many keystroke loggers can now also track mouse and trackpad movements.

When considering digital security, it's easy to forget about more traditional weaknesses and exploits which could make data vulnerable to theft. Traditional break-ins are still common, and some organisations still hold records and sensitive data in paper form in filing cabinets, etc. In addition, break-ins pose a threat in that digital equipment can be stolen, cloned, or have malicious software installed, such as keystroke loggers. Sometimes this can be done so discreetly that the victim is totally unaware that they have been compromised. However, it's important to remember that physical theft can also be carried out by employees or other visitors who have permission to be on the premises. Their motivations can vary and, depending on the nature of the industry, so too can the severity of such a breach. Physical theft can be reduced by reviewing potential weaknesses of your premises, such as replacing single-glazed windows with double glazing and window locks, ensuring that all ex-employees return any keys or key-fobs upon their departure, or changing door codes regularly. CCTV is also useful, not only for protecting staff and property, but also for going back to identify any potential weaknesses following an intrusion.

Human error is one of the most common causes of an unintentional data breach, and the causes can range from a simple lack of concentration to a need for improved data protection training and awareness. Between January 2019 and December 2020, nearly 100 devices belonging to parliamentary staffers, including MPs and peers, were lost or stolen. Some of these were due to leaving them in pubs, taxis, cars, and public transport. Others were due to leaving the items unattended. The risk of the latter (eg. if you need to quickly pop away from a table in a cafe to use the toilet) can be reduced by securely locking devices to a fixed object using a Kensington lock, and by locking the device so it cannot easily be viewed by a passer-by. Other examples of human error can include sending emails with sensitive information to an incorrect recipient or email address, or failing to verify a user's identity before discussing sensitive information over the phone. Potential breaches of private data can also happen unintentionally within an office amongst co-workers. For example, someone in an HR department may leave data relating to an employee on a shared printer for longer than necessary, and as a result, another employee (outside of the HR team) may happen to see that data when they go to collect their own printout.

Insider threats relate closely to the physical theft of sensitive data mentioned above, and help answer the question of motivation raised there. A disgruntled employee may decide to compromise sensitive data in order to gain favour with another entity - whether that be a potential new employer or, at the more severe end, a foreign government. In some cases, journalists have secured employment within an organisation in order to gain information about that organisation. It may be that their intent is ethical (eg. to expose a company which is operating illegally, immorally, etc.). However, regardless of the intent, a breach is a breach in the eyes of the ICO, and therefore the organisation could be held liable if an unauthorised person gains access to personal data.
Incorrectly configured permissions are another common and easy way to inadvertently weaken the robustness of a system. Many computer systems, such as CRMs (Customer Relationship Management software), have a way to manage users and their corresponding permission levels so that the system knows what data each user can or can't access. If permission levels aren't set up correctly, or are set too wide, it's easy to grant users access to data which they should not be able to access or edit. This often happens due to laziness or a lack of planning. It's not just humans who can be granted the wrong permissions. Many computer systems talk to other internal or external systems, often by means of an API (Application Programming Interface) or TPS (Trusted Proxy Server). This integration with third-party systems, whilst not strictly permissions related, is actually the cause of many data breaches. That's because, if that external system is compromised, hackers potentially have a direct route into all of the interconnecting systems. As we write this article, the BBC, British Airways, Boots, payroll provider Zellis and more have fallen victim to a cyber attack of this nature as they use a file transfer system called "MOVEit". Therefore, when considering integrating with third-party software, it's important to:
- consider if you need to introduce a perpetual data integration
- assess whether non-perpetual data exchanges can be facilitated using an alternative approach such as manual data imports/exports, or via system connections which only work when enabled, ensuring they're disconnected when not in use
- ensure that interlinked systems only have access to the data which they truly need (a short illustrative sketch of this default-deny approach appears at the end of this article). That way, if a breach via a third-party system takes place, the data which is compromised is minimised to the smallest amount possible

Would you know if you've suffered a data breach?
Now that you know more about the different ways in which data can be breached, the next consideration should be how to identify if you've been exposed to a data breach. That's a topic in its own right, so check out our dedicated article: "How to identify a data breach and what needs to happen next".

Prevention is better than cure - Do you know all of your weak points?
At Databasix, we're experts in helping organisations improve their data protection and cyber security measures. We understand that no two clients are the same, and so we tailor our services to each client to ensure you achieve the best results possible and maximise your ROI. Therefore, if you would like to learn more, please contact us today for a friendly and no-obligation consultation.
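As promised above, here is a closing illustration of the least-privilege point. The role names and actions are purely hypothetical; the idea is simply that every role - human or third-party integration - starts with nothing and is granted only what it needs:

```python
# Hypothetical role -> allowed-action mapping; all names are illustrative only.
PERMISSIONS = {
    "hr_admin":    {"read_employee_records", "edit_employee_records"},
    "sales_rep":   {"read_customer_contacts"},
    "api_partner": {"read_order_status"},   # a third-party system gets the bare minimum
}

def is_allowed(role: str, action: str) -> bool:
    """Default-deny: unknown roles or unlisted actions get no access."""
    return action in PERMISSIONS.get(role, set())

assert is_allowed("api_partner", "read_order_status")
assert not is_allowed("api_partner", "read_employee_records")
```

With a default-deny scheme like this, a compromised third-party connection exposes only the small slice of data its role was ever allowed to see.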
<urn:uuid:df40155e-54f6-4073-be2b-b9812b8eaa00>
CC-MAIN-2024-38
https://www.dbxuk.com/blog-2023/types-of-data-breach
2024-09-19T17:53:16Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652055.62/warc/CC-MAIN-20240919162032-20240919192032-00058.warc.gz
en
0.954199
4,122
2.859375
3
How is Omicron Affecting Global Travel?
Omicron is the latest COVID variant to terrorize the world, first identified in South Africa but steadily appearing in many countries. Initially, the variant was thought to be more dangerous and contagious than others, but early data suggests that it may be more contagious yet less lethal. Since there are many unknowns about Omicron, governments are playing it safe and re-implementing travel restrictions that in many cases were only recently lifted. In the US, Canada, the UK, and many other countries, new travel rules are already in effect, including a ban on visitors from South Africa.

The United States
On November 29, the Biden administration placed a blanket travel ban on visitors from South Africa, Lesotho, Eswatini, Botswana, Namibia, Malawi, Mozambique, and Zimbabwe. The move came only three weeks after travel restrictions had been lifted from about 30 countries. Other inbound travelers – including US citizens, permanent residents, and visitors – must show a negative COVID test taken within one day before their flight. Airlines can accept both antigen tests and NAATs (nucleic acid amplification tests), including PCR. As of now, visitors do not have to take another COVID test when they land. Due to the new travel restrictions, major US airlines have waived flight change fees for international and domestic flights. Certain airlines have also implemented their own policies, such as waiving fare differences for countries that have banned visitors, like Israel and Japan, or extending the credit for a canceled ticket until the end of 2022.

Canada
Canada has placed travel bans on the same African countries as the US, with the addition of Egypt and Nigeria. Canadian citizens and permanent residents who want to return to Canada from these countries must take a COVID test in a third country in order to do so. Canada has also reinstated COVID testing upon arrival at the country's airports for all visitors, except those from the US. All travelers ages five and up must show proof of a negative COVID test taken within three days of their departure. If the on-arrival test results are positive, travelers must quarantine for 10 days. If the results are negative, they can be released from quarantine.

The United Kingdom
In the UK, people coming into the country must show proof of a negative COVID test before they travel. They must also self-quarantine until they receive a negative PCR test on day two after landing. Anyone suspected of having Omicron must quarantine for 10 days, even those who are fully vaccinated. Many European countries have instituted travel bans and/or restrictions, but they are constantly changing. Switzerland has been the first to ease travel rules, and other EU countries are in constant discussion about the matter.

Is the South Africa Travel Ban Effective (and Fair)?
Critics of international travel bans are saying that if Omicron is already spreading, why place a ban on South African travelers? Moreover, South Africa essentially acted as a "good citizen" and alerted the world to the new variant. Now it is being punished by travel bans. Will other countries be so quick to report variants? There is no simple answer to these questions, except that governments are trying to do their best to contain a virus that seems uncontainable. As more data is collected about Omicron, countries will likely adjust their restrictions.
For now, would-be travelers need to sit tight through their disappointment and hope that Omicron is less dangerous than previously thought so international skies can open up again.
<urn:uuid:813c7848-95cb-415b-8da0-d428e4c74f5f>
CC-MAIN-2024-38
https://www.interforinternational.com/omicron-and-new-travel-guidelines/
2024-09-07T14:04:04Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650883.10/warc/CC-MAIN-20240907131200-20240907161200-00258.warc.gz
en
0.96148
731
2.640625
3
The time on network devices is very important. In a network, all of the devices' times need to be synchronized. Using the same time on all of the devices in a network is especially important for troubleshooting activities. This time synchronization is done by NTP (Network Time Protocol). Here, we will focus on what NTP is, the NTP port, roles, stratum levels and more. In other lessons, we will also learn How to Configure NTP on Cisco devices and PTP (Precision Time Protocol) and PTP Cisco Configuration.

Think about it: you are analyzing a problem and you are waiting for a certain packet at a certain time. But because of unsynchronized time, you miss the logs related to that problem, or they arrive late. To overcome such issues, NTP (Network Time Protocol) was developed. With NTP, the times on the various network devices in a network are synchronized.

NTP (Network Time Protocol) uses UDP (User Datagram Protocol) as its Transport Layer protocol. In other words, NTP uses UDP port 123. Network devices send their "timestamps" to each other to synchronize their clocks, so you can see the same synchronized clocks in all of the device logs. (A minimal client sketch appears at the end of this lesson.)

NTP time information can be obtained from any source. This source is called the NTP Server. The other devices in the network are called NTP Clients. There is also another type of device that acts as an NTP Client/Server. These roles are:
- NTP Server - provides the time information
- NTP Client - synchronizes its clock from an NTP Server
- NTP Client/Server - receives time from an upstream server and also serves time to downstream clients

Stratum Levels show the quality of the time source or NTP Server. Lower stratum values mean a better source; higher ones mean a worse one. These Stratum Levels are:
- Stratum-0 is the directly attached source level. The time is received via a dedicated transmitter or satellite with Stratum-0.
- Stratum-1 is the source level that is linked to the directly attached device (Stratum-0).
- Stratum-2 is the source level that is linked to the Stratum-1 device.
- Stratum-3 is the source level that is linked to the Stratum-2 device.
- Stratum-4 is the source level that is linked to the Stratum-3 device.
And so on…

In a network, there can be more than one NTP Server configured with different stratum values. According to these values, one of them becomes the best NTP Server: the one that has the lowest stratum level. For example, suppose we have three NTP Servers with stratum levels 3, 4 and 5. In this network, our NTP Server will be the device that is set with Stratum Level 3. If this device fails, then the second NTP Server will become the device that has Stratum Level 4.

Here, we have learned what Network Time Protocol is, Stratum Levels, Server and Client roles, NTP port 123 and more. You can continue with Cisco NTP Configuration Example.
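As the minimal client sketch promised above, here is a bare SNTP-style query in Python (the server name is just an example public pool; a production client would use a full NTP implementation rather than this bare query). It shows the UDP port 123 exchange and where the timestamp lives in the 48-byte reply:

```python
import socket
import struct

NTP_SERVER = "pool.ntp.org"    # example public server pool
NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 (NTP) and 1970-01-01 (Unix)

# First byte 0x1B = Leap Indicator 0, Version 3, Mode 3 (client);
# the rest of the 48-byte request is zero.
packet = b"\x1b" + 47 * b"\0"

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.settimeout(5)
    sock.sendto(packet, (NTP_SERVER, 123))   # NTP uses UDP port 123
    response, _ = sock.recvfrom(48)

# The Transmit Timestamp's seconds field sits at bytes 40-43 of the reply.
ntp_seconds = struct.unpack("!I", response[40:44])[0]
unix_seconds = ntp_seconds - NTP_EPOCH_OFFSET
print("Server time (Unix epoch seconds):", unix_seconds)
```

And the best-server selection described in this lesson - lowest stratum wins - can be illustrated in one line (addresses and stratum values are made up):

```python
servers = {"10.1.1.1": 3, "10.1.1.2": 4, "10.1.1.3": 5}
best = min(servers, key=servers.get)   # "10.1.1.1" (stratum 3)
```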
<urn:uuid:e66f8dda-18fa-4da0-b9ac-bd5168f7763c>
CC-MAIN-2024-38
https://ipcisco.com/lesson/ntp-network-time-protocol/
2024-09-08T19:29:33Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651017.43/warc/CC-MAIN-20240908181815-20240908211815-00158.warc.gz
en
0.931135
659
3.078125
3
Self-driving or autonomous cars are coming. Whether the buying public likes it or not, they're coming. The advantages are as clear today as they were when the automobile was poised to replace the horse. It's just a matter of when…not if. In the early days of the automobile, it was enough that you were driving a car on a paved road, at a speed greater than that of a horse. Then, as more and more cars were driving on more and more paved roads and cars were able to reach greater speeds, the buying public started to be concerned about safety. When the Nash Motor Company introduced the seat belt in 1949, the other manufacturers accused Nash of 'requiring' seat belts because their cars were unsafe. It wasn't until the 1970s that phrases like seat belts, air bags, safety glass and crumple zones became part of the automotive vernacular. Seat belts were included in all US-sold cars by 1968, but it wasn't until 1986 that states like California legally mandated the use of seat belts. With all of these safety enhancements, the US fatality rate has dropped from 7.13 deaths per 100 million miles traveled in 1949 to 1.16 deaths per 100 million miles traveled in 2018. That's quite an improvement, but the number of overall deaths across the US has increased during the same period by over 6,000 annually (see this entry for figures). If you've ever wondered which car is the safest, you won't find a single answer. The Insurance Institute for Highway Safety (IIHS) measures safety by vehicle type, from compact car to SUV and everything in between. In the new and upcoming world of autonomous vehicles, a new category will be necessary for evaluation: the security of the car. 'Security' and 'safety' in this new world will be very different. All vehicle safety features are based on the survivability of an accident. All bets are off if you decide to drive your own car off a cliff or into a tree. With an autonomous vehicle, however, it's not inconceivable that someone or some group would want to take control of a car or a group of cars in order to cause harm. Automakers won't be able to subdivide security ratings by vehicle type like they've subdivided safety ratings. No one in their right mind would purchase an autonomous car with a less-than-perfect security rating, because all of the safety ratings are based on "accidents", not intentional harm. Because of this fact, automakers have no choice but to provide their cars with the same level of security as you might find in a military fighter jet. The technology does exist today to better secure the manufacturing, privacy and system-updating environment in autonomous cars, but it's expensive to design, expensive to build, expensive to maintain, and it won't last forever. The next problem will be that an automaker won't want to maintain liability for an autonomous vehicle made 10 years earlier, let alone 50, even if there's a profit associated with that maintenance. In order for these vehicles to remain secure over time, the security parameters must also change over time. Planned obsolescence will creep into the automobile business, potentially forcing the US government to mandate a limited life span for autonomous vehicles in order to guarantee their security and minimize liability. It's conceivable that in the future, automakers will not sell their autonomous cars but will rather provide them with a closed-ended lease. This limited life will guarantee that the security systems keep pace with the fast-paced hacker crowd.
Another option might be that in the future, automakers or some other entity will provide vehicles on a subscription basis. Similar to the Ubers and Lyfts of today, you will summon a car to take you to a destination, but without a driver. This option has a number of very interesting scenarios. With this option, all of the maintenance (including keeping up the security systems) would be the responsibility of the entity that owns the car, and not the "rider". Additionally, this would mean that a vehicle would end up being driven more than the average 10K miles per year and would therefore not last 20 to 30 years before being decommissioned and recycled. The new world of autonomous cars will be more disruptive than the advent of the first car 133 years ago. Our whole view of road transportation will need to be reevaluated. Once Vehicle-to-Vehicle (V2V) technologies start allowing cars to avoid each other, the theory is that as more cars communicate with each other, the number of accidents will decline. By that logic, the number of overall deaths should also decline. Additionally, if cars can communicate with each other, a line of cars on a freeway will be able to travel at a higher speed with less distance between them. This would mean that traffic jams could be eliminated and fuel economy improved by cars drafting each other. This would allow for more cars on current roads, which will reduce the expense of widening freeways. If these theories are borne out, there is little doubt the US government will start removing non-autonomous/non-communicative vehicles from the road. Visit Entrust's dedicated landing page to learn more about connected vehicle security – and stay tuned for part II of my blog, where I delve further into the security question.
<urn:uuid:13532f9c-3271-41c9-bc08-37f15a3074a5>
CC-MAIN-2024-38
https://www.entrust.com/blog/2019/11/for-autonomous-vehicles-theres-a-difference-between-security-and-safety-part-i
2024-09-13T16:07:43Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651523.40/warc/CC-MAIN-20240913133933-20240913163933-00658.warc.gz
en
0.966924
1,108
2.890625
3