Many people want to take eco-conscious actions that still fit into their everyday routines, helping to ensure an eco-friendlier future. None of this would be possible, however, without major accomplishments in the field of eco-centric innovation. To give you a better sense of the technologies to come, here are the top innovations necessary for an eco-friendly future.
3D printing is quickly becoming one of the most notable technologies in eco-friendly construction. It is especially promising for an eco-friendly future because it minimizes waste in the production of goods, and it can create products that are biodegradable and heat resistant. 3D printers cut production waste while providing valuable, customizable tools for any job.
Electric vehicles are certainly a reality right now, but the innovation behind them is only just beginning. Even now, these cars are consistently being upgraded for longer battery life, better mileage, and greater energy efficiency. Electric vehicles may yet become the norm rather than a novelty you spot on the road every now and then. Keep in mind, though, that these cars are only as eco-friendly as the power that charges them, so that power must be ecologically sourced.
One of the newest, strangest, and possibly most impactful innovations that has recently been pushed into the public eye is the concept of generating electric energy from liquid waste. This technology utilizes microbial fuel cells to feed on the organic material found in urine, producing electrons and generating electricity. With this system converting urine into electricity, you may be able to generate a portion of your electricity by essentially doing nothing.
Buildings That Breathe
One of the most notable advancements currently underway is the idea of buildings that process carbon dioxide through algae. These algae, which are attached to the buildings, process the air around them, capturing harmful pollutants. The algae are then harvested to produce fertilizers, plastics, and other useful products.
This technology not only cuts down on pollution but also provides a valuable source of nutrients for the soil at a later time. We hope this article has opened your eyes to the top innovations for an eco-friendly future. If you’re concerned with helping the planet stay green, be sure to take steps to ensure the environment stays ecologically sound for years to come!
5G as a wireless power source?
5G could provide more than just fast connection speeds – it might soon deliver power to your devices too. But is the promise of wireless power transfer too good to be true?
Researchers at the Georgia Tech Institute for Electronics and Nanotechnology have developed an antenna system for 5G energy harvesting that could eliminate the need for batteries in IoT devices.
In their paper, “5G as a wireless power grid”, published in the January 2021 edition of Scientific Reports, researchers Aline Eid, Jimmy Hester and Manos Tentzeris outline their revolutionary new design for a playing card-sized antenna that can harvest spare electromagnetic energy from 5G signals and convert it to power – in effect, turning 5G networks into wireless power grids.
Wireless power transfer has been pursued by scientists and researchers for decades, most notably by Nikola Tesla at the turn of the last century, whose ill-fated Wardenclyffe Tower failed to transfer an electrical charge from Long Island, NY to Niagara Falls, despite the considerable funding of his benefactor, J. P. Morgan.
Hester, who is also co-founder of Atheraxon, a developer of 5G RFID tech spun out of Georgia Tech, said: “With the advent of 5G networks, this could actually work and we’ve demonstrated it. That’s extremely exciting — we could get rid of batteries.”
Recently, RCA's Airnergy has claimed to be able to charge devices using ambient wi-fi, while GuRU Wireless hopes to power robot vacuum cleaners with millimetre wave (mmWave) energy.
mmWave energy harvesting in the 28GHz band has been possible for some time, albeit impractical over long ranges, as power harvesting tends to require large rectifying antennas (or rectennas) that must be pointed directly at the transmission source – perfect for a housebound Roomba, but impractical for use in mobile devices.
The Georgia Tech team solved this by 3D-printing a 2”x4” Rotman lens, a beam-forming network commonly used in radar, to see targets in multiple directions without having to physically move the antenna. Like a prism in reverse, the Rotman lens diffracts six antenna beams into one, permitting much wider coverage.
According to the Friis Transmission Equation, the loss between the transmitter and receiver is greater at high frequencies, such as those that 5G signals transmit at; although more spectrum is available at this frequency, the power loss is too great at long ranges, so only point-to-point communication is possible.
Even with the Rotman lens, the device only managed to harvest around 6 microwatts (μW) of energy at a range of 180m from the 5G base station; this falls far short of the 5W (5,000,000μW) it takes on average to charge a smartphone, but is enough to power small IoT sensors.
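To see why the harvested power is so small, it helps to plug the article's numbers into the Friis free-space path loss formula. The Python sketch below is illustrative only; the 180 m distance and 28 GHz band come from the article, while the 2.4 GHz comparison is an added assumption for context:

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB, from the Friis transmission equation."""
    c = 299_792_458  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Loss for a 28 GHz mmWave signal at the 180 m range from the article
loss_28ghz = fspl_db(180, 28e9)

# Same distance at 2.4 GHz (Wi-Fi band) for comparison
loss_24ghz = fspl_db(180, 2.4e9)

print(f"28 GHz loss at 180 m:  {loss_28ghz:.1f} dB")
print(f"2.4 GHz loss at 180 m: {loss_24ghz:.1f} dB")
```

At 28 GHz the signal loses roughly 106.5 dB over 180 m — about 21 dB more than a 2.4 GHz signal over the same distance — which is why only microwatts are left to harvest at the receiver.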
In short, this new development is still somewhat inefficient, since the power drops off quickly with distance. But with some refinement, the device could power sensors such as an ultra-low-power sensor proposed in 2018, which requires 26-63μW of power.
They could charge up a small battery, capacitors, or temperature sensors in hard-to-reach locations where replacing batteries would be a tricky task. And what these new devices lack in power, they make up for in versatility, both in size and in applications in flexible structures, such as wearables. The paper itself concludes by saying the availability of wireless power may charge “5G-powered nodes for the IoT and… long-range passive mm-wave RFIDs.”
Though this case hardly represents the next ambitious stage of powering consumer devices, it does demonstrate the possibilities for new, emerging tech greatly enhanced by 5G. Wireless power transfer could become just one of many sources of income for CSPs, much in the same way that data revenues augmented those of voice.
Wireless power has far to go before providing a reliable alternative to the grid; however, this emerging technology has countless applications in sensor tech for smart agriculture and smart cities, providing the foundations for a growing IoT sensor network.
Just don’t go throwing your phone batteries out yet!
AI Case Study
Autonomous Healthcare detects different types of ventilator asynchrony in ICU patients with machine learning
Autonomous Healthcare has developed technology that can manage a patient's ventilation in the ICU. Using machine learning, it detects mismatches between the patient's own inhalation and exhalation patterns and those of the mechanical ventilator. The system was trained on waveform data from patients on ventilators and learned the signatures of different asynchrony types. In its first assessment on data, it achieved the same accuracy as human doctors, and it is now being tested with real patients.
Healthcare Providers And Services
Autonomous Healthcare, based in Hoboken, N.J., is "designing and building some of the first AI systems for the ICU. These technologies are intended to provide vigilant and nuanced care, as if an expert were at the patient’s bedside every second, carefully calibrating treatment. Such systems could relieve the burden on the overtaxed staff in critical-care units. What’s more, if the technology helps patients get out of the ICU sooner, it could bring down the skyrocketing costs of health care. We’re focusing initially on hospitals in the United States, but our technology could be useful all around the world as populations age and the prevalence of chronic diseases grows. Our methodologies spring from an unlikely source: the aerospace industry."
When it comes to mechanical ventilators, "mismatches between the patient’s demand and the machine’s delivery are all too common, which can cause a patient to 'fight the ventilator'.
The first step in addressing this problem is to detect it. Experienced respiratory therapists can identify different types of asynchrony if they continuously monitor the waveforms on a ventilator’s display screen indicating the pressure and flow. But in an ICU, one respiratory therapist typically oversees 10 or more patients and can’t possibly monitor all of them constantly.
We've designed a machine-learning framework that replicates that human expertise in detecting different types of asynchrony. To train our system, we used a data set of waveforms from patients on ventilators, in which each waveform had been evaluated by a panel of clinical experts. Our algorithm learned the signatures of different asynchrony types—such as a particular dip in the flow signal at a specific point in time. In our first assessments of the algorithm’s performance, we focused on what’s called cycling asynchrony, which is the most challenging type to detect. Here the ventilator’s initiation of the exhale doesn’t match the patient’s own exhalation. The accuracy of our algorithm in detecting cycling asynchrony in a new data set matched that of human experts.
We’re now testing the algorithm at Northeast Georgia Medical Center’s ICU to detect respiratory asynchrony in real patients and in real time. The technology has been incorporated into a clinical-decision support system, which is designed to help respiratory therapists assess a patient’s needs. This framework can also provide researchers with a tool to better understand the underlying causes of asynchrony and its impact on patients. Our long-term goal is to design mechanical ventilators that can automatically adjust their own settings in response to each patient’s needs."
In its first assessment on data, the system's accuracy matched that of human experts in detecting cycling asynchrony. The system is now being tested with real patients to detect respiratory asynchrony in real time, at Northeast Georgia Medical Center’s ICU.
R And D
Core Research And Development
"In a hospital’s intensive care unit (ICU), the sickest patients receive round-the-clock care as they lie in beds with their bodies connected to a bevy of surrounding machines. This advanced medical equipment is designed to keep an ailing person alive. Intravenous fluids drip into the bloodstream, while mechanical ventilators push air into the lungs. Sensors attached to the body track heart rate, blood pressure, and other vital signs, while bedside monitors graph the data in undulating lines. When the machines record measurements that are outside of normal parameters, beeps and alarms ring out to alert the medical staff to potential problems.
While this scene is laden with high tech, the technology isn’t being used to best advantage. Each machine is monitoring a discrete part of the body, but the machines aren’t working in concert. The rich streams of data aren’t being captured or analyzed. And it’s impossible for the ICU team—critical-care physicians, nurses, respiratory therapists, pharmacists, and other specialists—to keep watch at every patient’s bedside.
In the United States, ICUs are among the most expensive components of the health care system. About 55,000 patients are cared for in an ICU every day, with the typical daily cost ranging from US $3,000 to $10,000. The cumulative cost is more than $80 billion per year.
Today, more than half of ICU patients in the United States are over the age of 65—a demographic group that’s expected to grow from 46 million in 2014 to 74 million by 2030. Similar trends in Europe and Asia make this a worldwide problem. To meet the growing demand for acute clinical care, ICUs will need to increase their capacity as well as their capabilities.
In ICUs today, the data from the raft of bedside monitors is usually lost as the monitor screens refresh every few seconds. While some advanced ICUs are now trying to archive these measurements, they still struggle to mine the data for clinical insights."
"Data set of waveforms from patients on ventilators, in which each waveform had been evaluated by a panel of clinical experts. Our algorithm learned the signatures of different asynchrony types—such as a particular dip in the flow signal at a specific point in time."
White Paper: CloudOne
The Internet of Things (IoT) is the future of the internet, powering billions of integrated devices and processes across industries and global locations. Typically, the Internet of Things is expected to offer advanced connectivity of devices, systems, and services that goes beyond machine-to-machine communications (M2M) and covers a variety of protocols, domains, and applications.
Download this whitepaper on “IoT-The Interconnection of Everything” and learn about significant challenges of IoT and how to prepare for the interconnecting of everything. It covers:
- Dimensions of the Internet of Things: components, building blocks, and system of systems
- Business challenges with the Internet of Things: complex challenges that affect Internet-of-Things adoption and growth
- Cross-industry concepts for Internet-of-Things technologies and their components
- Internet of Things world: connect and manage devices anywhere and simplify the last mile in the world of the Internet of Things
By: Doyle Research
Network intelligence based in the cloud enables IT organizations to quickly, easily, and securely adapt their network to the new cloud-oriented traffic flows. SD-WAN technology delivers the network intelligence required to support an increasingly remote workforce with cloud-based applications and data. Read this whitepaper, “Why the Cloud is the Network”, to learn how to leverage the cloud as a network and why organizations accessing cloud-based applications should consider adopting SD-WAN solutions. It discusses:
- How are enterprises leveraging network intelligence in cloud-based applications to produce flexible networks which adapt, improve, and secure access and performance of traditional cloud services?
- How can an intelligent cloud network impact user adoption, satisfaction, and productivity?
- How can network intelligence based in the cloud remediate network challenges to reduce jitter and packet loss, and improve redundancy?
- How can network intelligence in cloud-based applications minimize disadvantages like unpredictable reliability, poor latency, and limited security?
For CIOs and CEOs, cloud computing security is still a hot topic for discussion. The debate continues as to whether a public, private or hybrid cloud approach is best. While there are complex factors that can inform a decision between public or private cloud, security is the biggest. Choosing between public and private cloud services can be challenging, so it is important to analyze the differences between applications virtualized in a private cloud and those in a public cloud. This white paper compares public and private clouds in terms of the basic elements, features and benefits that both deliver. This white paper highlights:
- How do public and private cloud deployment models differ?
- Cloud computing security — hosted or local applications?
- The trends in public, private, and hybrid cloud: which is right for you?
- An overview of security with private cloud computing
- Implementing private cloud computing with minimal effort and a seamless end-user experience
Picture this: You're building a fortress in the sky. Sounds impossible, right? Well, welcome to CCSP Domain 3, where we do just that—but with clouds and data. We're not talking about fluffy white castles, but robust digital fortresses that keep our information safe in the vast expanse of cyberspace.
Securing cloud platforms isn't just about firewalls and encryption. It's about architecting resilient systems that can withstand the storms of cyber and physical threats. From designing secure data centers to implementing chaos engineering, we're diving deep into the bedrock of cloud security architectures. These strategies form the foundation of a robust cloud infrastructure, ensuring data integrity and service continuity in an ever-evolving threat landscape.
Let's explore the critical aspects of the Domain 3 of the CCSP exam and enhance your cloud security expertise.
3.1 Comprehend cloud infrastructure and platform components
There are two layers to cloud infrastructure:
- The physical resources – This is the hardware that the cloud is built on top of. It includes the servers for compute, the storage clusters and the networking infrastructure.
- The virtualized infrastructure – Cloud providers pool together these physical resources through virtualization. Cloud customers then access these virtualized resources.
In a general sense, physical environments include the actual data centers, server rooms or other locations that host infrastructure. If a company runs its own private cloud, it acts as the cloud provider and the physical environment would be wherever the hardware is located.
Compute nodes are one of the most important components. A compute node is essentially what provides the resources, which can include the processing, memory, network and storage that a virtual machine (VM) instance needs. However, in practice, storage is often provided by storage clusters.
Security of the physical environment
Now that we have described some of the major components that make up the physical environment of a cloud data center, it’s time to look at some of the ways we secure these environments. In order to maintain a robust security posture, we must follow a layered defense approach, which is also known as defense in depth.
In essence, we want to have multiple layers of security so that attackers can’t completely compromise an organization just by breaching one of our security controls, as shown in the image below.
| Property | Description |
|---|---|
| Confidentiality | Keeping our data confidential basically means keeping it a secret from everyone except for those who we want to access it. |
| Integrity | If data maintains its integrity, it means that it hasn’t become corrupted, tampered with, or altered in an unauthorized manner. |
| Availability | Available data is readily accessible to authorized parties when they need it. |
The CIA triad is a fairly renowned model, but confidentiality, integrity and availability aren’t the only properties that we may want for our data. Two other important properties are authenticity and non-repudiation.
| Property | Description |
|---|---|
| Authenticity | Authenticity basically means that a person or system is who it says it is, and not some impostor. When data is authentic, it means that we have verified that it was actually created, sent, or otherwise processed by the entity who claims responsibility for the action. |
| Non-repudiation | Non-repudiation essentially means that someone can’t perform an action, then plausibly claim that it wasn’t actually them who did it. |
There are a number of different data security roles that you need to be familiar with.
| Role | Description |
|---|---|
| Data owner / data controller | The individual within an organization who is accountable for protecting its data, holds the legal rights and defines policies. In the cloud model, the data owner will typically work at the cloud customer organization. |
| Data processor | An entity or individual responsible for processing data. It’s typically the cloud provider, and they process the data on behalf of the data owner. |
| Data custodian | Data custodians have a technical responsibility over data. This means that they are responsible for administering aspects like data security, availability, capacity, continuity, backup and restore, etc. |
| Data steward | Data stewards are responsible for the governance, quality and compliance of data. Their role involves ensuring that data is in the right form, has suitable metadata, and can be used appropriately for business purposes. |
| Data subject | The individual to whom personal data relates. |
Cloud data life cycle phases
The CCSP exam covers the Cloud Security Alliance’s data security life cycle, which was originally developed by Rich Mogull. This model is tailored toward cloud security. There are six phases in the data life cycle: create, store, use, share, archive and destroy.
Some important physical security considerations:
| Control | Description |
|---|---|
| Guards | Guards can help to administer entry points, patrol the location, and act as deterrents. |
| CCTV | Closed-circuit television (CCTV) cameras are primarily for detecting potentially malicious actions, but they also act as deterrents. |
| Motion detectors | There are a range of different sensors that can be deployed to detect activity in sensitive areas. |
| Lighting | Lights can act as safety precautions, deterrents, and give CCTV cameras a better view. |
| Fences | Fences are a great tool for keeping both people and vehicles away from the premises. Eight feet is a common fence height for deterrence. |
| Doors | Doors should be constructed securely to limit an attacker’s ability to breach them. |
| Locks | Locks are critical for restricting access to doors, windows, filing cabinets, etc. There are many types of lock, including key, combination and electronic badge locks. |
| Mantraps | Mantraps are small spaces in between two doors, where only one door can be opened at a time. |
| Turnstiles | Turnstiles prevent people from tailgating or piggybacking behind an authorized person. Tailgating and piggybacking involve following an authorized person through a door into a restricted area and thus gaining unauthorized access. The difference is that in tailgating the attacker possesses a fake badge, while in piggybacking the attacker doesn’t have any badge at all. |
| Bollards | Bollards prevent vehicles from entering an area. |
Networking and communications
Clouds typically have two or possibly three dedicated networks that are physically isolated from one another, for both security and operational purposes.
| Network | Purpose |
|---|---|
| Service | The service network is the customer-facing network–it’s what the cloud customers have access to. |
| Storage | The storage network connects virtual machines to storage clusters. |
| Management | Cloud providers use the management network to control the cloud. Providers use this network to do things like log into hypervisors to make changes or to access the compute node. |
There are two major networking models, non-converged and converged networks. In a non-converged network, the management, storage and service networks are separate. The service network generally connects to the local area network across Ethernet switches, while the storage network generally connects to the storage clusters via a protocol like Fibre Channel.
In contrast, a converged network combines the storage and service networks, with storage traffic and service traffic traveling over the same network. However, the management network remains separate for security reasons.
Zero trust architecture
Zero trust architectures involve continually evaluating the trust given to an entity. They contrast with earlier models that assumed that once an entity was on the internal network it should be automatically trusted. We all know that attackers can make their way into our network perimeters, so giving anyone free rein once they are inside the network is a recipe for disaster.
A simplified summary of the zero trust approach involves:
- Not implicitly trusting entities on the internal network.
- Shrinking implicit trust zones and enforcing access controls on the most granular level possible. Micro-segmentation is useful for dividing enterprise networks into smaller trust zones composed of resources with similar security requirements.
- Granting access on a per-session basis. Access can be granted or denied based upon an entity’s identity, user location, and other data.
- Restricting resource access according to the principle of least privilege.
- Re-authentication and re-authorization when necessary.
- Extensive monitoring of entities and assets.
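The per-session, least-privilege decision described above can be sketched in a few lines of Python. The policy table, resource names and attributes here are hypothetical; a real zero trust deployment would evaluate many more signals (device posture, risk scores, continuous re-authentication):

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str
    authenticated: bool
    location: str
    resource: str
    requested_privilege: str

# Hypothetical policy table: allowed privileges and locations per resource.
POLICY = {
    "hr-database": {"privileges": {"read"}, "locations": {"office", "vpn"}},
    "build-server": {"privileges": {"read", "write"}, "locations": {"office"}},
}

def evaluate(request: AccessRequest) -> bool:
    """Per-session decision: nothing is implicitly trusted, and every
    request is checked against identity, location, and least privilege."""
    policy = POLICY.get(request.resource)
    if policy is None or not request.authenticated:
        return False  # deny by default
    if request.location not in policy["locations"]:
        return False
    return request.requested_privilege in policy["privileges"]

print(evaluate(AccessRequest("alice", True, "vpn", "hr-database", "read")))   # grant, this session only
print(evaluate(AccessRequest("alice", True, "vpn", "hr-database", "write")))  # deny: exceeds least privilege
```

The key design point is the default-deny posture: access is only granted when an explicit policy matches the request, never because the requester is "inside" the network.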
Virtual local area networks (VLANs)
A core aspect of cloud computing involves abstracting resources away from the physical hardware via virtualization in order to use and share the resources more efficiently. Networking resources are also abstracted away in this manner.
One way of doing this is through virtual local area networks (VLANs). You can take a physical network and logically segment it into many VLANs. Let’s say that an organization wants to operate two isolated networks. The first is for the company’s general use, while the second is a network for the security department.
The organization could do this by purchasing two separate switches. It could set up the general use network on the first switch, and the security department’s network on the second switch. As long as the two switches aren’t linked up, then the organization would have two physically isolated networks.
Another option would be for the organization to have two logically isolated VLANs on the same physical switch. The diagram below shows a 16-port switch. Four computers are plugged into the switch, the first two for general use, and the second two for the security department. If the switch were just set up by default, all four of these computers would be able to talk to each other, which is not what the company wants—they want the first two to be separate from the second two.
Instead, the image above shows how the first two computers for general use have been grouped into a VLAN—VLAN1—while the second two computers for the security department are grouped separately as VLAN2. This would mean that the first two computers could talk to each other, but not talk to the last two computers. Similarly, the last two computers can communicate with one another, but they cannot talk to the first two general-use computers.
Having two separate VLANs means that the general use network and the security department network are logically isolated and cannot access each other but they are still on the same physical switch.
This same concept can be extended beyond a single physical switch. The image below shows a second 16-port switch that has been connected to the first one. This second switch has an additional four computers connected to it, two more for general use, and an extra two for the security department.
Even though these computers are connected to a separate switch, they have still been set up as part of the preexisting VLANs. This means that the four general use computers can only communicate among themselves in VLAN1. Likewise, the security department computers can only communicate among themselves in VLAN2.
VLANs are commonly used by enterprises to logically separate networks. One example involves providing an isolated guest network to customers. This helps to protect the main network against attackers who are trying to gain a foothold by logging in to the open Wi-Fi. Another use of VLANs is to form trust zones for zero trust architecture.
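The switch scenario above can be modeled in a few lines. The port names and VLAN IDs are illustrative; real switches implement this isolation in hardware using 802.1Q frame tagging:

```python
# Toy model of the example above: ports are grouped into VLANs, and
# frames are only forwarded between ports in the same VLAN.
vlan_membership = {
    "port1": 1, "port2": 1,  # general-use computers (VLAN1)
    "port3": 2, "port4": 2,  # security department (VLAN2)
}

def can_communicate(port_a: str, port_b: str) -> bool:
    """Two ports can talk only if they belong to the same VLAN."""
    return vlan_membership[port_a] == vlan_membership[port_b]

print(can_communicate("port1", "port2"))  # True: both in VLAN1
print(can_communicate("port1", "port3"))  # False: VLAN1 vs VLAN2
```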
Software-defined networks (SDNs)
Software-defined networks (SDNs) allow a more thorough layer of abstraction over the physical networking components. These days, SDNs are used for virtualizing networks in most cloud services.
Key benefits of SDNs:
- They can create virtual, software-controlled networks on top of physical networks. Each of these virtual networks has no visibility into the other virtual networks.
- They can decouple the control plane from the data plane.
- They can provide more flexibility and make it easier to rapidly reconfigure a network for multiple clients. On a network that’s completely virtualized, you can make configuration changes just through software commands.
- They are critical building blocks that enable resource pooling for cloud services. SDNs create a layer of abstraction on top of physical networks, and you can create virtual networks on top of this layer.
- They centralize network intelligence into one place.
- They allow programmatic network configuration. You can entirely reconfigure the network through API calls.
- They allow multiple virtual networks to use overlapping IP ranges on the same hardware. Despite this, the networks are still logically isolated.
Before we can fully explain SDNs, we need to back up a little. Network devices like switches and routers have two major components, the control plane and the data plane. The control plane is the part of the architecture that is responsible for defining what to do with incoming packets and where they should be sent. The data plane does the work of processing data requests. The control plane is essentially the intelligence of the network device and it does the thinking, while the data plane is basically just the worker drone.
In traditional networks, control planes and data planes are built-in to both routers and switches. In the case of a switch, the control plane decides that an incoming packet is destined to MAC address XYZ, and the data plane makes it happen. In a traditional network, if you want to make configuration changes to switches or routers, you have to log in to each device individually, which can be time consuming.
One of the major differences in software-defined networks (SDNs) is that the control plane is separated from the data plane and then centralized into one system, as shown in the image above. A big benefit of this is that you don’t have to log in to individual devices to make changes on your network. Instead, you can just log in to the central control plane and make the adjustments there. This makes management and configuration far easier. Another advantage is that if a switch fails, you can just route the traffic around it. In the cloud, the centralized control plane of an SDN is in turn controlled by the management plane.
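As a toy illustration of this separation, the sketch below models a centralized controller pushing a forwarding rule to every registered switch through a single programmatic call. The class and rule names are invented; real SDN controllers (for example, those speaking OpenFlow) are far more involved:

```python
class Switch:
    """Data plane only: forwards traffic according to pushed rules."""
    def __init__(self, name: str):
        self.name = name
        self.flow_table = {}

    def install_rule(self, match: str, action: str):
        self.flow_table[match] = action

class SdnController:
    """Centralized control plane: one place to reconfigure every device."""
    def __init__(self):
        self.switches = []

    def register(self, switch: Switch):
        self.switches.append(switch)

    def push_rule(self, match: str, action: str):
        # One call reconfigures the whole network -- no per-device logins.
        for switch in self.switches:
            switch.install_rule(match, action)

controller = SdnController()
for name in ("sw1", "sw2", "sw3"):
    controller.register(Switch(name))

controller.push_rule(match="dst=10.0.0.5", action="forward:port7")
print(all("dst=10.0.0.5" in s.flow_table for s in controller.switches))  # True
```

Contrast this with a traditional network, where the same change would mean logging in to sw1, sw2 and sw3 individually.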
The security advantages of software-defined networks (SDNs)
Most of the benefits of SDNs center around the fact that virtualized networks are easy and cheap to both deploy and reconfigure. SDNs allow you to easily segment your network to form numerous virtual networks. This approach, known as microsegmentation, allows you to isolate networks in a way that would be cost-prohibitive with physical hardware.
Let’s give you a more concrete example to demonstrate just how advantageous microsegmentation can be. First, let’s say your organization has a traditional network, as shown in the diagram below. You would have the insecure Internet, a physical firewall, and then the DMZ, where you would have things like your web server, your FTP server and your mail server. Under this setup, your firewall rules would need to be fairly loose to allow the web traffic, the FTP traffic and the SMTP traffic through to each of your servers. The downside of this configuration is that if the web server was compromised by an attacker, this would give them a foothold in your network that they could use to access your FTP server or your mail server. This is because all of these servers are on the same network segment.
In contrast to this traditional network configuration, SDNs allow you to deploy virtual firewalls easily and at low cost. You can easily put virtual firewalls in front of each server, creating three separate DMZs, as shown in the figure below. You could have much tighter rules on the firewalls for each of these network segments because the firewall in front of your web server would only need to let through web traffic, the firewall in front of your FTP server would only need to let through FTP traffic, etc.
The benefit of having these virtualized segments with their own firewalls is that the much stricter rules limit the opportunities for malicious traffic to get through. In addition, if an attacker does manage to get a foothold on one of your servers, such as your web server, they would not be able to move laterally as easily. They would still need to get through the other firewalls if they wanted to reach your FTP or mail servers.
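The per-segment policy described above can be sketched as data plus a lookup. Everything below (the segment names, port sets, and rule shape) is an illustrative assumption, not any particular firewall's API:

```python
# Sketch: per-DMZ firewall policies for microsegmentation.
# Each segment's firewall admits only the traffic its one server needs.
ALLOWED_PORTS = {
    "dmz-web":  {80, 443},   # web server segment: HTTP/HTTPS only
    "dmz-ftp":  {21},        # FTP server segment: FTP control only
    "dmz-mail": {25},        # mail server segment: SMTP only
}

def is_allowed(segment: str, dest_port: int) -> bool:
    """Return True if inbound traffic to dest_port is permitted in segment."""
    return dest_port in ALLOWED_PORTS.get(segment, set())

# Lateral movement from a compromised web server toward the FTP segment
# over a web port is blocked by the FTP segment's own firewall:
assert is_allowed("dmz-web", 443) is True
assert is_allowed("dmz-ftp", 443) is False
```

The tighter each segment's allow-list, the smaller the attack surface an intruder can pivot through.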
The security challenges of cloud networking
Cloud networking has a number of benefits that are essential to the functioning of the modern cloud environment. However, there’s no free lunch, and SDNs also come with a range of disadvantages, many of which are related to the fact that the cloud customer has no control of the underlying physical infrastructure. Since physical appliances can’t be installed by the customer, customers must use virtual appliances instead, which have some limitations.
Virtual appliances are pre-configured software solutions made up of at least one virtual machine. Virtual appliances are more scalable and compatible than hardware appliances, and they can be packaged, updated and maintained as a single unit.
Virtual appliances can form bottlenecks on the network, requiring significant resources and expense to deliver appropriate levels of performance. They can also cause scaling issues if the cloud provider doesn’t offer compatible autoscaling. Another complication is that autoscaling in the cloud often results in the creation of many instances that may only last for short periods. This means that different assets can use the same IP addresses. Security tools must adapt to this highly dynamic environment by doing things like identifying assets by unique and static ID numbers, rather than IP addresses that may be constantly changing.
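As a sketch of why stable identifiers matter, the toy inventory below keys assets by a hypothetical instance ID rather than by IP address; the IDs and addresses are made up:

```python
# Sketch: tracking cloud assets by a stable instance ID, not by IP.
inventory = {}  # instance_id -> latest observed IP address

def observe(instance_id: str, ip: str) -> None:
    inventory[instance_id] = ip

observe("i-0abc", "10.0.0.5")
observe("i-0abc", "10.0.0.9")   # autoscaling recycled the old address
observe("i-0def", "10.0.0.5")   # a different asset now holds the old IP

# Keying security findings by IP would now conflate two distinct assets;
# keying by instance ID keeps their histories separate.
assert inventory["i-0abc"] == "10.0.0.9"
assert inventory["i-0def"] == "10.0.0.5"
```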
Another complication comes from the way that traffic moves across virtual networks. On a physical network, you can monitor the traffic between two physical servers. However, when two virtual machines are running on top of the same physical compute node, they can send traffic to one another without it having to travel via the physical network, as shown in the diagram below. This means that any tools monitoring the physical network won’t be able to see this communication.
One option for monitoring the traffic between two VMs on the same hardware is to deploy a virtual network monitoring appliance on the hypervisor. Another is to route the traffic between the two VMs through a virtual appliance over the virtual network. However, these approaches create bottlenecks.
In the cloud, compute is derived from the physical compute nodes which are made up of CPUs, RAM and network interface cards (NICs). A bunch of these are stored in racks at a provider’s data center, and interconnected to the management network, the service network, and the storage network. These compute resources are then abstracted away through virtualization and provisioned to customers.
Securing compute nodes
Cloud providers control and are responsible for the compute nodes and the underlying infrastructure. They are responsible for patching and correctly configuring the hypervisor, as well as all of the technology beneath it. Cloud providers must strictly enforce logical isolation so that customers are not visible to one another. They also need to secure the processes surrounding the storage of a VM image through to running the VM. Adequate security and integrity protections help to ensure that tenants cannot access another customer’s VM image, even though they share the same underlying hardware. Another critical cloud provider responsibility is to ensure that volatile memory is secure.
Virtualization involves adding a layer of abstraction on top of the physical hardware. It’s one of the most important technologies enabling cloud computing. The most common example is a virtual machine, which runs on top of a host computer. The real, physical resources belong to the host computer, but the virtual machine acts like an actual computer. Its operating system is essentially tricked by software running on the host computer, and it behaves the same way it would if it were running on its own physical hardware.
But virtualization is used beyond just compute. We also rely on it to abstract away storage and networking resources (such as the VLANs and SDNs we discussed earlier) from the underlying physical components.
Virtual machines (VMs)
To simplify: on a normal computer, the operating system runs directly on the hardware. In contrast, a virtual machine runs at a higher layer of abstraction. It runs on top of a hypervisor, which in turn runs on top of the physical hardware. The virtual machine is known as the guest or an instance, while the computer it runs on is the host. The diagram below shows multiple virtual machines running on the same compute node. Each virtual machine includes its own operating system, as well as any apps running on top of it.
One huge benefit of virtualization is that it frees up virtual environments from the underlying physical resources. You can also run multiple virtual machines simultaneously on the same underlying hardware. In the cloud context, this is incredibly useful because it allows providers to utilize their resources more efficiently.
Hypervisors are pieces of software that make virtualization possible. There are two types of hypervisors, as shown in the image above and the table below.
Type 1 hypervisor | Runs directly on the physical hardware (bare metal), with no host operating system beneath it. |
Type 2 hypervisor | Runs as an application on top of a host operating system. |
Because hypervisors sit between the hardware (or the host OS, in the case of a type 2 hypervisor) and the virtual machines, they have total visibility into every virtual machine that runs on top of them. They can see every command processed by the CPU, observe the data stored in RAM, and look at all data sent by the virtual machine over the network.
An attacker that compromises a hypervisor may be able to access and control all of the VMs running on top of it, as well as their data. One threat is known as a VM escape, where a malicious tenant (or a tenant whose VM was compromised by an external attacker) manages to break down the isolation and escape from their VM. They may then be able to compromise the hypervisor and access the VMs of other tenants.
In type 2 hypervisors, the security of the OS that runs beneath the hypervisor is also critical. If an attacker can compromise the host OS, then they may be able to also compromise the hypervisor as well as the VMs running on top of it.
Containers are highly portable code execution environments that can be very efficient to run. Containers feature isolated user spaces but share the kernel and other aspects with the underlying OS. This contrasts with virtual machines, which require their own entire operating systems, including the kernel.
Multiple containers can run on top of each OS, with the containers splitting the available resources. This makes containerization useful for securely sharing hardware resources among cloud customers, because it allows them to use the same underlying hardware while remaining logically isolated. Each of these containers can in turn run multiple applications.
Another major advantage of containers is that they can help to make more efficient use of computational resources. The image below shows the contrast between virtual machines and containers. If we want to run three VMs on top of our hypervisor, we need three separate operating systems, three separate sets of libraries, and the apps on top of them. In contrast, on the container side, we just have one operating system, one containerization engine, libraries that can be shared between apps, and then our three apps on top.
The image below shows the major components of containerization, as well as the key terms. A container is formed by taking configuration files, application code, libraries, necessary data, etc. and then building them into a binary file known as a container image.
These container images are then stored in repositories. Repositories are basically just collections of container images. In turn, these repositories are stored in a registry. When you want to run a container, you pull the container image out of its repository, and then run it on top of what is known as a container engine. Container engines essentially add a layer of abstraction above the operating system, which ultimately allows the containers to run on any operating system.
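As a rough sketch of how these terms fit together, the helper below splits an image reference of the assumed form `registry/repository:tag` into its parts. Real references have more variants (digests, implicit default registries), so this is deliberately simplified:

```python
def parse_image_ref(ref: str) -> dict:
    """Split 'registry/repository:tag' into its parts (simplified sketch)."""
    registry, _, rest = ref.partition("/")
    repository, _, tag = rest.partition(":")
    return {"registry": registry, "repository": repository, "tag": tag or "latest"}

# A hypothetical image reference: pulled from a registry, out of a repository.
parts = parse_image_ref("registry.example.com/team/webapp:1.4.2")
```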
Application virtualization is similar to containerization in that there is a layer of virtualization between the app and the underlying OS. We often use application virtualization to isolate an app from the operating system for testing purposes. It is shown below:
Traditionally, apps were monolithic. They were designed to perform every step needed to complete a particular task, without any modularity. This approach creates complications, because even relatively minor changes can require huge overhauls of the app code in order to retain functionality.
With a more modular approach, developers can easily swap out and replace code as needed, without having to redesign major parts of the app. These days, many apps are broken down into loosely coupled microservices that run independently and simultaneously. These are small, self-contained units with their own interfaces, as shown in the image above.
Serverless computing can be hard to pin down. The term is often used to describe function-as-a-service (FaaS) products like AWS Lambda, but a number of other services are also offered under the serverless model. These include the relational database Amazon Aurora and Microsoft’s complex event processing engine, Azure Stream Analytics.
At its heart, serverless refers to a model of providing services where the customer only pays when the code is executed (or when the service is triggered by use, such as Amazon Aurora’s database), generally measured in very small increments.
Function as a service (FaaS)
Function as a service (FaaS) is a subset of serverless computing. In contrast with serverless’ broader set of service offerings, FaaS is used to run specific code functions. Entire applications can be built under the serverless model, while FaaS is limited to just running functions. Under FaaS you are only billed based on the duration and memory used for the code execution, and there aren’t any network or storage fees.
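A minimal sketch of this billing model follows. The per-GB-second rate is an illustrative assumption, not any provider's published price, and real providers add per-invocation rounding rules omitted here:

```python
# Sketch: FaaS-style billing by duration and memory (GB-seconds).
def faas_cost(duration_ms: float, memory_mb: float,
              price_per_gb_second: float = 0.0000166667) -> float:
    """Cost of one invocation: memory (GB) x duration (s) x rate."""
    gb_seconds = (memory_mb / 1024) * (duration_ms / 1000)
    return gb_seconds * price_per_gb_second

# One million invocations of a 128 MB function running 100 ms each
# comes to well under a dollar at this assumed rate:
total = 1_000_000 * faas_cost(duration_ms=100, memory_mb=128)
```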
We will start by discussing the storage types from 2.2 Design and implement cloud data storage architectures. This includes Exam Outline’s subsections on Storage types (e.g., long-term, ephemeral, raw storage), and Threats to storage types. We will also discuss storage controllers and storage clusters.
There are a number of different storage types you need to understand to truly grasp cloud computing. They are summarized below:
Long-term | Cheap and slow storage that’s mainly used for long-term record keeping. |
Ephemeral | Temporary storage that only lasts until the virtual environment is shut down. |
Raw-disk | A high-performance storage option. In the cloud, raw disk storage allows your virtual machine to directly access the storage device via a mapping file as a proxy. |
Object | Object storage involves storing data as objects, which are basically just collections of bits with an identifier and metadata. |
Volume | In the cloud, volume storage is basically like a virtualized version of a physical hard drive, with the same limitations you would expect from a physical hard drive. |
Cloud service models and storage types
Service model | Storage type |
Storage controllers manage your hard drives. They can be involved in tasks like reconstructing fragmented data and access control. Storage controllers can use several different protocols to communicate with storage devices across the network.
Here are three of the most common protocols:
Internet Small Computer System Interface (iSCSI) | This is an old protocol that is cost-effective to use and highly compatible. However, it does have limitations in terms of performance and latency. |
Fibre Channel (FC) | Fibre Channel offers reliability and high performance, but it can be expensive and difficult to deploy. |
Fibre Channel over Ethernet (FCoE) | Fibre Channel over Ethernet relies on Ethernet infrastructure, which reduces the costs associated with FC. It offers high performance, low latency and a high degree of reliability. However, there can be some compatibility issues, depending on your existing infrastructure. |
Cloud providers typically have a bunch of hard drives connected to each other in what we call storage clusters. Storage clusters are generally stored in racks that are separate from the compute nodes. Connecting the drives together allows you to pool storage, which can increase capacity, performance and reliability.
Storage clusters are typically either tightly coupled, or loosely coupled, as shown in the image above. The former is expensive, but it provides high levels of performance, while the latter is cheaper and performs at a lower level. The main difference is that in tightly coupled architectures the drives are better connected to each other and follow the same policies, which helps them work together. If you have a lot of data, and performance isn’t a major concern, a loosely coupled structure is often much cheaper.
The management plane is the overarching system that controls everything in the cloud. It’s one of the major differences between traditional infrastructure and cloud computing. Cloud providers can use the management plane to control all of their physical infrastructure and other systems, including the hypervisors, the VMs, the containers, and the code.
The centralized management plane is the secret sauce of the cloud, and it helps to provide the critical components like on-demand self-service and rapid elasticity. Without the management plane, it would be impossible to get all of the separate components to work in unison and respond dynamically to the needs of cloud customers in real time. The diagram below shows the various parts of the cloud under the management plane’s control.
The diagram further down shows the typical components of a cloud. The logical components are highlighted in yellow, while the physical components are shown in purple. Note that the management plane is actually both physical hardware and software.
Management plane capabilities
Management plane capabilities include:
- Service catalog
- Identity and access management
- Management APIs
- Configuration management
- Key management and encryption
- Financial tracking and reporting
- Service and helpdesk
Management plane security controls
The management plane is an immensely powerful aspect of cloud computing. Because of its immense degree of control and access, an attacker who compromises the management plane holds the keys to the castle. This makes securing the management plane one of the most important priorities. Defense in depth is critical: there need to be many layers of security controls keeping the management plane secure.
Orchestration is the centralized control of all data center resources, including things like servers, virtual machines, containers, storage and network components, security, and much more. Orchestration provides the automated configuration and coordination management. It allows the whole system to work together in an integrated fashion. Scheduling is the process of capturing tasks and prioritizing them, then allocating resources to ensure that the tasks can be conducted appropriately. Scheduling also involves working around failures to ensure tasks are completed.
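The capture-prioritize-allocate-retry loop described above can be sketched with a toy priority queue. The task shape, priority scheme, and retry policy here are illustrative assumptions, not any orchestrator's real API:

```python
import heapq

def run(tasks, worker, max_retries=2):
    """tasks: list of (priority, name); lower number = higher priority.
    Failed tasks are rescheduled up to max_retries times."""
    heap = list(tasks)
    heapq.heapify(heap)
    completed, attempts = [], {}
    while heap:
        prio, name = heapq.heappop(heap)
        attempts[name] = attempts.get(name, 0) + 1
        if worker(name):
            completed.append(name)
        elif attempts[name] <= max_retries:
            heapq.heappush(heap, (prio, name))  # work around the failure
    return completed

flaky = {"backup": 1}  # the "backup" task fails once before succeeding
def worker(name):
    if flaky.get(name, 0) > 0:
        flaky[name] -= 1
        return False
    return True

done = run([(2, "backup"), (1, "provision")], worker)
```

The higher-priority task runs first, and the failed task is retried until it completes, so `done` ends up containing both.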
3.2 Design a secure data center
There are many factors that influence the design of a data center. They include:
The type of cloud services provided | Different purposes will require different designs. For a service that offers cheap cloud storage, the data center would need a lot of storage hardware. In contrast, a service designed for training large language models (LLMs) would need a lot of high-end chips. |
The location of the data center | Factors that affect the location include:
Uptime requirements | If a data center aims to have extremely high availability, it will need to be designed with more redundancy built in. |
Potential threats | Threats will vary depending on what the cloud service is used for. As an example, if a cloud service is designed to host protected health information (PHI), it will need additional protective measures to mitigate against attackers targeting this highly sensitive data. |
Efficiency requirements | Different cloud services will need varying levels of efficiency to ensure cost-effectiveness. The intended use impacts design choices. As an example, a data center that aims to provide cheap service will probably want to use a lot of relatively basic equipment. A data center for training AI models will need niche hardware that drives up costs. |
Tenant partitioning and access control are two important logical considerations highlighted by the CCSP exam outline that can both be implemented through software.
If resources are shared without appropriate partitioning, a malicious tenant (or a tenant who has been compromised by an attacker) could harm all of the other tenants. Obviously, we do not want this to happen, so we want to isolate the tenants from one another. With appropriate isolation, a compromised or malicious tenant cannot worm their way into other tenants’ systems.
Tenants can be isolated by providing each one with their own physical hardware. One example is to allocate dedicated servers to each tenant. However, public cloud services tend to partition their tenants logically. They share the same underlying physical resources between their tenants and provide each one with virtualized versions of the hardware.
Access controls are an essential part of keeping tenants separate. We discuss them in Domain 4.7.
The physical design of a data center goes far beyond the architecture. It includes things like the location, the HVAC, the infrastructure setup and much more. Each aspect needs to be carefully considered to produce an efficient and resilient data center.
Buy or build?
When a company needs a data center, it must decide whether to buy an existing one, lease, or build its own. Below are the key differences between buying, leasing and building:
Buy | Lease | Build |
High CapEx, low OpEx (but not as low as when building a custom data center) | Low CapEx, high OpEx. | High CapEx, but low OpEx. |
Will not be customized to an organization’s needs. | Will not be customized to an organization’s needs. | Can be tailor-made and incredibly efficient. |
The organization has a lower degree of control. | The organization has a lower degree of control. | The organization has a high degree of control. |
There are many important factors to consider when choosing the location of a data center. Some of the main considerations are:
- How close the data center needs to be to users.
- Jurisdiction and compliance requirements. Some jurisdictions may require that any data about their residents be stored within the region.
- The price of electricity in various regions.
- Susceptibility to disasters such as earthquakes and flooding.
- Climate also has an impact, with warmer locations generally requiring more energy to cool the hardware.
When designing a data center, we have three primary utilities that we need to worry about. It’s easiest to remember them as the three Ps.
Ping (network) | Your data center will need to have a high-speed fiber optic connection that links it up to the internet backbone. |
Power (electricity) | Your data center will need sufficient power to run its equipment. Given that data centers use large amounts of power, it is ideal to locate data centers in areas with affordable electricity. |
Pipe (HVAC) | To efficiently run your hardware and limit equipment failures, your data center will need to maintain the right temperature and humidity. This is what we consider “pipe”. It includes your air conditioning, heating, ventilation, dehumidifiers, water, etc. |
Given that each of these utilities are critical for keeping your service available, you will need to have redundancies for each in place. The more uptime you wish to guarantee your customers, the more elaborate your redundancy plans will need to be.
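The payoff from redundancy can be estimated with a simple availability calculation, assuming components fail independently. Real failures are often correlated (a storm can take out both power feeds), so treat this as an optimistic upper bound:

```python
# Sketch: availability of n redundant, independent components.
# The service is down only if every redundant component fails at once.
def redundant_availability(single: float, n: int) -> float:
    return 1 - (1 - single) ** n

# A single 99% power feed vs. two or three in parallel:
one, two, three = (redundant_availability(0.99, n) for n in (1, 2, 3))
```

Under the independence assumption, two 99% feeds yield 99.99% availability, which is why uptime guarantees drive increasingly elaborate redundancy plans.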
Internal vs external redundancies
Redundancies can be categorized as internal or external, depending on whether they are inside the server room or outside of it. Things like power distribution units and chillers are viewed as internal redundancies, while a generator is seen as external. You wouldn’t want to run your generator inside and clog the server room with fumes.
BICSI data center standards
When designing data centers, various resources from the Building Industry Consulting Service International (BICSI) are incredibly useful. For taking care of ping, BICSI has a number of cabling standards, such as ANSI/BICSI N1-2019, Installation Practices for Telecommunications and ICT Cabling and Related Cabling Infrastructure and ANSI/BICSI N2-2017, Practices for the Installation of Telecommunications and ICT Cabling Intended to Support Remote Power Applications.
Standards that focus on overall data center design and operations include ANSI/BICSI 002-2019, Data Center Design and Implementation Best Practices, as well as BICSI 009-2019, Data Center Operations and Maintenance Best Practices.
Heating, ventilation and air conditioning (HVAC)
HVAC stands for heating, ventilation and air conditioning, each of which are critical for operating a data center smoothly. In cold climates, a data center may need heating. Ventilation is important for dehumidifying and filtering a data center’s air. Air conditioning and other types of cooling are critical for keeping the hardware from overheating, especially in hot places.
The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) specifies that data centers should maintain the conditions listed in the table below:
Recommended air |
When working with macOS applications, understanding how to handle ‘plist’ (property list) files is essential for effective application packaging and configuration management. ‘Plist’ files are key components for software packaging on macOS, storing data in a key-value format to define application behavior, preferences, and configurations across different sessions and users.
What are Plist Files?
A ‘plist’ file is a structured data format used by macOS and iOS applications to store settings, preferences, and other configuration data. These files can exist in either XML or binary format and are commonly found in application bundles and user preference directories. As part of the application packaging process, ‘plist’ files help to manage various aspects of software deployment and updates.
Where to Find Plist Files?
For effective mac packaging, it’s essential to know where ‘plist’ files are located:
1. User Preferences:
These files contain user-specific settings for applications, which can be adjusted during the software deployment process to cater to individual needs.
2. All Users Preferences:
These files store system-wide settings, making them ideal for use in packaged business applications that need consistent configuration across all users.
Working with Plist Files
Converting Plist Files Between Formats
‘Plist’ files are often stored in a binary format for efficiency, but this can pose a challenge during application packaging and deployment because they are not readable in standard text editors. Use the plutil command to convert these files between binary and XML formats:
To convert a binary ‘plist’ file to XML format:
To convert an XML ‘plist’ file back to binary:
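On macOS, these conversions are typically done with `plutil -convert xml1 file.plist` and `plutil -convert binary1 file.plist`. As a portable sketch of the same idea, Python's standard `plistlib` module can serialize the same data in either format (the key names below are just example preferences, not taken from any real application):

```python
import plistlib

# The same settings dictionary, serialized in both plist formats.
settings = {"SUEnableAutomaticChecks": False, "WindowSize": [1280, 800]}

binary_blob = plistlib.dumps(settings, fmt=plistlib.FMT_BINARY)  # compact
xml_blob = plistlib.dumps(settings, fmt=plistlib.FMT_XML)        # human-readable

# Both forms round-trip to the same dictionary:
assert plistlib.loads(binary_blob) == settings
assert plistlib.loads(xml_blob) == settings
```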
Converting between these formats can be a crucial step in the application packaging workflow, allowing easy editing and ensuring efficient storage.
Viewing and Editing Plist Files
Various tools are available for viewing and editing ‘plist’ files:
- Xcode: Provides a graphical interface for modifying ‘plist’ files, a valuable feature for application repackaging.
- PlistBuddy: A command-line tool designed for managing ‘plist’ files, useful for automating tasks in the software deployment lifecycle.
- Text editor: Convert binary ‘plist’ files to XML format to make them editable with any standard text editor.
Example: Using the defaults Command
The defaults command is an essential tool for working with ‘plist’ files, enabling you to read and write data effectively—a key part of mac packaging tools.
To read a value:
To set a value:
For example, to disable updates for Firefox, use:
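The elided commands above generally take the forms `defaults read <domain> <key>` and `defaults write <domain> <key> -bool true`; the Firefox key used below, `DisableAppUpdate`, is taken as an assumption from Mozilla's enterprise policy templates. The sketch mimics what `defaults write` does under the hood by editing a plist directly, with a throwaway path standing in for the real preferences domain:

```python
import pathlib
import plistlib
import tempfile

# Placeholder for e.g. /Library/Preferences/org.mozilla.firefox.plist:
prefs = pathlib.Path(tempfile.mkdtemp()) / "org.mozilla.firefox.plist"

def write_default(path: pathlib.Path, key: str, value) -> None:
    """Set key=value in the plist at path, creating the file if needed."""
    data = plistlib.loads(path.read_bytes()) if path.exists() else {}
    data[key] = value
    path.write_bytes(plistlib.dumps(data, fmt=plistlib.FMT_BINARY))

def read_default(path: pathlib.Path, key: str):
    """Read a single key back out of the plist at path."""
    return plistlib.loads(path.read_bytes()).get(key)

write_default(prefs, "DisableAppUpdate", True)
assert read_default(prefs, "DisableAppUpdate") is True
```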
Best Practices for Working with ‘plist’ Files
- Backup: Always create a backup of ‘plist’ files before making any changes, ensuring data integrity during the application packaging process.
- Permissions: Ensure you have the necessary permissions to edit files in /Library/Preferences/, especially when working with system-wide configurations.
By mastering the handling of ‘plist’ files, you can optimize macOS application packaging, making software deployment and updates more efficient and tailored to both individual users and entire systems.
How advancing cyber education can help fill workforce gaps
The ongoing cybersecurity skills shortage is a critical issue plaguing organizations and causing serious problems. The lack of trained and qualified professionals in the field has resulted in numerous security breaches, leading to the loss of large amounts of money.
In this Help Net Security video, José-Marie Griffiths, President of Dakota State University, discusses how this shortage is not just a mere inconvenience but a major threat compromising the safety and security of companies and putting the sensitive information of their clients and customers at risk.
With each passing day, the consequences of this shortage become more and more severe, making it imperative for organizations to take immediate action and find ways to address this critical challenge.
Mobile Ransomware Attacks. Device Security
While ransomware has been a persistent threat for over two decades, the rise of mobile ransomware attacks in recent years has intensified concerns. As the world becomes increasingly dependent on online and wireless connections, the prevalence of mobile ransomware attacks has escalated, capturing widespread attention in the 21st century.
Ransomware is a kind of malicious software, or malware. As its name suggests, these cyber attacks lock computers, mobile devices or files so that their owners lose access unless they pay a ransom. In other words, it kidnaps your data or device.
There are two types of ransomware: blockers and encryptors. The former blocks a computer or device and sends a message telling the victim to send money for the system to be unlocked, whilst the latter encrypts files and will require ransom in exchange for the decryption key.
Encryptors are the worse of the two. A blocker can be defeated by reinstalling the OS and recovering the data, but files encrypted by ransomware can only be opened with the corresponding decryption key, no matter where you transfer them, even to a freshly reinstalled OS.
Ransomware has infected all operating systems, whether Linux, Windows, Apple OS, or Android. Numerous types have also appeared, each with its own infection tactics, file targets and ransom amounts.
Damage in Figures
As the number of smartphone users grows, mobile ransomware attacks become more prevalent as well. In 2014-2015, just over 35,400 mobile users were attacked. In 2015-2016, that number increased by almost 400% to more than 136,500 victims. What’s worse, these numbers were just for Android ransomware.
Although iOS gadgets were relatively more secure from mobile ransomware due to tight security in App Store, they’re far from immune. In 2015, different variants of malware spread across App Store apps such as XcodeGhost and Youmi, which infiltrated around 4,000 apps and over 250 apps, respectively. Once inside a device, they send the personal credentials of the owner to the attackers, including credit card or bank details. They’re not ransomware per se, but they prove that iOS devices have been breached all the same.
Back up your files regularly, especially ones with sensitive information. It’s better if you can avoid putting this info in your device altogether. Unfortunately, a lot of times people use their smartphones for jobs and other transactions so there will be corporate files and other data like credit card details, personal info, etc. in these mobile devices.
In that case, be sure to back up the files in secure locations. Google made this process easier by integrating a single account from each user to be utilized on their services such as Gmail, Google Drive, Calendar, and others.
Saving backup files online in cloud storage is efficient and convenient, and its security is regularly strengthened through testing by groups like the Cloud Security Alliance. For more safety though, Spinbackup and other cloud security programs enable users to further protect their data or restore it in case of loss due to a virus or ransomware attack.
A good rule of thumb, however, is to have a copy of your files in offline storage such as external drives as well. Just update them on a scheduled basis.
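A scheduled offline backup can be as simple as a dated copy with an integrity check. The sketch below uses throwaway placeholder paths and is only a starting point, not a full backup tool:

```python
import datetime
import hashlib
import pathlib
import shutil
import tempfile

def backup(src: pathlib.Path, dest_root: pathlib.Path) -> pathlib.Path:
    """Copy src into a dated snapshot folder and verify the copy by hash."""
    stamp = datetime.date.today().isoformat()
    dest = dest_root / stamp / src.name
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dest)
    digest = lambda p: hashlib.sha256(p.read_bytes()).hexdigest()
    assert digest(src) == digest(dest), "copy verification failed"
    return dest

# Example with temp paths (replace with your real files and backup drive):
src = pathlib.Path(tempfile.mkdtemp()) / "ledger.txt"
src.write_text("important records")
copy = backup(src, pathlib.Path(tempfile.mkdtemp()))
```

Run on a schedule, each day's snapshot lands in its own dated folder, so an infected copy does not overwrite older, clean ones.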
Be extra careful and avoid suspicious links or websites. Though premium smartphones have security features like Touch ID and other fingerprint scanners, these mostly work as outbound protection, i.e. for online purchases. Ransomware attacks are inbound: hackers look for a security gap while you’re browsing the internet and opening websites, then push malware through it.
Remember to check and apply updates for your software. Releases like Android’s August Security Update, which comes in two patches, are crucial for the protection of your data. The first update contains major security fixes for all Android devices, while the second is for drivers and kernel-related improvements.
Consider upgrading your device as well. Some service providers have trade-in options, such as O2’s Refresh for example, wherein a subscriber can swap out his/her device anytime during contract period. Getting a newer model with better security features can mean the difference in being more vulnerable or not.
Exclusively written for Spinbackup
Was this helpful?
How Can You Maximize SaaS Security Benefits?
Let's get started with a live demo | <urn:uuid:94d0b68b-21ca-4cfc-a992-c53cf14a1ade> | CC-MAIN-2024-38 | https://spin.ai/blog/ransomware-attacks-mobile-security/ | 2024-09-12T12:56:09Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651457.35/warc/CC-MAIN-20240912110742-20240912140742-00858.warc.gz | en | 0.944271 | 915 | 2.875 | 3 |
The “Security Spotlight” blog series provides insight into emerging cyberthreats and shares tips for how you can leverage LogRhythm’s security tools, services, and out-of-the-box content to defend against attacks.
In this Security Spotlight, we’ll be talking about Remote Desktop Protocol and how attackers use it to breach vulnerable networks (MITRE ATT&CK® Technique T1563).
What is Remote Desktop Protocol Hijacking?
Remote Desktop Protocol (RDP) is a secure communications protocol developed by Microsoft that allows users to interact with a desktop computer remotely. While various protocols exist for remote desktop software, RDP stands out as the most ubiquitous.
RDP session hijacking occurs when an adversary steals a legitimate user’s remote session. Typically, users are notified of such attempts. However, if an attacker possesses SYSTEM-level permissions, they can hijack a session without needing credentials or prompting the user. This can be done either remotely or locally, and to make matters worse, the user’s session doesn’t need to be active to be susceptible to this vulnerability.
Why You Need To Look Out For RDP Hijacking
This issue is exacerbated by attackers using techniques to conceal their malicious intent within legitimate traffic. A common example of this is the utilization of multiple RDP sessions on a single machine, providing them with persistence, lateral movement, and potential privilege escalation within a compromised network. All the while, they mask their malicious access within the normal traffic of legitimate RDP sessions.
Attackers achieve this by modifying the Windows Registry settings associated with Terminal Services and disabling the single-session-per-user limit. Once this modification is made, and if left unmonitored, distinguishing between the legitimate user session and the adversary’s activity becomes exceedingly difficult.
RDP has been an official feature of the Windows Operating System since 1998. Despite its vulnerabilities being consistently identified and patched, its value and capability it provides individuals indicate it will not be going anywhere for the foreseeable future.
Sadly, it isn’t uncommon to encounter compromised RDP servers offered on dark web forums, promoted as “staging grounds” from which attackers can conduct a myriad of potentially malicious actions.
How Can LogRhythm Help You?
Given the business-critical nature of RDP for many organizations, security teams should first review the mitigations suggested by MITRE ATT&CK® that ideally will establish an initial layer of protection against potential exploitation. Simple measures like limiting access to specific IP addresses or disabling RDP when not explicitly in use can create significant barriers to entry for attackers, potentially leading them to abandon their attempts in favor of alternative, “simpler” avenues.
However, when monitoring for the implementation of multiple RDP sessions, attackers can execute a number of specific commands that bypass the single-session limitation within the Microsoft Registry.
The rules implemented here apply to both LogRhythm SIEM and LogRhythm Axon. They are designed to specifically detect these commands as an early warning system for machines that are vulnerable to this particular type of misuse. Such alerts may occasionally flag up ignorant users instead of malicious ones. However, it is crucial to recognize that a machine is vulnerable and mitigate that risk, irrespective of the intent behind the commands.
For more information on how to enable these rules within your LogRhythm deployment, check out our community page to read more, download, and then import the rule into your platform.
For other Security Spotlight episodes, you can access the full playlist here. | <urn:uuid:bc9528cf-717a-48b3-98c5-e02462e264ef> | CC-MAIN-2024-38 | https://logrhythm.com/remote-desktop-protocol-hijacking-security-spotlight/ | 2024-09-14T23:41:29Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651601.85/warc/CC-MAIN-20240914225323-20240915015323-00658.warc.gz | en | 0.912006 | 730 | 2.625 | 3 |
Classroom lighting can be an overlooked factor for children’s success in school. However, studies have shown that lighting quality affects students’ abilities to see clearly, concentrate and perform well in the classroom. Since lighting plays a critical role in our everyday lives, it’s worth our while to understand the quality of light that’s shining down on our children.
In the first part of our series we’ll explain how replacing traditional classroom lighting with full-spectrum lighting helps students’ performance in the classroom.
Classroom Lighting and Visual Acuity
Could it be possible that different classroom lighting, or the use of fluorescent classroom light filters, could improve children’s abilities to see clearly?
Visual acuity (VA) commonly refers to the clarity of vision. Visual Acuity is dependent on optical and neural factors, i.e. (i) the sharpness of the terinal focus within the eye, (ii) the helath and functioning of the retina, and (iii) the sensitivity of the interpretative faculty of the brain.
A 2006 study by Berman et al. on lighting and visual acuity suggests this is exactly the case. The study compared the use of standard color temperature fluorescent lighting with the use of high color temperature fluorescent lighting on children’s visual acuity in the classroom.
Let’s quickly define visual acuity: Visual acuity is the clarity or sharpness of vision. The graphic below shows the Snellen chart, a common method for measuring a person’s visual acuity.
Classroom Lighting Study Results
The results from the study were quite enlightening, if you’ll pardon the pun.
The study showed that high color temperature fluorescent lighting helps students see clearer and allows them to read faster. It also reduces the visual fatigue and glare that are typically experienced with standard color temperature fluorescent lighting.
Classroom lighting used in the study included standard 3600K correlated color temperature (CCT) lights and 5500K CCT fluorescent fixtures.
To better explain what this means, we’ve provided a breakdown of correlated color temperatures:
- 2000K CCT is equal to sunlight at sunrise or sunset under clear skies
- 3500K CCT is the equivalent to direct sunlight an hour after sunrise
- 4300K CCT is like morning or afternoon direct sunlight
- 5400K CCT is akin to noon summer sunlight
In short, high-quality classroom lighting improves students’ visual acuity, giving them the sight they need to perform well in school.
Pupil Size Matters
What does a smaller pupil diameter mean? When the pupil is smaller, the depth of the vision field increases and visual acuity improves. This, in turn, results in a reduction in visual fatigue, faster reading times and less visual glare. Essentially, the eye is seeing at its most optimal level.
Whether you are using high color temperature classroom lighting bulbs or classroom light filters to increase the color temperature, you will find your students benefit from improved visual acuity and often perform better in the classroom. By adding more blue/green into the color spectrum, you are essentially creating full spectrum light.
The study also notes a strong correlation between pupil size and reading performance, where light spectrum, not brightness is the driving factor. In a follow-up blog, we will examine exactly how replacing lighting in classrooms with full spectrum lighting has proven to increase reading and math test scores.
Erik Hinds is Vice President of Helping People at Make Great Light. For more information on how fluorescent light filters for classrooms can improve the learning environment, please visit the resource center.
If you enjoyed this article and want to receive more valuable industry content like this, click here to sign up for our digital newsletters! | <urn:uuid:cdc5b796-26db-4d08-a40c-c456b63359b4> | CC-MAIN-2024-38 | https://mytechdecisions.com/facility/how-does-classroom-lighting-affect-the-students-part-i/ | 2024-09-07T21:38:35Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650920.0/warc/CC-MAIN-20240907193650-20240907223650-00458.warc.gz | en | 0.915089 | 773 | 3.6875 | 4 |
Struggling to offer up responses to your kids’ sighs of “I’m bored….”? Consider sending them to Space Camp – without buying a space suit or packing any luggage – with an online Space Exploration activity from Hughes and National 4-H, the premier youth development organization.
With support from Hughesnet, the flagship satellite internet service from Hughes, National 4-H launched a free online resource for parents and teachers designed to inspire the next generation of scientists. It’s just one example of the types of STEM programs Hughes supports for young people – and one of the reasons Hughes twice received the Communitas award for Excellence in Corporate Social Responsibility for our collaboration with 4-H. In 2021, Hughes was recognized for making a difference by providing accessible STEM education during the pandemic. This year, Hughes was honored for connecting youth to STEM education resources amidst America’s drive to digital.
The online portal offers fun, hands-on activities developed to spark kids’ interest in Science, Technology, Engineering and Mathematics (STEM). Teaching critical STEM skills helps to build future leaders – many of whom may someday choose to work in the space and satellite industry.
In the Space Exploration activities, children join astronaut Isabella Hernandez in interactive videos as they see firsthand what it takes to live and work among the stars.
- Space Exploration: Quest 1 – In this adventure, kids help Commander Isabella Hernandez solve one of the biggest challenges of long-distance space travel: nutrition. If astronauts are ever going to explore the outer reaches of the solar system, we’ll have to figure out how to grow nutritious food in space. (Grades 3-8)
- Space Exploration: Quest 2 – Kids arrive at the moon and join Commander Hernandez as she shows how to assemble and repair a Lunar Terrain Vehicle (LTV). Then, they take it for a spin to collect sand and rock samples on the moon’s surface. (Grades 3-8)
Once they return from the outer space adventures, here are three more boredom busters to explore this summer:
- Parachute Away – Kids know that when they throw an egg up into the air it will crash land on the floor and make a mess. In this activity, they learn how to harness the power of physics and air resistance to develop different parachute designs and discover a safe way to deliver an egg (or an astronaut) to the ground. (Grades 3-5)
- DIY Flashlight – Every explorer needs the proper equipment! In this activity, kids learn about electrical energy using batteries and conductors as they create a battery-powered flashlight with an on/off switch. (Grades 4-7)
- Crazy Kites – It could be said that space exploration began when humans constructed and flew the first kites thousands of years ago. Kids will get to use their engineering design skills to make and build a kite of their own. While testing out the kite designs kids will learn about how lift and air pressure work together to make things fly!
If you and your children enjoyed these activities, find more STEM activities from 4-H and Hughesnet online. | <urn:uuid:bd7aa734-6b25-406a-9579-3e3e5735e290> | CC-MAIN-2024-38 | https://www.hughes.com/resources/insights/hughesnet/engage-your-kids-stem-and-space-summer-free-online-activities | 2024-09-07T21:20:06Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650920.0/warc/CC-MAIN-20240907193650-20240907223650-00458.warc.gz | en | 0.922677 | 649 | 3.25 | 3 |
In this video, scientists describes IBM GRAF, a powerful new global weather forecasting system that will provide the most accurate local weather forecasts ever seen worldwide. Deployed at The Weather Company, the IBM Global High-Resolution Atmospheric Forecasting System (GRAF) is the first hourly-updating commercial weather system that is able to predict something as small as thunderstorms globally. Compared to existing models, it will provide a nearly 200% improvement in forecasting resolution for much of the globe (from 12 to 3 sq km).
GRAF uses advanced IBM POWER9-based supercomputers, crowdsourced data from millions of sensors worldwide, and in-flight data to create more localized, more accurate views of weather globally. IBM says its new supercomputer, DYEUS, built just to run the model, will issue 12 trillion pieces of weather data every day and process forecasts every hour, while many global weather models update only every six to 12 hours.
Today, weather forecasts around the world are not created equal, so we are changing that,” said Cameron Clayton, general manager of Watson Media and Weather for IBM. “Weather influences what people do day-to-day and is arguably the most important external swing factor in business performance. As extreme weather becomes more common, our new weather system will ensure every person and organization around the world has access to more accurate, more finely-tuned weather forecasts.”
As we address climate change and intensifying severe weather, it’s more critical now than ever to have access to timely, reliable forecasting services around the world. Unfortunately, not all forecasts are created equal. To help address this issue, IBM’s The Weather Company has launched a powerful new forecasting tool called IBM GRAF. The hourly-updating Global High-Resolution Atmospheric Forecasting System (IBM GRAF) improves global model resolution by 3x, helping to bring the rest of the world’s forecasts up to the standard once limited to a small number of countries. Created in collaboration with NCAR and running on a GPU-accelerated IBM supercomputer, IBM GRAF is the world’s first operational high-resolution, hourly-updating model that covers the entire globe. It helps democratize weather forecasts so people, businesses and governments — anywhere — can make better weather-related decisions.
IBM GRAF’s speed, accuracy and resolution depend on massive computing power, a new weather model and the use of a wide variety of data from traditional and new sources.
A system that delivers high-resolution, global, hourly-updating forecast models needs infrastructure that can not only accommodate big data but can also supply massive computing capacity and advanced graphic rendering. This powerful new system is composed of 84 nodes of the IBM Power Systems AC922 server and 3.5 petabytes, or 3.5 quadrillion bytes, of IBM Spectrum Scale Storage. This is the same IBM POWER9 and IBM Storage technology used by the U.S Department of Energy’s supercomputers Summit and Sierra, the most powerful and smartest supercomputers in the world. Predictions from the new system will be made available globally in 2019.
All of that computing power is brought to bear on a new, advanced prediction model that resulted from a highly collaborative effort. Developed by the National Center for Atmospheric Research (NCAR) in concert with the climate modeling group at Los Alamos National Laboratory, it was later refined in collaboration with IBM and The Weather Company.
This is a great example of basic research leading to private sector applications,” said Antonio Busalacchi, president of the University Corporation for Atmospheric Research, which manages NCAR for the National Science Foundation.
IBM GRAF’s predictive capabilities, and global forecasting science as a whole, will continue to advance with improved data collection around the globe. As weather models use increasingly fine resolutions, Busalacchi said, it’s even more important to pull in data from nontraditional sources. IBM GRAF will eventually use data from airplane sensors and individual smartphones, if people choose to share them. “That’s where IBM has a role in terms of its ability to take advantage of those observations” and improve the models, Busalacchi said. | <urn:uuid:9bb46ca6-1fb2-4e69-99ec-13e728177dc7> | CC-MAIN-2024-38 | https://insidehpc.com/2019/12/video-democratizing-the-worlds-weather-data-with-ibm-graf/ | 2024-09-09T01:44:29Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651053.52/warc/CC-MAIN-20240909004517-20240909034517-00358.warc.gz | en | 0.928481 | 883 | 3.25 | 3 |
What are Smart Cities?
Evolving from the now widespread use of IoT, or the Internet of Things, smart cities are areas that integrate technology into their management systems in order to enhance their citizens’ living experience. From working conditions to energy efficiency, improved infrastructure and even revolutionised healthcare, smart cities have the potential to change and improve the way we live our day to day lives. So which smart cities are beginning to make an impact on a global scale?
Regularly topping the smart cities leaderboards, Singapore is renowned for being one of the most forward thinking smart cities in the world, with innovative uses of technology being used in almost every aspect of city living. As well as long term disaster prevention measures that work out which buildings are most at risk from the elements, Singapore’s smart cities initiatives aim to make everyday life that little bit more simple. Through the SingPass system, citizens can easily access government services, mobile banking and even their healthcare records, cutting down on both the costs and time spent dealing with bureaucratic processes. Singapore is even making steps to digitalise the city’s water management- the government is testing seawater desalination technology that only uses half the electricity of current systems. Processes like this show just how energy efficient and environmentally conscious smart cities can help us become.
Madrid is a key player among smart cities because of the way it makes use of technology primarily to deal with social issues. Madrid delivers public services like street maintenance, lighting, irrigation, cleaning and waste management to more than 3 million people; that’s a lot of people that need managing efficiently and sustainably each day. One of the main ways Madrid will go about this is through the use of real time updates to manage situations as soon as they happen. Let’s say a tree falls and blocks a road in the city centre. Citizens in the area will be able to take a photo of the incident, and upload it to a shared technology platform to give the authorities the information they need as soon as possible. Madrid’s government is also turning the city into a key player in the smart cities arena by making key improvements to the mobility and transportation around the city, cutting harmful emissions and encouraging tourism to the region.
Already a thriving metropolis, New York’s smart cities development focuses on large scale infrastructure overhaul, aiming to improve lighting, water quality, waste management and air pollution. Initiatives like this promise to change the way we see cities, and the issues that come with living in cities, for the foreseeable future. New York City uses 1 billion gallons of water each day; smart meters give every citizen an idea of their water consumption usage each day, saving both them and city hundred of dollars in overusage.
New York is also installing WiFi across the entirety of the city, making sure every single New York citizen has access to free Internet. All in all, processes and policies like this can help make the city a much safer, more energy efficient and cost effective place to live in.
Interested in learning more? Harnessing the power behind smart cities changes the way we see the world, and your business’s place in it. Explore CISto find out more about how your business can benefit from the digital revolution. | <urn:uuid:0e81f95d-2255-415f-8c63-14b2919278b8> | CC-MAIN-2024-38 | https://cisltd.com/blog/smart-cities/ | 2024-09-11T13:51:32Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651387.63/warc/CC-MAIN-20240911120037-20240911150037-00158.warc.gz | en | 0.946704 | 663 | 3.109375 | 3 |
Table of Contents
Data Breach Explained
A data breach refers to any incident where unauthorized individuals gain access to sensitive or confidential data. This can include personal information like social security numbers, financial data, healthcare records, or corporate information such as customer databases and intellectual property. A data breach occurs when the confidentiality of data is compromised.
It's important to differentiate between a data breach, a data leak, and data loss.
- Data Breach: Unauthorized access or acquisition of sensitive or confidential data, compromising its confidentiality. It involves intentional or unintentional breaches where data is accessed without permission.
- Data Leak: Unintentional or accidental exposure of data to unauthorized individuals or entities. It occurs when data is mistakenly disclosed or shared without proper authorization.
- Data Loss: The permanent destruction or loss of data, often resulting from hardware failures, natural disasters, or other catastrophic events. It refers to the inability to retrieve or access data due to its complete eradication.
In summary, a data breach refers specifically to unauthorized access to data. A data leak, on the other hand, refers to the unintentional or accidental exposure of data to unauthorized individuals. Data loss refers to the permanent destruction or loss of data, often resulting from hardware failures, natural disasters, or other catastrophic events.
Why Do Data Breaches Happen?
Data breaches can occur due to various factors, including innocent mistakes, malicious insiders, and hackers. The motivations behind data breaches are typically financial gain, identity theft, corporate espionage, or political agendas. Cybercriminals may seek to steal sensitive information like credit card numbers, personal identification details, or trade secrets for illicit purposes.
How Data Breaches Work
Data breaches occur when unauthorized individuals gain access to sensitive or confidential information stored in a system or database. These breaches can happen in various ways, and the methods used by attackers can be sophisticated or relatively simple. Here's an overview of how data breaches commonly work:
Phase 1: Initial Access: The first step in a data breach is gaining unauthorized access to a target system. Attackers may use various techniques to achieve this, including:
- Stolen or compromised credentials: Hackers obtain login credentials through brute force attacks, phishing scams, or purchasing stolen credentials from the dark web.
- Social engineering attacks: Attackers manipulate individuals into revealing sensitive information through techniques like phishing emails, phone calls, or impersonation.
- Malware and ransomware: Malicious software is used to exploit vulnerabilities in systems, infect devices, or encrypt data for ransom.
- System vulnerabilities: Hackers exploit weaknesses in software, operating systems, or websites to gain unauthorized access to data.
- Physical theft or site security errors: Breaches can occur when physical devices, such as laptops or hard drives, are stolen or when criminals gain access to secure premises.
Phase 2: Privilege Escalation: Once inside the system, attackers may try to escalate their privileges. This involves obtaining higher-level access rights to gain control over more sensitive data or to compromise other parts of the network.
Phase 3: Lateral Movement: With escalated privileges, attackers move laterally through the network, exploring and compromising additional systems or databases. This helps them locate the valuable data they want to steal and avoid detection.
Phase 4: Data Extraction: After identifying the desired data, attackers extract it from the compromised systems. They may copy the information to a remote server, external storage device, or cloud storage, where they can access it later.
Phase 5: Covering Tracks: To avoid detection and maintain access, attackers often attempt to erase any traces of their presence, such as log files or audit trails.
Phase 6: Data Exfiltration: Once attackers have collected the data, they exfiltrate it from the organization's network. This can be done using various covert methods, such as disguising the data within seemingly innocuous network traffic or encrypted communication channels.
How to Prevent and Mitigate Data Breaches
Preventing data breaches requires a multi-faceted approach, including regular software updates and patches, strong access controls and authentication mechanisms, employee training on security best practices, and ongoing monitoring and threat detection measures. Organizations must stay vigilant and proactive in their cybersecurity efforts to protect sensitive data from falling into the wrong hands. In more detail, organizations should implement:
- Incident response plans: Organizations should have well-defined incident response plans to detect, contain, and respond to data breaches effectively.
- AI and automation: Utilizing artificial intelligence (AI) and automation technologies can enhance threat detection and response, reducing the impact of data breaches.
- Employee training: Educating employees about cybersecurity best practices, such as recognizing social engineering attacks and handling data securely, can significantly reduce the risk of data breaches.
- Identity and access management (IAM): Implementing strong password policies, multi-factor authentication, and other IAM technologies can protect against unauthorized access through stolen or compromised credentials.
- Zero trust security approach: Adopting a zero trust security model involves continuously verifying and validating users or entities, restricting access privileges, and monitoring network activity to minimize the risk of data breaches.
How to Respond to a Data Breach
In the event of a data breach, organizations should follow a comprehensive response plan, which may include:
- Containment: Immediately isolating and containing the breach to prevent further unauthorized access to sensitive data.
- Investigation: Conducting a thorough investigation to determine the scope, impact, and root causes of the breach.
- Notification: Complying with applicable data breach notification laws and regulations to inform affected individuals, authorities, and stakeholders.
- Remediation: Taking steps to remediate vulnerabilities, strengthen security measures, and prevent future breaches.
- Communication: Maintaining transparent communication with affected individuals, customers, employees, and the public to address concerns and restore trust.
How IRONSCALES Prevents Data Breaches
IRONSCALES is a leading provider of advanced email security solutions designed to prevent data breaches. Their platform utilizes AI, machine learning, and user-driven threat intelligence to detect and respond to phishing attacks, which are a significant cause of data breaches. IRONSCALES offers features like real-time phishing alerts, incident response automation, and employee training to proactively protect organizations from evolving cyber threats. By empowering employees with the tools and knowledge to identify and report phishing attempts, IRONSCALES helps prevent successful data breaches and minimize their impact.
Explore More Articles
Say goodbye to Phishing, BEC, and QR code attacks. Our Adaptive AI automatically learns and evolves to keep your employees safe from email attacks. | <urn:uuid:d3e2fca5-e5f6-4e6e-8ed9-b863eb4491bf> | CC-MAIN-2024-38 | https://ironscales.com/glossary/data-breach | 2024-09-12T18:20:18Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651491.39/warc/CC-MAIN-20240912174615-20240912204615-00058.warc.gz | en | 0.897328 | 1,358 | 3.71875 | 4 |
Personally identifiable information (PII) is any data that can reveal a person’s identity. This includes data such as a person’s name, address, date of birth, credit card details, Social Security number, or even medical records.
In the digital age we live in, data has become one of a company’s most valuable assets. A company may therefore hold a lot of PII data, which makes it an attractive target for hackers. Nowadays, PII data is considered highly sensitive data that needs to be properly protected.
Many organizations underestimate the effort required to protect their users’ data. At the same time, many users aren’t aware of how many companies hold PII data about them. That’s why we need strong data privacy tools like encryption to protect users’ PII data.
This article will guide you through encryption methods and define a plan that will help you get started with implementing data encryption techniques. First, let’s introduce encryption.
What Is Data Encryption?
So, what is encryption? Encryption refers to algorithms that scramble your data. This means that the data becomes unreadable and doesn’t make sense.
Because we still need to use this data, the encryption algorithm comes with an encryption key. This key can be used to unscramble the data back into its original state. The key is an essential part of encryption: it’s what lets us unlock the information and turn it back into a readable format.
To give an example, WhatsApp uses encryption for transferring messages between two or multiple users. Each message is encrypted using a shared encryption key between the recipients. This means that only the sender and recipient can unscramble the data to a readable message. Moreover, for every new conversation you start, a new encryption key is exchanged in a secure way between smartphones.
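To make the idea concrete, here is a minimal sketch of encrypting and decrypting with a shared key, using Python’s widely used `cryptography` package (the library choice is just an example; any well-vetted crypto library works similarly):

```python
from cryptography.fernet import Fernet

# Generate a random encryption key: whoever holds it can read the data
key = Fernet.generate_key()
cipher = Fernet(key)

# Encrypting scrambles the message into unreadable ciphertext
ciphertext = cipher.encrypt(b"Meet me at noon")

# Decrypting with the same key restores the original message
plaintext = cipher.decrypt(ciphertext)
print(plaintext)  # b'Meet me at noon'
```

Without `key`, the `ciphertext` bytes are just noise, which is exactly the point.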
What’s the Importance of Data Encryption?
Data encryption acts as the last line of defense. In case of a data breach, the impact on the business will be minimal, apart from reputational damage: the encrypted data is useless to the attacker without the encryption key. That’s why data encryption is so important for minimizing the risks of a data breach.
In addition, data encryption helps an organization comply with data privacy standards like the European GDPR standards. The GDPR standard doesn’t prescribe specific data encryption methods, but it does say that personal data needs to be encrypted and pseudonymized. Moreover, in case of a data breach that involves only encrypted data, the company doesn’t have to contact every affected user, which reduces the administrative costs and reputational damage.
Finally, data encryption helps ensure data integrity. You might wonder, how can data encryption enforce data integrity? By applying data encryption, you can ensure that only authorized users can access and modify the data. This means that an organization can have more trust in its data, improving the data quality.
Next, let’s explore four data encryption techniques.
4 Data Encryption Techniques to Secure PII Data
Many dozens of encryption algorithms exist for encrypting data. Let’s take a look at four common data encryption techniques.
1. Advanced Encryption Standard (AES)
AES is a trusted standard used by the U.S. government. The standard became popular through its low RAM usage and high-speed execution time. Also, AES works well for a wide range of hardware, which is important for the adoption of the algorithm.
To date, no practical attacks against AES have been found, which is a big reason AES is so widely adopted. For example, your WPA2 Wi-Fi network uses AES.
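A minimal sketch of AES in practice, using AES-256 in GCM mode via the `cryptography` package (an assumed library choice; GCM is a common mode that also authenticates the data against tampering):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# AES-256: a 256-bit (32-byte) key
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

# A nonce must never be reused with the same key; 12 random bytes is standard
nonce = os.urandom(12)
ciphertext = aesgcm.encrypt(nonce, b"card=4111111111111111", None)

# Decryption needs both the key and the nonce; GCM also verifies integrity
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
```

If the ciphertext is modified in transit, `decrypt` raises an exception instead of returning corrupted data.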
2. RSA
RSA is categorized as an asymmetric encryption algorithm, meaning the encryption and decryption keys differ from each other. It’s a common technique for securely transmitting data over the internet, and many PGP and GnuPG (GPG) software tools rely on RSA encryption.
RSA allows the user to pick the length of the encryption key. Typically, an RSA key consists of 1,024 or 2,048 bits. The only downside of using asymmetric encryption is a low execution speed.
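The asymmetric idea can be shown numerically with textbook RSA. The primes below are deliberately tiny and there is no padding, so this is an illustration of the math only; real deployments use 2,048-bit or larger keys with padding such as OAEP, via a library.

```python
# Textbook RSA with deliberately tiny primes -- a numeric illustration only.
p, q = 61, 53
n = p * q                    # public modulus
phi = (p - 1) * (q - 1)
e = 17                       # public exponent
d = pow(e, -1, phi)          # private exponent (modular inverse; Python 3.8+)

message = 42                             # must be smaller than n
ciphertext = pow(message, e, n)          # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n)        # decrypt with the private key (d, n)
```

Note that encryption uses only the public pair (e, n), while decryption needs the private exponent d, which is why the two keys can be distributed differently.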
3. Twofish
Twofish is a lesser-known encryption technique that uses symmetric encryption. It works with a 128-bit block size and supports key lengths of up to 256 bits, and the algorithm performs well even on less-powerful hardware.
The major advantage of Twofish is that you can freely use it because it’s open source.
4. Triple Data Encryption Standard (3DES)
Finally, 3DES was developed to replace the original Data Encryption Standard (DES). DES needed replacing because it used an encryption key length of only 56 bits, which left it vulnerable to brute-force attacks. Such attacks became practical only because of major improvements in hardware performance.
Nowadays, 3DES uses a key length of 168 bits. The standard is mainly used in the financial industry, and Microsoft also uses 3DES to secure data in software like Microsoft Outlook and OneNote.
How to Implement an Encryption Strategy
Many organizations struggle to define a process or strategy for encrypting PII data. Here’s a list of steps to follow when implementing a data encryption strategy.
Identify the Data That Needs Encryption
First, you need to know what kind of data you’re handling. Therefore, I recommend identifying the PII data that needs encryption, such as credit card details or customer information.
Pick an Encryption Tool
Now that you know which data needs encryption, it’s time to pick a tool that provides the required data security. You might need a tool that assists with encrypting data in the cloud, email data, or payment data. Make sure to consult a data security expert to help you select the right tools for your data.
Next, let’s look at encryption key management.
Explore Encryption Key Management
Encrypting the data is only half of the job; you also need to safely store the encryption key. Explore key management solutions that help you do this. A good key management solution must offer the ability to define permissions and access rights.
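As a sketch of the permission checks a key management solution should enforce, consider the toy vault below. The class and method names are illustrative assumptions, not any particular product’s API.

```python
class KeyStore:
    """Toy key vault: keys are released only to users granted access."""

    def __init__(self):
        self._keys = {}   # key_id -> key material
        self._acl = {}    # key_id -> set of authorized users

    def store(self, key_id, key_material, authorized_users):
        self._keys[key_id] = key_material
        self._acl[key_id] = set(authorized_users)

    def get_key(self, user, key_id):
        # Enforce permissions before releasing key material.
        if user not in self._acl.get(key_id, set()):
            raise PermissionError(f"{user} may not access {key_id}")
        return self._keys[key_id]

vault = KeyStore()
vault.store("pii-db-key", b"\x01" * 32, authorized_users=["etl-service"])
key = vault.get_key("etl-service", "pii-db-key")   # allowed
```

A real key management service adds auditing, key rotation, and hardware-backed storage on top of this basic access-control idea.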
Go Beyond Data Encryption
Finally, remember that data encryption alone doesn’t guarantee 100% data security. Data security is a continuous process. For example, implementing two-factor authentication might be part of your encryption strategy to increase the security of your key management tool. Other measures include intrusion-detection mechanisms and antivirus solutions.
In the end, an encryption strategy will succeed only when everyone sticks to the strategy. Therefore, it’s important to educate employees or other involved parties that work with PII data. One single mistake can be disastrous for your data encryption strategy. Always handle PII data with extreme care!
Data Encryption Done Right
Data encryption helps an organization comply with GDPR standards and other regulations. However, data security includes many more items than just data encryption. Data privacy is a continuous process that needs to be revisited now and then.
Nowadays, it’s not necessary to understand all the technical details of data encryption. Many tools provide built-in options that help an organization with data encryption. However, I recommend performing a data audit on a regular basis to measure the quality of your data security.
In short, protecting your data means protecting your company—and users.
Michiel Mulders is the author of this post. Michiel is a passionate blockchain developer who loves writing technical content. Besides that, he loves learning about marketing, UX psychology, and entrepreneurship. When he’s not writing, he’s probably enjoying a Belgian beer! | <urn:uuid:51789e5f-a1f9-49d3-9f13-b1375cec4f78> | CC-MAIN-2024-38 | https://www.dataopszone.com/how-to-encrypt-pii-data/ | 2024-09-13T22:37:24Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651540.48/warc/CC-MAIN-20240913201909-20240913231909-00858.warc.gz | en | 0.913379 | 1,604 | 3.421875 | 3 |
Data migration is the process of transferring data from one application, database, file system, or storage system to another. Businesses migrate data during system upgrades, big data analytics projects, or data centre consolidation. You could migrate data to other physical servers on-site or to the cloud. Unless you have a robust data migration strategy, you can lose data or degrade its quality during the migration, which makes the process critical.
What are the types of data migration?
Database and storage migration
This includes moving data from one on-premise database or storage system to another to improve security, compliance, and processing.
Cloud migration
This includes migrating data from an on-premise database to cloud storage or a data warehouse. It is becoming common for organisations to choose this option given its security, scalability, and flexibility. You can reduce the cost of storing the data and managing on-site servers. Data on the cloud is well managed and offers deeper insights for business benefits.
Application migration
This happens when an organisation migrates an application from an on-premise server to a cloud server. Migrating an entire application to the cloud is best for higher security, scalability, and ease of use. It also reduces the cost of maintaining the application.
What are data migration challenges?
If you do not have a data migration strategy, this process can pose several challenges, such as –
- Data loss – It is easy to lose data while migrating from one location to the other. You can prevent this by creating a backup on the original server before initiating migration.
- Data corruption – If you apply the wrong rules or validation to the source or destination, the migration process can corrupt the data. Creating a backup and setting the right validation rules prevents data corruption.
- Data compliance – When migrating to a new location, organisations need to be aware of the applicable governance legislation. For example, the GDPR framework prohibits the sharing of personal data outside the EU and penalises organisations that store excessive data serving no purpose.
- Data maintenance – After migration, organisations must maintain the data and comply with the governance principles.
What are the data migration methods?
- Big bang migration -This strategy aims to transfer all the data within a set time slot. This method entails a lower cost, less complexity, and faster completion. However, you would need to take the system offline for the duration of the big bang migration. There is also a risk of losing the data if you do not back it up.
- Trickle migration -This migration happens in phases where both the systems work together without any downtime. There is less risk of losing data; however, the migration process is complicated and needs better planning.
What are the stages of data migration?
Any data migration typically goes through the following stages –
- Data extraction – Extract the data from the current system before starting the migration process.
- Data transformation – Ensure that the metadata reflects the actual data and matches it to its new form.
- Data cleansing – Clean any corrupt data and deduplicate by running tests.
- Data validation – Test the data in a mock environment to confirm whether the migration will yield the desired results.
- Data transfer – Transfer the data to the new location and check for any errors.
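The stages above can be sketched as a single pass over a toy dataset. All records, field names, and rules below are invented for illustration.

```python
source = [
    {"id": 1, "email": "A@EXAMPLE.COM"},
    {"id": 2, "email": "b@example.com"},
    {"id": 2, "email": "b@example.com"},   # duplicate row
    {"id": 3, "email": None},              # corrupt row
]

extracted = list(source)                                    # 1. extract
transformed = [                                             # 2. transform
    {**r, "email": r["email"].lower() if r["email"] else None}
    for r in extracted
]
seen, cleansed = set(), []
for r in transformed:                                       # 3. cleanse
    if r["email"] and r["id"] not in seen:                  #    drop corrupt and duplicate rows
        seen.add(r["id"])
        cleansed.append(r)
assert all("@" in r["email"] for r in cleansed)             # 4. validate
target = []
target.extend(cleansed)                                     # 5. transfer
```

In a real migration each stage would run against the actual systems (and the validation stage against a mock environment), but the order of operations is the same.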
LLM Grounding: Innovation for Performance
In the constantly evolving landscape of artificial intelligence, Large Language Models (LLMs) stand out as pillars of innovation, driving efficiency and productivity across various industries.
Among many strategies aimed at harnessing the potential of language models, LLM grounding emerges as a pivotal concept, designed to significantly enhance the capabilities of these models by embedding industry-specific knowledge and data.
This article delves into the essence of LLM grounding, unraveling its importance, methodologies, challenges, and applications, particularly through the lens of Retrieval-Augmented Generation (RAG) and fine-tuning LLMs with entity-based data products.
What is LLM grounding?
LLM grounding is the process of enriching large language models with domain-specific information, enabling them to understand and produce responses that are not only accurate but also contextually relevant to specific industries or organizational needs.
By integrating bespoke datasets and knowledge bases, a domain-specific LLM is trained to navigate the nuances of specialized terminologies and concepts, thereby elevating its performance to meet enterprise-specific requirements.
During their initial training phase, base language models (LLMs) are exposed to a diverse and extensive dataset, predominantly derived from the internet. This process is akin to an all-encompassing curriculum, designed to teach LLMs a broad spectrum of information.
However, even with such comprehensive training, these models frequently encounter difficulties in understanding the specific nuances, such as industry and organization-specific details and reasoning. Furthermore, they often struggle with organization-specific jargon, primarily because they have not been exposed to these details beforehand.
This is where the concept of grounding plays a critical role, transforming an otherwise basic premise into a strategic advantage. Grounding an LLM involves enriching the base language model with highly relevant and specific knowledge or data to ensure it maintains context.
This process enables LLM grounding to access and illuminate previously overlooked content, thereby revealing unique aspects of a particular industry or organization.
In simpler terms, grounding in LLMs serves as an enhancement to the machine learning process of these base language models. It acquaints them with the distinctive aspects of an industry or organization that might not have been included in their original training dataset.
Why LLM Grounding is Important
The significance of LLM grounding in enhancing the capabilities of AI within enterprises cannot be overstated. This LLM strategy brings about several key benefits, directly addressing some of the core challenges associated with deploying AI technologies in specialized environments.
Countering AI Hallucination: One of the foremost advantages of grounding in LLMs is its role in mitigating “AI hallucination,” a phenomenon where base language models generate misleading or factually incorrect responses due to flawed training data or misinterpretation. Grounding equips models with a solid, context-aware foundation, significantly reducing instances of inaccurate outputs and ensuring that the AI’s responses are reliable and aligned with reality.
Enhancing Comprehension: Grounded LLMs exhibit a superior ability to grasp complex topics and the subtle nuances of language that are unique to specific industries or organizations. This improved understanding allows AI models not only to interact but also to guide users more effectively through complex inquiries, diminishing confusion and clarifying intricate issues.
Improving Precision and Efficacy: By incorporating industry-specific knowledge, LLM grounding ensures that AI models can provide more accurate and relevant solutions swiftly, thus effectively responding to user queries. This precision stems from a deep understanding of the unique challenges and contexts within specific sectors, enhancing the overall efficiency of operations.
Accelerating Problem-Solving: Another critical benefit of LLM grounding is its impact on problem-solving speeds. Grounded models, with their enriched knowledge base and understanding, are adept at quickly identifying and addressing complex issues, thereby reducing resolution times. This capability not only improves operational efficiency but can also lead to exponential gains by streamlining problem-resolution processes across the enterprise.
In essence, LLM grounding is pivotal for leveraging the full potential of AI technologies in specialized applications. By enhancing accuracy, understanding, and efficiency, LLM grounding addresses critical gaps in AI deployment, making it an indispensable strategy for businesses aiming to harness the power of artificial intelligence effectively. Now, let’s delve into how it is implemented to achieve these significant benefits.
How Does LLM Grounding Work?
LLM grounding revolutionizes how Large Language Models (LLMs) understand and interact within specific enterprise contexts by infusing them with a rich layer of domain-specific knowledge. This process involves several meticulously designed stages, each contributing to a more nuanced, accurate, and effective AI model. Here, we break down the intricacies of how LLM grounding is executed:
1. Grounding with Lexical Specificity
This foundational step involves tailoring the LLM to the specific lexical and conceptual framework of an enterprise. By exposing the model to data unique to the organization’s context and operational environment, it gains a profound understanding of the enterprise-specific language and terminology. Such data sources include:
- Enterprise-grade ontologies: Structures that encapsulate the enterprise’s lexicon, terms, and their interrelations, offering the LLM a comprehensive insight into the organizational language.
- Service Desk Tickets: These provide a wealth of problem-solving instances and solutions that enrich the model’s practical understanding of common issues within the enterprise.
- Conversation Logs and Call Transcripts: Real-world communication samples that enhance the model’s grasp of conversational nuances and enterprise-specific language patterns.
2. Grounding with Unexplored Data
To address and mitigate the biases inherent in pre-training phases, LLM grounding extends to incorporate new and diverse datasets that were not part of the initial model training. This includes:
- Industry-Specific Public Resources: Such as blogs, forums, and research documents, which introduce the model to broader perspectives and insights across various sectors.
- Enterprise-Exclusive Content: Proprietary documents, training materials, and backend system data that provide unique, company-specific knowledge.
3. Grounding with Multi-Content-Type Data
LLM grounding also entails teaching the model to interpret and process information across a myriad of formats, from text to multimedia. Understanding these diverse data types is crucial for tasks like:
- Content Comprehension: Grasping the hierarchical structure and relational context of information.
- Information Extraction: Identifying and extracting relevant details from complex datasets.
- Content Summarization: Condensing information based on structural significance, such as document headers or key spreadsheet columns.
Table: Stages of LLM grounding

| Stage | Description |
| --- | --- |
| Grounding with Lexical Specificity | Tailors the LLM to organizational lexicon and concepts, utilizing ontologies, service tickets, and communication logs. |
| Grounding with Unexplored Data | Broadens the LLM’s knowledge base with industry-specific public resources and proprietary enterprise content, addressing pre-training biases. |
| Grounding with Multi-Content-Type Data | Enhances the LLM’s ability to process and interpret various data formats, improving content comprehension, information extraction, and summarization capabilities. |
Through these stages, LLM grounding transforms base language models into highly specialized tools capable of navigating the unique linguistic and operational landscapes of specific enterprises.
By integrating a diverse range of data sources and content types, LLM grounding ensures that AI models can deliver precise, contextually relevant, and effective responses, marking a significant leap in AI’s practical application in business contexts.
LLM Grounding Technique with RAG
Retrieval-Augmented Generation (RAG) offers a sophisticated approach to LLM grounding by dynamically incorporating external data during the response generation process. This method enables LLMs to pull in the most relevant information from a vast database at runtime, ensuring that the responses are not only contextually appropriate but also up-to-date.
The integration of RAG into LLM grounding significantly enhances the model’s ability to handle complex queries across various domains, providing answers that are informed by the latest available data. However, implementing RAG presents challenges, including the need for efficient data retrieval systems and the management of data relevance and accuracy.
Despite these hurdles, RAG remains a promising avenue for elevating LLM performance, particularly in scenarios requiring real-time access to expansive knowledge bases.
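The retrieve-then-generate flow can be sketched as follows. The word-overlap scoring and the document snippets are toy assumptions; production RAG systems score relevance with vector embeddings over an index and pass the assembled prompt to an LLM.

```python
documents = [
    "Ticket T-104: VPN drops are fixed by rotating the gateway certificate.",
    "Policy: expense reports above 500 EUR need director approval.",
    "Runbook: restart the billing service with `svc restart billing`.",
]

def retrieve(query: str, docs, k: int = 1):
    # Toy relevance score: count of shared lowercase words.
    # Production RAG uses embedding similarity instead.
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query: str) -> str:
    # Augment the query with the retrieved context before generation.
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("How do I fix VPN drops")
```

The key point is that the relevant document is fetched at runtime, so the model’s answer is grounded in current enterprise data rather than in whatever was present at training time.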
Thus, RAG significantly amplifies LLM grounding’s effectiveness, paving the way for a discussion on another innovative application: entity-based data products.
LLM Grounding Technique Using Entity-Based Data Products
Grounding LLMs using entity-based data products involves integrating structured data about specific entities (such as people, places, organizations, and concepts) to improve the model’s comprehension and output. This method allows LLMs to have a nuanced understanding of entities, their attributes, and their relationships, enabling more precise and informative responses.
Entity-based data products can significantly enhance the model’s performance in tasks requiring deep domain knowledge, such as personalized content creation, targeted information retrieval, and sophisticated data analysis. The challenge lies in curating and maintaining an extensive, up-to-date entity database that accurately reflects the complexity of real-world interactions.
Additionally, integrating this structured knowledge into the inherently unstructured learning process of LLMs requires innovative approaches to model training and data integration.
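A minimal sketch of entity-based grounding: structured facts about known entities are looked up and attached to the query before it reaches the model. The entity store and its attributes here are invented for illustration.

```python
# Toy entity store: structured facts about known entities.
entities = {
    "acme corp": {"type": "customer", "tier": "enterprise", "region": "EMEA"},
    "widgetron": {"type": "product", "status": "end-of-life"},
}

def ground_query(query: str) -> str:
    """Attach structured facts for every known entity the query mentions."""
    facts = [
        f"{name}: {attrs}"
        for name, attrs in entities.items()
        if name in query.lower()
    ]
    context = "\n".join(facts) if facts else "(no known entities)"
    return f"Entity facts:\n{context}\n\nQuery: {query}"

grounded = ground_query("What support tier does Acme Corp get?")
```

Real systems replace the dictionary with a curated, continuously updated entity database and handle fuzzy matching, but the pattern of injecting entity attributes as context is the same.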
LLM Grounding Challenges
Challenges in LLM grounding primarily revolve around the complexity of integrating diverse and specialized data into a cohesive learning framework for the model.
Firstly, sourcing and curating high-quality, domain-specific data poses significant logistical hurdles, requiring extensive expertise and resources.
Additionally, ensuring the data’s relevance and updating it regularly to reflect industry changes demands continuous effort. Another critical challenge is mitigating biases inherent in the training data, which can skew the model’s outputs and lead to inaccuracies in its understanding and responses.
Moreover, the technical difficulties of adapting LLMs to efficiently process and apply grounded knowledge without compromising performance or speed are non-trivial. Balancing the depth of grounding with the model’s generalizability also presents a delicate trade-off, as excessive specialization might limit the model’s applicability across different contexts.
Addressing these challenges is essential for harnessing the full potential of LLM grounding in real-world applications.
In the evolving landscape of AI, LLM grounding stands as a pivotal innovation, steering enterprises toward exploiting AI’s potential for remarkable efficiencies. This strategy enhances base language models with deep, industry-specific knowledge, making it an indispensable tool in the dynamic field of AI.
Through enriching comprehension, delivering precise solutions, rectifying AI misconceptions, and expediting problem-solving, LLM grounding contributes significantly across various facets of enterprise operations.
The journey of implementing LLM grounding, although intricate, yields substantial benefits, showcasing its value through diverse applications in IT, HR, and procurement, among others. It empowers organizations to transcend the inherent limitations of base models, equipping them with AI capabilities that deeply understand and interact within their unique business contexts, providing swift and accurate solutions.
As we navigate the complex terrain of AI integration in business, the adoption of LLM grounding emerges as not merely beneficial but essential. It heralds a future where AI and human expertise converge, driving enterprises toward unprecedented levels of advancement.
Indeed, as we embrace LLM grounding, we are laying the groundwork for a future that promises enhanced efficiency and innovation. Book an AI demo to experience Aisera’s Enterprise LLM capabilities today! | <urn:uuid:6c572789-9b1e-49d4-8333-e1ebceeffaaf> | CC-MAIN-2024-38 | https://aisera.com/blog/llm-grounding/ | 2024-09-17T14:54:27Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651800.83/warc/CC-MAIN-20240917140525-20240917170525-00558.warc.gz | en | 0.906838 | 2,388 | 2.59375 | 3 |
Presenter: Mike Brown, CTO & Co-Founder, ISARA Corporation
Tue Feb 21, 2017
Abstract: The rise of Artificial Intelligence (AI) is helping to fuel the fire to build Quantum Computers because they will be a great tool for areas such as Machine Learning (ML).
Public Key Cryptography as we know it today ceases to be effective when the age of quantum computers begins. With practical examples and an emphasis on network technologies like VPNs, this presentation will explore:
BrightTALK free registration is required. | <urn:uuid:4d3a2db1-db0f-4633-ae8b-3991b7ee55e7> | CC-MAIN-2024-38 | https://www.isara.com/resource-center/quantum-safety-and-your-corporate-network.html | 2024-09-17T16:20:58Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651800.83/warc/CC-MAIN-20240917140525-20240917170525-00558.warc.gz | en | 0.888936 | 110 | 2.625 | 3 |
Artificial Intelligence (AI) has been at the forefront of technological advancements, shaping industries, and revolutionizing the way we interact with technology. However, a recent statement by Vice President Harris has sparked concerns among data scientists and AI experts regarding potential misunderstandings surrounding AI’s capabilities and limitations. In this article, we delve into the details of the Vice President’s concerns, explore the current state of AI understanding, and highlight the importance of clear communication to address these issues.
Vice President Harris’s Remarks on AI
Vice President Kamala Harris recently made remarks about AI during a public address, expressing concerns about its impact on job markets, privacy, and the potential for biases in decision-making algorithms. While the Vice President did not dismiss the potential benefits of AI, her statements highlight the need for a deeper understanding of AI’s nuances and its responsible deployment.
The Importance of AI Literacy
The concerns raised by Vice President Harris underscore a broader issue: the knowledge gap surrounding AI. As AI technologies become increasingly prevalent in our daily lives, it is essential for policymakers and the public to comprehend AI’s capabilities, limitations, and ethical considerations fully.
To bridge this knowledge gap, educational initiatives are crucial. From schools to professional training programs, a focus on AI literacy can empower individuals to make informed decisions about AI’s integration into society, dispelling misconceptions and promoting responsible AI development.
Bias in AI Algorithms
The Vice President’s concerns about biases in AI algorithms resonate deeply within the data science community. Biases can inadvertently be introduced into AI models when training data contains skewed or discriminatory information. Addressing this issue requires a comprehensive approach that involves diverse representation in AI development teams, ongoing auditing of AI systems, and the use of unbiased datasets.
Data scientists and AI experts are at the forefront of ethical AI development. By continuously advocating for fairness, transparency, and accountability, they can ensure that AI algorithms are not only technically sound but also uphold ethical standards.
AI in the Job Market
Vice President Harris’s concerns about AI’s impact on job markets reflect broader societal discussions about automation and workforce displacement. While AI can automate certain tasks, it also has the potential to augment human capabilities and drive new job opportunities in AI-related fields.
Data scientists play a crucial role in shaping this narrative. By highlighting AI’s capacity to streamline processes, improve decision-making, and create innovative solutions, they can position AI as an enabler of growth and economic progress.
To maximize the benefits of AI and mitigate job displacement, reskilling and upskilling initiatives are vital. Data scientists and AI experts can spearhead these efforts by designing training programs that equip individuals with the necessary skills to thrive in an AI-augmented world.
By actively engaging with policymakers, educational institutions, and industry leaders, data scientists can drive the development of adaptive curricula that align with the demands of an AI-driven job market.
AI and Data Privacy
Data privacy remains a significant concern in the era of AI. The vast amount of data required for training AI models raises questions about how personal information is collected, stored, and used. Vice President Harris’s comments highlight the need for robust data protection measures to safeguard individuals’ privacy.
Data scientists can contribute to addressing this challenge by adopting privacy-preserving AI techniques. These include federated learning, differential privacy, and other methodologies that allow AI models to learn from data without directly accessing sensitive information.
Public Perception and AI Communication
Vice President Harris’s concerns serve as a reminder of the importance of effective communication between AI experts and the general public. As data scientists grapple with complex AI concepts, translating technical jargon into accessible language is crucial for fostering a common understanding of AI’s potential and limitations.
By engaging in public discussions, data scientists can clarify misconceptions, emphasize AI’s societal benefits, and build trust in the responsible use of AI technologies. | <urn:uuid:c7a575ec-6c64-4c7a-ba2b-afeadb4997b0> | CC-MAIN-2024-38 | https://datasciconnect.com/demystifying-artificial-intelligence-a-closer-look-at-vice-president-harriss-concerns/ | 2024-09-08T01:17:56Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650926.21/warc/CC-MAIN-20240907225010-20240908015010-00558.warc.gz | en | 0.919433 | 806 | 2.921875 | 3 |
As with most journeys, the destination is as important as how you get there. The data journey is no different. After taking data through collection, organization, and analysis, we must figure out how to pull it all together into something that tells a compelling story.
Data visualization enables effective data interpretation and contributes to a data-driven culture. It transforms processed information into visual results that we can see and interpret. This critical step takes the numbers, letters, and characters that make up data and converts them into graphs, charts, illustrations, and other visual elements that tell a story.
It also allows us to communicate business data more meaningfully and provides a way to express analyses and extract insights backed by evidence-based conclusions. This process ensures that our data journey efforts inform and educate while supporting data-driven decision-making (DDDM).
This article explores data visualization and helps us to understand just how this crucial step completes the data journey.
A data visualization process that enables data-driven decision-making (DDDM)
The data visualization process is vital to enabling us to make decisions based on our data. But if the data consists of information we can’t interpret, it becomes useless and doesn’t contribute to your data culture. Even once processed and cleaned, the endless lists, numbers, and lines of text can be impossible to decipher. Visualized data, however, is clear and understandable, even at a glance. This, in turn, supports DDDM, allowing us to extract the necessary insights to make the right choices.
As we explore the steps in the data visualization process, we’ll use a real-life example to help illustrate this final phase of the data journey. This project, carried out at C&F, assisted a pharmaceutical company who asked us to help them with their data journey.
Like many companies, this one faced data challenges stemming from multiple system-to-system integrations, which led to issues in effectively transmitting and visualizing their sales data. This compromised accuracy and, most importantly, their ability to understand the data.
So, we integrated their data, bringing it from multiple data sources into a single source of trusted truth. This played a major role in allowing us to create dashboards. These dashboards present the data in a way that ties all the information together so users can comprehend and understand it.
Below is a snapshot of that dashboard. We’ll be using it to help demonstrate the process steps more effectively.
We unpack the data visualization process in seven steps.
Step 1: Determine the objective for the data visualization
Before commencing, identify what data you want in the visual. Consider why you want it there and which decisions you hope the data will influence. How does it support your data strategy? Note the purpose or function of the company’s data you’re including, and don’t forget about who will be seeing and using the visualization. These basic elements will help determine the reasons for having particular data in the visualization and ensure we avoid simply including data just to ‘dress things up’.
The example above covers all of these points. The data relates to company sales and its ability to track sales performance. Results will inform future campaign decisions for the sales team based on the current relationship between the metrics.
Step 2: Identify the metrics that inform the decision
Metrics feed data-driven decision-making. However, too many metrics may complicate finding the information most relevant to the decision question. Identifying the right metrics in larger data sets means picking the ones containing the information most pertinent to DDDM.
Only pick the metrics you can gather accurately. If specific data isn’t available or the data you have is compromised, you can choose to launch a new data collection project – for example, a survey. You can also re-evaluate the data objectives and the decision question you’re trying to answer.
The example contains two sets of data metrics. CPC (cost per click), seen in the second column, indicates how much the company pays whenever someone clicks on their ad. CPM (cost per mille) shows how much they pay for every one thousand impressions or views of the ad.
Both of the metrics in the example are accurate and relevant. They help us establish if the budget is being spent right relative to how many people see the ad. Both are important to data interpretation and DDDM on the marketing front.
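Both metrics reduce to simple ratios. The spend, click, and impression figures below are made up for illustration.

```python
def cpc(spend: float, clicks: int) -> float:
    """Cost per click: total spend divided by the number of clicks."""
    return spend / clicks

def cpm(spend: float, impressions: int) -> float:
    """Cost per mille: spend per one thousand ad views."""
    return spend / impressions * 1000

campaign_cpc = cpc(spend=250.0, clicks=500)           # 0.5 per click
campaign_cpm = cpm(spend=250.0, impressions=100_000)  # 2.5 per 1,000 views
```

Computing both from the same spend figure is what lets the dashboard show whether the budget is being spent well relative to how many people see and click the ad.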
Step 3: Develop the story you want to tell
A vague or unclear data story can confuse and frustrate users. A good data story begins with establishing its purpose and context relative to the data itself. It is also important to allow data interpretation along multiple paths, enabling the user to develop their own conclusions along the way.
The dashboard is the perfect tool for telling this story. Its job is to host the analysis results in a clear, comprehensible way. Dashboards are a tool, assisting the user and providing them a space to understand the information and to draw conclusions.
The example’s dashboard does these things pretty well. The information tells a story, taking the user through each campaign, metric, and result. Users can also draw conclusions by taking more than one path to get there.
Step 4: Select the appropriate visual
Choosing the right visuals can make or break an effective data visualization. Pick wrong, and users may get distracted, confused, or lose interest. Pick right, and the story becomes a compelling one, engaging users and making the data understandable and clear.
Clear presentation of information is fundamental for its interpretation and the decision-making process. This makes good data visualization indispensable. With a workable idea and proper tools, visualizing even very complex datasets is not a problem for an analyst who specializes in this area.
BI architects deliver data visualization through multiple means, providing flexibility and agility in presenting the data. But this also means that data should be presented clearly and understandably, without overcomplicating things. Simple graphs are only the tip of the iceberg: there’s a whole selection of visualization methods for presenting data in effective and interesting ways.
Some of the general types of data visualizations include typical charts, tables, and graphs. These are perfect for demonstrating comparisons, relationships, and trends. More complex types include geospatial visualizations, like maps and infographics. These color and shape-rich elements are ideal for use in combination with other data interpretation visuals.
Different visualizations come with their pros and cons, but each one is suited to a specific role or purpose.
Visualizations are often found together in collections on dashboards to help analyze and present data. The dashboard example above is a great illustration of this. It contains a combination of different visuals, all working together to tell the data story.
Step 5: Add relevant elements to the visual to boost data literacy
Relevant color and design elements are also important to the effectiveness of data visualization. They can be used to enhance aspects, build consistency, and optimize functionality.
Color can be used to highlight trends and patterns and to reinforce consistency, while design techniques should be responsive and optimized for different screens and devices.
Avoid using too many colors, or users may experience visual overload, losing focus on the most important elements. Poor design can also negatively impact the user experience.
These elements are crucial to ensuring the effectiveness of the data visualizations for data-driven decision-making.
The example dashboard at the top of this section demonstrates this well. Note how the design and color elements on the right of the dashboard help us to perceive the campaign efforts versus the budget achievements.
Step 6: Clearly label and create a visual hierarchy
A visual hierarchy helps users process information in order of importance and guides how they experience the journey. It ensures that the right information is taken in, in the correct order.
Here’s how to build a clear, labeled visual hierarchy:
- Begin by designing a single widget – rather than applying a theme to all of them – allowing you to focus on optimizing how the data is displayed.
- Understand the level of data – to display the most important data first, helping users differentiate between critical and secondary data.
- Pick colors for data visualization – that assist in navigation, helping users differentiate data sets.
- Build readability into dashboards – to guide users through the data, preventing them from reverting to default ‘F’ or ‘Z’ scanning patterns.
Color schemes, descriptions, typography, size, and titles may seem like minor visual elements, but they all play an important role in establishing a visual hierarchy. This hierarchy heavily impacts users’ ability to find data, as well as how much confidence they have in it.
Our visual above demonstrates an effective hierarchy. Data arrangement, flow paths, emphasis points, and correlations all guide the user through the data better. C&F analysis shows that an effective visual hierarchy can significantly reduce the time spent across your organization on viewing the data, reduce the analysis cost, and lower user frustrations.
Step 7: Review the visual for clarity
Finally, reviewing data visualizations enhances the clarity of the data presented. This allows us to fine-tune the details of the data visualization and optimize it for a better journey. Audience feedback and data usage analysis help with spotting any visualization issues and provide better insights into data interpretation, respectively. Staying on top of industry developments also ensures visualizations reflect the most recent trends, boosting confidence in the information within.
Using an iterative approach has allowed C&F to produce a data visualization on a dashboard geared around assisting with effective DDDM.
Data visualization – ‘do’s and don’ts’
- Do: Choose the visualization type (and tool) on the basis of purpose
Pick a visualization method based on the insights you want the audience to gain from the data. Depending on the data’s complexity, select the tools and techniques that cater to their preferences, expertise, and specific needs. Remember, they’re going to rely on data presented in your visuals.
- Do: Prioritize transparency and ethics
Be transparent and ethical in the data visualization. Only present logical, coherent, and clear information in the analysis. Also, demonstrate the predefined business needs, terms, and definitions. Consider using consistent business language to avoid inconsistencies.
- Don’t: Overdo visual complexity
Balance simplicity with complexity in your visualization. This allows the audience to grasp key insights quickly while still having access to detailed, accurate data. Clarity and functionality should come first here to help your team become more data-driven in their work.
- Don’t: Make data visualization-related decisions autonomously
Since visualizations represent the end-product of data transformed into insights, they must reflect a collaborative team effort. Visualizations destined for external or non-technical use should be reviewed by people not involved in the process. This feedback can then be used to iron out any issues. Incorrect chart types for the data at hand, or blanks in the visual narrative, are common problems that make it hard, if not impossible, to make data-driven decisions.
Data visualization is key when you want to become a data-driven organization
Good data visualization depends on finding the balance between form and function. While a simple graph may look slick, it may lack sufficient detail. A colorful visual design may boost appeal, but it might fail to convey the message or even overcomplicate things.
Data visualization is an iterative process that marries analysis with a story. The ultimate goal is to convey the data in a way that allows people to draw their own conclusions (i.e., understand the data) while navigating the story without getting frustrated or confused. People across the organization should feel that they have access to the data they need.
It is an often tricky art, but when done right, data visualization enables truly data-driven decision-making, assists with effective data interpretation, and completes the data journey.
Tell us a little bit about how technology plays a role in your everyday life.
Mujtaba Ahmed: Obviously, with technologies like mobile phones, we can talk easily to friends and family anywhere in the world. We can send instant text messages, unlike in the past when people had to send letters to each other, which took weeks. Now I’m able to communicate with my family and friends in India easily through social media, video calls and instant messaging. These are small things that we take for granted but are very important for my everyday life. Also, because English is not my first language, technology helps me learn, whether it’s using videos, artificial intelligence, search engines or online simulations.
Muhammad Khan: For me, it’s the cars and getting from one place to another. In the past, there was nothing — people used animals to go from one place to another. Now we just pick up our phones, book a ride and go anywhere we want to go. I came to the U.S. from Pakistan, where cybersecurity is playing a growing and crucial role in addressing challenges related to the digital safety and security of families. With more people doing digital banking and online transactions, cybersecurity has helped many families I know avoid scams and protect data.
Imagine it’s an otherwise typical day. You wake up, head into the office, pour yourself a cup of coffee, and settle in to get some critical work done before it’s due. Something is different this time. Instead of your computer booting up smoothly and showing your desktop, you’re greeted with a scary-looking window featuring an image of a large padlock. Text in that window tells you that your data has been locked down and will be permanently inaccessible after a few days unless a ransom is paid. The amount and method of payment may vary, but the threat does not: Pay up or never see your data again.
You’ve just become the latest victim of a ransomware attack.
So what is ransomware? Simply put, it’s malicious software that locks down data unless a ransom is paid, hence the name. It’s relatively new as far as malicious software goes; the first widely known modern strain hit the scene in late 2013. But it’s far from benign, as thousands of computers have fallen victim to it since that infamous debut.
Ransomware likely came about as a result of both improved education of computer users and the work of both computer security professionals and antivirus companies. Just a few years ago, it was common for users to infect their computers by clicking malicious links they received in an e-mail message. At the same time, the various antivirus companies seemed to be locked in a constant game of catch-up with virus creators as more and more malicious programs were released. Nowadays, not only are people more savvy about what they click and what they download, the antivirus companies have made significant headway against global hackers and malicious software creators. This has led to those groups seeking more novel means by which to maintain their revenue streams without turning to old techniques such as credit card fraud.
Like most types of malware, ransomware generally infects computers when a user clicks an unsafe link or downloads an unsafe program. These can come in e-mails, torrents, botnets, or other forms of transmission. Unlike other types of malware, though, the damage ransomware does isn’t undone when the computer’s owner flashes the BIOS, wipes the drive, or attempts to return to a prior restore point; the files remain encrypted regardless. The program locks down user files and the ransom demand is made, while a unique decryption key is created and stored on the hackers’ servers. If the ransom is not paid in time, or if any attempt to alter the program directly is made, the decryption key is permanently deleted, rendering all encrypted files inaccessible. If the ransom is paid in time, the decryption key is transferred and the files are decrypted. The ransom is usually demanded either in a cryptocurrency like Bitcoin or sent through a service like MoneyGram and loaded onto untraceable prepaid credit cards. Because paying up leads to the files being released in most cases, desperate people often simply pay the ransom instead of looking into alternative options. This emboldens the hackers, encouraging them to find more and more unscrupulous ways to make money.
Ransomware is more than merely a nuisance. While the infected computer can still be used, the risk of losing valuable data can impact productivity. With that in mind, there are ways to counteract or avoid a ransomware attack. The best defense is to remain vigilant. Like any malicious software, ransomware relies on social engineering to get users to click links and download the program. If users can avoid clicking the link that promises to lead to an outrageous video, or download the ‘free system scan’ program in the ad on a random website, they can avoid inadvertently allowing ransomware and other malware onto their computers. Also, keeping antivirus and firewall software up to date will help keep a computer from being remotely taken over and used as part of a botnet.
The other defense is to maintain consistent computer back-ups. Since ransomware encrypts data on the computer, hackers count on those files being the only ones of their kind in existence. If there is a current data backup, then that takes a large advantage away from the hackers. After all, why would someone pay a ransom to have their files decrypted if they have a recent backup of those same files that they can copy back onto a computer? Computer users should not only maintain a daily backup of their files, but they should also remember to disconnect the backup hard drive after each use. If it’s not feasible to keep a physical hard drive around for backups, cloud storage can also be utilized in order to keep files safe and secure.
If the worst does happen and a computer is infected by ransomware, the important thing is not to panic. Many antivirus companies now have fixes available to combat the most common types of ransomware. The fix can be downloaded from the company website and put on a USB flash drive, which can be plugged in when the ransom screen appears. Unfortunately, new types of ransomware are constantly being developed and released, so the ransom may have to be paid if the computer is infected with one of the newer programs.
When it first came onto the scene in 2013, ransomware caused a significant amount of panic due to its novelty and method of attack. Since then, experts have made significant strides in combating this type of malicious software. Small business owners can be especially vulnerable to ransomware attacks, as they may lack the funds to institute strong security measures. However, as long as data is kept safe and protected and users remain vigilant, ransomware can be defeated before it gains a foothold in a computer.
My passion is to make my mark on the world in a positive and lasting way. I want to set an example for my son that his father can compete with integrity in today’s world, be very successful, and leave the world a better place for him.
Combining my technical/business-based education with a long career steadily progressing up the corporate ladder, I decided to build a company that held true to my values. So, I founded and designed the next generation of IT support firm: CTECH Consulting Group Inc. We are a completely automated, cloud-based IT company designed to compete against any other IT firm without the overhead. We promote a lifestyle to all our staff where they can work anywhere, at any time, access any information on any device that is relevant to their job, and collaborate with anyone they want to.
Definition: Hash Rate Efficiency
Hash rate efficiency refers to the measure of performance and effectiveness of cryptocurrency mining hardware. It is defined as the number of hashes a mining device can produce per unit of energy consumed, typically expressed in hashes per joule (H/J). This metric is crucial in determining the profitability and environmental impact of mining operations.
Understanding Hash Rate Efficiency
Hash rate efficiency is a critical parameter in cryptocurrency mining, especially for networks like Bitcoin (and formerly Ethereum) that rely on proof-of-work (PoW) consensus mechanisms. A higher hash rate efficiency means that a mining device can produce more hashes (attempts to solve the cryptographic puzzle) for each unit of energy it consumes. This not only enhances the profitability of mining operations by reducing electricity costs but also mitigates the environmental impact by reducing overall energy consumption.
Importance of Hash Rate Efficiency
- Cost Reduction: Energy consumption is a significant cost in cryptocurrency mining. Improving hash rate efficiency means more hashes per unit of energy, thus lowering operational costs.
- Environmental Impact: High energy consumption in mining has raised concerns about its environmental impact. Efficient mining hardware helps reduce the carbon footprint of mining operations.
- Profitability: Efficient miners can maximize their returns on investment by reducing electricity expenses while maintaining or increasing their mining output.
- Competitive Advantage: As mining difficulty increases, efficient hardware ensures that miners remain competitive in the race to solve blocks and earn rewards.
Factors Influencing Hash Rate Efficiency
Several factors affect the hash rate efficiency of mining hardware:
- Hardware Design: Advanced designs and materials can improve energy efficiency.
- Cooling Solutions: Effective cooling mechanisms prevent overheating and maintain optimal performance.
- Firmware and Software: Optimized software and firmware can enhance the efficiency of mining operations by reducing computational overhead.
- Operational Practices: Proper maintenance and operational practices ensure that mining hardware runs at peak efficiency.
Measuring Hash Rate Efficiency
Hash rate efficiency is measured in hashes per joule (H/J). This metric helps in comparing the performance of different mining hardware. For instance, if Miner A has a hash rate efficiency of 50 H/J and Miner B has 70 H/J, Miner B is more efficient as it produces more hashes for the same amount of energy consumed.
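The comparison above is simple division: since one watt is one joule per second, dividing hashes per second by power draw in watts yields hashes per joule. A minimal sketch, using invented hardware figures rather than real product specifications:

```python
def hash_rate_efficiency(hashes_per_second, watts):
    """Hashes per joule: 1 W = 1 J/s, so H/s divided by W gives H/J."""
    return hashes_per_second / watts

# Invented figures for two hypothetical rigs (not real product specs).
miner_a = hash_rate_efficiency(100e12, 2000)  # 100 TH/s at 2,000 W
miner_b = hash_rate_efficiency(140e12, 2000)  # 140 TH/s at 2,000 W

# Miner B yields more hashes per joule, so it is the more efficient unit.
more_efficient = "Miner B" if miner_b > miner_a else "Miner A"
```

At the same power draw, the rig with the higher hash rate wins; the metric matters most when comparing devices with different power draws.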
Improving Hash Rate Efficiency
To improve hash rate efficiency, miners can focus on the following strategies:
- Upgrade Hardware: Investing in modern, more efficient mining rigs.
- Optimize Settings: Fine-tuning software settings to maximize performance.
- Enhance Cooling: Implementing advanced cooling systems to reduce energy wastage due to overheating.
- Utilize Renewable Energy: Using renewable energy sources to power mining operations can improve overall efficiency and reduce environmental impact.
Benefits of Hash Rate Efficiency
- Cost Savings: Lower electricity bills directly enhance profitability.
- Sustainability: Reduces the environmental footprint of mining operations.
- Increased Longevity: Efficient hardware often has a longer operational lifespan due to reduced strain and better heat management.
- Regulatory Compliance: In regions with strict environmental regulations, high efficiency can help miners comply with legal requirements.
Uses of Hash Rate Efficiency
Hash rate efficiency is primarily used to:
- Evaluate and compare different mining hardware.
- Optimize mining operations to enhance profitability.
- Develop strategies to reduce the environmental impact of mining.
Features of Efficient Mining Hardware
- High Hash Rate: Capable of producing a large number of hashes per second.
- Low Power Consumption: Uses minimal energy for maximum output.
- Advanced Cooling: Efficient cooling systems to prevent overheating.
- Durability: Built to withstand the rigors of continuous operation.
- Optimized Firmware: Software that enhances hardware performance and efficiency.
How to Choose Efficient Mining Hardware
When selecting mining hardware, consider the following:
- Hash Rate: Ensure the device offers a high hash rate.
- Power Consumption: Check the power consumption specifications and aim for low energy usage.
- Cost: Balance the initial investment against the long-term savings from higher efficiency.
- Reputation: Choose hardware from reputable manufacturers known for quality and efficiency.
- Community Reviews: Look for reviews and feedback from other miners regarding the performance and efficiency of the hardware.
Frequently Asked Questions Related to Hash Rate Efficiency
What is hash rate efficiency?
Hash rate efficiency measures the performance and effectiveness of cryptocurrency mining hardware, defined as the ratio of the number of hashes produced per unit of energy consumed, typically represented in hashes per joule (H/J).
Why is hash rate efficiency important in cryptocurrency mining?
Hash rate efficiency is crucial as it helps reduce operational costs, mitigate environmental impact, enhance profitability, and maintain a competitive advantage in the mining industry.
How is hash rate efficiency measured?
Hash rate efficiency is measured in hashes per joule (H/J), indicating the number of hashes a mining device can produce per unit of energy consumed. Higher values represent better efficiency.
What factors influence hash rate efficiency?
Factors influencing hash rate efficiency include hardware design, cooling solutions, firmware and software optimizations, and proper operational practices.
How can I improve the hash rate efficiency of my mining operations?
To improve hash rate efficiency, consider upgrading hardware, optimizing software settings, enhancing cooling systems, and utilizing renewable energy sources.
Intel Xeon Phi combines the parallel processing power of a many-core accelerator with the programming ease of CPUs. Phi has powered many supercomputers; in the June 2018 Top500 list, 19 supercomputers used Phi as the main processing unit. This paper surveys works that study the architecture of Phi and use it as an accelerator for various applications. It critically examines the performance bottlenecks and optimization strategies for Phi. For example, the main motivation and justification for the development of Phi was ease of programming. Certainly, programming Phi takes much less effort than that required for accelerators such as FPGAs, and maybe even GPUs. However, achieving peak performance on Phi is not easy, as it requires a myriad of optimizations and an understanding of the microarchitecture. The paper also reviews works that compare Phi with CPUs and GPUs, or execute collaboratively across them. The insights and lessons from Phi will be useful for designers of next-generation computing systems.
Sparsh Mittal is a faculty member in the Department of Electronics and Communications Engineering at the Indian Institute of Technology Roorkee. He does research in VLSI and electronics; his current projects cover computer architecture, hardware for deep learning, and non-volatile memories.
September 8, 2020
The system accurately identifies lesions associated with pre-eclampsia
Researchers have developed a machine learning algorithm that analyzes placenta samples to identify potential health risks in future pregnancies.
The team from Carnegie Mellon University (CMU) says the system can help doctors identify conditions like pre-eclampsia, which can be fatal to both mother and baby.
This condition can often be recognized by the presence of lesions – called decidual vasculopathy – on slides of a placenta’s blood vessels. But since the examination required to spot these lesions is time-consuming and requires highly specialized skills, it is rarely carried out. CMU’s machine learning technique makes the process more accessible, and in internal tests, the algorithm identified lesions more accurately than professional pathologists.
Knowing where to look
“Pathologists train for years to be able to find disease in these images, but there are so many pregnancies going through the hospital system that they don’t have time to inspect every placenta,” researcher Daniel Clymer said in a statement. “Our algorithm helps pathologists know which images they should focus on by scanning an image, locating blood vessels, and finding patterns of the blood vessels that identify decidual vasculopathy.”
The team trained the algorithm to spot lesions by feeding it images of placenta samples. After determining the relative health of each blood vessel, the system then accounts for additional factors, such as the term of the pregnancy and any outstanding health concerns the mother might have. If it detects any abnormalities, the sample will be marked as diseased.
The technology isn’t expected to substitute the input of medical professionals, though, but rather make it easier for doctors to understand where best to direct their time and resources. “This algorithm isn’t going to replace a pathologist anytime soon,” Clymer said. “The goal here is that this type of algorithm might be able to help speed up the process by flagging regions of the image where the pathologist should take a closer look.”
The team also hopes the system will help bring down the often-prohibitive costs associated with placenta examinations, and ultimately reduce complications from pre-eclampsia, which affects two to eight percent of pregnancies.
The threats to sensitive data keep increasing, and organizations are struggling to stay secure. With the government considering new cybersecurity requirements for critical infrastructure, many organizations are reviewing their data loss prevention policies and are looking for ways to improve their security stance. This article reviews standard data loss prevention methods, their shortcomings, and how adding always-on email encryption to your toolbox can help futureproof your communications.
What is Email Data Loss Prevention?
Data loss prevention, also known as DLP, ensures that sensitive data is not lost, misused, or accessed by unauthorized users. DLP software allows users to classify business-critical data and take specific actions when those data are present in email messages. If sensitive data is identified, data loss prevention tools take some action to prevent users from accidentally or maliciously sharing data that could put the organization at risk.
How does DLP Technology work?
There are two main types of data loss prevention tools available:
- Rules-based DLP
- AI and Machine Learning based DLP
This article primarily discusses rules-based DLP. Briefly, though: DLP tools that use AI or machine learning are trained on an extensive data set to identify when email messages sent by your employees contain sensitive information.
In rules-based DLP software, administrators create rules that trigger the data loss prevention technology to take a particular action. Some examples of rules include:
- Encrypting emails that contain social security numbers.
- Not sending emails that contain health data (as identified by the organization).
- Flagging emails that include specific keywords like “contract,” “financial report,” or “confidential information.”
Once the rules are in place, the DLP software will scan every outgoing email message to search for data that meets the criteria. When the DLP detects sensitive data, it takes an action that the administrator also determines. Some common protective actions include:
- Not sending the email at all.
- Adding a warning label or sending a notice to the email sender.
- Encrypting the email and sending it to a web portal.
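A rules-based scanner of this kind can be sketched in a few lines: each rule pairs a pattern for sensitive data with the protective action it triggers. The patterns and actions below are simplified illustrations, not a production rule set:

```python
import re

# Illustrative rules: each maps a sensitive-data pattern to an action.
RULES = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "encrypt"),                 # SSN-like numbers
    (re.compile(r"\b(confidential|financial report)\b", re.I), "flag"),  # keywords
]

def scan_outgoing(message):
    """Return the actions triggered by an outgoing message, in rule order."""
    return [action for pattern, action in RULES if pattern.search(message)]

actions = scan_outgoing("Attaching the financial report; my SSN is 123-45-6789.")
# -> ["encrypt", "flag"]
```

The sketch also shows the fragility discussed below: a number written as "123 45 6789" slips past the SSN pattern untouched, which is exactly how rule-based DLP misses data.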
Why is DLP technology insufficient for security and compliance?
While DLP technology may capture most sensitive data, it is not infallible. In industries like healthcare and finance, even one mistake could lead to a breach with severe financial penalties.
Looking at how most data loss prevention software works, it’s easy to see how it can fail. Rule-based DLP requires administrators to thoroughly document and catalog every possible variation of the keywords and number formats that could indicate the presence of sensitive data. Even one typo could throw off DLP software and cause data to be sent without protection. Sensitive healthcare and financial data do not always fall cleanly into pre-determined categories, and there are always exceptions to rules.
Conversely, false positives from extremely strict rule-making can result in delayed business communications and inefficiency. If DLP rules are too restrictive and too many messages are not sent or locked behind a portal, employees may use less secure channels to get around DLP technology.
How to Close Data Loss Prevention Gaps with Always-On Email Encryption
Highly regulated industries should consider sending all messages with a baseline of TLS encryption instead of relying on DLP technology to trigger it. TLS encryption is secure enough to meet most compliance requirements and has added usability benefits. TLS-encrypted messages appear just like regular, unencrypted emails in the recipient’s inbox, making them easy to read and respond to but without the risk of interception or eavesdropping. When all messages are automatically encrypted, you can worry less about DLP failure and data leakage.
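As a rough sketch of "TLS by default" on the sending side, the function below refuses to transmit a message unless the SMTP connection is first upgraded with STARTTLS; if the server cannot negotiate TLS, smtplib raises an exception and nothing is sent in the clear. The host, addresses, and subject line are placeholders, and a production mail relay would handle this policy itself:

```python
import smtplib
import ssl
from email.message import EmailMessage

def send_with_tls(host, port, sender, recipient, body):
    """Send a message only over a TLS-upgraded SMTP connection.

    If the server does not support STARTTLS, smtplib raises an
    exception and the message is never sent unencrypted.
    """
    msg = EmailMessage()
    msg["From"], msg["To"] = sender, recipient
    msg["Subject"] = "Status update"  # placeholder subject
    msg.set_content(body)

    context = ssl.create_default_context()
    with smtplib.SMTP(host, port) as server:
        server.starttls(context=context)  # fail closed if TLS is unavailable
        server.send_message(msg)
```

Making encryption the default path, rather than a DLP-triggered exception, is what removes the single point of failure.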
DLP scanning can also trigger web portal pick-up encryption for more sensitive messages. Sending highly confidential information like financial statements, medical records, and board meeting minutes requires added security that can be triggered by DLP technology. Reducing the number of rules required makes data loss prevention tools easier for administrators to manage. Also, removing encryption choices from employees improves their productivity and reduces risk.
Message encryption may only be optional for a little while longer. In 2022, CISA issued Cross-Sector Cybersecurity Performance Goals, which recommended TLS encryption as part of prioritized cybersecurity practices that critical infrastructure owners and operators can implement to reduce the likelihood and impact of known risks and adversary techniques. Prepare for the future and protect your sensitive data by using LuxSci’s easy-to-use email encryption tools today.
Several weeks later, the National Security Agency warned that BlueKeep could be “exploited and weaponized by malware.” It was a similarly unusual announcement.
What Is BlueKeep?
BlueKeep was discovered in Microsoft’s Remote Desktop Protocol (RDP), a feature that is used regularly to allow users to control computers being used remotely. BlueKeep is potentially “wormable,” meaning it could be exploited to run code on every machine connected to an RDP, without needing a username or password. A self-propagating worm could launch on almost 1 million machines, according to Microsoft estimates.
What Systems Can the BlueKeep Vulnerability Affect?
Microsoft’s issued patches for older systems, including those from Windows XP to Server 2008 R2. Included in those patches are those for operating systems such as Windows XP and Windows 7 that are no longer supported or for which support will cease in January 2020.
Why Is the BlueKeep Virus So Dangerous?
BlueKeep is so risky because of the ease in which a launched attack could allow control for machines throughout an organization quickly. “It is more of a mobile virus. It’s the kind of virus that once it infects a network it worms its way through the entire network before it actually starts its destructive behavior,” explained Luis Alvarez, founder of the Alvarez Technology Group, a leading IT security and managed services company, in a recent video.
Why Did the NSA Issue an Alert?
The NSA alert was issued to protect against a repeat of the WannaCry virus of 2017 that disrupted millions of computers worldwide. The BlueKeep vulnerability is very similar to the EternalBlue vulnerability that allowed WannaCry to wreak havoc.
What Can BlueKeep Do To Infected Computers?
Turn them into bricks, which means your computer cannot be fixed through normal means. The computer won’t power on or the OS will not launch. In some cases, it’s impossible to install a new operating system,
However, that’s just part of the disruption BlueKeep can cause. As Alvarez explains, BlueKeep can act like typical ransomware and encrypt your data files. It has also been known to encrypt system files and DLL files.
“It can literally brick your PC and force you to basically wipe everything out and start from scratch,” Alvarez said.
What Can Users Do to Combat the Virus?
Microsoft and the NSA are urging users to install the patch. Microsoft is also encouraging users to its latest operating system, Windows 10. That OS and Windows 8 are not affected by the BlueKeep vulnerability.
Alvarez Technology Group helps clients throughout the Salinas area assess and improve their network security. Our comprehensive security services protect your data and hardware from vulnerability like BlueKeep with constant monitoring and automated patching. To schedule an initial IT security consultation, contact one of our IT security experts today. | <urn:uuid:f09fdf00-8764-4e28-9830-22772c0cae45> | CC-MAIN-2024-38 | https://www.alvareztg.com/computer-threat/ | 2024-09-17T19:01:07Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651829.57/warc/CC-MAIN-20240917172631-20240917202631-00658.warc.gz | en | 0.952896 | 597 | 2.796875 | 3 |
This article will discuss data control, which involves managing and protecting valuable information. We will cover the goals, strategies, difficulties, and actions organizations can take to safeguard their data.
Fundamentals of Data Control
Data control is a crucial aspect of a company’s overall data management strategy. It contains rules, processes, and tools that control how to access and use data and keep it safe.
Proper data control measures ensure that data remains confidential. It also maintains the integrity of the data. Authorized users can easily access the data when needed.
By establishing a robust data control framework, organizations can mitigate the risks associated with data breaches, unauthorized access, and data corruption.
The CIA Triad: The Cornerstone of Data Security
At the heart of data control lies the CIA triad, which consists of three essential principles: confidentiality, integrity, and availability. Let’s explore each of these principles in detail:
- Confidentiality: it ensures that data is accessible only to authorized individuals who have the necessary permissions. Organizations achieve this by implementing access controls, such as role-based access, least privilege principle, and encryption. By restricting access to sensitive data, organizations can prevent unauthorized disclosure and maintain the privacy of their information assets.
- Integrity: Data integrity refers to the accuracy, consistency, and reliability of data throughout its lifecycle. It ensures that data remains unaltered and free from unauthorized redaction or corruption. To maintain data integrity, organizations employ techniques such as data validation, error checking, and data backup and recovery. By preserving the integrity of data, organizations can make informed decisions based on reliable information.
- Availability: Availability ensures that data and systems are readily accessible to authorized users when needed. Organizations must implement measures to prevent downtime, system failures, and data loss. This includes implementing redundancy, failover mechanisms, and disaster recovery plans. By ensuring the availability of data, organizations can maintain business continuity and avoid disruptions to their operations.
Real-World Examples of Data Control in Action
To better understand how data control works in practice, let’s consider a few real-world examples:
- Data Access Management: Imagine a healthcare organization that stores sensitive patient information. To comply with regulations and protect patient privacy, the organization implements data access controls. Each healthcare professional receives specific access rights based on their role and responsibilities.
- Data Encryption: An e-commerce company handles sensitive customer information, including credit card details and personal addresses. To protect this data from potential breaches, the company employs data encryption. The website encrypts the customer’s information when they enter it, before transmitting and storing it.
- Data Backup and Recovery: A financial institution depends on its databases to store customer accounts, transactions, and financial records. The institution has a strong data backup and recovery system in place to protect critical data.
Doctors may have full access to patient records, while nurses may have limited access based on their specific duties. By controlling who can access what data, the organization ensures privacy and prevents unauthorized access.
Even if a hacker manages to intercept the data, they would only see scrambled, unreadable content. Encryption helps maintain the privacy of customer data and builds trust in the company’s security measures.
We take regular backups and store them in secure off-site locations. If the system breaks or data gets messed up, the institution can easily bring back the data from the backups. This helps to reduce downtime and make sure the business keeps running smoothly.
Overcoming Data Control Challenges
While data control is essential for protecting information assets, it is not without its challenges. Let’s explore some common challenges organizations face when implementing data control measures:
- Data complexity increases as organizations collect and store data from various sources. This makes it harder to manage and control the data. Dealing with structured, semi-structured, and unstructured data, along with the sheer volume of data, can be overwhelming. Organizations need to implement scalable data control solutions that can handle the complexity and growth of their data landscape.
- Finding the right balance between giving authorized users access to data and keeping that data secure is important. Too many restrictions on access can slow down work and decision-making. Giving too much access can result in data breaches and unauthorized entry. Finding the right balance requires careful planning, regular audits, and continuous monitoring of access rights.
- Organizations may struggle to follow regulations like GDPR and CCPA. This is because the number of data protection laws is constantly growing.
- Compliance with these laws can be difficult for organizations. Failure to comply can result in hefty fines and damage to their image. Companies need to keep current with new rules and put in place data controls that follow these regulations.
- Insider threats are just as important as external threats. They can pose a significant risk to data security. Many people focus on hackers and criminals, but insiders can also be a threat. Important to remember this when considering security measures.
- Employees, contractors, or third-party vendors with access to sensitive data can compromise data security. Organizations must implement controls to monitor and detect insider threats, such as user behavior analytics and access logging.
Best Practices for Effective Data Control
To address the challenges and ensure effective data control, organizations should follow these best practices:
- Make a detailed data control plan. Start by assessing the data your organization uses. Understand the various types of data you manage. Define clear policies and procedures for data access, usage, and protection.
- Implement Access Controls: Implement a robust access control system that follows the principle of least privilege. Grant users access only to the data they need to perform their tasks. Regularly review and update access rights to ensure they align with job roles and responsibilities. Implement strong login mechanisms, such as multi-factor authentication, to prevent unauthorized access.
- Encrypt Sensitive Data: Encrypt sensitive data both at rest and in transit. Use strong encryption algorithms and securely manage encryption keys. Encryption helps safeguard data from unauthorized access by making it unreadable without the correct decryption key, even if it is intercepted.
- Conduct Regular Data Audits: Regularly audit your data control measures to identify gaps and weaknesses. Conduct vulnerability assessments and penetration testing to evaluate the effectiveness of your security controls. Use tools to find and sort important data, making sure to protect each type of data with the right controls.
- Educate your workforce on data security by investing in employee training and awareness programs. Teach them how to identify and report potential security incidents, such as phishing attempts or suspicious activities. Foster a culture of security awareness and encourage employees to be proactive in protecting the data.
- Implement Incident Response and Recovery Plans: Despite best efforts, data breaches and security incidents can still occur. Create a detailed incident response plan that clearly outlines the necessary steps to take in case of a data breach. Regularly test and update your incident response plan to ensure its effectiveness. Implement data backup and recovery mechanisms to minimize the impact of data loss or corruption.
By understanding the fundamentals of data control, implementing robust security measures, and following best practices, organizations can safeguard their valuable information assets.
Organizations must update their data control strategies to keep up with new threats and changing regulations. This will help them stay ahead of the curve. Companies must adapt to protect their data effectively. By staying proactive, organizations can ensure they are prepared for any challenges that may arise.
Organizations can protect their reputation by making data control a critical part of their data management strategy. This will help them maintain customer trust and drive sustainable growth in the digital age.
Remember, data is a valuable asset that demands protection. By learning how to control your data, you can make the most of it while keeping it safe and accessible.
Take proactive steps to safeguard your company’s information assets and build a secure foundation for the future. | <urn:uuid:1d7ab337-9bba-4212-b3bd-3cd173783860> | CC-MAIN-2024-38 | https://www.datasunrise.com/knowledge-center/data-control/ | 2024-09-19T00:18:22Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651944.55/warc/CC-MAIN-20240918233405-20240919023405-00558.warc.gz | en | 0.921565 | 1,603 | 3.578125 | 4 |
Industry dynamics around sustainability are constantly evolving, which makes them tough to navigate, with few guidelines, little oversight and conflicting opinions on the “right approach” to climate action. As a global technology company with decades of sustainability leadership, Dell Technologies has a strong point of view informed by data and science, and we’re working with others to chart the path forward.
We believe that data analysis and collaboration are key to climate action. With the largest and most diverse solutions portfolio in the industry, we know each product presents unique challenges, from materials to supply chain management. Capturing complete and accurate data and understanding how to use it requires a great deal of collaboration in and outside our industry. That’s why we collaborate with supply chain partners and customers and participate in industry working groups like the Circular Electronics Partnership (CEP) to find solutions and best practices.
One best practice critical to reducing our impact on the environment is to calculate the product carbon footprint (PCF) for each of our products, so we can identify opportunities to reduce emissions. A PCF is a way of calculating the total greenhouse gas (GHG) emissions generated during the entire lifecycle of a product, providing information about its impact on the environment. The problem – there isn’t a PCF calculation standard, making it impossible for customers to do an “apples-to-apples” comparison of competitive products. Today, Dell and others in the industry use a cradle-to-grave assessment tool called the Product Attribute to Impact Algorithm (PAIA), which calculates emissions related to four key lifecycle stages of a product: manufacturing, use (i.e., energy), transportation and end of life over a period of four years. Even though Dell and other manufacturers and suppliers use PAIA to estimate our PCF impact, there are challenges.
Databases often contain outdated or industry average data, which may not accurately represent individual suppliers’ manufacturing locations and energy sources. The PAIA approach is also limited in that it can’t account for the myriad configuration options available, making it difficult to calculate an accurate PCF. PAIA’s scope is also currently restricted to specific electronics like laptops, desktops and servers, leaving other products unaccounted for, such as peripherals and other equipment.
As more enterprises demand accurate environmental impact information, it’s crucial that the IT industry collaborates to agree on a standard that works. We have a long history of partnering with others to drive solutions. Our participation in innovative consortiums, such as NextWave Plastics, and explorations like Concept Luna, have already demonstrated the possibilities collaboration and knowledge-sharing inside and beyond our industry can bring. And we’re not stopping there.
To improve, we must be able to measure. We use data to inform all aspects of our business, and this extends to having deep understanding of the impact of our products and sharing those insights. We will continue to work with PAIA and other organizations seeking to find the best possible solutions to accurately capture data. We’re also taking steps alongside our industry peers to explore pathways to establish a standard for PCF measurement that works for everyone – most importantly meeting the needs of our customers and partners who share our focus on achieving a net zero society.
Sustainability is a journey that requires innovation, perseverance and collaboration. We’re incredibly proud of the work our teams do to embed sustainability across our product portfolio. From using recycled, renewable and lower emissions materials, to designing for energy efficiency and even recovering devices when they are no longer in use, we’re working to reduce emissions and waste at every stage for ourselves and our customers. We’ll continue our efforts to remove complexity around sustainability data so we can make it easier for our customers to understand this data and share our progress along the way.
For more information visit our Product Carbon Footprints page. | <urn:uuid:d2636921-8b2a-4d0a-a861-ef89d6f7d27f> | CC-MAIN-2024-38 | https://www.dell.com/en-sg/blog/data-driven-sustainability-dell-s-commitment-to-industry-standards/ | 2024-09-19T01:01:53Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651944.55/warc/CC-MAIN-20240918233405-20240919023405-00558.warc.gz | en | 0.928204 | 800 | 2.5625 | 3 |
Question: What should small business owners and managers know about cybersecurity?
Small businesses face many of the same cybersecurity threats as larger businesses, but they may not have the same resources to deal with them. It's important for small businesses to understand the risks and take steps to protect themselves. Here are some key things small businesses should know about protecting their systems:
Your business is at risk. Small businesses are often targeted by cybercriminals because they may have less secure systems and may be easier to compromise.
Protect your data. Make sure you have strong passwords and use two-factor authentication where possible. Regularly back up your data to prevent data loss in case of an attack.
Train your employees. Educate your employees about the importance of cybersecurity and how to identify and prevent potential threats.
Secure your network. Use a firewall to protect your network and secure your wireless network with a strong password.
Use security software. Use antivirus software and keep it up to date to protect against malware and other threats.
Be cautious when opening emails and clicking links. Don't open emails from unknown senders and be wary of links, even if they seem legitimate.
By following these basic steps, small businesses can significantly reduce the risk of a cybersecurity breach and protect their sensitive data.
What if they are not very computer-literate?
If your employees are not very computer literate, it's important to provide them with the necessary training and resources to help them understand the importance of cybersecurity and how to identify and prevent potential threats. This might include training sessions on topics such as how to create strong passwords, how to spot phishing scams, and how to safely browse the internet.
It's also a good idea to have clear policies and procedures in place to help guide employees in their use of technology and to protect your business from cyber threats. For example, you might have a policy that requires employees to use strong passwords and to change them regularly, or that prohibits the use of personal devices for work purposes without prior approval.
Finally, consider hiring a cybersecurity expert or consulting with a specialist firm to help you assess your business's vulnerabilities and implement measures to protect against potential threats.
How can a cybersecurity expert help me?
A cybersecurity expert can help you assess your business's vulnerabilities and implement measures to protect against potential threats. They can also provide you with guidance on best practices for keeping your systems and data secure.
Some specific ways a cybersecurity expert can help your business include:
- Conducting a security assessment: A cybersecurity expert can analyze your business's systems, networks, and data to identify vulnerabilities and recommend steps to address them.
- Implementing security measures: Based on the results of the security assessment, an expert can help you implement measures such as firewalls, antivirus software, and security protocols to protect your systems and data.
- Providing training and guidance: A cybersecurity expert can educate your employees on how to identify and prevent potential threats, such as phishing scams and malware, and provide guidance on best practices for using technology safely.
- Responding to breaches: If your business experiences a cybersecurity breach, a cybersecurity expert can help you assess the extent of the damage and recommend steps to contain the breach and prevent future incidents.
Overall, working with a cybersecurity expert can help you ensure that your business's systems and data are secure and that you have the knowledge and resources to protect against potential threats. | <urn:uuid:ddf3ff44-9b5e-4956-a959-787c58ae87b0> | CC-MAIN-2024-38 | https://www.infostream.cc/2022/12/what-should-small-business-owners-know-about-cybersecurity/ | 2024-09-08T03:38:08Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650958.30/warc/CC-MAIN-20240908020844-20240908050844-00658.warc.gz | en | 0.95253 | 694 | 2.5625 | 3 |
Adversarial Machine Learning, New Cybersecurity Threats
Through the development and implementation of machine learning algorithms, software engineers have been able to reach new heights as it pertains to artificial intelligence and technology. However, as is the case with any other method or technique that is used within a particular business or industry, the same processes that allow software engineers to create machine learning algorithms also enable hackers and cybercriminals to take advantage of such processes for nefarious purposes. To this point, adversarial machine learning is a technique that can be used to fool machine learning algorithms with deceptive data, in what has become a new form of cyberattack in recent years. Adversarial attacks are subtle manipulations that force machine learning systems to fail in unexpected ways.
To provide a real-world example of such attacks, experimental research that was conducted by vehicle manufacturer and technology company Tesla in 2018 illustrated that simply placing a few small stickers on the ground of a busy intersection could result in self-driving cars making abnormal and unexpected mistakes that would have otherwise not occurred. As such vehicles are trained in accordance with datasets pertaining to objects and information one would expect to see when driving a car, these stickers represented information that was outside of the scope of the vehicle’s training. As such, while Tesla can continue to expand the datasets they use to train their self-driving cars, machine learning algorithms that are used in other contexts within society are at greater risk of such attacks.
How does adversarial machine learning work?
Just as there are many different machine learning algorithms that can be used to create technological solutions, there are also a number of approaches that cybercriminals can implore when looking to launch an adversarial attack on a particular AI system. However, irrespective of the specific method or technique that is employed, adversarial machine learning attacks generally function on the basis of attempting to fool machine learning algorithms into making incorrect or detrimental decisions. As the entire premise of artificial intelligence is the creation of machines and systems that can function without the need for human interference, such attacks present an enormous challenge, as adversarial machine learning algorithms could be viewed as the technological equivalent of poisoning a public water system. Once the system has been poisoned, all of the water within would effectively become contaminated.
To provide an example of the methods that can be used to propel an adversarial machine learning attack, the FastGradient Sign method or FGSM can be used to fool image classification systems that are predicated on machine learning algorithms. As image recognition and classification systems function in accordance with the identification of specific features within images, such as the pixels within a particular photo, the FastGradient Sign method can be used to slightly alter the pixels within said photos in order to fool the algorithm into classifying them into a category that does not align with the training data that was used to create the said algorithm. For instance, an image classification system that is used to identify a photo of a dog could be fooled into identifying a photo of a cat after an adversarial attack, even though the two pictures that the system analyzed would both appear to be dogs to the human eye.
What can be done to defend algorithms against adversarial attacks?
While reducing or mitigating the effects of an adversarial machine learning algorithm can be extremely difficult once the attack has occurred, there are certain preventative measures that software developers can take to avoid such attacks altogether. One such approach is adversarial training, which as the name suggests, focuses on generating and using adversarial examples when training machine learning algorithms. Just as a school teacher would take their class on practice fire drills at the beginning of the school year to prepare their students for such an occurrence, adversarial examples can be introduced when training a machine learning model to ensure that the model will be able to deal with such attacks once the algorithm has been completed.
Conversely, defensive distillation is another method that can be used to thwart adversarial machine learning attacks. In keeping with the example of a school teacher practicing a fire drill with the children in their class, while such an approach is undoubtedly effective, the teacher must physically conduct the fire drills, and watch the children to ensure that they are following all directions and procedures. In the context of software development, adversarial training represents a brute force tactic, as software engineers will introduce as many adversarial training examples as possible to protect their respective algorithms. However, defensive distillation adds flexibility to these preventative approaches, as the technique hinges on training a machine learning model to predict different probabilities in relation to adversarial machine learning attacks as opposed to making specific hardline decisions.
When using defensive distillation techniques to safeguard a machine learning model, a software engineer will first obtain probabilities from a particular machine learning algorithm, such as a labeled data set in the case of supervised machine learning. As this model would be representative of the models that cybercriminals would attempt to attack in real-time, software engineers using defensive distillation techniques could incorporate these predictive algorithms into a new algorithm, effectively strengthening the defenses of the said new algorithm, as it would be created in accordance with two different sets of predictions. In turn, this new algorithm would be able to detect potential adversarial machine learning attacks in a more efficient manner.
Despite the immense level of complexity and nuance that is involved in the development of artificial intelligence systems and machine learning algorithms, these technological advancements are still subject to cyberattacks. Just as software developers continue to develop new techniques and methods that can be used to create new and innovative products and services, cybercriminals are simultaneously working to launch attacks against these new products and services. As such, while the advent of adversarial machine learning attacks may only be being discussed in the deepest of tech circles at the current moment, these attacks are sure to increase in frequency as machine learning algorithms and artificial intelligence continue to become more common in mainstream society. | <urn:uuid:e290b7a7-133d-4d47-a959-669852b61603> | CC-MAIN-2024-38 | https://caseguard.com/articles/adversarial-machine-learning-new-cybersecurirty-threats/ | 2024-09-09T09:17:37Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651092.31/warc/CC-MAIN-20240909071529-20240909101529-00558.warc.gz | en | 0.961785 | 1,176 | 2.921875 | 3 |
Quantum computing promises to solve difficult problems — and create entirely new ones.
We don’t have quantum computers just yet. But we anticipate they are on the horizon.
Eminent cryptographers speaking at February’s RSA conference said they hope quantum doesn’t happen any time soon. These are the people who built classical crypto, and they hold that hope for a simple reason: quantum computing will break their algorithms.
The speakers at RSA were joking. But cybersecurity experts have real concerns about quantum.
This Could Get Spooky
Traditional computing is based on bits, or binary digits. The value of these bits is either one or zero. Transistors in today’s computers work by turning on (one) and off (zero).
Quantum computing is different. It is based on qubits, or quantum bits. A qubit can be either zero, one or a combination of both. With quantum computing, you can have both states existing at the same time. This dual state is known as quantum superposition.
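A rough intuition for superposition can be sketched with nothing more than complex numbers: a qubit's state is a pair of amplitudes, and the squared magnitudes of those amplitudes give the probability of measuring zero or one. This toy model (plain Python, no quantum hardware or simulator involved) is purely illustrative:

```python
import math

# A qubit is a pair of complex amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1. Measurement yields 0 with probability
# |alpha|^2 and 1 with probability |beta|^2.

def measurement_probabilities(alpha, beta):
    p0 = abs(alpha) ** 2
    p1 = abs(beta) ** 2
    assert math.isclose(p0 + p1, 1.0, abs_tol=1e-9), "state must be normalized"
    return p0, p1

# A classical "one": all amplitude on the |1> state.
print(measurement_probabilities(0.0, 1.0))  # (0.0, 1.0)

# An equal superposition, like the output of a Hadamard gate on |0>:
# both outcomes are equally likely until measured.
h = 1 / math.sqrt(2)
print(measurement_probabilities(h, h))  # roughly (0.5, 0.5)
```

The point of the sketch is the second call: before measurement, the state carries both possibilities at once, which is exactly what quantum algorithms exploit.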
Building a quantum computer also requires a phenomenon called entanglement, what Einstein referred to as "spooky action at a distance."
Show Me The Equal Money
Entanglement means two particles are connected in some way such that observing one tells you about the other, even if the particles are kilometers, or even light years, apart. Some experiments have demonstrated that kind of behavior, but only under a very constrained set of conditions that are extremely tricky to reliably reproduce.
But It Will Help Solve Human Challenges
What’s most important to understand about quantum, however, is that it will enable faster searching.
Mathematician Merrill Flood popularized the traveling salesman problem (TSP), which highlights the difficulty of optimizing sales routes. Quantum computing could address such logistics challenges.
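To see why TSP strains classical machines, note that brute force has to examine every possible tour, and the number of tours grows factorially with the number of stops. A minimal sketch (the city coordinates are invented for illustration):

```python
import itertools
import math

# Brute-force TSP: try every ordering of cities and keep the shortest
# round trip. With n cities there are (n-1)! distinct tours to check,
# so this approach collapses quickly as n grows.

def tour_length(order, coords):
    total = 0.0
    for a, b in zip(order, order[1:] + order[:1]):  # close the loop
        total += math.dist(coords[a], coords[b])
    return total

def shortest_tour(coords):
    cities = list(range(1, len(coords)))
    best = None
    for perm in itertools.permutations(cities):
        order = [0] + list(perm)  # fix city 0 as the starting point
        length = tour_length(order, coords)
        if best is None or length < best[1]:
            best = (order, length)
    return best

coords = [(0, 0), (0, 3), (4, 3), (4, 0)]  # corners of a 3-by-4 rectangle
order, length = shortest_tour(coords)
print(order, length)  # [0, 1, 2, 3] 14.0 -- the perimeter tour
```

Four cities mean only six tours to try, but 20 cities already mean roughly 6 × 10^16, which is why faster search is so attractive.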
I envisage that quantum computers also could be paired with artificial intelligence-based computation to create a very powerful proposition. That could provide the computing power for an automated system to go through different permutations of a virus, for example, and identify vaccines.
This Powerful Technology Will Break Stuff, Too
Any problem that requires faster searching will benefit from a quantum computer. But this presents new challenges because hackers can use quantum computing for their dirty work.
From a computing and security perspective, quantum computing is deemed dangerous because integer factorization and discrete log-type problems can be practically solved. Both are used extensively in today’s classical asymmetric cryptography.
Consider the formula N = P × Q, for instance, where P and Q are large primes. The best known classical methods for factoring N run in time that grows super-polynomially in n (n being the number of bits in N). That means the number of iterations needed to solve the problem explodes as N grows, and factoring becomes practically impossible on a standard computer for the key sizes in use today.

But with a quantum computer running Shor’s algorithm, the calculation can be done in time polynomial in n. This means the computation will complete in a practically feasible time. That makes quantum computing significantly stronger and quicker than traditional computing in terms of running that search.
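The asymmetry is easy to demonstrate with a toy experiment: multiplying two primes takes one operation, while recovering them by trial division costs work that balloons with the size of N. (The primes here are tiny, chosen only for illustration; real RSA moduli are thousands of bits.)

```python
# Multiplying P and Q is cheap; recovering them from N is not. Trial
# division tries divisors up to sqrt(N), so its cost grows
# exponentially in the *bit length* of N -- which is why classical
# computers cannot factor 2048-bit moduli this way, while Shor's
# algorithm on a quantum computer would run in polynomial time.

def factor_by_trial_division(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1  # n is prime

p, q = 2003, 2011            # small primes, for illustration only
n = p * q                    # building N is a single multiplication
print(factor_by_trial_division(n))  # (2003, 2011)
```

Each extra bit in N roughly multiplies the worst-case trial-division work by √2; Shor’s algorithm sidesteps that growth entirely.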
Bad actors can use that power to break into real-time encrypted communications. “Store now, decrypt later” could be one of the other big attacks in a post-quantum world: sensitive data that was signed or encrypted using an asymmetric scheme today will potentially be at risk once a quantum computer can decrypt it.
On physical systems, we are vulnerable to a different attack, one that compromises the integrity of software. Once a quantum computer is available, an attacker could forge a code-signing signature and load their own code onto the system.
Be Aware Of Risks, And Create Plans To Address Them
Organizations with sensitive data should be aware of these risks. Vendors that create algorithms and produce crypto modules should be exploring solutions to the quantum challenge as well.
Some regulators are holding business leaders’ feet to the fire about this potential for the future. Customers are already asking us what our post-quantum strategy is. They, in turn, are being asked the same questions by their own customers.
People want to know about organizations’ strategies to manage this risk.
Some Solutions Are Already in the Works
The National Institute of Standards and Technology (NIST) is working to identify the best quantum-safe algorithms that are less likely to be broken by quantum techniques. That includes the creation of algorithms that are based on symmetric and hash-based schemes or using other approaches, such as code-based, lattice-based and multivariate cryptography.
NIST is accepting and reviewing submissions from the industry. But we’re probably looking at another three to four years before that work is solidified and standardized. The proposed quantum-safe schemes may require larger key sizes and management of state. These create new difficulties, including the need for secure and robust implementations.
Although not directly related to quantum computing, quantum random number generators are gaining the limelight. These produce high-density random entropy that can be useful in select situations, such as seeding multiple virtual clusters in the cloud where quality entropy is required.
Experimentation And Interoperability Will Also Be Key
Vendors like us can help organizations prepare for the post-quantum world by making quantum crypto available for them to experiment with. That way, organizations can take the time to understand how to implement these new post-quantum algorithms.
That’s important because it takes a long time to vet crypto and ensure it works properly.
Once organizations have implemented post-quantum techniques, they should address interoperability. They should work within their communities to align standards interpretations.
Prepare, But Don’t Panic
Recent pronouncements suggesting one industry player has achieved quantum supremacy have some people panicking. But 53-qubit solutions are a long way off from what’s needed to break a crypto algorithm. Estimates suggest we are anywhere from five to 20 years from a post-quantum world. So, there’s no need to panic. But this is an excellent time to create a post-quantum strategy.
If you are worried about how long-term data could be affected by quantum computers, then now is the time to act: review your organization’s assets and understand the impact that quantum computing is likely to have on your environment.
Which word best completes the sentence?
The most appropriate word to complete the sentence is "collaborate."
The word "collaborate" best fits the sentence because it implies working together with others to achieve a common goal.
Collaboration is a crucial aspect of successful teamwork and project management. When individuals collaborate, they engage in a process of joint effort, communication, and coordination to accomplish shared objectives.
Effective collaboration involves actively listening to others, integrating different perspectives, and collectively working towards a common goal. It promotes creativity, enhances productivity, and fosters a sense of unity among team members.
By choosing to collaborate, individuals can leverage their strengths, address weaknesses, and overcome challenges more efficiently. It emphasizes the importance of teamwork, communication, and collective problem-solving.
Therefore, in the context of the sentence, selecting "collaborate" reinforces the value of working together towards a common purpose and achieving success through shared efforts.
Among the many types of network protocols, IMAP stands out as a term frequently mentioned in email-related discussions. This blog post will delve into what IMAP is, compare it with another protocol known as POP3, and highlight their pros, cons, and main differences.
What is IMAP?
Internet Message Access Protocol, commonly referred to as IMAP, is an Internet standard protocol used by email clients to retrieve messages from a mail server. It allows an email client to access and manipulate a remote mailbox as if it were local.
What is POP3?
Post Office Protocol version 3, or POP3, is another widely-used protocol for retrieving email messages. Unlike IMAP, which keeps all messages on the server, POP3 downloads the emails to the local device and, by default, removes them from the server.
Pros and cons of IMAP
IMAP offers several advantages:
- Synchronization across multiple devices
- Server-side search functionality
- Ability to organize emails into folders on the server
However, it also has some drawbacks:
- Requires more server space due to storing all emails
- Potentially slower than POP3 due to constant synchronization
- Higher risk of data loss if the server crashes
Pros and cons of POP3
POP3 comes with its own set of benefits:
- Faster access to new emails as they are downloaded directly to the device
- More privacy, since emails are stored locally (though POP3 itself provides no encryption)
- Less dependence on server storage
Nevertheless, POP3 has some limitations:
- Difficulty in synchronizing across multiple devices
- Risk of data loss if the local device crashes
- Lack of server-side organization features
Main differences between IMAP & POP3
The primary differences between IMAP and POP3 lie in how they handle emails. IMAP synchronizes the email clients with the server, making it ideal for accessing emails from various devices. On the other hand, POP3 is more suited for single-device usage due to its nature of downloading emails to the local device.
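This difference can be made concrete with a small simulation. Note this is a toy model of the two retrieval styles, not real protocol code (a real client would use a library such as Python's `imaplib` or `poplib`):

```python
# Toy model contrasting the two retrieval styles: an IMAP-style client reads
# messages in place on the server, while a POP3-style client downloads them
# and, by default, deletes them from the server.

class MailServer:
    def __init__(self, messages):
        self.messages = list(messages)

class ImapStyleClient:
    """Messages stay on the server, so every device sees the same state."""
    def fetch(self, server):
        return list(server.messages)        # read-only view; server unchanged

class Pop3StyleClient:
    """Messages move to local storage and are removed from the server."""
    def __init__(self):
        self.local_inbox = []
    def fetch(self, server):
        self.local_inbox.extend(server.messages)
        server.messages.clear()             # default POP3 behavior
        return list(self.local_inbox)

server = MailServer(["msg1", "msg2"])
imap_view = ImapStyleClient().fetch(server)
print(imap_view, server.messages)   # both ['msg1', 'msg2']: nothing removed
pop = Pop3StyleClient()
local = pop.fetch(server)
print(local, server.messages)       # ['msg1', 'msg2'] []: moved off the server
```

The cleared server inbox at the end is exactly why a second device checking mail over POP3 finds nothing, while IMAP clients stay in sync.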
Both IMAP and POP3 serve unique functions in the realm of email communication. The choice between them ultimately depends on individual or business needs. If synchronization across multiple devices and server-side organization are priorities, then IMAP might be the best choice. Conversely, if speed, privacy, and less reliance on server storage are key considerations, then POP3 may be the preferred option.
In the context of Information Technology (IT), provisioning is the process of setting up IT infrastructure. This infrastructure could be physical equipment, such as provisioning servers or laptops, or virtual, as in provisioning cloud instances or user accounts.
Provisioning is sometimes confused with configuration. Provisioning refers to making the infrastructure available for configuration. First, you provision the infrastructure, and then, you configure it.
As an analogy, you’re “provisioning” your home by signing a lease or a mortgage. Then, you “configure” it by moving in and setting up the space to your liking.
Provisioning is a broad term that’s used in a variety of contexts in IT, from servers and network hardware to cloud instances and user accounts.
For the purposes of this article, we will be focusing on user provisioning.
Prior to the introduction of cloud computing, IT hardware provisioning was performed manually. Admins had to manually set up and configure servers and other network hardware, which was a tedious, time-consuming process. Adding network or storage capacity was a capital expense that had to be planned well in advance. User provisioning was automated to some extent, but it still required quite a bit of manual work.
In modern, cloud-based data environments, most IT infrastructure is virtual, and provisioning is done through software. For example, the ability of cloud services to automatically scale network capacity is a major selling point for cloud migration. This eliminates the risk of organisations purchasing more hardware than they need, and also prevents them from being caught short during a sudden surge in business.
IT teams commonly use identity management platforms to automate user access provisioning. When a new employee is onboarded, IT administrators use the platform to assign them a “role,” and the employee is automatically granted access to certain applications based on that role. If the person changes roles or leaves the organisation, an IT administrator simply updates their role, and their access levels change, as appropriate.
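The role-based flow described above can be sketched in a few lines. The role names and application lists below are invented for illustration and are not tied to any particular identity platform:

```python
# Hypothetical sketch of role-based user provisioning: assigning a role
# grants the user that role's applications; changing or clearing the role
# updates access automatically.

ROLE_APPS = {
    "engineer": {"email", "vcs", "ci"},
    "sales":    {"email", "crm"},
    "none":     set(),                  # offboarded users keep nothing
}

class Directory:
    def __init__(self):
        self.access = {}                # user -> set of provisioned apps

    def set_role(self, user, role):
        """Provision (or re-provision) a user by assigning a role."""
        self.access[user] = set(ROLE_APPS[role])

    def offboard(self, user):
        """Revoke all access with a single role change."""
        self.set_role(user, "none")

d = Directory()
d.set_role("alice", "engineer")         # onboarding grants all role apps
print(sorted(d.access["alice"]))        # ['ci', 'email', 'vcs']
d.set_role("alice", "sales")            # role change swaps access
print(sorted(d.access["alice"]))        # ['crm', 'email']
d.offboard("alice")                     # one call removes everything
print(sorted(d.access["alice"]))        # []
```

Real identity platforms add approval workflows, audit logs, and syncing to downstream applications, but the core idea is this mapping from role to entitlements.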
Easier, faster and less error-prone user onboarding and offboarding. Having to manually configure user access for every employee, one by one, is tedious and time-consuming. This is especially true in very large or rapidly-growing organisations where dozens or even hundreds of employees each week must be onboarded, offboarded or need their access levels changed. Automating these tasks saves time and minimises the possibility of a configuration error.
Productivity enhancements and cost savings. New employees get all of the resources they need to do their jobs on day one. Instead of being bogged down in administrative tasks, IT teams can devote time to projects that drive the business. In addition to enhancing productivity, this saves organisations money by minimising overhead costs and downtime.
Security enhancements. New users get the minimum level of access that they need to do their jobs, a departing employee’s system access can be terminated immediately and it’s a lot less likely a mistake will be made. IT and security personnel also have better visibility into who has access to what.
Centralise your user identities. Use a central, cloud-based Identity and Access Management (IAM) directory service that can sync identities between Office 365, Google Workspace, HR and payroll systems, as well as other major directories, such as Active Directory.
Avoid overly-broad and narrowly-defined user roles. Properly-designed user roles are crucial to automating user provisioning and ensuring least-privilege access for all users. If roles are too broad, users will have more access than their job requires. If they’re too narrow, they won’t have access to the applications they need – and your IT team will have to manually provision more access, which defeats the whole purpose of automated provisioning.
Automate provisioning wherever and whenever possible. The more tasks you automate, the greater the benefits to your organisation.
Make sure you can de-provision departing users rapidly. Regardless of which IAM tool you choose, be sure it offers the ability to revoke user access to all organisational resources with one click. This helps prevent any potential security issues.
Artificial intelligence (AI) is constantly evolving with the potential to revolutionize many aspects of our lives. However, alongside the exciting opportunities it presents, AI also introduces new challenges, particularly in the realm of cybersecurity.
At Futurism Technologies, we believe in a comprehensive approach to cybersecurity that goes beyond traditional methods. This guide delves into the intricate relationship between AI and cybersecurity, exploring the challenges, secure development lifecycles, and best practices to safeguard your AI solutions and systems from coming-of-age cyber threats and attack vectors. Join us on this journey as we explore the future, where AI and cybersecurity converge, and innovation thrives alongside protection.
By 2025, cybercrime could cost the world a jaw-dropping $17.65 trillion!
Understanding Unique Security Challenges of AI
The rise of artificial intelligence (AI) and machine learning (ML) presents exciting possibilities, but also introduces new security concerns. Unlike traditional IT systems, AI algorithms are vulnerable to adversarial attacks. In these attacks, malicious actors manipulate the data used to train the AI, or the algorithms themselves, to cause the system to produce incorrect or harmful outputs.
Securing AI goes beyond traditional methods like firewalls and intrusion detection. We need protection against threats that exploit weaknesses within the algorithms themselves. These attacks can be subtle and difficult to detect, potentially altering the system's decisions and compromising its effectiveness.
To address these challenges, a multi-layered approach is necessary. First, we need a deep understanding of the specific vulnerabilities present in the AI model. This involves analyzing the training data, the algorithm itself, and potential attack vectors. Second, we need to deploy specialized defenses like adversarial training to make the model more resistant to manipulation. Additionally, advanced methods can help us understand and audit the model's decision-making process, providing valuable insights for identifying and mitigating potential risks. By acknowledging these unique security challenges and taking proactive measures, we can ensure the responsible and secure deployment of these powerful AI systems.
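The adversarial-attack idea can be illustrated on a toy linear classifier. The weights and inputs below are invented for demonstration; real attacks such as FGSM apply the same gradient-sign step to deep networks:

```python
# Toy adversarial perturbation on a linear classifier:
# score = w . x, predicted class = sign(score).
import math

w = [2.0, -3.0]        # invented model weights
x = [0.5, 0.1]         # invented input, classified positive

def score(v):
    return sum(wi * vi for wi, vi in zip(w, v))

# The gradient of the score with respect to x is just w, so stepping each
# feature a small amount against the sign of the gradient pushes the score
# toward the opposite class -- the idea behind FGSM-style attacks.
eps = 0.4
x_adv = [vi - eps * math.copysign(1.0, wi) for vi, wi in zip(x, w)]

print(score(x))        # positive score: classified as the positive class
print(score(x_adv))    # negative score: the perturbed input flips class
```

Adversarial training hardens a model by generating perturbations like `x_adv` during training and teaching the model to classify them correctly.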
Expanded Understanding of AI Security Challenges
Rising Statistics: A Global Concern
Recent research by Cybersecurity Ventures predicts that cybercrime will cost the world $10.5 trillion annually by 2025, a significant increase from $3 trillion in 2015. This stark rise underscores the expanding threat landscape, particularly as AI and ML technologies become more integrated into critical systems. AI-driven attacks are becoming more sophisticated, with adversaries leveraging AI for developing malware that can evade detection, automate social engineering attacks, and optimize breach strategies.
Use-Cases Highlighting AI Vulnerabilities
- Deepfake Technology: Deepfakes have emerged as a potent tool for misinformation, leveraging AI to create highly convincing fake videos and audio recordings. This technology poses significant threats in various domains, including politics, where it can be used to manipulate elections, and cybersecurity, where it can trick biometric security mechanisms.
- AI-Powered Phishing: Cybercriminals are using AI to craft highly personalized phishing emails at scale, which are more difficult to distinguish from legitimate communications. These messages often exploit social engineering techniques, tailored to individual vulnerabilities, making them significantly more effective.
- Adversarial Machine Learning (AML): In adversarial attacks, attackers input specially crafted data into AI models to manipulate their output, compromising their integrity. For instance, subtle alterations to input images can trick an AI-powered surveillance system into misidentifying objects or individuals, posing serious security implications.
Examples of AI Security Breaches
- Twitter Bot Influence: AI-driven bots have been used to amplify misinformation on social media platforms. For example, during significant political events, these bots have spread fake news, influencing public opinion, and potentially impacting election outcomes.
- Autonomous Vehicle Hacking: Researchers have demonstrated that slight alterations to road signs, imperceptible to the human eye, can cause AI-driven autonomous vehicles to misinterpret the signs, leading to potential accidents. This vulnerability exposes the critical need for robust AI security in emerging transportation technologies.
- Healthcare Data Breaches: AI systems in healthcare, used for patient data analysis and predictive modeling, have become targets for cyberattacks. The breaches not only compromise sensitive patient information but also undermine the trust in AI systems designed to enhance patient care.
Enhancing AI Security: A Forward-Looking Approach
Addressing these unique security challenges requires a holistic strategy that encompasses not only the technical defenses but also a broader understanding of AI's societal impacts. Efforts such as incorporating adversarial training, enhancing transparency through explainable AI, and fostering a security-first culture within AI development teams are essential steps forward. Moreover, regulatory frameworks and ethical guidelines will play a pivotal role in ensuring AI systems are designed, deployed, and maintained with security and privacy at their core.
By acknowledging these challenges and proactively seeking solutions, we can leverage AI's potential while safeguarding against the sophisticated cyber threats that accompany technological advancements.
Building Secure AI: A Lifecycle Approach
Developing trustworthy AI systems requires a comprehensive approach to security, starting from the very beginning. Here's a breakdown of the key stages in secure AI development lifecycle:
- Designing for Security: Just like building a house, securing an AI system starts with a solid foundation. This stage involves brainstorming potential threats that AI systems are uniquely vulnerable to, compared to traditional software. It's like thinking about the different ways someone could break into your house, but for AI systems, we're considering things like data leaks, manipulating the model itself, and even ethical concerns. We also need to make sure the AI system follows the rules and regulations in place, just like ensuring your house meets building codes.
- Building with Security in Mind: Once the blueprint is ready, construction begins. In the development phase, security is woven into the very fabric of the AI system. This involves practices like double-checking the code for vulnerabilities, similar to inspecting the quality of materials used in construction. Additionally, just like stress-testing a building to see how it handles pressure, we perform VAPT (Vulnerability Assessment and Penetration Testing) to see if the system can withstand potential attacks. Finally, just like using reliable suppliers to build your house, we need to ensure the security of everything that contributes to the AI system, from the software libraries to the data used to train it.
- Deploying Securely: Moving the AI system from the building site to your actual house comes with new security considerations. We need to protect both the physical servers where the AI runs and the AI model itself. This involves deploying identity and access management solutions with strong security controls, like using complex passwords and secure communication channels, similar to installing locks and alarms in your house. But security goes beyond technical safeguards; we also need to make sure the AI system is used responsibly and ethically, just like ensuring you use your house in a way that respects your neighbors.
- Keeping it Secure and Up-to-Date: Just like maintaining your house requires constant vigilance, keeping an AI system secure is an ongoing process. We need to continuously monitor the system for any suspicious activity, similar to having a security system that alerts you of potential break-ins. Additionally, like regularly updating your home security system, we need to keep the AI system updated with the latest security patches and adapt it to new threats and regulations that may emerge using advanced threat protection solutions. By staying proactive, we can ensure our AI systems remain resilient and adaptable over time.
This lifecycle approach highlights the importance of weaving security throughout the entire development process, from initial design to ongoing maintenance, to ensure trustworthy and reliable AI systems.
Building a Secure Foundation for AI
Securing AI systems effectively involves embracing a set of key principles and practices:
- Transparency and Accountability: AI shouldn't be a black box. People should be able to understand how the system works and makes decisions, especially when these decisions impact the public or have significant consequences. This transparency is crucial for holding the system accountable.
- Security by Design: Security shouldn't be an afterthought bolted onto the finished product. It should be woven into the fabric of the AI system from the very beginning, from the initial design phase all the way through to deployment and ongoing maintenance.
- Owning Your Security: Organizations have a responsibility to ensure the security of their AI systems. This ownership goes beyond the technical team and extends to leadership, making security a core value throughout the organization.
- Building a Security-Minded Team: Effective AI security requires the right team structure. This means having dedicated security experts focused on AI, clear channels for reporting security concerns, and consistent training for all staff on the latest security best practices.
Importance of Secure AI Guidelines
As AI becomes increasingly prevalent in our lives, prioritizing its security becomes paramount. This extends beyond safeguarding data and computer systems; it's about fostering trust in the technology itself. Following these guidelines is crucial for companies as they strive to ensure robust and reliable AI systems that inspire confidence in users and stakeholders alike. By implementing robust security measures, we not only shield critical information but also enhance the dependability of AI systems, ultimately fostering trust in this powerful technology.
The Critical Role of Guidelines in Evolving Threat Landscapes
According to the Global Risk Report 2023 by the World Economic Forum, cyberattacks rank among the top global risks by likelihood and impact over the next decade. As AI technologies become increasingly sophisticated, the potential for AI-specific threats grows, emphasizing the need for comprehensive security guidelines. The AI security market itself is projected to reach $38.2 billion by 2026, reflecting the growing investment in securing AI systems against evolving threats.
Use-Cases Demonstrating the Need for Guidelines
- Financial Sector AI Applications: In the finance industry, AI is used for fraud detection, algorithmic trading, and customer service chatbots. The misuse of AI in this sector could lead to massive financial losses and erode trust in financial institutions. Implementing secure AI guidelines ensures that AI applications in finance are robust against attacks, safeguarding sensitive financial data and maintaining the integrity of financial markets.
- AI in Autonomous Systems: Whether in self-driving cars or unmanned drones, AI systems control critical decisions that can affect human lives. Security vulnerabilities could lead to disastrous consequences. Secure AI guidelines are vital for ensuring that autonomous systems operate safely and predictably under all conditions, protecting users from harm due to security lapses.
Examples Highlighting the Importance of Secure AI Guidelines
- AI in Healthcare Misdiagnosis: An AI system designed for diagnosing diseases, if manipulated due to inadequate security measures, could lead to incorrect treatments. For instance, adversarial attacks on AI imaging tools can alter cancer screening results, putting patients' lives at risk. Secure AI guidelines ensure these systems are tested against such manipulations, maintaining accuracy and trust in AI-assisted healthcare.
- Social Media Manipulation: AI-driven algorithms control what content is shown to users on social media platforms. Without strict security and ethical guidelines, these systems can be exploited to spread misinformation or biased content, influencing public opinion and endangering democratic processes. Secure AI guidelines help in creating safeguards against the misuse of AI in manipulating content dissemination.
The Way Forward: Adopting and Implementing AI Security Guidelines
- Establishing Global Standards: The establishment of global standards for AI security is imperative. Organizations have begun to develop ethical guidelines for AI and autonomous systems, emphasizing the importance of transparency, accountability, and privacy. Adopting these guidelines helps ensure that AI technologies are developed and used responsibly, fostering trust among users and stakeholders.
- Collaborative Efforts for Enhancing AI Security: Collaborative initiatives between governments, industry, and academia are crucial for advancing AI security standards. These partnerships facilitate the sharing of best practices, threat intelligence, and research on new security methodologies, ensuring that AI systems remain resilient against sophisticated cyber threats.
- Continuous Education and Awareness: Educating developers, users, and policymakers about the importance of AI security is essential. Ongoing training and awareness programs can empower stakeholders to recognize and mitigate risks associated with AI systems, ensuring that security considerations are integrated throughout the AI lifecycle.
Creating secure AI systems is a challenging yet essential task. As AI increasingly integrates into diverse sectors, the need for robust security protocols grows more critical. Adhering to these guidelines enables organizations to reduce risks and fully leverage AI's capabilities, guaranteeing a secure and thriving digital landscape.
Futurism Technologies, an expert in AI security, safeguards your AI systems against evolving cyber threats. We offer comprehensive digital product engineering solutions, seamlessly integrating robust security throughout the entire AI lifecycle, from design and deployment to ongoing maintenance. Our skilled AI and cybersecurity team prioritizes both security and ethical compliance, ensuring your AI operates reliably and responsibly, fostering trust in the future of AI.
From drilling to logistics to preventing theft, automation in oil and gas is quickly becoming the standard. Learn how tech helps increase ROI and improves safety in this starter guide.

Since the 1990s, businesses have looked at how automation in oil and gas can improve worker safety, improve efficiency, and increase the bottom line. Today, digital tools are being used in every facet of oil and gas cultivation. From improved efficiency when drilling for new oil to real-time weather monitoring and theft prevention, tech tools are quickly becoming the norm for the $2.1 trillion (according to IBISWorld) industry. In this starter guide, you’ll learn how innovation and automation in oil and gas is being used to connect, collect and analyze data, and increase ROI.

What Is Automation In Oil and Gas?

Oil and gas automation refers to the growing number of digital techniques and processes used to help producers cultivate energy. Automation and agility have found their way into nearly all oil and gas industry divisions, like drilling, logistics, supply chain, and safety. Tools such as rugged wireless radios and AI-powered real-time sensors for remote monitoring allow drilling sites to be run safely and efficiently from afar, protecting employees and increasing revenue simultaneously.

Benefits:
- Improved production yield while lowering costs
- Improved safety for employees in hazardous conditions
- Real-time insights
- 24/7 monitoring
- Theft protection at drill sites and along pipelines
- Sustainability
- Predictive technologies can be used for future application

Where Does Automation In Oil and Gas Happen?

Approximately 94 million barrels of oil are produced every day worldwide, according to the U.S. Energy Information Administration. Here are some specific examples of how automation impacts the production and monitoring of two of the world’s most important resources.
Drilling

Autonomous drilling control systems (ADCs), such as those implemented in the North Sea (between England and Scotland) in 2020, enable safe and efficient oil drilling. In the past, “automatic” tools used to drill for oil were just manually assisted. Today, though, ADCs are believed to reduce drilling costs by as much as 50 percent.

Diagnostics

In an industry where downtime can severely disrupt a business and lead to financial downfall, remote diagnostics help predict future issues or pinpoint current ones to prevent delays and disruptions. For example, tools that measure pressure loss, temperature, and mercury levels can help businesses disrupt downtime and minimize costly delays. They also reduce the risk of oil rig disasters, such as the infamous Gulf of Mexico explosion on the Deepwater Horizon oil rig that left 41 miles of oil in the ocean.

Weather Monitoring

Oil and gas production are both contingent on weather patterns. Issues that can affect (or even shut down) drilling sites include:
- Tropical cyclones (most common)
- Extreme cold
- Extreme heat
- Earthquakes

Automation in gas and oil includes the adoption of real-time sensors and monitoring tools, which detect changes in seismic activities and atmospheric levels, giving companies the ability to adjust in real-time to changing weather patterns.

Security

Theft in oil and gas is a real threat. In fact, a 2021 report said that Nigeria loses an average of 200,000 oil barrels per day (b/d) to theft, primarily from pipeline sabotage. Rugged, wireless radios allow oil and gas professionals to communicate safely without fear of interception or lost connection. Along with real-time monitoring of sites and pipelines, this can create a formidable defense against criminals.

Trends in Oil and Gas Automation

Oil and gas automation is quickly becoming standard, and with it, innovation in the field is coming.

A significant push towards sustainability.
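The remote diagnostics and real-time monitoring described above ultimately come down to comparing live sensor readings against safe operating ranges. Here is a hypothetical sketch; the sensor names and limits are invented for illustration:

```python
# Hypothetical real-time threshold monitoring: flag any sensor reading that
# falls outside its acceptable operating range.

LIMITS = {                      # invented acceptable ranges per sensor
    "pressure_psi": (800, 1200),
    "temp_c":       (-10, 60),
}

def check_readings(readings):
    """Return a list of alert strings for any out-of-range reading."""
    alerts = []
    for sensor, value in readings.items():
        low, high = LIMITS[sensor]
        if not (low <= value <= high):
            alerts.append(f"{sensor}={value} outside [{low}, {high}]")
    return alerts

print(check_readings({"pressure_psi": 950, "temp_c": 25}))   # []
print(check_readings({"pressure_psi": 1350, "temp_c": 25}))  # one pressure alert
```

Production systems layer trend analysis and predictive models on top of checks like this, but a simple out-of-range alert is the foundation for catching problems before they become costly downtime.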
Tech tools that help reduce a business’s carbon footprint and improve operating efficiency will be both sought after (and possibly required) in the coming years.

Greater use of AI at oil rigs. Businesses are shifting away from manually assisted technologies in favor of artificial intelligence devices that make real-time, autonomous decisions on-site.

More tech jobs. As on-site employees are phased out to make for a safer work environment, new jobs, such as coding, system architecture, and cybersecurity, will be created in the oil and gas industry.

Contact FreeWave to learn how our technologies can boost your oil and gas business’s ROI today.
The Industrial Internet of Things (IIoT) is not just a means for organizations to harvest and analyze vast amounts of data to drive better business decisions. It is driving innovative ways for companies to keep their employees safe and out of harm’s way. In the latest IIoT Top News, we’ll take a look at some trending stories from the oil and gas industry, a quickly growing user of the Industrial Internet of Things to help power data-driven decisions, business operation optimization, and employee safety. The possibility of an industrial wireless oilfield is now not just a pipe dream, but a reality.

Wearable Technology and the IoT Improving Safety for Oil and Gas Workers

For many folks, wearable technology is viewed as a simple fad of smart watches and health tracking hardware. In the grand scheme of things, we’re just beginning to scratch the surface of wearable tech, with biotechnology, embedded smart tracking hardware, and much more right on the horizon. As noted in this article from the EconoTimes, one of the industries beginning to leverage the power of wearable technology is oil and gas. Looking back at 2014, occupational fatalities nationally were 3 per 100,000 workers. In oil and gas, that number skyrockets to 15. That’s why some organizations in this hazardous industry are turning towards the IIoT and wearable tech to keep their employees safe. From fall risk mitigation, to toxin and fume inhalation prevention and diagnosis, the applications for wearable technology for oil and gas employees in the field are many. One of the current limitations for wearable tech in the field is the ruggedness of the technology, but as new devices are designed that can withstand harsh environments, you can expect to see more adoption of this potentially life-saving tech.

The IIoT and Operationalizing Excellence

For the oil and gas industry, the advent of the Industrial Internet of Things (sometimes referred to as Industry 4.0) holds massive promise.
From reacting to changing global trade conditions in real-time, to instantaneous equipment feedback, there are myriad uses for connected tech. This recent article from IoT Business News cautions us to heed the warnings of the dot.com era and take a strategic approach. The article argues that expecting the IIoT to be a silver bullet for business decisions will only lead to more confusion. It notes that Industry 4.0 is an incredibly powerful tool, one with the ability to fundamentally change the way oil and gas organizations do business, but it is important to go “back to the basics” and understand business needs and objectives before trying to dive into the data. In the Oil Industry, IoT is Booming Oil and gas is not always known for its agility, but when it comes to the Internet of Things, the industry is moving at a decidedly rapid pace. This article from Offshore Engineer asserts that the IoT is not only increasingly becoming part of many organization’s strategies, but is fundamentally becoming embedded in the “oil psyche.” Dave Mackinnon, head of Technology Innovation at Total E&P UK, provides quite a bit of color around this assertion, and he believes that oil and gas is moving towards a “digital supply chain” that was fundamentally revolutionize the sector. Mackinnon also believes that when it comes to the IoT train, it’s either get on, or get left behind. “In an IoT world, many companies will discover that being just a manufacturing company or just an Internet company will no longer be sufficient; they will need to become both – or become subsumed in an ecosystem in which they play a smaller role,” Mackinnon said. Cyberattack Concerns Loom for Oil and Gas While the highest profile cyberattacks have been in the commerce and financial sectors, industrial targets remain at high risk. 
A recent article from Hydrocarbon Engineering notes that “because of its complex layers of supply chains, processes and industrial controls, makes [the oil and gas industry] a high value target for hackers.” As oil and gas organizations look to leverage the Internet of Things to bring increased value to their companies, it will become more and more important to build extra layers of security into their systems. Enabling the Connected Worker While the IIoT is indeed changing the way oil and gas companies make decisions, it is also changing the way employees perform their jobs. This article from Gas Today notes some of the ways the IoT is changing the roles of workers in the field. From AI planning and scheduling, to predictive maintenance on equipment, the connected worker faces a vastly different workplace landscape than even a few years in the past. Ultimately, oil and gas companies will look to leverage the IoT to help their employees make better decisions, as well as to stay safer and work more efficiently. Final Thoughts The Industrial Internet of Things is growing with rapid adoption across many verticals, but oil and gas is already reaping outstanding benefits from this next phase of industry. Lowering costs, optimizing oil production, and increasing worker safety are just a few of the ways oil and gas is leveraging this technological revolution.
By Joyce Deuley, Sr. Analyst and Director of Content at James Brehm & Associates LLC State of the Industry This year has proved challenging for oil and gas companies: falling prices, crackdowns from environmental regulations, growing concern about the destabilization of land due to fracking, as well as an increasing gap between jobs and skilled engineers to name a few issues. Royal Dutch Shell, for instance, recently terminated its plans to drill off the Arctic coast of Alaska for the “foreseeable future”—this is after $7 billion dollars and more than five years spent on exploratory drilling (with disappointing results) and the purchase of costly leases and permits for the privilege to do so (Daily Mail). The Arctic Circle has been viewed by many as a “holy grail” in terms of rich oil and gas reserves—the largely untapped Great White North, if you will. Initiatives in the Baltic have also come under discussion lately, as Russia negotiates the political quagmire it has found itself in concerning territorial disputes. Still, it isn’t all doom and gloom. Our reliance on oil and gas for manufacturing, shipping, transportation, energy, and more hasn’t dissipated—rather, it will continue to increase with the rising population and result in rapidly expanding urbanization. More food will need to be shipped globally, more cars will be driven, more homes will be heated, more materials will need to be made, etc., providing rich opportunities for oil and gas companies to invest in scalable solutions, as well as to firmly root themselves as valued players in the market. Investors, and other interested parties, are paying close attention to the oil and gas markets to better determine how best to mitigate depleted reserves and improve overall productivity and efficiency: keeping their bottom lines low and profit margins high. 
To pull back from an environmental and global perspective on the state of the industry, let’s instead bring it into a sharp focus with its current business challenges. Problems with efficiency include legacy pipeline and refinery infrastructure that hasn’t been updated or modernized in decades, a shortage of skilled labor as qualified engineers approach retirement, the need for increased monitoring and control across remote areas, and the mission-critical need for the aggregation, interpretation and management of unprecedented amounts of data. But, effectively managing that data can present major challenges for oil and gas providers: with so many devices at the edge, they are practically drowning in the seemingly endless flood of information that is collected. The need to find reliable data management platforms that help remove complexities associated with data visualization is critical for these companies’ ability to identify and enact valuable business decisions. What to Do About It It is no secret that the Internet of Things (IoT) has proven to be disruptive across a myriad of markets. While the technologies and principles of the IoT have been around for decades, predominantly within the manufacturing and processing industries, its relatively nascent presence within the consumer electronics and wearables markets has helped rebrand the IoT with a level of “sexiness” it previously lacked. But at the heart of the IoT is a near-obsessive desire to decrease operational and deployment costs, meet compliance regulations and to dramatically increase productivity and efficiencies. The oil and gas industry happens to be one of the largest growing areas for IoT deployments and has found many ways to benefit from connected solutions, such as pipeline and wellhead monitoring. Oil and gas pipelines can span across hundreds of miles of rugged terrain. 
The ability to monitor such a territory can be challenging, as harsh winters and debilitating droughts, forest fires and or heavy rains can put stress on the integrity of a pipeline, plus the remote nature of its location can prevent technicians from being able to regularly service it. Another challenge is knowing when and specifically where a problem occurs. For instance, if there is a malfunction that results in a leak along one of the more remote sections of a pipeline and there is no sensor to alert someone, we could be looking at a nightmare of a situation: environmental damages, not to mention untold amounts of costly clean up, repairs and definitive losses to the oil and gas company at large. By utilizing connected sensors along the lengths of their pipelines, oil and gas companies can overcome these challenges and monitor flow, pressure, integrity of the pipeline and more. Empowered by the IoT, oil and gas providers can receive near real-time information about their entire operation, enabling decision makers to better manage their technicians, as well as improve overall production and reduce maintenance and operational costs. As oil and gas companies wait for the stock market to pivot from $50 a barrel, they need to look seriously at implementing business solutions that are going to help them weather this lull. The IoT provides many opportunities for oil and gas providers to tighten their belts by increasing efficiencies and production, ultimately reflecting in a more cushioned bottom line. Pipeline monitoring and control applications can help reduce non-productive times by up to 30%, which is just one small example of how dynamic transformations could be made by the IoT. About Joyce Deuley As Sr. Analyst and Director of Content, Joyce researches and interprets market trends, locates opportunities for growth, and researches the current happenings in the M2M and IoT space, providing our clients with up-to-date and actionable information. 
Joyce specializes in technical communication, translating complex data into layperson-accessible presentations, articles, and white papers. Additionally, Joyce manages, contributes, edits, and designs our newsletter, The Connected Conversation. She currently offices out of, and is a founding member of Geekdom, a tech accelerator-like co-working space in San Antonio, TX. Previously, Joyce worked as a Secondary Researcher at Compass Intelligence, learning the M2M markets alongside James Brehm. While at Compass Intelligence, she gained experience in market research, competitive analysis, content strategy, as well as qualitative research. Joyce graduated with a B.A. in English, focusing on Professional and Technical Communication, from the University of the Incarnate Word (UIW) in San Antonio. She | <urn:uuid:8478a420-5d18-47bd-82ae-3d63916c1faf> | CC-MAIN-2024-38 | https://www.freewave.com/tag/oil/ | 2024-09-14T11:24:04Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651579.22/warc/CC-MAIN-20240914093425-20240914123425-00258.warc.gz | en | 0.954736 | 3,111 | 2.578125 | 3 |
02 September 2020
The Differences Between Nodes & Elements
A node is a logical device such as a PC, Server, Switch, Router, IoT Device, Firewall as so forth. A virtualised server or network device would be a node and the physical host it runs on would be another node.
Each node will have items that you want to monitor. CPU, Memory, Disk, Interface, to name a few. At Opmantek we call these elements, and big nodes have a small number of elements, but big servers and routers have many elements.
Most network management software companies also refer to these as elements.
It is likely that with each node you monitor, it is sensible to report on that node further than just whether it is up or down. You need to know more about your network infrastructure beyond whether devices are up or down. You will want to measure and instrument for thresholds such as Utilisations, Throughput, Errors, Statuses and so forth. Opmantek CTO Keith Sinclair talks about that here:
Let us look at how elements are counted:
Say you have a 48 port switch. You want to monitor the device for whether it is online, Interfaces (up/down), CPU, and Memory (RAM).
48 Interface Elements
1 Device up/down Element
1 CPU Element
1 Memory Element
Total = 51 Elements
Let us look at monitoring the same items as a node count:
48 Interface contained in a node
1 Device up/down contained in a node
1 CPU contained in a node
1 Memory contained in a node
Total = 1 Node
At Opmantek we license our products by node not by element. Based on the examples above, using our 100 node license for your switches would be a 5100 element license with some companies.
Also, consider that the instrumentation that you may decide to set up is also included within that node. Other companies consider QoS, IPSLA and other types of instrumentation to be additional elements.
So there’s the difference between nodes and elements. It demonstrates how much further a node licence goes when compared to an element licence.
If you would like to see our software in action, request a one to one demonstration with our staff. It is a no-obligation demo with no hard sales push. We just want you to know what we can do, and the quickest way is to show you. | <urn:uuid:746a3de4-994c-4ebe-89ef-3e3af1c4b412> | CC-MAIN-2024-38 | https://firstwave.com/blog/the-differences-between-nodes-elements/ | 2024-09-16T23:26:51Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651714.51/warc/CC-MAIN-20240916212424-20240917002424-00058.warc.gz | en | 0.939945 | 500 | 2.9375 | 3 |
Why IoT Security Is Scary and What to Do About It
Because of its sheer complexity and variability, the Internet of Things poses unique security risks. There are, however, concrete steps you can take to address them.
May 13, 2016
The FBI has issued several alarming cybersecurity warnings recently. In late April, it noted that there had been a significant spike in ransomware against hospitals, schools, police departments, as well as individuals. In March, it announced that the U.S. government had charged seven Iranian hackers with exploiting nearly 50 financial institutions and compromising the controls of a New York dam. Before that, it released separate warnings indicating that cars, farm equipment, and medical devices were all vulnerable to cyber attacks.
Such warnings underscore the unique security problems posed by the Internet of Things, which encompasses billions of objects encompassing everything from connected cars to energy grids.
Security has been one of the top concerns in the IoT space since the British entrepreneur Kevin Ashton coined the term “Internet of Things” in 1999. According to a multi-industrial survey organized by Penton, security and data privacy were the two biggest concerns in the IoT space.
There are wholly new business models involved in the IoT and they are quickly evolving. “As these business models change, they require more interoperability and sharing of data and exchanging of command of control across ecosystems and partners,” said John Sirianni is VP of IoT strategic partnerships at Webroot in an interview at IoT World on May 11 (pictured). “The number of interfaces—between devices, databases, and networks—is growing exponentially. Those interfaces are opportunities for loss of command and control.”
Webroot debuted an IoT Gateway application dubbed BrightCloud Threat Intelligence for IoT Gateways at IoT World.
The sheer variability of the IoT field is another enormous challenge. “Every company or enterprise has a different view of what they would like to accomplish,” Sirianni said.
Webroot officials have identified integrated transportation as being one of the IoT areas with the biggest potential risk. This includes entities ranging from airports to smart seaports. “Both of those tend to have very distributed networks of remote devices with many different protocols, vendors, and interfaces,” Sirianni says. “It is complex. And your security is only as good as your weakest link.”
Webroot officials see DDoS as one of the major security concerns for critical infrastructure projects. In 2015, the company also observed an uptick in ransomware attacks targeting medical and energy-production facilities.
Smart cities also pose unique risks. “As you get into smart cities and look at the operational technology such as traffic control, parking meters, and energy management, sewage, water, and all that kind of stuff, you have a lot of complex devices that are often deployed for for decades,” Sirianni explains. “Where cyber-criminals decide to exploit those systems could be any number of areas: it could be from a PC, tablets that workers use to maintain or upgrade these devices. The threats can really come in anywhere.”
Tackling the Problem
1. Have Real-Time Threat Protection and Intelligence. Because of the unique concerns posed by the Internet of Things, Webroot says that it is crucial to have real-time threat protection and intelligence, and to adapt quickly once threats are identified. “If you can provide an up-to-date understanding of where those threats are coming from, you can stop an exploit whether it be the deed of data exfiltration, network intrusion, or loss of command and control. If you can detect it early enough, you can stop the ransomware in its tracks,” Sirianni says. “But there is no way to design in security 100% because the cybercriminals are innovating very quickly.”
IoT developers should be diligent to ensure that security is factored into every link in the IoT chain. For instance, while software breaches get a lot of press, companies developing IoT platforms sometimes dismiss the threat posed by hardware vulnerabilities.
2. Don’t Neglect the Endpoints. “Endpoint software agents can leverage cloud-based real-time data like threat intelligence to prevent, detect, and block new cyber threats targeting IoT devices and systems, and can be designed into the devices and turned on anytime once deployed in operation,” Sirianni says
“It’s important to pay attention to gateways within the network, as they can be used just like next-generation security appliances to inspect and filter all incoming and outgoing traffic between devices and their control systems in the local IoT platform or over the internet. By doing this, organizations will be able to detect malware before it reaches the network or any endpoint devices.”
3. Engage with Machine Learning and Automation. Automation and machine learning will be a crucial component in IoT cybersecurity, Webroot officials predict.” Leveraging machine learning technology allows organizations to draw correlations among the massive volume of data they collect, all in a streamlined manner,” Sirianni says. “With the amount of emerging vulnerabilities, automation, and machine learning are vital to combatting cybercrime effectively. Autonomous remediation of compromised systems is critical for continuity of service and to keep operational costs to a minimum.”
4. Pay Attention to the Cloud. With the influx of connected devices emerging, more information is moving from traditional on-premises systems into the cloud, Sirianni says. “This is a top challenge for OEMs and IT providers as they try to navigate IoT security, as many conventional security technologies only support on-premises systems.”
“At the same time, hackers have their eye on the cloud. The cloud’s rise in popularity has quickly become a key target for cybercriminals, and weaknesses are found and exploited on a regular basis,” Sirianni adds. “The vulnerabilities of cloud-based infrastructure can wreak havoc on IT providers and system integrators. OEMs and system manufacturers should implement a cloud-based security solution that offers a secure online backup solution. This way, it ensures organizations don’t lose data when an endpoint is compromised. The solution should also provide online access to files from any IoT device.”
5. Be Careful with Vendor Selection. Sirianni recommends that companies developing IoT platforms be extremely careful when working with vendors involved with their infrastructure. “You should have a good conversation about cybersecurity risks and do your diligence during vendor selection,” he says. Vendor choice is especially important because of the quickly growing number of IoT-related startups with little experience dealing with information security.
6. Ensure Only Authorized Users Have Access. The U.S. government has a long history of developing computer software that precisely restricts data access according to the rank of the user. Digital access control systems should be carefully planned to ensure that authorized users have access to sensitive information and studying how that data access is being used. That doesn't mean that such systems are foolproof, however. Edward Snowden's downloading of numerous NSA documents has prompted that agency to rethink how it stores sensitive information.
On a related point, passwords continue to be a standard method of authenticating users, yet weak passwords have long been one of the chief reasons behind data breaches. Authenticating users based on multiple factors is substantially more effective from a security standpoint.
7. Carefully Explore How Users Will Deploy Your IoT Application and Cybercriminals Might Exploit It. The Cloud Security Alliance recommends performing use case analysis for IoT platforms accompanied by an architectural diagram that covers how the system interfaces with other computers, the flow of data, and security resources. Following that, the association recommends a thorough exploration of how cybercriminals might target the IoT system.
8. Study the Latest Security Advice from Government and Other Relevant Associations. Companies developing IoT technologies would be well served by studying government recommendations on security. FTC and FDA, for instance, have each released specific security recommendations covering a range of consumer devices and medical technology.
Outside of the U.S. government, the GSMA has recommendations that are specific to the Internet of Things. “The GSMA has very good recommendations on security and security architecture,” Sirianni says. The association released its latest guidance in February 2016. “If most device system designers would adhere to those basic principles, they will create a system that is more robust than the guy next door. And the cybercriminals will go to the system next door,” Sirianni quips.
About the Author
You May Also Like | <urn:uuid:7fdc5b8e-22a7-451a-9187-d7f73e139a63> | CC-MAIN-2024-38 | https://www.iotworldtoday.com/security/why-iot-security-is-scary-and-what-to-do-about-it | 2024-09-19T06:52:15Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651995.50/warc/CC-MAIN-20240919061514-20240919091514-00758.warc.gz | en | 0.951953 | 1,774 | 2.5625 | 3 |
Genomics is a branch of molecular biology that focuses on the study of an organism's entire genome, which is the complete set of its genetic material. It involves analyzing the structure, function, and interactions of genes within a genome to understand their role in various biological processes. Genomics encompasses a wide range of techniques and technologies used to sequence, assemble, and analyze DNA and RNA sequences, as well as to study the organization and regulation of genes. It plays a crucial role in advancing our understanding of genetic variations, hereditary diseases, evolutionary relationships, and the development of personalized medicine. By studying the entire genome, genomics provides insights into the fundamental mechanisms of life and offers opportunities for advancements in fields such as healthcare, agriculture, and environmental science.
Futuristic scope –
The field of genomics holds immense futuristic scope with the potential to revolutionize various aspects of our lives. Here are some key areas where genomics is expected to make significant advancements:
The futuristic scope of genomics is vast and continually evolving as advancements in technology, computational biology, and data analysis continue to drive the field forward. The integration of genomics with other disciplines, such as proteomics, metabolomics, and systems biology, will further enhance our understanding of biological systems and facilitate transformative breakthroughs in various sectors.
Merger & Acquisition –
The field of genomics has witnessed several notable mergers and acquisitions in recent years, as companies and organizations seek to strengthen their capabilities, expand their offerings, and capitalize on the growing opportunities in genomics. While the specific mergers and acquisitions may vary over time, here are some examples of significant deals in the genomics industry:
These examples demonstrate the active merger and acquisition landscape in the genomics industry, driven by the need for technological advancements, market expansion, and strategic collaborations. Mergers and acquisitions allow companies to gain access to new technologies, expand their customer base, and enhance their competitive position in the rapidly evolving genomics market.
Key segments in Genomics:
Genomics, as a broad and multidisciplinary field, encompasses various key segments that contribute to our understanding of genetic information and its applications. Here are some key segments in genomics:
These key segments in genomics represent different aspects of genetic information analysis and application, contributing to advancements in healthcare, agriculture, evolutionary biology, and other fields. Each segment plays a unique role in unraveling the complexities of the genome and its implications in various biological processes.
subsegments in Genomics
Within the broader field of genomics, there are several subsegments that focus on specific aspects of genetic information analysis, interpretation, and application. Here are some subsegments in genomics:
These subsegments within genomics represent specialized areas of research and application that contribute to our understanding of genetic information and its implications. Each subsegment plays a distinct role in unraveling the complexities of the genome, driving advancements in fields such as healthcare, agriculture, and environmental science.
(eco-system) - 1 para on each
The genomics ecosystem is a complex network of stakeholders, including research institutions, technology companies, data repositories, healthcare providers, regulatory bodies, and funding agencies. Collaboration and synergy among these entities are essential to drive advancements, translate genomic discoveries into practical applications, and ultimately improve human health and well-being.
The field of genomics is characterized by a diverse range of players, including research institutions, technology companies, healthcare providers, and diagnostic companies. While the landscape is dynamic and subject to change, here are some notable top players in the genomics industry:
These are just a few examples of top players in the genomics industry. Other notable players include Agilent Technologies, Roche Diagnostics, Illumina's subsidiary Pacific Biosciences, and numerous academic and research institutions that contribute significantly to genomics research and innovation. The genomics field is highly dynamic, with advancements and competition driving the emergence of new players and technologies.
High grown opportunities:
Genomics presents numerous high-growth opportunities across various sectors due to its potential to transform healthcare, agriculture, and other fields. Here are some key areas that offer significant growth opportunities in genomics:
These are just a few examples of the high-growth opportunities in genomics. As our understanding of the genome continues to advance, new applications and opportunities are likely to emerge, further fueling the growth of the genomics industry and its impact on various sectors.
Challenges in Genomics Industry:
The genomics industry faces several challenges that can impact its progress and widespread adoption. These challenges include:
Addressing these challenges requires collaborative efforts from researchers, industry stakeholders, policymakers, and regulatory bodies. Investments in research, technology development, infrastructure, and workforce training are crucial to overcome these challenges and realize the full potential of genomics for improving human health and addressing global challenges.
High CAGR geography
The field of genomics is experiencing significant growth across various regions globally. While the specific growth rates may vary, several geographies are witnessing high Compound Annual Growth Rates (CAGR) in genomics. Here are some regions that show a high CAGR in genomics:
It's important to note that genomics is a rapidly evolving field, and the growth rates in different geographies may change over time. Factors such as government support, research funding, regulatory environment, and healthcare infrastructure play crucial roles in driving the growth of genomics in specific regions.
Genomics is a branch of molecular biology that focuses on the study of an organism's entire genome, which is the complete set of its genetic material. It involves analyzing the structure, function, and interactions of genes within a genome to unde ....see more
The global medical device security market is projected to reach USD 6.59 Billion by 2023 from 4.36 Billion in 2018, at a CAGR of 8.6%. Factors such as increasing instances of healthcare cyberattacks and threats, growth in geriatric population and subsequent growth in chronic disease management, government regulations and need for compliance, growing demand for connected medical devices, and increasing adoption of cybersecurity solutions are driving the growth of the medical device security market. The prominent players in the global medical device security market include Cisco Systems (US), IBM Inc. (US), GE Healthcare (US), Symantec Inc.(US), CA Technologies (US), Philips (Netherlands), DXC Technology (US), CloudPassage (US), FireEye (US), Check Point Software Technologies (Israel), Sophos (UK), Imperva (US), Fortinet (US), Palo Alto Networks (US), ClearDATA (US), and Zscaler (US).
The Global DNA & Gene Chip (microarray) market was valued at $760 million in 2010 and is expected to reach $1,425.2 million by 2015 growing at a CAGR of and 13.4%. The oDNA segment accounted for the largest share – 98% – of the global DNA & gene chip market in 2010. | <urn:uuid:d43a61d9-1c24-4951-b841-b8ebb0550d75> | CC-MAIN-2024-38 | https://www.marketsandmarkets.com/genomics-market-research-33.html | 2024-09-08T10:11:29Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650976.41/warc/CC-MAIN-20240908083737-20240908113737-00858.warc.gz | en | 0.939061 | 1,416 | 3.296875 | 3 |
Types of Spyware
As the name suggests, this is software that is designed to harvest your data and forward it to a third party without your consent or knowledge. The data collected is sent to the creator of the application or perhaps a third-party, and can be stored in a way that is recoverable at later time. Some spyware programs may monitor key presses ('keylogger'), collect confidential information (passwords, credit card numbers, PIN numbers, etc.), harvest e-mail addresses or track browsing habits. In addition to all of this, spyware inevitably affects your computer’s performance.
Adware, Pornware, and Riskware include legitimately developed programs that – in some circumstances – can be used to pose specific threats to computer users (including acting as spyware).
Although many of these programs are likely to have been developed and distributed by legitimate companies, they may include functions that some malware creators choose to use for malicious or illegal purposes.
How can you prevent Spyware?
Discover more about the risks and how Kaspersky Lab can defend you against them: | <urn:uuid:a01853cb-7186-4d41-abef-058d8a7043f8> | CC-MAIN-2024-38 | https://www.kaspersky.com/resource-center/threats/types-of-spyware | 2024-09-13T07:31:57Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651510.65/warc/CC-MAIN-20240913070112-20240913100112-00458.warc.gz | en | 0.956051 | 221 | 2.90625 | 3 |
The terms “cybersecurity” and “environmental protection” aren’t typically used together. You might think of trees, clean air, or endangered species when you hear the phrase “environmental protection.” You might think of hackers, email scams, or identity theft when you hear the term cybersecurity.
So, what is the link between cybersecurity and environmental protection? There’s a lot more to it than meets the eye. Infrastructure is one of the areas where cybersecurity failures pose the most serious dangers to the environment. Take, for example, water infrastructure. Most municipal drinking water supplies in industrialised economies come from utility districts, which rely on huge infrastructure to capture, treat, and distribute drinking water.
Municipal water is then transported away via wastewater infrastructure, where it is treated before being returned to natural systems.
The drinking water and wastewater systems have one thing in common: they both require a lot of infrastructure. Pipelines, huge treatment facilities, and distribution networks are required to treat municipal water at both the consumption and disposal ends of the spectrum. Command and control centres link all of this infrastructure together. And those centres are all run on interconnected computer networks, which are exposed to a variety of security risks ranging from external hacking to malicious insiders.
The moral of the story is that cybersecurity isn’t just a worry for the digital world. Having safe digital and information systems is equally crucial for the physical world, which is very real and very concrete.
The health of the environment (and of the public) is becoming increasingly dependent on cyber command and control systems run on computer networks. That means a cybersecurity failure could result in massive pollution events and critical infrastructure failure. Furthermore, cyber criminals compromising massive infrastructure would take a greater psychological toll on the public (and on social sentiment) than the now-commonplace data breaches affecting things like credit card accounts. Infrastructure hacks that harm the environment or public health would cost far more than the physical damage alone.
This tutorial aims to give a broad overview of cybersecurity’s role in environmental protection. It will primarily focus on the function of cyber-physical systems’ command and control elements in protecting environmental health. However, the lessons acquired at the infrastructure level can easily be transferred to other networks and systems that contribute to environmental and public health protection.
The Environment and Cybersecurity
There are 153,000 public drinking water infrastructure systems and 16,000 public wastewater districts in the United States alone, according to the Cybersecurity and Infrastructure Security Agency (CISA). Around 80% of residents in the United States get their drinking water from a public water system, while about 75% rely on municipal wastewater services.
The CISA list of National Critical Functions includes environmental services such as drinking water and wastewater. National Critical Functions (NCFs) are defined by the CISA as “government and private-sector functions so important to the United States that their disruption, corruption, or dysfunction would have a crippling effect on security, national economic security, national public health or safety, or any combination thereof.”
The list of NCFs is divided into four major categories:
- Connect — refers to information networks and the internet, communications and broadcasting, and telecommunications and navigation services.
- Distribute — has to do with logistics and supply chains.
- Manage — refers to key services including election management and sensitive documents and information, as well as infrastructure, capital markets, medical and health facilities, public safety and community health, and hazardous materials and wastewater.
- Supply — relates to the distribution of fuel and energy, food, critical materials, housing, and drinking water.
Water supply and wastewater management are two of the four key types of infrastructure deemed vital to the continuous safe operation of local, regional, and national government, according to the CISA’s National Critical Infrastructure classification. For this reason alone, environmental infrastructure such as water treatment plants is a desirable target for ransomware seekers, disgruntled employees, and terrorists.
The magnitude of a possible infrastructure attack is similarly enormous. With the same amount of effort that it takes to hack one account or system, a cybercriminal might potentially influence the daily lives of millions of people through social engineering or insider attacks.
It’s worth noting that this type of critical infrastructure classification and designation isn’t restricted to the United States. The European Commission (the EU’s executive arm) also maintains a list of vital infrastructure, which includes drinking water and wastewater management as two of the most crucial systems to protect against attack or disruption.
Another parallel that can be drawn between environmental protection and sustainability and cybersecurity is that both are frequently viewed as issues that are subject to the “tragedy of the commons.” Like regulating the ocean or developing comprehensive climate change policy, cyberspace is seen as vast and poorly defined in terms of boundaries and responsibilities.
Most entities (in this case, companies, organisations, and people) take only a reactive or defensive posture on cybersecurity, much as most legal jurisdictions do not deal proactively with concerns like carbon emissions, sea level rise, or ocean acidification. At the moment, there isn’t much in the way of a proactive cybersecurity police force acting in the public interest. The laws and procedures regulating cybersecurity best practices are proving difficult to establish, just as environmental regulation and enforcement have been.
Cybersecurity Challenges to Environmental Protection Infrastructure: A Case Study
The first case study of a cybersecurity breach that could have an impact on the environment and public health was published in 2016. In some ways this incident, in which authorities withheld numerous details in order to safeguard the investigation into the breach, serves as an ideal example, mainly because it could have happened anywhere.
The attack was carried out by a group of hackers who were able to get access to the backend of a network that controlled a drinking water treatment plant. To mask identifying characteristics, this plant was given the name Kemuri Water Company in press sources (such as this one in Infosecurity Magazine).
Employees noticed strange behaviour in the company’s system’s programmable logic controllers at first. Computer programmes regulate the controllers, which are responsible for releasing predetermined amounts of chemicals at specified moments during the drinking water treatment process, among other things. The pace of drinking water flow from the plant was also hampered by the attack.
After some digital forensics by Verizon Solutions, which handled portions of the water treatment plant’s networking, it was revealed that hackers (apparently connected to Syrian operations, based on the IP addresses used) acquired access to 2.5 million ratepayers’ credit card and billing information. The attackers appeared to be after money and personal information, and it was unclear from the later investigations whether they were even aware they had interfered with the treatment process.
The second case study of cybersecurity hacking with environmental consequences resembles the first in certain ways. The majority of the information is once again shrouded in obscurity, although a few media accounts provide glimpses into the incident.
According to media reports, a group of hackers purportedly based in Russia infiltrated the computer networks managing drinking water infrastructure in two American communities in 2011. The first occurrence took place in an undisclosed city in Illinois, while the second took place in Houston, Texas.
To summarise the cyberattack, a hacker took control of a pump that distributes drinking water via a pipeline. The hacker repeatedly turned one of the pump’s valves on and off, causing the pump to break. Following the FBI and Department of Homeland Security’s investigation and comments, the hacker revealed that he or she (or they) had also acquired access to the South Houston Water and Sewer Department’s system, which was protected by a three-letter password.
It’s no accident that both of the above case studies were linked to hackers based outside of the United States, but threats also come from within. After being fired, unhappy employees of energy infrastructure corporations, including a nuclear reactor in Texas and offshore oil rigs in California, have hacked into proprietary systems to purposefully cause disturbances.
All of this is to suggest that infrastructure operators must account for and defend against multiple cybersecurity threats.
What Makes Cybersecurity Challenging with the Environmental Protection and Environmental Health Field?
Developing cybersecurity best practices in the environmental industry is difficult for a variety of reasons. To begin with, as previously stated, cybersecurity and environmental protection are not commonly associated. Second, while critical infrastructure such as water and electricity systems is vital, much of it was not originally designed with cyberattacks in mind, and as more infrastructure becomes connected, the number of cyber attack surfaces continues to rise. Finally, bad actors have increasingly targeted environmental services and essential infrastructure as a strategy to multiply the effects of an attack by damaging social sentiment and public trust.
One of the most difficult aspects of adopting cybersecurity in the environmental domain is the need for a regulatory framework that is comprehensive yet surgical. Any regulation or rules should ideally leave individual environmental infrastructure operators considerable room to respond to specialised dangers and immediate incidents.
Coming up with generally agreed-upon cybersecurity policies is difficult, as it is in other areas of environmental control. The problem is exacerbated by the fact that different drinking water and wastewater utilities (as well as other forms of infrastructure and environmental service providers) operate their systems using different types of technology and computer networks.
In other words, cybersecurity policy and best-practice advice for infrastructure operators must be both specific and general in order to be effective and influential. Finding a happy medium is a difficult undertaking.
While some of the higher-level organisational and policy elements may appear out of reach for local drinking water and wastewater treatment plant operators, there are a few fairly basic things that can be done to help protect environmental infrastructure against cyber attacks.
Some basic ideas are included in the Water Information Sharing and Analysis Center’s (additional information about this organisation can be found below) list of 15 Fundamentals for Water and Wastewater Utilities.
- Regularly assess the risks.
- Enforce user access controls (and password best practices).
- Restrict physical access to digital infrastructure.
- Create policies and processes for cybersecurity.
- Prepare for cyber-attacks and emergencies.
The Water Information Sharing and Analysis Center’s (WaterISAC) website has the whole list of suggestions.
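Many of the fundamentals above ultimately have to be enforced in software. As a loose illustration (not drawn from WaterISAC’s actual guidance), here is a minimal sketch of a password-policy check of the kind that would have rejected the three-letter password mentioned in the South Houston incident; the length floor and character-class rules are assumptions chosen for the example:

```python
import re

def meets_policy(password: str, min_length: int = 12) -> bool:
    """Return True only if the password satisfies a minimal policy:
    a length floor plus lower-case, upper-case, digit, and symbol classes."""
    if len(password) < min_length:
        return False
    required_classes = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^a-zA-Z0-9]"]
    return all(re.search(pattern, password) for pattern in required_classes)

print(meets_policy("abc"))                # False: far too short
print(meets_policy("C0rrect-Horse-9!"))   # True: passes every check
```

A real deployment would pair a check like this with account lockouts, multi-factor authentication, and the other user-control fundamentals on the list.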
Cybersecurity Solutions for the Environmental Field
Understanding all of the vulnerabilities encountered by environmental and infrastructure service providers is the first step in designing cybersecurity solutions for the environmental field.
The good news is that a number of specialised companies are forming that are knowledgeable with and capable of dealing with the rise in cyberattack-related activity, especially as it relates to environmental infrastructure.
Here are a few examples of organisations that now report on and investigate ecologically sensitive cyberattacks:
The Water Information Sharing and Analysis Center (WaterISAC) is a non-profit organisation situated in Washington, DC, that collaborates with the Environmental Protection Agency. The 2002 Bioterrorism Act established WaterISAC as an official information sharing and operations body. WaterISAC collects data on verified and suspected cyber events from water treatment and waste water treatment infrastructure operators.
The Cybersecurity and Infrastructure Security Agency (CISA) was established as a new governmental agency to address the growing threat of cyberattacks on infrastructure. The agency has a number of cybersecurity materials, as well as standards for reporting cyber incidents.
The American Water Works Association is a Denver-based nonprofit organisation dedicated to the water business. The organisation offers a variety of resources on cybersecurity protocol and best practices.
Preparing for and averting cyber attacks and cyber catastrophes will only grow more crucial in the long run. After a cyberattack on water infrastructure in two American cities by hackers linked to Russia, Lani Kass, a former adviser to the US Joint Chiefs of Staff on security issues, told the BBC that everyone needed to do a better job of understanding cybersecurity and the vulnerabilities of critical infrastructure. In the news report, she was quoted as saying, “The going-in notion is always that it’s simply an occurrence or happenstance. And it’s difficult — if not impossible — to establish a pattern or connect the dots if each instance is seen in isolation. We were caught off guard on 9/11 because we failed to connect the dots.”
Additional Resources and Reading
American Water Works Association — Water sector cybersecurity risk management guidance, 2019.
Cybersecurity and Infrastructure Security Agency — Assessments: Cyber resilience review, 2020.
Institute for Security and Development Policy — Climate change, environmental threats, and cybersecurity in the European High North (Sandra Cassotta, 2020).
WaterISAC — 15 cybersecurity fundamentals for water and wastewater utilities — Best practices to reduce exploitable weaknesses and attacks, 2019.
The BEGIN/ENDBEGIN construction enables you to issue a set of commands. Because you can use this construction anywhere an individual Maintain command can be used, you can use a set of commands where before you could issue only one command. For example, it can follow ON MATCH, ON NOMATCH, ON NEXT, ON NONEXT, or IF.
The syntax for the BEGIN command is

BEGIN
command
.
.
.
ENDBEGIN

where:

BEGIN
Specifies the start of a BEGIN/ENDBEGIN block.

Note: You cannot assign a label to a BEGIN/ENDBEGIN block of code or execute it outside the bounds of the BEGIN/ENDBEGIN construction in a procedure.

command
Is one or more Maintain commands except for CASE, DECLARE, DESCRIBE, END, MAINTAIN, and MODULE. BEGIN blocks can be nested, allowing you to place BEGIN and ENDBEGIN commands between BEGIN and ENDBEGIN commands.

ENDBEGIN
Specifies the end of a BEGIN block.
The following example illustrates a block of code that executes when MATCH is successful:
MATCH Emp_ID
ON MATCH BEGIN
   COMPUTE Curr_Sal = Curr_Sal * 1.05;
   UPDATE Curr_Sal;
   COMMIT;
ENDBEGIN
This example shows BEGIN and ENDBEGIN with ON NEXT:
ON NEXT BEGIN
   TYPE "Next successful.";
   COMPUTE New_Sal = Curr_Sal * 1.05;
   PERFORM Cleanup;
ENDBEGIN
You can also use BEGIN and ENDBEGIN with IF to run a set of commands depending on how an expression is evaluated. In the following example, BEGIN and ENDBEGIN are used with IF and FocError to run a series of commands when the prior command fails:
IF FocError NE 0 THEN BEGIN
   TYPE "There was a problem.";
   .
   .
   .
ENDBEGIN
The following example nests two BEGIN blocks. The first one starts if there is a MATCH on Emp_ID and the second starts if UPDATE fails:
MATCH Emp_ID FROM Emps(Cnt);
ON MATCH BEGIN
   TYPE "Found employee ID <Emps(Cnt).Emp_ID";
   UPDATE Department Curr_Sal Curr_JobCode Ed_Hrs FROM Emps(Cnt);
   IF FocError GT 0 THEN BEGIN
      TYPE "Was not able to update the data source.";
      PERFORM Errorhnd;
   ENDBEGIN
ENDBEGIN
Information Builders
The Data Frame
The fundamental data structure used by the majority of R functions and packages is the data frame. In a data frame, sets of related values constitute rows, while an individual column vector in a data frame contains comparable measures that can be summed, averaged, or subjected to any number of numerical manipulations. At first glance, a data frame seems much like a table, but the parallel breaks down as soon as we look a little closer.
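The column-vector idea is easy to demonstrate in code. The sketch below uses Python’s pandas rather than R, purely for illustration (the column names and figures are invented), to show that a single column of a data frame can be summed or averaged directly:

```python
import pandas as pd

# Each row is one observation; each column is a vector of comparable measures.
sales = pd.DataFrame({
    "Region":    ["Asia", "Europe", "North America"],
    "Sales2009": [100.0, 250.0, 400.0],
    "Sales2010": [150.0, 300.0, 425.0],
})

# Column vectors support direct numerical manipulation.
print(sales["Sales2009"].sum())    # 750.0
print(sales["Sales2009"].mean())   # 250.0
```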
Above we see a rowset generated by a conventional SELECT statement in the Management Studio. (The data are from the ContosoRetailDW sample database.) While we could calculate the total for, say, 2009 from this rowset using R, it would not be straightforward. Similarly, we could not calculate a sum for Asia without jumping through some data hoops. What we need is a resultset something like this:
Such a resultset is referred to as a crosstab or pivot.
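To make the long-to-wide reshaping concrete, here is a small sketch in Python’s pandas (illustrative only; the article performs the equivalent step in Power Query, and the column names and values below are invented):

```python
import pandas as pd

# Long-format rows, as a conventional SELECT statement would return them.
long_rows = pd.DataFrame({
    "Region": ["Asia", "Asia", "Europe", "Europe"],
    "Year":   [2008, 2009, 2008, 2009],
    "Sales":  [120.0, 140.0, 200.0, 220.0],
})

# Pivot: distinct Year values become columns, with one row per Region.
wide = long_rows.pivot_table(index="Region", columns="Year",
                             values="Sales", aggfunc="sum")

print(wide.loc["Asia", 2009])   # 140.0, now a simple row/column lookup
```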
Of course, we could create a view in the SQL database that provides the data pivoted as desired and use the view to import the data into PowerBI. The necessary SQL can be written using either the classic CASE statement method or the newer PIVOT operator. (If you are curious, examples of SQL pivots are provided here.) Unfortunately, this technique has several drawbacks; the most notable is that each different pivot requires writing a new query. In addition, the SQL methods become more cumbersome if there are a large number of values to be pivoted to columns. We might, for instance, want to pivot 67 stock symbols instead of three calendar year. Fortunately, Power Query provides a straightforward means of accomplishing such tasks with little effort.
Pivoting In Power Query
We'll start with a slightly larger resultset than the one shown above. An example view was created that returns 1058 rows representing the sales totals by product subcategory for each of 34 countries. A view was chosen to get rid of some pesky ampersands in subcategory names that would only serve to upset Power Query. (The SQL for the view is available here.)
We start by importing data into Power BI. We will write a query rather than simply import the data from the view directly, because we want the countries to be in alphabetical order. It's a bit of planning ahead; we could easily sort the country names alphabetically if they were going to remain in rows, but since we intend to turn the country names into columns, it's easiest to sort them straightaway.
After clicking OK, we see the preview, but choose to "Edit" rather than immediately load the data.
Clicking Edit brings us to the Query Editor.
When Power Query first appeared, pivoting was available only through the Power Query Formula Language (colloquially known as "M"). Now, however, there is a Pivot Column menu choice on the Transform tab. You must highlight the column containing the data you would like to pivot before you click on the Pivot Column button, as implied by the tooltip. When you click on the button, Power Query will make note of which column is highlighted and give you the opportunity to select which of the remaining columns will provide the displayed values.
In this example, we specify "TotalSales" as the values column.
After we click "OK", we see a potential problem in the data preview.
There are some cells that contain "null". This is unacceptable to many of the statistical algorithms we may ultimately wish to apply. It is important to note that these nulls are not present in the SQL resultset and cannot be eliminated using T-SQL functions such as ISNULL or COALESCE. No, these nulls appear in our pivoted data because there are combinations of subcategory and country that simply do not appear in any row in the original resultset. Once again, Power Query provides an easy solution. We simply highlight all the country columns and then click Replace Values, also on the Transform tab.
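The same gap-filling step has a direct analogue in code: subcategory/country combinations that never occur in the source rows leave missing cells after the pivot, and those cells must be replaced with zero before most statistical routines will accept the data. A hypothetical pandas sketch (the names and values are invented):

```python
import pandas as pd

rows = pd.DataFrame({
    "Subcategory": ["Laptops", "Laptops", "Cameras"],
    "Country":     ["Armenia", "Australia", "Armenia"],
    "TotalSales":  [500.0, 800.0, 300.0],
})

# Cameras/Australia never appears in the source, so the pivot
# produces a missing cell for that combination...
wide = rows.pivot_table(index="Subcategory", columns="Country",
                        values="TotalSales")

# ...which fillna(0) replaces, just as Replace Values does in Power Query.
wide = wide.fillna(0)
print(wide.loc["Cameras", "Australia"])   # 0.0
```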
Now we have our final query. A resultset any R data frame would be proud to call its own.
Power BI is taking its place alongside Microsoft Excel as one of the predominant workhorses of business intelligence, analytics, and data visualization. Power Query is integral to both and can be indispensable for bringing data into the correct form for detailed examination. | <urn:uuid:0381f894-4556-4bb3-ac8f-881d6ad128ab> | CC-MAIN-2024-38 | https://www.learningtree.ca/blog/preparing-sql-data-r-visualizations-using-power-query-pivot/ | 2024-09-15T19:14:05Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651647.78/warc/CC-MAIN-20240915184230-20240915214230-00258.warc.gz | en | 0.923623 | 887 | 3.546875 | 4 |
The introduction of data center aisle containment has had a significant impact on the way in which these facilities are designed. This is because data center aisle containment delivers a host of impressive benefits, to both the facility owners and the planet.
An Overview of Data Center Aisle Containment
Aisle containment is a system that separates the hot air being emitted by hardware from the cooled air needed to maintain the functionality of this equipment. By avoiding the risk of air of very different temperatures being mixed, the data center can enjoy a consistent and predictable cool temperature, which helps to keep IT equipment from overheating. This, in turn, ensures greater uptime for hardware and a longer usable life for this expensive equipment.
Helping To Save Energy
Part of good data center management is keeping an eye on costs, and optimizing the use of all resources, from finances and staff to hardware. This makes data center aisle containment a vital part of maintaining cost and energy efficiency.
By keeping aisles contained, it is possible to significantly reduce the cost of keeping a data center at the required cool temperature, and also provides benefits to the environment, as containment reduces the data center’s carbon footprint. This means that opting for aisle containment is an excellent way to meet sustainability goals, with quantifiable decreases in carbon emissions once it is implemented.
Types Of Data Center Aisle Containment
Data center aisle containment comes in two main types: hot aisle containment and cold aisle containment. The choice of which type of aisle containment to implement will depend on the overall design of the data center, as factors such as the layout and size of the space can have an impact on the best approach.
Hot aisle containment is a system in which hot air, which has been exhausted from IT hardware equipment, is guided back to the facility's air conditioning return without it meeting the cool air intake for this machinery. This is achieved by using a physical barrier to direct the hot exhaust air, together with doors at the end of the aisle and an arrangement of baffles and ductwork. As the hot air tends to rise, drop ceiling plenums are often chosen as a means to direct the exhaust air back to the AC system.
Cold aisle containment fully encloses an aisle in order to ensure that the hardware within receives a consistent and uniform supply of air at the correct temperature. This system relies on air flow controls configured to meet the specifications of each aisle, optimising the efficiency of air delivery and preventing hot spots. In addition to doors at the end of the aisle, cold aisle containment enclosures feature a roof to ensure the air supply is maintained.
In Today’s Rapidly Advancing Technological Landscape, Automation and Artificial Intelligence (AI) Are Transforming Various Industries, Including Healthcare
Speech recognition technology, in particular, has emerged as a powerful tool in the pursuit of balancing automation and personalized care. By harnessing the capabilities of speech recognition, healthcare providers can streamline administrative tasks, improve efficiency, and enhance patient experiences, all while maintaining a human touch.
Leverage Speech Recognition
One area where speech recognition technology excels is in automating administrative tasks. Healthcare providers are often burdened with extensive documentation, such as patient notes, medical records, and insurance claims. By leveraging speech recognition, physicians and nurses can dictate their notes, allowing the technology to transcribe them accurately and efficiently. This eliminates the need for manual typing and frees up valuable time for healthcare professionals to focus on direct patient care.
Moreover, speech recognition technology can integrate seamlessly with electronic health record (EHR) systems. This integration enables real-time data entry and retrieval, reducing the chances of errors associated with manual data input. Healthcare providers can quickly access patient information, medical histories, and test results simply by speaking commands, which enhances productivity and enables more informed decision-making during patient consultations.
Speech recognition technology can be tailored to individual patients, taking into account their specific requirements and preferences. For patients with disabilities or those who face challenges in traditional communication methods, such as typing or writing, speech recognition can empower them to express their concerns, ask questions, and actively participate in their own care. This personalized approach fosters a sense of autonomy, dignity, and inclusivity for all patients.
Find the Balance
To strike the right balance between automation and personalized care, healthcare providers must invest in comprehensive training and education programs for both professionals and patients. By familiarizing healthcare professionals with the capabilities and limitations of speech recognition technology, they can effectively incorporate it into their workflow and leverage it as a support tool rather than a replacement.
Speech recognition technology holds tremendous potential in balancing automation and personalized care within the healthcare industry. By automating administrative tasks and streamlining documentation, healthcare professionals can focus more on direct patient care. Simultaneously, speech recognition technology can enhance the patient experience by facilitating active listening, personalized communication, and inclusivity.
Striking the right balance requires careful consideration of the technology’s capabilities, ongoing training, and open communication with both healthcare professionals and patients. With proper implementation, speech recognition technology can revolutionize healthcare, making it more efficient, effective, and compassionate.
The Leading Speech Recognition Distributor
Nuance® Dragon® Medical One is a secure, cloud-based speech platform for doctors to securely document complete patient care.
Designed for fast-paced environments, Dragon Medical One is the industry's leading speech-recognition software for capturing the patient's story. This portable, flexible, and secure app can help medical staff produce documentation from any location up to 45% faster.
To learn more about Dragon Medical One, contact eDist today! | <urn:uuid:80c5ce41-5e78-4129-acf9-0b9f165a8bf1> | CC-MAIN-2024-38 | https://www.edist.com/balancing-automation-and-personalized-care-with-speech-recognition-technology/ | 2024-09-20T17:42:50Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701419169.94/warc/CC-MAIN-20240920154713-20240920184713-00758.warc.gz | en | 0.915327 | 630 | 2.65625 | 3 |
A recent article by Maria Cosgrove in CSO asked the question “Wouldn’t it be nice if software developers had something like spellcheck, but instead of catching simple grammar mistakes, it caught basic security problems?”
Very good question, especially when you think about all the cyber security problems and attacks we've seen in recent months. The reality is that developers are still writing software with security vulnerabilities. As project timelines contract and more people become involved, the development cycle grows more complex and prone to problems. If the problems were rarely seen bugs, it would be one thing, but why are there so many basic errors in so much software?
Ron Arden, Executive Vice President at Fasoo, was quoted in the article saying, “Today’s integrated development environments can already catch common syntax errors, like missing semicolons. If there’s a function you’re using, it shows the parameters, but it won’t tell you if there’s a SQL injection or cross-site scripting error.”
So back to the original question of using a tool like a spellchecker that would identify and help eliminate these problems. This would help developers fix vulnerabilities immediately and also learn to write more secure code in the process.
Traditionally, companies test software for vulnerabilities after it has been written, during a QA process, but that can be too late, since it introduces too many problems and delays in the development cycle. A better approach is to use application security testing during code development to detect security vulnerabilities with an analysis engine based on semantic and syntactic methods. This not only improves the code, but also helps meet strict compliance requirements that follow CWE, OWASP, CERT, and other international standards.
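As a toy illustration of the "spellchecker" idea (this is not how any commercial analyzer actually works, and the pattern below is deliberately naive), even a simple static check can flag obviously dangerous constructions, such as a SQL statement assembled by string concatenation, at the moment the code is written:

```python
import re

# Naive pattern: a quoted SQL keyword followed by + or % and a variable,
# the classic injection-prone way of building a query from input.
SQL_CONCAT = re.compile(
    r"""["'](?:SELECT|INSERT|UPDATE|DELETE)\b[^"']*["']\s*(?:\+|%)\s*\w+""",
    re.IGNORECASE,
)

def lint_line(line: str) -> list[str]:
    """Return warnings for a single source line (illustrative only)."""
    if SQL_CONCAT.search(line):
        return ["possible SQL injection: query built from untrusted input"]
    return []

print(lint_line('cursor.execute("SELECT * FROM users WHERE id = " + user_id)'))
print(lint_line('cursor.execute("SELECT * FROM users WHERE id = %s", (user_id,))'))  # parameterized form
```

A production analyzer would parse the code semantically rather than pattern-match text, which is exactly the semantic and syntactic analysis described above.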
Cyber attacks typically target network weaknesses, causing organizations to protect themselves with firewalls, intrusion prevention systems, and similar tools. Application-level attacks, however, target weaknesses in the software that companies develop and use. It is difficult to stop malware-related attacks after software has been deployed. It is better to eliminate these attack opportunities during development by detecting all security vulnerabilities in the source code.
Another issue is the cost of fixing vulnerabilities after you release software. Studies show it can cost less than $1,000 to fix a bug during the coding process, but over $14,000 to fix it after release. This doesn't take into account the remediation a customer may need to address any problems the bugs caused in the first place.
Checking security vulnerabilities during development is the optimal approach and will help minimize potential problems before deployment. This will dramatically reduce the security attack surface in a production system and help us all sleep better at night. | <urn:uuid:51043583-1f1d-498a-8054-f1bd805228fb> | CC-MAIN-2024-38 | https://en.fasoo.com/blog/should-developers-have-a-spellchecker-for-security/ | 2024-09-11T00:50:17Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651323.1/warc/CC-MAIN-20240910224659-20240911014659-00822.warc.gz | en | 0.95776 | 558 | 2.6875 | 3 |
The upheaval of 2020 has forced us all to reimagine familiar pathways, and parents are no exception. Cautious about sending their kids back into the classroom, families across the country are banding together to form remote “learning pods.”
Learning pods are small groups of families with like-aged children that agree to educate their kids together. Parents also refer to learning pods as micro-schools, pandemic pods, and bubbles. According to parents, a pod environment will allow students to learn in a structured setting and safely connect with peers, which will also be a boost to their mental health following months of isolation.
According to media reports, each pod’s structure is different and designed to echo the unique distance learning challenges of each family. In some pods, parents will determine the curriculum. In others, a teacher or tutor will. As well, parents have set some pods up so they can take turns teaching and working. Some will have a cost attached to cover teacher fees and materials. Working parents are also creating “nanny share” pods for pre-school aged children.
Facebook is the place to connect for families seeking pod learning options. There are now dozens of private Facebook “pod” groups that enable parents to connect with one another and with teachers who have also opted out of returning to the classroom.
While parents may structure pods differently, each will need to adopt standard digital security practices to protect students and teachers who may share online resources. If pod learning is in your family’s future, here are a few safeguards to discuss before the pod-based school year begins.
To keep the family discussion about online safety fun, here are 6 Flashcard Tips from MBot to print out and discuss with your kids.
Digital Safety & Learning Pods
Be on the lookout for malware. Malware attempts have continued to rise since the start of COVID. Pod learners may make heavier use of email, web-based collaboration tools, and outside home networks, all of which can expose them to malware risks. Advise kids never to click unsolicited links contained in emails, texts, direct messages, or pop-up screens. Even if they know the sender, coach them to scrutinize the email or text. To help protect your child’s devices against malware, phishing attacks, and other threats while pod learning, consider updating your security solutions across all devices.
Use strong passwords. Back-to-school is a great time to review what makes a strong password. Opt for two-factor authentication to add another layer of protection between you and a potential attacker.
Consider a VPN. Your home network may be safe, but you can’t assume other families follow the same protocols. Cover your bases with a VPN. A virtual private network (VPN) is a private network your child can log onto safely from any location.
Filter and track digital activity. One digital safeguard schools usually have that a home environment may not is a firewall. Schools erect firewalls to keep kids from accessing social networks and gaming sites during school hours. For this reason, families opting for pod learning might consider parental controls. Parental controls allow families to filter or block web content, log daily web activity, set time limits, and track location.
Learning pods are still taking shape at the grassroots level, and there are still a lot of unknowns. Still, one thing is clear: Remote education options also carry an inherent responsibility to keep students safe and secure while learning online.
Our personal identities are complex. We choose to share information about ourselves differently depending on the context of a relationship or situation. Aspects of our identity can change over time. We may choose to compartmentalize aspects of our lives for a greater feeling of privacy and safety.
Pseudonyms are Not New
Pseudonyms are an old concept that people use as part of managing their privacy. The word has been in the English language for over 300 years and originates in Greek. As described by Proofed, a pseudonym is ‘not the name someone uses on a day-to-day basis, and [is] only used for a specific purpose.’
These days, you may be most familiar with authors using a pseudonym when they publish books and other material under a different name. Authors may choose pseudonyms for many reasons, such as to:
- Publish without carrying over positive or negative expectations from their past work
- Publish in different genres and establish a separate brand identity for work in each genre
- Avoid gender, racial and other types of bias and discrimination that may prevent them from publishing at all
- Separate their existing professional career from an emerging writing career
- Protect the privacy of their personal life and those of their family
Using Pseudonyms Increases Privacy and Safety
Authors and publishers do not have an exclusive right to use pseudonyms. There are equally valid reasons why all of us can apply pseudonyms and reap the benefits, such as to:
- Share a limited portion of our identity for a specific purpose, transactions or interaction
- Act differently in a specific context from how our family, friends and work colleagues might know us
- Avoid some forms of bias and discrimination
- Increase our feeling of privacy and safety when interacting online
Pseudonyms and pseudonymity are central to how Anonyome Labs thinks about the world. We call our digital identities ‘Sudo’ (like pseudonyms) because a person can use them to compartmentalize communications and other capabilities using MySudo (our consumer application) or an application built using the Sudo Platform. When you create a Sudo identity, you can choose its name, avatar, email address, phone number and more. Based on your intended purpose for the Sudo, you will choose how closely it matches your personal identity.
It’s a good privacy technique for our MySudo users to employ for many interactions they have online and offline.
Pseudonymization as a Data Protection Technique
Pseudonymization (pronounced pseu-don-ym-i-za-tion) is a privacy engineering technique used to increase the protection of personal data. As with pseudonyms, pseudonymization uses alternate identifiers for data so that linking that data back to a personal identity is more difficult. A good way to think about pseudonymization is a mid-point between directly identifying personal data and anonymous data.
- Directly identifying personal data is data that can uniquely identify a person without further effort or data sets. These are the things we informally think of as PII. Examples include your full name, home address and driver’s license number.
- Anonymous data is unable to be used to identify a person, even when combined with other data sets. In practical terms, re-identification of a person from anonymous data is considered impossible. An example is a randomly generated identifier for a web browsing session, where no record is kept of the relationship between the session and personal data. Data protection regulations, such as the European Union’s GDPR, normally don’t cover truly anonymous data.
- Pseudonymous data is data that can be used to identify a person, but only when combined with other data sets. It could also be called indirectly identifying personal data. An example is where a randomly generated identifier is created for a user entry in an application database, and a separately controlled compliance database holds the directly identifying personal data and the links to the identifiers in the application database. By itself, the application database is considered pseudonymous data, but becomes directly identifying personal data when combined with the compliance database.
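The split described above can be sketched in a few lines of Python. The store names and fields are invented for illustration; a real deployment would keep the two stores under separate access controls and encryption.

```python
import secrets

# Application store holds only a random identifier plus non-identifying data;
# a separately controlled "compliance" store holds the mapping back to
# directly identifying personal data.
application_db = {}   # pseudonymous data: safer to process and analyze
compliance_db = {}    # directly identifying data, under stricter controls

def pseudonymize(full_name, email):
    """Store personal data under a random identifier and return the token."""
    token = secrets.token_hex(16)          # random, meaningless on its own
    compliance_db[token] = {"name": full_name, "email": email}
    application_db[token] = {"plan": "free", "logins": 0}
    return token

def reidentify(token):
    """Re-identification requires access to the compliance store."""
    return compliance_db[token]

token = pseudonymize("Ada Lovelace", "ada@example.com")
# The application store alone cannot identify the person:
assert "name" not in application_db[token]
# Combining both stores re-identifies them:
assert reidentify(token)["name"] == "Ada Lovelace"
```

By itself, `application_db` is pseudonymous data; it only becomes directly identifying when combined with `compliance_db`.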
Because re-identification is possible from pseudonymous data, it is not considered anonymous data under regulations such as GDPR. But GDPR does acknowledge that pseudonymization is a desirable privacy engineering technique, providing these benefits:
- Reduced risk of breach of directly identifying personal data, since it may reduce the locations where that data is stored and how many people have access to that directly identifying personal data
- Reduced impact in the event of data breaches, since the data breached may not be directly identifying
- Increased amount of data processing permitted without increased privacy risk for individuals. This may be especially true in areas such as scientific and statistical research.
The protection offered by pseudonymized data may be very close to anonymous data where the complexity to re-identify a person is high. For example, if re-identification requires combining private data sets from multiple companies, then the likelihood of re-identification may be limited to the ability for a law enforcement agency to subpoena each of the organizations and combine the data.
How Anonyome Labs uses Pseudonymization
We use pseudonymization extensively in the Sudo Platform. This reduces the risk of personal data being breached and the impact if a breach were ever to occur (which we also work to prevent in the first place through security and privacy by design).
Here are some examples:
- Where we do need to store personal data, such as the results of identity verification performed before a user can use MySudo virtual cards, we keep that data separate from other data in the Sudo Platform and protect it with additional technical and administrative controls to reduce risk of that information being breached.
- The way we analyze use of MySudo takes a minimalist approach, achieved in part by using pseudonymization of data (as well as aggregation, redaction and other techniques).
If the concepts of pseudonyms and pseudonymization interest you, download MySudo today to see how you can use it to organize and compartmentalize your personal identity. | <urn:uuid:25c3c17e-59d6-4661-88b1-fad1a3b5e7fd> | CC-MAIN-2024-38 | https://anonyome.com/2020/10/what-is-pseudonymization/ | 2024-09-12T05:38:12Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651422.16/warc/CC-MAIN-20240912043139-20240912073139-00722.warc.gz | en | 0.929877 | 1,248 | 3.171875 | 3 |
Dynamic code analysis tests an application while it is executing. This methodology enables developers to discover errors and security threats that may not be spotted during static examination.
It simulates a malicious attacker or an end-user to look for runtime vulnerabilities, including web server misconfigurations. New tools are streamlining and automating this process to help CISOs sleep better at night.
What is Dynamic Analysis?
An outage caused by application issues or malware has an immediate and damaging effect on employees and customers, raising questions about user data protection and integrity. Such events can have lasting repercussions for a business’s reputation, which is why teams should implement dynamic analysis systems to detect bugs before they reach production.
Dynamic analysis refers to the practice of inspecting an application or software while it’s running, using specialized tools for program monitoring and behavior reporting. Fuzzers throw unexpected inputs at a program to stress-test its resilience; memory analysis tools detect memory-management defects like buffer overflows; and test frameworks automate running different inputs through a program to observe its behavior and produce reports.
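A toy sketch of the fuzzing idea: random inputs are thrown at a function until one triggers a defect that only manifests at runtime. The buggy parser here is invented for the example; real fuzzers add coverage guidance and input mutation.

```python
import random

def parse_record(data):
    """A deliberately buggy toy parser: fails on one rare input shape."""
    if len(data) > 2 and data[0] == 0xFF:
        return data[1] // data[2]      # ZeroDivisionError when data[2] == 0
    return 0

def fuzz(runs=5000, seed=1):
    """Throw random inputs at the parser; return the first crashing one."""
    rng = random.Random(seed)
    alphabet = [0x00, 0x01, 0xFF]      # tiny alphabet keeps the toy search fast
    for _ in range(runs):
        data = bytes(rng.choice(alphabet) for _ in range(rng.randrange(1, 8)))
        try:
            parse_record(data)
        except ZeroDivisionError:
            return data                # a defect that only shows up at runtime
    return None

crasher = fuzz()
assert crasher is not None
assert crasher[0] == 0xFF and crasher[2] == 0
```

A static rule set might never flag this division, because the dangerous path only executes for a specific input shape; running the program finds it quickly.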
Dynamic analysis offers many advantages over static code analysis, chief among them the ability to quickly detect defects and vulnerabilities that would otherwise be hard to spot. Dynamic code analysis makes this possible by showing you their effects on actual application behavior rather than being limited by rules alone.
An issue in how an app handles exceptions could cause it to crash under certain conditions, making it hard for static code analysis tools to detect this error because it only manifests when running under specific circumstances.
Dynamic analysis requires running the application under evaluation in order to observe its behavior, which is known as black-box testing. Static code analysis, by contrast, is white-box: testers have access to the architecture, source code and libraries used by an application.
Static analysis can be invaluable in pinpointing numerous defects and issues on a site, yet can miss others. For instance, an attacker could potentially use clickjacking (repurposing clickable content from your website to deliver malware to visitors) which is known as clickjacking; dynamic analysis checks that your assets are properly displayed without redirecting visitors to malicious websites.
What is the Difference Between Static and Dynamic Analysis?
Dynamic analysis seeks out issues that might occur while an application is running in its production environment, by simulating attacks against its code and seeing how it responds. It serves as an excellent complement to static analysis by uncovering issues it might miss.
Dynamic analysis is used in the Quality Assurance phase of the SDLC to detect issues that could be exploited at runtime (such as dereferencing null pointers, accessing array elements beyond their end, or reusing dynamically allocated blocks without first freeing them). Static analysis, on the other hand, is conducted earlier during development to detect errors in source code that would otherwise remain undetected once an application has been compiled and run.
Static analysis plays an essential part of software lifecycle, yet is limited in its ability to address problems in complex applications. It cannot provide an in-depth view of an app and its interactions with other applications, databases and external services.
Therefore, both static and dynamic analysis should be employed concurrently in order to effectively detect all vulnerabilities. Many teams already employ dynamic analysis as part of their routine software testing and debugging process – running memory profilers or performing load/stress tests are examples of automated dynamic analysis used as part of routine software debugging activities.
Dynamic analysis combined with SAST can provide additional checks that address various forms of vulnerabilities. It can help validate SAST results as well as detect new bugs in running applications.
Dynamic analysis tools come in all shapes and sizes, from the free and open-source Valgrind to more expensive proprietary ones that offer greater speed and accuracy. However, it should be remembered that dynamic analysis can be evaded – for instance, a program that detects it is running under instrumentation or in a sandbox can alter its behavior – making dynamic analysis less effective in those cases.
What are the Benefits of Dynamic Analysis?
Many development teams already employ dynamic analysis in some form, from running applications to verify fixes, to testing software against different inputs, to simply starting up the app and watching its behavior. Dynamic analysis provides invaluable information that static analysis alone cannot find, such as memory leaks and null-pointer dereference vulnerabilities.
When bugs or security threats emerge in a codebase, they must be detected quickly. Dynamic analysis allows developers and testers to detect issues as they execute an application, so problems can be debugged before hitting production environments and having devastating repercussions for the business.
Dynamic analysis allows you to see exactly how your application runs and examine the data it generates in real-time, providing real-time visibility of performance bottlenecks, memory leaks and security flaws such as XSS attacks or SQL injection attacks that cannot be detected with static analysis alone.
Dynamic analysis differs from static analysis in that it only checks active lines as they are executed, making it much faster and more efficient. Therefore, dynamic analysis should be combined with static analysis for an in-depth examination of application codebase.
Dynamic code analysis can detect misconfigurations in your web servers and other infrastructure components that would go undetected by static scans, such as clickjacking attacks that redirect users from harmless site elements to malicious websites they hadn’t intended to visit.
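One such misconfiguration check can be sketched as a function over a server’s response headers. The header names (`X-Frame-Options`, `Content-Security-Policy` with `frame-ancestors`) are real anti-clickjacking defenses; the function itself is only illustrative of what a dynamic scanner automates.

```python
def framing_protected(headers):
    """Return True if the response headers defend against clickjacking."""
    h = {k.lower(): v.lower() for k, v in headers.items()}
    if h.get("x-frame-options") in ("deny", "sameorigin"):
        return True
    csp = h.get("content-security-policy", "")
    return "frame-ancestors" in csp    # e.g. frame-ancestors 'none'

# A hardened response passes; a bare one is flagged:
assert framing_protected({"X-Frame-Options": "DENY"})
assert not framing_protected({"Server": "nginx"})
```

A dynamic scanner would fetch live responses and run checks like this across every page it crawls, something a static source scan cannot see.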
Dynamic code analysis tools are simple to integrate into any development environment and continuous integration (CI) pipeline, and can even be deployed without additional overhead or workarounds. Although designed primarily for simple apps, more robust tools exist that can analyze complex multithreaded programs with multiple CPUs or GPUs utilizing them all simultaneously. Most tools offer an easy to use graphical user interface so developers can better understand what processes and threads are active while helping determine resource usage more accurately.
How to Perform Dynamic Analysis
Static code analysis is a powerful white-box testing methodology, but it isn’t effective when it comes to testing running applications and their dependencies. Dynamic analysis, on the other hand, looks at the tangible actions of software and provides a real-time view of potential security vulnerabilities that would be impossible to detect with static tools. This method can reduce mean time to identification for production incidents, improve visibility into application functionality and provide better risk management throughout the lifecycle of an enterprise application.
Dynamic analysis requires the program to be executed, which can be a lengthy process for complex applications. To speed things up, dynamic analysis can use a technique called program slicing: finding a reduced form of the program that still produces the selected subset of behavior, so that only the relevant code paths need to be exercised. (Note that “dynamic analysis” is also a term in structural engineering – calculating a building’s natural periods and mode shapes under vibration and comparing predicted forces against design codes – but that field is unrelated to software testing.)
As a result, reducing the time it takes to debug and identify bugs can significantly decrease the overall development and maintenance of an application. Performing dynamic analysis on a live application can also help developers quickly identify and isolate memory or performance issues, as well as security risks resulting from dependencies tied to the application such as database servers, web application services or third-party integrations.
While many tools claim to be capable of dynamic analysis, they are often limited in scope and may not be able to effectively handle complex applications. In addition, they are not as flexible or scalable as other solutions.
A dynamic analysis tool should be robust enough to handle complex applications and provide an easy-to-use graphical interface so that developers can control and examine the information gathered during the process. TotalView offers a complete suite of dynamic analysis tools for identifying complex problems that can affect application performance, security and reliability. Learn more about the benefits of using dynamic analysis or start a free trial to see how it can benefit your organization. | <urn:uuid:68207c80-fdbf-46b7-90f9-f22735c6eb25> | CC-MAIN-2024-38 | https://cybersguards.com/what-is-dynamic-code-analysis-2/ | 2024-09-15T23:19:53Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651668.26/warc/CC-MAIN-20240915220324-20240916010324-00422.warc.gz | en | 0.938179 | 1,582 | 2.859375 | 3 |
SAN DIEGO (AP) — A rarely seen deep-sea fish resembling a snake was found floating dead on the ocean surface off the San Diego coast and was brought ashore for study, marine experts said.
The silvery, 12-foot-long (3.6-meter) oarfish was discovered last weekend by a group of snorkelers and kayakers in La Jolla Cove, north of downtown San Diego, the Scripps Institution of Oceanography said in a statement.
It’s only the 20th time an oarfish is known to have washed up in California since 1901, according to institution fish expert Ben Frable.
Scripps noted that oarfish have a mythical reputation as harbingers of natural disasters or earthquakes, although no connection has been proven.
Oarfish can grow longer than 20 feet (6 meters) and typically live in a deep part of the ocean called the mesopelagic zone, where light cannot reach, according to the National Oceanic and Atmospheric Administration.
Swimmers brought the La Jolla Cove oarfish to shore atop a paddleboard. It was then transferred to the bed of a pickup truck.
Researchers from the NOAA Southwest Fisheries Science Center and Scripps planned a necropsy on Friday to try to determine the cause of death.
There’s a common belief in the security world that obscurity shouldn’t be used as a layer of protection.
This line of thinking is based on Kerckhoffs’s principle, which states that the security of a cryptographic system should depend on its key, not on the secrecy of its design. When analyzing cryptographic primitives or doing any sort of system audit, letting auditors in on the details makes complete sense. Skilled reviewers should spend their time searching out novel weaknesses and not on layers that are intended to slow an attacker or to alert someone to an attack.
That said, there is much to be gained through properly applied obfuscation in deployed systems.
If there’s one thing that the history of cryptography has taught us, it’s that each system has a lifespan. Some of this is expected. Over time, RSA key sizes have grown as machines have increased in speed and power.
Yet, experience shows that by investing a significant amount of time to expert research, along with appropriate funding, some flaw will be discovered that will compromise a system. If not found in the design, flaws will be discovered in the implementation or one of its dependencies (BIND, OpenSSL).
The implementation is always more susceptible to attack than the algorithms and protocols. Any system in frequent use by more than a dozen people could be breached by a patient nation-state that’s willing to expend serious resources to gain access.
We have seen this time and time again. Hushmail, one of the first email providers to have security as its main feature, eventually backdoored its client under subpoena. Google and Microsoft were discovered to be providing (sometimes without their own knowledge) various levels of access to the NSA through programs like PRISM.
We have also frequently seen that what is currently impossible or merely theoretical may soon become the favored tool of script kiddies. So when it comes to an audit, the trick is having a useful model for evaluating each layer, so that there is confidence in the level of delay a system is expected to produce against certain classes of attackers. Security developers can then expend resources in line with expectations, keeping in mind that complexity yields vulnerabilities.
Looking at a few trivial measures will give us a model. For instance, what does moving your SSH daemon to a different port give you? Well, someone wanting to scan the internet for all SSH daemons on port 22 would have to send four billion packets, minus known private IP space, RFC1918/Bogons and so forth. The BGP Report says there would be a little shy of 3 billion advertised (and therefore routable) IPs. By randomly choosing a port, an attacker would have to send 2^15 extra packets on average for each host that moved their SSH daemon. If the attacker has to work exponentially harder, that’s a big win.
Yet, not every host is going to do this. If one in forty hosts randomly chooses a port, then the attacker’s work has only gone up by a small percentage; the attacker only has to scan on hosts that don’t reply on 22. Instead of sending 2.7 billion packets, they might send an additional 30,000 packets, but only on 2.5 percent of hosts. That’s still under the 3 billion mark.
This model really only helps against the least skilled level of attacker. For targeted attacks, it only increases the attacker’s work by the number of servers you have times 30,000. When we can scan the entire internet in 10 minutes/port, that’s inconsequential. An attacker can scan an entire class B, all ports in the same 10 minutes. An attacker with a small botnet can do every port on every machine in the same amount of time (as was seen a few years ago with the internet census that utilized compromised routers). The multiple does add up, but the search space is too small for it to make enough of a difference.
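The back-of-the-envelope numbers above are easy to check. The figures follow the article’s own assumptions (65,536 ports, roughly ten minutes to scan the whole internet on a single port), not fresh measurements:

```python
ports = 65_536

# Average extra probes needed per host that moved sshd off port 22:
avg_extra = ports // 2
assert avg_extra == 2 ** 15 == 32_768

# At roughly ten minutes to scan the entire internet on a single port,
# checking every port sequentially from one machine takes:
minutes = ports * 10
days = minutes / 60 / 24
print(f"{days:.0f} days single-threaded")
# ...but a botnet scanning one port per node collapses this back
# toward the original ten minutes.
```

The arithmetic confirms the point: the per-host cost grows, but parallelism makes the total search space too small for port randomization alone to stop a determined attacker.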
Another way to examine this is through lockpicking. Think of each pin as a layer of security. If you have a lock with five pins and a lockpicker uses a simple raking technique, they only have to brute force the correct pin configuration to open the lock. Yet advanced lock pickers separately manipulate each pin until the lock opens. It’s like a video game with checkpoints. You don’t restart from the beginning. Attackers may want to pick your digital locks, but they should at least be forced to manipulate each pin.
The layers do serve a purpose. Some of these mechanisms are like a Nightingale Floor, which was used centuries ago to alert temples and palaces to an intruder. Others are like barbed wire or a floor covered in tacks; easily handled by a prepared attacker, but effective against more opportunistic or time constrained intruders. The best security measures are invisible until tripped, relying on the equivalent of tribal knowledge shared among your team. This could be something that causes a system to fire an alert and drop all further traffic if anyone runs the ‘w’ or ‘ps’ command in a shell.
The first time an attacker does that, they’ll know not to do it again, but you’ve already been alerted and are watching out (and you’ve still got other mechanisms like that for the next time).
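A minimal sketch of such a tripwire, assuming a team can hook command dispatch on the host. The `alert` function and the command set are stand-ins for whatever pager, SIEM, or syslog integration a team actually uses:

```python
CANARY_COMMANDS = {"w", "ps"}   # commands your own team knows never to run
alerts = []

def alert(event):
    """Stand-in for a real alerting hook (pager, SIEM, syslog...)."""
    alerts.append(event)

def dispatch(command, user):
    """Run a command, tripping the wire on canary commands."""
    name = command.split()[0]
    if name in CANARY_COMMANDS:
        alert(f"tripwire: {user} ran '{name}'")
        return ""               # drop further output; the session is now watched
    return f"(output of {command})"

dispatch("ls -la", user="deploy")   # normal traffic: no alert
dispatch("w", user="deploy")        # intruder-style recon: wire trips
assert alerts == ["tripwire: deploy ran 'w'"]
```

The value lies in the shared tribal knowledge: legitimate users know the floorboard is there, so the first creak is a high-confidence signal.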
All security mechanisms, at some point, are ultimately based in obscurity. There are multiple types that people should consider. It is up to each of us to determine whether we want the ultimate in obscurity, or the metaphorical equivalent of a floor covered in tacks.
Jonathan Wilkins is a 22 year veteran of the information security industry, and an expert in both offensive and defensive techniques. Over the past two decades, he’s helped Microsoft, MySpace, Zynga, Yelp and dozens of other Fortune 500 companies secure their systems. He currently serves as Chief Security Officer for Blockstream, Inc. | <urn:uuid:020b2cc1-edb8-4b15-af7e-cd50851eeeed> | CC-MAIN-2024-38 | https://develop.cyberscoop.com/obscurity-cryptography-jonathan-wilkins-op-ed/ | 2024-09-07T14:52:45Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650883.10/warc/CC-MAIN-20240907131200-20240907161200-00322.warc.gz | en | 0.959681 | 1,233 | 2.578125 | 3 |
A few years ago, artificial intelligence (AI) was still ahead of us and “in the future.”
And yet, in today’s world, we see machine learning (ML) utilized everywhere. Almost all of us use this technology several times a day without even thinking about it. The range of areas where machine learning can be leveraged is nearly endless.
Speaking from my experience in the utilities industry, utilities around the world are now actively looking for opportunities to adopt AI. Other industries are following suit. While AI is not new, it’s easier to use than ever, given the prevalence of much more data combined with more affordable processing power.
What’s the difference, though, between artificial intelligence and machine learning? And what, for that matter, is deep learning?
Though clarity is key to successful innovation, these terms can be used loosely in conversations around the topic of artificial intelligence. In 2016, when Google DeepMind’s AI-empowered AlphaGo computer program took on Korean Go master Lee Sedol in a five-game match, media coverage used AI, machine learning, and deep learning interchangeably—as if they all referred to the same type of intelligence.
Let us discuss these terms separately, so that we can speak clearly and accurately on this subject of emerging importance.
What Are Artificial Intelligence, Machine Learning, and Deep Learning?
Artificial Intelligence is the broadest and most all-encompassing of these terms, having been coined as early as 1956, when the field of AI research was founded at a summer workshop at Dartmouth College. It refers, generally, to the ability of machines to exhibit human-like intelligence.
In studying AI, researchers often distinguish between narrow AI, or the ability of technology to simulate human intelligence in focus on one narrow task, such as playing chess, and artificial general intelligence (AGI), a still-hypothetical type of intelligent agent that is level with (or superior to) human intelligence in the majority of economically valuable tasks.
AI encompasses several different technologies and systems. Machine learning is one of these branches of AI; others include natural language processing, computer vision, and speech recognition.
Machine learning refers to the use of algorithms to parse large volumes of data, learn from the process of doing so, detect patterns, and then make decisions or predictions based on these patterns. It’s so named because it imitates the way that humans learn, gradually improving its accuracy through accumulated experience.
Deep learning is a subset of machine learning. It is inspired by neural networks in the human brain and is generally understood as a method for implementing machine learning, equipping artificial neural networks with a set of techniques known as representation learning, which automatically discover meaningful patterns from raw data. Recently, deep learning has proven highly successful. Again, it is dependent on massive datasets to “train” itself.
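The “learning” in these neural networks can be illustrated with a single artificial neuron — the building block deep networks stack by the millions — trained by gradient descent on a toy problem. This is pure Python with invented data, a sketch of the mechanism rather than a practical model:

```python
import math, random

# One neuron learning the logical AND function from examples.
random.seed(0)
w1, w2, b = random.random(), random.random(), random.random()
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def predict(x1, x2):
    z = w1 * x1 + w2 * x2 + b
    return 1 / (1 + math.exp(-z))          # sigmoid activation

for _ in range(5000):                      # training loop
    for (x1, x2), y in data:
        p = predict(x1, x2)
        grad = (p - y) * p * (1 - p)       # gradient of squared error
        w1 -= 0.5 * grad * x1
        w2 -= 0.5 * grad * x2
        b  -= 0.5 * grad

# After training, the neuron classifies all four cases correctly:
assert all((predict(x1, x2) > 0.5) == bool(y) for (x1, x2), y in data)
```

Deep learning repeats this idea at scale: many layers of such units, trained on massive datasets rather than four hand-written examples.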
Why AI, and Why Now?
Since it was first conceptualized, artificial intelligence has seen its fair share of ups and downs, of ebbs and flows in public interest. But today, we’re finally seeing real progress, and AI is set to transform our world. The main reasons for this are:
- Massive computational power is now available at a low cost and can be provisioned quickly in the cloud. Improvements in graphics-processing-unit (GPU) computing have increased the training speed of deep-learning algorithms, given that graphics cards now feature thousands of cores and are therefore ideally suited to parallel workloads.
- Big data has exploded in recent years, with a massive increase in the amount of data we all create coupled with near-limitless storage capacity. Large and diverse data sets provide better training material for algorithms.
- Algorithms themselves are now better at finding patterns in the mountains of data. AI and machine learning platforms from players such as Google, IBM and Microsoft are making it much easier to develop powerful applications that support and enable these algorithms.
The technology world’s investment in AI, particularly in the areas of machine learning and deep learning, is growing at a rapid pace. Machines are already as competent as humans, if not more so, at specific tasks: playing chess, transcribing audio, analyzing images, and diagnosing diseases.
However, adoption of AI across business processes at the modern enterprise remains low. Business leaders are uncertain about what capabilities AI can truly unlock for them, where to obtain AI-powered applications and how to integrate them into their companies.
To that end, when should business leaders consider machine learning?
Machine learning is suited to business problems where:
- The decision or prediction to be made is complex in nature (for example: face detection, speech recognition, spam filters).
- The organization has access to high-quality, clean, and recent or real-time data, ideally labelled to allow the algorithm to make sense of it.
- Some margin of error is acceptable.
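As a concrete illustration of the spam-filter case above, here is a toy word-frequency classifier in pure Python. The training “data” is invented and far too small for real use; it only shows the shape of the approach — learn frequencies from labeled examples, then score new inputs:

```python
import math
from collections import Counter

# Toy labeled training data (invented for the example).
spam = ["win money now", "free money offer", "claim your free prize"]
ham = ["meeting at noon", "project status update", "lunch at noon?"]

def train(texts):
    counts = Counter()
    for t in texts:
        counts.update(t.split())
    return counts, sum(counts.values())

spam_counts, spam_total = train(spam)
ham_counts, ham_total = train(ham)

def spam_score(text):
    """Compare add-one-smoothed word frequencies under each class."""
    score = 0.0
    for word in text.split():
        p_spam = (spam_counts[word] + 1) / (spam_total + 1)
        p_ham = (ham_counts[word] + 1) / (ham_total + 1)
        score += math.log(p_spam / p_ham)
    return score            # > 0 leans spam, < 0 leans ham

assert spam_score("free money") > 0
assert spam_score("status meeting at noon") < 0
```

Note the margin-of-error point: the classifier is probabilistic, and occasional misclassifications are the accepted price of automating a decision too fuzzy to hand-code.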
AI opens a wide range of exciting possibilities for all industries, but the changes will not be simple.
Early evidence suggests AI can deliver real value to serious adopters, but current uptake outside of the tech sector remains low. Few companies have deployed it at scale. As with all such innovations, opportunities and rewards still await those willing to take the plunge.
James McClelland is responsible for global marketing for the utilities industry at SAP. Learn more about this topic at SAP for Utilities, Presented by ASUG (October 9-11, 2023) in Chicago, IL.
Applying Strong Cyber Hygiene Security to IoT Endpoints
The Internet of Things, better known as IoT. You’ve heard of it, right? But do you know what it is? Simply put, it is the interconnection of things (or endpoints) on the Internet to send and receive data. Today, experts calculate that there are 31 billion things connected to the insecure Internet, a number that is growing exponentially. Did you know there are different types of IoT applications? The most commonly referenced application is Industrial IoT (IIoT), but there are also Commercial IoT, Consumer IoT, and Enterprise IoT (EIoT). Although this blog provides a holistic view of the components that make up the Internet of Things, the focus will be the endpoints at the IoT gateway or computing edge up to the data center or cloud platforms.
A fun trivia fact. Cisco coined the term “Fog Computing” back in 2011. Fog Computing is a paradigm that extends cloud services to the network edge (access layer). So how is Fog Computing different from cloud computing? It’s a matter of proximity. Fog Computing is closer to users and highly concentrated connected endpoints that reside in company headquarters, manufacturing floors, or critical infrastructure at the municipal, state and federal levels. Fog Computing and edge computing are now synonymous. Today, endpoints can reside anywhere in the new Anywhere Workplace paradigm shift.
Here are examples of Industrial IoT applications:
- Energy includes electrical production and distribution.
- Utilities infrastructure includes water, sewage and waste management.
- Manufacturing comprises automated parts and assembly, energy efficiency, supply chain management and logistics.
- Oil and gas mining.
- Agriculture encompasses food production, irrigation and water management.
- Smart cities incorporate municipal infrastructure like roads, traffic controls, parking, and transportation. Transportation covers high-speed railways, buses, airports and airplanes, and smart automobiles.
- Building automation includes HVAC, access control, and safety and security.
Here are some of the Commercial IoT applications:
- Hospitality covers guest monitoring and customer service scoring.
- Healthcare encompasses telehealth or telemedicine applications for patient virtual care, wellness and prevention monitoring, and prescription fulfillment.
- Retail includes in-store shopping, ordering and payment, and shopper analytics and trends.
Here are the Consumer IoT applications:
- Hyperconnected endpoints comprised of smartphones, tablets, laptops and desktops used for work and entertainment.
- Smart TVs that host applications for entertainment.
- Smart home automation comprises security alarms and smart assistants that perform routine tasks and services (Amazon Echo, Google Home, or Apple HomePod), as well as smart lights, smart thermostats, and smart speakers that employ ambient intelligence: detecting occupancy to trigger lighting, adjusting heating and cooling for comfort and energy efficiency, and delivering omnipresent sound experiences.
- Wearables include personal monitoring devices, medical alerting, Apple Watch or Google Smartwatch, and augmented reality (AR) or virtual reality (VR) devices.
- Home appliances like your coffee maker, washer, dryer, and refrigerator.
Here are some of the Enterprise IoT applications:
- Voice communications over IP (VoIP) phones.
- IP cameras for video conferencing.
- Smart printers.
- Location-based geofencing sensors that use near field communications (NFC), Bluetooth (BLE), Wi-Fi, or cellular networks.
- Department of Defense (military) combat robotics, drones, and wearables like AR and VR.
There is certainly overlap and convergence among these different IoT applications, and conversely, there are applications that can be segmented into their own category, like military combat applications under the Enterprise IoT umbrella. The nuances depend on the endpoints used and the use-case application, which accomplishes redundant, complex, or risky tasks by employing automation to remove manual workflows.
The common IoT technology architecture consists of these stages: sensors that collect measurement data and/or actuator valves are connected to gateway hosts, which provide an uplink for the collected data, intermediary storage, message brokering for sending and receiving, and Human Machine Interfaces (HMIs); the data is then transmitted to its final destination at the IoT central station, located in the corporate data center or on a cloud platform. IIoT HMIs normally run legacy operating systems like Windows 7 or 8, or outdated Linux distributions and applications that may contain vulnerabilities. Newer mobile HMIs run today’s modern operating systems like iOS, Android OS, Linux, macOS, or Windows IoT Enterprise LTSC. The central station provides long-term storage of raw data (data lakes) and classification of all the data using machine learning models for data intelligence and analytics (data warehouse).
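The gateway's store-and-forward role in this architecture can be sketched as a small buffer that collects sensor readings and flushes them upstream in batches. The sketch below is illustrative only: the class and sensor names are invented, and the actual transport (MQTT broker, HTTPS, private APN) is abstracted as a callback rather than tied to any specific product's API.

```python
# Minimal sketch of an IoT gateway buffering sensor readings and
# forwarding them upstream in batches.
import json
import time

class GatewayBuffer:
    def __init__(self, uplink, batch_size=3):
        self.uplink = uplink          # callable that sends one serialized batch
        self.batch_size = batch_size
        self.pending = []             # intermediary storage at the gateway

    def ingest(self, sensor_id, value):
        self.pending.append({"sensor": sensor_id, "value": value, "ts": time.time()})
        if len(self.pending) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.pending:
            self.uplink(json.dumps(self.pending))  # transmit to central station
            self.pending = []

sent = []  # stand-in for the uplink to the data center or cloud
gw = GatewayBuffer(uplink=sent.append, batch_size=2)
gw.ingest("temp-01", 21.5)
gw.ingest("temp-02", 22.1)   # second reading fills the batch and triggers a send
print(len(sent))             # → 1
```

In a real deployment the `uplink` callable would publish over a secured channel (for example IPsec over a private APN, as discussed below), and the buffer would also persist to disk to survive gateway restarts.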
Often mixed within the raw endpoint data are sensitive personally identifiable information (PII) and critical security vulnerability information from all connected sensors, actuators, HMIs, and gateway hosts. The telemetry data can contain a list of security exploits that gives potential threat actors a map of the unpatched vulnerabilities existing on these endpoints. Common Vulnerabilities and Exposures (CVE) listings detail IoT ecosystem products and are maintained by NIST in the National Vulnerability Database.
Other challenges include the mixture of legacy and modern endpoints often co-existing within the same network access edge. These endpoints could be running any number of outdated hardware components, operating systems, and applications, leaving them vulnerable to sophisticated morphing and chained exploits: data and credential theft using phishing and pharming, malicious exploit kits including ransomware, and being unknowingly used to amplify distributed denial-of-service (DDoS) attacks. Shodan provides a search engine and network scanning service for internet-connected IoT endpoints.
Then comes the secure transport of this telemetry data to the data center or cloud using IPv6, private 4G/LTE or 5G Access Point Names (APN), low-power wide-area networks (LPWAN), or satellite as common media for long-range network communications infrastructures. IPv6 implements IP security (IPsec) standards like packet authentication and payload encryption, although IPsec must be turned on because it is not enabled by default. 4G/LTE and 5G have their own set of security challenges in their protocols and trust model. A private access point name (APN) that enables a secure mobile virtual private network (VPN), specifically IPsec, adds considerable security to the transmission of IoT data to the data center or cloud platform. A great blog that details the 5G security journey to the edge was recently published by my colleague, Russ Mohr.
The COVID pandemic has slowed the evolution and use of some sectors of Industrial and Enterprise IoT because of shelter-in-place mandates, although Commercial and Consumer IoT applications have seen a dramatic spike in usage and growth because of the work-from-home shift.
In conclusion, Ivanti Neurons, MobileIron UEM, Threat Defense, and Zero Sign-On, along with Pulse Connect Secure and Pulse Zero Trust Access, provide robust management and security tools to discover and track valuable assets and data; provision and manage apps and content, identity certificates, MFA, and security policies and configurations, including enabling encryption, WPA3 Wi-Fi security, VPN, and threat defense; and then provide the secure transport mechanism from the edge endpoint to the corporate data center or cloud platform.
Remember to ask yourself - how are you applying strong cyber hygiene security to your IoT endpoints?
Note: This is first in a series that we’re calling “Speak Geek” through which we’re defining terms used in the UPS world. Check it out to learn exactly what you need and exactly how to say it. You can thank us later.
Output Power Capacity
Output Power Capacity is the amount of power, in watts (W) or volt-amperes (VA), that the UPS can supply at a steady state indefinitely. The unit can put out more power in times of need (e.g., supplying extra energy to start a computer) but then returns to steady state.
Max Configurable Power
Max Configurable Power is the maximum amount of power the unit can supply for a short period of time, i.e. the amount of power the unit can supply to start a computer.
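To see how the two ratings interact, consider a quick sizing check. The sketch below is purely illustrative: the ratings and power factor are invented numbers, and the watts/VA relationship used (W = VA × power factor) is the standard rule of thumb for AC equipment, not a value from any specific UPS.

```python
# Toy sizing check against the two UPS ratings defined above.
def fits(load_watts, capacity_va, power_factor=0.9):
    """True if the load can be carried at the given VA rating."""
    return load_watts <= capacity_va * power_factor

CAPACITY_VA = 1500     # steady-state Output Power Capacity (indefinite)
MAX_CONFIG_VA = 2250   # short-burst Max Configurable Power

steady_load = 1200     # watts drawn by running equipment
startup_surge = 1800   # brief extra draw when a computer powers on

print(fits(steady_load, CAPACITY_VA))      # → True: sustainable indefinitely
print(fits(startup_surge, CAPACITY_VA))    # → False: exceeds steady state
print(fits(startup_surge, MAX_CONFIG_VA))  # → True: acceptable as a short burst
```

The point of the example is the asymmetry: a surge may exceed the Output Power Capacity and still be fine, as long as it stays under Max Configurable Power and is brief.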
Most cash transactions occur quickly, with people spending no more than a second or two looking at banknotes. How, then, do public users benefit from the considerable investment by Central Banks in banknote design and complex anti-counterfeit security features if they barely have time to notice them? Until recently, answers to this question were elusive, even though the notion that the public play a significant role in counterfeit detection is widely accepted. However, clear evidence has now emerged to show that accurate authentication of a banknote is well within human capability even when a banknote is seen for less than a second. Moreover, accurate authentication, especially when time is short, appears to be highly dependent on banknote design.
The research leading to these important findings was conducted by the Four Nations Group of Central Banks. In 2018 they commissioned Professor Jane Raymond at the University of Birmingham, UK to design and conduct an intensive research program to investigate the human cognitive processes underpinning banknote authentication by public consumers under a variety of conditions, including time pressure. The resulting ‘CogNote Project’ comprised multiple studies conducted in four different countries (Australia, Canada, England, United States) thereby advancing understanding of the underlying mechanisms and socio-geographic factors that underpin human authentication behaviours. Supported by an unprecedented level of investment and collaboration, the CogNote Project used a range of approaches that have extended understanding of banknote authentication well beyond simple banknote discrimination tasks. The project measured eye movements, brain activity levels, and socio-geographic habits. It applied artificial intelligence approaches to probe how banknote design guides the eye and brain to rapidly identify counterfeit. The targeted objectives of the project were to provide the Four Nations with an actionable analysis of human authentication behaviour as well as concrete guidelines to inform decision making on banknote design, security feature selection, and quality policies.
Cognitive Theory of Authentication
The highly structured design of the CogNote Project was based on well-developed theories from psychological science that speak directly to questions concerning banknote authentication under time pressure. Broadly speaking, these theories predict that accurate authentication should be possible with less than a second of viewing a banknote if, and only if, attention and eye movements are rapidly attracted to hard-to-mimic design elements, e.g., security features, as opposed to easily mimicked elements, e.g., signatures. The CogNote Project put this prediction to the test by asking members of the public to authenticate a series of banknotes while their eye movements were being monitored. Eight different banknotes from four currencies (AUD, CAD, GBP, and USD, including paper and polymer banknotes) were targeted for study. Genuine banknotes drawn from general circulation were mixed with counterfeit that had been either forensically recovered or specially manufactured. A total of 128 adults aged between 18 and 64 years from four different countries participated in this component of the Project.
The psychological science basis for predicting that banknote authentication should be possible with only a second to view a note is rooted in a notion known as ‘predictive coding’. All sensory data obtained from the environment, including light patterns on a banknote, are initially processed by low-level sensory systems in a non-conscious, rapid fashion. This takes a few tenths of a second at most and provides central brain areas with a flood of information. To avoid information overload, high-level areas of the brain attempt to predict what the sensory flood might yield, enabling the necessary high-level processing networks to be prepared. This process of ‘predictive coding’ creates a mental representation of expected sensory events that can then be matched to actual sensory events when they occur. If sensory and predicted events match, then further processing of the current sensory data can proceed very quickly, and planned action can be rapidly executed. A mismatch, on the other hand, produces a ‘hiccup’ in the brain, causing it to alter course. Greater sensory analysis is instigated, planned action is halted, and slow conscious processing is engaged so that the source of the unexpected information can be analysed. For example, if you hear someone say, “I’ll pay with …”, your brain probably anticipates the word “cash” or “card” to follow, a prediction that sensibly eases the listening task. However, if the word “paperclips” was spoken instead, your brain would register surprise, causing you to replay events and puzzle out what happened. This simple example illustrates how the human brain routinely relies on predictive coding to not only speed perception of the environment but also to efficiently detect unexpected events.
The basic notion of predictive coding has been applied to banknote authentication in a simple model called SIMBA (Suspicion Initiated Model of Banknote Authentication, Raymond et al. 2020), which served as the theoretical basis for the CogNote Project. It says that the first stage of banknote authentication involves rapid matching of the banknote’s sensory data with its expected look and feel. This involves collecting sensory data by using one or two eye movements to look at, or fixate, different spots on the banknote and can be completed in less than a second. If the sensory data obtained during these first few fixations is as expected, then no suspicion is raised and the transaction proceeds almost automatically without further visual or tactile analysis. However, if a mismatch arises during this first critical stage, then neural systems sound an ‘alarm’ and activate a search for more information. Thus, the second stage of banknote authentication involves accrual of more information over a period of several seconds. This is used to support conscious, deliberate decision-making regarding the note’s authenticity. The important point here is that slow, conscious cognitive engagement with a banknote probably occurs only when suspicion is raised in the first, rapid stage of authentication.
This implies that where gaze is directed during the first second or so of viewing a banknote should directly determine authentication accuracy. If the location that attracts the eye contains high quality authentication information, as in a security feature, then sensory mismatch signals raised by counterfeit should be especially strong, leading to overt counterfeit detection. In contrast, if the eye is initially attracted to areas that are easy to mimic, e.g., a large image or signature, sensory mismatch signals raised by counterfeit might be small or undetectable, causing a person to accept a bad note. If security features are not directly looked at during the first stage of authentication, they may be of no assistance to the user even when they have been poorly imitated. In a nutshell, SIMBA makes the explicit prediction that authentication performance should be better when eye movements are quickly directed at security features than if they are captured by other design elements.
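The two-stage logic can be made concrete with a toy numerical illustration. To be clear, this is not the authors' actual model: the feature names, values, and threshold below are invented purely to show how a stage-1 mismatch against an expected template could gate the slower stage-2 deliberation.

```python
# Toy illustration of a SIMBA-like two-stage authentication decision.
# The "expected" template stands for the remembered look of a genuine note.
EXPECTED = {"hologram_shift": 1.0, "portrait_contrast": 0.8}

def stage1_mismatch(observed):
    """Sum of absolute deviations from the expected look of a genuine note."""
    return sum(abs(observed[f] - v) for f, v in EXPECTED.items())

def authenticate(observed, suspicion_threshold=0.3):
    if stage1_mismatch(observed) < suspicion_threshold:
        return "accepted quickly"             # stage 1 only: no suspicion raised
    return "stage 2: deliberate inspection"   # mismatch sounds the alarm

genuine = {"hologram_shift": 0.95, "portrait_contrast": 0.82}
counterfeit = {"hologram_shift": 0.2, "portrait_contrast": 0.7}  # poor hologram mimic

print(authenticate(genuine))      # → accepted quickly
print(authenticate(counterfeit))  # → stage 2: deliberate inspection
```

The example also makes the design point numerically: the counterfeit is caught only because the sampled feature (the hologram) is hard to mimic and so produces a large mismatch; had stage 1 sampled an easily copied element instead, the mismatch would have stayed below threshold and the bad note would have passed.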
Up Next: Part 2 of this article series will explain what the CogNote Project revealed about authentication under time pressure and eye movements during authentication.
1 Rao, R. P. & Ballard, D. H. Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nat Neurosci. 2, 79-87 (1999).
2 Raymond, J. E., Dodgson, D. B. & Pearson, N. (2020). 3D Micro-Optics Enable Fast Banknote Authentication by Non-Expert Users. Optical Document Security 2020 Proceedings, Reconnaissance International, ISBN 978-1-9163806-4-6. https://opticaldocumentsecurity.com/
The Four Nations Group was founded in 1978 and consists of five central bank members – the Board of Governors of the Federal Reserve System, Bank of Canada, Reserve Bank of Australia, Bank of England and Banco de México. The group is a forum in which its members share information and experiences on banknote development, issuance and distribution, counterfeit deterrence, and relevant technical studies.
Cyber Privacy is an important subset of your computer security regime, which includes Anti-Virus, Anti-Malware, and perimeter defense tools like Firewalls and Intrusion Prevention Systems (IPS). Most security products today focus on resolving problems based on known vulnerabilities and inbound traffic, whereas Privacy is concerned specifically with data loss prevention.
This two-part series discusses the different types of defense mechanisms available and how they differ from BlackFog’s approach to threat detection and prevention.
Perimeter Defense and Firewalls
Firewalls are perimeter defense tools that have been the cornerstone of threat prevention for well over a decade. They can block specific ports, endpoints and known vulnerabilities effectively. Since they exist at the gateway they provide an invisible layer of detection for devices within the network.
The challenge for firewalls and most perimeter defense tools is that they only work against well-known vulnerabilities and attack vectors. Cyber threats have evolved considerably and now focus on completely different vectors to carry out their attacks. Instead of targeting known channels for weakness (which is always the first stage), they focus on specific channels and protocols that are already open.
Since primary access to the Internet is through a web browser, cyber attacks now focus on weaknesses in browsers and the HTTP/S protocol itself. With more than 80% of your exposure to the Internet through this mechanism cyber criminals utilize this to target you and effectively bypass firewall rules. By leveraging this open port (typically 80 or 443) cyber criminals are able to create tunnels through your network and communicate back to their command and control servers (C&C servers) at will.
In contrast to firewalls, anti-virus (AV) tools focus on the device itself. These tools are great at removing the problem once it is discovered. While clearly an important part of your toolkit once you have been infected, they do little to prevent problems in the first place.
Anti-virus tools use a technique known as signature detection. When threats are discovered, researchers fingerprint the files, producing a unique signature that can be used to detect these problems on your device. The signatures are then added to a database, which is sent back to all the clients running the vendor’s software, ready for the next detection scan.
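Reduced to its essence, signature detection is a fingerprint lookup. The sketch below shows the idea with a cryptographic hash as the fingerprint; real AV engines also match partial byte patterns and apply heuristics, and the file names and "malware" bytes here are invented for illustration.

```python
# Toy signature-based scan: flag any file whose fingerprint appears in a
# database of known-bad fingerprints shipped by the vendor.
import hashlib

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# "Database" distributed to clients after researchers fingerprint known malware.
signature_db = {fingerprint(b"malicious payload v1")}

def scan(files: dict) -> list:
    """files: name -> contents; returns names matching a known signature."""
    return [name for name, data in files.items()
            if fingerprint(data) in signature_db]

detected = scan({
    "report.doc": b"quarterly figures",
    "invoice.exe": b"malicious payload v1",
})
print(detected)  # → ['invoice.exe']
```

The sketch also makes the article's criticism obvious: change a single byte of the payload and the hash no longer matches, which is why purely signature-based detection cannot adapt to novel or mutated threats.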
Like perimeter defense tools, anti-virus tools focus on known vulnerabilities and therefore cannot adapt to future threats. In the next article we will explain how BlackFog approaches the problem of data loss and privacy protection using preventative techniques.
What the heck is a Router?
A router is a computer device that receives or forwards data packets to and from the Internet toward a destination, in a process called routing. The router is an essential component of computer networking that enables any sent data to arrive at the right destination.
As an illustration, imagine that the Internet is the world and one computer is one household. Other computers connected through the Internet are households around the world. Say one household will send a letter to another household in any part of the world. The letter has an address, right? And that address determines the destination of the letter. But without anyone reading the address, the letter would not arrive at the right receiver. The letter also would not be able to reach the intended receiver if there were no medium to carry it. That medium would be the courier. And the courier of computer data is the router.
A router (broadband router) is also a device that enables two or more computers to receive data packets from the Internet under one IP address at the same time.
Remember that to be able to connect to the Internet, a computer must have an IP address unique from the rest of the computers. Therefore, every computer connected to the Internet has its own IP address. It is like having a fingerprint or ID as an access pass to be able to enter the web. With the presence of the router, this "fingerprint" or "ID" can be shared by two or more computers at the same time.
In simplest form, a router makes two or more computers use the Internet at the same time with one access pass.
One more thing: a computer with a cable modem could also be considered a router. In this case, the computer would perform the process of routing like normal routers do. Other computers are then connected to the computer with the Internet connection, which provides them with that connection. The computer with the cable modem has direct contact with the Internet, and the ones connected to it are sharing its connection.
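The mechanism that lets several computers share one public IP address is known as network address translation (NAT): the router keeps a table mapping each internal (address, port) pair to a distinct public port, so replies from the Internet can be routed back to the right household computer. The sketch below reduces that idea to a toy lookup table; the addresses and port ranges are invented for illustration.

```python
# Toy NAT table: many private computers share one public IP address.
class NatTable:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.next_port = 40000
        self.outbound = {}   # (private_ip, private_port) -> public_port
        self.inbound = {}    # public_port -> (private_ip, private_port)

    def translate_out(self, private_ip, private_port):
        """Rewrite an outgoing packet's source to the shared public address."""
        key = (private_ip, private_port)
        if key not in self.outbound:
            self.outbound[key] = self.next_port
            self.inbound[self.next_port] = key
            self.next_port += 1
        return (self.public_ip, self.outbound[key])

    def translate_in(self, public_port):
        """Route a reply back to the private computer that initiated it."""
        return self.inbound[public_port]

nat = NatTable("203.0.113.7")
print(nat.translate_out("192.168.0.2", 51000))  # → ('203.0.113.7', 40000)
print(nat.translate_out("192.168.0.3", 51000))  # → ('203.0.113.7', 40001)
print(nat.translate_in(40001))                  # → ('192.168.0.3', 51000)
```

Notice that both computers used the same private port (51000) yet got distinct public ports; that is exactly how one "access pass" serves the whole household.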
Why would anyone need a router?
For households with two or more computers that all need an Internet connection, taking out a subscription for each would be too much. The solution is to buy a router that enables every computer in the house to have an Internet connection. As in the definition above, the broadband router would act as a hub for the existing Internet connection.
If the router is comparable to a hub, would it affect the Internet speed?
It should be taken into consideration that once a single Internet connection is divided, the connection speed is affected. However, some broadband routers bring minimal slowdown to the Internet speed, and the effect might not even be noticeable.
Internet speed also depends on the type of application run through the router. While some, like online games, have little effect on speed, others can terribly slow down your connection and even hinder you from using the Internet at all.
Usually, offices use more sophisticated routers to redirect Internet connections to a large number of computers. These routers provide better packet handling than a typical home router, which results in faster Internet speeds.
Since its massive implementation, Bluetooth has become one of the most popular wireless connection technologies, allowing people to enhance their activities in a fully connected environment. Virtually any device is Bluetooth-enabled, including laptops, audio players, smartphones, smart speakers and more, which is very attractive to threat actors.
According to cybersecurity specialists, a recurring security problem is that users keep Bluetooth on their devices activated even if they do not use it, when it is best to activate it only when it is needed. As simple as it sounds, this and other security measures are frequently ignored or forgotten by users, who unknowingly open an attack vector for Bluetooth device engagement.
In this article, experts from the ethical hacking course of the International Institute of Cyber Security (IICS) describe the most well-known Bluetooth hacking methods, widely used by threat actors around the world. Before continuing, as usual we remind you that this article was prepared for informational purposes only and should not be taken as a call to action; IICS is not responsible for the misuse that may occur to the information contained herein.
According to the researchers, the most common Bluetooth hacks and vulnerabilities are:
- Bluetooth Impersonation Attacks (BIAS)
Let’s see what each of these attacks and security flaws consists of in detail.
This is an attack that can be deployed over the air to compromise Bluetooth devices. By exploiting a widespread vulnerability, threat actors can take control of a Bluetooth connection and hijack the affected devices. The vulnerability exploited in this attack resides in smartphones, laptops and even Internet of Things (IoT) devices, say specialists in ethical hacking.
As a security measure, experts recommend disabling Bluetooth when not in use, in addition to keeping your devices always up to date, not using public WiFi networks and, if possible, using a virtual private network (VPN) solution.
This is a variant of network attack that occurs when hackers manage to connect to a user’s device and, without their consent, begin intercepting sensitive information.
While this is a highly intrusive attack technique, the attack will only work if the target user enables the Bluetooth feature on their device. However, the risk is considerable due to the ability to steal sensitive information.
Given the characteristics of this attack, the best way to protect you is to keep the Bluetooth function disabled when not in use. Storing sensitive information in secure locations and applying passwords to these folders on our devices could also prove useful, ethical hacking experts say.
This attack variant occurs when one Bluetooth device takes control of another using phishing techniques and malicious online content; so both attacker and victim must be in nearby locations. While this attack does not give hackers access to the affected device, this attack would allow sending all kinds of invasive ads or spam to users without the user knowing where these posts have come from.
The best way to avoid this attack, according to ethical hacking experts, is to keep the Bluetooth function off whenever it is not in use.
Bluetooth Impersonation Attacks (BIAS)
In this attack variant, threat actors seek to compromise a legacy secure connection procedure during the initial establishment of a Bluetooth connection. The main advantage of these attacks is that the Bluetooth standard does not require the mutual use of the legacy authentication procedure during the establishment of a secure connection.
To prevent these attacks, the Bluetooth Special Interest Group (SIG) implemented mutual authentication requirements along with connection type verification to mitigate security risk.
This exploit was developed after hackers realized how easy it was to compromise Bluetooth devices using Bluejacking and BlueSnarfing. The BlueBugging attack uses the Bluetooth connection to create a backdoor on the exposed phone or computer.
This attack not only allows hackers to access information on the affected devices, but would also provide access to critical functions in the system.
How to protect against a Bluetooth attack?
These attacks are very common and pose severe security risks, so it is necessary to know the best ways to prevent this kind of situation. According to the experts of the ethical hacking course, these are the best methods to prevent a Bluetooth attack:
- Turn off Bluetooth when not in use
- Do not accept pairing requests from unknown devices
- Keep your systems always updated to the latest version available
- Enable additional security measures
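The second recommendation, rejecting pairing requests from unknown devices, amounts to an allowlist policy. The sketch below shows that policy in miniature; it is purely illustrative, since real pairing decisions are made by the operating system's Bluetooth stack, and the device addresses here are invented.

```python
# Toy allowlist policy for the "do not accept unknown pairing requests" advice.
TRUSTED_DEVICES = {"AA:BB:CC:11:22:33"}  # e.g. your own headset's address

def should_accept_pairing(device_address):
    """Accept pairing only from addresses explicitly marked as trusted."""
    return device_address in TRUSTED_DEVICES

print(should_accept_pairing("AA:BB:CC:11:22:33"))  # → True
print(should_accept_pairing("DE:AD:BE:EF:00:01"))  # → False: unknown device
```

Note the limitation, which is precisely what BIAS exploits: an allowlist keyed on device addresses only helps if the other side cannot impersonate a trusted address, which is why mutual authentication during connection establishment matters.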
These five scenarios described above are just a fragment of the multiple known Bluetooth hacking methods, so users and system administrators should try to maintain all possible security measures in order to avoid suffering the consequences of these and other attacks.
To learn more about information security risks, malware variants, vulnerabilities and information technologies, feel free to access the International Institute of Cyber Security (IICS) websites.
He is a cyber security and malware researcher. He studied Computer Science and started working as a cyber security analyst in 2006. He is actively working as an cyber security investigator. He also worked for different security companies. His everyday job includes researching about new cyber security incidents. Also he has deep level of knowledge in enterprise security implementation.
Managing files forms the central focus of most COBOL applications. This section discusses the implementation and special features
of the three types of files: sequential, relative, and indexed.
ACUCOBOL-GT supports variable-length records in accordance with ANSI standards for all file types. A file's records are variable-length
whenever any one of these conditions is true:
- The RECORD CONTAINS clause contains the VARYING phrase.
- The RECORD CONTAINS clause contains both a minimum and maximum size.
- There is no RECORD CONTAINS clause but the file's FD specifies more than one record, and those records have different sizes.
A file's records are fixed-length whenever:
- The RECORD CONTAINS clause specifies only a maximum record size.
- There is no RECORD CONTAINS clause, and the file's FD does not specify multiple records having different sizes.
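Taken together, the conditions above amount to a simple decision rule. The function below encodes that rule in Python purely as an illustration of the logic; it is not part of ACUCOBOL-GT, and the parameter names are invented.

```python
# Decision rule for variable- vs. fixed-length records, as described above.
def records_are_variable_length(has_varying_phrase,
                                record_contains_min_and_max,
                                record_contains_max_only,
                                declared_record_sizes):
    """declared_record_sizes: sizes of the records declared in the file's FD."""
    if has_varying_phrase or record_contains_min_and_max:
        return True                       # RECORD CONTAINS ... VARYING / m TO n
    if record_contains_max_only:
        return False                      # RECORD CONTAINS n: fixed-length
    # No RECORD CONTAINS clause: variable only if the FD declares
    # multiple records with different sizes.
    return len(set(declared_record_sizes)) > 1

# RECORD CONTAINS 10 TO 80 CHARACTERS → variable-length
print(records_are_variable_length(False, True, False, [80]))       # → True
# No RECORD CONTAINS, records of 50 and 80 bytes → variable-length
print(records_are_variable_length(False, False, False, [50, 80]))  # → True
# RECORD CONTAINS 80 CHARACTERS (maximum only) → fixed-length
print(records_are_variable_length(False, False, True, [80]))       # → False
```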
Note: ACUCOBOL-GT automatically closes all of its files if it is killed by the user. However, a power failure, turning off the computer,
or issuing a kill -9 from the console are catastrophic exits. In these cases, ACUCOBOL-GT cannot close its files. | <urn:uuid:0958aad3-16b5-4316-804d-5125e823bbac> | CC-MAIN-2024-38 | https://www.microfocus.com/documentation/extend-acucobol/103/extend-Interoperability-Suite/BKUSUSPROGS001.html | 2024-09-12T09:59:21Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651440.11/warc/CC-MAIN-20240912074814-20240912104814-00822.warc.gz | en | 0.872054 | 260 | 2.53125 | 3 |
Security keys are a beautiful idea. Physical devices that unlock entry to sensitive enterprise systems and networks, they avoid the need to rely on passwords and other, somewhat unreliable forms of authentication. Your common-variety hacker, too, will grit their teeth in frustration, unable to slip their hands into your pocket and swipe your credentials from thousands of miles away in their Sverdlovsk or Pyongyang basement.
Theory, however, doesn’t always translate well in practice. While many cybersecurity experts swear by security keys as one of the best methods to secure your systems out there, that advice disguises the medium’s reliance on the very forms of authentication these devices are designed to eliminate.
That begins with their setup. Security keys require the user to be registered with an organisation, a process that relies on the provision of usernames and passwords to create an account. These methods are easily compromised: for example, IDEE’s own research highlighted that 35% of UK businesses were breached using stolen credentials in 2023 – the most common attack vector that year.
Incredibly, security keys not only ask users to use phishable methods to set up their first security key but also suggest using a second key as a backup in case the first gets lost – effectively doubling the attack surface. Continuing to use passwords and relying on them to add a second authentication factor only prevents password-based attacks and, worse, creates a false sense of safety, which plays into the hands of cybercriminals and gives them the upper hand.
Moreover, if a security key is lost or stolen, especially after being used with a password, that user’s account is immediately in danger. This is even more so if the criminals have already compromised that user’s credentials. The threat actor can then plug in the key to a new device and enter the password, and you would be none the wiser.
Additionally, many companies that produce security keys have also developed and now use in-house cryptographic libraries – software packages providing functionality for implementing cryptographic algorithms and protocols. These libraries are unlikely to be as secure as more established and better-tested varieties, such as Python’s. This is unbelievable: in this day and age, the cybersecurity industry and everyone who works in it should refuse to accept that a method of authentication that is merely ‘phishing resistant’ is good enough – it’s not even close.
The additional price of implementing security keys
Unsurprisingly, the problems continue to mount when we consider the hardware of security keys. It cannot be upgraded to meet future requirements, meaning that whenever a new set of specifications is released, companies must buy completely new hardware to meet these stipulations.
When we move from hardware to firmware, it will come as no surprise that there are problems there, too. There is a push for complex PINs to be used as an additional security measure, but yes, you guessed it, the current firmware can’t support them. Also, they have limited storage capabilities – which, again, cannot be upgraded to store more credentials. So, what’s the option when storage runs out? You guessed it, buy new ones!
As you can see, much of this ‘additional security’ only strengthens one part of a business: its spending.
Let’s talk money, then. A top-of-the-range security key typically costs €75, excluding VAT. Add to this the fact that they suggest you have two keys per person in case one gets lost and that ends up being an outlay of nearly €200 per person. That’s before you factor in having to ship them across countries, maybe even globally.
However, it’s not just the financial costs that rack up from using security keys. Their use also compromises other parts of a business’s day-to-day functionality.
When companies employ chief information and security officers (CISOs) they don’t do so with logistics in mind. But that is exactly what happens when a company chooses security keys as its security option. CISOs should put all their time and resources into doing what they do best: creating new and more secure ways of protecting their businesses from sophisticated cyber criminals and their evolving methods.
However, as the company’s accidental logistics manager, they spend much of their time ordering and shipping keys rather than strengthening the business’s cybersecurity. What an incredible waste of expertise, and what a shame for the talented cyber professionals whose skills are being squandered by countless businesses.
Embrace innovation and strive for better
Is all of this meant to enhance the user’s experience? It doesn’t. What’s the expectation for users whose laptops have blocked USB ports for security reasons? Must IT departments now open those ports up?
And how inconvenient it is to remember to take your security key when nearly everyone already carries around more capable devices every day. Speaking from my own personal experience, I’m sick of carrying around a set of keys every day – why would I want to add another one?
It doesn’t have to be this way. In our tech-savvy modern world, we have developed better, completely secure ways of protecting your cybersecurity – not just ‘phishing resistant,’ but ‘phish proof’.
Using the concept of transitive trust and strong identity proofing, there is a method that means we will never have to rely on security keys and deal with their constant problems. Transitive trust works by ensuring all transactions occur on a trusted service, on a trusted device, under a trusted user’s control. There is no more reliance on phishable factors such as passwords, one-time passcodes, or push messages.
In short, the cybersecurity industry needs to move with the times and embrace keyless technology if it is serious about unlocking the door to a completely secure future. | <urn:uuid:ecfd926e-a353-4801-b2ef-d208bcdf9fe0> | CC-MAIN-2024-38 | https://www.techmonitor.ai/comment-2/security-keys-unlock-nothing-but-inconvenience | 2024-09-12T09:09:07Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651440.11/warc/CC-MAIN-20240912074814-20240912104814-00822.warc.gz | en | 0.962791 | 1,205 | 2.640625 | 3 |
Hot Standby Router Protocol (HSRP)
In this article, we will discuss on HSRP protocol, related terminologies, its operation and configuration. We will cover following points:-
- Understanding FHRP
- Definition of HSRP
- HSRP Packet
- Key Points
- Operation and Configuration of HSRP
Network resiliency is a key component of network design. Modern networks require careful consideration of how to deal with network failure. With this understanding, First Hop Redundancy Protocols (FHRPs) were developed and are employed in the majority of networks to provide resiliency, availability and redundancy. From the client’s perspective, if the gateway goes down, access to the entire network goes down. First Hop Redundancy Protocols allow default gateway redundancy, meaning the provision of more than one default gateway.
In the event of a router failure, a backup device kicks in and, transparently to users, continues to forward traffic to remote networks, thus avoiding isolation. We implement a first hop redundancy protocol to provide gateway redundancy. Below are the three types of FHRP technology:
- Hot Standby Router Protocol (HSRP)
- Virtual Router Redundancy Protocol (VRRP)
- Gateway Load Balancing Protocol (GLBP)
Related – HSRP vs VRRP
Definition of HSRP
Hot Standby Router Protocol (HSRP) is a Cisco proprietary protocol that provides a mechanism designed to support non-disruptive failover of IP traffic in certain circumstances. It uses UDP port 1985. Two or more routers give the illusion of a single virtual router. HSRP allows you to configure routers as standby with only a single router active at a time. All the routers in an HSRP group share a single virtual MAC address and IP address, which acts as the default gateway for the local network. The active router forwards the traffic. If the active router fails, the standby router takes over all the responsibilities of the active router and forwards the traffic.
Hot Standby Router Protocol (HSRP) Packet
Version Number is an 8-bit field giving the HSRP version (1 or 2).
Opcode is an 8-bit field describing the type of message:
- Op Code 0 – Hello. The router is running HSRP and is capable of becoming the active or standby router.
- Op Code 1 – Coup. The router wishes to become the active router.
- Op Code 2 – Resign. The router no longer wishes to be the active router.
State is an 8-bit field describing the sending router's current HSRP state:
1. Active – This is the state of the device that is actively forwarding traffic.
2. Init or Disabled – This is the state of a device that is not yet ready or able to participate in HSRP.
3. Learn – This is the state of a device that has not yet determined the virtual IP address and has not yet seen a hello message from an active device.
4. Listen – This is the state of a device that is receiving hello messages.
5. Speak – This is the state of a device that is sending and receiving hello messages.
6. Standby – This is the state of a device that is prepared to take over the traffic forwarding duties from the active device.
Hellotime is an 8-bit field. It is the interval between successive HSRP hello messages from a given router; the default is 3 seconds.
Holdtime is the interval between the receipt of a hello message and the presumption that the sending router has failed; the default is 10 seconds.
Priority is 8 bits.
The default priority is 100, and the router with the higher priority wins. The priority field is used in the election of the active and standby routers; in a tie, the highest IP address wins.
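As an illustration of the election rule just described (the router names and addresses below are invented), the selection of the active router can be sketched in a few lines:

```python
from ipaddress import IPv4Address

def elect_active(routers):
    """HSRP active-router election: the highest priority wins;
    a tie is broken by the highest interface IP address."""
    return max(routers, key=lambda r: (r["priority"], IPv4Address(r["ip"])))

candidates = [
    {"name": "R1", "priority": 100, "ip": "10.0.0.1"},
    {"name": "R2", "priority": 110, "ip": "10.0.0.2"},
    {"name": "R3", "priority": 110, "ip": "10.0.0.3"},
]
print(elect_active(candidates)["name"])  # R3: priority tied with R2, higher IP
```

The tuple key makes Python compare priorities first and fall back to numeric IP comparison only on a tie, mirroring the tie-break behaviour of the protocol.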
Group is an 8-bit field.
This field identifies the standby group, from 0 to 255.
Reserved is an 8-bit field.
Authentication Data is a 64-bit field.
This field contains a clear-text, 8-character reusable password. If no authentication data is configured, the RECOMMENDED default value is 0x63 0x69 0x73 0x63 0x6F 0x00 0x00 0x00 (the string “cisco”).
Virtual IP Address is 32 bits.
The virtual IP address used by this group. If the virtual IP address is not configured on a router, then it may be learned from the Hello message from the active router. An address should only be learned if no address was configured and the Hello message is authenticated.
- Active router: Primary router.
- Standby router: Backup router.
- Standby group: Set of routers that participate in HSRP.
- Virtual MAC address: a MAC address created by HSRP’s internal mechanism. The first 24 bits are the well-known prefix 0000.0c, the next 16 bits are the HSRP ID 07.ac, and the last 8 bits are the group number.
- Virtual IP: the group’s virtual IP address, which hosts on the LAN use as their default gateway for forwarding traffic.
- Priority: The default priority is 100, and the router with the higher priority wins. The priority field is used in the election of the active and standby routers; in a tie, the highest IP address wins.
- Version 1: The multicast address is 224.0.0.2 and UDP port 1985 is used. Group numbers range from 0–255.
- Version 2: The multicast address is 224.0.0.102 and UDP port 1985 is used. Group numbers range from 0–4095.
- Preemption: HSRP Preemption enables the router with the highest priority to immediately become the Active router.
- Interface Tracking: We can choose an interface to track; if that link goes down, the priority of the active router is decremented so that the standby router can take over the active role.
- Load Balancing: With multiple HSRP groups for multiple subnets, each router can be active for some subnets and standby for the others. This way all available resources are utilized.
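The virtual MAC construction described above can be made concrete with a small helper (a sketch for illustration only, not part of any Cisco tooling):

```python
def hsrp_virtual_mac(group, version=1):
    """Derive the well-known HSRP virtual MAC for a group number.
    v1: 0000.0c07.acXX  (group 0-255, encoded in the last 8 bits)
    v2: 0000.0c9f.fXXX  (group 0-4095, encoded in the last 12 bits)"""
    if version == 1:
        if not 0 <= group <= 255:
            raise ValueError("v1 group must be 0-255")
        raw = 0x00000C07AC00 | group
    else:
        if not 0 <= group <= 4095:
            raise ValueError("v2 group must be 0-4095")
        raw = 0x00000C9FF000 | group
    hexstr = f"{raw:012x}"
    return ".".join(hexstr[i:i + 4] for i in (0, 4, 8))

print(hsrp_virtual_mac(10))      # 0000.0c07.ac0a
print(hsrp_virtual_mac(100, 2))  # 0000.0c9f.f064
```

OR-ing the group number into the fixed prefix reproduces the dotted-quad MAC notation the article uses.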
Related – HSRP vs VRRP vs GLBP
Operation and Configuration of HSRP
- The user generates traffic from the LAN towards the WAN router.
- The traffic uses the virtual IP and MAC as the default gateway. The virtual IP address is chosen by the administrator, and the MAC address is auto-generated. For version 1, the MAC address is 0000.0c07.acXX, where XX is the group number in hex. For version 2, the MAC address is 0000.0c9f.fXXX, with the last 3 hex digits representing the group number.
- HSRP is configured in groups. An HSRP group consists of an active router and a standby router. The active router is responsible for answering ARP requests and handling packet forwarding. Hello messages are sent every 3 seconds. The HSRP multicast addresses are 224.0.0.2 for v1 and 224.0.0.102 for v2.
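As a minimal configuration sketch only — the interface names, group number, and addresses are invented, and exact syntax varies by IOS release — a two-router HSRP setup might look like:

```
! R1 - intended Active router (higher priority, preemption enabled)
interface GigabitEthernet0/0
 ip address 192.168.1.2 255.255.255.0
 standby version 2
 standby 10 ip 192.168.1.1
 standby 10 priority 110
 standby 10 preempt
 standby 10 track GigabitEthernet0/1 20

! R2 - intended Standby router (default priority 100)
interface GigabitEthernet0/0
 ip address 192.168.1.3 255.255.255.0
 standby version 2
 standby 10 ip 192.168.1.1
```

Hosts on the LAN use 192.168.1.1 as their default gateway; if R1's tracked uplink fails, its priority drops by 20 and R2 preempts as active.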
In summary, HSRP provides layer 3 redundancy in a network via a virtual IP and MAC address, interface tracking, and load balancing. A group of physical routers, acting as a single virtual router, advertises a single IP address and MAC address to the network.
AI Case Study
SocialEyes diagnoses diseases in places where doctors are scarce by scanning the human retina with deep neural networks at the edge
The human retina contains a wealth of diagnostic markers for many diseases such as diabetes and hypertension. There is a significant shortage of eye doctors globally, and most are based in urban settings, meaning that many diseases go undiagnosed where doctors are scarce. SocialEyes' mobile autonomous retinal evaluation (MARVIN) system uses 'edge' neural networks to diagnose diseases in real time. This allows health workers in rural settings to better manage diseases that cause problems with vision.
Healthcare Equipment And Supplies
"SocialEyes MARVIN (for mobile autonomous retinal evaluation) –which runs on GPU-powered Android tablets – helps community health workers manage eye problems caused by diabetes, hypertension and cardiovascular disease close to home. Untreated, these lead to a host of debilitating problems, including blindness.
SocialEyes’ GPU-enabled image processing and machine learning software spots 'signatures' of retinal problems caused by diabetes and related diseases.
An app can assess the retina’s condition and treatment can begin immediately, rather than waiting days for a report to come back from a remote tele-health grading center. By working offline—without an Internet connection—MARVIN allows local management of symptoms that would otherwise trigger an automatic and costly specialist referral."
"With MARVIN, doctors, ophthalmic assistants and community health workers can now make informed decisions in moments, while the patient is there. This encourages personalized counseling and education, as well as integrating advanced eye care with treating the systemic disease."
"By working offline—without an Internet connection—MARVIN allows local management of symptoms that would otherwise trigger an automatic and costly specialist referral."
"Although the world’s supply of eye specialists is limited, there are over 15,000,000 general practitioners—and even more nurses, support staff and community health workers—who can use SocialEyes and MARVIN to manage their patients in primary care."
"SocialEyes technology, running on mobile supercomputer tablets, makes key retinal features easier to see. Neural nets and machine learning help ensure that high-risk cases are caught right at the point of initial contact."
"Marginal image quality is common in low-resource settings, especially with non-mydriatic (undilated) imaging."
"The human retina contains a wealth of diagnostic markers for many non-communicable diseases, such as diabetes and hypertension, as well as infectious conditions such as tuberculosis, HIV, dengue and malaria.
Eye care in developing countries occurs in overflowing general hospitals and clinics, as well as 'outreach camps,' where hundreds to thousands of patients are seen over a few days.
There are 200,000 eye doctors worldwide, located primarily in urban settings. And there are one billion persons with diabetes, hypertension and other diseases who all face problems with their vision."
"Most new cases are occurring in low-income countries, where resources are strained and many people aren’t even aware of their condition until a crisis occurs."
It is assumed that the deep neural networks would have been trained on images of retinal scans labelled with conditions. | <urn:uuid:5720fe9a-56aa-4948-93ac-e2839e7022c3> | CC-MAIN-2024-38 | https://www.bestpractice.ai/ai-case-study-best-practice/socialeyes_diagnoses_diseases_in_places_where_doctors_are_scarce_by_scanning_the_human_retina_with_deep_neural_networks_at_the_edge | 2024-09-17T09:13:16Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651750.27/warc/CC-MAIN-20240917072424-20240917102424-00422.warc.gz | en | 0.942771 | 665 | 3.140625 | 3 |
Adhering to cybersecurity frameworks like Federal Information Security Management Act (FISMA) and the Federal Risk and Authorization Management Program (FedRAMP) is essential for organizations working with federal agencies. FISMA provides a broad security framework for federal agencies and their contractors, while FedRAMP focuses on standardizing cloud service security. Understanding their similarities and differences enables organizations to protect sensitive information in alignment with federal requirements and effectively enhance their security posture.
Established cybersecurity frameworks like FISMA and FedRAMP are not just best practices but essential for organizations, especially those collaborating with the federal government. These frameworks play a crucial role in ensuring the security of sensitive information.
While FISMA and FedRAMP are vital components of the federal government’s cybersecurity strategy, they target different aspects of information security. Therefore, understanding their nuances empowers businesses and instills confidence in their ability to determine which framework best suits their needs.
This blog explores the similarities, differences, and key attributes of FISMA and FedRAMP. It provides a deep understanding of these frameworks, equipping organizations with the knowledge to enhance their security posture, gain trust from federal agencies, and streamline operations, thereby significantly impacting the security of federal information systems.
Understanding FISMA and FedRAMP
What is FISMA?
Enacted in 2002 as part of the E-Government Act, FISMA mandates the protection of government information, operations, and assets against natural or human-made threats, including cyberattacks, data breaches, and insider threats. Its primary objective is to ensure that federal agencies and their contractors develop, document, and implement programs to secure the information systems supporting their operations. This risk-based approach requires regular assessments and reporting to maintain a robust security posture.
Before FISMA, federal agencies had varying levels of protection, leading to vulnerabilities across government networks. FISMA standardized these security measures, enhancing the overall security of federal information systems. The Federal Information Security Modernization Act of 2014 further refined the law.
FISMA applies to all federal agencies and their contractors, including organizations handling or processing information on behalf of a federal agency, such as cloud service providers (CSPs), government contractors, and third-party vendors. These entities must implement and maintain a comprehensive information security program that aligns with the standards set forth by the National Institute of Standards and Technology (NIST).
What is FedRAMP?
Established in 2011, FedRAMP provides a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services. Its primary objective is to ensure that CSPs meet stringent security requirements before offering services to federal agencies. This program not only streamlines the adoption of cloud technologies across federal agencies but also reduces time, cost, and effort while maintaining a high level of security.
FedRAMP was created in response to the growing adoption of cloud computing within the federal government and the need for a unified approach to securing cloud services. By consolidating security assessments into a standard set of requirements based on NIST guidelines, FedRAMP offers a streamlined path to cloud adoption.
Any CSP wishing to do business with the federal government must achieve FedRAMP authorization. This involves meeting a set of security controls and undergoing rigorous third-party assessments. The program covers various service models, ensuring that all aspects of cloud computing are secured.
Key Similarities Between FISMA and FedRAMP
FISMA and FedRAMP share the goal of significantly enhancing federal information security. Both emphasize the protection of sensitive data and the management of risks associated with information systems. Grounded in risk management principles, both frameworks require organizations to implement comprehensive security controls and conduct continuous monitoring to identify and address vulnerabilities in real-time.
Security Standards and Controls
FISMA and FedRAMP draw heavily on the National Institute of Standards and Technology (NIST) guidelines, which form the foundation for their security standards and controls. For example, NIST SP 800-53 is integral to both frameworks, offering a structured approach to selecting and implementing security controls.
The controls are categorized into families that address various aspects of information security, including access control, incident response, and continuous monitoring. They are designed to protect information systems across multiple sectors, particularly federal agencies and contractors. This systematic approach allows organizations to tailor their security posture based on their specific risk environment—a critical element in FISMA and FedRAMP compliance.
Key Differences Between FISMA and FedRAMP
Scope and Applicability
One of the primary differences between FISMA and FedRAMP is their scope and applicability. While FISMA applies broadly to federal agencies and their contractors, covering a wide range of information systems, FedRAMP is explicitly tailored for cloud service providers offering services to federal agencies. This difference in scope leads to variations in implementation and compliance processes.
Implementation and Process
FISMA requires each federal agency to develop and implement its information security program tailored to its specific needs and risks. This can lead to variations in how FISMA is applied across agencies. In contrast, FedRAMP provides a standardized process for cloud service providers, ensuring consistency in how cloud services are assessed and authorized for federal use.
Authorization and Certification
Another key difference is the authorization process. FISMA allows for agency-specific authorization processes, which vary depending on the agency’s requirements and risk tolerance. This approach gives agencies a sense of control and confidence in their security measures. On the other hand, FedRAMP uses a centralized authorization process managed by the Joint Authorization Board (JAB), involving rigorous third-party assessments. Once authorized, CSPs can offer their services across multiple federal agencies.
Security Controls Differentiators
FedRAMP requires the implementation of security controls specifically designed for cloud environments based on a subset of NIST guidelines. These controls address the unique risks associated with cloud computing:
- Multi-Tenancy: FedRAMP’s controls manage risks related to multiple customers sharing the same cloud infrastructure, ensuring proper data isolation and protection against cross-tenant data breaches.
- Data Segregation: FedRAMP mandates robust measures, including encryption and access controls, to prevent the inadvertent mixing of data from different customers or agencies.
- Dynamic Scaling: Cloud environments often involve rapidly scaling resources up or down. FedRAMP controls address the security implications of this dynamic nature, requiring continuous monitoring and adaptability in security practices.
- Continuous Monitoring and Incident Response: FedRAMP emphasizes the need for automated tools and processes to continuously monitor cloud environments, including real-time detection of vulnerabilities and threats and automated response mechanisms. Additionally, FedRAMP requires CSPs to have robust incident response plans that include cloud-specific scenarios, such as data breaches affecting multiple tenants or disruptions caused by scaling operations.
In contrast, FISMA employs a broader set of NIST guidelines applicable to various information systems, making its controls more generalized. These controls cover a wide range of information systems, from traditional IT infrastructure to newer technologies:
- Comprehensive Coverage: FISMA’s controls are designed to apply to various federal information systems, including those not cloud-based, and cover aspects such as physical security, network security, and personnel security.
- Risk Management Framework (RMF): FISMA utilizes NIST’s Risk Management Framework (RMF) to categorize information systems, select security controls, and monitor continuous activities. This framework is adaptable to different environments and provides a flexible approach to security.
Ultimately, whether managing traditional IT infrastructures or cloud services, navigating the complexities of FISMA and FedRAMP is critical for organizations aiming to align with federal cybersecurity requirements. By adopting the appropriate framework, your organization enhances operational efficiency, ensures compliance, and maintains a competitive edge in the federal marketplace. | <urn:uuid:b2007d22-3cb7-4712-b712-139ef8823362> | CC-MAIN-2024-38 | https://360advanced.com/fisma-vs-fedramp-understanding-similarities-differences-and-key-attributes/ | 2024-09-18T15:43:38Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651899.75/warc/CC-MAIN-20240918133146-20240918163146-00322.warc.gz | en | 0.922679 | 1,597 | 2.96875 | 3 |
Failure as a service or function as a service. Spell out on first use to avoid confusion.
A social media platform founded by Mark Zuckerberg and others in 2004.
An Apple videoconferencing product.
Apple DRM technology.
A piece of evidence (e.g., an old digital certificate) planted by hackers to deliberately mislead investigators about their identity.
Frequently asked questions. Pronounced as letters or “fack.” Write “an FAQ” in formal writing.
To teleport from one area of a video game to another.
To make a typo on a mobile device by pressing a nearby button. Informal.
The Food and Drug Administration.
Full disk encryption. Also called whole disk encryption. Spell out on first use.
Family Educational Rights and Privacy Act of 1974. Protects the privacy of student education records. Spell out on first use.
When referring to the specific HTTP request type, do not use as a verb.
The Federal Financial Institutions Examination Council. Spell out on first use.
If writing about a type of field, use the normal font. If it’s a named field, use the tech font, as in “the address field.”
Among other things, it protects U.S. individuals from self-incrimination.
Use all caps if writing only the letters. Use lowercase in the tech font if writing the dot form. The dot is spoken aloud, as in “dot-E-X-E,” so when writing the dot form, use “a” instead of “an” before it.
Ex: an XML file, a .exe
Use the tech font when citing full filenames with their extensions.
Ex: the PoC.xml
Use the tech font to show file paths, as in C:\Users\Fox\Downloads\fox.gif.
An international criminal hacker organization targeting payment card data.
A unique public key identifier. Use the tech font, as in “the SubjectPublicKeyInfo field.”
Short for financial technology. Corporate jargon; use sparingly.
An Amazon media player.
Failures in time rate. One FIT is equivalent to one failure per billion hours, as in “1,000 FITs.” Briefly define on first use.
A mathematical pattern game sometimes used as a test in coding interviews.
A Boolean variable that signals a function or process to another program. Flags are often considered set or true if present. Use the tech font, as in “the HttpOnly flag.”
The supreme deity in the facetious religion of Pastafarianism, which was founded in 2005.
Foobar, foo, and bar are commonly used as placeholder variables in computer science courses.
First-person shooter. A type of video game. Pluralize as FPS games.
Frames per second. Put a space between the number and the unit, as in “60 fps.”
Fully qualified domain name. Spell out on first use.
Write frameworks in the normal font, as in AngularJS, React, and MVC-based framework.
Describes an app or service that is initially free to use but costs money to unlock crucial features.
A pen testing tool.
A fictional Anonymous-type organization from the USA TV show Mr. Robot.
“Faster than light” warp drives in the TV show Battlestar Galactica and other sci-fi.
A package of PII that can be bought on the black market. It usually includes SSN, DOB, and full name.
Corporate jargon. Better to describe specific functions or features.
Capitalize the name of a function as in “the Forgot Password function.”
A fuzzer generates or mutates input for consumption by the target program with the intention of finding bugs.
A framework that handles the crashes that result from a fuzzer.
Logic that is equipped to handle multiple truth values, as opposed to Boolean logic. | <urn:uuid:06dee8c8-a6d5-4ab5-a4e1-a0d3d36b6cb1> | CC-MAIN-2024-38 | https://bishopfox.com/cybersecurity-style-guide/list-of-terms/f | 2024-09-19T20:27:04Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652067.20/warc/CC-MAIN-20240919194038-20240919224038-00222.warc.gz | en | 0.906273 | 837 | 2.84375 | 3 |
As technology advances, the need for effective cybersecurity measures becomes increasingly important. The necessity for regular testing, including penetration testing, has raised awareness of best practices and standards for such assessments.
The National Institute of Standards and Technology (NIST) has developed comprehensive guidelines and standards to help organizations safeguard their information systems from cyber threats. Among these guidelines is NIST 800-115, a guide for conducting penetration testing on information systems.
This article will explore the fundamental principles of NIST 800-115 and the benefits of conducting penetration testing according to its guidelines. We will also discuss how organizations can use the information gathered from penetration testing to improve their cybersecurity. Organizations can better protect their systems and data from cyber threats by following the recommendations outlined in this guide. | <urn:uuid:b400263e-6ded-42d0-ae56-f4fabbc5b365> | CC-MAIN-2024-38 | https://lazarusalliance.com/tag/nist-800-115/ | 2024-09-21T02:40:45Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701427996.97/warc/CC-MAIN-20240921015054-20240921045054-00122.warc.gz | en | 0.938779 | 150 | 2.953125 | 3 |
5 questions to align your IT team with sustainability goals
How does ITAD play an important role in helping to solve the challenge of excessive e-waste while meeting sustainability goals?
Now more than ever organizations are being held accountable to demonstrate progress in achieving sustainability goals-IT asset disposition (ITAD) is an important part of this.
By 2030, it's predicted that e-waste will reach an annual 74 million metric tonnes. If you combine this with tech refresh cycles every two to four years, it's easy to understand how e-waste is one of the fastest-growing solid waste streams. If your IT team isn't aligned with your organization's sustainability goals, there could be reporting challenges on the horizon.
Traditionally, organizations' IT teams have been tasked with the secure disposition of IT assets in an environmentally responsible way. With the right systems and processes in place, your organization can:
- Reduce e-waste that goes to landfills
- Reduce carbon emissions
- Extend the life of IT assets
So how does ITAD play an important role in helping to solve the challenge of excessive e-waste while meeting sustainability goals? Here are the five questions we recommend asking:
1. What is your carbon footprint?
If you're not measuring your carbon footprint, it's not possible to show progress toward your organization's sustainability commitments.
A great way to reduce and maintain a smaller carbon footprint is for your IT team to put retired assets through a responsible and sustainable disposition process. This includes recycling and repurposing retired equipment, which reduces the need for new equipment. By reducing the amount of e-waste that is sent to landfills, you help to limit the release of harmful gases into the atmosphere. And as fewer devices are used or purchased, the energy and resources required to produce new equipment declines along with your carbon footprint.
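The kind of footprint arithmetic an IT team might report can be sketched in a few lines. This is a toy model for illustration only: the embodied-carbon figure and the `avoided_emissions_kg` helper are assumptions for the example, not authoritative emission factors.

```python
# Toy model: estimate CO2e avoided by remarketing retired devices instead of
# buying new ones. The embodied-carbon figure below is an assumed value for
# illustration, not an authoritative emission factor.

EMBODIED_CO2E_KG_PER_LAPTOP = 300.0  # assumed manufacturing footprint per unit

def avoided_emissions_kg(units_refurbished: int,
                         embodied_kg: float = EMBODIED_CO2E_KG_PER_LAPTOP) -> float:
    """Each refurbished unit displaces one newly manufactured device."""
    return units_refurbished * embodied_kg

if __name__ == "__main__":
    # A 500-laptop refresh where 60% of retired units are remarketed
    reused = int(500 * 0.6)
    print(f"{reused} units reused -> "
          f"{avoided_emissions_kg(reused):,.0f} kg CO2e avoided")
```

Numbers like these are what ultimately feed a sustainability or environmental benefits report.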
2. How much of your e-waste goes into landfills?
When retired IT assets go into a landfill, precious metals can't be recovered, toxic chemicals leach into the soil, and harmful gases are released into the air. Therefore, anything that can be done to reduce your organization's contribution to landfill waste ultimately benefits the environment.
With proper IT asset disposition, including recycling and refurbishing parts, your IT team can reduce the amount of hazardous e-waste that ends up in landfills. This is why keeping track of proper IT asset disposition versus what has to go in the landfill is crucial to supporting your sustainability efforts.
3. Are your IT assets included in your organization’s recycling program?
Your organization should have a disposition plan for IT assets. If assets are being thrown in the trash, not only is the data being put at risk, but the potential value of the asset is lost. However, by actively choosing to recycle retired IT assets, your organization is contributing to the larger circular economy, and you’re helping to reduce the amount of waste that goes into landfills.
4. Has your organization considered remarketing its IT assets?
The more your organization can eliminate or lessen its contribution to e-waste, the better it is for the environment. Remarketing your IT devices is another way to help extend the life of IT equipment and keep it out of the waste stream.
When IT devices are remarketed, their data is sanitized before they are re-sold to other groups who can still use and get value out of them. Remarketing is the most beneficial way to support sustainability goals because it helps:
- Avoid or delay the need to buy new IT equipment, which reduces costs
- Keep the entire IT asset out of the landfill because it's being deployed for a new use
- Lower overall energy consumption, since reusing a device avoids the energy required to manufacture a new one
5. Have you done everything you can to extend the life of your IT devices?
Addressing questions like these helps to highlight how your IT asset disposition program is or could be contributing to your organization’s overall sustainability efforts. You are being accountable to sustainability goals by working to extend the life of your IT devices.
Or at the very least, these questions can help guide your IT asset disposition program toward environmentally conscious practices like reducing landfill waste and energy usage.
A good way to share your sustainability progress with stakeholders is to work with your vendors to put together an environmental benefits report (EBR). If you work with Iron Mountain to support your ITAD program, good news—we have an EBR that your IT team can use to show its environmental impact.
To learn more about our easy-to-consume report, check out our Environmental Benefits Report.
If a picture’s worth a thousand words, then a video’s worth a thousand pictures. For as long as videos have existed, we’ve trusted them as a reliable form of evidence. Nothing is as impactful as seeing something happen on video (e.g. the Kennedy assassination) or seeing someone say something on video, especially if it’s recorded in an inconspicuous manner (e.g. Mitt Romney’s damning “47 percent” video).
However, that foundation of trust is slowly fading as a new generation of AI-doctored videos find their way into the mainstream. Famously known as “deepfakes,” the synthetic videos are created using an application called FakeApp, which uses artificial intelligence to swap the faces in a video with those of another person.
Since making its first appearance earlier this month, FakeApp has gathered a large community and hundreds of thousands of downloads. And a considerable number of its users are interested in putting the application to questionable uses. Creating fake video is nothing new, but it previously required access to special hardware, experts and plenty of money. Now you can do it from the comfort of your home, with a workstation that has a decent graphics card and a good amount of RAM.
The results of deepfakes are still crude. Most of them have obvious artifacts that give away their true nature. Even the more convincing ones are distinguishable if you look closely. But it's only a matter of time before the technology becomes good enough to fool even trained experts. Then its destructive power will take on a totally new dimension.
The capabilities and limits of deepfakes
Under the hood, a deepfake is not magic; it's pure mathematics. The application uses deep learning, which means it relies on neural networks to perform its functions. Neural networks are software structures roughly designed after the human brain. When you give a neural network many samples of a specific type of data, say pictures of a person, it will learn to perform functions such as detecting that person's face in photos or, in the case of deepfakes, replacing someone else's face with it.
However, deepfakes suffers from the same problems that all deep learning applications do: It relies too much on data. To create a good fake video, deepfakers need a lot of face pictures from the person in the original video (which is not a problem if they created it themselves) and the person whose face will be placed in the final production. And they need the pictures to be from various angles and with decent quality.
This means that it will be hard to make a deepfake from a random person, because the deepfaker will have to somehow gather thousands of pictures from the target. But celebrities, politicians, athletes and other people who regularly appear on television or post a lot of pictures and videos online are ripe for deepfaking.
The future of AI forgery
Shortly after the FakeApp was published, a lot of fake porn involving celebrities, movie stars and politicians surfaced because of the abundance of online photos and videos available. The backlash that those videos has caused in the media and in discussions by experts and lawmakers give us a preview of the disruption and chaos that AI-generated forgery can unleash in a not-too-distant future. But the real threat of deepfakes and other AI-doctored media has yet to emerge.
When you put deepfakes next to other AI-powered tools that can synthesize human voice (Lyrebird and Voicery), handwriting and conversation style, the negative impact can be immense (though the positive potential is just as large; that's a topic for another post).
A post in the Lawfare blog depicts a picture of what the evolution of deepfakes could entail. For instance, a fake video can show an authority or official doing something that can have massive social and political repercussions, such as a soldier killing civilians or a politician taking a bribe or uttering a racist comment. As a machine learning expert told me very recently, “Once you see something, unseeing it is very hard.”
We’ve already seen how fake news negatively impacted the U.S. presidential elections in 2016. Wait until bad actors have deepfakes to back their claims with fake videos. AI synthesizing technologies can also usher in a new era of fraud, forgery, fake news and social engineering.
This means any video you see might be a deepfake; any email you read or text conversation you engage in might have been generated by natural language processing and generation algorithms that have meticulously studied and imitated the writing style of the sender; any voice message you get could be generated by an AI algorithm; and all of them aim to lure you into a trap. This means you'll have to distrust everything you see, read or hear until you can prove its authenticity.
How to counter the effects of AI forgery
So how can you prove the authenticity of videos and ensure the safe use of AI synthesizing tools? Some of the experts I spoke to suggest that raising awareness will be an important first step. Educating people on the capabilities of AI algorithms will be a good measure to prevent the bad uses of applications like FakeApp having widespread impact—at least in the short term.
Legal measures are also important. Currently, there are no serious safeguards to protect people against deepfakes or forged voice recordings. Putting heavy penalties on the practice will raise the costs of creating and publishing (or hosting) fake material and will serve as a deterrent against bad uses of the technology.
But these measures will only be effective as long as humans can tell the difference between fake and real media. Once the technology matures, it will be near-impossible to prove that a specific video or audio recording was created by AI algorithms. Meanwhile, someone might also take advantage of the doubts and uncertainty surrounding AI forgery to claim that a real video that portrays them committing a crime was the work of artificial intelligence. That claim, too, will be hard to debunk.
The technology to deal with AI forgery
We also need technological measures to back up our ethical and legal safeguards against deepfakes and other forms of AI-based forgery. Ironically, the best way to detect AI-doctored media is to use artificial intelligence. Just as deep learning algorithms can learn to stitch a person’s face on another person’s body in a video, they can be trained to detect the telltale signs that indicate AI was used to manipulate a photo, video or sound file.
At some point however, the faking might become so real that even AI won’t be able to detect it. For that reason, we’ll need to establish measures to register and verify the authenticity of true media and documents.
Lawfare suggests a service that tracks people's movements and activities to set up a directory of evidence that could be used to verify the authenticity of material published about those people. So, for instance, if someone posts a video that shows you were at a certain location at a certain time, doing something questionable, you'll be able to have that third-party service verify and either certify or reject the claim by comparing the video's location against your recorded data.
However, as the same blog post points out, recording so much of your data might entail a greater security and privacy risk, especially if it’s all collected and stored by a single company. It could lead to wholesale information theft, like last year’s Equifax data breach, or it could end up in Facebook’s Cambridge Analytica scandal all over again. And it can still be gamed by bad actors, either to cover up their crimes or to produce evidence against others.
A possible fix would be to use the blockchain. Blockchain is a distributed ledger that enables you to store information online without the need for centralized servers. On the blockchain, every record is replicated on multiple computers and tied to a pair of public and private encryption keys. The holder of the private key will be the true owner of the data, not the computers storing it. Moreover, blockchains are resilient against a host of security threats that centralized data stores are vulnerable to. Distributed ledgers are not yet very good at storing large amounts of data, but they’re perfect for storing hashes and digital signatures.
For instance, people could use the blockchain to digitally sign and confirm the authenticity of a video or audio file that is related to them. The more people add their digital signature to that video, the more likely it will be considered as a real document. This is not a perfect solution. It will need added measures to weigh and factor in the qualification of the people who vote on a document.
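As a rough illustration of the registration idea, the sketch below fingerprints a media file with SHA-256, the kind of digest that could be stored on a ledger at publication time, and later re-checks a copy against it. The function names and sample bytes are hypothetical; a real system would also need key management, digital signatures and trusted timestamps.

```python
import hashlib

def fingerprint(media_bytes: bytes) -> str:
    """SHA-256 digest that could be registered on a ledger at publication time."""
    return hashlib.sha256(media_bytes).hexdigest()

def is_authentic(media_bytes: bytes, registered_digest: str) -> bool:
    """Re-hash the file and compare it against the digest recorded earlier."""
    return fingerprint(media_bytes) == registered_digest

# Hypothetical publication flow: hash the original at release time...
original = b"frame-data-of-the-original-video"
digest = fingerprint(original)

# ...then any later copy can be checked; a single altered byte fails the check.
tampered = b"frame-data-of-the-doctored-video"
assert is_authentic(original, digest)
assert not is_authentic(tampered, digest)
```

The ledger never stores the video itself, only the short digest, which is exactly the kind of payload distributed ledgers handle well.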
For the moment, what's clear is that AI forgery may soon become a serious threat. We must develop a proper, multi-pronged approach that prevents malicious uses of AI while still allowing innovation to proceed. Until then, IRL is the only space you can truly trust.
In 1954, the psychologist Leon Festinger infiltrated a cult led by a woman who Festinger dubbed Marian Keech. Keech believed she received messages from an alien race telling her that on a certain date a flying saucer would appear to collect her and her supporters. At which point a catastrophic flood would decimate the remaining population of the earth.
Events didn’t exactly work out that way. But Festinger hadn’t joined the cult to test the validity of Keech’s claims. Instead, he wanted to observe how the cult would react to the discovery that their prophecy had failed. Would they admit the error and change their beliefs?
Festinger is now a mainstay in psychology textbooks due to his theories on cognitive dissonance, which describe a state of mind in which a person holds two or more conflicting thoughts, ideas, or opinions. Cognitive dissonance is by its nature an uncomfortable condition, one that the brain wants to quickly resolve by reinterpreting one or more of the conflicting thoughts. Which lets our brain return to a stable, coherent state.
But of course, reinterpreting facts or beliefs on the fly can be short-sighted. Take for instance the cult that Festinger studied—when the alien apocalypse didn’t materialize, most cult members chose to simply reinterpret what did happen (nothing) as a sign that the aliens had, in fact, saved humanity in response to the cult’s efforts and faithfulness. They opted to restore mental coherency at the expense of truth.
But don’t let the outlandishness of an alien cult fool you, the bias its members exhibited—interpreting the outside world in terms of pre-existing expectations—is something that we all do. In fact, bias can have beneficial aspects as a labor-saving shortcut that lightens the cognitive load on our brain. But bias is also responsible for limiting our understanding of things that are new or different. We’re not really immune from it at home or at work; it can infect how we discover and interpret information, and impact our behavior in workplace settings.
Fortunately, having a greater awareness of confirmation bias gives us tools to limit its shortcomings at work and in life. Let’s start by looking at how confirmation bias works for and against us, explore some of the ways it can surround us in ideological bubbles, and then discuss how we can burst our own personal bias bubbles.
Beliefs we hold dear feel as though they are an inseparable part of us. But this can lead to problems when new information seems to contradict them, particularly when we’re not prepared to readjust. When we reinterpret an experience to conform with our prior beliefs, we are feeding confirmation bias—we place greater emphasis on information that supports our beliefs while discrediting or ignoring conflicting information.
A meta-analysis of 54 social psychology experiments concluded that people have stronger memories of events that conflict with their expectations, yet maintain stronger preferences toward things that support them. So much so that we'll devote 36% more reading time to attitude-consistent material. When we do encounter opposing information, not only are we likely to interpret it in a corroborating way, we sometimes go so far as to use contradiction to strengthen already existing beliefs, something researchers call the 'backfire effect.'
The confirmation bias does not only influence how we interpret new information, it also helps dictate what we go out looking for in the first place, and what we recall from our memory banks in response to certain questions and decisions.
Take for instance the question “Are you happy with your social life?” It is, on the face of it, basically the same as asking “Are you unhappy with your social life?” The state of someone’s happiness should not be swayed by the words used to ask about it. Yet this is often the case. So those asked if they’re happy will call forth memories of joy, while those asked if they’re unhappy will remember moments of sorrow.
The effects of this associative bias can appear out of nowhere, in the absence of conscious thought. If you’re about to purchase a particular model of new car, you’ll start noticing that car all over the place. Likewise, you might be considering starting a family and begin to see the world around you filled with children. Or go through a breakup and see everyone else traveling in pairs.
If we fail to realize that this is simply a byproduct of our brains seeking efficient means of directing our attention, we end up erroneously assuming that that car is really popular or that there are more kids in the area than there in fact are. This is a classic means of creating and maintaining stereotypes in social interactions, as we tend to see what we expect to see and neglect counter evidence.
Moreover, research suggests that when we’re forming impressions of a person’s personality, we place greater importance on information learned earlier versus that which we learn later. When asked to form an opinion about someone who is “intelligent, industrious, impulsive, critical, stubborn, envious,” subjects rate the person as more positive than when the same words are presented in the reverse order. Like those pesky first impressions that linger, the information we learn about first becomes the baseline against which new evidence is compared—while we may adjust this baseline, each new piece of information ends up correcting it to a smaller degree. “Intelligent” makes for a better first impression than does “envious,” and while the subsequent adjectives provide some nuance, they don’t overwrite that initial judgement.
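The baseline-and-adjustment idea can be made concrete with a toy anchor-and-adjust model. Everything here is an illustrative assumption, the trait scores and the halving adjustment weight alike, not a fitted psychological model:

```python
def impression_score(trait_scores, adjustment_rate=0.5):
    """Anchor on the first trait, then adjust toward each new one with
    diminishing weight -- a toy anchor-and-adjust model of impressions."""
    score = trait_scores[0]
    weight = adjustment_rate
    for s in trait_scores[1:]:
        score += weight * (s - score)
        weight *= adjustment_rate  # later evidence moves the needle less
    return score

# Flattering-first vs unflattering-first ordering of the same six traits
# (+1 = positive trait like "intelligent", -1 = negative like "envious")
traits = [+1.0, +1.0, 0.0, -0.5, -1.0, -1.0]
forward = impression_score(traits)
backward = impression_score(list(reversed(traits)))
assert forward > backward  # same facts, friendlier first impression
```

Because the first item sets the anchor and each later item only partially corrects it, the same list of traits yields a warmer impression when the positives come first, mirroring the experimental result.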
Bias is an unfortunate side effect of the brain’s need to reduce strain on its processing capabilities. By referencing past experiences and memories when finding and interpreting new information, we avoid having to start from scratch and can more effectively filter our environment for what’s relevant. But when we let bias run unchecked, we end up with some very unfortunate side effects. Especially when it comes to online news and information.
Confirmation bias shows up in our news feeds and web searches because we tend to network with like-minded individuals and give more attention to belief-confirming information sources.
While search engines have not been shown to display a heavy bias in their results, the language we use in our searches can implicitly support an assumption or belief. If you believe in astrology and search for the Gemini horoscope, you're going to find what you're looking for without seeing (or paying much attention to) information that questions astrology itself. In a subtler sense, a simple comparison of a search for "how confirmation bias affects learning" and another for "does confirmation bias affect learning?" returns front-page results that differ noticeably: some are the same, others aren't.
When it comes to our social media feeds, the ability to personalize the people and brands we follow allows us to form networks that expose us to confirmation bias. Further complicating the issue are recommendation engines, which are designed to show us what the engines think we’ll like and nothing else, minimizing our exposure to information that might provide a well-rounded view or alternative perspective.
What happens if we’re only ever exposed to people and ideas that support our existing beliefs? Those beliefs are reinforced and strengthened by an affirmation feedback loop—each news item, blog post, and status update further demonstrates what we think we already know. And when opposing ideas do manage to sneak in, people tend to quickly label those ideas “exceptions that prove the rule” given that it’s much easier to reinterpret a fact than change a fundamental belief.
Of course, how can one be expected to traverse complex informational landscapes free from this bias? The brain can only process so much information, after all. It takes time and effort to process and internalize new ideas and concepts. Plus, the Internet is filled with fake news to such an extent that it is seldom possible to check the reliability of every source we encounter. And so despite even good intentions, our bubbles grow.
Bursting the bubble
Perhaps the most important and, thankfully, simplest way to battle confirmation bias is to acknowledge that there are many sides to every story. While we may hold a strong opinion, it is but one perspective, of which there may be many more, each with its valid points and arguments. When we allow ourselves a small measure of doubt, we keep ourselves from drawing conclusions too quickly.
A more time-consuming defense against confirmation bias is to act as though you'll need to explain yourself to someone. Researchers have found that people were more likely to critically examine information, and to learn more about ideas, if they believed they would need to explain them to another person who was well informed, interested in the truth, and whose views they were not already aware of. Call it accounting for the effects of accountability. To go even further, get in the habit of playing devil's advocate: take the opposite view and try to argue from there. It is not always easy to see things from the other side, but then, if you cannot take an opposing perspective seriously, chances are you don't really understand your own views, and are missing important elements required for a full understanding of the topic.
Getting to the bottom of an issue is an endeavor that requires time and effort, coupled with robust reasoning skills, and as such it is not possible for each of us to get to the bottom of every important topic. But this does not mean we are doomed to suffer at the hands of our biases. If we can simply admit to ourselves that we are missing important elements of the story—that our view is incomplete—we will be more likely to open ourselves to conflicting ideas, and less likely to cling to inconsistent viewpoints.
When it comes to the beliefs we hold dear, we may benefit when we take the time not only to find support for our views, but to discover the contradictions and counterarguments. Time to put ourselves in the other person’s shoes. Time to imagine the outsider’s perspective. Time to resolve cognitive dissonances. And time to purposefully burst our own bubbles of bias.
Cyber threats account for some of the most serious and prolific risks against SMEs, many of which can cripple a company that isn’t sufficiently protected. For many companies, it’s worth asking: can our IT department protect us? And if not, are we prepared in the event of an unexpected disaster that prevents us operating as normal?
However, cyber attack is not the only risk to business continuity. Power outages, adverse weather, terrorism and a host of other unforeseeable incidents can result in IT systems being unavailable, or inaccessible.
Preparing for the unexpected is something every business should do at least once a year (or more frequently, if you operate a high-risk business and the HSE recommends greater precautions are taken).
Business continuity planning
Business continuity planning is defined as: “A holistic management process that identifies potential impacts that threaten an organisation and provides a framework for building resilience with the capability for an effective response that safeguards the interests of its key stakeholders, reputation, brand and value-creating activities.”
Without continuity planning, companies can go out of business. Over 40% of those affected by the 1996 IRA Manchester bombing went under, never to return.
Maintaining IT systems in the event of a disaster is mission-critical. Without phone lines, email, computers – ways for the team to communicate and keep in contact with customers – a shutdown could cause prolonged damage to a business.
Are your IT systems resilient enough?
Business continuity planning is a different exercise for companies with on-site IT systems. When your servers, phone lines and core systems are on premises, you are running a much higher risk than companies that operate in the cloud.
Not only are on-site systems easier to cripple when a cyber attack happens, but without backup, anything that happens in the office – such as a fire or flood – could potentially destroy vital records, customer data, and everything you need to run the company. It is hard to ignore how much we store in digital files and how important they are to most businesses. Anything that damages your head office could have serious ramifications for years to come.
When weighing up the risks to vital IT systems, it’s recommended that you go through the following five-step process:
- Analyse the business – what do you rely on? How and where does it operate, and how soon could you get back to ‘business as usual’?
- Assess the risks – which risks are likely, and what should you plan for? How do you prevent them? If prevention isn’t possible, how soon could your IT team get everything back online so your team can get back to work (even if that means moving office or staff working remotely for a while)?
- Develop your strategy – build it around an operational goal of getting back to work as soon as possible.
- Work out the details – so that the plan can be implemented.
- Rehearse that plan – make sure you’re confident it can be implemented if a disaster scenario actually occurs. Unless your IT team can handle a trial run, you can’t know how they would manage a real emergency.
As part of this plan, ask your IT team the following questions:
- In the event of anything from a power cut to a fire, do we have a cloud-based backup of everything on our systems?
- How quickly can we restore vital operational systems, and are we clear on which systems must be restored from backup within the first 24 to 72 hours after a disaster?
- During a rehearsed disaster run, how soon will the business be operating as closely to normal as possible, as far as clients are concerned?
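One way an IT team might keep itself honest on the backup question is a simple recency check against a recovery point objective (RPO). The catalog, system names and 24-hour threshold below are hypothetical placeholders, not a real monitoring tool:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical backup catalog: system name -> timestamp of last successful backup
last_backup = {
    "crm": datetime.now(timezone.utc) - timedelta(hours=6),
    "email": datetime.now(timezone.utc) - timedelta(hours=30),
}

RPO = timedelta(hours=24)  # assumed recovery point objective

def stale_backups(catalog, rpo, now=None):
    """Return systems whose most recent backup is older than the RPO."""
    now = now or datetime.now(timezone.utc)
    return sorted(name for name, ts in catalog.items() if now - ts > rpo)

if __name__ == "__main__":
    for name in stale_backups(last_backup, RPO):
        print(f"WARNING: {name} backup exceeds the {RPO} RPO")
```

Running a check like this on a schedule turns the continuity plan's backup questions into something that can be answered with evidence rather than assumption.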
From a planning perspective, if there is uncertainty on any of these points, you may need to rethink your business continuity planning. Putting the right steps in place to recover from anything unexpected is the best way to ensure you are prepared in the event of an emergency. Making sure vital systems are backed-up or running from public cloud and cloud-based platforms; one of the most effective ways to maintain business continuity if the unimaginable should happen.
Note: Business continuity planning is not just for major incidents that affect the entire business or whole departments. It’s also for more ‘minor’ incidents where individuals may be unable to continue with their work. These can have a significant impact on productivity and affect the bottom line.
Many of these incidents can be resolved by your IT support team or service desk, but it’s important that they know how to prioritise support tickets to ensure that mission-critical and other important functions don’t suffer from unnecessary downtime. Speak to your IT service provider about this to ensure that the processes are in place to identify the incidents that impact business continuity most severely, and that these are prioritised accordingly.
Download our article on ‘how to drive IT service desk efficiencies’ for advice on how to make your IT service desk more efficient…
Building so-called Smarter Cities has long been touted as a major goal for municipalities around the globe. But building systems capable of responding to events in real time is a major challenge for any government operating on a limited IT budget. The hope is that processes will become integrated enough to create a massive pool of data that will then be employed to drive any number of artificial intelligence (AI) applications.
The problem is that while city governments typically have access to massive amounts of data, most of it resides in isolated systems run by departments that are often at odds with one another. In fact, Daniel Newman, principal analyst with Futurum, says smart cities are mostly a figment of vendor marketing imagination.
“The trouble with smart cities is they don’t exist yet,” says Newman.
What is being built is a set of isolated next-generation applications working with a very limited set of data. But while isolated applications offer some compelling returns on investment (ROI), many local governments have yet to fully appreciate the business and cultural impact so-called Big Data will play in driving AI applications that will transform almost every government process.
Notable exceptions, however, include Indiana and Illinois. Back in 2014, then Governor Mike Pence of Indiana issued an executive order to create a massive data hub. Illinois has a similar data initiative. These types of initiatives long term will be critical because machine and deep learning algorithms that drive AI models are only as good as the amount of data they can be applied against. Before localities can take advantage of those algorithms, they will need to create massive Big Data hubs.
In the meantime, the three things that most drive governments to invest in smart city projects are economic competitiveness, sustainability and public safety, says Dante Ricci, global lead of public services and health care marketing and communications at SAP.
Attracting businesses to a locality is critical to growing the tax base. It’s already clear a race is on to put the types of advanced IT systems in place that will attract business investment. Amazon, for example, in its search for a new headquarters somewhere on the east coast of the U.S., has made it clear that investments in IT infrastructure are just as important as transit and other forms of physical infrastructure. Just about every company now takes note of those same requirements before deciding to build another factory or open a new office because they impact quality of life, which in turn impacts the number and types of employees an organization can hire.
The challenge most local governments face is that they don’t have ready access to funding to create even a proof of concept (PoC) project, says Ricci. Many of them rely on various non-profit organizations and government think tanks to create those projects, says Ricci.
“It’s a way to see what the art of the possible is,” says Ricci.
It’s worth noting that cities located in regions run by totalitarian governments may, for better or worse, have a distinct advantage when it comes to aggregating data as part of a series of smart city projects. One locality that is well down the path toward building a smart city, for example, is Singapore, which last year was recognized by Juniper Research as the “cleverest” city on earth, followed by London, New York and San Francisco.
Each of these cities has access to substantial economic resources. But over the long haul, it’s probable that cities such as Singapore located in countries that are not liberal democracies might have a significant short-term advantage, notes Larry Carvalho, an industry analyst with International Data Corp. (IDC). A big part of the reason for that is that a totalitarian government can mandate change.
“You can get there faster when you control everything,” says Carvalho. “Liberal democracies will get there. It will just take longer because of privacy concerns.”
The most obvious benefit of investing in technologies to build a smart city is public safety. While there are clearly privacy concerns to be considered, digital cameras, for example, can act as sensors capable of streaming massive amounts of video and audio that can be analyzed in near real time. As that data gets analyzed in the cloud, it becomes possible to, for example, better coordinate fire, police and ambulance services in real time.
For example, Cape Town in South Africa has worked with SAP to deploy a centralized Emergency Policing and Incident Command (EPIC) to coordinate the efforts of fire and rescue, traffic, metro police, law enforcement, disaster risk management, and the city’s special investigative unit.
At a simpler level, Big Data analytics coupled with AI should also be able to make suggestions about when to optimally issue work permits spanning multiple agencies that would substantially reduce traffic jams. There might even come a day when traffic itself is managed in real time using AI applications infused with machine and deep learning algorithms.
Obviously, it’s still early days when it comes to building smart cities. No one should expect cities to magically transform overnight. It may take years for the technologies such as machine and deep learning that would enable a smart city to be fully formed and mature to the point where they can be widely implemented. There’s already a significant shortage of AI and data science talent. Most local governments are not going to be able to hire and retain that talent without either financial help from a central government or a grant from an IT vendor anxious to develop a broader market opportunity.
The one thing that is clear is that AI advances in one form or another are coming to cities, like it or not. It’s also probable that some of those advances are going to be perceived to be controversial. But as is often the case, most people are willing, at least to a point, to make some philosophical adjustments in the interests of the greater good. | <urn:uuid:d718ff50-0523-46d2-b439-95d90e2270cf> | CC-MAIN-2024-38 | https://www.itbusinessedge.com/applications/ai-to-accelerate-race-to-build-smarter-cities/ | 2024-09-13T17:38:50Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651535.66/warc/CC-MAIN-20240913165920-20240913195920-00822.warc.gz | en | 0.9609 | 1,203 | 2.609375 | 3 |
What is threat intelligence? Definition and explanation
Threat intelligence is the process of identifying and analysing cyber threats. The term ‘threat intelligence’ can refer to the data collected on a potential threat or the process of gathering, processing and analysing that data to better understand threats. Threat intelligence involves sifting through data, examining it contextually to spot problems and deploying solutions specific to the problem found.
Thanks to digital technology, today’s world is more interconnected than ever. But that increased connectedness has also brought an increased risk of cyberattacks, such as security breaches, data theft, and malware. A key aspect of cybersecurity is threat intelligence. Read on to find out what threat intelligence is, why it's essential, and how to apply it.
What is threat intelligence?
The definition of threat intelligence is sometimes confused with other cybersecurity terms. Most commonly, people confuse ‘threat data’ with ‘threat intelligence’ – but the two are not the same:
- Threat data is a list of possible threats.
- Threat intelligence looks at the bigger picture – by interrogating the data and the broader context to construct a narrative that can inform decision-making.
In essence, threat intelligence enables organisations to make faster and more informed security decisions. It encourages proactive, rather than reactive, behaviours in the fight against cyber attacks.
Why is threat intelligence important?
Threat intelligence is a crucial part of any cybersecurity ecosystem. A cyber threat intelligence program, sometimes called CTI, can:
- Prevent data loss: With a well-structured CTI program, organisations can spot cyber threats and prevent data breaches from releasing sensitive information.
- Provide direction on safety measures: By identifying and analysing threats, CTI spots patterns hackers use and helps organisations put security measures in place to safeguard against future attacks.
- Inform others: Hackers get smarter by the day. To keep up, cybersecurity experts share the tactics they have seen with others in their community to create a collective knowledge base to fight cybercrimes.
Types of threat intelligence
Cybersecurity threat intelligence is often split into three categories – strategic, tactical, and operational. Let’s look at these in turn:
Strategic threat intelligence:
This is typically a high-level analysis designed for non-technical audiences – for example, the board of a company or organisation. It covers cybersecurity topics that may impact broader business decisions and looks at overall trends as well as motivations. Strategic threat intelligence is often based on open sources – which means anyone can access them – such as media reports, white papers, and research.
Tactical threat intelligence:
This is focused on the immediate future and is designed for a more technically-proficient audience. It identifies simple indicators of compromise (IOCs) to allow IT teams to search for and eliminate specific threats within a network. IOCs include elements such as bad IP addresses, known malicious domain names, unusual traffic, log-in red flags, or an increase in file/download requests. Tactical intelligence is the most straightforward form of intelligence to generate and is usually automated. It can often have a short lifespan as many IOCs quickly become obsolete.
Operational threat intelligence:
Behind every cyber attack is a 'who', 'why', and 'how'. Operational threat intelligence is designed to answer these questions by studying past cyber attacks drawing conclusions about intent, timing, and sophistication. Operational threat intelligence requires more resources than tactical intelligence and has a longer lifespan. This is because cyber attackers can't change their tactics, techniques, and procedures (known as TTPs) as easily as they can change their tools – such as a specific type of malware.
Cyber threat intelligence life cycle
Cyber security experts use the concept of a lifecycle in relation to threat intelligence. A typical example of a cyber threat lifecycle would involve these stages: direction, collection, processing, analysis, dissemination, and feedback.
Phase 1: Direction
This phase focuses on setting goals for the threat intelligence program. It might include:
- Understanding which aspects of the organisation need to be protected and potentially creating a priority order.
- Identifying what threat intelligence the organisation needs to protect assets and respond to threats.
- Understanding the organisational impact of a cyber breach.
Phase 2: Collection
This phase is about gathering data to support the goals and objectives set in Phase 1. Data quantity and quality are both crucial to avoid missing severe threat events or being misled by false positives. In this phase, organisations need to identify their data sources – this might include:
- Metadata from internal networks and security devices
- Threat data feeds from credible cyber security organisations
- Interviews with informed stakeholders
- Open source news sites and blogs
Phase 3: Processing
All the data which has been collected needs to be turned into a format that the organisation can use. Different data collection methods will require various means of processing. For example, data from human interviews may need to be fact-checked and cross-checked against other data.
Phase 4: Analysis
Once the data has been processed into a usable format, it needs to be analysed. Analysis is the process of turning information into intelligence that can guide organisational decisions. These decisions might include whether to increase investment in security resources, whether to investigate a particular threat or set of threats, what actions need to be taken to block an immediate threat, what threat intelligence tools are needed, and so on.
Phase 5: Dissemination
Once analysis has been carried out, the key recommendations and conclusions need to be circulated to relevant stakeholders within the organisation. Different teams within the organisation will have different needs. To disseminate intelligence effectively, it’s worth asking what intelligence each audience needs, in what format, and how often.
Phase 6: Feedback
Feedback from stakeholders will help improve the threat intelligence program, ensuring that it reflects the requirements and objectives of each group.
The term ‘lifecycle’ highlights the fact that threat intelligence is not a linear, one-off process. Instead, it’s a circular and iterative process that organisations use for continuous improvement.
Who benefits from threat intelligence?
Everyone who has an interest in security benefits from threat intelligence. Particularly if you’re running a business, benefits include:
Hackers are always looking for new ways to penetrate enterprise networks. Cyber threat intelligence allows businesses to identify new vulnerabilities as they emerge, reducing the risk of data loss or disruption to day-to-day operations.
Avoiding data breaches
A comprehensive cyber threat intelligence system should help to avoid data breaches. It does this by monitoring suspicious domains or IP addresses trying to communicate with an organisation’s systems. A good CTI system will block suspicious IP addresses – which could otherwise steal your data – from the network. Without a CTI system in place, hackers could flood the network with fake traffic to carry out a Distributed Denial of Service (DDoS) attack.
Data breaches are expensive. In 2021, the global average cost of a data breach was $4.24 million (although this varies by sector – the highest being healthcare). These costs include elements like legal fees and fines plus post-incident reinstatement costs. By reducing the risk of data breaches, cyber threat intelligence can help save money.
Essentially, threat intelligence research helps an organisation to understand cyber risks and what steps are needed to mitigate those risks.
What to look for in a threat intelligence program
Managing threats requires a 360-degree view of your assets. You need a program that monitors activity, identifies problems, and provides the data you need to make informed decisions to protect your organisation. Here’s what to look for in a cyber threat intelligence program:
Tailored threat management
You want a company that accesses your system, spots weaknesses, suggests safeguards, and monitors it 24/7. Many cybersecurity systems claim to do this, but you should look for one that can tailor a solution to your specific needs. Cybersecurity isn’t a one-size-fits-all solution, so don’t settle for a company selling you one.
Threat data feeds
You need an up-to-the-minute feed of websites that have been placed on a deny list plus malicious actors to keep an eye on.
Access to investigations
You need a company that provides access to its most recent investigations, explaining how hackers obtain entry, what they want, and how they get it. Armed with this information, businesses can make more informed decisions.
A cyber threat intelligence program should help your company identify attacks and mitigate risks. The program has to be comprehensive – for example, you don’t want a program that only identifies potential problems and does not offer solutions.
In a continually expanding threat landscape, cyber threats can have serious consequences for your organisation. But with robust cyber threat intelligence, you can mitigate the risks that can cause reputational and financial damage. To stay ahead of cyber attacks, request demo access to Kaspersky’s Threat Intelligence portal and start exploring the benefits it can provide to your organisation. | <urn:uuid:89b95eed-4573-43c7-9fe6-d864e1fdcba3> | CC-MAIN-2024-38 | https://www.kaspersky.com/resource-center/definitions/threat-intelligence | 2024-09-13T19:21:22Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651535.66/warc/CC-MAIN-20240913165920-20240913195920-00822.warc.gz | en | 0.93761 | 1,863 | 2.96875 | 3 |
by Ran Goldblatt, Trevor Monroe, Douglas Glandon, Samuel Fishman, Daynan Crull, Gabe Levin
The Bottom Line
The 2030 Agenda for Sustainable Development aims to end poverty and set the world on a path of peace, prosperity and opportunity for all on a healthy planet. It recognizes the importance of space-technology-based data, in situ monitoring, and reliable geospatial information for sustainable development, policy-making, programming and project operations. Geo4Dev, a collaboration between academia, public and the private sector, aims to integrate rigorous research with innovative sources of geospatial data–including satellites, sensors, and mobile phones, together with artificial intelligence, machine learning, and computer vision approaches to help reduce poverty and accelerate global development.
Our World Is Changing
Our world is rapidly changing in almost every dimension. In 2007, for the first time in history, the global urban population exceeded the global rural population and by 2050 nearly 70% of the world’s population will live in urban areas.
While there is no doubt that urbanization has many positive implications for society, economy, welfare and more, this does not come without immense challenges. Rapid urbanization may damage ecosystems, increase emission of gases, increase demand for public services, put pressure on public infrastructure and may increase the gap between rich and poor. Urban areas were also the most vulnerable to the recent COVID-19 pandemic, and the majority of global COVID-19 cases are reported in urban areas.
With slower investments in infrastructure, housing and other basic services, more people are living in slums, and in parallel, poverty and extreme poverty are also rising. In 2018, almost 8 per cent of the world’s workers and their families lived on less than US$1.90 per person per day, and COVID-19 may push between 88 and 115 million people into extreme poverty (Poverty and Shared Prosperity Report 2020).
In parallel, since the 1940`s, there has been an increase in the frequency and severity of global natural disasters. Extreme weather and floods account for the majority of global natural disasters with immense impacts to vulnerable populations in low-income countries.
Food security is another challenge many developing countries face. Shocks related to climate change, conflicts and infectious disease are hurting food production, disrupting supply chains and limiting accessibility to affordable food. According to the FAO, nearly 690 million people–or 8.9 percent of the global population experience hunger, up by nearly 60 million in five years.
Yet, accurate data on the number of poor people, their living conditions and vulnerability are scarce, especially in developing countries. Census counts and population surveys comprise the primary sources of this type of data, but these are published infrequently, varying in resolution and precision, due in part to limited available resources for data collection. On the other hand, innovative sources of data facilitate the development of new approaches for demographic modeling and measurement of wealth.
Geospatial Data For Development
And indeed, everything in our world has a spatial dimension. Poverty and other societal characteristics, economic trends, food security, changes to ecosystems and aspects related to society well-being or inequality hold physical dimensions.
Ground and remote sensors measure and collect climatological and environmental data, such as the quality of water, air and soil; mobile phones and GPS devices collect data on movement of people; cameras constantly monitor the functionality of critical infrastructure; sensors on board satellites, airborne and UAVs capture almost every location on Earth – monitoring economic activity and agriculture land productivity; stream gauges help track the impacts of floods on vulnerable populations; Volunteered Geographic Information is collected and contributed by volunteers, complementing (or replacing) traditional data sources; while geodata is used to support disaster management operations. Satellite data is being used today to measure the economic and societal impacts of COVID-19, for example, by looking at changes in nighttime light emissions.
Publicly available remotely sensed data are collected, for example, by sensors on board Landsat or Sentinel satellites at a spatial resolution of up to 10 m and used to understand socio-economic dynamic in every location on Earth, while nighttime light data collected since the 1990s are leveraged to map poverty or estimate the distribution of populations and economic activity.
With the increased availability of so much diverse data, new tools and methodologies are being developed and made available and accessible to sectors who traditionally did not rely or knew how to make sense of geospatial data. In parallel, much effort is put towards capacity building and development of basic training on the use of geospatial data, applications and tools to promote a more sustainable environment and society.
And this is exactly the objective of The Geospatial Data for Development (Geo4Dev). Geo4Dev aims to drive the development of new data, tools and methods for conducting geospatial analysis across diverse sectors, including agriculture and food security, urbanization, climate change, impact evaluation, humanitarian crisis, and disaster response. It brings together a network of leading researchers, government ministries, NGOs, private enterprises, and funding partners to inspire and support new research collaborations, share knowledge, and build capacity to utilize geospatial data, tools, and approaches.
On December 10th-11th 2020, Geo4Dev hosted an online symposium and workshop to showcase cutting-edge tools, datasets, and applications of geospatial data for global development research, followed by a hands-on workshop for those interested in building or honing their skills in this space. The convening also formally launched the new Geo4Dev website–an open-access hub for geospatial data, research, and several Nighttime Lights datasets, developed for public use by agencies including the National Oceanic and Atmospheric Administration (NOAA), the World Bank and the Colorado School of Mines.
The event showcased a wide range of applications of nighttime light data in development research and operations, including the use of big geodata for impact evaluation, prediction and inference of wealth, income and population with satellite data, measurement of the economic impacts of COVID-19 with nighttime lights and more. Dr. Chris Elvidge, who delivered the keynote, provided an overview of the DMSP and VIIRS low light imaging datasets and the new annual VIIRS nighttime series. The second day of the symposium provided a series of hands-on training on how to interpret, process, visualize and analyze this vast amount of raw nighttime light imagery and convert it into meaningful information that can help answer policy questions.
Other speakers illuminated a range of applications for nighttime light data (no pun intended), in particular in the transport sector. Théo Bougna and Alice Duhaut of the World Bank’s DIME Analytics Group presented on uses of nighttime light data to measure the impacts of road and transport corridor projects that include changes in economic activity, income, and urbanization. Their research demonstrated the possibility of using satellite imagery to develop fast approaches to delivering policy relevant information about impacts in data scarce and conflict affected regions.
A slate of speakers also presented novel methods for using satellite imagery and predictive modeling to transform approaches to poverty measurement and alleviation. Gordon Hanson, Marshall Burke, and Joshua Blumenstock all discussed machine learning approaches using satellite imagery to predict socio-economic and demographic characteristics and/or changes at disaggregated levels.
And in a world dominated by COVID, Mark Roberts of the World Bank’s Urban, Resilience and Land Global Practice, showed how nighttime lights in Morocco could be used to trace real-time GDP at the national and sub-national level, and in turn track economic impacts of the pandemic.
Also featured in the Geo4Dev symposium and workshop was a forthcoming World Bank data set called “Light Every Night” via the AWS Open Data program, which is the culmination of a several year collaboration between the World Bank, NOAA, and the University of Michigan. The data set comprises the DMSP and VIIRS data catalog published as Analysis Ready Data (ARD) under a World Bank open data license. For the first time in one place, almost 30 years of daily measurements of nighttime lights will serve as the foundation for insights into a wide array of policy and research applications. Open source tutorials and tools from the World Bank and the ARD community will help the development community transform the data into valuable insights for sustainable development.
Geo4Dev aims to provide a hub that inspires and supports new research collaborations, share knowledge, and build capacity to utilize geospatial data, tools, and approaches to measure all aspects of developing countries. There is an increasing interest in the use of geospatial data to measure development aspects. Remote Sensing Journal has recently launched two Spatial Issues related to the use of geospatial data for development which are now open for submission of papers and communications: Remote Sensing Measurements for Monitoring Achievement of the Sustainable Development Goals (SDGs) and Remote Sensing of Night-Time Light. Development Engineering journal accepts papers for submission to a Spatial Issue in the field of Geospatial Analysis for Development.
About New Light Technologies
New Light Technologies, Inc. (NLT) is a leading provider of integrated information technology, technical, scientific, consulting, and research services based in Washington, DC. NLT provides a broad range of integrated cloud, agile software development, cybersecurity, data science, geospatial, and workforce services and ready-to-use solutions for customers and offers distinctive capabilities in developing secure cloud-native AI/ML data analytics and decision support tools. The firm also provides unique expertise in developing, implementing, and managing enterprise solutions that enable the collection, integration, modeling and analysis, privacy protection, quality control, visualization, and public release of large-scale datasets and web-based data dissemination platforms. Contact us for more information and set up a conversation with our team members at a conference, or get on our chatbot, and we’ll be on standby to get you connected with our team on the conference floor. Visit https://newlighttechnologies.com | <urn:uuid:4953594a-248e-4e5e-bd0d-fecf24646f11> | CC-MAIN-2024-38 | https://newlighttechnologies.com/blog/2020/12/geo4dev-understanding-developing-countries-with-innovative-geospatial-data-and-tools | 2024-09-14T23:22:54Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651601.85/warc/CC-MAIN-20240914225323-20240915015323-00722.warc.gz | en | 0.91277 | 2,046 | 3.375 | 3 |
If you have an email or social media account, you’ve likely received phishing messages before. Phishing happens when a cybercriminal poses as a trusted organization or contact in an attempt to steal sensitive information such as passwords, birthdays, bank account information, or even social security numbers.
Many cybercriminals have taken their phishing attacks to the next level with more sophisticated strategies and targeted messages. Whaling attacks have become a particularly popular way to gather valuable information that you wouldn’t be able to get through traditional phishing emails.
Whaling attacks are an advanced form of phishing attack that targets senior-level executives or other employees with valuable high-level credentials. Because whaling emails are highly targeted, they are often harder to spot and prevent than traditional phishing emails. They have also been increasing in frequency in recent years – whaling scams resulted in a whopping $12.5 billion in losses during 2021.
Let’s dive deeper into how whaling attacks work and what you can do to prevent them at your organization.
Whaling is a type of cyber attack that uses targeted social engineering techniques to steal data from high-level employees within an organization. During these attacks, the cybercriminal poses as a trusted contact to build trust with the target. These attacks typically happen via email, but can also happen via social media, text messages, or even voicemail.
Unlike traditional phishing attacks, which are often sent en masse, whaling attacks are highly targeted. Hackers will research the victim in-depth and customize the message based on what they know about them.
Whaling attacks can target any high-level employee who has access to valuable information, such as financial data, sensitive customer information, or even intellectual property. They often target C-level executives, but any high-ranking individual should learn the signs of whaling and watch out for attacks.
Scammers planning a whaling attack will start by researching their target to learn more about their work and what data they have access to. They will use LinkedIn and other digital tools to source this information, and may even look for information about the target’s colleagues as well.
Then, the scammer will find the target’s email, phone number, and other contact information. They will then craft a message impersonating a trusted source, such as a colleague, a potential client, or a business partner, for example.
They might also pose as the FBI, SEC, or another government authority. To do this, they will create an email account or social media account impersonating this source.
One of the definitive characteristics of any whaling attack is a sense of urgency. The cybercriminal will encourage the target to share personal information or even wire transfer money in a timely manner, often threatening serious financial or personal consequences as a result.
Whaling attacks will often use the victim’s status or job title to their advantage, threatening legal consequences or reputational damage to the organization.
If this whaling attack is successful, the hacker will then use the information they’ve obtained to further infiltrate your organization to achieve their goals. This can happen in a variety of different ways.
For example, they might infiltrate your customer database to find credit card information or other sensitive data they can sell. If your organization has proprietary intellectual property that no one else has, they might also steal this information to sell or hold for ransom. This is why it’s so important for your organization to have multiple layers of cybersecurity protection.
While whaling is a type of phishing, it differs significantly from other forms of phishing attacks. Whaling is very different from traditional phishing scams in that they are much more targeted and specific.
Traditional phishing scams are often sent out in high volumes to thousands of people, and the messages aren’t customized. This makes them easier to spot and ignore than a whaling attack. These messages are often characterized by poor spelling and grammar, a sense of urgency or fear, or an offer that is too good to be true.
Standard phishing attacks often pose as popular social media platforms, financial platforms, or e-commerce retailers, such as Facebook, PayPal, or Amazon. Whaling attacks, on the other hand, will usually pose as a trusted colleague, client, or other specific contact.
Whaling is often confused with spear phishing attacks, but they are actually very different. Spear phishing is a type of targeted phishing attack that focuses on a specific group of people. However, they do not have to be high-level executives or C-suite employees. Spear phishing might target an entire organization or a specific group of people instead.
Spear phishing is typically done to gain access to passwords and other login information. Whaling usually targets more valuable intellectual property, financial information, or customer data that only executives would have access to.
Whaling phishing attacks have become increasingly sophisticated in recent years, and it’s more important than ever for high-level executives to learn how to spot whaling messages. Having a reliable cybersecurity strategy is also crucial for any organization. This way, even if a whaling attack is successful, there will be additional security measures in place to prevent it from escalating.
In order to prevent whaling attacks, you first need to learn how to spot them. Hackers will often go to great lengths to make these messages look legitimate. However, there are usually a few tell-tale signs that will help you identify and ignore these messages. These include:
If you don’t already, consider implementing an anti-phishing program across your organization to educate everyone on signs of a phishing or whaling message. Whaling attacks are often successful because CEOs, CFOs, and other senior executives don’t know the signs to watch for. Regular security awareness training goes a long way towards preventing these issues.
Many whaling attacks can be prevented simply by knowing how to identify them and deleting the message. However, there are other steps you can take to help prevent these attacks from happening.
The first step is simply to be careful with the information you put online. Any information you have on social media, such as your birthday, your location, or even your hobbies could be used in a whaling attack. It’s also important to be cautious when sharing your email and other contact information.
There are also steps you can take on an organizational level to filter out some whaling emails. For example, many spam prevention software programs will filter out messages with obvious signs of phishing, and DNS filters can also be very helpful.
Beyond that, there should be organizational safeguards in place to prevent data sharing or money transfers at a high level. All employees, including the CEO, should have to go through a verification process before taking these kinds of action. | <urn:uuid:d2029e40-a1b5-4e76-a4c5-2d75361724c1> | CC-MAIN-2024-38 | https://parachute.cloud/what-is-whaling/ | 2024-09-16T06:45:13Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651676.3/warc/CC-MAIN-20240916044225-20240916074225-00622.warc.gz | en | 0.957485 | 1,401 | 3.140625 | 3 |
In Assumed Breach Testing, we test with limited access to the network already granted. The client can provide this access in a number of ways, including a VPN into their network or a machine in their environment to which we have access. Assumed Breach Testing is an essential component of an organization’s cybersecurity strategy: it helps identify potential security risks and vulnerabilities in the internal network and develop strategies to strengthen the security infrastructure.
The goal of Assumed Breach Testing is to identify vulnerabilities and weaknesses in an organization’s security infrastructure by assuming that the attacker has already breached the perimeter defenses and is operating inside the network.
1) Identified Vulnerabilities: The report will outline the vulnerabilities that were identified during the testing process. This may include details such as the vulnerability’s severity, the potential impact on the organization, and the likelihood of exploitation.
2) Exploitation Attempts: The report may also include details about the number of attempts made to exploit the identified vulnerabilities, the techniques used, and the level of success achieved.
3) Network Architecture: The report may provide an assessment of the organization’s network architecture and topology, highlighting any weaknesses or vulnerabilities that were identified.
4) Security Controls: The report may provide an evaluation of the effectiveness of the organization’s security controls, such as firewalls, intrusion detection systems, and antivirus software.
5) Incident Response: The report may provide an assessment of the organization’s incident response capabilities, including the effectiveness of the response process and the coordination between different teams.
6) Recommendations: The report may provide recommendations for remediation strategies to address the identified vulnerabilities and weaknesses. These recommendations may include technical controls, policy changes, or user awareness training.
It’s important to note that the findings in an Assumed Breach Testing report may be different depending on the scope and objectives of the testing, as well as the specific tools and techniques used. However, the report should always provide a clear and actionable summary of the vulnerabilities and weaknesses identified, along with recommendations for remediation.
1) Misconfigured Systems: Misconfigured systems are a common vulnerability that can be exploited by attackers. During an ABT, misconfigurations in servers, databases, or other systems may be identified.
2) Weak Passwords: Weak passwords are another common vulnerability that may be identified during an ABT. This includes default passwords, easily guessable passwords, and passwords that are not regularly changed.
3) Missing Security Patches: Missing security patches can leave systems vulnerable to known exploits. During an ABT, missing security patches in servers, applications, and other systems may be identified.
4) Unsecured Ports and Services: Unsecured ports and services can provide attackers with a pathway to gain access to systems. During an ABT, unsecured ports and services in servers and applications may be identified.
5) Lack of Logging and Monitoring: Lack of logging and monitoring can make it difficult to detect and respond to security incidents. During an ABT, the effectiveness of logging and monitoring systems may be tested.
6) Outdated or Unsupported Software: Outdated or unsupported software can leave systems vulnerable to known exploits. During an ABT, outdated or unsupported software in servers and applications may be identified.
One of the key objectives is to identify vulnerabilities that allow us to move laterally within the network and gain access to sensitive data or systems. Lateral movement refers to the process by which an attacker gains access to a system or resource located beyond the initial point of entry. During the assessment, we will attempt lateral movement, pivoting from one system to another; the end goal is to gain full control over the internal network and to find ways to bypass the security mechanisms in place, such as EDR, firewalls, IDS, IPS, AD logs, etc.
Overall, the goal of identifying vulnerabilities related to lateral movement and security mechanism bypass is to help organizations understand the potential impact of a sophisticated attacker who is intent on accessing sensitive data or systems. By identifying and addressing these vulnerabilities, organizations can take steps to strengthen their security posture and reduce the risk of a successful attack. | <urn:uuid:7f3405aa-c3a7-4b44-ace0-2366df2579a8> | CC-MAIN-2024-38 | https://www.infopercept.com/blogs/assumed-breach | 2024-09-18T18:16:52Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651931.60/warc/CC-MAIN-20240918165253-20240918195253-00422.warc.gz | en | 0.947033 | 869 | 2.71875 | 3 |
A British hacker has found two Blu-ray-borne attacks that could be run to infect machines, a technique reminiscent of the method used by the Equation Group.
Security expert Stephen Tomkinson from NCC Group has discovered a couple of vulnerabilities in the software used to play Blu-ray discs. Exploiting the flaws could allow an attacker to implant malware on machines running the vulnerable software.
Tomkinson engineered a Blu-ray disc that can run both attacks: the disc discovers the type of player it is running on and then uses one of the exploits he developed to serve malware on the host. Tomkinson presented his Blu-ray attacks at the Securi-Tay conference at Abertay University in Scotland on Friday.
One of his exploits relies on a poor Java implementation in a product called PowerDVD from CyberLink, which is used to play DVDs on PCs and to create rich content (i.e. menus, games) using a variant of Java, the Blu-ray Disc Java (BD-J). PowerDVD is installed by default on Windows computers sold by many vendors, including Acer, ASUS, Dell, HP, Lenovo and Toshiba.
Basically, the researcher succeeded in putting executables onto Blu-ray discs and making those discs run automatically on startup, even though the autorun feature is disabled by default.
The Blu-ray Disc Java uses small applications called “xlets” to implement the interfaces. Although xlets are prohibited from accessing computer resources, a flaw in PowerDVD allows them to bypass the sandbox and run malicious code.
“By combining different vulnerabilities in Blu-ray players we have built a single disc which will detect the type of player it’s being played on and launch a platform specific executable from the disc before continuing on to play the disc’s video to avoid raising suspicion. These executables could be used by an attacker to provide a tunnel into the target network or to exfiltrate sensitive files, for example.” states the researcher in a blog post.
The second flaw affects some Blu-ray disc player hardware; the attack relies on an exploit written by Malcolm Stagg that gives an attacker root access on a Blu-ray player.
“This gives us a working exploit to launch arbitrary executables on the disc from the Blu-Ray’s supposedly limited environment,” explained Tomkinson.
Tomkinson wrote an xlet that exploited a small client application called “ipcc” running on the targeted machine to launch a malicious file from the Blu-ray disc.
The researcher also proposed some improvements to his attacks, such as a technique to identify the host system so the appropriate exploit can be launched. In order to hide the activity, the engineered Blu-ray disc starts playing the legitimate content after the malicious code has executed.
The attacks proposed in this post are reminiscent of a technique used by the Equation Group APT to compromise the machines of some participants of a scientific conference held in Houston. The participants received a CD-ROM containing the conference materials along with some zero-day exploits, including a highly sophisticated backdoor codenamed DoubleFantasy.
NCC Group has contacted the vendors to fix the issue but is still waiting for a reply. | <urn:uuid:fedbd679-bd18-4f45-8a8d-b58237f9f2da> | CC-MAIN-2024-38 | https://www.cyberdefensemagazine.com/how-to-serve-malware-by-exploiting-blu-ray-disc-attacks/ | 2024-09-19T23:47:55Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652073.91/warc/CC-MAIN-20240919230146-20240920020146-00322.warc.gz | en | 0.936428 | 686 | 2.640625 | 3 |
Alex Cranz, tech reporter at Gizmodo, recently said in an article that about five years ago, she was approached by a rep from a major laptop manufacturer who promised to use “Ice Lake” — the 10th generation of x86 chips from Intel — in super thin laptops within a year. Flash forward years later, and Cranz says the power of this chip will change laptop computing.
Ice Lake Chip – Its Significance
Ice Lake refers to the 10th generation of processors from Intel’s “Sunny Cove” microarchitecture, explains Cranz in the article.
“It’s based on a 10nm technology node, which is significantly smaller than the 14nm node Intel has been using since 2014, but larger than the 7nm node used on chips by AMD, Apple, and Qualcomm.
“A smaller node generally means a boost in speed because information doesn’t have to travel as far. It also means better power efficiency because less power is needed to move the information.” — Alex Cranz
In a phone conversation with Cranz, an Intel Fellow and head of the Intel Client Architecture Team said that they “changed everything” with the production of the Ice Lake Chip. It was a “ground-up overhaul of the CPU,” Cranz explains, accounting for the years of delay to its release.
So what’s the big deal about the Ice Lake Chip? It is apparently powerful enough that PC manufacturers are excited to use it.
“PC manufacturers have told me that Ice Lake Y-series chips will be just as powerful as the popular U-series part found in the laptops the majority of us use, all while using less battery, and taking up less space,” Cranz says in the article.
“Intel claims that its 10th generation chips will have graphics that are two times faster (meaning even better framerate in games), wifi that is three times faster, and the ability to handle complex AI-related tasks 2.5 times faster.”
Cranz says she attended a demo where the chip “managed 70 frames per second in CS:GO at 1080p while the 8th-Gen chip could only average 40fps.”
Cyber attackers are highly motivated to obtain or corrupt your company’s data. But whether their motivation is to steal your funds outright, hold your data for ransom, practice espionage, or simply disrupt your business, most hackers cannot access your network without an “in.”
In other words, they require a login, personal access codes, or network access through malware to initialize their breach. Unfortunately, a recent report released by Verizon has concluded that 93% of the time, a cyber attacker’s “in” comes to them in the form of a social engineering attack on your employees.
The only way to prevent such breaches in your security is with proper cybersecurity training.
What is a social engineering attack?
Social engineering attacks are frankly less high-tech than traditional cyber attacks by highly knowledgeable tech criminals. In other words, they don’t require the extensive knowledge and tools needed to directly hack a highly protected computer system out of nowhere.
Social engineering attacks are more like street scams — only they’re usually done online or sometimes, over the phone. These scams use human psychology to fool individuals into willingly giving up sensitive information. In the case of your business, the targets are your employees.
There are several types of these attacks, including “phishing” and “pretexting,” which are quite similar and often go hand-in-hand. Phishing emails, however, remain the most common type of social engineering scam.
What are phishing emails?
In short, a phishing scam might be an email sent to the employees of your company that looks legitimate. It might (appear to) be from the employee’s bank, for example. It might request that your employee “click here” and login to (what looks like) the bank website so that the bank can “update your information” or “confirm your identity.”
A phishing email might also promise something to the recipient: “Here’s your free 50% off coupon! Click here!” or use a so-called emergency to elicit fear: “Someone has hacked your account. Click here to get it back.”
If your employee does indeed click on the malicious link of a phishing email, they will likely be taken to a blank or uninteresting page. In the meantime, however, the link click will have initiated the installation of malware onto the employee’s computer. This malware then enables the hacker to obtain sensitive information or disrupt or damage your company’s data.
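To make the mechanics concrete, spam filters often flag exactly this pattern: a link whose visible text names one domain while the underlying URL points somewhere else. The sketch below is a toy heuristic written for illustration only — the function name and regexes are our own, and a real filter would use a proper HTML parser and far richer signals:

```python
import re
from urllib.parse import urlparse

def flag_mismatched_links(html):
    """Flag <a> tags whose visible text names one domain but whose
    href points to a different host -- a classic phishing tell."""
    suspicious = []
    # Toy regex; a production filter would parse the HTML properly.
    for href, text in re.findall(r'<a\s+href="([^"]+)"[^>]*>([^<]+)</a>', html):
        href_host = urlparse(href).netloc.lower()
        # Does the visible link text mention a domain at all?
        m = re.search(r'[\w.-]+\.(?:com|org|net|co\.uk)', text.lower())
        if m and m.group(0) not in href_host:
            suspicious.append((text.strip(), href))
    return suspicious

email = ('<p>Your account is locked.</p>'
         '<a href="http://evil.example.net/login">www.yourbank.com</a>')
print(flag_mismatched_links(email))
# -> [('www.yourbank.com', 'http://evil.example.net/login')]
```

A message that claims to link to your bank but actually points elsewhere would be flagged, while a link whose text and target agree passes through.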
How can companies prevent phishing scams?
The reputational implications of any type of security breach — even one that doesn’t actually corrupt or steal your data or funds — can be enormous. Of course, it goes without saying that if you are caught in the crosshairs of a data ransom or cyber theft, the financial implications will be equally devastating.
As we’ve learned from the Verizon report, most security breaches are linked with phishing. Therefore, cybersecurity training for your employees is the best preventive solution you have for stopping security breaches before they start.
Employee training is not expensive, yet it is highly effective. Throughout their ongoing training, your employees should learn how to recognize phishing emails and other social engineering tactics.
Cybersecurity training should be frequent and come at regular intervals throughout the year, as attack campaigns often arrive in unpredictable spurts and habitually change tactics.
While cybersecurity training is your best line of defense when it comes to phishing and security breaches, it’s also important to hire a reputable IT managed service provider (MSP) to handle your network and security. Your MSP should have experience and broad skill in protecting their clients from network breaches. Contact qualified MSPs in your area today to learn more about protecting your business from cyber attacks. | <urn:uuid:7ed09789-19f6-4482-9a3c-43112e51cae5> | CC-MAIN-2024-38 | https://www.fuellednetworks.com/5-crucial-elements-to-training-your-employees-in-optimal-cyber-security/ | 2024-09-09T01:27:17Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651053.52/warc/CC-MAIN-20240909004517-20240909034517-00422.warc.gz | en | 0.945761 | 791 | 2.671875 | 3 |
The inventor of the World Wide Web wants the Contract for the Web to be a first step towards addressing problems such as misinformation, mass surveillance and censorship online, but the list is not a realistic blueprint for action.
Tim Berners-Lee's Contract for the Web outlines nine high-level principles for governments, tech companies, and individuals. The contract—which is non-binding—calls on companies to respect consumer data privacy and governments to ensure everyone has access to the internet. The high-level principles were announced last November, and over the past year, the World Wide Web Foundation, a non-profit campaign group set up by Berners-Lee, worked with partners to expand the principles into a framework of 76 detailed clauses.
One of the things the contract calls for is clearly defined national laws that give individuals greater control over the data collected, and for countries to establish independent, well-resourced regulators to offer the public effective means for redress. That works for Europe, with its General Data Privacy Regulation (GDPR), but not for the United States, where different states have enacted their own versions of privacy legislation.
The contract also says governments should make sure everyone has access to the internet and that the internet should be available all the time. This is not as straightforward as it sounds, as countries such as China, Iran, and Russia, have taken steps recently to tighten their control over domestic networks. By restricting all communications leaving and entering the country, these governments can increase censorship and surveillance.
“There are a few incentives built into this, but not enough for the vast majority of the world’s governments or corporations to change their behavior or beliefs,” said Jason Kent, the hacker-in-residence at Cequence Security. “This is a nice-to-have list of things that would make the Internet a better place for us all, but it isn’t enforceable.”
What Does Agreeing Mean?
For the contract to be useful, tech companies, governments, and other groups have to sign up and agree to follow those rules. After signing up, these organizations are expected to show progress towards meeting those principles through regular reports. For example, tech companies that have agreed to the contract would have to show they have created the control panels for consumers to see what data has been collected and is stored about them.
The Contract for the Web is already supported by 160 organizations, including the governments of France, Germany, and Ghana; technology companies such as DuckDuckGo, Facebook, GitHub, Google, and Reddit; and other organizations such as the Electronic Frontier Foundation, Public Knowledge, Ranking Digital Rights, and Reporters Without Borders.
The fact that Facebook and Google signed on and took part in the discussions to shape the clauses for the nine principles suggests the companies are thinking about ways to give control of data back to individual consumers. On the other hand, their presence could just be virtue signaling, as these are companies with business models that depend on data-hungry algorithms, Kent said.
“This is just another way for them to say 'See, we care about privacy,' when in fact they benefit more when people give up their privacy,” said Kent.
For the most part, though, most of the Contract is very broad and doesn't include elements that may deter acceptance among different groups. As SecurityWeek noted, there is nothing to prohibit governments from stockpiling zero-day vulnerabilities for offensive or defensive campaigns. This requirement is "probably the main reason" Microsoft's Digital Geneva Convention has not gained a lot of traction, SecurityWeek's Kevin Townsend wrote.
Where Is Security?
The problems Berners-Lee wants to fix are serious. The Web Foundation published statistics noting that a false story reaches 1,500 people six times quicker, on average, than a true story, and online scams cost 20 countries around the world an estimated $172 billion in 2017. And the never-ending list of data breaches and data exposures means consumers have lost control of their information.
However, there is nothing in the Contract about information security. If security isn’t part of the discussion, then even the organizations focused on privacy will still end up leaking information they didn’t intend to, Kent said. If the Contract is going to talk about improving privacy online (and two of the nine principles are specifically about privacy rights), then there needs to be some discussion about what security looks like.
"It won’t be possible to create better internet privacy without security," Kent said. | <urn:uuid:fc61d35c-92a7-4c16-a814-0091f05875e4> | CC-MAIN-2024-38 | https://duo.com/decipher/contract-for-web-can-t-fix-privacy-problems-if-security-isn-t-included | 2024-09-12T18:23:36Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651491.39/warc/CC-MAIN-20240912174615-20240912204615-00122.warc.gz | en | 0.958711 | 923 | 2.84375 | 3 |
WordPress is one of the most used and most popular content management systems used by millions across the globe. More than 40 percent of the top million websites on the internet run on WordPress. Being an open-source content management system, WordPress makes it easy for anyone to create a website. Be it, beginners or advanced users, WordPress is used by everyone and is one of the most used platforms for building a website.
With several businesses trying to build an online presence, they are also looking for ways to build professional websites. E-commerce, banking, healthcare, insurance, education, etc., are among the industries that have built and are building an online presence. As technology advances, the threat of cyber attacks is also increasing. It is becoming more and more crucial to secure websites from cyber criminals who are looking to gain access to websites for monetary benefits.
This article explains the possible vulnerabilities in WordPress websites that can be exploited by cybercriminals and how to prevent your WordPress website from being hacked.
Why Do Hackers Target WordPress Websites?
Hackers target all websites on the internet and WordPress websites are among the websites they commonly hack, as it is one of the most popular content management systems. As it is more popular and used by many, hackers easily find WordPress sites that fail to implement the required security measures and exploit vulnerabilities. WordPress’s widespread usage is one of the reasons why it is targeted by hackers.
Cybercriminals hack websites for several reasons. While beginners attack small websites that are less secure when compared to the rest, some hackers attack websites to distribute malware, disrupt services and shut down a website to steal money or valuable information.
In most cases, hackers do not target specific WordPress websites, but they look for known vulnerabilities they can exploit. They target many websites at a time and finally end up hacking a certain number of websites. Most of the time, small businesses happen to become victims of such attacks.
Most hackers hack websites for monetary benefits. With that said, by now, you must have understood that you will need to prioritize security if your website has sensitive user information and involves financial transactions. It doesn’t mean you need not secure your website because it does not involve financial or other sensitive information. Hackers might find something else on your website useful and hack it for other reasons. However, it is not that WordPress is an unsafe platform, but like every other platform, it does have vulnerabilities. All you need to do to keep your website safe is to identify vulnerabilities and fix issues if any.
Below are the top causes of WordPress websites getting hacked.
Vulnerabilities in WordPress Sites Exploited by Hackers
Using Weak Passwords
One of the most common vulnerabilities in WordPress websites is poor passwords. Still, many users are found to use weak and easy-to-guess passwords. Hackers can easily guess simple passwords like 12345, password, admin, etc. When they guess the password, they will be able to compromise your website and access your admin accounts.
The best way to prevent hackers from guessing your password is by using a strong password. A strong password is something that includes special characters, numbers, letters, and more. Secure your Database Access, FTP account, etc., with strong passwords that aren’t easy to guess. If you wish to keep your password secure, you can also go for a password manager, a tool to automatically generate high-strength passwords and store them. This way, you can strengthen the security of your WordPress site and also prevent hackers from gaining access to your passwords and causing damage to your site.
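Password managers generate passwords much like this under the hood. As a rough sketch — the character set and the 16-character default are arbitrary illustrative choices, not a WordPress requirement — Python’s `secrets` module can produce a high-strength password:

```python
import secrets
import string

SYMBOLS = "!@#$%^&*-_"

def generate_password(length=16):
    """Generate a high-entropy password mixing letters, digits and symbols."""
    alphabet = string.ascii_letters + string.digits + SYMBOLS
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        # Retry until every character class is represented.
        if (any(c.islower() for c in pw) and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw) and any(c in SYMBOLS for c in pw)):
            return pw

print(generate_password())
```

Unlike the `random` module, `secrets` draws from a cryptographically secure source, which is what you want for anything guarding an admin account.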
Outdated WordPress Version
Like many think, creating a WordPress website is not a one-time process. WordPress websites have to be regularly updated and maintained. WordPress website owners fail to apply software updates, as a result of which, their websites get hacked. Most updates are free to download, but website owners think that their websites could crash if they are updated and fail to update their websites to the newest version. They continue to use older versions that could have vulnerabilities or bugs. Hackers take advantage of such known vulnerabilities and carry out SQL injections and other malware attacks. Updating your website regularly will help you stay away from cyber attacks. New updates to WordPress will include patches for new malware threats. When you update your website to the latest version, your website’s cybersecurity will increase.
To make sure your website does not become a victim of attacks, it is important to keep your website updated. Update your website to the latest version when you receive a notification about an update. You can install the update on your staging site to test it before you update your live website to the newest version.
Outdated WP Plugins and Themes
Just like the point mentioned above about updating WordPress software, outdated plugins and themes can make your site vulnerable to attacks. Hackers can easily take advantage of vulnerabilities in outdated themes, unused plugins, etc. installed on your website, to infect your website. There are thousands of plugins available for WordPress websites and one can easily install a theme or plugin from any website. Many website owners forget about the installed themes and plugins and fail to update them to the latest version. Here is where the problem starts, as keeping an outdated version of themes or plugins on their websites will make it very easy for criminals to attack the website.
In order to avoid this problem, make sure you update every plugin and theme installed on your site regularly. Remove unused plugins and check if there are alternatives you can use. Checking them on a weekly basis or once every fortnight will help ensure there are no problem-causing plugins or themes on your website. Update plugins and themes with patched versions to make sure your website cannot be attacked.
SSL certificates are mandatory for websites to encrypt data transmissions. Websites with SSL certificates will have HTTPS and not HTTP. When a website is encrypted with an SSL certificate, an encrypted connection will be enabled and it will make it almost impossible for third parties like hackers to read the data that is transmitted between the web server and the browser by assigning random values to the data. A website that has an SSL certificate will have HTTPS in the address bar, which means the website is secured. Websites without an SSL certificate can be easily hacked and may not rank on Google, resulting in receiving only a few visitors.
If your website is not encrypted, you can quickly migrate to HTTPS by getting an SSL certificate for your website. When your website is encrypted with an SSL certificate, hackers and other cybercriminals will not be able to intercept data that is transmitted. You can almost secure your website instantly with an SSL certificate from a trusted certificate authority like DigiCert, Comodo or RapidSSL. As soon as your website is issued an SSL certificate, you will see HTTPS on the address bar.
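One practical chore that comes with SSL is renewing the certificate before it expires. A real monitor would fetch the certificate over the network with Python’s `ssl` module (`getpeercert()` returns a `notAfter` string); the sketch below, with a function name of our own choosing, just shows the date arithmetic on that string:

```python
from datetime import datetime, timezone

def days_until_expiry(not_after, now=None):
    """Parse the 'notAfter' string returned by ssl.getpeercert()
    (e.g. 'Jun  1 12:00:00 2031 GMT') and return the days remaining."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return (expires - now).days

# Fixed 'now' so the example is deterministic.
print(days_until_expiry("Jun  1 12:00:00 2031 GMT",
                        now=datetime(2031, 5, 2, 12, 0, 0, tzinfo=timezone.utc)))
# -> 30
```

Wiring a check like this into a cron job that alerts when the count drops below, say, 30 days avoids the embarrassing browser warnings an expired certificate causes.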
Many website owners do not pay much attention to web hosting. If you are one among them, your website could become vulnerable to hacking attempts. Just like any other website, WordPress websites are also hosted on a web server. Many go for free web hosting services or cheap hosting providers. Remember, such hosting platforms will not be secure, thus making the websites hosted on their servers vulnerable to attacks.
To secure your website and prevent hackers from attacking your website, your website has to be hosted on a safe platform and not a free or shared hosting platform. When your website is hosted on a shared hosting plan, your website will share resources with several other websites. So when one website hosted on the shared server is hacked, all the other websites hosted on the server can be hacked, as well.
If your website is hosted on a shared server, it is time to consider switching to a secure hosting plan. Go for a web host that offers best-in-class security features to keep hackers at bay and improve the performance of your website.
Common Admin Usernames
Just like weak passwords that make your website vulnerable to attacks, simple and common usernames for admin accounts that are easy to guess will also let hackers gain access to admin accounts. Common admin usernames include admin, admin123, etc. Some admin accounts have the same username and password. If your website has one such user name, it is high time to consider changing it to something unique.
When a hacker gets into your admin account, he/she will be able to gain control of your backend files and cause damage to your website. To avoid this problem, change common usernames to unique names. You can start with changing the default username of your admin account and see to it that only the users who really need access to it have access to it and limit access to other users.
Access to the WordPress Admin Folder
In most cases, hackers compromise a WordPress website by gaining access to its admin area/folder. Hackers generally target the admin folder because, once inside, they can easily extract sensitive data.
Hackers use different methods to get into a website’s admin folder. The best way to prevent your website’s admin folder from being hacked is by adding an extra layer of security, i.e., by using multi-factor authentication. You can secure the admin folder with a strong password and also enable two-factor authentication to make it more secure and hard for hackers to get into your admin folder.
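The authenticator apps used for two-factor authentication are typically implementations of the TOTP standard (RFC 6238), which builds on HOTP (RFC 4226). A minimal sketch of the underlying algorithm is below — for illustration only; on a real site you should use an established WordPress 2FA plugin rather than rolling your own:

```python
import hmac, hashlib, struct, time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, period: int = 30) -> str:
    """RFC 6238 time-based variant, as used by authenticator apps."""
    return hotp(secret, int(time.time()) // period)

# RFC 4226 test vector: secret '12345678901234567890', counter 0
print(hotp(b"12345678901234567890", 0))  # -> '755224'
```

Because the code changes every 30 seconds and is derived from a shared secret, a stolen password alone is no longer enough to get into the admin folder.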
A Firewall will identify cyber threats and help protect your website from attacks online. It will identify and block requests from malicious websites. You can use a firewall along with a malware scanner for increased security. Firewalls will look for anomalies in the web traffic and alert website owners of attacks.
When a website does not have firewall protection, hackers can easily get around website security measures and gain access to the website’s backend resources. Firewalls can prevent attacks like SQL injections, DDoS attacks, brute force attacks, etc.
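At its core, a web application firewall pattern-matches incoming requests against known attack signatures. The toy sketch below — nowhere near a real rule set such as the OWASP Core Rule Set, and the patterns are purely illustrative — shows the principle:

```python
import re

# Toy rule set: each regex matches a well-known attack signature.
RULES = [
    re.compile(r"\bunion\b.{0,40}\bselect\b", re.I),  # SQL injection
    re.compile(r"<script\b", re.I),                   # reflected XSS
    re.compile(r"(\.\./){2,}"),                       # path traversal
]

def is_malicious(query_string: str) -> bool:
    """Return True if any rule matches the incoming query string."""
    return any(rule.search(query_string) for rule in RULES)

print(is_malicious("id=1 UNION SELECT password FROM wp_users"))  # True
print(is_malicious("id=42&page=about"))                          # False
```

A real firewall applies thousands of such rules (plus anomaly scoring and rate limiting) before a request ever reaches WordPress, blocking the obvious injection and traversal attempts outright.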
Three types of protocols, namely, FTP, SSH, and SFTP, are used to transfer data between the server and the client. When compared, SFTP and SSH are more secure than FTP. Many use FTP to upload files, which makes them vulnerable to attacks as their passwords will be sent unencrypted to the server when they use FTP. In this case, hackers can easily access your password.
So, it is wise to use SSH or SFTP, as the files sent using these protocols will be encrypted and hackers may not be able to intercept the data transmitted. If your website uses the FTP client, you just need to change the protocol to SFTP – SSH, as most hosting providers allow FTP connections using SFTP or SSH. So, this way, you can switch to SSH/SFTP without changing your FTP client.
Unsecure WordPress Config File
wp-config.php is the WordPress configuration file that contains important information. This file contains information like database login details, user data, login details, and more. Websites need this file to run and the importance of this file makes it one of the most favorite files and primary targets for hackers. If at all any third-party compromises this file, they can gain complete access to your website. So, it is important to add an additional layer of security to this file.
One of the best ways to secure this file is to deny access to it using .htaccess.
Here is the code you need to add to your .htaccess file to secure it. The standard snippet wraps the deny rule in a <files> block so that it applies only to wp-config.php:
<files wp-config.php>
order allow,deny
deny from all
</files>
Quick Tips to Prevent WordPress Sites from Getting Hacked
Here are a few methods to prevent hackers from attacking your website.
- Update WordPress, themes, plugins, etc., whenever there is a new update available.
- Use complex passwords to secure your admin account and other important files.
- Use multi-factor authentication wherever required, for added security.
- Enable HTTPS by securing your website with an SSL certificate. This will help secure your website and your website visitors, as well.
- Make sure to install plugins and themes only from known sources.
- Regularly scan your website for viruses and malware.
- Clean up WordPress by removing unwanted and unused files you no longer need.
- Avoid free hosting providers; spend a few bucks on a secure hosting service that offers firewalls, DDoS protection, network monitoring, etc.
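Several of the tips above come back to password strength. As a minimal sketch of what "use complex passwords" can mean in practice (the function name and default length are illustrative, not part of any WordPress API), here is a short Python example using the standard library's cryptographically secure `secrets` module:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password from letters, digits, and punctuation.

    Uses secrets.choice, which draws from the OS's CSPRNG, so the
    result is suitable for admin accounts (unlike random.choice).
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a different 20-character string every run
```

A password manager can then store the result, so the extra complexity costs the user nothing day to day.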
The Bottom Line
WordPress is used by millions across the globe to build websites and establish an online presence. It comes with plenty of features, and a few drawbacks as well. You can easily secure your website and keep hackers at bay by implementing the required security measures and keeping your website updated. We hope this article helped you understand the top causes of WordPress sites getting hacked and how to prevent your website from being compromised.
Clean cloud data centers offer a sustainable solution for digital products.
As the world continues to turn its attention towards sustainability, some sectors are more overlooked than others. The UK government has ambitious goals to make all sectors carbon neutral by 2050 – including IT.
On the surface, the prospect of data causing carbon emissions seems baffling: how can a non-physical entity impact the real world? However, when we look at the statistics, the reality is far more complicated.
Breaking Down the Impact of Data Centers
Every image, video or text file needs to be stored somewhere. In a world dominated by social media, the demand for data is only growing. Today, data centers account for some 2% of the world’s entire energy consumption. In a more chilling finding, the demand for data centers is set to double over the next five years.
A big culprit for energy demands is websites. The average website emits around 6g of CO2 per page view – comparable, over a year of typical traffic, to driving a petrol car 12,000 miles. The reality is that much of this can be reduced simply by investing in website performance. My team investigated this when building our Web Vitals Index tool, which ranks the UK’s top e-commerce brands based on Google’s Core Web Vitals.
Website performance has proven to be synonymous with reduced energy loads. By speeding up a site and optimizing images, webmasters put smaller demands on data centers and thus reduce their carbon footprint. This doesn’t need to be a huge investment either. While some companies may want to start from square one, others can benefit from quick wins and supplier changes.
Website Testing with Synthetic Monitoring
Improving Processing Times Through Image Optimization
Images carry a lot of weight in web design, and are prominent on transactional sites. However, images that are too large risk putting unnecessary strain on servers. We may see these identified in tools such as the Web Vitals Index, highlighting issues in areas such as Largest Contentful Paint.
A simple image optimization tool can reduce the size of these images without compromising quality. It makes a huge difference at scale – one image that is 1MB too large, downloaded a million times, results in a million extra megabytes (roughly a terabyte) of unnecessary transfer.
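To make the arithmetic concrete, here is a rough Python sketch of how page weight translates into annual transfer volume and CO2. The energy-per-gigabyte and grid-intensity constants are assumptions borrowed from one published sustainable-web-design model, not measurements, so treat the output as an order-of-magnitude estimate only:

```python
def page_co2_kg_per_year(page_mb: float, views_per_month: float,
                         kwh_per_gb: float = 0.81,
                         kg_co2_per_kwh: float = 0.442) -> float:
    """Estimate annual CO2 (kg) from serving one page.

    page_mb: transferred page weight in megabytes.
    kwh_per_gb / kg_co2_per_kwh: assumed model constants, not measured values.
    """
    gb_per_year = (page_mb / 1024) * views_per_month * 12
    return gb_per_year * kwh_per_gb * kg_co2_per_kwh

# Trimming a 3 MB page down to 1 MB at 100,000 views per month:
before = page_co2_kg_per_year(3, 100_000)
after = page_co2_kg_per_year(1, 100_000)
print(f"{before:.0f} kg -> {after:.0f} kg CO2/year")
```

Even with different constants, the relationship is linear: every megabyte shaved off the page shaves the same share off the footprint.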
Making Little Changes for Big Differences
How Do These Changes Impact the Customer?
While reducing carbon footprints should be top priority for organizations, there are other considerations. Today’s consumer is more socially conscious, with 83% of consumers concerned about the sustainability of their shopping habits. This means that they are more wary of practices such as green-washing – they need certifiable evidence that companies are doing all they can to limit the damage they do.
Embedding good practices such as switching to green suppliers can give customers real assurance. Measurable statistics, such as the carbon impact of a website, will help customers to make informed decisions. Likewise, with faster page loading speeds, customers will have a better experience and be more incentivized to return.
The energy demands of data are quite startling, but the tide is turning thanks to more information and government initiatives. We each have a role to play to make sure we continue in the right direction – and we can start with these practices. | <urn:uuid:e1427b49-7db4-42a0-a480-3254b79e36e4> | CC-MAIN-2024-38 | https://cioinfluence.com/technology/what-can-organizations-do-to-reduce-their-data-carbon-impact/ | 2024-09-07T23:48:55Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650926.21/warc/CC-MAIN-20240907225010-20240908015010-00622.warc.gz | en | 0.945428 | 657 | 2.890625 | 3 |
In my last blog post, I explained how a firefighter died battling a house fire in Connecticut in 2014. I talked about the steps that an incident commander should be looking at. In this article, I’ll talk about recommendations.
As with all NIOSH reports, there is a section on recommendations, and I would like to share three of them with you; you can read them all in the full report via the link I provided at the end of the article.
“Recommendation #6: Fire departments should ensure that firefighters are properly trained in Mayday procedures. Discussion: It is essential to train firefighters to recognize when they are in trouble, know how to call for help, and understand how incident commanders and others must react to a responder in trouble [Jakubowski and Morton 2001]. One of the most difficult situations a firefighter can face is when they realize they need to declare a Mayday. Recognizing that they are (or about to be) in a life-threatening situation is the first step in improving the firefighters’ chances to survive a Mayday event. Many fire departments don’t have a simple procedure for what to say when a firefighter gets into trouble—i.e., a critical situation where communications must be clear [Jakubowski and Morton 2001]. A Mayday declaration is such an infrequent event in any firefighter’s career that they need to frequently train in how to recognize the need for a Mayday, how to declare the Mayday, and what steps to take to improve their chances for survival. Firefighters must understand that when they are faced with a life-threatening emergency, there is a very narrow window of survivability, and any delay in egress and/or transmission of a Mayday message reduces the chance for a successful rescue. Knowledge and skill training on preventing a Mayday situation or how to call a Mayday should be mastered before a fire fighter engages in fireground activities or other immediately dangerous to life and health (IDLH) environments. Firefighter training programs should include training on such topics as air management, crew integrity, reading smoke, fire dynamics and behavior, entanglement hazards, building construction, signs of pending structural collapse, and familiarity with a self-contained breathing apparatus (SCBA), a radio, and personal protective equipment (PPE). “
“Any Mayday communication must contain the location of the firefighter in as much detail as possible and, at a minimum, should include the division (floor) and quadrant. When in IDLH environments, firefighters must know their location at all times to effectively be able to give their location in the event of a Mayday. Once in distress, firefighters must immediately declare a Mayday. The following example uses LUNAR (Location, Unit, Name, Assignment/Air, Resources needed) as a prompt: ‘Mayday, Mayday, Mayday, Division 1 Quadrant C, Engine 71, Smith, search/out of air/vomited, can’t find exit.’ When in trouble, a firefighter’s first action must be to declare the Mayday as accurately as possible. Once the incident commander and rapid intervention team (RIT) know the firefighter’s location, the firefighter can then try to fix the problem, such as clearing the nose cup, while the RIT is en route for rescue [USFA 2006].”
“The word Mayday is easily recognizable and is an action word that can start the process of a rescue. The use of other words to declare an emergency situation should be discouraged because it is not as recognizable as an immediate action word that will start a rescue process. During this incident, the fireground radio traffic was busy and many different communications were taking place. A Mayday message transmitted over the radio much earlier in the event may have gotten the attention of command officers and other firefighters when a rescue attempt might have had a better chance of locating the firefighter. In this incident, the firefighter never called a Mayday and never activated his emergency button (emergency buttons were inoperable) or PASS device. His officer called a Mayday that went unacknowledged and a second one that was not recorded on the radio transmission log.”
I have always been a fan of using a tactical worksheet so that you can use your eyes and hands to assist your mind in keeping track of the locations of companies on a fire scene. This recommendation talks about a sheet that is available at the end of the NIOSH report.
“Recommendation #7: Fire departments should provide the incident commander with a Mayday tactical checklist for use in the event of a Mayday. Discussion: When a Mayday is transmitted for whatever reason, the incident commander has a very narrow window of opportunity to locate the lost, trapped, or injured member(s). The incident commander must restructure the strategy and incident action plan (tactics) to include a priority rescue [Bachrach and Egstrom 1987]. Some departments have adopted the term LUNAR—location, unit assigned, name, assistance needed, and resources needed—to gain additional information in identifying a firefighter who is in trouble and in need of assistance. The incident commander, division/group supervisors, company officers, and firefighters need to understand the seriousness of the situation. It is important to have the available resources on-scene and to have a plan established prior to the Mayday [Bachrach and Egstrom 1987; Corbin 2000]. At this incident, when the Mayday occurred, the incident commander quickly called for additional resources and conducted a personnel accountability report to determine if any companies were lost or missing. Due to the influx of resources, trying to determine the location of companies and identifying crews that were missing, the incident commander was quickly overwhelmed. The intent of this Mayday worksheet, like the tactical worksheet, is to assist the incident commander during a very difficult and stressful time on the fireground operations.”
In Recommendation 8, NIOSH talks about the need for every department to have an SOP or SOG for fire ground communications.
“Recommendation #8: Fire departments should develop and implement a fireground communication standard operating procedure that includes a communication protocol and specifies equipment and capacity of the communication system. Discussion: Effective fireground radio communication is an important tool to ensure fireground command and control as well as helping to enhance firefighter safety and health. The radio system must be dependable, consistent, and functional to ensure that effective communications are maintained especially during emergency incidents. Fire departments should have a “communications” standard operating procedure (SOP) that outlines the communication procedures for fireground operations. Fire departments should ensure that the communications division and communication center are part of this process. Another important aspect of this process is an effective education and training program for all members of the department.”
“Radio frequency usually refers to the radio frequency of the assigned channel. A radio channel is defined as the width of the signal depending on the type of transmissions and the tolerance for the frequency of emission. A radio channel is normally allocated for radio transmission in a specified type of service or by a specified transmitter. Fire departments should ensure that an adequate number of radio channels are available. Multiple radio channels are necessary at large-scale or complex incidents, such as a commercial structure fire, mass-casualty incident, hazardous materials incident, or special operations incident [NFPA 2014; FIRESCOPE 2012]. A fire department should provide the necessary number of radio channels for complex or large-scale incidents needing multiple tactical channels. NFPA 1561 Standard on Emergency Services Incident Management System and Command Safety states in Paragraph 6.1.4, “The communications system shall provide reserve capacity for complex or multiple incidents.” This would require fire departments to preplan radio channel usage for all incident levels based upon the needs of an emergency incident including large-scale or complex incidents [NFPA 2014].”
“Fire departments should preplan for not only large-scale or complex incidents, but also for the ability to handle daily operations. Standard operating procedures, radio equipment (e.g., mobile radios, portable radios, mobile data terminals, laptop computers), other hardware (e.g., CAD system), and dispatch and communications protocols should be in place to ensure that these additional channels are available when needed [NFPA 2014].”
“Every firefighter and company officer should take responsibility to ensure radios are properly used. Ensuring appropriate radio use involves both taking personal responsibility to have your portable radio turned on and to the correct channel. A company officer’s responsibility is to ensure that all members of the crew comply with these requirements. Portable radios should be designed and carried in a position that allows a firefighter to monitor and transmit a clear message [IAFF 2010; Varone 2003].”
“A fire department’s SOP on communications should address issues on what to do if your Mayday transmission is not acknowledged, such as activating your emergency button. If there is a complete radio failure, the firefighter should evacuate the building as a matter of safety. In this incident, a Mayday was not acknowledged and the emergency button was not functionally activated by the fire department.”
“When a fire department responds to an incident, the incident commander should forecast for the incident to determine if there is potential for being a complex or long-term operation that may require additional resources, including demands on the communications system. As incidents increase in size, the communication system has to keep up with the demands of the incident. The incident commander must be able to communicate with company officers and division/group supervisors [FIRESCOPE 2012].
Before communications become an issue, the incident commander must consider options for alleviating excessive radio traffic. Several options are:
- Assign non-fireground resources (e.g., Staging, “Rehab”) to a separate tactical channel or talkgroup channel.
- Designate a “command channel,” which is a radio channel designated by the fire department to provide for communications between the incident commander and the division/group supervisors or branch directors during an emergency incident [NFPA 2014].
- For incidents involving large geographical areas, designate a tactical channel or talk-group for each division.”
“Communications between the incident commander and tactical-level management units and/or company officers is essential for successful fireground operations. Communication during the fire attack may be difficult at times due to the noise created by the hose stream striking walls, ceilings, and furnishings. However, the engine company officer must monitor the portable radio for critical information that may affect the engine company. This includes ventilation delays, water supply difficulties, collapse potential, and Mayday and/or «urgent» transmissions. The engine company officer can provide the incident commander with vital information that may affect how the fire operation is handled. Messages such as those listed below should be transmitted to the incident commander, other units, or individual members on the scene:
- ‘Start a 1¾-inch line to the second floor.’
- ‘Start water.’
- ‘We have two rooms knocked down; making progress.’
- ‘Main body of fire has been extinguished.’
- ‘Increase/decrease pressure on Engine 2’s 1¾ line.’
- ‘We need a back-up line to the second floor’ [Brunacini 2002].
In this incident, there were several breakdowns in communication, including transmissions not being understood, a Mayday not acknowledged, and transmissions not getting through.”
Radio communications on the fireground are difficult at best, but they should not fail to the point where a member loses a life. We have been fighting fires for over 200 years and we have many traditions; killing firefighters is one that has to stop.
Radio traffic from fatal fire:
LODD and Communications issues | <urn:uuid:a9b850a1-9554-4e67-9f2f-138182d661b5> | CC-MAIN-2024-38 | https://www.basecampconnect.com/es/second-part-firefighter-lodd-and-the-communications-issues-involved/ | 2024-09-08T01:08:37Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650926.21/warc/CC-MAIN-20240907225010-20240908015010-00622.warc.gz | en | 0.946958 | 2,446 | 2.578125 | 3 |
What is beamforming? Beamforming is a signal processing technique that uses the multiple antennas available with massive multiple-input multiple-output (MIMO) systems to create a focused signal (or beam) between an antenna and specific user equipment. Signals can be controlled by modifying their magnitude and phase, enabling the antenna array to focus on specific users. This concept can be compared to a music concert where a spotlight is focused on specific performers onstage. To learn more, read our blog ‘RF and 5G new radio: top 5 questions answered’.
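The idea of shaping a beam by adjusting each antenna's phase can be sketched numerically. The following Python/NumPy example (the array size, element spacing, and angles are illustrative assumptions, not values from any particular 5G system) steers a uniform linear array toward one direction and shows that the gain peaks there:

```python
import numpy as np

def steering_weights(n: int, spacing_wl: float, angle_deg: float) -> np.ndarray:
    """Phase-only weights pointing an n-element uniform linear array at angle_deg.

    spacing_wl is the element spacing in wavelengths (0.5 is a common choice).
    """
    phase = 2 * np.pi * spacing_wl * np.arange(n) * np.sin(np.radians(angle_deg))
    return np.exp(1j * phase) / np.sqrt(n)

def array_gain(weights: np.ndarray, spacing_wl: float, angle_deg: float) -> float:
    """Power gain of the weighted array for a signal arriving from angle_deg."""
    n = len(weights)
    phase = 2 * np.pi * spacing_wl * np.arange(n) * np.sin(np.radians(angle_deg))
    response = np.exp(1j * phase)          # array response to that direction
    return float(abs(np.vdot(weights, response)) ** 2)

w = steering_weights(8, 0.5, 30)   # focus the beam at +30 degrees
print(array_gain(w, 0.5, 30))      # ~8: full array gain toward the user
print(array_gain(w, 0.5, -30))     # ~0: little energy leaks elsewhere
```

This is the "spotlight" from the analogy: the same eight antennas, driven with different phases, concentrate their combined power on one user instead of spreading it everywhere.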
Let’s face it, dealing with Multi-Factor Authentication (MFA) can sometimes feel like an inconvenience. But before you dismiss it as a hassle, consider this: in today’s digital age, it’s a crucial tool for safeguarding your sensitive information from cyber threats.
So, what exactly is MFA?
In simple terms, it’s an extra layer of security that goes beyond your typical password. Even if someone manages to crack your password, they’ll hit a roadblock without the second factor—a code sent to your phone or generated by an app, for example.
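The "code generated by an app" is usually a time-based one-time password (TOTP, RFC 6238): both sides share a secret, and the code is an HMAC over the current 30-second time step. A minimal Python sketch of the algorithm (for illustration only; real deployments should use a vetted library):

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    if for_time is None:
        for_time = time.time()
    counter = int(for_time) // step                  # 30-second time step
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# RFC 6238's published test secret at time 59 yields the documented code:
print(totp(b"12345678901234567890", for_time=59))  # -> 287082
```

Because the code changes every 30 seconds and is derived from a secret the attacker never sees, a stolen password alone is not enough to log in.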
Think of it as a safety net. Without MFA, cybercriminals who get their hands on your password—through phishing or data breaches—can waltz right into your accounts. But with MFA in place, they’d need physical access to your second factor as well, making their job much harder.
It’s like having an extra lock on your front door
Imagine this scenario: someone tries to infiltrate your account. With MFA, they’ll hit a brick wall at the second factor checkpoint. It’s like having an extra lock on your front door—a small inconvenience for you, but a major deterrent for intruders.
Now, here’s the good news: setting up and using MFA is easier than you might think. Many online platforms offer it as an option, and in some cases, it’s mandatory for sensitive accounts. Once you’ve got it up and running, the minor inconvenience is well worth the peace of mind knowing your accounts are better protected.
But MFA isn’t the only game in town when it comes to beefing up security. There are alternative methods worth exploring, like biometric authentication, which uses physical traits like fingerprints or facial recognition to verify your identity. Then there are hardware tokens, handy devices that generate one-time passwords for added security.
Push notifications offer another layer of defense by alerting you whenever a login attempt is made, giving you the power to approve or deny access right from your device. And let’s not forget about good old email or SMS authentication, which sends a one-time code to verify your identity during login.
No matter which method you choose, one thing’s for sure: education is key. By teaching users about security best practices—like creating strong passwords and avoiding personal info—you can further fortify your defenses.
So, the next time you’re tempted to bypass MFA, remember this: it may be a minor inconvenience, but it’s a small price to pay for the added layer of protection it provides. In a world where cyber threats are ever-present, a little extra security goes a long way. | <urn:uuid:30f9b0ee-1c27-4a33-b5b0-53b77406c10f> | CC-MAIN-2024-38 | https://www.foxtrot-technologies.com/unlocking-the-power-of-mfa/ | 2024-09-10T10:50:48Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651241.17/warc/CC-MAIN-20240910093422-20240910123422-00422.warc.gz | en | 0.916365 | 580 | 2.734375 | 3 |
Blockchain infrastructures and distributed ledger technologies are very well known as the technological pillars of crypto-assets and cryptocurrencies like Bitcoin, Ethereum, Cardano, and Dogecoin. In recent years the popularity of these cryptocurrencies has been skyrocketing, as they enable novel decentralized approaches to payments and other financial services, i.e., financial transactions without traditional banks. In 2021, the total market value of crypto-assets exceeded 1 trillion dollars for the first time in history. Nevertheless, the value of blockchain technology is not limited to providing support for decentralized finance services. Rather, blockchains are also deployed and used in many other industries like manufacturing, energy, healthcare, and supply chain management. These deployments are motivated by the decentralized nature of blockchain technologies, which obviates the need for controlling and managing business transactions through some trusted third party.
One of the most popular applications of blockchain technology beyond cryptocurrencies is its use for tracking and tracing transactions in the manufacturing chain. Specifically, blockchain technologies enable the decentralization of the manufacturing chain by offering distributed, secure, and reliable ways for tracking and tracing products, processes, and transactions. These benefits are very relevant for complex supply chains, where there is no effective way for centralized actors to track, trace and manage transactions. A prominent example of such a supply chain can be found in the pharmaceutical industry. This industry is highly regulated as it manages a variety of products that are made available in different countries and are subject to different laws and conditions.
Benefits of Blockchain Technology for the Pharma Supply Chain
The deployment and use of blockchain technology in the pharma supply chain is motivated by the following benefits of distributed ledger technologies:
- Cyber-Security and Data Protection: Blockchains are distributed and have no single points of failure. In case a blockchain node goes down they continue to operate. As such they are less susceptible to cyber-security attacks. Likewise, they can better protect the sensitive data that are exchanged across the pharmaceuticals chain.
- Data Provenance and Transparency: Distributed ledger technologies and blockchain developments offer full transparency to the pharma chain participants. This facilitates auditing and fosters trusted relationships across the various supply chain actors. Moreover, blockchains are excellent for tracking and tracing industrial data across the supply chain. Specifically, they enable the implementation of resilient and trusted data provenance systems, leveraging the tamper-proof properties of blockchain technology i.e., the fact that the ledger cannot be changed. This facilitates the process of tracking goods and services in a reliable way.
- Fight fraud and theft with provenance authentication: The tamper-proof properties of blockchain solutions also enable the implementation of fraud detection and counterfeiting applications. For instance, they ease the detection of potential fraudulent actions based on access to an immutable record of transactions in the blockchain. Moreover, when using a blockchain, manufacturers, retailers, regulatory authorities, and other stakeholders can very easily spot fake drugs by accessing end-to-end information about them. Hence, blockchains facilitate pharmacies to ensure that the products on their shelves are authentic.
- Regulatory Compliance: Along with fraud detection, regulatory authorities can process the blockchain ledger to audit the transactions’ compliance to applicable laws and regulations. For instance, they can check whether pricing constraints are met and whether general practitioners adhere to their prescription limits. Likewise, other national or regional constraints can be checked on specific segments of the supply chain where they apply.
- Cold Chain Management: Pharma blockchains can be also enhanced with value-added services such as cold chain management services. The latter is key towards auditing and ensuring the quality of temperature-sensitive drugs. In this direction, blockchain services are augmented with functionalities for managing and tracing temperature information across the different locations where the drugs are transported. Likewise, smart contracts for spotting and reporting temperature and other quality-related violations can be implemented.
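The tamper-evidence that several of the points above rely on can be illustrated with a toy hash chain in Python. This is a teaching sketch, not a real distributed ledger (there is no consensus protocol, no network, and the field names are invented), but it shows why editing a past record, such as a cold-chain temperature reading, is immediately detectable:

```python
import hashlib
import json

def make_block(record: dict, prev_hash: str) -> dict:
    """Create a ledger entry whose hash commits to its data and predecessor."""
    block = {"record": record, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def chain_is_valid(chain: list) -> bool:
    """Recompute every hash and link; any edit to past data breaks the chain."""
    for i, block in enumerate(chain):
        body = {"record": block["record"], "prev_hash": block["prev_hash"]}
        if block["hash"] != hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block({"batch": "A1", "temp_c": 5}, "0" * 64)
chain = [genesis, make_block({"batch": "A1", "temp_c": 6}, genesis["hash"])]
print(chain_is_valid(chain))          # True
chain[0]["record"]["temp_c"] = 25     # someone edits a past cold-chain reading
print(chain_is_valid(chain))          # False
```

Production ledgers add distribution and consensus on top of this linking, so no single participant can rewrite the history that everyone else has already accepted.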
Deployment Examples and Persisting Challenges
Acknowledging the benefits of distributed ledger technologies for the pharma chain, many companies have implemented, deployed and experimented with blockchain systems. As a prominent example, back in 2019, several pharma companies participated in FDA’s Drug Supply Chain Security Act (DSCSA) pilot. The pilot implemented blockchain-based tracking, tracing, and verification for pharma products. Security and interoperability requirements were validated during the pilot implementation and operation. The results of the pilot were positive, as the blockchain systems were proven capable of meeting the requirements of the DSCSA.
As another example, the PharmaLedger project in Europe demonstrated the management of patients’ electronic medical records using blockchain technologies. As part of the project, pharmaceutical companies used the blockchain to provide accurate and up-to-date information about their products. Patients were able to access this information as part of their medical records and prescriptions. The project demonstrated the benefits of a decentralized and fully digital process. It was sponsored by the Innovative Medicines Initiative (IMI) and proved the benefits of blockchain technology for accurate and trusted information sharing between pharma companies and patients.
Despite the benefits of blockchain technology for the pharma chain, there is still a lack of very large-scale implementations. Most projects are either small scale or at a pilot stage. This is because of the performance limitations of blockchain technology and the lack of experience in blockchain application development, but mainly because there is no easy way to transition from conventional centralized and paper-based platforms to decentralized blockchain-based ones. Moving from a centralized model to a decentralized paradigm requires not only a technology shift but also a change of processes and of the overall culture of supply chain participants. These are some of the barriers that inhibit the adoption of distributed ledger technology at scale.
Overall, blockchain technologies and applications provide excellent use cases for pharmaceutical companies, including use cases that boost security, increase automation, and save costs. Moreover, the blockchain use cases for the pharma chains are relevant to all stakeholders, including pharmaceutical manufacturers, retailers, pharmacies, patients, regulatory authorities, and other supply chain actors. Blockchain’s potential in the pharma industry is nowadays proven, yet several adoption barriers persist. In this landscape, technology companies and high-tech startups should look for innovation opportunities at this space. At the same time, other business actors of the pharma industry and the pharma supply chain should gradually prepare to adopt more decentralized, yet more reliable and secure processes. | <urn:uuid:3eb2867b-4ba8-4721-bcf3-60ae7787a9cf> | CC-MAIN-2024-38 | https://www.itexchangeweb.com/blog/increasing-trust-in-the-pharma-value-chain-using-blockchain-technology/ | 2024-09-10T09:51:26Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651241.17/warc/CC-MAIN-20240910093422-20240910123422-00422.warc.gz | en | 0.934529 | 1,282 | 2.796875 | 3 |
Resource Pool Category
Cmdlets are usually implemented around resource operations. The four basic operations are CREATE, READ, UPDATE, and DELETE, a set known as CRUD. Most cmdlets support CRUD through the New/Get/Set/Remove verbs, respectively, but they may also offer additional operations.
Step 1: Retrieve an object by running a Get command
# Retrieves information of the resource pool to which the virtual machine MS Win belongs.
$server = Connect-VIServer -Server 10.23.112.235
Get-ResourcePool -Server $server -VM "MS Win"
Step 2: Run commands from the CRUD group
# Creates a new resource pool named ResourcePool2 in the cluster's root resource pool ResourcePool1.
$resourcepool1 = Get-ResourcePool -Location Cluster -Name ResourcePool1
New-ResourcePool -Location $resourcepool1 -Name ResourcePool2 -CpuExpandableReservation $true -CpuReservationMhz 500 -CpuSharesLevel high -MemExpandableReservation $true -MemReservationGB 5 -MemSharesLevel high
# Moves the resource pool named ResourcePool to the virtual machine host Host.
Move-ResourcePool -ResourcePool ResourcePool -Destination Host
# Removes the resource pool named ResourcePool.
Remove-ResourcePool -ResourcePool ResourcePool
Step 3: Explore More Related Commands:
Set-ResourcePool: This cmdlet modifies the properties of the specified resource pool.
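Following the pattern of the steps above, the UPDATE operation can be exercised the same way. A sketch only: the pool name and share level below are illustrative, and this assumes an active Connect-VIServer session.

```powershell
# Renames the resource pool ResourcePool2 and raises its CPU shares level.
Get-ResourcePool -Name ResourcePool2 | Set-ResourcePool -Name RenamedPool -CpuSharesLevel High
```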
Making sustainability the ninth principle of information management
In a world confronting the dire consequences of climate change, sustainability has become a corporate imperative.
More than 90% of CEOs now say sustainability is important to their organisations’ success, and nearly half of the top 100 U.S. MBA programs offer academic programs on business sustainability, according to Stanford Social Innovation Review.
The information management association ARMA lists eight principles of records and information management (RIM): accountability, transparency, integrity, protection, compliance, availability, retention and disposition. Given the challenges we are facing, sustainability should be added as the ninth principle to ensure that RIM is involved in inspiring meaningful change.
The benefits of sustainability go far beyond just corporate social responsibility. The payoff is also in lower costs, improved employee morale and better efficiency. Sustainable businesses have an easier time attracting talent, particularly the people for whom environmental issues are critical. They are less likely to run afoul of compliance rules and better able to attract customers. And the bottom line is that more than two-thirds of the respondents to one recent study said they consider sustainability when making a purchase and are willing to pay more for sustainable products.
At first glance, RIM professionals may think there is little they can do to further their organisation’s sustainability goals, but their potential impact is much greater than they might think.
Paper is a good place to start. Recycling one ton of paper saves enough energy to power the average U.S. home for six months, saves 7,000 gallons of water and reduces greenhouse gas emissions by one metric ton. Paper records that have met their retention requirement and those that have been digitised are logical candidates for recycling by shredding. Removing paper entirely, by creating and managing records in digital format, prevents trees from being felled in the first place.
A “shred-all” policy is a good way to distribute responsibility for paper recycling throughout the organisation, strengthen security and reduce regulatory vulnerability. Shred-all simply instructs employees to securely shred any information no longer needed for business rather than placing it in a trashcan or a recycle bin. This removes uncertainty about what qualifies for shredding and ensures that as much paper as possible can be recovered and recycled.
Reuse and recycle electronics
Electronic equipment such as computers, phones, monitors and copiers are full of hazardous materials including beryllium, cadmium, mercury, arsenic and lead; they may also contain private and sensitive information. Electronic refuse – or e-waste – is a growing problem; more than 53 million tons of it was created globally in 2019, or about 16 pounds per person, and that figure is expected to grow 40% by 2030. The U.S. has the dubious distinction of being the world leader.
A large amount of e-waste is shipped overseas where it finds its way into landfills, waterways and even streets and vacant lots. Much of this material can be safely disposed of and even mined for value. For example, 200 laptops collectively contain about five troy ounces of gold. Yet only 17% of e-waste is recycled, according to the Global E-Waste Statistics Partnership.
E-waste can also be a major security and compliance vulnerability, since data storage media may not be thoroughly scrubbed of sensitive information. Secure data erasure programs exist to ensure all proprietary and sensitive data has been removed, allowing the device to be refurbished and put back into the economy.
A responsible e-waste program first and foremost looks for ways to repurpose electronics such as computers, printers, fax machines and mobile devices. A successful remarketing program can not only help you recover some value from your assets, but it also helps your organisation participate in the Circular Economy. A Circular Economy differs from the Linear Economy by repurposing items and keeping them in use. In doing so, we can both minimise the growing need for virgin goods and divert waste from the landfill.
Once a piece of equipment no longer retains value, it should then be securely and sustainably recycled. Experienced service providers can recover value through donation tax credits and precious metal reclamation. They can also ensure that equipment is disposed of through secure, safe and environmentally sound processes that meet regulatory requirements.
Equipment that is not eligible for refurbishing or recycling can be disposed of through waste-to-energy incineration, which is a way to generate energy and divert waste from landfills.
Choose vendors wisely
Your organisation may have embraced sustainability goals, but can the same be said of your vendors? With more functions than ever being outsourced, organisations need to perform due diligence on the companies that do business with them. Ask questions of the contractors that serve your organisation's needs for waste disposal, drayage, cleaning, storage, food service and equipment maintenance. How transparent are they about their practices? Do they provide reporting and comply with best sustainability practices? Are their facilities and vehicles powered by alternative energy? What do they do to minimise emissions? Make sure their sustainability goals are in line with yours.
Don't forget data centres
Many organisations are becoming more digital, and while this minimises physical waste, there is the unseen environmental impact of needing to power data centres. Data processing facilities consume about 1% of the world’s electricity and are also major users of water and diesel fuel. An older mid-sized data centre can use up to 360,000 gallons of water a day.
If your organisation manages its own IT needs, consider moving to a co-location service, which can provide economies of scale and is held to strict regulatory guidelines on power and water efficiency. Many data centres are innovating in the use of green energy. For example, Iron Mountain’s colocation data centres around the globe are powered by 100% renewable sources and meet the toughest standards for power efficiency. In fact, data centre customers are eligible for Green Power Pass points that can be incorporated into their organisation’s sustainability reporting and help meet their reduced carbon output goals.
Rethink the office
The COVID-19 pandemic has prompted many organisations to take a fresh look at the purpose and design of offices. With work-from-home programs likely to proliferate in the future, now is a good time to look at reducing square footage, achieving economies through shared workspaces and digitising paper records or moving them to offsite storage.
The shift to remote work is also catalysing digital transformation initiatives. Digital records are not only more sustainable but easier for people to retrieve, share and integrate into digital workflows. A standard four-drawer file cabinet consumes 17 square feet of floor space, including the space needed for people to access its contents. Between 8% and 10% of the space in a typical office is taken up by paper records, according to Iron Mountain Clean Start® estimates.
Consolidating or digitising records and moving rarely accessed files to long-term storage frees up space for people to work safely or enables organisations to reduce space needs to save on rent, heating and cooling.
Organisations are finding that the benefits of flexible remote work policies go beyond space savings. Commute times are reduced along with emissions and traffic congestion, and research has shown that many people give at least some of that saved time back to their work. Remote workers report higher overall job satisfaction and productivity, with many saying they will trade off compensation for flexibility. The workplace is also safer when employees who are sick don’t have to come in to work.
Implement governance practices
The paper reduction, digitisation and office redesign initiatives described above are part of a sound information governance strategy. Organisations that have a complete picture of all their data, where it is, how it’s used and where it came from, are more informed, efficient and sustainable. The productivity savings alone are compelling. One recent survey found 36% of office workers said it's difficult to find the most recent version of a document most or all of the time. A sound governance strategy all but eliminates this waste of time.
Good information governance practices also contribute to sustainability when defining rules for disposing of redundant, obsolete and trivial data, destroying outdated or unneeded paper documents and aligning records use with sustainability goals. Having a single version of the truth in a place where everyone can find it isn’t just a sound business practice. It’s also good for the environment.
In conclusion, making sustainability the ninth principle of records and information management and adopting it in your daily work makes a statement about both your organisation and your profession. It must be part of a larger, holistic sustainability imperative. | <urn:uuid:a34774ea-ac72-4bfb-988a-50362f4f8a59> | CC-MAIN-2024-38 | https://www.ironmountain.com/en-ph/resources/blogs-and-articles/m/making-sustainability-the-ninth-principle-of-information-management | 2024-09-12T21:21:07Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651498.46/warc/CC-MAIN-20240912210501-20240913000501-00222.warc.gz | en | 0.947759 | 1,789 | 2.734375 | 3 |
The Internet of Things (IoT) bridges the divide between physical and digital worlds. It’s a system of interconnected devices that can collect and transfer data over a network without, in successful applications, requiring manual administration. Enterprises are gaining leverage in shifting markets by implementing IoT solutions in their business models to reduce time-to-market, boost productivity and improve customer experiences.
Nonetheless, every technology has its own challenges. The Internet of Things faces a major challenge in terms of testing. To build a world-class IoT product, an end-to-end IoT solution needs to undergo significant amounts of QA throughout its lifecycle. That means every component, such as a sensor, gateway, user interface and the web services that bind them need to be tested before, during and often after delivery to the end-customer.
We’ll call such end-to-end IoT testing multistage validation. Let’s explore the concept of multistage validation in more detail.
An end-to-end IoT solution consists of multiple components including (in descending order of abstraction):
- User Access Component: Mobile Application or Web Application
- Cloud Infrastructure
- IoT Gateway
- IoT Embedded Devices/Sensors
Each of the above components plays a very critical role in the functioning of the IoT solution. Such a multilayered stack requires multistage validation. Multistage validation ensures that each component should perform its designated action correctly.
It advocates validating each component of an IoT solution while doing system testing with closed-loop tests, where both the forward path (from the mobile application to the IoT device) and the reverse path (from the IoT device to the mobile application) are considered.
Example Application: An IoT System for a Smart Air Conditioner
Let’s say a user wants to set the temperature of their bedroom AC from a mobile application while leaving the office; the AC unit then sends a notification to the mobile application when the desired temperature is achieved.
The IoT solution for this application would have the following components:
Mobile Application: User can set the AC temperature on his/her mobile application.
User Access Cloud: The mobile application sends the temperature value to a user access cloud using REST APIs and also updates the database.
IoT Cloud and Gateway Device: The IoT cloud delivers the “change temperature” commands to the IoT gateway device installed at a user’s home.
Smart AC: The gateway device sends the desired temperature on the bedroom AC, and the AC sends a notification on the mobile application once the desired temperature is achieved.
For end-to-end IoT testing, multistage validation plays a critical role as the verification at each component level is required to ensure full system functionality.
Stage 1: Validation at the mobile application level checks the application's functionality. In this case, the validation is whether the temperature of the AC unit is changed to the desired level.
Stage 2: Validation checks the user access cloud, which the mobile or web application reaches through REST APIs. It is essential to ensure that the functional requirements are met at the API and database level, and to confirm through the IoT cloud logs that the changes the mobile application makes in the database are sent on to the gateway device. This validation ensures that the APIs work as expected and that the database changes are applied to the intended AC device.
Stage 3: Here, verification is needed at the gateway stage, where the IoT cloud sends the “temperature change” command using a messaging protocol such as MQTT or XMPP, typically with an XML or JSON payload. The validation ensures that the correct message is received by the IoT gateway device and that the message is forwarded to the intended end device over a communication protocol such as Zigbee®, BLE or Wi-Fi, whichever is available. This confirms that the temperature change is applied to the intended AC device over a supported communication protocol.
Stage 4: The last validation required is at the embedded device level, which checks that the action received from the IoT gateway is reflected on the device.
This validation would be done to make sure that the temperature gets set to the desired level on the AC unit. The AC unit should send the “desired temperature achieved” notification to the mobile device via the gateway and the cloud. The mobile application, cloud and gateway all get validated as soon as the mobile notification is generated from the Smart AC.
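The closed-loop, four-stage flow above can be sketched as a single automated test. The simulation below is a minimal illustration in plain Python (every class, method, and device name is a hypothetical stand-in, not part of any real IoT stack); each stage of the forward path, plus the reverse-path notification, gets its own assertion:

```python
# Simulated components for a closed-loop multistage validation test.
# Forward path: MobileApp -> Cloud -> Gateway -> SmartAC
# Reverse path: the "temperature achieved" notification back to the app.

class SmartAC:
    def __init__(self):
        self.temperature = None

    def set_temperature(self, value):
        self.temperature = value                  # Stage 4: device applies the command
        return f"desired temperature {value} achieved"

class Gateway:
    def __init__(self, device):
        self.device = device
        self.last_command = None

    def forward(self, command):
        self.last_command = command               # Stage 3: gateway relays the command
        return self.device.set_temperature(command["value"])

class Cloud:
    def __init__(self, gateway):
        self.gateway = gateway
        self.database = {}

    def rest_set_temperature(self, device_id, value):
        self.database[device_id] = value          # Stage 2: API call updates the database
        return self.gateway.forward({"device": device_id, "value": value})

class MobileApp:
    def __init__(self, cloud):
        self.cloud = cloud

    def set_temperature(self, device_id, value):  # Stage 1: user action in the app
        return self.cloud.rest_set_temperature(device_id, value)

def run_closed_loop_test():
    ac = SmartAC()
    cloud = Cloud(Gateway(ac))
    app = MobileApp(cloud)

    notification = app.set_temperature("bedroom-ac", 22)
    assert cloud.database["bedroom-ac"] == 22                    # Stage 2 check
    assert cloud.gateway.last_command["device"] == "bedroom-ac"  # Stage 3 check
    assert ac.temperature == 22                                  # Stage 4 check
    assert "achieved" in notification                            # reverse-path check
    return notification

print(run_closed_loop_test())   # desired temperature 22 achieved
```

In a real deployment, Stage 2 would hit live REST endpoints and database queries, and Stage 3 would subscribe to the actual MQTT or XMPP topic, but the shape of the test, one assertion per stage plus the reverse-path check, stays the same.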
The Importance of Multistage IoT Solution Testing
- It enables testing, validation and verification of application architecture as well as integration between all the components and business requirements.
- It enables catching bugs at the integration level and also finds issues at the component level.
- It enables solution testing in end-user, real-time applications.
It’s the middle of a workday. While researching a project, a random ad pops up on your computer screen alerting you of a virus. The scary-looking, flashing warning tells you to download an “anti-virus software” immediately. Impulsively, you do just that, downloading either the free version or the $9.99 one to get the critical fix.
But here’s the catch: There’s no virus, no download needed, you’ve lost your money, and worse, you’ve shared your credit card number with a crook. Worse still, your computer screen is now frozen or sluggish as your new download (disguised malware) collects the data housed on your laptop and funnels it to a third party to be used or sold on the dark web.
This scenario is called scareware — a form of malware that scares users into fictitious downloads designed to gain access to your data. Scareware bombards you with flashing warnings to purchase a bogus commercial firewall, computer cleaning software, or anti-virus software. Cybercriminals are smart and package the suggested download in a way that mimics legitimate security software to dupe consumers. Don’t feel bad, a lot of intelligent people fall for scareware every day.
Sadly, a more sinister cousin to scareware is ransomware, which can unleash serious digital mayhem into your personal life or business. Ransomware scenarios vary and happen to more people than you may think.
What is Ransomware? Ransomware is a form of malicious software (also called malware) that is a lot more complicated than typical malware. A ransomware infection often starts with a computer user clicking on what looks like a standard email attachment, only for that attachment to unlock malware that encrypts or locks the computer’s files.
A ransomware attack can cause incredible emotional and financial distress for individuals, businesses, and large organizations. Criminals hold data ransom and demand a fee to release your files back to you. Many people think they have no choice but to pay the demanded fee. Ransomware can be large-scale, such as the attack on the City of Atlanta, which is considered the largest, most expensive cyber disruption of a city government to date, or the WannaCry attack last year that affected some 200,000+ computers worldwide. Ransomware attacks can be aimed at any number of data-heavy targets such as labs, municipalities, banks, law firms, and hospitals.
Criminals can also get very personal with ransomware threats. Some reports of ransomware include teens and older adults receiving emails that falsely accuse them of browsing illegal websites. The notice demands payment or else the user will be exposed to everyone in his or her contact list. Many of these threats go unreported because victims are too embarrassed to do anything.
According to the Cisco 2017 Annual Cybersecurity Report, ransomware is growing at a yearly rate of 350% and, according to Microsoft, accounted for roughly $325 million in damages in 2015. Most security experts advise against paying any ransoms since paying the ransom is no guarantee you’ll get your files back and may encourage a second attack.
Cybercriminals are fulltime digital terrorists and know that a majority of people know little or nothing about their schemes. And, unfortunately, as long as our devices are connected to a network, our data is vulnerable. But rather than living anxiously about the possibility of a scareware or ransomware attack, your family can take steps to reduce the threat.
Tips to keep your family’s data secure:
Talk about it. Education is first, and action follows. So, share information on the realities of scareware and ransomware with your family. Just discussing the threats that exist, sharing resources, and keeping the issue of cybercrime in the conversation helps everyone be more aware and ready to make wise decisions online.
Back up everything! A cybercriminal’s primary goal is to get his or her hands on your data, and either use it or sell it on the dark web (scareware) or access it and lock it down for a price (ransomware). So, back up your data every chance you get on an external hard drive or in the cloud. If a ransomware attack hits your family, you may panic about your family photos, original art, writing, or music, and other valuable content. While backing up data helps you retrieve and restore files lost in potential malware attack, it won’t keep someone from stealing what’s on your laptop.
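The “back up everything” habit can be automated with even a very small script. The sketch below, using only Python’s standard library, copies a folder into a timestamped backup directory; the folder names here are throwaway placeholders, and a real version would target an external drive or a cloud-synced location:

```python
import shutil
import tempfile
from datetime import datetime
from pathlib import Path

def back_up(source: str, backup_root: str) -> Path:
    """Copy the source folder into a timestamped directory under backup_root."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    destination = Path(backup_root) / f"backup-{stamp}"
    shutil.copytree(source, destination)   # refuses to overwrite an existing backup
    return destination

# Demo with throwaway folders; in real use, point these at your documents
# folder and an external drive, e.g. back_up("C:/Users/me/Photos", "E:/Backups")
source = Path(tempfile.mkdtemp())
(source / "photo.txt").write_text("irreplaceable family photo")
dest = back_up(str(source), tempfile.mkdtemp())
print(dest.name.startswith("backup-"))   # True
print((dest / "photo.txt").read_text())  # irreplaceable family photo
```

A scheduled task or cron job running a script like this gives you the regular, hands-off backups that matter most when ransomware strikes.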
Be careful with each click. Being aware and mindful of the links and attachments you click on can reduce your chances of malware attacks in general. However, crooks are getting sophisticated and linking ransomware to emails from seemingly friendly sources. So, if you get an unexpected email with an attachment or random link from a friend or colleague, pause before opening the email attachment. Only click on emails from a trusted source.
Update devices. Making sure your operating system is current is at the top of the list when it comes to guarding against malware attacks. Why? Because nearly every software update contains security improvements that help secure your computer from new threats. Better yet, go into your computer settings and schedule automatic updates. If you are a Windows user, immediately apply any Windows security patches that Microsoft sends you.
Add a layer of security. It’s easy to ignore the idea of a malware attack — until one happens to you. Avoid this crisis by adding an extra layer of protection with a consumer product specifically designed to protect your home computer against malware and viruses. Once you’ve installed the software, be sure to keep it updated since new variants of malware arise all the time.
If infected: Worst case scenario, if you find yourself with a ransomware notice, immediately disconnect everything from the Internet. Hackers need an active connection to mobilize the ransomware and monitor your system. Once you disconnect from the Internet, follow these next critical steps.
Most of us use the internet pretty much every day of our lives, and most of the time that will be done using a wifi connection. While the internet has opened many amazing doors, it has also opened things up for scammers and malicious software that can hack your network and access your information.
In this article, we will look at some of the ways you can learn how to make your wifi hackproof to make sure that you’re safe from the nefarious side of the internet.
- WiFi allows us to access the internet through radio waves that allow for high transfer speeds using a router or a similar device.
- It’s vital to have good WiFi security to protect your devices from malicious attacks and viruses.
- Some ways you can increase your wifi security are by using strong passwords, using a VPN, and many other easy methods you can implement today.
What is WiFi?
As we mentioned before, the internet is a vast ocean of data, information, and pretty much anything you can think of, all available to each of us. In order to access all that data, you need an access point to the web.
That’s where wifi comes in. Basically, wifi allows us to access the internet through radio waves that allow for high transfer speeds. This will typically be done through a router or a similar device.
Having that wifi connection does open you up to malicious attacks, which is why it’s vital to make sure you’re secure.
How Does WiFi Work?
The process of how wifi works is quite complex, but it can be boiled down to radio waves. Let’s say you have a smartphone; that phone will be connected to your router via these radio waves.
That allows the phone to access the internet network along with any other devices that are connected.
Why Having A Secure WiFi Is Important
It’s not a nice thought, but sadly, the internet is filled with people and software that want nothing more than to gain access to your information for shady purposes. The less secure your wifi is, the more you’re opening yourself up to being exploited.
Becoming 100% secure is quite a tall order, but there is a lot you can do to get started with making yourself a lot harder to hack.
Securing your WiFi network is akin to fortifying the digital gateway to your personal domain. Implementing robust passwords, enabling WPA3 encryption, and keeping your router firmware updated are not just technical tasks; they are fundamental practices in crafting a resilient barrier against the relentless tide of cyber threats.
– Kurt Sanger Cybersecurity Expert
How To Protect Your WiFi From Being Hacked
There are quite a few measures you can take to protect yourself from hacking, and we will cover some of the top methods now. These methods will already help a lot, but you can also check out this list of 5 tips for cybersecurity for further information on the subject.
Use A Strong Password And Change It Often
The password you use for your WiFi can make a huge difference. If the password uses your name and birth year, for example, then a hacker wouldn’t have to do much guessing to discover what it is.
That’s why it’s a great idea to use passwords that are easy for you to remember, but based on things people who don’t know you won’t be able to guess.
Even if you use great passwords, it’s still a good idea to change them fairly frequently. That will make it even more difficult for hackers to guess what it is. It may seem like a simple thing, but having inaccessible passwords is a great first step toward making yourself more secure.
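As a simple starting point, a hard-to-guess password can be generated with Python’s standard `secrets` module; the length and symbol set below are illustrative choices rather than a formal policy:

```python
import secrets
import string

def generate_password(length=16):
    """Build a random password from letters, digits, and a few symbols."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
    # secrets, unlike the random module, draws from a cryptographically
    # secure source, so the result is suitable for passwords
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())   # e.g. 'G4_u#qTp9!xRmW2b' (different every run)
```

A generated password like this is hard to guess but also hard to remember, so many people pair this approach with a password manager.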
Be Careful Who You Give Your Password To
Even if you have an incredible set of passwords that you update monthly, it will be pointless if you hand them out like Halloween candy. Naturally, there are people you can trust your passwords with.
Most people will be okay with friends and family knowing their passwords, but it can sometimes be necessary to share with outside parties. This is something you should be very careful with, however, and you should only give out passwords to people you absolutely trust.
A good rule of thumb is to never give out passwords to people who contacted you instead of you contacting them. If you contacted a financial adviser to help you work out your taxes, then giving them password access may be acceptable.
However, if you’re cold-called by someone claiming to work for your bank, for example, then you should definitely not give them any information. It’s never rude to say no to giving your password to someone you don’t trust implicitly.
Create A Guest WiFi
Creating a guest wifi network can be a great solution for safety, especially if you will be granting people you don’t know access to your router. For instance, you may have an Airbnb room in your home. If you give tenants access to your main network, then they may be able to access your information.
Creating a guest wifi network will separate the connections and make it harder for guests to pry into your information.
Enable WPA3 Encryption
Encryption involves converting data to a specific code language that only people with the key can decipher. Encryption can be strong or weak, so you want to make sure that you have the strongest encryption possible.
WPA3 encryption is recommended for wifi connection, as it is the most up-to-date encryption available currently. Whenever you have the choice, this is the encryption type you should pick for your wifi.
Keep Your Router Firmware Up To Date
Whenever you have a device that can connect to the internet, there will likely be fairly regular firmware updates. This will be the case whether it’s a gaming console, a home security camera or a router.
These updates are usually pretty small and may therefore seem unimportant. However, you should always perform these updates whenever possible, as they will often incorporate the latest security measures to keep your protection up to date.
Use A Firewall
A firewall is a layer of protection that will monitor information processed to and from your network. This firewall will detect potentially malicious software or other threats to your network.
Many antiviruses will include a firewall feature, and it’s definitely something you will need to prioritize for your safety. In addition to a good firewall, services such as Identity Guard can also be incredibly helpful in protecting you and your family from a variety of attacks and breaches.
Be Careful With Public Wifi
When you’re dealing with your own home wifi network, you know the layers of protection that are in place because you would have implemented them yourself.
When you’re on public wifi, you have no assurances about the security measures the admin has implemented, so care is definitely recommended.
For example, if you’re using wifi at a cafe or hotel, you should avoid accessing any sensitive or private information, as hackers may be able to easily access it.
Use A VPN
VPNs, or Virtual Private Networks, have many great uses. This is why we always recommend them to anyone using the internet, as they provide unparalleled internet access and anonymity.
A VPN can also make you much harder to track and monitor, so it can be a real headache for hackers to bypass. NordVPN is always a great choice to start with, as we consistently rank it as one of the best VPNs on the market.
Verify Connected Devices
In your home, you’ll probably have a good idea of the devices that are connecting to your network. For example, you may have 3 smartphones, a laptop, a smart TV and a PlayStation 5 connected to your wifi.
You can monitor that these are the only connected devices, and that means you can see if an unidentified smartphone has mysteriously connected. This could mean that someone has guessed your password and can therefore access your network.
That’s why it’s always a good idea to verify and keep track of any connected devices.
Enable MAC Address Filtering
MAC address filtering is a feature that some routers have. Essentially, this feature allows you to limit and control which devices can access your network. Having this extra degree of control makes it more unlikely that unauthorized devices will be able to connect, so we recommend having it turned on.
Use A Secure Router With Quality Security Features
Unfortunately, not all routers are created equally, and some may lack essential security features. That’s why it’s always best to go for routers that have more comprehensive features.
The more secure a router is, the pricier it may be. It’s usually worth it to pay a bit more for extra security, but you can often find some really decent, reasonably-priced routers that will get the job done.
Protecting yourself from scammers and malicious software can seem like an uphill battle, but that’s why we wanted to cover ways that you can learn how to make wifi hackproof.
If you utilize all of the tips in this article, maintain security measures and buy quality hardware, then you will be well on the way to making yourself a lot less susceptible to hacking and scam attempts.
Where To Buy A Secure Router And VPN
There are so many different routers and VPN products to choose from that it can get overwhelming. In this article, we cover one of our top picks for routers that would be great for small businesses.
These routers would also be great for general use, so we would recommend them to any users. There are routers for all budgets and security needs, so we highly recommend checking that article out.
When it comes to VPNs, NordVPN would be one of our top choices, as we touched on earlier. If you want to compare NordVPN to other top products so that you can choose the best for you, then we have you covered. | <urn:uuid:8ba674e2-2068-4ccc-9109-6bd8017ab342> | CC-MAIN-2024-38 | https://battensafe.com/resources/how-to-make-wifi-hack-proof/ | 2024-09-20T06:12:11Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652138.47/warc/CC-MAIN-20240920054402-20240920084402-00522.warc.gz | en | 0.949664 | 2,049 | 2.515625 | 3 |
An increasingly digitalized global economy requires ever-more digitally skilled workforces for nations to remain productive. Unfortunately, according to a recent report, the U.S. workforce is becoming less digitally savvy than other countries.
The report, published by the Information Technology and Innovation Foundation (ITIF), found that roughly 31 percent of employees in the U.S. workforce have either no or limited digital skills, with about one in six employees unable to use email, web search, or other basic online tools. The U.S. ranks just 29th out of 100 countries for the digital acumen of its workforce in business, technology, and data science.
“The United States has led the global digital revolution in (information and communications technology) fields, but across the workforce, the United States is increasingly faltering, which is detrimental to long-term U.S. competitiveness,” Stephen Ezell, the vice president of global innovation policy at ITIF and report author, said in the report summary.
Countries that wish to compete in the global digital economy successfully must cultivate workforces possessing the requisite digital skills so that industries, enterprises, and even individuals can thrive in the digital environment. In the U.S., digital skill requirements have increased for many occupations. The report noted that whereas only 44 percent of U.S. jobs required medium-high digital skill levels in 2002, 70 percent did by 2016.
Digital skills are also critical to higher wages. According to the report, jobs that incorporate higher levels of digital content pay more. In fact, for every 10 percent increase in IT-task intensity, the average U.S. worker’s salary increases four percent.
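That wage relationship is easy to make concrete. The sketch below applies the report's 4-percent-per-10-percent figure linearly; the linear reading and the sample salary are assumptions for illustration, not taken from the report:

```python
def salary_with_it_intensity(base_salary: float, intensity_increase_pct: float) -> float:
    """Apply the report's rule of thumb linearly: every 10% increase in
    IT-task intensity raises the average salary by about 4%."""
    return base_salary * (1 + 0.04 * (intensity_increase_pct / 10.0))

# Hypothetical worker: $50,000 base salary, 20% rise in IT-task intensity
print(salary_with_it_intensity(50_000, 20))  # ≈ 54000
```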
“The United States is far behind its competitors when it comes to broad workforce digital skills,” Ezell said. “This should raise the alarm in Washington as an increasingly digitalized global economy requires ever-more digitally skilled workforces for nations to remain productive.”
The report recommended that the U.S. increase its number of computer science graduates and concentrate particularly on women, who in 1995 represented 37 percent of U.S. computer scientists but today only represent 24 percent. It also recommended that the U.S. significantly increase its investment in workforce training, including for digital skills; the Federal government now invests less than half as much in such programs as it did 30 years ago.
By now, everyone is familiar with the Internet of Things (IoT) where everything big enough to hold an integrated circuit or CPU has one on board somewhere. They’re in our refrigerators, stoves, thermostats, garage door openers, credit cards, smartphones, and probably not unexpectedly, in our running shoes, powered by our own motion, to accurately report how far we walked, calories burned, and distance travelled.
Some might think this is excessive, but we have not yet begun to computerize all that we can. Currently we make computers that are so small, they almost vanish in a spoonful of salt or sugar.
This particular image is of IBM’s latest creation from March of 2018, actually sitting on a tiny pile of salt. It has all the computing power of the x86 series of computer Central Processing Units (CPUs) of the 1990s. The “die” sitting on the fingertip is actually about 40 of these micro-computers which haven’t been separated yet. Want to see it up close? Here’s a (silent) CGI video tour of just one of these tiny miracles, showing the millions of transistors that allow its existence.
Heading for the Future
The progress we’ve experienced since the first substantive computers were invented has been stunning. Back at the dawn of the computer age, we expected computers to solve every problem. The original room-filling machines were fully expected to shrink substantially. We just didn’t know how much.
Contemporary science writers wrote of tremendous technical leaps forward like this one about the incredible ENIAC computer:
“Where... the ENIAC is equipped with 18,000 vacuum tubes and weighs 30 tons, computers in the future may have 1,000 vacuum tubes and perhaps weigh just 1-½ tons.”
--Popular Mechanics magazine, March 1949, page 258
Getting cheap jeans may be perfectly satisfactory to some, but when it comes to health and safety, that is a different matter. Counterfeit products could be eliminated by use of these tiny 10¢ computers.
The talk is now of using them to mark everything ever manufactured with its own indisputably unique Blockchain code, thus eliminating the possibility of fakes and knockoffs forever. Edible versions could be printed on malaria pills to prevent useless fakes from making their way to disaster zones; liquid versions could absolutely identify real, original wines made by reputable vintners, free of poisonous automobile antifreeze used to artificially enhance sweetness by criminals invading this profitable market. Individual identification for everything may still be a few years in the future, but we’re getting there.
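The full Blockchain scheme described above is beyond a short example, but the underlying notion of an indisputably unique per-item code can be illustrated with a plain hash. This is a simplification, not the Blockchain approach itself, and the manufacturer key and serial format are invented:

```python
import hashlib

def product_id(manufacturer_key: str, serial_number: str) -> str:
    """Derive a unique identifier for one item by hashing the maker's
    secret key together with the item's serial number (illustrative only)."""
    digest = hashlib.sha256(f"{manufacturer_key}:{serial_number}".encode())
    return digest.hexdigest()

a = product_id("acme-secret", "batch42-unit001")
b = product_id("acme-secret", "batch42-unit002")
print(len(a), a != b)  # 64 True — distinct serials yield distinct codes
```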
Industrial Internet of Things (IIoT)
The next obvious step for manufacturing was simply to connect all the various bit-and-pieces involved in the manufacturing and assembly process so they could “talk” to each other, sharing vital process information. This is called machine-to-machine (M2M) communication.
Nowadays, tablet-equipped employees will be instantly notified if the last batch of canned peaches didn’t reach pasteurization temperature in the canning process. They’ll know if cans of paint are being overfilled or underfilled, and will be able to alter the process without ever laying hands on the machinery itself.
More importantly, however, through the use of Artificial Intelligence (AI), the machinery could respond to these problems as they arose, providing instant, timely solutions (altering the timing so the peaches cook a little longer until they reach the proper temperature, or changing the fill parameters of the paint line) so that no bad product ever gets created in the first place.
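The closed-loop adjustment just described can be sketched in a few lines. The threshold and the time increment here are invented for illustration; a real canning line would read its values from sensors and PLCs:

```python
PASTEURIZATION_MIN_C = 85.0  # invented threshold, not a real process spec

def adjust_cook_time(batch_temp_c: float, cook_seconds: int) -> int:
    """If the batch missed pasteurization temperature, extend the cook time;
    otherwise keep the current setting unchanged."""
    if batch_temp_c < PASTEURIZATION_MIN_C:
        return cook_seconds + 30  # fixed, invented increment
    return cook_seconds

print(adjust_cook_time(82.5, 600))  # under temperature → 630
print(adjust_cook_time(86.0, 600))  # in spec → 600
```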
Machine Vision has allowed us to teach AIs to interpret data visually. Say, for example, that undersized sheet stock was going into the laser cutter. Instead of cutting 15 out of 20 parts correctly and having five errors, the AI could simply stop the process and demand the proper size material for the job.
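A minimal version of that go/no-go gate, assuming the vision system has already measured the sheet; the nominal size and tolerance are invented:

```python
REQUIRED_MM = (1220.0, 2440.0)  # invented nominal sheet size

def gate_sheet(measured_mm: tuple, tolerance_mm: float = 2.0) -> bool:
    """Return True if the measured sheet is within tolerance of the required
    size; a real line would halt the cutter and request new stock on False."""
    return all(abs(m - r) <= tolerance_mm for m, r in zip(measured_mm, REQUIRED_MM))

print(gate_sheet((1219.5, 2440.8)))  # within tolerance → True
print(gate_sheet((1150.0, 2440.0)))  # undersized → False, stop the job
```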
This non-human process management would mean responses faster than any human could provide. The increase in efficiency would be manifold, along with the elimination of waste, and consequently, the monetary gains would be multiplied.
The most important thing to remember is keeping your employees informed. Their natural fear is that they will be replaced, but this is completely unrealistic. When you increase productivity and increase profits you don’t need fewer people—you need more!
Granted, tasks will change; you’ll need people to monitor, train, and program the machines so the AIs can take over these mundane tasks. The machines are perfectly suited for the uncreative, unrewarding, mind-numbing tasks that are found throughout the manufacturing sector.
People, on the other hand, excel at creative tasks. Your HR Department needs to get out the message that current employees will be retrained for these new requirements at company expense. Current employees are far too valuable to lose, because they already understand the company culture, procedures, and the unique methods or technology that makes a business possible.
The greatest resistance to this level of change comes not from upper management, but from the employees-on-the-floor who are not kept up to date and informed of how they fit into the new venture. It doesn’t matter whether you are the janitor or the CEO. Everyone needs to feel they have a place, are valued, and have security in their job.
We’re all looking forward to the day when we have a personal AI like Tony Stark’s (Iron Man) electronic butler, Jarvis…and it is going to happen, eventually, because the technology is inevitable. In the meantime, the future is in our hands right now.
IIoT is not some passing fancy. It is already in use by the most progressive manufacturers. Whether you see it as a steam engine or a modern maglev, this train is leaving the station right now. Get on board, or your competitors will stampede right past you, leaving you in the dust cloud of historical failures.
Taylor Welsh is a content writer for AX Control.
In the past few years, we've seen a rising trend of electrical cooperatives (co-ops) utilizing their existing resources and know-how to bring broadband internet access to underconnected regions.
Originally, electrical co-ops were established to provide electricity to rural and remote communities that larger utility companies often ignored. These co-ops are usually owned and managed by the customers they serve, with a focus on delivering reliable and affordable electricity.
As the significance of internet connectivity grows in today's digital world, many rural areas continue to lack access to high-speed internet services. Recognizing this need, some electrical co-ops have started to explore the possibility of extending their services to include broadband internet.
By leveraging their existing infrastructure, such as power lines, utility poles, and substations, electrical co-ops can expand their reach to offer internet services. This method is often referred to as "Fiber-to-the-Home" (FTTH) or "Fiber-to-the-Premises" (FTTP), where fiber optic cables are installed to provide high-speed internet access directly to homes and businesses.
Several benefits come with electrical co-ops entering the internet service provider (ISP) market. Firstly, they already have a well-established customer base and a deep understanding of the communities they serve. This familiarity allows them to adapt their internet services to meet the specific needs of their customers.
Moreover, electrical co-ops are often nonprofit entities, meaning their primary goal is to serve their members and the community rather than maximizing profits. This can result in more affordable pricing, improved customer service, and a commitment to bridging the digital divide in underserved areas.
Transforming an electrical co-op to deliver internet services can involve significant investment in infrastructure, such as laying fiber optic cables and upgrading network equipment. However, some co-ops have managed to secure funding through grants like the BEAD program*, loans, or partnerships with other organizations to support these expansion efforts.
Additionally, the decreasing cost of technology across many networking products can make this option more appealing. White-box networking software, often a fraction of the price of traditional networking solutions, can open up economic options and help drive business decisions to offer data services.
By adopting white box networking, co-ops can take advantage of the flexibility and cost-effectiveness of commodity hardware while maintaining control and customization over their network infrastructure. With white box switches and routers, co-ops can tailor their network to meet specific requirements, optimize performance, and scale according to their needs. This level of customization enables co-ops to efficiently deliver data services to their members.
White box networking also offers programmability and automation capabilities, empowering co-ops to streamline network management, reduce operational costs, and quickly deploy new services.
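As a small illustration of that programmability, device configuration can be generated from data instead of typed by hand. The CLI syntax below is generic and invented, not the command set of any particular network operating system:

```python
def render_vlan_config(vlans: dict) -> str:
    """Render a VLAN table into generic switch-CLI lines (invented syntax)."""
    lines = []
    for vlan_id, name in sorted(vlans.items()):
        lines.append(f"vlan {vlan_id}")
        lines.append(f"  name {name}")
    return "\n".join(lines)

# Hypothetical subscriber and management VLANs for a co-op access switch
print(render_vlan_config({10: "subscribers", 20: "management"}))
```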
It's crucial to note that the transition to becoming an internet service provider may vary for each electrical co-op. Some co-ops may choose to build and operate their broadband networks, while others may partner with existing ISPs or utilize a combination of strategies to deliver internet services effectively.
Kevin Myers, Senior Network Architect with IP ArchiTechs, said, "I believe Electric Co-Ops are ideally positioned to capitalize on this opportunity because they have experience in infrastructure, heavy construction, and the necessary components to become an ISP, as well as managing a utility-type infrastructure with customers and subscribers."
In conclusion, the involvement of electrical co-ops in providing broadband internet services reflects their commitment to community development and addressing the digital divide in underserved areas. This trend showcases the potential for leveraging existing infrastructure and local expertise to bridge the gap in internet access and contribute to the overall economic and social well-being of rural communities.
Chief Marketing and Product Officer, IP Infusion
As Chief Marketing and Product Officer (CMO), Kelly LeBlanc is responsible for global marketing and product management at IP Infusion. Kelly is a seasoned Silicon Valley marketing executive with 25 years of experience across multiple technology markets. Most recently Kelly was the CMO of Pica8 where she managed global marketing for the company’s enterprise and data center software solutions.
Chief Marketing Officer, EPS Global
Senior Network Architect, IP ArchiTechs
A 21-year veteran of service provider, large enterprise and data center networking, Kevin spends his time leading the network architecture and operations team for IP ArchiTechs. Acting as a senior escalation point for the team with a primary focus in advanced routing and switching, he often works on complex issues and designs. Outreach into the networking community is a large part of his work through blogging, podcasts and contributions to various forums and slack channels. An avid presenter, Kevin has spoken at a number of conferences globally as a subject matter expert on network engineering and design.
Governments have long sought ways to regulate Internet activity, whether for the purposes of taxation, content regulation, or the application of national laws. Effective regulatory measures have often proven elusive, however, since, unlike the Internet, national laws typically end at the border. Earlier this month, the United States began to move aggressively toward a new way of confronting the Internet’s jurisdictional limitations—the domain name system.
As every Internet user knows, inadvertently entering the wrong email or web address typically means that the email bounces back or takes the user to an unexpected destination. As my weekly technology law column notes (Toronto Star version, homepage version), legislators have now begun to consider the possibility of intentionally stopping access to certain sites by ordering Internet providers to block access to their domain names.
The Combating Online Infringement and Counterfeits Act, recently introduced in the U.S. Senate, would potentially force Internet providers, domain name registrars (companies that register domain names) and domain name registries (organizations that maintain the domain name database) to block access to specified domain names.
This domain name block list—already being dubbed the Great Firewall of America—would be created through a censorship court order obtained by the U.S. Attorney General. The court order could be used to shut down a site located within the U.S. or to order Internet providers to block access to the domain name if the site resides outside the country.
Moreover, the Department of Justice could identify additional domain names that are “dedicated to infringing activities.” Despite the absence of any court oversight, this second list would also likely involve blocked domains since Internet providers would be immune from liability provided they curtail access to them.
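Mechanically, resolver-side blocking of the kind the bill contemplates reduces to a lookup before answering. A toy sketch with invented domain names; a real resolver would return NXDOMAIN or a sinkhole address rather than a Python None, and would answer with real records rather than a placeholder:

```python
BLOCKED = {"infringing-example.com", "counterfeit-example.net"}  # invented entries

def resolve(domain: str):
    """Refuse to answer for blocked names; otherwise pretend to resolve."""
    if domain.lower().rstrip(".") in BLOCKED:
        return None  # blocked: no answer given
    return "192.0.2.1"  # placeholder answer (a TEST-NET-1 documentation address)

print(resolve("infringing-example.com"))  # → None
print(resolve("example.org"))             # → 192.0.2.1
```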
This notably targets websites located anywhere in the world, since any domain, wherever located, may be placed on the list. In fact, since the core of the domain name system resides in the U.S., it is possible that the site could be blocked at a global level if it was removed or rendered inaccessible from the “master” domain name database.
This is not the first time the U.S. has used its control over the domain name system to establish a home field regulatory advantage. In 1999, it enacted the Anticybersquatting Consumer Protection Act to deal with cases of domain name cybersquatting.
The drafters of that law recognized the jurisdictional challenges inherent in resolving domain name disputes by granting trademark holders the right to file a lawsuit against the domain name itself, rather than against the domain name registrant. That approach, known as in rem jurisdiction, treats the domain name as property that can be sued. This led to cases where U.S. courts ruled that they are entitled to order the transfer of a domain name even where a foreign court has issued an order barring the transfer.
The net effect of these laws is to create a two-tier regulatory structure for the Internet. Domain names may be global—more than 200 million have been registered worldwide—yet the U.S. continues to retain effective control over much of the system. As the recent moves to use the domain name system to address online concerns demonstrate, that control raises serious concerns about its jurisdictional reach and the misuse of a system intended to route Internet traffic without regard for its content or destination.
Introduction: Why is Computer Support Important?
Why is Computer Support Important? In today’s digital age, the importance of computer support cannot be overstated. Whether you’re a business owner managing a network of computers or an individual using a single device, reliable computer support is a crucial aspect of maintaining efficiency and productivity. It plays a pivotal role in ensuring that our computers function optimally, and can be a lifeline when things go wrong.
Computer support, in its various forms, is the backbone that keeps our digital world running smoothly. It encompasses everything from troubleshooting hardware and software issues to providing vital security measures against cyber threats. Without it, we would find ourselves lost in a maze of technical problems, potentially compromising our work and personal data. Hence, understanding why computer support is important can help us appreciate its value and influence in our daily lives.
Understanding the Basics: Why is Computer Support Important?
Computer support has become an essential part of our everyday lives. It’s the backbone that ensures the smooth running of our systems, preventing and resolving any technical issues that may arise. Without computer support, we would be left helpless in the face of software glitches, hardware malfunctions, and cyber threats. It’s not just about fixing problems; it’s about maintaining system performance and safeguarding valuable data.
Moreover, computer support plays a critical role in facilitating software and hardware updates, which are necessary to keep your systems up-to-date and protected against the latest cyber threats. It also assists in data backup and recovery, which is crucial in preventing data loss. In a nutshell, computer support is not a luxury, but a necessity in our increasingly digital world. Understanding the basics of computer support is key to appreciating its importance and leveraging its benefits to the fullest.
The Role of Computer Support in Maintaining System Efficiency
The role of computer support is crucial in maintaining system efficiency. It is the backbone that ensures all your hardware and software are running smoothly and effectively. From troubleshooting errors, performing regular updates, to optimizing system performance – computer support handles it all. They are the unsung heroes who work behind the scenes, ensuring that your system is up and running, minimizing downtime and enhancing productivity.
Advanced technologies, such as AI and machine learning, are now playing a significant role in computer support. They help in predicting potential system failures and provide proactive solutions, thereby enhancing system efficiency. Moreover, they aid in automating routine tasks, which frees up time for IT professionals to focus on more complex issues. Thus, computer support, aided by these advanced technologies, plays a pivotal role in maintaining system efficiency.
Ensuring Data Security: The Importance of Computer Support
In the digital age, ensuring data security has become a paramount concern for businesses and individuals alike. A robust computer support system plays a critical role in safeguarding sensitive information from potential threats. This system encompasses various aspects including antivirus software, firewalls, data encryption, and regular system updates. These components work in tandem to create a formidable defense against cyber threats, ensuring the integrity and confidentiality of your data.
However, the importance of computer support extends beyond mere protection. It also involves constant monitoring and timely response to any potential breaches. In the event of a security incident, a proficient computer support team can promptly identify the issue, mitigate the damage, and prevent further intrusion. Thus, computer support is not just about prevention, but also about quick and effective response, making it an indispensable part of any data security strategy.
Computer Support: A Key Factor in Business Productivity
Computer support plays a pivotal role in enhancing business productivity. A robust support system ensures seamless operation, mitigates downtime, and optimizes software and hardware performance. When your IT infrastructure is in good hands, your team can focus on their core tasks, driving productivity.
In this digital age, businesses rely heavily on technology. From communication to data management, everything revolves around computers. Therefore, any disruption in your computer systems can lead to significant downtime, affecting your bottom line. With professional computer support, you can prevent such issues, ensuring your business runs smoothly and efficiently.
The Impact of Computer Support on Software Updates and Upgrades
The impact of computer support on software updates and upgrades is undeniable. The right computer support can streamline the process, reducing downtime and ensuring that software continues to function optimally post-update. It is through skilled IT support that businesses can navigate the complexities of software updates and upgrades, mitigating risks and potential disruptions.
In an era where technology is evolving rapidly, staying current with software updates and upgrades is critical. However, these updates can sometimes introduce new bugs or compatibility issues. This is where robust computer support comes in. They not only handle the technical aspects of the update but also provide troubleshooting and problem resolution post-update. Thus, effective computer support can significantly enhance the impact of software updates and upgrades, ensuring smooth transitions and continuous business operations.
How Computer Support Enhances User Experience
Computer support plays a pivotal role in enhancing user experience, ensuring seamless interaction with technology. It’s the lifeblood that keeps systems running smoothly and users satisfied. From troubleshooting software issues to mitigating hardware malfunctions, computer support teams work tirelessly to ensure a frictionless user experience. They leverage their technical expertise to resolve issues promptly, minimizing downtime and maintaining productivity.
Users demand immediate, effective solutions to their tech problems. This is where computer support shines. By providing round-the-clock assistance, computer support teams foster a sense of reliability and trust among users. They also impart valuable knowledge, empowering users to troubleshoot minor issues independently. Thus, computer support not only resolves immediate issues but also equips users with the tools to navigate future challenges, significantly enhancing the overall user experience.
Conclusion: Why is Computer Support Important?
Why is Computer Support Important? To wrap up, computer support plays a pivotal role in both personal and professional settings. It ensures the efficient operation of systems, mitigates potential risks, and provides solutions to technical issues. Without it, we would be left to navigate the complex world of technology on our own, which could lead to significant downtime, data loss, and decreased productivity.
In essence, computer support is the backbone of any IT infrastructure, ensuring its stability and longevity. From troubleshooting software glitches to implementing system upgrades, these professionals keep our digital world running smoothly. So, next time you find yourself facing a computer issue, remember the importance of computer support and how it’s a key player in maintaining our digital lives.
Exploring Edge Computing in Smart Medical

Introduction

AI has been changing our world with a vast number of new applications in manufacturing and transportation. Its power can also be leveraged in healthcare and the life sciences. Together with the advantage of edge computing, where massive amounts of data are processed and analyzed where they are generated and collected, Edge AI enables real-time decision-making and actions that open new possibilities for medical clinical and research teams. As the pandemic has raged since 2019, a worsening shortage of medical manpower and growing pressure on the medical industry have driven the evolution of smart healthcare and better medical management through automation and new technology. Let us see what Smart Medical is capable of in this fast-moving world.

Diverse Applications in Smart Medical

Remote Management

The manpower and TCO of a medical system can be significantly lowered by remote management with the aid of edge AI and faster internet. As there is no need to send high-bandwidth data such as video streams across the network between cloud and premises, edge AI improves efficiency by processing information where it is generated, and can even integrate the collected data with a cloud database for remote collaboration between patients, medical staff, and numerous edge devices. Moreover, the security of patient health information (PHI) is enhanced, as inference is completed at the edge and data is kept within the device to avoid attacks and data breaches.

Patient Care

Smart hospitals integrate edge computing and AI workflows with large-scale status collection, not only for patient monitoring and screening but for more intelligent uses as well. Edge AI in the medical ecosystem can alert medical personnel and help teams make urgent emergency responses, crucial decisions, and even life-saving predictions based on inference and real-time processing at the edge. Without any delays, it performs better than centralized cloud, because the computing power sits close to patients and doctors, at the point of care where exams and diagnoses take place.

Surgery Assistance

With the ultra-low-latency processing of edge computing, safer surgeries can be ensured. Handling data at the edge with AI, together with the high-speed transmission of faster internet and higher throughput, provides near-instantaneous insights and feedback for tasks such as hand-eye coordination and warning where critical organs are during the procedure. As surgery assistance, Edge AI helps avoid accidents and improves surgery success rates.

Medicine Research

In addition to clinical operations, artificial intelligence also benefits research into the treatment of diseases such as cancers and complications from COVID-19. With advanced algorithms to process the great volume of information shared by global medical technology and pharmaceutical companies and national medical services, new drug candidates (e.g., vaccines and antivirals) can be tested in silico against the target disease to shorten development time and provide more effective treatments, or even personalized medicines. Drug discovery is undergoing the same revolution we have seen in other AI-impacted industries.

Conclusion

Delivering AI at the edge can minimize data privacy concerns, increase the efficiency of remote management, enable real-time AI for clinical decisions, assist smart surgeries, boost the development of drugs, and more remains to be developed and discovered. AEWIN Edge AI servers, highly reliable and finely designed for edge deployment with the flexibility to expand per customers’ demands, are ready for the new generation of smart medical. Discover more by contacting AEWIN’s friendly sales team!

SCB-1932C: 2U Edge Server with dual Intel® 3rd Gen Ice Lake-SP, 2x Gen 4 x16 FHFL GPU cards, 4x PCIe Gen4 x8 slots, redundant PSU, and short-depth design.

SCB-1937C: 2U Edge Server with dual AMD EPYC™ 7000 series, 2x Gen 4 x16 FHFL GPU cards, 4x PCIe Gen4 x8 slots, redundant PSU, and short-depth design.

BIS-3101: Desktop Workstation with Intel® 8th/9th Gen Core i CPU, supporting a Gen 3 x16 dual-width GPU card.
What is a Zero-Day Vulnerability?
A zero-day vulnerability is an undiscovered flaw in an application or operating system, a gap in security for which there is no defense or patch because the software maker does not know it exists—they’ve had “zero days” to prepare an effective response.
How do you find zero-day vulnerabilities?
The first step in cybersecurity is accepting that there is no invulnerable system, no perfect defense that will prevent any breach. A zero-day vulnerability can surface in any business, in any system, at any time. Once you accept the possibility of unknown vulnerabilities, recognize that attacks are always possible, you can form a pragmatic strategy to minimize risks while at the same time planning how to react quickly and recover from a breach.
How do you handle zero-day vulnerabilities?
When software vendors and cybersecurity researchers discover a zero-day vulnerability, they act quickly to design and implement a security patch. Companies that might be affected by the potential security flaw must be made aware of it as quickly as possible, must implement the security patch as soon as it’s available, and must be vigilant against the possibility of a security breach during the window of vulnerability—even after the patch has been applied.
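Operationally, "implement the security patch as soon as it's available" means knowing which installed versions still predate the fix. A sketch with invented version numbers, using a simple dotted-version comparison:

```python
def is_vulnerable(installed: str, first_fixed: str) -> bool:
    """Compare dotted version strings numerically; anything older than the
    first fixed release is still exposed to the flaw."""
    def parse(v: str):
        return tuple(int(part) for part in v.split("."))
    return parse(installed) < parse(first_fixed)

print(is_vulnerable("4.6.9", "4.6.10"))   # older than the fix → True
print(is_vulnerable("4.6.10", "4.6.10"))  # already patched → False
```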
Zero-day vulnerability vs. zero-day attack
A zero-day vulnerability is a potential threat, a gap in security that exists only until it can be repaired. But until a patch has been developed, tested, and released, there is a critical period of time during which the vulnerability can be exploited and attacked. For that interval, attackers have a brief advantage—malware is often easier and quicker to design.
A zero-day exploit is the worst-case scenario, where malicious code is developed and deployed to take advantage of the vulnerability before a security response is available.
A zero-day attack occurs when bad actors use a known exploit to target a vulnerable system to damage its operation or steal privileged information.
What is an example of a zero-day attack?
One famous example of a zero-day attack occurred during the early days of the COVID-19 pandemic, when vast numbers of students and office workers abruptly transitioned to remote learning and working from home, and everyday use of videoconferencing software multiplied practically overnight. One of the most popular videoconferencing platforms, Zoom, had over 500 million downloads in 2020 alone.
In April 2020, a zero-day vulnerability was discovered in Zoom that made it possible for attackers to gain remote access to users’ computers under certain conditions. The weakness was soon patched, but not before widespread negative publicity led many businesses and schools to temporarily restrict or prohibit the use of Zoom software.
How many zero-day attacks have happened?
The number of zero-day exploits has exploded in recent years. A record 83 zero-day exploits were reported in 2021, more than double the number reported in 2020. Security researchers attribute the rise in zero-day events to the continued growth of software offerings, cloud hosting services, and Internet-connected devices—but also to the increasing attention and sophistication of security software and services, discovering attacks that might previously have gone undetected.
HPE and zero-day vulnerabilities
HPE can help your organization achieve a cyber-resilient workplace from the data center to remote employees’ home offices and everywhere in between with HPE Pointnext security risk management services.
Our security experts help customers minimize zero-day vulnerabilities through secure-by-design and zero-trust principles and accelerate time to recovery with tested business continuity and disaster recovery strategies. HPE GreenLake data protection services offer both backup as a service and disaster recovery as a service.
HPE GreenLake for AI, ML, and analytics helps organizations in virtually every industry leverage artificial intelligence (AI) and machine learning (ML) to extract business value and actionable insights from the incredible volume of data generated across their enterprise network.
HPE GreenLake for business applications offers a suite of services to enhance the performance, efficiency, and security of enterprise information systems from edge to cloud, with the ability to scale up as network infrastructure expands and operations evolve to seize new opportunities.
HPE GreenLake for Big Data helps businesses make effective use of the massive volume of information they generate every day, turning it into practical insights and valuable business intelligence. By properly leveraging Big Data, companies can speed up time-to-market and reduce capital expenditures. | <urn:uuid:99a24b69-b9fa-4f6e-9972-22189f0296e9> | CC-MAIN-2024-38 | https://www.hpe.com/au/en/what-is/zero-day-vulnerability.html | 2024-09-20T10:26:23Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652246.93/warc/CC-MAIN-20240920090502-20240920120502-00622.warc.gz | en | 0.952397 | 908 | 3.59375 | 4 |
As most people know by now, the Linux operating system has been developed under the philosophy of Open Source software originally pioneered by the Free Software Foundation as “free software”. Nevertheless, many people don’t truly appreciate just what Open Source really is. In this blog post, I’ll offer my perceptions.
Quite simply, Open Source is based on the notion that software should be freely available: to use, to modify, to copy. The idea has been around for some twenty years in the technical culture that built the Internet and the World Wide Web and in recent years has spread to the commercial world.
There are a number of misconceptions about the nature of Open Source software. Perhaps the best way to explain what it is, is to start by talking about what it isn’t.
- Open Source is not shareware. A precondition for the use of shareware is that you pay the copyright holder a fee. Open source code is freely available and there is no obligation to pay for it.
- Open Source is not Public Domain. Public domain code, by definition, is not copyrighted. Open Source code is copyrighted by its author who has released it under the terms of an Open Source software license. The copyright owner thus gives you the right to use the code provided you adhere to the terms of the license.
- Open Source is not necessarily free of charge. Having said that there’s no obligation to pay for Open Source software doesn’t preclude you from charging a fee to package and distribute it. A number of companies are in the specific business of selling packaged “distributions” of Linux.
Why would you pay someone for something you can get for free? Presumably because everything is in one place and you can get some support from the vendor. Of course the quality of support greatly depends on the vendor.
So “free” refers to freedom to use the code and not necessarily zero cost. As someone said a number of years ago, “Think ‘free speech’, not ‘free beer’”.
Open Source code is:
- Subject to the terms of an Open Source license, in many cases the GNU Public License (see below).
- Subject to critical peer review. As an Open Source programmer, your code is out there for everyone to see and the Open Source community tends to be a very critical group. Open Source code is subject to extensive testing and peer review. It’s a Darwinian process in which only the best code survives. “Best” of course is a subjective term. It may be the best technical solution but it may also be completely unreadable.
- Highly subversive. The Open Source movement subverts the dominant paradigm, which says that intellectual property such as software must be jealously guarded so you can make a lot of money off of it. In contrast, the Open Source philosophy is that software should be freely available to everyone for the maximum benefit of society. Richard Stallman, founder of the Free Software Foundation, is particularly vocal in advocating that software should not have owners (see Appendix C).
In the early years of the Open Source movement, Microsoft and other proprietary software vendors saw it as a serious threat to their business model. Microsoft representatives went so far as to characterize Open Source as “un-American”. A Microsoft executive publicly stated in 2001 that “open source is an intellectual property destroyer. I can’t imagine something that could be worse than this for the software business and the intellectual-property business.”
In recent years however, leading software vendors, including Microsoft, have embraced the Open Source movement. Many even give their programmers and engineers company time to contribute to the Open Source community. And it’s not just charity, it’s good business!
So what is an Open Source license? Most End User License Agreements (EULA) for software are specifically designed to restrict what you are allowed to do with the software covered by the license. Typical restrictions prevent you from making copies or otherwise redistributing it. You are often admonished not to attempt to “reverse-engineer” the software.
By contrast, an Open Source license is intended to guarantee your rights to use, modify and copy the subject software as much as you’d like. Along with the rights comes an obligation. If you modify and subsequently distribute software covered by an Open Source license, you are obligated to make available the modified source code under the same terms. The changes become a “derivative work” which is also subject to the terms of the license. This allows other users to understand the software better and to make further changes if they wish.
Arguably the best-known, and most widely used, Open Source license is the GNU General Public License (GPL) first released by the Free Software Foundation (FSF) in 1989. The Linux kernel is licensed under the GPL. But the GPL has a problem that makes it unworkable in many commercial situations. Software that does nothing more than link to a library released under the GPL is considered a derivative work and is therefore subject to the terms of the GPL and must be made available in source code form. Software vendors who wish to maintain their applications as proprietary have a problem with that.
To get around this, and thus promote the development of Open Source libraries, the Free Software Foundation came up with the “Library GPL”. The distinction is that a program linked to a library covered by the LGPL is not considered a derivative work and so there’s no requirement to distribute the source, although you must still make available the source to the library itself.
Subsequently, the LGPL became known as the “Lesser GPL” because it offers less freedom to the user. So while the LGPL makes it possible to develop proprietary products using Open Source software, the FSF encourages developers to place their libraries under the GPL in the interest of maximizing openness.
At the other end of the scale is the Berkeley Software Distribution (BSD) license, which predates the GPL by some 12 years. It “suggests”, but does not require, that source code modifications be returned to the developer community and it specifically allows derived products to use other licenses, including proprietary ones.
Other licenses—and there are quite a few—fall somewhere between these two poles. The Mozilla Public License (MPL) for example, developed in 1998 when Netscape made its browser open-source, contains more requirements for derivative works than the BSD license, but fewer than the GPL or LGPL. The Eclipse Public License (EPL) specifically allows “plug-ins” to remain proprietary, but still requires that modifications to Eclipse itself be Open Source. The Open Source Initiative (OSI), a non-profit group that certifies licenses meeting its definition of Open Source, currently lists 79 certified licenses on its website.
You may be tempted to think that the GPL is just an academic exercise. Nobody takes it seriously, right? Wrong! There are people, the “GPL police” if you will, some of whom have way too much time on their hands, and they take the GPL very seriously. They will “out” anyone who doesn’t play by the rules, and there are examples of vendors who have been taken to court as a result.
Bottom line: if you’re concerned about keeping your code proprietary, be very careful about where your models come from. Don’t blindly copy large chunks of code that is identified as GPL. Use the code as a model and write your own. If your product is going to incorporate Open Source code, you may want to consult an attorney who specializes in intellectual property law related to Open Source.
Well, this has been a brief personal tour through the world of Open Source software. Not surprisingly, there are a lot of other resources out there on the web. Just google “open source software”.
This article on Open Source software was written by Doug Abbott.
Is your inbox suddenly overflowing with all kinds of subscription emails you never signed up for, making it extremely difficult for you to find messages from important senders? If so, it's likely that you've become a victim of an email bombing attack.
Understanding Email Bombing Attacks
Cybercriminals are constantly improving their techniques to stay one step ahead of cybersecurity professionals. Ever since email emerged as a dominant communication channel, malicious attackers have been exploiting it for various nefarious purposes.
The best-known email threat today is phishing, a social engineering attack that takes advantage of the impersonal nature of email communication to trick human victims into revealing sensitive information or acting against their own best interest using fraudulent messages. According to the 2020 Verizon Data Breach Investigations Report, phishing is responsible for nearly one-fourth of data breaches. Phishing is so popular, in fact, that it somewhat overshadows other dangerous email attacks, including email bombing.
The idea behind email bombing is simple: flood the victim's inbox with a deluge of messages to make the inbox unusable. Sometimes, email bombing is performed as an act of vengeance. In such cases, the attacker's ultimate goal is to harass the victim, making their life more difficult. In other cases, however, email bombing is used as a distraction from a more serious cybercrime, such as account hacking and subsequent fraudulent purchases.
Attackers know that any online purchase they make using stolen account credentials automatically triggers an order confirmation message. If the victim notices the confirmation message soon enough and realizes that something isn't right, they may be able to cancel the purchase and stop further abuse of the stolen credentials by changing their password.
To prevent that from happening, attackers write simple scripts that take email addresses as input, then automatically sign those addresses up for legitimate newsletters and all kinds of other subscription emails. This method of executing a distributed denial-of-service (DDoS) attack is far more efficient than direct spamming because it's less likely to trigger spam filters. Attackers can even purchase email bombing as a service on the dark web, with prices as low as tens of dollars for thousands of messages.
Email bombing attacks wouldn't be so easy to pull off if it wasn't for the fact that many websites don't verify new subscriptions in any way, such as by requiring new subscribers to click a confirmation link sent in an introductory email message. Those that do sometimes send multiple confirmation reminders, which are also useful ammunition for an email bombing attack.
Responding to an Email Bombing Attack
It's only natural to start deleting unwanted emails to restore your inbox to its former state. However, manually deleting subscription emails one by one is like fighting the wind—new emails will just keep arriving.
Instead, it's much better to partially close the gate to your inbox by creating custom email rules to filter incoming messages based on keywords like "subscription" or "confirmation" or "sign-up." Since email filters can't really tell a legitimate message from an illegitimate one, you should configure them to only archive matching emails rather than deleting them right away.
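The keyword-filtering idea above can be sketched in a few lines. This is a hypothetical illustration, not a real mail client's rule syntax: the keyword list, message shape, and function names are all assumptions, and matching messages are archived rather than deleted, as the article advises.

```python
# Keywords commonly found in subscription/sign-up emails (illustrative list).
SUSPECT_KEYWORDS = ("subscription", "confirmation", "sign-up", "welcome")


def should_archive(subject: str) -> bool:
    """Return True if the subject contains a common subscription keyword."""
    lowered = subject.lower()
    return any(keyword in lowered for keyword in SUSPECT_KEYWORDS)


def triage(messages):
    """Split messages into (archived, kept) lists -- archive, don't delete,
    since a filter can't reliably tell legitimate mail from bombing traffic."""
    archived = [m for m in messages if should_archive(m["subject"])]
    kept = [m for m in messages if not should_archive(m["subject"])]
    return archived, kept
```

In practice you would wire the same predicate into your mail provider's rule engine or an IMAP script, pointing matches at an archive folder you can review later.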
When the initial avalanche of emails is brought under control, the next step is to check unread emails for purchase and withdrawal confirmations and other signs of suspicious activity. Even if you don't find any indication that one or more of your online accounts have been compromised, you should still manually check the recent activity on all websites where your payment card information is stored.
We also recommend you change your passwords, making sure that each new password is sufficiently strong and completely unique. While you're at it, you should also enable two-factor authentication on all accounts that support it to add an extra layer of protection in addition to your password.
If you discover that email bombing has been used to hide fraudulent purchases and other illegal activity, don't hesitate to contact your financial institutions so they can help you protect your accounts. Of course, make sure to also inform local law enforcement.
To prevent future email bombing attacks, it's best to work with an experienced provider of IT security services to implement additional email security policies and controls. These may include everything from purposefully slowing down email transmissions (a technique called email tarpitting) to implementing a machine learning-based spam filter.
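To illustrate the tarpitting concept mentioned above: the server deliberately slows delivery as inbound volume rises, which barely affects normal senders but throttles a flood. The function below is a minimal sketch under assumed parameters (the threshold, step, and cap are made-up values, not any real mail server's defaults).

```python
def tarpit_delay(messages_this_minute: int,
                 free_quota: int = 10,
                 step: float = 0.5,
                 cap: float = 10.0) -> float:
    """Return a per-message delay in seconds that escalates with volume.

    Senders under the free quota see no delay; above it, each extra
    message per minute adds `step` seconds, up to `cap`.
    """
    excess = max(0, messages_this_minute - free_quota)
    return min(step * excess, cap)
```

A real deployment would apply this delay before accepting each SMTP transaction from a given source, resetting the counter every minute.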
Email bombing is a productivity-decimating email attack whose true purpose sometimes becomes apparent only when it's already too late to act. That's why all organizations that rely on email should proactively prepare for it, and we at BCA can strengthen your email defenses and educate your employees to ensure that email will always remain a useful business tool for you. Contact us for more information. | <urn:uuid:16a00b09-8d54-435c-afef-8c68bebc3c81> | CC-MAIN-2024-38 | https://www.bcainc.com/2021/12/09/have-you-been-a-victim-of-email-bombing/ | 2024-09-08T06:03:31Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650960.90/warc/CC-MAIN-20240908052321-20240908082321-00822.warc.gz | en | 0.949132 | 938 | 2.609375 | 3 |
Climate change conspiracies are spreading rapidly during UN's COP26 event
Amplified by bots and influencers, millions of posts on social media networks peddle false ideas about climate change.
Conspiracy theories that promote climate-change skepticism and denial spread rapidly across the internet ahead of the United Nations' ongoing COP26 Climate Change summit in Glasgow, Scotland.
Amplified by bots and influencers, a large volume of climate change denial content spread on social media starting in June, according to researchers at Blackbird.AI. The technology firm's platform uses machine-learning algorithms to scan millions of posts across mainstream social networks — including Twitter, Telegram, fringe sites and others — and, aided by human analysts, identified four major climate denial trends targeting U.S. and European climate-change policy. | <urn:uuid:5dfc6f98-78d4-4b3b-bd14-11187ab07294> | CC-MAIN-2024-38 | https://news.danpatterson.com/p/climate-change-conspiracies-are-spreading | 2024-09-15T16:35:51Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651632.84/warc/CC-MAIN-20240915152239-20240915182239-00222.warc.gz | en | 0.887093 | 162 | 3.015625 | 3 |