How a digital chain of custody improves data protection and AI outcomes
If you’ve ever watched crime shows like Law & Order or Bones, you’ve probably heard about the chain of custody. It’s a forensic technique that documents a piece of evidence from the time it is obtained to the time it is disposed of. It provides a clear record of who had access to it, where and when it was transported, and any changes in status.
Why is the chain of custody important?
Any break or discrepancy in the chain of custody puts the integrity of the evidence in jeopardy. Its authenticity can then be questioned in court. That’s why law enforcement agencies don’t let just anyone remove a piece of evidence from a crime scene or trust that it would be returned tamper-free to a designated storage area.
What does this have to do with your business data?
It’s all about ensuring data integrity. Nothing is more important in today’s data-driven world. If you can’t trust your digital data, you can’t trust your decisions. And you’re more likely to make poor ones.
The harsh reality is 77% of IT leaders don’t trust their organization’s data for timely and accurate decision-making. And two-thirds of all senior business executives admit to having some reservations or actively mistrusting their data and analytics.
They aren’t wrong to feel this way. With so many people having access to an organization’s critical data systems these days – and cybercriminals using sophisticated methods to infiltrate and alter or extract data – it’s getting harder and harder to ensure data authenticity and security. But doing so is more important than ever in our data-driven world.
One increasingly popular and effective way to accomplish this is with a digital chain of custody for data.
Digital Chain of Custody Definition
A digital chain of custody is the process of tracking and documenting every interaction with data within an organization. This includes who accessed the data, when it was accessed, where it was accessed from, and any changes made. Much like the physical chain of custody used in legal contexts, the digital chain of custody ensures data handling is transparent and traceable, safeguarding it from tampering and misuse.
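To make this concrete, a single chain-of-custody entry can be modeled as an immutable record of those four dimensions. The sketch below is illustrative only; the field names and values are hypothetical, not any product’s schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class CustodyRecord:
    """One entry in a digital chain of custody: who, when, where, what."""
    actor: str      # who accessed the data
    timestamp: str  # when it was accessed (UTC, ISO 8601)
    origin: str     # where it was accessed from
    action: str     # what change or access occurred

def record_access(actor: str, origin: str, action: str) -> CustodyRecord:
    # Every interaction with the data produces an immutable, timestamped record.
    return CustodyRecord(
        actor=actor,
        timestamp=datetime.now(timezone.utc).isoformat(),
        origin=origin,
        action=action,
    )

entry = record_access("jdoe", "10.0.4.17", "update")
```

Appending one such record per interaction yields the traceable history the definition describes.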
The Role of Digital Chain of Custody in Data Protection
Implementing a digital chain of custody transforms data protection from reactive to proactive. Here are 5 ways this benefits organizations:
Detailed Audit Trails
A digital chain of custody creates a clear and detailed audit trail for every piece of data over time. These audit trails are essential for monitoring historical data usage, detecting any unauthorized access or modifications, and enabling immediate remedial action.
Strict Access Controls
Implementing a digital chain of custody typically involves strict access controls to ensure that only authorized personnel can interact with sensitive data. These controls are enforced through authentication and authorization mechanisms that restrict data access based on user roles and permissions.
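As a rough illustration of role-based authorization, a minimal check might look like this (the role names and permissions here are hypothetical):

```python
# Map roles to the data operations they may perform (hypothetical roles).
ROLE_PERMISSIONS = {
    "analyst": {"read"},
    "data_steward": {"read", "update"},
    "admin": {"read", "update", "delete"},
}

def is_authorized(role: str, action: str) -> bool:
    """Allow an action only if the user's role explicitly grants it."""
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_authorized("data_steward", "update")
assert not is_authorized("analyst", "delete")
assert not is_authorized("guest", "read")  # unknown roles get nothing
```

Denying by default (unknown roles receive an empty permission set) keeps the policy fail-safe.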
Incident Response and Forensics
In the event of a data breach or other security incident, a well-maintained digital chain of custody is invaluable for forensic analysis. Forensic investigators can quickly trace back through the data’s history to understand the scope of the breach, identify how the breach occurred, and which data was affected. This accelerates the incident response and recovery process while minimizing the impact on your organization. Using digital forensic tools also helps in the prosecution of perpetrators by providing irrefutable digital evidence of the crime.
Compliance and Regulation Adherence
Many industries are subject to regulations that require tracking data for compliance purposes. A digital chain of custody helps organizations comply with laws such as GDPR, HIPAA, or Sarbanes-Oxley by providing the necessary documentation to prove that data has been handled properly throughout its lifecycle.
Long-Term Data Retention
For industries that require long-term data retention (like legal, medical, or research fields), a digital chain of custody ensures that data remains accessible, traceable, and unchanged over time. This is essential for archival purposes where data authenticity and integrity must be maintained for many years.
Chain of Custody Impact on Decision-Making, Analytics and AI
The potential for analytics and AI to transform business is undeniable. 96% of business leaders agree AI and machine learning (ML) can help companies significantly improve decision-making. That’s why 83.9% planned to increase their investments in data, analytics, and AI.
But potential can’t be realized with questionable data. In fact, using data you’re not confident about could do more harm than good.
ML and AI models are only as good as the data they are trained on. They often require vast amounts of data to learn from. If this data is corrupted or altered maliciously, it can lead to biased or incorrect model outcomes.
A digital chain of custody helps ensure that all data used in training machine learning models has been accurately logged and remains unaltered from its original state. This preserves the integrity of the training process and the reliability of the models.
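One simple way to confirm that training data remains unaltered from its original state is to record a cryptographic digest when the data is first logged and recompute it before each training run. A minimal sketch:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 digest of a dataset; any change yields a different digest."""
    return hashlib.sha256(data).hexdigest()

original = b"label,feature\n1,0.52\n0,0.13\n"
recorded_digest = fingerprint(original)  # stored when the data is first logged

# Later, before training: recompute and compare.
assert fingerprint(original) == recorded_digest       # unaltered -> digests match
tampered = original.replace(b"1,0.52", b"0,0.52")
assert fingerprint(tampered) != recorded_digest       # altered -> mismatch detected
```

A flipped label or altered feature changes the digest, so corrupted training data is caught before it reaches the model.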
By helping secure the infrastructure itself against potential threats, it also reduces the risk of errors or manipulations that could lead to faulty decision-making and catastrophic actions and outcomes.
Secure your data integrity, streamline compliance, and boost your strategic decisions!
Read our e-book to learn how.
5 Best Practices for Maintaining a Digital Chain of Custody
Maintain ownership of your data
Ownership is essential for maintaining a digital chain of custody. It means you continually possess, and are fully responsible for protecting, your company’s data. You can’t do this if the data is ever outside your possession.
GRAX’s Bring Your Own Cloud (BYOC) data protection model gives you control of your data because it never touches a system that you don’t own. With GRAX, all your historical Salesforce data is replicated directly into your own AWS or GCP cloud. It’s never stored in GRAX’s infrastructure.
Implement comprehensive access controls
Use secure access protocols and robust authentication mechanisms to ensure only authorized personnel can access appropriate data.
Conduct regular audits and reviews
Adhere to a schedule for auditing and reviewing access logs and modifications. This will help ensure adherence to data handling policies.
Encrypt data at rest and in transit
Protect data at rest and in transit to maintain its confidentiality and integrity. Consider technologies such as cryptographic hashing for securing each transaction in the data’s lifecycle. This is particularly important for critical data that impacts business decisions, legal matters, or customer trust.
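Cryptographic hashing can chain lifecycle events together so that altering any past entry invalidates everything after it. A simplified sketch of the idea (the event strings are hypothetical):

```python
import hashlib

def chain_entry(prev_hash: str, event: str) -> str:
    """Hash each lifecycle event together with the previous hash."""
    return hashlib.sha256((prev_hash + event).encode()).hexdigest()

events = ["created by jdoe", "read by asmith", "archived"]
chain = ["0" * 64]  # genesis value
for e in events:
    chain.append(chain_entry(chain[-1], e))

def verify(events: list, chain: list) -> bool:
    """Recompute the chain; any altered entry breaks every later hash."""
    h = chain[0]
    for e, expected in zip(events, chain[1:]):
        h = chain_entry(h, e)
        if h != expected:
            return False
    return True

assert verify(events, chain)
events[1] = "read by attacker"    # tamper with recorded history
assert not verify(events, chain)  # the tampering is immediately evident
```

This is the same hash-chaining principle used in append-only audit logs and ledgers: each link commits to all the history before it.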
Monitor data interactions in real time
Deploy tools that provide real-time monitoring and alerts for unauthorized data interactions so you can quickly respond to potential breaches.
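Conceptually, such monitoring evaluates every data interaction against policy as it happens and raises an alert on violations. A toy sketch (the event fields and user names are hypothetical):

```python
AUTHORIZED_USERS = {"jdoe", "asmith"}

def check_event(event: dict) -> list:
    """Return alert messages for one data-access event; empty list means OK."""
    alerts = []
    if event["user"] not in AUTHORIZED_USERS:
        alerts.append(f"unauthorized access by {event['user']}")
    if event["action"] == "delete" and not event.get("approved"):
        alerts.append("unapproved delete attempt")
    return alerts

assert check_event({"user": "jdoe", "action": "read"}) == []
assert check_event({"user": "mallory", "action": "read"}) == ["unauthorized access by mallory"]
```

A real system would feed these alerts to an on-call channel or SIEM rather than returning them from a function.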
Maintaining a digital chain of custody for data is critical for business success. Investing in robust practices and technologies now can prepare you for today’s data-driven reality, tomorrow’s challenges, and a whole new world of opportunities.
Are you breaking your data’s chain of custody?
It’s time to fix your data integrity issues.
When a program is running under the debugger, by default the run all threads (RA) mode is turned on. In this mode, you step through only one thread at a time, but the background threads run normally. If a background thread reaches a breakpoint, it returns control to the debugger and becomes the current thread. The last debugging mode you select is saved into your .ADB file, so the default mode applies only when you do not have a .ADB file.
You can choose to execute one thread at a time in the debugger. This allows you to trace a thread without interference from other threads. When a new thread starts, the debugger informs you, but continues tracing the parent thread. Use the ST (Switch Threads) command to switch between threads.
A list of all current threads appears at the bottom of the Run menu. The list shows both the name of the program associated with the thread and the address where each thread is executing. To switch between threads, you can select a thread from the list as an alternative to the ST command.
The debugger can manage up to ten threads simultaneously.
Sustainability has been a buzzword and rightfully so owing to the accelerated global climate crisis. Corporations worldwide acknowledge the urgent need to act on climate change and have even pledged to set climate targets.
According to Gartner, Sustainability is defined as “An objective that guides decision making by incorporating economic, social & environmental impacts.”
Sustainability is pervading all facets of our lives, and it has undoubtedly pervaded the digital technology industry as well. No doubt, Digital transformation is disruptive and innovative, but is it truly sustainable? Well, I would say there are two sides to the coin here.
On the one hand, we can contend that software technologies are intelligent solutions built to support the environment. For example, Microsoft created the AI for Earth initiative to assist environmental organizations. On the other hand, it is also estimated that our digital technology usage is currently responsible for 4% of global CO2 emissions. With more and more digital use, that number is only going to increase. So, where does the solution lie? How can Cloud help with Sustainability?
The solution lies in utilizing Technology to deliver sustainable solutions and to incorporate environmentally friendly IT practices: making ethical choices about design and Technology that reduce the internet’s broader ecological impact. My perspective: Cloud computing truly is the silver lining here, or should we say the ‘green lining.’ Harnessing cloud computing to achieve sustainability goals requires a paradigm shift in the organization’s internal structure.
In this article, I intend to highlight cloud computing efficacy in driving sustainability goals.
4 Primary ways Cloud helps with Sustainability
Cloud helps with Sustainability in 4 primary ways.
1. Sustainable Platform for Infrastructure, Application, and Data
Microsoft and WSP USA conducted a study which found that the Microsoft cloud was up to 93% more energy efficient, and had up to 98% lower carbon emissions, than on-premises data centers. This comprehensive study highlights the efficiency of cloud infrastructure in maintaining Sustainability and its numerous environmental benefits. A report published by Accenture Strategy also stated that “migrations to the public cloud can reduce CO2 emissions by 59 million tons per year which equates to taking 22 million cars off the road”. Such statistical data isn’t just hyperbole but a reality.
Let me take you through the different ways the cloud infrastructure poses a sustainable option:
1a. Green Datacenters
Typically, on-premises data centers consume an unreasonable amount of energy as you require a constant power supply, a cooling system to avoid overheating, etc. Additionally, server underutilization also results in energy waste as servers are left unused to build up more e-waste.
Furthermore, when a piece of equipment reaches the end of its lifecycle, it too adds to the e-waste. For example, according to the US Department of Energy, US data centers account for 2%-3% of the country’s overall electricity consumption and carbon emissions. Hence, migrating to the Cloud significantly reduces these staggering energy costs and waste, thanks to better utilization of servers, power scaling, and fewer hardware requirements than on-premises data centers.
Additionally, Public cloud providers can deploy ‘green data centers’ by utilizing other energy sources. For example, Microsoft’s data centers use renewable energies like wind, solar, and hydroelectricity.
1b. Dematerialization
Migrating to the Cloud replaces high carbon-emitting machines with virtual equivalents, significantly reducing a company’s carbon footprint. For example, the Cloud enables seamless virtual services like video streaming rather than heavy hardware equipment that consumes more energy. Eliminating significant physical hardware from day-to-day operations achieves dematerialization and reduces cost, waste, effort, and environmental impact.
2. Rapid Innovation for Sustainability centric solutions
Corporations have been leveraging technologies to create more sustainable businesses while minimizing their environmental impact. Innovations built on cloud computing infrastructure have powered sustainability goals. For example, virtual meeting applications have been among the most notable cloud-powered innovations so far. Companies can now hold periodic employee meetings online, saving significant costs, energy, and time. This innovation proved especially valuable during the COVID-19 pandemic, when it ensured seamless business continuity. Additionally, Machine Learning on cloud infrastructure is being utilized in shopping malls, airports, commercial buildings, etc., to reduce their overall energy consumption.
In research conducted by Microsoft, it was noted that the Cloud can provide scalable technological solutions, such as smart grids and intelligent buildings, to ICT sectors. Major enterprises are harnessing the power of the Cloud to find sustainable solutions as well. Take, for example, the case of AGL, one of Australia’s leading energy companies, which utilized Microsoft’s Azure cloud platform to manage solar batteries remotely. The company efficiently derived a sustainable solution with the help of cloud computing infrastructure. Another use case of cloud innovation is Ecolab, which Microsoft helped address the world’s water conservation challenges. Additionally, Microsoft launched Azure FarmBeats, a precision farming system that enables farmers to practice data-driven agriculture. The system is robust enough to create a farm map from satellite imagery, and it helps farmers monitor parameters such as nitrogen and oxygen levels, soil moisture, and temperature.
These use cases certainly highlight that the cloud infrastructure isn’t just inherently a sustainable solution but also an infrastructure that powers rapid Innovation for sustainability-centric solutions.
3. Software as a Service (SaaS) Solutions
SaaS has transformed the way we work, communicate, and share data. With Sustainability, the reporting requirements have become crucial and complex. As such, it has become essential to have secure, accessible, and accurate data. SaaS platform essentially provides a cloud application solution that drives business operations by managing and automating key activities.
4. Innovation and Investment from Hyperscalers
Hyperscalers can invest vast amounts of money in Innovation for energy-efficient Datacenters and Technology due to an increase in cloud consumption and the number of cloud users. For example, Microsoft is investing in building datacenters based on new leading-edge designs (e.g., Microsoft has created a data center underwater) to improve the average PUE (Power Usage Effectiveness). Such investments in green infrastructures will significantly reduce the per-user footprint when cloud business applications are being used.
Sustainability benefits of Cloud from different CXO’s perspective
Sustainability initiatives are indeed being harnessed across all levels of the hierarchy, from CEOs to CFOs and CIOs. There has also been significant pressure from customers and stakeholders to take a genuine stand on Sustainability.
Hence, by embracing the power of the Cloud, CXOs can efficiently harness growth and Innovation. Let me tell you how:
1. CEO Perspective
According to a report published by Accenture Strategy, 21% of CEOs and CXOs acknowledged the importance of embedding sustainability goals into their corporate strategy. However, less than half were able to integrate them into their business operations. The report also found that CEOs believe real business opportunities (like faster Innovation and growth) can come from embracing the UN Sustainability Goals. As customers, employees, and other stakeholders call for a new version of responsible leadership, Sustainability is most certainly redefining leaders’ roles today.
Incorporating sustainable goals doesn’t just ensure a competitive advantage but also allows companies to fight against climate change proactively.
Indeed, the uncertainties brought by the pandemic have halted and distracted CEOs’ sustainability efforts. However, the accelerated migration to the sustainable Cloud has also invariably solved this very problem. Leaders can incorporate this new version of responsible leadership with the Cloud in these ways:
1a. Establish a sustainable ecosystem
Leaders of technology-driven businesses or other businesses can drive sustainable actions by incorporating Cloud computing technologies. Achieving sustainability goals doesn’t happen in a vacuum but requires bringing together different technologists, employees, and stakeholders to realize the importance of leveraging sustainable options into operations.
1b. Aligning Sustainability with profit mindset
The goal is not to make profits from Sustainability but rather to make profits sustainably. CEOs must align Sustainability with profits, business operations, investments, Innovation, and growth. By embracing the power of the sustainable Cloud, they can quickly alleviate the pressures to implement Global Goals and concentrate on strategizing for business success.
2. CFO Perspective
CFOs have traditionally viewed non-financial metrics like Sustainability as a cost rather than a source of value. This can be attributed to the language barrier between CFOs and their Sustainability colleagues, as a Harvard Business Review study rightly points out. CFOs’ understanding and expertise lie in ROI and EBIT, whereas Sustainability officers’ metrics involve mitigating carbon emissions, water consumption, and the like. Hence, CFOs often fail to understand the true value of investing in sustainability goals. However, the same Harvard Business Review research noted that “Non-financial metrics such as carbon emissions can reveal hundreds of millions of dollars in sustainability-related savings and growth.”
a.) Sustainability investments have tangible and intangible benefits, and CFOs must realize this now more than ever.
b.) To maximize the value from Sustainability initiatives, a model like the ROSI (Return on Sustainability Investment) analytical model is a great starting point.
c.) Another important model is CISL’s “Net Zero Framework for Business,” designed for companies tasked with delivering net zero in a business context and influencing this ambition’s societal transition. By drawing on a range of leading frameworks and CISL’s insights, it provides a ‘one-stop-shop’ for the essential tasks that need to be set in place to align with net zero.
Let us now understand how the Cloud can make a business case for CFOs:
2a. Paradigm shift from CAPEX to OpEx for IT Infrastructures and Operations
This allows CFOs to use financial engineering to drive operations in the core business stream instead of worrying about a huge upfront cash outlay.
2b. Cost Reduction on IT Systems, Operations
In the Smart 2020 report, it was estimated that technology-enabled energy efficiency would result in a total of $947 billion in cost savings. Thus, migrating to the Cloud ensures significant cost savings for CFOs, which can then be used for other revenue-generating projects. Cloud computing does not just reduce hardware expenditure but also reduces overall capital and operational costs. This is huge, as CFOs can channel these savings into better Innovation, scalability, and growth. The Cloud also allows CXOs to shift their outlook and think ‘green,’ contributing to something larger than just their companies. It poses an opportunity to participate in the fight against climate change without agonizing over mitigation costs or risk analysis, by choosing an economical platform like the Cloud that ensures an overall reduction in total costs and carbon emissions.
2c. Cost savings from the reduction in carbon footprint offset expenses
By migrating to the Cloud, CFOs can quickly mitigate and avoid carbon footprint expenses (such as emission taxes and penalties for non-compliance) that might otherwise be incurred later on.
2d. Faster Value creation with Business Agility
Cloud enables CFOs to move on from immediate financial imperatives and engage in better value creation. By incorporating sustainability goals into their business operations through the Cloud, CFOs can build a better Environmental, Social, and Governance (ESG) profile. This can help build stronger relationships with customers, shareholders, and broader stakeholders. It’s a common misconception that engaging in such corporate social responsibility initiatives adds no monetary value. In fact, it enhances the company’s reputation and goodwill while ensuring the company does not contribute to the climate crisis. Hence, CFOs should look beyond immediate short-term financial investments to long-term sustainable investments to better serve both their financial and non-financial performance.
3. CIO and CTO Perspective
In addition to standard Cloud Economics and business agility benefits of Cloud adoption, the Sustainability benefits of Cloud help CIO and CTO with:
- A sustainable platform for Innovation
- Cloud tooling helps you optimize your application through image-size reduction, caching, and data optimization. This reduces the amount of data transferred and improves transfer speed, which cuts your cloud spend and energy use at a granular level.
- Cloud can help to control energy consumption levels by reducing unnecessary dependencies that consume extra storage or resources.
- Focus on applying Technology to solve core business challenges instead of managing secondary aspects of Energy Management, Power Supply, etc.
- Reduce Technology Debt along with freeing up current Datacenter footprint (reduction in carbon footprint for Energy, Travel, and other operational aspects)
- IT as an asset for Sustainability goals
4. Workforce perspective
70% of employees now want to work at a company that has strong environmental goals. This is reflected within IT sectors, where employees are urging their organizations to take greater responsibility and action towards Sustainability. Additionally, the sustainable benefits of cloud computing reach all levels of the hierarchy, including the workforce. With CEOs committed to maintaining corporate social responsibility (CSR), employees can make a collective and collaborative effort to ensure a minimal carbon footprint. Using the new sustainable productivity tools and IT systems, employees can easily monitor and reduce their energy consumption.
Leading cloud providers like Microsoft Azure are pledging to be carbon negative by 2030 and to match 100% of their global annual energy consumption with renewable energy credits (RECs). This highlights how serious cloud providers are about Sustainability and how much effort they are willing to put in to uphold their environmental credentials. Hence, it is time for CXOs to consider the sustainability benefits of the Cloud now more than ever, as Sustainability is no longer just a perspective but a business imperative. CXOs need to collaborate to align their operational goals with sustainability goals and build a corporate purpose to tackle climate change. Sustainability benefits can be truly harnessed only if cross-divisional teams understand the urgency.
In my opinion, Cloud computing can definitely support the company’s sustainable efforts by saving billions of dollars in energy costs and reducing carbon emissions by millions of metric tons.
Network-traveling worms have been a cyber security issue since 1988, when the infamous Morris worm first struck the internet, caused great disruption, and drew significant mainstream media attention.
Since then, the internet has evolved immensely, and network worms have evolved along with it. Some worms evolved into email attachment worms, spreading via spam mail. These massive outbreaks have been mitigated in recent years by spam filters.
Today’s network-travelling variety of worms, such as Conficker and Zeus, are able to hide better than their email relatives and thus are much harder to detect.
But what is a network worm actually? A worm is a computer program that has the ability to copy itself from one machine to another in various ways. These worms often carry out payloads to cause damage and can badly harm computer networks once they gain access.
Some of the most devastating and notorious worms include the 2010 Stuxnet worm, which hit the uranium enrichment facility in Iran and caused great damage to the enrichment program, and the Shamoon worm, which was introduced into the Saudi Aramco network by an employee using a privileged credential. Shamoon ended up erasing 30,000 computers in the internal network.
Both these examples stress how harmful network worms can be once unleashed in a network and highlight how these can be weaponized. What would happen if a network worm as powerful as Stuxnet was unleashed on a nation’s electric grid?
Now the question remains: how do these worms spread so widely in a network, and what enables them to gain access to virtually any machine? This is the one common denominator of all network-traveling worms: they all share the same basic methods of dispersion. They exploit security holes in software, and they spread via network shares and file transfers. Mainly, they use weak passwords and stolen user credentials to gain access to other network locations and spread forth.
The most prominent example is the previously mentioned Conficker worm, which had a “password dictionary” consisting of thousands of commonly used passwords. The worm would propagate by trying every password and login in an effort to access and spread across a network. According to Microsoft, 92 percent of Conficker infections are due to weak or stolen passwords. Just think: if a well-known threat such as Conficker can get into a network using weak or stolen passwords, an advanced persistent threat group can do the same using these vectors.
In order to protect an organization from such an attack, establish a password management policy that involves sophisticated and random passwords. Once a password is unpredictable and uncommon, a worm such as Conficker would not be able to access your assets and an advanced persistent threat attack could be mitigated.
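For example, a password drawn from a cryptographically secure random source is effectively guaranteed not to appear in a worm’s dictionary of common passwords. A minimal sketch in Python (the wordlist is a tiny illustrative stand-in):

```python
import secrets
import string

# A tiny stand-in for a Conficker-style dictionary of common passwords.
COMMON_PASSWORDS = {"password", "admin", "123456", "letmein", "qwerty"}

def generate_password(length: int = 16) -> str:
    """Build an unpredictable password from a cryptographically secure RNG."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = generate_password()
assert len(pw) == 16
assert pw not in COMMON_PASSWORDS  # an unpredictable password defeats the wordlist
```

Using the `secrets` module rather than `random` matters here: it is designed for security-sensitive randomness, so the output cannot be predicted or enumerated the way a dictionary can.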
Your policy should leverage technology, such as CyberArk’s Privileged Account Security Solution to establish frequent, automated password changes, as well as monitoring and threat detection, as part of an overall password and privileged account security strategy.
Every device shares the same network in a typical home or small business. This means that each device can freely see and communicate with one another. However, not all devices are the same– for example, you may want to isolate IoT devices to reduce the risk of security breaches or apply an extra level of protection to guest devices. With network segmentation, you can split your devices among different networks to meet your performance and protection needs.
This article will cover the basics of network segmentation and how you can set it up.
- What Is Network Segmentation?
- Network Segmentation Use Cases
- Tutorial: Port-Based Segmentation (Gold only)
- Tutorial: VLAN-Based Segmentation (Gold & Purple)
- SSDP Relay and mDNS Relay: Using Devices Across Segments
- What Is The Difference Between Device Groups And Network Segmentation?
- Helpful Links
What Is Network Segmentation?
Network segmentation divides your network into partitions that can be used to give you better security and network performance. For example, you can split your main local network into 3 subnetworks: Network A, Network B, and Network C.
Separating some devices from the rest of your network ensures that they aren't covertly capturing information and only have access to the data and devices they need to function. Additionally, if a device on a subnetwork is compromised, your other network segments will remain safe.
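In practice, each segment gets its own IP subnet, and a device in one subnet is not part of any other. Python’s ipaddress module can illustrate this (the addresses are examples, not Firewalla defaults):

```python
import ipaddress

# Three non-overlapping subnets, one per segment (example addresses).
net_a = ipaddress.ip_network("192.168.1.0/24")  # Network A
net_b = ipaddress.ip_network("192.168.2.0/24")  # Network B
net_c = ipaddress.ip_network("192.168.3.0/24")  # Network C

iot_camera = ipaddress.ip_address("192.168.3.42")

# The camera belongs to Network C and to no other segment.
assert iot_camera in net_c
assert iot_camera not in net_a and iot_camera not in net_b
assert not net_a.overlaps(net_b) and not net_b.overlaps(net_c)
```

Because the subnets don’t overlap, compromising a device on Network C gives an attacker no address on Networks A or B; any cross-segment traffic has to be routed, where it can be filtered.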
Network Segmentation Use Cases
After your network is segmented, you can now apply rules and policies to each of your subnetworks. Subnetworks can fully see and talk to each other by default, so you may find it useful to restrict what parts of the local network they have access to by setting Block rules for traffic on other local networks.
Kids' or Employees' Network
At home, you can create a network segment for kids with parental control rules and features. Depending on the situation, you can configure it to be able to access other networks or restrict it from accessing other devices and resources.
If you use Firewalla in your office network, you can create a network to manage employees' network access. You can apply rules and features based on company policies. You can also monitor the network segment as a whole, including alarms and settings.
VPN Network for Working from Home
Firewalla's built-in VPN client makes it convenient to work remotely through a VPN. In this case, you can create a network with a VPN connection configured and only include devices that you need to use for work. This way your work communication is always protected and will not interfere with your other devices' activity.
Guest Network
You can also use network segmentation to create a secure guest network. You can apply features or rules just to your guest network segment, such as porn block or Family Protect. You can also block guest devices from talking to any local networks while allowing devices from local networks to talk to devices inside the guest network.
With New Device Quarantine turned on, all new devices joining the network will be automatically placed into a QuarantineGroup, and an alarm will be generated. You can turn this feature on for specific networks to help build a super-secure guest network segment for home and work.
IoT Network
For devices that only need access to specific services, such as some IoT devices, you can isolate their traffic from the rest of the network. This reduces your risk exposure in case your IoT devices get compromised. Once you set up an IoT network, you can restrict access by setting rules to:
- Block Traffic from & to the Internet.
- Block Traffic from & to all local networks.
- Allow access to ports required by specific services (IP addresses and ports).
- Allow access only from selected networks.
Control Segment Traffic
After segments are created, you can also:
- Use the Smart Queue feature to prioritize traffic on certain segments.
- Use the route feature to specify how traffic moves over each segment.
Learn more about what you can do in our article on Creating a Better Network.
Port-Based Segmentation (Gold only)
One way to create a network segment is through port-based segmentation, which involves physically connecting a device to the Ethernet ports on your Firewalla.
For the purpose of these examples, let's assume that you already have Firewalla configured with a single LAN that includes ports 1-3, with a network IP of 192.168.0.1 and a subnet mask of 255.255.255.0.
Example 1: A Single Ethernet Device
Say you have a security camera or baby monitor that you want to separate from the rest of your network. This camera connects via Ethernet.
- Connect the Camera to a port on Firewalla. Let's say Port 1.
- Go to the Firewalla Box main page > Network Manager > Create Network.
- Give the network a name.
- Leave the type as LAN.
- Select Port 1.
- Set the IP range to be different from the primary network. If you don't know what to pick, use Surprise Me.
- If you are asked if you want to remove Port 1 from the existing LAN, tap Confirm.
- Now go to your box's main page and tap Devices. Find your camera and check that the IP address is in the range you set for your new network segment.
- In the same device screen, choose Rules. Make a rule that BLOCKS Traffic from & to All Local Networks. Now, devices on your new network segment will have full access to the Internet but will be unable to see (or be seen) by other devices on the rest of your network.
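Step 8 above (checking that the camera's address falls within the new range) can also be scripted if you manage many devices. This is a generic sketch using Python's standard ipaddress module; the addresses shown are hypothetical examples, not values Firewalla assigns.

```python
import ipaddress

# Hypothetical values: the IP range chosen for the new segment and
# the camera's address as shown on the Devices page.
segment = ipaddress.ip_network("192.168.30.0/24")
camera_ip = ipaddress.ip_address("192.168.30.15")

if camera_ip in segment:
    print(f"{camera_ip} is inside {segment}: the camera joined the new segment")
else:
    print(f"{camera_ip} is NOT in {segment}: renew the camera's DHCP lease")
```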
Note: on the Purple series, you can use Wi-Fi to create a LAN separate from the Ethernet LAN if needed.
Example 2: A Group of Ethernet Devices
Now let's say you have not one camera but a dozen. You still want to place them on a separate network segment, but you don't have enough ports on Firewalla Gold. No problem.
You can get any switch (unmanaged or managed) and connect it to Port 1, then plug in all your cameras to that switch. Follow the same steps to set up a network segment as if you were just configuring a segment for a single camera. Now, all your cameras will be able to see and talk to each other but not have access to your trusted main LAN.
Example 3: Wi-Fi Devices
Now say instead of cameras with Ethernet connections, you have a set of Wi-Fi-based smart smoke alarms that you'd like to keep on a separate network as a best practice. Instead of plugging in a switch as in Example 2, use a separate Wi-Fi access point (AP) just for the smoke alarms to isolate them from the rest of your network. Then, follow the steps from Example 1 to set up a network segment. Connect a different AP for your main LAN's Wi-Fi.
You can repeat this process for each of the 3 ports on your Gold. This means you could have:
- One network for trusted computers like your personal laptop and phone.
- One network for all your IoT devices over Wi-Fi (or, if your AP has available ports, you can connect your IoT devices with Ethernet).
- One network for security cameras.
VLAN-Based Segmentation (Gold or Purple only)
Port-based segmentation is limited by the number of physical Ethernet ports on your Gold. VLANs (Virtual Local Area Networks) are another approach that lets you segment beyond the number of physical ports. VLANs take a bit more configuration up front, and the additional hardware may be slightly more expensive. When looking for compatible equipment, look for the most common VLAN standard, 802.1Q. Any switch or Wi-Fi AP that is 802.1Q compatible will work with Firewalla Gold or Purple.
VLANs are the only option for network segmentation on Purple since it only has one LAN port. We will use Purple in the next few examples, but everything that follows works for Gold as well. Note that Gold does not have a limit on VLANs, but Purple is limited to 5.
Example 4: Ethernet Devices
Let's say we are using a Purple to create three separate networks: one for your home, one for a camera, and another for your kids' Wi-Fi devices. To do that, we can connect Purple's LAN port to Port 1 on a managed switch.
A managed switch lets us create several VLANs (Virtual Local Area Networks).
- Go to the Firewalla Box Main page > Network Manager > Create Network > Local Network.
- Give the network a name (for example, "Cameras" or "Kids").
- Set Type to VLAN.
- Set a VLAN ID.
- Choose the LAN port.
- You can use Surprise Me for the IP settings, but by convention, the third octet of the IP range usually matches the VLAN ID. For example, a network's IP range would typically be 192.168.66.x if the VLAN ID is 66.
- Repeat these steps once more to create a second VLAN, for a total of 3 local networks (the original LAN plus two VLANs).
- You will now see your original LAN and your new VLANs. Note that the port icons for all 3 networks are blue to indicate they share the LAN port. The LAN port on Purple is now a "trunk" port because it carries traffic for three LANs on the same port. You'll also notice that your main LAN has no VLAN ID. Any device connected to Firewalla that isn't tagged with a VLAN ID will be on the main LAN network.
- Now follow your managed switch's instructions to create the VLANs on the switch. See the section below on setting up VLANs with a switch for a specific example.
- Set the port connected to Firewalla as a trunk port (also known as a tagged port).
- Set port 2 on the switch to VLAN ID 66 and connect your camera to that port.
- Set port 3 on your switch to VLAN ID 77, and connect your kids' devices.
- You can now set any rules you'd like for each of your new VLANs. To do this, navigate to your box's main page > Devices > Networks > The VLAN you'd like to manage > Rules.
Now, all the traffic for your networks will flow from your Purple's LAN port to the switch, where it'll then be directed to the appropriate switch port.
Setting up VLANs with a Switch
If you're using a managed switch for your VLANs, you will need to:
- Create the VLANs on your switch (after creating them in the Firewalla app).
- Map your VLANs to the ports on your switch. In the screenshot below, VLAN 1 includes ports 1, 3, 4, 5, 6, 7, and 8, and VLAN 10 includes just ports 1 and 2. Port 1 is a member of multiple VLANs, meaning it is a "trunk" port.
- Choose which ports are tagged for each VLAN. Tagged ports will respond to VLAN tags, whereas untagged ports will not. VLAN tags specify over which VLAN a piece of traffic should be routed. Typically, trunk ports are tagged ports because they manage multiple VLANs. In our case, on VLAN 1, port 1 is tagged, while ports 3, 4, 5, 6, 7, and 8 are untagged.
- Specify the PVID for each port. Traffic that does not have a VLAN tag will default to the PVID. In this example, ports 3-8 will default to VLAN 1, while ports 1-2 will default to VLAN 10.
See this Netgear article for more detail. Other switches will work similarly.
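For background, the "tag" that tagged ports act on is a 4-byte 802.1Q header inserted into each Ethernet frame. The sketch below (illustrative only, not part of any Firewalla tooling) shows where the VLAN ID lives inside a frame:

```python
import struct

def parse_vlan_tag(frame: bytes):
    """Return (vlan_id, priority) if the Ethernet frame carries an
    802.1Q tag, or None for untagged frames."""
    # Bytes 12-13 hold the EtherType; 0x8100 marks an 802.1Q tag.
    (ethertype,) = struct.unpack("!H", frame[12:14])
    if ethertype != 0x8100:
        return None
    # The next two bytes are the Tag Control Information:
    # 3 bits priority, 1 bit DEI, 12 bits VLAN ID.
    (tci,) = struct.unpack("!H", frame[14:16])
    return tci & 0x0FFF, tci >> 13

# A made-up tagged frame: two zeroed MAC addresses, then 0x8100 and a TCI for VLAN 66.
frame = bytes(12) + struct.pack("!HH", 0x8100, 66)
print(parse_vlan_tag(frame))  # (66, 0)
```

An untagged port forwards frames without this header, which is why ordinary end devices never need VLAN support of their own.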
Example 5: Wi-Fi Devices
Now let's say we have a bunch of Wi-Fi cameras that we want to put on a VLAN separated from the rest of our network. Instead of having a separate AP for the cameras, we can get a WVLAN (wireless VLAN) AP which can broadcast multiple SSIDs, one for each VLAN.
- Follow steps 1 and 2 from Example 4 to create your VLANs in Firewalla.
- Connect Firewalla to an AP with WVLAN support.
- Follow the instructions for your AP to set up the VLANs. Read the section below on setting up VLANs with an AP to learn more.
- Once the VLANs are defined, assign each SSID to a particular VLAN.
- Have devices join the correct SSID to assign them to the correct VLAN.
Setting up VLANs with an AP
If you're using an AP for your VLANs, you will need to configure it so that your VLANs map to their different SSIDs. Here's an example of how you might do this using a TP-Link EAP225.
After creating a VLAN on your Firewalla (see steps 1 and 2 from Example 4 above), you'll need to log into your TP-Link AP using the IP address assigned to your AP by your Firewalla. Once you've logged in, use the TP-Link web interface to map your VLANs to the appropriate SSIDs.
In the image below, the Cameras network is mapped to VLAN 66, and the Kids' network is mapped to VLAN 77.
SSDP Relay and mDNS Relay: Using Devices Across Segments
mDNS Relay and SSDP Relay are different protocols that allow some devices (such as Sonos speakers or Roku) to discover each other across networks.
For example, if you have a smart speaker on LAN 1 and you want it to be discoverable by your phone on a different network, you can enable SSDP and mDNS Relay on LAN 1. In some cases, the app (on your phone) initiates the connection rather than the device (the smart speaker). To make sure the phone can communicate with the device on a different LAN, you can also enable SSDP and mDNS Relay on the network the phone is connected to.
To enable one or both of them, tap on a LAN, tap Edit, and then toggle mDNS Relay and/or SSDP Relay on.
- If SSDP Relay is enabled on one network, SSDP broadcast queries sent from the network will be relayed to all the other networks.
- SSDP is a discovery protocol; once devices find each other, they can communicate without an SSDP Relay. To make sure devices in different networks stop talking to each other, we recommend you reboot the device or reconnect it to your network after turning off SSDP Relay.
- SSDP Relay is only supported in Router Mode on all local networks.
- SSDP Relay is not supported on VPN networks (OpenVPN and WireGuard).
Note that mDNS Relay was previously called mDNS Reflector and was located on the Configurations page. mDNS Relay is the exact same feature as mDNS Reflector; it's just been moved.
What Is The Difference Between Device Groups And Network Segmentation?
While they may seem similar at a glance, Firewalla’s Device Group feature is fundamentally different from network segmentation. Device groups simply allow you to apply rules to a custom set of devices. These rules can only be applied to incoming and/or outgoing traffic, which means that groups can't isolate LAN traffic as network segmentation can. Additionally, network segmentation gives you the option to limit subnetwork members from communicating outside of their own physical port or VLAN.
Members in a device group can belong to different network segments. Using groups and network segments together works nicely to control traffic.
IBM announced earlier today that it has patented a technique that helps online and cloud-based businesses detect and eliminate fraud. Through analysing the browsing behaviour of customers, the technology can verify whether the customer is actually who they say they are. How does it work, you might be wondering?
When customers access their bank account online, for example, they “subconsciously establish characteristics of how they interact with the website” – like clicking certain areas more than others; using their arrow keys instead of the mouse; tapping and swiping their smart devices in a certain way, etc. The technology will collect the behaviour of the user and will know when drastic changes occur. “Similar to how individuals recognize changes in the behavior of a family member or friend on the phone — even when the audio is fuzzy — by the words they use, how they answer the phone, their mannerisms, etc., IBM’s invention helps businesses analyze and identify sudden changes in online behavior,” the company said in its press release. What happens next is that if the technology recognizes a sudden change in behaviour, it then triggers a second authentication layer, such as a security question.
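IBM's press release does not disclose the actual model, but the idea of flagging "sudden changes" in interaction patterns can be illustrated with a toy example. Here, a single hypothetical feature (average seconds between clicks) is compared against the user's own history with a simple z-score; a real system would combine many features and a far richer model:

```python
from statistics import mean, stdev

def is_anomalous(history, current, threshold=3.0):
    """Flag a session whose feature value deviates strongly from the
    user's own history (simple z-score; purely illustrative)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > threshold

# Hypothetical feature: average seconds between clicks for one user.
past_sessions = [1.8, 2.1, 1.9, 2.0, 2.2, 1.7, 2.0]
print(is_anomalous(past_sessions, 2.1))  # False: within normal range
print(is_anomalous(past_sessions, 9.5))  # True: trigger a second authentication layer
```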
Describing the technology, Keith Walker, IBM Master Inventor and co-inventor on the patent, said,
“Our experience developing and testing a prototype, which flawlessly confirmed identities, shows that such a change would more likely be due to fraud, and we all want these sites to provide more protection while simultaneously processing our transactions quickly.”
Given that $3.5 trillion is lost each year to fraud and financial crimes, and that business-to-consumer ecommerce is expected to grow from $1.251 trillion in 2012 to $2.357 trillion by 2017 (a compound annual growth rate of 17.4 percent), technology like IBM's will become increasingly important for retailers and customers alike. The technology would be valuable for companies across the Internet. Rather than adding extra layers to enhance online security (which can often lead to lower conversion rates), the ideal scenario for companies is to have seamless technologies that combine ease of use with high online security.
IS-IS enforces basic security through packet authentication by using special TLVs. ISO 10589 specifies TLV Type 10, which can be present in all IS-IS packet types. RFC 1195 also specifies TLV Type 133 for authentication, which removes password length restrictions imposed by ISO 10589. Both specifications define only simple passwords transmitted as clear text without encryption.
Simple, clear-text password authentication obviously does not provide enough protection against malicious attacks on the network, even though it can help isolate operator configuration errors related to adjacency setups. TLV Types 10 and 133 both provide accommodation for future TLV field types, which might permit more complex and secured authentication using schemes such as HMAC-MD5. An IETF draft proposal specifies this approach for improved and sophisticated authentication of IS-IS packets.
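For reference, the clear-text Authentication Information TLV (Type 10) has a simple layout: a one-octet TLV type, a one-octet length, then a one-octet authentication type (1 = clear-text password, per ISO 10589) followed by the password itself. A minimal sketch in Python:

```python
import struct

def build_auth_tlv(password: bytes) -> bytes:
    """Build a clear-text Authentication Information TLV (Type 10)."""
    value = bytes([1]) + password          # auth type 1 = clear-text password
    return struct.pack("!BB", 10, len(value)) + value

def parse_auth_tlv(tlv: bytes):
    """Split a Type 10 TLV into (authentication type, authentication value)."""
    tlv_type, length = struct.unpack("!BB", tlv[:2])
    assert tlv_type == 10, "not an Authentication TLV"
    return tlv[2], tlv[3:2 + length]

tlv = build_auth_tlv(b"s3cret")
print(parse_auth_tlv(tlv))  # (1, b's3cret')
```

Because the password travels in the clear inside the TLV value, anyone with link access can read it, which is exactly the weakness the HMAC-MD5 proposal addresses.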
Only the simple passwords specified in ISO 10589 are supported in available (at the time of writing) Cisco IOS releases.
A unique security advantage of IS-IS compared to other IP routing protocols is that IS-IS packets are directly encapsulated over the data link and are not carried in IP packets or even CLNP packets. Therefore, to maliciously disrupt the IS-IS routing environment, an attacker has to be physically attached to a router in the IS-IS network, a challenging and inconvenient task for most network hackers. Other IP routing protocols, such as RIP, OSPF, and BGP, are susceptible to attacks from remote IP networks through the Internet because routing protocol packets are ultimately embedded in IP packets, which makes them susceptible to remote access by intrusive applications. | <urn:uuid:6af1c099-d3f3-44e2-a007-bc457a4b55df> | CC-MAIN-2024-38 | https://www.ciscopress.com/articles/article.asp?p=26850&seqNum=7 | 2024-09-16T06:47:15Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651676.3/warc/CC-MAIN-20240916044225-20240916074225-00621.warc.gz | en | 0.908673 | 321 | 2.734375 | 3 |
The recent ransomware attacks on critical infrastructure services, such as water and energy, show the clear consequences of the convergence of information technology (IT) and operations technology (OT) systems. Although the attacks targeted IT infrastructure, the largest oil utility in the U.S. decided to shut down its OT systems, cutting off fuel distribution to the U.S. East Coast. This shutdown resulted in shortages of gasoline and diesel and long queues at the pumps, and 17 states declared a state of emergency. It could not have demonstrated more clearly how interwoven the IT environment is with the control of production plants today. By isolating both worlds, a Zero Trust approach provides the required security for these environments while still ensuring secure connectivity.
Traditionally, companies sought to control their production plants and machines separately from IT infrastructures, but now there are increasing calls for integration. This is due to several factors: firstly digitization; the duality of two separate systems can only be maintained to a limited extent in the age of the cloud, secondly, nationwide 5G networks that provide the necessary throughput performance and speed, and lastly, a drive toward sustainable design of production and sales channels.
However, a convergence also creates new potential danger, as the ransomware attack on the pipeline operator demonstrated. By shutting down the IT systems, important functions in the entire operating process were lost, a clear indication of the close working of separate systems. Regardless of whether the two worlds of IT and OT are kept in the same network or not, they influence each other.
For security reasons, the convergence of the two system worlds should not primarily be about ending the isolation of two separate environments. Rather, the focus should be on the secure connection to each other, based on the same security control mechanisms for the data streams.
Any system connected to the internet today represents a potential attack vector for malware actors. The OT environment can learn a lot from the modern cloud-based control mechanisms of IT security. With the principle of least privilege access rights, zero trust also provides an adequate security concept for OT.
The reduction of the attack surface
Every OT environment requires connectivity. This enables administrative access to plants and access to data from production environments for analysis and processing by IT systems. The connection of most OT systems to IT works via a gateway. This gateway functionality is provided, for example, by a firewall that translates between the two network worlds. However, any gateway exposed to the internet can be a potential attack surface through which the systems can be infiltrated with malware, and once an attacker succeeds in gaining access to an IT system, lateral movement within the entire infrastructure can also potentially endanger the OT environment.
The latest study by cloud security specialist Zscaler showed that the attack surface of companies is significant. From more than 1,500 data records, they identified a plethora of potential attacker gateways that many companies were unaware of. The report also uncovered more than 202,000 vulnerabilities and threats (CVEs), 49 percent of which were rated “critical” or “high.” Among the companies surveyed, there were 400,000 servers that were openly discoverable over the internet, with 47 percent of the protocols used not up to date and therefore potentially vulnerable.
Hardware devices that regulate data traffic are necessary for administrators of the two environments to either participate in the data exchange via the physical network or gain access to the ecosystem via remote access. Traditionally, complex constructs in networks have connected production facilities, as different factories and locations worldwide have been connected with VPN mechanisms. Complexity is therefore inevitable, but now companies are trying to turn their backs on legacy infrastructures as part of strategic digitization initiatives. In the search for a way out of the complexity dilemma–that can also be associated with protection gaps due to hardware components used–companies are seeking new methods of secure connectivity.
Zero Trust security for OT environments
The more IT and OT systems open up to mutual data exchange, the more potential risks can be introduced. For this reason, convergence must be less about merging networks and more about regulating connectivity and security when data streams are exchanged between the two worlds. To control individual access authorizations, the principle of least privilege applies. Zero trust solutions build on this principle and allow only policy-based access for authorized users. The starting point is that nothing and nobody can connect by default; each access request must be validated, and rights are granted step by step once the user is authorized. Access rights to OT environments are limited to what each person needs for a specific use case, at the level of the individual application, resulting in isolated access.
With a Zero Trust security model, granular access authorizations at the application level can be guaranteed not only in IT infrastructures, but also in OT environments. From a security point of view, the decisive factor here is that the opening of the entire network to the required access authorizations can be restricted. Logical microsegmentation through tunneled traffic from the administrator or user to the application replaces in a certain way the opening of the network. Thus, with the help of a Zero Trust model, the attack surface of a company on the internet can be minimized.
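The default-deny, per-application logic described above can be reduced to a small sketch. All names here are hypothetical, and commercial zero trust platforms evaluate far richer context (identity, device posture, location) than this illustration:

```python
# Hypothetical policy table: each entry grants one user access to one
# application, never to a network or subnet as a whole.
POLICIES = [
    {"user": "maintenance-tech", "app": "plc-line-3-hmi"},
    {"user": "analyst",          "app": "historian-readonly"},
]

def authorize(user: str, app: str) -> bool:
    """Default deny: a request is allowed only if an explicit
    user-to-application policy exists."""
    return any(p["user"] == user and p["app"] == app for p in POLICIES)

print(authorize("maintenance-tech", "plc-line-3-hmi"))      # True
print(authorize("maintenance-tech", "historian-readonly"))  # False: not granted
```

Because no rule ever references a whole network, a compromised account yields access to one application at most, not to the segment behind it.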
For higher security: connectivity based on Zero Trust
The concept of ensuring access rights at the application level through zero trust in a cloud environment not only overcomes the proliferation of complexity, but also helps to reduce attack surfaces. To review such an approach, Siemens conducted its own tests in typical production environments to take the specific requirements of discrete manufacturing and the process industry into account. On the one hand, these present a simple and secure access solution for machine maintenance, in which an employee can be granted defined access rights for a specific production environment. On the other, there is the possibility of segmentation within the entire ecosystem and the secure connection of different machines within the test environment can be established.
To this end, Siemens combines its Scalance Local Processing Engine (LPE) for industrial communication with a Zero Trust solution which provides policy-based access permissions remotely or to the corporate network. The combination of this Zero Trust approach and the local processing platform “Scalance LPE” of Siemens with a powerful CPU offers an approach to connect the IT and OT worlds. The Scalance LPE can be placed directly in the production environment and collect data close to the process. A wide variety of applications at the edge and in the cloud can be operated on the open Linux operating system. In combination with Zero Trust, the Scalance LPE creates a complementary access solution suitable for industrial environments.
Nathan Howe has over 20 years of experience in IT security.
He brings his knowledge as an IT architect, pen tester, and security consultant to companies to help them meet the challenges of digital change.
Since 2016, he has been working for the cloud security specialist Zscaler.
It’s called the “First Five Consortium,” a reference to an adage that the first five minutes in a crisis are the most critical. Announced Tuesday, the project aims to have the different organizations “share lessons learned” and work to deliver technology.
Potential areas of concentration include digitizing the mapping of natural disasters, like wildfires, through computer vision. The consortium comes as deadly natural disasters increase in severity and fire season takes hold in Western states. The DOE’s AI and Technology Office and Microsoft will co-chair the consortium.
“The collaboration will include both sharing of best practices as well as commonly shared data training and develop code for algorithms,” a spokesperson for the consortium told FedScoop. “Participants have agreed to make datasets, infrastructure and related resources available.”
Meetings to work out details will begin in early September. For now, the shared technologies include the DOD’s Joint AI Center prototypes that map fire lines and floods. DOE’s Pacific Northwest National Laboratory is now working on scaling the technology.
“These are just the kind of consortia that we like to enter into,” DOE’s lead AI official, Cheryl Ingstad said in a press call. “We think we can bring AI to bear here and help save lives.” DOE operates some of the country’s most powerful supercomputers.
The consortium was formed after the White House made a call for such projects in January. The Trump administration had hosted a forum on AI’s role in “Humanitarian Assistance and Disaster Response” and asked industry, federal agencies and nonprofits to contribute resources and work on the issue.
The JAIC was moved to participate since it had already invested in humanitarian missions as its first stepping stones in AI and is now looking to pivot its efforts to integrating AI into warfighting. It wanted “low-consequences” missions to “prove out” its AI work so that unproven algorithms couldn’t potentially skew lethal operations.
“We are delighted to work alongside our partners in government and private industry to advance the role of AI in battling natural disasters,” Nand Mulchandani, acting director of the JAIC, said in a statement. “The JAIC’s journey with developing AI solutions for humanitarian relief operations began more than a year ago, and we’d like to thank the White House for identifying and encouraging the broader use of government-built technology to directly benefit the American people when disasters strike.” | <urn:uuid:f95c9679-33c7-45fd-becf-8e126d33e92d> | CC-MAIN-2024-38 | https://develop.fedscoop.com/ai-consortium-first-reponders-doe-microsoft-jaic/ | 2024-09-17T10:45:30Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651773.64/warc/CC-MAIN-20240917104423-20240917134423-00521.warc.gz | en | 0.954474 | 537 | 2.734375 | 3 |
With many outstanding features, RPA quickly becomes the ultimate choice for many enterprises in the digital transformation journey.
RPA – The overview
What is RPA?
RPA (Robotic Process Automation) is a software technology that simulates human task behavior.
Specifically, RPA operates at the interface layer of browsers, desktop applications, and other software. Programmers build specific processes so that bots can simulate human interaction through the Graphical User Interface (GUI) across different systems.
Software robots allow tasks to be performed quickly, with 100% accuracy, and more stability compared to humans. However, for complex cases or when errors occur, humans can still intervene to handle them.
A necessary RPA system must meet the following three essential criteria:
- Communicate with other systems, either by driving their screens or by integrating with APIs (Application Programming Interfaces)
- Decision-making capabilities
- The ability to be integrated with bot programming interfaces
Some types of tasks that are suitable for RPA deployment include these characteristics:
- Prone to errors
- Large volume
- Frequent, regular
For example, RPA bots can be applied in account reconciliation to extract data from various systems, match transactions, and pinpoint exceptions for human assessment. This results in significant time savings compared to manual reconciliation processes.
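As a concrete illustration of the reconciliation example above, the matching step a bot performs might look like the following toy sketch (transaction IDs and figures are invented, not drawn from any specific RPA product):

```python
# Toy reconciliation: match transactions from two systems by ID and
# amount, and surface mismatches for human review.
ledger_a = {"T1": 100.00, "T2": 250.50, "T3": 75.25}
ledger_b = {"T1": 100.00, "T2": 250.00, "T4": 30.00}

matched, exceptions = [], []
for txn_id in ledger_a.keys() | ledger_b.keys():
    a, b = ledger_a.get(txn_id), ledger_b.get(txn_id)
    if a == b:
        matched.append(txn_id)
    else:
        exceptions.append((txn_id, a, b))  # missing or amount mismatch

print(sorted(matched))     # ['T1']
print(sorted(exceptions))  # T2 differs in amount; T3 and T4 are unmatched
```

The bot clears the clean matches automatically and routes only the exceptions list to a human, which is where the time savings come from.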
Classification of RPA
RPA also has specific types for enterprises to choose according to their requirements and conditions:
- Supervised automation: When automation processes take place, human intervention is still necessary.
- Unsupervised automation: Automation processes operate independently by bots, without human intervention.
- Hybrid RPA: A combination of supervised and unsupervised automation, allowing for flexible process automation.
Each RPA type will have its advantages and disadvantages, so to achieve the highest efficiency, businesses need to thoroughly assess their operations to choose the appropriate RPA types.
Common RPA use cases
RPA can be deployed in many sectors such as finance, accounting, banking, manufacturing/retail, logistics, etc.
- Finance and banking: Tasks like account opening, and KYC… in this field involve working with extensive data and manual processes that consume time. RPA facilitates rapid and accurate execution of these tasks.
- Manufacturing/retail: A vast number of tasks such as processing invoices, and handling orders in the manufacturing/retail sector can be automated by bots.
- Telecommunications: RPA aids in tasks like payment processing, customer query resolution, or incident management, facilitating businesses to meet customer needs.
- Logistics: Logistics businesses will no longer struggle with tasks such as processing and tracking orders, and managing transportation, thanks to the automation of these processes.
Benefits of RPA for enterprises
Remarkably, RPA brings several notable benefits to businesses.
- Cost savings: Implementing RPA allows enterprises to reduce costs related to human resources, error handling, and operational expenses…, thereby optimizing business profits.
- Increased productivity: Applying RPA streamlines business processes, enabling faster and simpler process execution, thus increasing productivity.
- Enhanced accuracy: Bots ensure 100% accuracy and minimize errors during task execution.
- Continuous, uninterrupted operation: Unlike humans, software bots can operate 24/7. Continuous operation eliminates delays between tasks.
- Help employees focus on higher-value work: Automating processes with robots replaces manual tasks performed by humans, streamlining processes, and allowing employees to focus on more important tasks.
Challenges in implementing RPA
However, alongside the impressive benefits that RPA brings, enterprises also face certain challenges when deploying RPA:
- High initial investment costs: Initial investment costs depend on various factors, especially the complexity of the bots.
- Requires software programmers with broad expertise: RPA can be applied in diverse fields. However, in fields where tasks are highly complex, such as accounting, banking, and finance…, RPA programmers need comprehensive knowledge. However, the RPA programmer workforce in Vietnam is still in the early stages, making it difficult to meet complex technical requirements.
- Standardization of input data/information, existing management processes: RPA is applied to rule-based tasks and processes, so organizations need to establish necessary standardized processes for input data and information.
- Requires compatibility with IT systems/internal operating systems: Many enterprises, especially those operating in the banking sector, possess complex IT infrastructures with specific characteristics. To successfully deploy RPA, it is necessary to ensure connectivity and compatibility between RPA and IT systems, and internal operating systems.
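The "rule-based" constraint mentioned above is concrete: if a task can be written as explicit rules over standardized input, it is a candidate for RPA. A toy sketch in Python illustrates the idea, assuming a hypothetical standardized invoice text layout (the field names and format are illustrative, not any vendor's actual format):

```python
import re

# A hypothetical standardized invoice layout: one "Field: value" per line.
INVOICE = """Invoice No: INV-2024-0042
Customer: ACME Corp
Amount: 1250.00 USD"""

def extract_invoice_fields(text: str) -> dict:
    """Rule-based extraction: each rule is a named regex over the text."""
    rules = {
        "number": r"Invoice No:\s*(\S+)",
        "customer": r"Customer:\s*(.+)",
        "amount": r"Amount:\s*([\d.]+)",
    }
    fields = {}
    for name, pattern in rules.items():
        match = re.search(pattern, text)
        if match:  # a real bot would route failures to a human for review
            fields[name] = match.group(1).strip()
    return fields

print(extract_invoice_fields(INVOICE))
```

Anything that cannot be captured by rules like these — ambiguous scans, free-form emails — is exactly where the challenges above, and AI techniques such as OCR and IDP, come into play.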
Steps in implementing RPA
The exact steps depend on the business model and specific conditions, but automating a business process generally involves three stages:
- Step 1 – Thorough Evaluation: Enterprises need to carefully assess the applicability of RPA, its compatibility with IT infrastructure, the implementation roadmap, and which tasks to automate.
- Step 2 – Creating business use cases: Running tests on a process is crucial to evaluate the suitability of RPA.
- Step 3 – Prepare a comprehensive implementation strategy: Implementing on an enterprise-wide scale always entails certain risks. To minimize these risks, enterprises need to develop a comprehensive implementation strategy based on insights gained from the previous two steps.
Considerations when applying RPA
When deciding to apply RPA, businesses should consider the following issues:
- Choose RPA products based on feature comparison and cost: There is a variety of RPA products on the market with different features, costs, and interfaces. It is necessary to conduct thorough research and comparison between products to make the most appropriate choice.
- Consider the scope of business tasks using RPA: Enterprises can apply RPA to many tasks as long as they are process-oriented, but careful consideration is crucial.
- RPA is a tool to support humans, not replace them: There is no need to worry that RPA will replace humans. Instead, it is designed to support people in performing time-consuming manual tasks, allowing them to focus on more important jobs.
Unveiling the future trends of RPA
Deloitte’s Global RPA Survey found that RPA will reach almost universal adoption by the end of 2025.
The market is growing continuously: according to Grand View Research, the global RPA market is expanding at a considerable CAGR of 40.6% and is likely to reach $25.66 billion by 2027. The growing implementation of automation across diverse sectors to optimize business processes and enhance operational effectiveness stands out as a significant driving factor.
What are the trends in RPA: 2024 predictions
In 2024, the RPA market will shift due to several rising trends.
- Generative AI combined with RPA
- Intelligent Automation
- Cloud-based RPA
- Low-code and no-code platforms
- Citizen developers
- RPA Migration
- Human and bot collaboration
Read more: Top RPA Trends And Forecasts For 2024
The role of RPA in digital transformation
Gartner’s research defines digital transformation as the process of leveraging digital technologies and capabilities to transform or create new business models.
Enhancing operational efficiency stands as a key objective for enterprises amid digital transformation, and this is precisely where RPA steps in to revolutionize operations. Offering numerous business advantages, RPA not only transforms the way employees carry out tasks but also aids businesses in embracing digital evolution. By deploying the capabilities of RPA across various departments and functions, businesses can effectively address the operational hurdles that often accompany digital transformation endeavors. Furthermore, RPA offers sophisticated big data analysis functionalities, enabling businesses to gain insights into patterns and workflow performance. From this invaluable information, companies can make informed, data-driven decisions and seamlessly integrate digital strategies.
Without RPA, the digital transformation process is not truly feasible. Undoubtedly, RPA is the catalyst for significant change, opening up and creating conditions for expanding digital transformation strategies.
The difference between RPA and AI
Similar to RPA, AI is also a technology that can leverage digital transformation. However, these technologies have several differences as follows:
| Criterion | RPA | AI |
| --- | --- | --- |
| Definition | Mimics human behavior when performing tasks | Mimics intelligent behavior – thinking and learning to adapt to a specific environment |
| Goal | Supports humans in performing repetitive manual tasks to save time and costs, and lets humans focus on more important tasks | Allows bots to operate intelligently like humans |
| Use cases | Used in business models to help companies reduce personnel costs, time, etc. Examples: order processing, invoice processing, payroll management | Implemented in various areas of life, such as cybersecurity management, autonomous vehicle control, speech-to-text conversion |
| Data type | Structured data | Structured, unstructured, and semi-structured data |
| Decision-making ability | No decision-making ability | Can make decisions and predictions |
The combination of RPA and AI
While there are differences, RPA and AI are complementary technologies. AI enhances RPA’s flexibility and efficiency, while RPA serves as a stepping stone for AI; together, they enable the execution of numerous complex tasks.
Besides, AI provides the logical reasoning behind the actions that RPA carries out. Combining AI and RPA therefore expands the range of processes that can be automated, because AI can handle various types of unstructured data in complex manual processes. RPA collects the data, and AI maps it for interpretation and detailed analysis. AI’s decision-making capability makes automation faster and allows more complex processes to be automated.
akaBot – Your reliable RPA partner
As part of the FPT ecosystem, akaBot is a pioneering automation solution in Vietnam. The solution has been recognized for its capabilities by leading global review platforms such as Gartner Peer Insights, G2, and the Everest PEAK Matrix. To date, akaBot has served more than 3,900 businesses in 21 countries, including major markets such as the EU, the US, and the Middle East.
TPBank (Tien Phong Bank) is one partner that has chosen akaBot’s solution with confidence. It is a prime example of robust RPA adoption in banking, achieving an impressive go-live speed of 5 bots per week and earning The Asian Banker’s award for “Best Process Automation Bank in Vietnam” in 2020.
To date, the bank has deployed nearly 300 virtual robots on the akaBot platform across its transaction processes, saving the equivalent work of hundreds of staff, and has applied AI technologies and virtual assistants to ensure a seamless customer experience. This aims at digitization, intelligent automation, and process optimization.
akaBot – Accompanying your RPA journey
akaBot’s goal is to help businesses both domestically and internationally actualize the automation vision and achieve operational excellence.
- One-stop-shop solution: akaBot offers an end-to-end Intelligent Automation solution, integrating many intelligent technologies, including AI/ML, OCR, IDP, Process Mining, Task Mining, Computer Vision, Business Process Management Systems, and eKYC, to help businesses stay ahead of the competition and meet the evolving needs of customers.
- 24/7 efficient support, recognized by customers and the world’s leading IT solution evaluation organizations (G2, Gartner)
- Quick return on investment (after only 7 months, according to the G2 report)
- Fast implementation: PoC (Proof of Concept) within 1 week for the first process automation
Lastly, to learn more about how to harness the power of automation and unlock multiple benefits, contact us right away.
akaBot (FPT) is the operation optimization solution for enterprises based on the RPA (Robotic Process Automation) platform combined with Artificial Intelligence, Process Mining, OCR, Intelligent Document Processing, Machine Learning, Conversational AI, etc. Serving clients in 21+ countries across eight domains, including Banking & Finance, Retail, IT Services, Manufacturing, and Logistics, akaBot is featured in Gartner Peer Insights’ “Voice of the Customer” for Robotic Process Automation and on G2, and is ranked a Top 6 Global RPA Platform by Software Reviews. akaBot has also won the prestigious Stevie Award, The Asian Banker Award 2021, a place in Everest Group’s RPA Products PEAK Matrix® 2023, etc.
Leave us a message for free consultation!
A DNA-based data storage system can store millions of times more data in the same volume as a conventional system, lasts for thousands of years at room temperatures, and gives you the ability to physically own and easily transport your data.
Current DNA storage technologies are costly and slow. For example, University of Washington and Microsoft researchers described an automated DNA storage proof of concept in a March 2019 Nature Scientific Reports paper. They encoded the word “hello” as DNA, stored it, and then read it back. Their paper says the “system’s write-to-read latency is approximately 21 hours.”
The race is now on to build a commercially viable DNA data storage technology, with Microsoft, Georgia Tech and several startups including Catalog Technologies, Iridia, and Helixworks Technologies throwing their hats into the ring.
In this article we take a closer look at Catalog Technologies, an MIT spinoff, which declares that DNA storage has been prohibitively slow and expensive until now, and claims it is making DNA data storage economically feasible for the first time. Catalog is tiny but thinks it has stolen a march on its competitors. “We’re positioned to make this a reality within the next year or two, rather than in five or six years,” CEO Hyunjun Park told MIT in 2019.
The Boston-based startup was founded in 2016 by Park, a microbiologist, and Nathaniel Roquet, chief technology and innovation officer, who is a biophysicist. The company emerged from stealth in June 2018 and has received $10.5m in funding, according to the 2019 MIT article.
The ABC of DNA
Catalog’s ultimate goal is to offer DNA data storage-as-a-service to customers that need to store petabytes of data in archives.
It is still early days, but the company thinks DNA storage will be less expensive than on-premises tape libraries and cloud storage services, with costs eventually below one three-thousandth of a cent per MB.
Catalog is developing a DNA-based data storage system to hold and process massive amounts of data. As a demo of DNA as a medium for ultra-long term archival of data, Catalog has encoded the entire contents of Wikipedia – about 16GB of compressed data – into DNA.
In June 2019 Park told his alma mater MIT that Catalog was readying a demonstration system.
In a 2019 video Park explains that Catalog treats DNA as an alphabet.
“Catalog’s method is to synthesize batches of a bunch of different kinds of short DNA sequences, which can be thought of as analogous to letters. The original binary data is encoded by stitching together these DNA letters into billions of possible words.”
The short sequences are about 30 to 40 base pairs long; combinations of the As, Gs, Cs, and Ts of the genetic code, and there are around 200 of them.
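Catalog’s actual encoding scheme is proprietary, but the “letters” idea can be illustrated with a toy encoder: give each pre-built short sequence an index from 0 to 199, and write the binary data in base 200, so a “word” is a chain of letters. A sketch, where the alphabet size and the index-based mapping are assumptions for illustration only:

```python
ALPHABET_SIZE = 200  # roughly the number of pre-built short DNA sequences

def encode(data: bytes) -> list[int]:
    """Convert a byte string into a base-200 sequence of 'letter' indices."""
    n = int.from_bytes(data, "big")
    letters = []
    while n:
        n, digit = divmod(n, ALPHABET_SIZE)
        letters.append(digit)
    return letters[::-1] or [0]

def decode(letters: list[int], length: int) -> bytes:
    """Reverse the encoding; `length` restores any leading zero bytes."""
    n = 0
    for digit in letters:
        n = n * ALPHABET_SIZE + digit
    return n.to_bytes(length, "big")

msg = b"hello"
letters = encode(msg)
assert decode(letters, len(msg)) == msg
```

A real system would add error-correcting redundancy and physical addressing on top of a mapping like this, since DNA synthesis and sequencing are both noisy.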
The resulting DNA molecules, held in less than a millilitre of fluid or as dry powder pellets, can remain stable for thousands of years, according to Park, and can be sequenced (read) back whenever information retrieval is needed.
Working with UK-based Cambridge Consultants, Catalog proved the feasibility of its proprietary DNA data encoding method in a test machine, or instrument, which recorded 1 Mbit of data per 24-hour day into DNA.
Catalog compares its machine to a printing press with movable typefaces. These typefaces are pre-built DNA molecules in different combinations, and this obviates the need to synthesize billions of different molecules.
A 1 Mbit/day rate is better than the 5 bytes in 21 hours demonstrated by the University of Washington and Microsoft researchers, but Catalog reckoned it needed to be a million times faster than that: 1 Tbit per day instead of 1 Mbit per day.
In October 2018 it announced it was working again with Cambridge Consultants on building a terabit writing instrument that would encode DNA “for about 1/1,000,000 the cost of what’s been possible before.”
Park said at the time: “The machine we are developing with Cambridge Consultants will bring DNA data storage out of the research lab and into the real world, for the first time in history.”
He thinks this leap in speed will help make it economically attractive to use DNA as the medium for long-term archival of data.
That is a great advance in DNA storage writing, but it is slow in data storage write bandwidth terms. For instance, a 14TB WD Purple disk drive spinning at 7,200rpm reads and writes data at up to 255MB/sec, roughly 176 times faster than even Catalog's planned terabit-per-day writer (1 Tbit/day works out to about 1.45 MB/sec), and millions of times faster than the 1 Mbit/day prototype.
The trade-off is that you write data far more slowly than to a disk drive in exchange for storing it offline for thousands of years. This is analogous to a magnetic tape cartridge but in a much smaller space.
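The arithmetic behind these comparisons is easy to check: convert each writer's daily rate into megabytes per second and divide. Note the 176x figure holds against the planned terabit-per-day writer; the 1 Mbit/day prototype is slower still by about eight orders of magnitude.

```python
SECONDS_PER_DAY = 24 * 60 * 60

def bits_per_day_to_mb_per_sec(bits: float) -> float:
    """Daily write rate in bits -> decimal megabytes per second."""
    return bits / 8 / SECONDS_PER_DAY / 1e6

terabit_writer = bits_per_day_to_mb_per_sec(1e12)  # planned instrument
prototype = bits_per_day_to_mb_per_sec(1e6)        # 1 Mbit/day prototype
disk = 255.0                                       # WD Purple, MB/sec

print(f"terabit writer: {terabit_writer:.2f} MB/sec")
print(f"disk vs terabit writer: {disk / terabit_writer:.0f}x")
print(f"disk vs prototype: {disk / prototype:.2e}x")
```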
DNA dot printer
The terabit writer prints – quasi-inkjet style – DNA ‘letters’ into dots on a sheet of plastic film. A dot corresponds to a block of binary data and contains multiple DNA letters. These are fixed through drying and then washed off the film into a fluid which is turned into pellets for storing.
The pellets are rehydrated and fed through a genome sequencer when the information they hold has to be retrieved.
Reading DNA storage
Writing data into DNA storage is one thing. Reading it is another. Assume you have some store of pellets holding DNA molecules representing encoded digital information. How do you read them?
We have asked Park directly about the reading process but he has not replied. Cambridge Consultants declined to comment.
In the absence of definitive information Blocks & Files speculates this will involve a robotic library with automated movement of desired pellets to a ‘drive’, which rehydrates them and feeds them into a DNA sequencer. It then reads a pellet’s contents and outputs a file of digital data. A tape library is the analogy we have in mind.
We are interested in finding out if this is, as it seems, a destructive read, or if the pellet can be recreated and moved back to its location in the library.
DNA computing’s far out future
According to Catalog, DNA storage is nothing new … we humans have been using it for millions of years.
But in common with the University of Washington and Microsoft researchers, Catalog conceives a future for DNA computing. This is the notion of using data stored in DNA molecules and manipulating it directly, using enzymes for example. This would be performed in a highly parallel way, with potentially millions of simultaneous operations.
It all seems very far away at this stage. Watch the video above for more information on this notion of a way to bring computation to storage.
We will explore Iridia, Helixworks and Microsoft’s efforts with DNA data storage in subsequent articles.
China-US data security and privacy protection recommendations: the value and benefits behind big data are increasing, and data security, privacy protection, and the transnational transmission and utilization of data have become problems that urgently need to be solved. But first, read about the cloud security problems faced by the United States and China.
On the issue of protecting data and personal privacy, China and the United States will face common network security risks, have common strategic interests, and have the same goals.
China needs to conduct in-depth exchanges with its American counterparts on protecting data and personal privacy, both in building rule systems and in developing technology, to strengthen its own capacity in data and privacy protection.
1. Rules and Technical Guarantee Systems
The protection of data and privacy requires complete rules and technical guarantee systems, including the improvement of relevant legislation and management systems, speeding up the development and application of information security technology, and improving the security standards for data and privacy protection.
In addition, the joint participation of data subjects is also required, especially improving citizens’ awareness of personal privacy protection. The state should strengthen publicity and education on data security and privacy protection and improve citizens’ awareness and ability to protect themselves, so that they not only prevent their own privacy from being violated but also respect the rights of others and do not infringe on others’ privacy and data.
2. Environment of Mutual Trust
Second, establish an environment of mutual trust. Mutual trust is the basis of communication and cooperation between the two sides. The mutual trust between China and the United States in the field of cyber security will directly affect the strategic cooperation and core interests of the two countries.
Both parties should strengthen communication and understanding, enhance mutual trust, and pay attention to each other’s cutting-edge research results, network security technology trends, and legislative and management measures. Starting from basic terms such as information, information privacy, and cyber attacks, they should clarify the basis of bilateral rules for data security and privacy protection. On the basis of mutual trust, the two sides can reach a consensus on data security and privacy protection issues and prevent mistrust, suspicion, or accusations caused by cultural, ideological, and other factors.
3. Effective Communication and Dialogue Mechanism
Third, build an effective communication and dialogue mechanism. The Internet has integrated global links into a whole, and cloud computing and big data have turned the borders of networks into suspended threads. Therefore, it is extremely urgent to establish an international and bilateral communication mechanism in the era of big data.
China and the United States should establish effective communication and dialogue mechanisms on data security and privacy protection issues, and communicate and cooperate on issues such as legislation, justice, law enforcement, regulatory enforcement, self-discipline, technical standards, and the cultivation of Internet users’ security awareness.
An effective network security protection mechanism means establishing communication and dialogue channels at both the government and private levels, conducting exchanges and consultations on data security, privacy protection, and other related issues, and encouraging and supporting the establishment of research forums.
4. Rights of Data Subjects Are Equally Protected
Fourth, the rights of data subjects should be equally protected. The cross-border flow of data makes the protection of the data and privacy of companies and citizens of other countries an unavoidable issue. In particular, whether the data of users in different countries can be equally protected has become a problem that multinational companies and governments need to solve.
The principle of equal protection is a criterion to be followed by legislation, justice and law enforcement, and has been recognized and adopted by many countries. While protecting the legitimate rights and interests of citizens and enterprises of the country, the rights and interests of citizens and enterprises of other countries should also be equally protected.
5. Respect Interests and Respond to Network Security
Fifth, respect each other’s interests and jointly respond to network security threats. In communication and cooperation on data security and privacy protection, China and the United States should respect each other’s core and major interests, whether at the official or private level, and whether the cooperation concerns technical or regulatory issues. This is the foundation of communication and a prerequisite for continued and deepened cooperation. Both parties should establish an effective information-sharing and cooperation mechanism, strengthen the supervision and management of violations of data security and personal privacy, and jointly combat such violations. Facing the complicated situation of data security and privacy protection, mutual trust, cooperation, and win-win outcomes should become the path of our big data era.
Quantum computing: What are the data storage challenges?
Quantum computing will process massive amounts of information. Workloads could include diagnostic simulations and analysis at speeds far greater than existing computing can deliver. But, to be fully effective, quantum computing will need to access, analyse and store huge amounts of data.
One area, in particular, will have to deal with significant challenges: data storage. How will today’s storage systems keep pace? In this article, we discuss the differences between quantum storage and classical storage and how quantum computing will change storage technology in the future.
Born To Collaborate
The BrainYard - Where collaborative minds congregate.
Are we genetically predisposed to collaborate? There may be a biological basis as to why some individuals collaborate and multitask far more effectively than others.
In 2003, Harvard researcher Shelley H. Carson and two colleagues published research in the Journal of Personality and Social Psychology on latent inhibition, conducting studies that addressed whether creative individuals benefit from low latent inhibition.
Latent inhibition is "the capacity to screen from conscious awareness stimuli previously experienced as irrelevant" (Carson, Peterson, and Higgins, 2003, p. 499). In other words, latent inhibition helps people filter out random inputs. Low latent inhibition, i.e. a state where an individual has a reduced capacity to filter out extraneous stimuli, has previously been associated in the literature with psychotic states or with psychotic proneness.
But some highly intelligent individuals are more porous, and simply do not filter out all such irrelevant stimuli. In fact, they may accept these extra inputs and the inputs become a part of the creative process.
This means that creative people remain more aware of and alert to extra information that comes streaming in from the surrounding environment. A "normal" person would see an object, classify it, and then forget about it, even though the object may be far more complex than he believes it to be. Someone who is less mentally keen needs to filter out extraneous stimuli in order to avoid suffering from overload and a resulting psychosis.
Carson, in the May-June 2004 issue of Harvard Magazine, explains that "Intelligence allows you to manipulate the additional stimuli in novel ways without being overwhelmed by them."
Reading this article got me thinking: how might low latent inhibition impact knowledge workers?
A highly intelligent knowledge worker with a good memory might actually benefit from low latent inhibition, as it would amplify that person's capacity to think about many things and issues at one time. This predisposes such a person to being open to new information and concepts -- hence, that person could multitask more effectively and discriminate between everything that's coming his way.
If you have ever noticed that some people can easily manage multiple inputs, a conference call, documents, a half dozen instant messaging sessions -- all at the same time -- while most others cannot, you may have been observing someone with low latent inhibition.
We'll be looking at this issue in more depth in the coming months; in the meantime, please share your thoughts and experiences with me by writing to me at [email protected].
What is a malicious URL?
A malicious URL is a link created with the purpose of promoting scams, attacks, and fraud. By clicking on an infected URL, you can download ransomware, a virus, a trojan, or another type of malware that will compromise your machine or even your network, in the case of a company.
A malicious URL can also be used to persuade you to provide sensitive information on a fake website. Notice that it isn’t just links with malware that are propagated on the internet; after all, there are several types of threats.
That’s why experts call “malicious URLs” what many people know as a “virus link”, “infected link” or, simply, “weaponized link”.
The fact is that a short, simple URL can cause a lot of damage.
The potential harm is so great that malicious links are considered one of the biggest threats to the digital world, especially when we talk about attacks and threats that arrive by email.
We will explain this threat with data and arguments below. Check it out!
Table of Contents
Links in spam and phishing campaigns
Phishing is a type of fraud used by criminals who try to deceive victims by impersonating well-known and trusted organizations or people.
It means that you may receive a malicious URL within an email from a friend if his email account has been compromised.
Or if the criminal is trying to deceive you by spoofing your friend’s name and address.
Malicious links may also be hidden in supposedly safe download links and may spread quickly through the sharing of files and messages in sharing networks.
Remember as well that, just like with emails, websites can also be compromised, which can lead users to click on malicious URLs and provide sensitive information directly to fraudsters.
"This is a safe link"
Gatefy’s cybersecurity solution for companies detects, every day, different types of email scams that try to persuade victims using ready-made phrases, such as “This is a safe link” or “This email isn’t spam”.
This is where the danger lies.
We often report cases of scams involving malicious links here on the blog:
- You received invoice from DocuSign;
- Your Amazon account is being suspended;
- I’ve a strong interest in working for your company;
- Potential investment opportunity;
The increase in the number of scams and the use of malicious URLs isn’t detected only by our security solution; several organizations and reports also warn of the incidence of scams and fraud:
- According to the FBI, losses due to internet crimes reached a record USD 3.5 billion in 2019;
- 84% of worldwide email traffic is spam, reports Cisco Talos Intelligence Group;
- The incidence of social engineering and phishing scams has increased, says Europol;
- IBM points out that 14% of malicious breaches involved phishing;
- Microsoft reports an increase in phishing and malware cases involving COVID-19;
- 94% of attacks involving the use of malware occur through the use of e-mails, says Verizon.
How to block malicious URLs
You must have noticed the size of the threat that an email containing a malicious link can bring, right?
Now, to block malicious URLs, there are several engines and methods. In the case of corporate networks, for example, you can deploy a Secure Email Gateway.
In the case of browsers, you can install protection plugins.
The most effective and common protection techniques are based on filters that use URL blacklists, comparing domains and hosts.
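A minimal sketch of the blacklist approach, assuming a hypothetical in-memory blocklist (production filters pull large, frequently updated threat feeds and also check paths, redirects, and reputation scores):

```python
from urllib.parse import urlparse

# Hypothetical blocklist; real filters load thousands of entries from feeds.
BLOCKED_DOMAINS = {"evil-login.example", "free-prizes.test"}

def is_blocked(url: str) -> bool:
    """Match the URL's host against the blocklist, including subdomains."""
    host = (urlparse(url).hostname or "").lower()
    # "mail.free-prizes.test" should match the entry "free-prizes.test"
    parts = host.split(".")
    candidates = {".".join(parts[i:]) for i in range(len(parts))}
    return bool(candidates & BLOCKED_DOMAINS)

print(is_blocked("https://evil-login.example/reset"))  # True
print(is_blocked("http://mail.free-prizes.test/win"))  # True
print(is_blocked("https://example.com/newsletter"))    # False
```

The subdomain expansion matters because attackers routinely host scam pages on throwaway subdomains of a single registered domain.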
Other techniques involve machine learning, URL rewriting, sandboxing, and real-time click detection.
A DMARC-based solution can also prevent hackers from using your domain and your company’s brand to deliver scams using malicious URLs and other threats.
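For reference, a DMARC policy is published as a DNS TXT record on the `_dmarc` subdomain. An illustrative record, where the domain and report address are placeholders:

```text
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com; pct=100"
```

Here `p=quarantine` tells receiving servers to treat mail that fails authentication as suspicious (`p=reject` is stricter), `rua` is where aggregate reports are sent, and `pct` is the share of messages the policy applies to.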
Find out more about this subject by subscribing to the Gatefy newsletter.
We hope this article, containing concepts and data about threats and malicious URLs, has been enlightening. If you’re still in doubt, write to us. Take care!
Fiber optics are the apex of data transmission technologies. Using a flexible, transparent optical fiber constructed from silica, fiber optic cables leverage photons to pass network information at around 2/3 the speed of light. As the fastest and most robust backhaul networking solution, fiber optics have become a mainstay for Mobile Network Operators (MNOs) supporting 5G deployments in the face of the increasing networking needs of consumers and enterprise use cases. In this article, you’ll read about four high-bandwidth applications that would greatly benefit from fiber optics.
TVs and UHD Content
One of the biggest use cases for high-bandwidth fiber optics is facilitating the smooth use of multiple Televisions (TVs) and 4K and 8K content. Video content accounts for massive amounts of Internet traffic: 82% of total Internet traffic worldwide this year.
4K and other Ultra-High Definition (UHD) content requires about 3X the bit rate of HD TV, and around 15X that of Standard Definition (SD). And when multiple devices are used simultaneously, the strain on the home network compounds. These network capacity constraints can lead to delayed load times, video buffering, latency, and network disconnects.
To maintain satisfactory network performance, many TV users must leverage fiber optic cables. The crucial advantage that fiber optics holds is its high bandwidth capacity: fiber typically achieves up to 10 Gigabits per Second (Gbps) and can carry signals up to roughly 100 Kilometers (km) without amplification.
XR and Metaverse Applications
Fiber optics would be a good home networking match for immersive Extended Reality (XR) and metaverse applications, such as social media, video game streaming, and interactive video. These consumer applications put a substantial burden on home networks and have significant bandwidth needs. A TikTok user, for example, may consume more than 840 Megabytes (MB) of data in just 1 hour.
Immersive applications involve data-intensive technologies like Augmented Reality (AR) and Virtual Reality (VR), live 360° video, and volumetric display. This typically requires a network throughput of 25 Megabits per Second (Mbps) or more and sub-50 Milliseconds (ms) latency. Here, fiber optic cables would shine.
Smart Home Applications

Another consumer application that would benefit from fiber optic cables is the smart home, a market that continues its strong growth. Users can manage smart home applications remotely from devices like tablets, laptops, and mobile phones.
While common smart home devices, such as doorbells, household appliances, plugs, and lights, usually don’t consume more than 1 Gigabyte (GB) of network data monthly, other applications are much more bandwidth-straining.
For example, Wi-Fi-connected cameras and voice-controlled front-end devices can connect to a cloud network and stream music, podcasts, or other online content that eats up a lot of data. Depending on device settings and features, a smart camera alone can use anywhere from 18 GB to 400 GB of data every month.
Consumer robotics is another bandwidth-intensive smart home application that consumes a huge amount of data and can introduce network deficiencies. As shown in Chart 1, with forecasts pulled from ABI Research’s Consumer Robotics and Smart Appliances market data (MD-HACRSA-103), these smart home categories are expected to keep growing in popularity.
In 2026, we expect 1.5 billion consumer robotics and smart home devices to ship worldwide, nearly double the 807 million shipments in 2021.
Chart 1: Consumer Robotics and Smart Home Hardware Shipments: 2021 to 2026
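For reference, the implied compound annual growth rate behind the shipment forecast quoted above can be checked with a few lines of Python (a quick sanity check, not ABI Research's methodology):

```python
# Implied compound annual growth rate (CAGR) behind the shipment forecast
# quoted above: 807 million units in 2021 growing to 1.5 billion in 2026.

def cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1 / years) - 1

if __name__ == "__main__":
    growth = cagr(807e6, 1.5e9, 2026 - 2021)
    print(f"Implied CAGR: {growth:.1%}")  # roughly 13% per year
```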
Immersive Collaboration in the Enterprise

Immersive collaboration, in the context of the enterprise metaverse, entails interactive cloud services and software, digital twins, simulations, video conferencing, virtual events, and virtual headquarters (HQ). As pointed out in the Enterprise Metaverse Implications for the Workplace, Virtual Events, Retail, and More Research Highlight, the immersive collaboration and digital twin/simulation market will be worth US$58.9 billion by the end of the decade, showcasing the increased adoption of immersive technology in the modern enterprise.
It goes without saying that these bandwidth-intensive applications, particularly video-centric applications, introduce more pressure on a company’s network. Indeed, office networks are increasingly requiring Gbps capabilities.
To illustrate, digital twin and simulation software requires real-time data to be continuously collected and transmitted over the network by Head-Mounted Displays (HMDs) and Internet of Things (IoT) devices. These immersive solutions let enterprise users manipulate high-fidelity assets in real time, but they impose demanding bandwidth and latency requirements (25 Mbps to 600 Mbps of bandwidth and sub-30 ms latency).
In just 1 hour of work, an HMD user could consume anywhere from 11 GB to 270 GB of data. Fiber optics backhaul, being the pinnacle of data transmission technologies, is a strong connectivity candidate here because it can handle Gbps applications.
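Those hourly figures follow directly from the bandwidth range: converting sustained throughput into hourly data volume (in decimal gigabytes) reproduces the 11–270 GB numbers. A quick check:

```python
# Convert sustained throughput (Mbps) into data volume per hour (decimal GB),
# reproducing the 11 GB to 270 GB per hour range quoted above.

def gb_per_hour(mbps: float) -> float:
    bits_per_hour = mbps * 1_000_000 * 3600
    return bits_per_hour / 8 / 1_000_000_000  # bits -> bytes -> gigabytes

if __name__ == "__main__":
    for rate in (25, 600):  # the HMD bandwidth range from the text, in Mbps
        print(f"{rate} Mbps sustained = {gb_per_hour(rate):.2f} GB per hour")
```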
Fiber-to-the-Commercial-Office (FTTCO) will increasingly be seen as a must-have networking solution that enables critical network and business activities. Therefore, in ABI Research's assessment, the deployment of fiber infrastructure will be an integral aspect of the future modernized enterprise in both developed and developing markets.
Gain More Insight
Fiber optics are regarded as one of the most reliable networking solutions on the market, whether for consumer usage or for supporting enterprise networks. As Internet traffic continues to climb, fiber optic cables will increasingly be seen as a go-to solution for running high-bandwidth applications without interruption.
To better help enterprise users and MNOs survey the importance of fiber optics in improving networks, download ABI Research’s The Role Of Fiber Optics In Your Network whitepaper. | <urn:uuid:d8975383-29cc-492f-a108-e39e6d894468> | CC-MAIN-2024-38 | https://www.abiresearch.com/blogs/2023/02/27/applications-of-fiber-optics/ | 2024-09-15T02:42:26Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651614.9/warc/CC-MAIN-20240915020916-20240915050916-00821.warc.gz | en | 0.891919 | 1,214 | 2.78125 | 3 |
The eyes are for seeing, but they have other important biological functions, including automatic visual reflexes that go on without awareness.
The reflexive system of the human eye also produces a conscious, visual experience, according to a new study from researchers in the Perelman School of Medicine and School of Arts and Sciences at the University of Pennsylvania.
The findings, reported today in the Proceedings of the National Academy of Sciences, may provide insight into the excessive light sensitivity sometimes experienced by people with eye disease, migraine headaches and concussions.
The study addressed the properties of melanopsin, a blue-light sensitive protein in the eye that establishes the rhythm of the day-night cycle and the familiar constriction of the pupil to bright light.
The researchers created a special pulse of light that stimulates only the melanopsin cells of the eye.
They showed this light pulse to people and measured their pupil response and brain activity, as well as asked them what they saw.
Remarkably, they found that people have brain activity and a visual experience in response to a light that is invisible to the parts of the eye normally used for seeing.
“Melanopsin is a part of our visual system from long ago in evolution, and it controls several important biological responses to light,” said lead author, Manuel Spitschan, PhD, who received his doctorate from the Psychology program at Penn in 2016 and is now a Sir Henry Wellcome Postdoctoral Fellow at the University of Oxford.
“It has been hard to know if we have a visual experience that accompanies these reflexes, as any normal light that stimulates melanopsin will also stimulate the cone cells of the eye that support our regular vision.
We wouldn’t know whether what a person sees arises from melanopsin or the cones.”
To solve this problem, the Penn team developed a special kind of light pulse that stimulates melanopsin but is invisible to the cones.
The lights were created using a machine that can sculpt and switch between computer-designed “rainbows” of light.
First, the researchers had people watch these light pulses while their pupil response was recorded.
The scientists confirmed that a light pulse that is invisible to the cones evokes a slow, reflexive constriction of the pupil that is characteristic of melanopsin stimulation. They then measured brain activity using the technique of functional MRI, and found that the visual pathway of the brain responds to the melanopsin stimulus.
“This was a particularly exciting finding,” said senior author Geoffrey K. Aguirre, MD, PhD, a behavioral neurologist and an associate professor of Neurology at Penn.
“A neural response within the occipital cortex strongly suggests that people have a conscious experience of melanopsin stimulation that is explicitly visual.”
The researchers then asked what people “see” with melanopsin.
They had 20 people look at the pulses of light and provide ratings of different perceptual qualities.
People described the melanopsin stimulus as a blurry kind of brightness, in contrast to the focused experience provided by the cones. They also described the melanopsin light pulse as unpleasant.
“This perceptual experience fits with what we know about the cells that contain melanopsin,” said David H. Brainard, PhD, the RRL Professor of Psychology. “There are relatively few of these melanopsin cells in the eye. Like a digital camera that doesn’t have many pixels, we would expect the melanopsin system to give a blurry, indistinct image of the world.”
Their work has particular relevance for understanding the experience of people with photophobia, who are overly sensitive to bright light and experience pain as a result. “Research in mice makes us think that melanopsin contributes to the sensation of discomfort from very bright light,” Aguirre said.
“Subjects in our study found the melanopsin stimulus to be unpleasant, and people with photophobia may experience a stronger form of this response to melanopsin. We now have a tool to help us to better understand excessive light sensitivity.”
- Manuel Spitschan, Andrew S. Bock, Jack Ryan, Giulia Frazzetta, David H. Brainard, Geoffrey K. Aguirre. The human visual cortex response to melanopsin-directed stimulation is accompanied by a distinct perceptual experience. Proceedings of the National Academy of Sciences, 2017. DOI: 10.1073/pnas.1711522114
Bytes and Bacteria: Exposing the Germs on your Technology
You sit at your desk, click around the internet, and type away. Then you take out your smartphone to check messages and social media. You swipe your badge to maneuver around the building. But when was the last time you cleaned all these items?
Technology has become a constant companion in our modern world. We're connected almost 24/7 to one gadget or another. We keep our desks tidy, we wipe down counters, and our office attire gets washed regularly. But so many of us never consider cleaning our devices.
We here at CBT Nuggets wondered just how dirty our technology and other tools really are. So, we had a team do some bacteria swabbing of typical items used in an IT office. Get ready to head out for cleaning wipes, and read on to find out why you'll want to.
Lurking Germ Comparison
There are certain everyday items that we consider dirty, including toilet seats, money, and things that end up in Fido's mouth. And indeed, those items do host plenty of germs. However, the technology we use constantly is also home to grubby bacteria, including our cellphones.
We compared several surfaces commonly used in a tech office and everyday "dirty" items to gauge how much bacteria are really lurking on our tech. We discovered that the most bacteria-laden item of all was the ID badge, which had 243 times more bacteria than a common pet toy! The cleanest item on our tech list: a laptop trackpad.
All The Bacteria
There are several common bacteria that have found their way into our lives and onto our gadgets. These include bacilli, gram-positive cocci, gram-positive rods, and gram-negative rods. Bacilli and gram-positive cocci tend to cause the most sickness; in fact, bacilli are the usual culprits in food poisoning. Gram-negative rods aren't great to have around either, because they can be resistant to drug and antibiotic treatments. The only group among these that is not typically harmful to humans is gram-positive rods.
We looked at the distribution of bacteria between all the office items that we swabbed:
Gram-positive cocci were the most common bacteria found, at just over 42 percent. These bacteria are behind strep and staph infections, so it may not be one you want to have regular contact with.
The next most common bacteria we found was bacilli (around 25 percent), which can survive in extreme environments.
Gram-negative rods came in at 21.5 percent, while gram-positive rods only made up about 10.8 percent of the bacteria present.
It might be safe to say that illness could be lurking at your fingertips.
ID Badges

Our bacteria swabs showed that ID badges harbored some pretty nasty guests. This is likely because we don't think about them often, nor do we tend to think of them as getting dirty. You may want to get those germ wipes ready. Here's the breakdown of bacteria we found on ID badges.
At 61 percent, gram-positive cocci were the most common bacteria found on the badges, possibly carrying your next strep or staph infection. The next common bacteria found on the badges were bacilli (nearly 26 percent). The final foe discovered was gram-negative rods, at about 13 percent.
Keyboards

It's been well established that keyboards are a prime host for bacteria and germs. It makes sense, considering we use them to work, type up reports, and send out emails. Then there are the things we don't realize make their way onto our keyboards, like crumbs from a desk lunch, residue from crunchy snacks, and even the occasional coffee or soda spill. Here's the bacterial breakdown of the keyboards we studied:
The largest group of bacteria present was gram-positive cocci. It comprised almost 38 percent of the bacterial makeup.
Gram-negative rods weren't far behind, with 34 percent.
Gram-positive rods made up 17 percent of the total bacteria on keyboards.
Bacilli came in at almost 11.3 percent.
It seems to be a good mix of nasty germs on that keyboard, ready to send you home with the flu.
Phone Friends or Foes
Let's face it, our cellphones are pretty dirty. We take them everywhere, use them after touching all kinds of things, and set them on all sorts of surfaces. Most people are even guilty of bringing them into the bathroom. If we knew the kinds of germs that make their home on our phones, we would probably pay more attention to their cleanliness. Here are the results from our phone swabs:
Our swab revealed an almost even mix of bacteria. From what we found, a majority of the bacteria were bacilli (37.5 percent) and gram-positive rods (37.5 percent). A final quarter of the bacteria were gram-positive cocci. It might be a good plan to establish a cellphone cleaning routine and stick with it. Your immune system will thank you later, and your employer will appreciate you not using all your sick days.
Bacterial Showdown: Mouse Versus Trackpad
Do you prefer using a trackpad or an external mouse to navigate on your computer screen?
Everyone has a preference. But perhaps you might be interested in knowing about the bacterial content of both. Just like your germy keyboard, a mouse tends to be covered in bacteria. You don't think much about touching a public work surface, clicking your mouse to pull up a boss' email, and then unwrapping your sandwich at lunchtime. But you might want to think twice.
We were curious to see if there was much of a bacterial difference between your computer trackpad or an external mouse:
As it turns out, the trackpad harbored only two kinds of bacteria: gram-positive rods (nearly 67 percent) and gram-positive cocci (33.3 percent). The mouse, on the other hand, contained all four kinds. Bacilli and gram-negative rods each made up 44 percent, gram-positive cocci came in at 12.4 percent, and there was only a small trace (0.01 percent) of gram-positive rods. Both the trackpad and mouse play host to some nasty germs, but the trackpad may just be the winner here.
Your office at work is indeed a dirty place. It might look clean on the surface, but hiding on your commonly used devices is a whole host of bacteria. Many of the items that we touch and use on a daily basis aren't cleaned regularly. Most of us just don't think about it as we hurry through our busy days.
One important way to combat all these germs is to wash your hands regularly and properly. The CDC calls handwashing a "do-it-yourself" vaccine, which tells you how important it is. To effectively keep bacterial counts low, regularly clean and wipe down gadgets and surfaces. If you know you're getting sick, just stay home until you're well again. By coming to work, you are spreading all those germs throughout the workplace and putting everyone else at risk. But don't let these germs get in your way of sharpening your IT skills. You can train from the convenience of your home with online courses from CBT Nuggets.
So the next time you go to put on your ID badge or type up an email, ask yourself if you've cleaned them recently. You'll be grateful you did when flu season comes around.
Start your free week with CBT Nuggets today!
Train anytime, anywhere, and even offline! Download the CBT Nuggets app:
We swabbed five items within each category to find the average colony-forming units (CFU) per square inch on each surface. All testing was done by EMLab P&K.
Antimicrobial therapy of infections with aerobic Gram positive rods, Clinical Microbiology and Infection, ESCMID, A. von Graevenitz
Microbacterial Review, Tulane Medical
Danger Lurking at Your Keyboard, Western City
The Hidden Life on your Phone The bacteria that lurk on your mobile, University of Surrey
Fair Use Statement: Bacteria can be gross, but there's nothing yucky about sharing our content. All we ask is that you attribute back to our scientists (or authors of this page) by giving them proper credit.
delivered to your inbox. | <urn:uuid:9e5f5e75-6094-4d79-9d78-d9900c8280e7> | CC-MAIN-2024-38 | https://www.cbtnuggets.com/blog/career/career-progression/bytes-and-bacteria-exposing-the-germs-on-your-technology | 2024-09-17T14:47:50Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651800.83/warc/CC-MAIN-20240917140525-20240917170525-00621.warc.gz | en | 0.963362 | 1,753 | 2.84375 | 3 |
Here we discuss the ANPR camera, which is used to recognise vehicle number plates. It is also called an LPR camera. ANPR technology detects and reads a vehicle's licence number and registers it in a database. This kind of camera generally comes with software used to control and manage that database through the ANPR camera. The camera stores an image of the number plate, and the software extracts the number from the image, which may then be used for further processing, such as recording passing time or passing speed.
The Technology of ANPR camera
The technology used in the ANPR camera is called OCR (Optical Character Recognition). An ANPR camera is designed to capture the number plate image in daylight as well as at night, and to capture the number whether the vehicle is moving or standing still. Some ANPR cameras can also measure the speed of the vehicle. ANPR camera technology is used in many countries, including the UK and the US, by police departments.
First, the camera takes a picture of the number plate and detects the plate's size and position. Next comes the character segmentation process, and finally an optical character recognition algorithm runs. The output is the plate's numbers and characters as text.
Sometimes number plates differ so much that it is difficult for the ANPR camera to recognise them. So in some cases vehicles are restricted to number plates that suit the ANPR algorithm. For example, if ANPR is used by a police department, the department may allow only a few font sizes and styles on vehicle number plates. ANPR is thus a type of CCTV camera customised for a special task.
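As a small illustration of such a restriction, OCR output can be validated against an expected plate pattern. The sketch below uses the current UK format (two letters, two digits, three letters) as an example; a real deployment would maintain one pattern per supported region and plate era.

```python
import re

# Validate OCR output against an expected plate format. The pattern matches
# the current UK format (e.g., "AB12 CDE"); real systems keep a catalogue of
# patterns for every region and plate era they support.
UK_PLATE = re.compile(r"^[A-Z]{2}[0-9]{2} ?[A-Z]{3}$")

def is_valid_uk_plate(ocr_text: str) -> bool:
    return bool(UK_PLATE.match(ocr_text.strip().upper()))

if __name__ == "__main__":
    for candidate in ("AB12 CDE", "ab12cde", "AB12 CD", "1234 ABC"):
        print(f"{candidate!r} -> {is_valid_uk_plate(candidate)}")
```

A failed match can either be rejected outright or sent back for a second OCR pass, which is one reason deployments prefer standardised fonts and layouts.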
Uses of ANPR Camera
- By police departments to prevent crime
- By factories to filter authorised vehicles
- In housing societies for security purposes
- In mall parking systems
- At bulk vehicle entry areas
- In speed tracking systems
- In automatic gate-opening systems
- To prevent unauthorised access
- In secured labs
- For bulk data processing of vehicles, etc.
ANPR Camera in India
India's highways include older roadways with toll stations where drivers can pay by card, as well as roads equipped with electronic toll collection systems. Most new roads, however, offer only electronic toll collection.
A toll plaza's system works in three distinct steps:
- An automatic camera, assisted by infrared cameras, captures and reads the number plate of every vehicle
- The data is sent to the software connected to the ANPR camera, which reads it and matches it against its own database
- The barrier then opens if the driver has already paid, or pays on the spot with the help of an RFID card
When a smart tag is installed in the vehicle, the vehicle is identified instantly and the toll is deducted automatically from the owner's bank account. This works at any speed, even above the posted limit.
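The matching and barrier-control steps above can be sketched in a few lines of Python; the plate numbers, toll amount, and balances below are made up for illustration.

```python
# Minimal sketch of the toll flow: match a recognised plate against the
# operator's database and decide whether to open the barrier. The plates,
# fee, and balances are invented for illustration.

TOLL_FEE = 65  # toll amount, illustrative

accounts = {  # plate -> prepaid smart-tag balance
    "MH12DE1433": 500,
    "DL8CAF5031": 20,
}

def barrier_should_open(plate: str) -> bool:
    balance = accounts.get(plate)
    if balance is None or balance < TOLL_FEE:
        return False  # unknown vehicle or insufficient balance: pay at booth
    accounts[plate] = balance - TOLL_FEE  # auto-deduct, as with smart tags
    return True
```

In a real plaza the lookup would hit the operator's database rather than an in-memory dictionary, but the decision logic is essentially this simple.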
Best ANPR Cameras

This is a list of the best ANPR cameras available in the market. You can choose one according to your needs. Each of these cameras comes with its own software to manage the recognised number plates.
- Hikvision ANPR camera
- Bosch ANPR
- Dahua ANPR Camera
- Axis Communications ANPR
- Honeywell fusion ANPR camera
1. Hikvision ANPR camera
Hikvision offers an ANPR camera with some unique features for its class.
- Support countries and regions of Mid-East, Africa, Asia-Pacific, America, Europe, Russian-speaking Countries.
- In European and Russian-speaking regions, capture rate exceeds 99%, recognition rate exceeds 98%.
- Smart recording: support edge recording and dual-VCA
- Smart encoding: support low bit rate, low latency, ROI enhance encoding
- Quality 1920 × 1080 @60fps
- Support auto iris, DC-drive
- Supports rotate mode, suitable for environments such as corridors
- Support target cropping, details can be seen with low bandwidth
- Streaming smoothness setting for different requirements of image quality and fluency
- Support H.264+/H.264/MPEG4/MJPEG video compression, multi-level video quality configuration.
- IR LEDs for Night Vision Up to 328′
- Captures Images of Moving Vehicles
- 8-32mm Motorized Varifocal Lens
- 42-13.5° Horizontal Field of View
- Input and Output for 2-Way Audio
- Supports MicroSD Cards Up to 128GB
- RJ45 Ethernet with PoE Technology
- ONVIF-Compliant for Profiles S & G
- IP67-Rated for Outdoor Use
2. Bosch ANPR Camera
- DINION 2X technology produces clear, consistent, accurate license plate images
- Night Capture Imaging System ensures 24/7 performance and eliminates headlight glare
- Advanced Ambient Compensation minimizes overexposed plates for improved ALPR accuracy
- Adjustable imaging modes allow configuration for regional plate characteristics
- Features a 1/3" CCD with progressive scan technology
- Capable of quad-streaming video simultaneously: two H.264 streams, an I-frame recording stream, and an M-JPEG stream
- Features a 20-bit DSP for capturing every detail of the image in both the high- and low-light areas of the scene simultaneously
- Uses H.264 (Main Profile) compression and bandwidth throttling
- Three power options: PoE+ (Power-over-Ethernet+), 11–30 VDC, and 24 VAC
- Conforms to the ONVIF (Open Network Video Interface Forum) specification
- Capable of transferring live video, audio, metadata and control information between ONVIF conformant devices
- Bosch white colour with the all-weather coating
3. Dahua ANPR Camera
Dahua offers three models of its ANPR camera.
- Embedded with LPR algorithm inside the camera
- Support plate cutout, overview picture snapshot and video recording
- Embedded whitelist database to control the barrier
- Powerful 5-50mm manual lens and white light, ideal for monitoring at ANPR distances up to 40m
- Wide working temperature, IP66 and superior performance for outdoor applications
- True WDR technology
- Suitable for ANPR detection and recognition in low-speed environments (<80 km/h), such as parking, access control, and urban streets.
4. Axis Communications ANPR Camera
Axis ANPR camera solutions are based on an Axis camera and partner software that runs either on the camera or on a server. The solution automatically captures the licence plate in real time, compares it against or adds it to a predefined list, and then takes the appropriate action, such as opening a gate, registering a toll, or generating an alert.
Depending on ambient light, the solution may require cameras with built-in IR light or additional light sources for optimal performance.
5. Honeywell Fusion ANPR camera
• Log vehicles with multiple images
• Alert staff to vehicles of interest
• Provide car park access control
• Control unauthorised commuter parking
• Increase gatehouse efficiency
• Open barriers/gates
• Control traffic lights
• Optional multiplexed or switched spot
• Car park auditing / ticketing / management
• Allow sophisticated database searching
• Advanced traffic analysis
• Communicate with remote sites over LAN /
• Reads number plates within 53 countries
• Simultaneously reads different countries’
So this is all about the ANPR system. We hope this information is helpful to you. If you have any queries or suggestions, please write to us in the comment box below.
DomainKeys is an older technology that was combined with Cisco's Identified Internet Mail (IIM) to develop DKIM, an email authentication protocol that helps keep phishing emails spoofing your domain out of recipients' inboxes. Moreover, DKIM also ensures that nobody tampers with the message in transit.
People often use these terms interchangeably, but let’s figure out how they differ.
What is DomainKeys?
DomainKeys is an obsolete email security technology developed by Yahoo. It is based on cryptography: a digital signature is attached to the header of an outgoing email. The signature is generated with the domain's private key, allowing the recipient's mail server to verify the authenticity of the sender by checking the signature against a public key published in the DNS records of the sender's domain.
What is DKIM?
DKIM stands for DomainKeys Identified Mail, an email authentication protocol that combines and builds upon the concepts of DomainKeys and Identified Internet Mail (IIM). It verifies the authenticity and integrity of email messages by having the sending server sign outgoing emails with a private key. Upon receipt, the receiving server verifies the signature by matching it against the corresponding public key stored in the domain's DNS records.
The entire process helps ensure that the emails’ contents weren’t changed in transit while also protecting against email spoofing and phishing.
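The sign-then-verify flow can be sketched conceptually in a few lines of Python. Note this is a simplified stand-in: real DKIM uses asymmetric RSA or Ed25519 keys (with the public half published in DNS) and a precise canonicalization of headers and body. The shared HMAC secret below only keeps the sketch dependency-free while showing how a body hash catches tampering.

```python
import hashlib
import hmac

# Conceptual DKIM-style flow: the sender attaches a body hash (like bh=) and
# a signature over the headers plus that hash (like b=); the receiver
# recomputes both. Simplification: real DKIM uses asymmetric keys published
# in DNS, not a shared HMAC secret.

SECRET = b"not-a-real-dkim-key"

def sign(headers: str, body: str) -> dict:
    body_hash = hashlib.sha256(body.encode()).hexdigest()
    signature = hmac.new(SECRET, (headers + body_hash).encode(), hashlib.sha256).hexdigest()
    return {"bh": body_hash, "b": signature}

def verify(headers: str, body: str, tag: dict) -> bool:
    if hashlib.sha256(body.encode()).hexdigest() != tag["bh"]:
        return False  # body was altered in transit
    expected = hmac.new(SECRET, (headers + tag["bh"]).encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag["b"])

if __name__ == "__main__":
    tag = sign("From: billing@example.com", "Please pay invoice #42.")
    print(verify("From: billing@example.com", "Please pay invoice #42.", tag))  # unmodified message
    print(verify("From: billing@example.com", "Please pay invoice #43.", tag))  # tampered body
```

Any change to the body invalidates the body hash, and any change to the signed headers invalidates the signature, which is exactly the tamper-evidence property described above.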
Differences Between DomainKeys and DKIM
DKIM is an evolved and more relevant technology that is slightly different from Domainkeys.
History and Development
Yahoo created DomainKeys in 2004 to help domain owners prevent phishing emails from being sent in their name. DKIM, on the other hand, was put together by a consortium of 15 prominent IT companies, including Yahoo, Cisco, and Microsoft. The technology remained in development for a while and was finally published in 2007. Since then, it has proved to be an efficient and evolved successor to DomainKeys for preventing spoofing and phishing.
Keys Operating Mechanism
DomainKeys is based on the principle of using a single private key to sign outgoing emails, while the corresponding public key is published in the sending domain's DNS records. This arrangement lets recipients' servers verify the genuineness of incoming messages.
DKIM also employs a pair of private and public keys. The main difference lies in DKIM's more extensive support and flexibility for key management, such as the ability to publish multiple keys under different selectors.
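For reference, the public key in DKIM is published as a DNS TXT record under a chosen selector. Both the selector (`s2024`) and the domain below are illustrative, and the key material is a placeholder:

```
s2024._domainkey.example.com.  IN  TXT  "v=DKIM1; k=rsa; p=<base64-encoded-public-key>"
```

A verifier that sees `s=s2024` and `d=example.com` in a message's DKIM-Signature header knows to query exactly this record, and rotating keys is as simple as publishing a new selector.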
Signature Scope

In DomainKeys, the signature is computed over the entire body and selected headers, standing as a measure of authenticity and integrity for the email.
With DKIM, senders have more flexibility, as it allows them to choose which specific parts of an email to sign.
Adoption

DomainKeys saw limited adoption and is now largely deprecated in favor of DKIM, which is widely adopted and supported by major email providers and servers. Due to its improved features and flexibility, DKIM has become the de facto standard for email authentication.
As its successor, DKIM is more effective at securing email, which is why DomainKeys has been deprecated. DKIM includes a hash of the email's content in its signature so that the recipient's server can verify the integrity of the message and confirm that the content was not modified in transit.
AI and data science are not only expanding into the public health sector but are having a significant impact on nearly every aspect of healthcare. And that's a good sign in the current global situation.
As a matter of fact, China has leaned on artificial intelligence and data science to track and fight the pandemic since its beginning. At the same time, tech leaders, including Alibaba, Baidu, Huawei, and more, accelerated their healthcare initiatives. Not surprisingly, tech startups are now more involved with clinicians, academics, and governments around the world to help fight the pandemic.
Artificial intelligence (AI) has transformed the way people experience healthcare services today, and it comes with the gift of improving access to healthcare and increasing the productivity of delivery and the quality of care, among many other benefits. And it shouldn’t be surprising.
The dramatic growth in healthcare data is the direct result of years of digitizing patient records. A BIS Research forecast estimates that the big data in healthcare market will grow to over $68.75 billion by the end of 2025.
Fed into analytical tools and machine learning models, this information can and does drive everything from adjusting hospital workflows to early detection of cancer. Cerner, for example, can draw on historical data to predict patient volumes and staff the ER accordingly, days in advance. It does so by using one of its machine learning algorithms, ensuring that doctors and nurses aren't short-staffed and that patients are seen more quickly and receive quality care.
Fighting the Pandemic
There are many ways in which artificial intelligence, data science, and machine learning are being used to manage and fight COVID-19. From tracking new outbreaks to creating advanced protection masks, here are just nine of them.
1. Track and forecast outbreaks
AI is learning how to detect an epidemic by analyzing news reports, social media platforms, and government documents. This is not new, as BlueDot has been researching and tracking infectious disease risks by using AI since 2008. Other risk software tools use a range of natural language processing (NLP) algorithms to monitor official healthcare reports and detect high-priority diseases, such as coronavirus.
When it comes to airports, predictive tools can also draw on data to assess the risk that transit points might have infected people arriving or departing.
2. Help diagnose
Since the beginning of the outbreak, significant pressure has been put on imaging departments. With hundreds of cases to read each day, patients and clinicians were waiting hours for results. Infervision launched a coronavirus AI solution that helps front-line workers detect and monitor the disease efficiently.
This is not the only one. Chinese technology giant Alibaba recently developed an AI system for diagnosing the COVID-19.
3. Deliver test samples and medical supplies
Last month, Zipline became the first company in the world to launch a rush service by drone to deliver coronavirus test samples from rural areas in Africa to laboratories. Drones can decrease the delivery time by over 50%, as compared with road transportation, and can take humans out of the process. This technology is also used to patrol public spaces and track non-compliance to quarantine mandates.
4. Sterilize and disinfect hospitals
Robots are being deployed to complete tasks such as cleaning and sterilizing, and also delivering food and medicine to reduce the amount of human-to-human contact.
Blue Ocean Robotics has created the UVD Robot to disinfect hospitals and pharmaceutical facilities. The robot aims to prevent and reduce the spread of infectious diseases, bacteria, and other harmful microorganisms in the environment by breaking down their DNA structure.
5. Identify non-compliance or infected individuals
The Chinese government has developed a monitoring system that uses big data to identify and assess the risk of each individual based on their travel history, how much time they have spent in virus hotspots, and potential exposure to people carrying the virus. Recently, an artificial intelligence tool accurately predicted which patients would go on to develop severe respiratory disease, a new study found.
6. Provide reliable information and clear guidelines
During this critical period, healthcare systems around the globe can easily be overwhelmed by numerous patients. But for a few years now, chatbots have significantly evolved.
Virtual healthcare agents can automate a diverse range of health-related activities, supporting both patients and physicians. For the public, they can answer questions related to the virus, provide reliable information and clear guidelines, recommend protection measures, and even advise individuals on whether they need hospital screening or self-isolation.
7. Detect symptoms
Thermal cameras are used to detect people with fever. These cameras, equipped with AI-based multisensory technology, have been deployed in airports, hospitals, nursing homes, and other public places. The technology automatically detects individuals with a high temperature and tracks their movements.
8. Protect high-risk populations
The rapid spread of COVID-19 intensifies the challenge of efficiently identifying soon-to-be high-risk populations, which requires secure and direct access to data sources. Gathering actionable data and putting it to work to protect the health of communities demands accurate, timely, and comprehensive information, and that is something technology can deliver.
9. Spread awareness
Companies such as Israeli startup Sonovia hope to arm healthcare systems and others with face masks made from their anti-pathogen, anti-bacterial fabric that relies on metal-oxide nanoparticles.
In the meantime, spreading awareness and research by data scientists can actually save lives.
The science is clear (and now widely accepted) that wearing homemade masks slows the spread of COVID-19 and saves lives. Modelling suggests that if 80% of people wear a mask, we will stop the spread of the disease. Countries with mask laws have already achieved infection rates up to 100x lower than those without. It's time to rely on the insights derived from AI to shape better policies in the future.
While there is some light at the end of the tunnel, thanks to the availability of COVID-19 vaccines, the job is far from complete. The distribution of the vaccine has its own challenges. However, India has prior experience with large-scale vaccination programs. India's Universal Immunization Programme (UIP), which immunizes approximately 26.7 million newborns and 29 million pregnant women annually against 12 vaccine-preventable diseases, is one of the largest public health programs globally. It has also achieved two major milestones: the eradication of polio in 2014 and the elimination of maternal and neonatal tetanus in 2015.
India launched the world's largest COVID-19 vaccination program on January 16, 2021, and as of March 11, 2021 had administered 25 million doses, covering approximately 1.4% of the population, compared to 34% in the UK and 29% in the US. India has also exported approximately 58 million doses of COVID-19 vaccine. Considering India's manufacturing potential, and the fact that the country fulfills 50% of the global demand for various types of vaccines, domestic coverage is certainly not where you would expect it to be. The vaccination rates and export figures clearly suggest India needs to strengthen its game in distributing vaccines within the country as well as exporting them to the rest of the world. But less than two months into the rollout, India is making significant progress: roping in around 10,600 private facilities, providing vaccines 24x7, enabling easy registration, offering free vaccination for the poor, and setting affordable prices.
The biggest challenge for successful immunization will be the distribution and administration of vaccines across India once enough doses are produced. The government needs to implement multiple innovative ways to make vaccines easily accessible. "Decentralization of vaccination" is paramount for increasing coverage in urban areas and reaching rural India. There is a need to strongly consider moving out of the hospital setup and taking a lead from countries like the US, UK, Israel, and UAE. There are various successful models that India can replicate, such as drive-through centers, temporary vaccination camps, large pharmacy retail chains, and private healthcare clinics. Mobile health clinics can play a very important role in rural areas, which lack the necessary health facilities. Drive-through vaccination and camps, pursued aggressively by Israel and the UAE, have come in for rave reviews. The final frontier in the distribution model would be vaccination at home.
"Mobilization" will increase convenience and in turn help overcome the reluctance to take the vaccine. This would not only relieve the stress on the limited private and government hospitals and avoid over-crowding, but also prevent delays in other health services. It will not be easy, considering the supply-chain requirements and the supply-demand mismatch at vaccination sites, so the government needs to plan and take steps to maximize coverage and minimize wastage. All these arrangements will only be worthwhile if we have enough doses to administer through these sites. The biggest question remains: will "Made in India" vaccine supplies be enough to inoculate the entire population, considering India is also exporting the vaccine? Although India has the necessary local manufacturing capabilities, and exports are vital for equitable distribution, the economy, and vaccine diplomacy, they shouldn't impact the availability of vaccines to be distributed in the country.
India has already received accolades for its strategies in containing the spread of COVID-19 and vaccine manufacturing capabilities. Innovative approaches from the government can lead the way in making this the world’s largest successful COVID-19 vaccination campaign and create history in the process.
Are you thinking about purchasing a new computer? If so, you’ve probably noticed that Windows now seems to come in both 32-bit and 64-bit versions.
In truth, 64-bit versions of Windows have been available since Windows XP 64-bit was released in 2003, but most consumers did not notice because 64-bit Windows was rarely needed.
But times have changed, and consumers will now commonly have to choose between 32-bit and 64-bit versions of Windows. The battle of Windows 7 32-bit vs. Windows 7 64-bit will be resolved differently depending on your needs.
The Technical Mumbo-Jumbo
The difference between 32-bit and 64-bit versions of Windows is related to how processors deal with information. A 32-bit processor is capable of addressing up to 32 bits (a bit is a binary unit of data, either 0 or 1) at a time. A 64-bit processor, on the other hand, is capable of addressing up to 64 bits. The number of bits that can be addressed has a significant impact on the functionality of a computer.
However, in order to enable the extra functionality offered by a 64-bit processor, a 64-bit operating system must also be used. While simply increasing the number of bits a processor can address seems simple, it actually changes some fundamental ways in which the processor and operating system communicate, which is why 32-bit and 64-bit versions of Windows exist.
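The distinction is visible from inside a running program: the size of a native pointer tells you how many bits the processor and operating system use for addresses. A minimal Python sketch (works on any platform):

```python
import struct
import sys

# Width of a native pointer in bits: 32 on a 32-bit build, 64 on a 64-bit build.
pointer_bits = struct.calcsize("P") * 8

# Equivalent check: a 64-bit build represents integers wider than 32 bits natively.
is_64bit = sys.maxsize > 2**32

print(f"This interpreter is a {pointer_bits}-bit build")
```

On a 64-bit version of Windows running a 32-bit program, this would still report 32, because the program itself is compiled for 32-bit addressing.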
The Memory Differences
A 32-bit version of Windows can address up to four gigabytes of memory in total. This includes not only the RAM of your system, but also the video RAM from your video card and any other virtual memory. As a result, a typical computer with a 32-bit version of Windows will never have access to more than four gigabytes of RAM, no matter how many RAM sticks you stuff into the system.
64-bit Windows, however, can address up to sixteen terabytes of RAM. In practical terms, no computer is going to require that much RAM anytime soon. But some buyers might want to be able to use eight gigabytes of RAM, or even sixteen. If you are one of those users, you will need to have a 64-bit version of Windows.
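The four-gigabyte figure falls directly out of the arithmetic: 32 bits can distinguish 2^32 addresses, which is exactly 4 GiB. A quick check in Python (note that the sixteen-terabyte figure for 64-bit Windows is an operating-system cap, far below the theoretical 2^64 limit):

```python
# Theoretical address-space limits implied by the pointer width.
# Operating systems impose lower practical limits; the 16 TB figure quoted
# for 64-bit Windows is such an OS-level cap, not the 2**64 maximum.
KIB = 1024

bytes_32bit = 2**32   # 4 GiB
bytes_64bit = 2**64   # 16 EiB in theory

print(bytes_32bit // KIB**3, "GiB addressable with 32 bits")
print(bytes_64bit // KIB**6, "EiB addressable with 64 bits")
```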
While being able to use more RAM is great, running a 64-bit version of Windows is not without disadvantages.
Because a 64-bit operating system is fundamentally different from a 32-bit one, the drivers you used for devices on a 32-bit version of Windows won’t work with a 64-bit version. Fortunately, most hardware vendors provide full support for 64-bit versions of Windows, and the correct drivers can be downloaded online. However, if you have an old piece of hardware – perhaps from the early days of Windows XP, before consumers could buy a 64-bit version of Windows – there may not be a 64-bit driver available.
It is also possible that you will experience some performance degradation when running 32-bit programs. This is because Windows has to do a little extra heavy lifting to make sure the 32-bit program works correctly on the 64-bit version of Windows. The difference is typically minor, however, and most programs now ship with both 32-bit and 64-bit versions on the same disk.
Making the Leap
Eventually, all versions of Windows will be 64-bit. As software and hardware becomes more complex, four gigabytes of RAM just won’t cut it. This inevitability is still probably a decade away, however, so if you’re purchasing a computer today, it is a question of your needs right now.
If you are buying a gaming computer, a workstation, or a powerful laptop you will want to make sure that you have the 64-bit version of Windows so you don’t run into RAM limitations now or in the future. If you want a simple, low-cost desktop or laptop, Windows 32-bit is preferable. You won’t have to worry about compatibility and the RAM limitation will not be an issue.
A digital certificate authenticates the online credentials and identity of a person or organization and allows web users and recipients to know that the data they’re inputting is going to a trusted source. They are akin to security badges for websites and users and help keep the internet safe. Digital certificates are issued by Certificate Authorities (CAs) and are used to encrypt data online. Digital certificates are also known as public key certificates or identity certificates.
A TLS/SSL certificate serves two purposes: it authenticates the identity of the website, and it enables encryption of the data exchanged with it. TLS/SSL certificates ensure that HTTPS and the padlock icon appear in the URL bar, and in doing so they help keep the internet secure.
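To make the identity role concrete, here is a sketch of how a client might inspect a certificate. The `cert` dict below mimics the structure returned by Python's `ssl` module (`SSLSocket.getpeercert()`); the field values are made-up examples, not a real certificate.

```python
import ssl  # shown for context; the sketch below works on a getpeercert()-style dict
from datetime import datetime, timezone

def summarize_cert(cert: dict) -> dict:
    """Pull the identity fields a browser checks out of a getpeercert()-style dict."""
    # subject/issuer are tuples of relative distinguished names, e.g.
    # ((('commonName', 'example.com'),),)
    flatten = lambda rdns: {k: v for rdn in rdns for (k, v) in rdn}
    not_after = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return {
        "subject_cn": flatten(cert["subject"]).get("commonName"),
        "issuer_cn": flatten(cert["issuer"]).get("commonName"),
        "expired": not_after.replace(tzinfo=timezone.utc) < datetime.now(timezone.utc),
    }

# Hypothetical example data shaped like ssl's output:
cert = {
    "subject": ((("commonName", "example.com"),),),
    "issuer": ((("commonName", "Example CA"),),),
    "notAfter": "Jan  1 00:00:00 2031 GMT",
}
print(summarize_cert(cert))
```

A real client would obtain the dict from a live TLS handshake and, on top of these checks, verify the signature chain up to a trusted Certificate Authority.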
Software-Defined Wide Area Network (SD-WAN) technology is revolutionizing the way organizations manage their network traffic. With its ability to decouple the data plane from the control plane, SD-WAN provides organizations with a more flexible, scalable, and cost-effective solution for managing their network traffic. However, understanding and troubleshooting SD-WAN performance can be a challenge, especially when it comes to the underlying physical network, or underlay.
In this article, we will provide an overview of SD-WAN, the importance of end-to-end underlay monitoring, and four tips for troubleshooting SD-WAN performance problems. With this information, organizations can ensure optimal performance, a positive end-user experience, and peace of mind knowing that their network is running smoothly.
Overview of software-defined wide area networks
SD-WAN is a technology that utilizes software to direct and enhance the flow of network traffic across WANs (Wide Area Networks). It centralizes network control and management, making it easier to operate and maintain network infrastructure, especially in complex and multi-branch setups.
In contrast to traditional WAN solutions that rely on dedicated, hardware-based connections, SD-WAN allows companies to securely direct network traffic over various transport connections, such as broadband, LTE, and MPLS. This offers organizations the ability to choose the optimal connection for each application, resulting in improved performance and reduced reliance on costly dedicated WAN connections.
As the use of cloud-based applications and services grows, traditional WAN solutions are becoming obsolete. SD-WAN network technology, which is flexible, scalable, and secure, enables businesses to fully utilize cloud-based services while also supporting their growth. The central management interface simplifies network administration, allowing businesses to respond quickly to changing needs. Furthermore, by utilizing multiple transport connections, SD-WAN offers a cost-effective alternative to traditional WAN solutions and assists organizations in effectively catering to their customers' needs and users' demands.
Decoupling the data plane and control plane in SD-WAN
One of the key benefits of SD-WAN is its ability to decouple the data plane from the control plane, which creates the notion of overlay and underlay. The overlay refers to the virtual network that is created on top of the underlying physical network (or underlay). The overlay is responsible for directing network traffic and providing centralized management and control. The underlay, on the other hand, refers to the underlying physical network infrastructure, including transport connections, switches, and routers.
While SD-WAN provides many benefits, native SD-WAN monitoring tools are limited in their ability to provide insights into the underlay. These tools are focused on the overlay, providing information about the virtual network and the performance of specific applications. However, they do not provide visibility into the underlying physical network, making it difficult to diagnose and resolve issues related to the underlay.
In other words, overlay metrics are not enough. Because native SD-WAN tools focus on the overlay, issues that originate in the underlay, such as network congestion, connectivity problems, and configuration errors, are hard to identify and resolve. End-to-end underlay monitoring closes this gap: combined with overlay monitoring, it provides a complete picture of the network, including the underlying physical infrastructure and the transport connections, helping to ensure optimal performance and a positive end-user experience.
Tips for troubleshooting SD-WAN performance
Dealing with SD-WAN performance problems can be challenging. Nonetheless, organizations can effectively address these challenges by implementing the four practices discussed below:
- Check for Configuration Errors/Changes
One of the first steps in troubleshooting SD-WAN performance is to check for configuration errors or changes. Misconfigurations can result in security issues and poor performance. To ensure that the network is properly configured, all devices (including routers, switches, and transport connections) must be reviewed regularly. Any changes or updates should be thoroughly tested before being implemented in production to ensure that they do not cause any performance or security issues.
- Check for Capacity Issues
Another potential source of SD-WAN performance issues is a lack of capacity, which occurs when transports and devices do not have enough headroom to handle the traffic. This can cause network congestion and slow performance. To address this, organizations should monitor their network capacity and plan for future needs. This includes regular monitoring of bandwidth utilization and ensuring that there is enough capacity to handle peak traffic periods.
- Check for ISP Issues
Problems with SD-WAN performance may also come from third-party transport providers. In this case, it is important to analyze network paths and identify any issues with Internet Service Providers (ISPs). This includes checking for network congestion, connectivity problems, and any issues with the transport infrastructure. Regular monitoring of network paths and working with ISPs to resolve any issues can help ensure optimal network performance.
- Check the End-User Experience
Finally, when troubleshooting SD-WAN performance, it is critical to consider end-user experience. Device performance metrics may not always accurately reflect the actual user experience. To ensure that end users have the best possible experience, it is critical to collect and analyze end-user feedback regularly as well as monitor device performance metrics such as latency, jitter, and packet loss. As a result, organizations can gain a better understanding of their network's overall performance and take the necessary steps to improve it.
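The latency, jitter, and packet-loss metrics in the last tip are straightforward to compute once you have round-trip-time samples from a probe or agent. The sketch below is illustrative; the sample values are made up, and lost probes are recorded as None:

```python
def experience_metrics(rtts_ms):
    """Compute packet loss %, average latency, and jitter from RTT samples.

    `rtts_ms` is a list of round-trip times in milliseconds; None marks a
    lost probe. Jitter here is the mean absolute difference between
    consecutive received samples (a common approximation).
    """
    received = [r for r in rtts_ms if r is not None]
    loss_pct = 100.0 * (len(rtts_ms) - len(received)) / len(rtts_ms)
    latency = sum(received) / len(received)
    # Guard against fewer than two received samples, where jitter is undefined.
    jitter = (
        sum(abs(a - b) for a, b in zip(received, received[1:])) / (len(received) - 1)
        if len(received) > 1 else 0.0
    )
    return {"loss_pct": loss_pct, "latency_ms": latency, "jitter_ms": jitter}

samples = [20.0, 22.0, None, 21.0, 25.0]   # hypothetical probe results
print(experience_metrics(samples))
```

In a real deployment these samples would come from synthetic probes across each transport path, so the same calculation can be compared per ISP and per site.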
In conclusion, the success of SD-WAN technology depends on comprehensive performance monitoring. While overlay metrics are useful, they are insufficient for fully understanding network performance. Organizations must consider end-to-end underlay monitoring to ensure optimal network performance and a positive user experience. This comprehensive approach, combined with the four tips discussed above (checking configurations, capacity, ISPs, and end-user experience) will help organizations identify and resolve performance issues effectively. To reap the full benefits of SD-WAN, organizations must prioritize end-to-end underlay monitoring as a critical component of their network management strategy.
Faith Kilonzi is a full-stack software engineer, technical writer, and a DevOps enthusiast, with a passion for problem-solving through implementation of high-quality software products. She holds a bachelor’s degree in Computer Science from Ashesi University. She has experience working in academia, fin-tech,...
19 critical errors in a widespread TCP/IP implementation with potentially fatal consequences: With modified IP packets, arbitrary commands can be executed on IoT devices and critical data can be read out. The responsible ICS-CERT rightfully rates some of these vulnerabilities with the maximum severity of 10 on the CVSSv3 scale.
“Ripple20.” Millions of devices are vulnerable
Smart homes and smart cities, sockets, routers, as well as medical equipment, sensors, and critical control or transport systems such as aircraft or satellites are equipped with a TCP/IP stack and an Internet connection, often built from small standard modules. The vulnerable TCP/IP stack from the company Treck is optimized for embedded devices and is used by well-known companies like Baxter, Intel, Schneider Electric, HP, and Rockwell Automation. Many of the security gaps that have now been discovered, collectively referred to as “Ripple20”, stem from the fact that length restrictions on individual fields are ignored. Attackers can thus inject and execute code, but also read critical data. Millions of devices are affected worldwide.
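The underlying bug class, trusting a length field embedded in a packet, is easy to illustrate. The following sketch is a simplified, hypothetical TLV parser written for clarity, not Treck's actual code; in C, the unchecked variant would read past the received buffer, while Python merely truncates the slice:

```python
import struct

def parse_tlv_unsafe(packet: bytes) -> bytes:
    """Trusts the attacker-controlled length field (the Ripple20-style mistake)."""
    (length,) = struct.unpack_from("!H", packet, 0)
    return packet[2:2 + length]          # in C this would read out of bounds

def parse_tlv_safe(packet: bytes) -> bytes:
    """Validates the declared length against the bytes actually received."""
    if len(packet) < 2:
        raise ValueError("truncated header")
    (length,) = struct.unpack_from("!H", packet, 0)
    if len(packet) - 2 < length:
        raise ValueError("declared length exceeds packet size")
    return packet[2:2 + length]

# A malformed packet claiming 1024 bytes of payload while carrying only 4:
evil = struct.pack("!H", 1024) + b"ABCD"
```

In a C implementation, copying `length` bytes out of a 4-byte buffer is exactly the kind of out-of-bounds access that lets crafted IP packets leak data or execute code.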
The manufacturer has fixed the flaws discovered by security researchers from JSOF in an update (version 6.0.1.67 and later). However, it is unclear how the update will affect the vulnerable devices – many are simply not designed to be updated. The question of how users can tell whether their system is vulnerable, or which software version runs on it, also remains open.
“First aid”: measures at network level
“Ripple20 is possibly one of the most fatal security holes ever,” says Benjamin Paar, Senior System Engineer for OT/IoT Security at IKARUS: “Errors of this kind can be avoided if security-critical considerations are part of every development process from the very beginning – no matter how unlikely it may seem at first glance that an attack could ever occur here. It is definitely time for a rethink, away from fast and cheap to sustainable and secure. Every system that can be connected to the Internet should have a secure and reliable update option. Otherwise, I would not consider it secure.”
As a safeguard against “Ripple20”, it is recommended to implement measures at the network level to block suspicious IP packets and source routing. “Especially in the area of industrial plants, visibility is therefore the first step to be able to identify and counteract potential sources of danger”, says Christian Fritz, COO at IKARUS: “With an overview of all devices and systems, targeted measures can be taken on the basis of qualified alerts”. IKARUS technology partner Nozomi Networks anticipates that there will be further vulnerabilities in this context – not all affected companies and products are known yet. “Based on the information published so far, it can be said that advanced skills are required to exploit the discovered vulnerabilities,” says Benjamin Paar: “So far, we have no indications that attacks have already occurred or are underway.” This is a race against time – and now is the perfect time to find out about optimal protection measures.
As existing computing systems creak under a deluge of data, the demand for enhanced performance and superior compute capacity has increased. Innovative high performance computing (HPC) solutions are coming to the fore, empowering organisations to push the boundaries of their capabilities and enabling them to glean faster, more accurate insights from their data. The convergence of AI and HPC will take this a step further.
HPC is built to solve complex problems. It has a long history as the engine room of innovation, capable of analysing billions of pieces of data in real-time and carrying out calculations thousands of times faster than an average computer. But, as we transition to an exascale future, HPC will increasingly intertwine with AI. This is an incredible opportunity but we need to get the technology right, learning lessons from the HPC pioneers and considering next steps at the data centre level.
Taking design to the next level
AI at its simplest is romantic and super-charged data science; the ability of a machine to recognise trends and patterns in inputs from immense datasets, with workloads using algorithms as instructions so it can also learn in order to draw out deeper insights and transferable intelligence. Unsurprisingly, it is the latter point that has gripped the imagination of many.
HPC has traditionally been the tool by which a business can prove a design concept – for example, speeding up the development a new model of car by simulating the ways in which the manufacturing process might be disrupted if a certain metal is used on the door. It could also simulate how a crash might impact the structure of the vehicle, providing models and reports of the predicted outcomes. AI will change the dynamic of this relationship for the better. Because of AI’s ability to learn, it will one day provide feedback on the design and recommendations on what to do differently.
For now, though, we are still very much in our technological infancy. Some of the most successful examples of AI in action are still simply about a machine learning to separate images of, say, cats from elephants. That said, its potential is being recognised by many businesses eager to stay ahead of the curve, from the introduction of chatbots that can undertake sentiment analysis, to the deep analysis of complex legal contracts to identify errors and classify issues.
Understanding the capabilities and interconnections of the entire system
There’s much more to do to create the conditions for these technologies to merge successfully – a merger that will transform both in the process. This is something we’re seeing already.
HPC modelling typically applies scientific equations to a structured mesh of data points, then uses a powerful computer to simulate an environment across all the points on that grid. The more points the grid has, the longer the HPC modelling takes. Achieving pin-point accuracy through ultra-fine precision, in particular, takes time. AI can help here by learning new options for coarse meshes that achieve minimal error.
In doing so, it allows HPC applications to return meaningful results more quickly, reducing the HPC resources needed and speeding up the development time. And, as the AI gets smarter, the HPC gets more and more efficient.
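The coarse-versus-fine trade-off is easy to see on a toy problem. The sketch below (plain Python, no HPC machinery) approximates an integral with the trapezoid rule at two mesh resolutions: the finer grid costs more points but cuts the error, which is exactly the budget an AI-chosen mesh tries to optimize.

```python
import math

def trapezoid(f, a, b, n):
    """Trapezoid-rule integral of f over [a, b] using a mesh of n intervals."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * total

exact = 2.0                                   # integral of sin(x) over [0, pi]
coarse = trapezoid(math.sin, 0.0, math.pi, 8)
fine = trapezoid(math.sin, 0.0, math.pi, 128)

print(f"coarse mesh error: {abs(coarse - exact):.2e}")
print(f"fine mesh error:   {abs(fine - exact):.2e}")
```

An AI-assisted workflow would learn where a coarse mesh is good enough and where refinement is worth the extra compute, rather than refining uniformly everywhere.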
Another good, demonstrable example to illustrate how the convergence is taking place and the benefits of it happening is to consider how the technology could readily advance weather forecasting. Weather modellers typically focus on understanding environmental changes at a macro level, modelling storm systems as they move across oceans and continents. Thanks to HPC modelling and simulation, meteorologists can track and, with precision, anticipate how a hurricane will move and who will be in harm’s way as landfall looms large.
However, smaller-scale weather developments, like hail showers or tornadoes, can be impossible to predict because of the large scale modelling resolutions that are chosen to track mega storms and continental weather patterns. Adding AI to the HPC modelling and simulation workflow in these instances could accelerate discovery. AI algorithms can be used to treat the output from traditional models as a data-rich input source, learning from the simulations and spotting potential, illuminating patterns.
The AI would draw on the previous weather history data and determine smaller areas to focus on based on the patterns and correlations in the older data. In doing so, meteorologists could create focused, fine mesh weather models that are able to predict weather risks at a variety of levels and increase the value of the data that they provide as a service to others. HPC already drives value for companies the world over but by coupling AI with HPC, this value can grow to levels previously unseen.
Scalability and your data centre strategy
HPC-AI – or perhaps it would be better to call it Intelligent-HPC (I-HPC) – will undoubtedly drive a major paradigm shift in computing, data analytics and processing overall. Yet, at the data centre level, the need for a highly optimised and secure environment – where scalability and performance are prioritised – remains unchanged. Indeed, end-to-end high bandwidth and low latency are both a must for rapidly adapting to the evolving requirements of these systems.
For business, the important thing to remember is that I-HPC is computationally intensive. Evaluating the current physical HPC infrastructure of a data centre together with additional elements such as processors from a performance point of view, must be undertaken holistically, encompassing a wide range of capabilities – from software, storage to skills. In order to integrate new technologies, businesses need to understand their current technology stack in detail. And, by understanding those technologies and how they are applied, there is plenty to be gained.
Traditional HPC has historically demanded the highest integrity in terms of the data centre infrastructure. Many HPC applications are stateful applications that iterate over a set of data that is shared on blindingly fast storage platforms for the duration of the supercomputing simulation. However, AI-enabled machine learning algorithms may not require the same levels of critical performance from the underlying infrastructure.
During the training phase of a typical DNN, for example, iterative calculations can be executed on purpose-built hardware that is coupled to the specific operational standards of the supporting data centre infrastructure. When coupled with innovative low-resiliency data centre products, a business can achieve not only improved efficiency for its I-HPC applications but also dramatic cost savings.
Lessons from the previous generation
As with traditional HPC needs, there is no one-size-fits-all solution for I-HPC. But the convergence of HPC and AI and the innovation taking place stands to benefit all parties. While traditional HPC solutions have not been built with cloud in mind, AI applications are at the cutting edge; many developers are leveraging the newest DevOps tools to deliver truly federated, cloud-native applications. This is exciting.
However, it’s worth also remembering that the developers of the latest in AI technology do not have decades of experience in deploying and optimising massively parallel architectures. As AI algorithms demand more and more parallelism to achieve fast and accurate results, HPC native tools such as mpi will push AI applications further. There is little need – or indeed a business case – to develop exceptionally expensive GPU cards and the supporting custom software from scratch or on a case by case basis when these compute-intensive fundamentals have been perfected by the generation that developed supercomputers already.
Ultimately, the convergence of AI with HPC will transform computing systems as we know it but it’s a natural progression – with many benefits ahead. Not only will the advent of I-HPC enhance collaboration across business units, allow for experimentation and uncover capabilities at a human level, it will also improve processes and create greater agility.
Indeed, I-HPC will make it possible to address unique workloads and challenging situations faster than ever before – facilitating not just the next big discovery or strategic breakthrough, but also revolutionising the way organisations operate, learn, collaborate, and grow. | <urn:uuid:4971ebaa-d7be-4afb-b5aa-07dec9e58725> | CC-MAIN-2024-38 | https://www.information-age.com/intelligent-high-performance-computing-11884/ | 2024-09-17T19:52:02Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651829.57/warc/CC-MAIN-20240917172631-20240917202631-00721.warc.gz | en | 0.94439 | 1,714 | 2.65625 | 3 |
In June 2020, Apple announced its plans to transition the Mac to what they called “a world-class custom silicon to deliver industry-leading performance and powerful new technologies.” Their custom silicon is based on ARM SoC (System-on-Chip) architecture, and is an evolution from the chips that powered Apple’s iPhones and iPads for more than half a decade. The Apple announcement also claims the new family of SoCs, custom built for the Mac, will lead to higher performance per watt, better performing GPUs, and that access to a neural processing engine will make the Mac an amazing platform for developers to use machine learning.
In November 2020, Apple announced M1 as the most powerful chip it had ever created, and the first chip designed specifically for the Mac. It further claimed the M1 was the world’s fastest CPU core in low-power silicon, best CPU performance per watt, fastest integrated graphics in a personal computer, and breakthrough machine learning performance with the Apple neural engine. The power consumption and thermal output reports of the Mac mini show a significant reduction on both fronts when compared to its 2018 counterpart with Intel processors.
In this article, we’ll look into two ARM hardware architectural features that have powered the advances of ARM-based mobile processors, including the Apple M1, on both overall performance and performance per watt. Here, performance per watt refers to the ratio of peak CPU performance to average power consumed.
System-on-Chip (SoC) is an integrated circuit that combines many components of a computer onto a single substrate. The primary advantage of SoC architecture over CPU-based PC architecture is its size. Along with microprocessors, SoCs could come integrated with one or more memory components, a graphics processing unit (GPU), digital signal processors (DSP), neural processing units (NPU), I/O controllers, custom application-specific integrated circuits (ASIC), and more. The integrated design of these components also means that they can be developed with a unified approach for performance and energy efficiency, and deliver more performance per watt compared to their PC equivalents. The energy efficiency combined with the small form factor of the SoC-based chips make them ideal for the mobile, consumer, wearable, and edge computing markets.
Devices of the future like AR smart glasses are getting lighter and smaller in form, but more demanding on performance to run complex and advanced multi-compute workloads. So, future SoC-based ARM architecture designs will strive to achieve higher performance with even smaller form factors, and power enveloped with performance delivered per watt being the new paradigm.
ARM big.LITTLE technology is a heterogeneous processing architecture which uses two different types of processors arranged as two clusters. Each cluster contains the same type of processor. The ”LITTLE” processors are designed for maximum power efficiency, while the ”big” processors are designed to provide maximum compute performance. Both types of processors are coherent and share the same instruction set architecture (ISA). The Apple M1 is an example of an SoC chip built with ARM big.LITTLE technology, and has four ‘big’ high-performance cores called “Firestorm,” and four “LITTLE” energy-efficient cores called “Icestorm.” Each task can be dynamically allocated to a big or LITTLE core depending on the instantaneous performance requirement of that task. With the combination of processors, the system can deliver peak performance on demand with maximum energy efficiency while staying within the thermal bounds of the system.
ARM big.LITTLE technology has been designed to address two main requirements:
- At the high performance end — High compute capability within the system’s thermal bounds
- At the low performance end — Very low power consumption
The big.LITTLE system has two major software execution models:
- CPU migration: In this model, each big core is paired with a LITTLE core. Only one core in each pair is active at any one time, with the inactive core being powered down. The active core in the pair is chosen according to current load conditions. On a system identical to the Apple M1, the operating system sees four logical processors. Each logical processor can physically be a big or LITTLE processor, and this choice is driven by dynamic voltage and frequency scaling (DVFS). This model requires the same number of processors in both clusters.
- Global task scheduling: In this model, the scheduler is aware of the differences in the performance and energy characteristics of the big and LITTLE cores. The scheduler tracks the performance requirement for each individual thread, and uses that information to decide which type of processor to use. Unused processors can be powered off, and if this includes all processors, the cluster itself can be powered off. This model can work on a big.LITTLE system with any number of processors in any cluster.
Big.LITTLE architecture can have at most two clusters with a maximum of four processors of the same type in each cluster. This architecture combines two types of processors with different microarchitectures, but the same architecture to create an energy-efficient compute solution.
Shared virtual memory
On traditional PCs, a discrete GPU was built separate from the main CPU and had its own separate memory. Later systems came with integrated GPUs built into the processor with the ability to access and reserve memory for their own use. But discrete or integrated applications using the GPU for graphics processing, or more general compute purposes, were still required to move data back and forth between the main CPU memory and GPU memory, and incur latency and power penalties.
With SoC-based architectures, systems can have a unified memory architecture for all built-in compute units, including the CPU, GPU, NPU, and others, as they are all collocated and can access the main memory. However, though the SoC design makes it physically possible to have a unified view of the memory, it would still require support from the hardware memory management units, the operating system software, and the application software APIs for multiple compute workloads to fully realize the performance benefits. The Apple M1 unified memory architecture is an example of shared virtual memory (SVM) architecture.
SVM allows different processors to see the same view of available memory. So, within the virtual address space of a single application, all compute units like CPU and GPU using the same virtual address actually refer to the same physical memory location. With this architecture, modern complex workloads that require machine learning, image processing, graphics rendering, and more, can seamlessly leverage the available heterogeneous compute resources by passing pointers to data between them rather than moving data around.
Though different processors have the same view of the memory, each processor has its own private memory cache which poses the problem of maintaining cache coherency between the different processors. ARM introduced coherency extensions (ACE) to their bus architecture (AMBA) protocol that allows for hardware coherence between processor clusters. For example, in a system with two processor clusters, any shared access to memory can ‘snoop’ into the other cluster’s caches to see if the data is already on chip; if not, it is fetched from external memory. The AMBA 4 ACE bus interface extends the hardware cache coherency outside of the processor cluster and into the system.
With hardware cache coherency extending into the system, the GPU can now read any shared data directly from the CPU caches, and writes to shared memory will automatically invalidate the corresponding lines in the CPU caches. Hardware coherency also reduces the cost of sharing data between CPU and GPU, and allows tighter coupling.
ARM has followed up their two (max 4-core) homogeneous cluster big.LITTLE architecture with a new single cluster DynamIQ architecture with up to eight heterogeneous CPUs. The ability to build heterogeneous compute units onto a single SoC chip, and allow them to share a single view of the memory provides flexibility in designing solutions to fit the diverse complex and multi-compute needs of modern consumer, mobile, wearable, automotive, and edge devices.
Looking to learn more about the technical innovations and best practices powering cloud backup and data management? Visit the Innovation Series section of Druva’s blog archive. | <urn:uuid:2234334e-b5e9-4cdf-8b7c-05dd77925d5b> | CC-MAIN-2024-38 | https://www.druva.com/blog/exploring-arm-and-heterogeneous-compute-architecture | 2024-09-20T07:12:01Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652138.47/warc/CC-MAIN-20240920054402-20240920084402-00521.warc.gz | en | 0.931055 | 1,704 | 2.828125 | 3 |
Jad Jebara, President & CEO at Hyperview, looks at the technologies that are turning green data centres into a reality.
Data centres already consume around 1% of the world’s electricity supply. With new research revealing that data centre capacity will nearly triple by 2029; how will our planet support such a demand?
In an era characterised by the rapid transformation of our world through technology, data centres emerge as the often-overlooked champions driving the digital revolution. These immense hubs of computing power serve as the backbone of our digital infrastructure. As the need for data surges, so too does the energy consumption of these facilities.
It’s a problem that the very best of the data centre infrastructure management (DCIM) software sector is dedicated to solving by bringing forth the sustainable, efficient and modern data centre of the future.
A future ruled by data
By the time you finish reading this sentence, there will be more data existing on this planet than when you started. Some estimates put the amount of data created every day at 328.77 million terabytes, with 90% of the world’s data being generated in the last two years alone. It’s a number too high for any human brain to comprehend and it will only get bigger.
Recent findings from Synergy Research Group revealed that data centre capacity is poised to nearly triple by 2029, a surge driven primarily by hyperscalers gearing up for the demands of high-intensity generative AI workloads. Afterall, data is the fuel of AI.
Along with these findings is Gartner’s recent prediction that by 2026, 50% of G20 members will struggle with monthly electricity rationing. This will make energy-efficient operations a major component of long-term business success. With a looming energy crisis and a surge in data, the phrase ‘the time to act is now’ may sound like a broken record, but it’s never been truer for data centres.
Sustainability is no longer a ‘nice to have’; it’s a requirement that regulators and international bodies are racing to implement. California’s new emissions disclosure law requires businesses to report not only the emissions from their facilities but also those from their supply chains. As of January 2023, the EU’s Corporate Sustainability Reporting Directive came into force, strengthening the rules concerning the social and environmental information companies must report.
Data centre operators can respond swiftly to this requisite by implementing a range of innovative solutions that will allow them to reduce their environmental impact and better understand their energy usage overall. The stakes have never been higher, and collective, immediate action is crucial to see real change.
Sustainable solutions for the growing data centre landscape
Modelling and predictive analytics tools
Incorporating modelling and predictive analytical tools within operations empowers data centre managers to forecast future energy needs accurately. As data centre capacity expands, this forecasting capability becomes indispensable in optimising operations for maximum efficiency. It allows proactive planning, ensuring that energy consumption aligns with actual requirements, preventing unnecessary resource allocation and further supporting sustainability objectives.
Real-time visibility into energy sage
Real-time visibility into energy usage allows operators to discern patterns and identify areas of energy waste, thereby facilitating precise interventions to eliminate inefficiencies. The real-time insights offered by DCIM solutions not only enhance operational efficiency but also align seamlessly with emissions reduction goals, contributing to a more sustainable and environmentally conscious data centre landscape.
Cooling, power provisioning, and asset utilisation
Innovative DCIM tools drive higher efficiency in crucial areas such as cooling, power provisioning, and asset utilisation. By optimising these facets, data centres can significantly reduce their overall energy consumption. This multifaceted approach ensures that the data centre operates at peak efficiency, minimising unnecessary energy expenditure and subsequently reducing its environmental impact.
Optimal temperature tools
By ensuring critical components operate at optimal temperatures, the risk of overheating and system failures is mitigated. This not only improves the overall performance and lifespan of the equipment, but also prevents energy wastage associated with the consequences of operational failures.
The conversation around data centre sustainability cannot be overstated. DCIM solutions offer a roadmap for businesses and data centre managers to navigate the challenges posed by the impending surge in data capacity. Acting swiftly will not only mitigate the environmental impact but also ensure the resilience and longevity of our digital infrastructure. After all, the decisions we make today will shape the sustainability of our data-driven world for years to come. | <urn:uuid:63c3e008-7ce4-4a97-b88e-23147a77573a> | CC-MAIN-2024-38 | https://datacentrereview.com/2024/01/is-dcim-key-to-green-data-centres/ | 2024-09-08T04:19:10Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650958.30/warc/CC-MAIN-20240908020844-20240908050844-00721.warc.gz | en | 0.925372 | 930 | 2.6875 | 3 |
Information security generally refers to defending information from unauthorized access, use, disclosure, disruption, modification or deletion from threats. Organizations are constantly facing threats that exist both externally as well as internally — be they from nation states, political activists, corporate competitors or even disgruntled employees.
Defending an organization from these threats is hard because it requires a significant amount of effort, insight and investment. It’s also difficult for non-technical users to appreciate its importance; that is, until a security breach cripples or even destroys even the most carefully constructed organization. To such an extent, it is important to understand the concept of defence in depth when tasked with defending an organization from threats.
It is critical to understand that security is always “best effort”. No system can ever be 100% secure because factors outside of the designers’ control might introduce vulnerabilities. An example of this is the use of software that contains 0-day bugs — undisclosed and uncorrected application vulnerabilities that could be exploited by an attacker.
Defence in depth is a principle of adding security in layers in order to increase the security posture of a system as a whole. In other words, if an attack causes one security mechanism to fail, the other measures in place take arms to further deter and even prevent an attack.
Comprehensive strategies for applying the defence in depth principle extend well beyond technology and fall into the realm of the physical. These can take the form of appropriate policies and procedures being set up, training and awareness, physical and personnel security, as well as risk assessments and procedures to detect and respond to attacks in time. These measures, crucial though they might be, are only but physical measures to preventing what is ostensibly an information security problem.
This is the start of a series of articles that will focus on how defence in depth principles could apply to web applications and the network infrastructure they operate within. This six part series will also offer a number of pointers (that is by no means exhaustive) which can be used to improve the security of web applications.
An introduction to defence in depth and how it applies to web applications
Get the latest content on web security
in your inbox each week. | <urn:uuid:12073782-a049-4a87-8b8b-87b7ebf646d5> | CC-MAIN-2024-38 | https://www.acunetix.com/blog/articles/defence-in-depth-and-how-it-applies-to-web-applications-part-1/ | 2024-09-09T08:44:22Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651092.31/warc/CC-MAIN-20240909071529-20240909101529-00621.warc.gz | en | 0.958991 | 441 | 3.03125 | 3 |
What is Business Intelligence? Definition & Example
BI(Business Intelligence) is a set of processes, architectures, and technologies that convert raw data into meaningful information that drives profitable business actions.It is a suite of software and services to transform data into actionable intelligence and knowledge.
Original Article Source Credits: Guru99 , https://www.guru99.com/
Article Written By: NA
Original Article Posted on: NA
Link to Original Article: https://www.guru99.com/business-intelligence-definition-example.html | <urn:uuid:9f34b012-a941-4e9f-abe0-d69b19e4852b> | CC-MAIN-2024-38 | https://www.monitor24-7.com/blogs/global-itsm-news/547522-what-is-business-intelligence--definition---example | 2024-09-09T09:40:12Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651092.31/warc/CC-MAIN-20240909071529-20240909101529-00621.warc.gz | en | 0.817637 | 112 | 2.59375 | 3 |
Social Engineering Testing
Social Engineering is a technique used by hackers to bypass the actual “hacking” and receive information they seek directly from the person who has the information. Social engineering is basically asking people for confidential information under the guise of another individual or organization. This can be done in person, via email, on the phone, using simple or sometimes sophisticated techniques. Employees are often the weakest link in an organization’s security posture.
Our Services for social engineering testing
We test your vulnerability to social engineering attacks using different techniques which may include:- Accessing secure business areas through in-person social engineering attempts
- Obtaining private information via vishing, phishing, smishing or in person.
- Testing through targeted spear phishing or whaling attacks
- Clean desk policy reviews
- Detailed report with findings and recommendations. | <urn:uuid:c8cc8158-85f5-4665-9eec-d0702467e39d> | CC-MAIN-2024-38 | https://www.24by7security.com/services/cyber-security-services/social-engineering-testing/ | 2024-09-10T12:58:19Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651255.81/warc/CC-MAIN-20240910125411-20240910155411-00521.warc.gz | en | 0.920443 | 172 | 2.6875 | 3 |
The COMMIT statement terminates the current transaction. Once committed, the transaction cannot be aborted, and all changes it made become visible to all users through any statement that manipulates that data.
Note: If READLOCK=NOLOCK is set, the effect of the transaction is visible before it is committed. This is also true when the transaction isolation level is set to read uncommitted.
The COMMIT statement can be used inside a database procedure if the procedure is executed directly, using the execute procedure statement. However, database procedures that are invoked by a rule cannot issue a COMMIT statement: the commit prematurely terminates the transaction that fired the rule. If a database procedure invoked by a rule issues a COMMIT statement, the DBMS Server returns a runtime error. Similarly a database procedure called from another database procedure must not issue a COMMIT because that leaves the calling procedure outside the scope of a transaction.
Note: This statement has additional considerations when used in a distributed environment. For more information, see the Ingres Star User Guide. | <urn:uuid:afc76482-9dc5-4d77-9cae-28d47cf5c636> | CC-MAIN-2024-38 | https://docs.actian.com/ingres/10S/OpenSQL/Description_3.htm | 2024-09-14T06:35:27Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651559.58/warc/CC-MAIN-20240914061427-20240914091427-00221.warc.gz | en | 0.877266 | 211 | 2.828125 | 3 |
Network sniffing, the technique of capturing data packets traversing a network, is a double-edged sword in the realm of cyber security. While ethical hackers leverage it to glean valuable insights into network operations and user behavior, malicious actors employ network sniffing as a formidable tool for cyberattacks. Here, we delve into the nuances of network sniffing, its inherent risks, and robust countermeasures.
Network sniffing is the interception and analysis of network traffic with the aim of extracting sensitive information. It exploits the nature of data traversing a network—often transmitted in unencrypted formats due to obsolete or vulnerable network protocols. Network sniffers, the tools used for capturing these packets, can read and interpret this data effortlessly.
Data sent across a network is broken down into smaller packets, each carrying information like the sender’s and receiver’s IP address, the protocol type used, and the actual data. Sniffing allows the extraction of sensitive information from these packets, such as usernames, passwords, website access data, and emails.
Network sniffing brings forth significant threats to information security and user privacy. Here are some key risks associated with network sniffing:
Several preventive measures can mitigate the risks of network sniffing and bolster information security:
While implementing a combination of these measures can mitigate the risk of network sniffing, there is no one-size-fits-all solution.
To detect network sniffing and potential suspicious activities, tools and techniques such as network traffic analysis, intrusion detection systems, log analysis, network device monitoring, honeypots, and user behavior analysis can be employed. However, detecting sniffing activities requires constant vigilance and sometimes a deeper analysis to confirm the presence of an actual attack.
There have been several notable instances of network sniffing attacks, two of which stand out:
Network sniffing poses a significant threat to information security and user privacy. The use of traffic analysis tools, intrusion detection, and system log monitoring can help detect sniffing attempts. However, vigilance and adherence to good security practices are essential in mitigating this ever-evolving threat. Offering the best cyber threat intelligence, our company is committed to helping you safeguard your networks against such vulnerabilities. Our diverse suite of services ensures that you can browse safely, knowing your data is secure.
If you have read this blog post so far, you must have enjoyed it. If so, don’t miss our future publications. See you soon.
Powered by our internal unit Cluster25, we deploy our Cyber Threat Intelligence solution globally, delivering nation-state caliber cyber threat intelligence and expert assistance to governmental and private organizations. Through our remote network protection solution, we also extend office-grade protection to hybrid workforces and their assets, guarding against financial and reputational damages in untrusted networks. With DuskRise, organizations can rest assured that their people and resources are safe and secure.
Schedule a demo today and secure your assets for good. | <urn:uuid:1b762cc4-a64c-4ea6-9444-5b7def982ffe> | CC-MAIN-2024-38 | https://www.duskrise.com/2023/08/03/network-sniffing-unpacking-the-threat-and-how-to-prevent-it-2/ | 2024-09-14T07:17:00Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651559.58/warc/CC-MAIN-20240914061427-20240914091427-00221.warc.gz | en | 0.917853 | 601 | 2.734375 | 3 |
I have covered more about Bitcoins in some of my previous articles, but there are also other promising digital currencies which use cryptography & blockchain technology and are catching up in terms of being cost effective and promising high returns.
Please refer to my links below to learn all the basics of Bitcoin & Blockchain:
- All You Wanted To Know About Bitcoins
- Blockchain Technology Part 1 : What and Why ?
- Smart Contract A Blockchain Innovation for Non-Techies
Types Of Cryptocurrency Existent In The Market:
Bitcoins: To know more, refer to this link: All You Wanted To Know About Bitcoins
Today we will cover Ripple in detail to help you understand its basics.
1. What Is Ripple?
As per ripple.com :
Ripple connects banks, payment providers, digital asset exchanges and corporates via RippleNet to provide one frictionless experience to send money globally.
Ripple is quite different from other cryptocurrencies in that it doesn’t have its own public blockchain. Internally, the XRP Ledger network runs on an internal blockchain which they call an “Enterprise blockchain” ledger; it doesn’t use proof-of-work, and little else is known about it.
Ripple functions like an RTGS (real-time gross settlement) system and is a digital currency exchange developed by the Ripple company. Ripple was released officially in 2012 with the intent to enable instant & secure financial transactions of any size with no chargebacks. It is currently the world’s third largest digital currency.
Is Ripple More Like Bitcoin?
Ripple does share some technical similarities with Bitcoin, but it is quite different. Like many digital currencies it uses cryptography to secure transactions, but it doesn’t actually have a public blockchain like Bitcoin or Litecoin. It came into existence mainly to transform the payment transfer system, for the purpose of sending instant and secure transactions across network participants, and that is what it is best known for.
Ripple is more like a payment system, remittance network, and currency exchange, and Ripple has now built its network around its own cryptocurrency, Ripple or XRP.
The Ripple Transaction Protocol, or simply the Ripple Payment Protocol, uses a distributed open-source Internet protocol, a consensus ledger & a native cryptocurrency called XRP (Ripples). Ripple is built around a shared database, or public ledger, which uses a consensus mechanism to allow payments, exchanges and remittances in a distributed process.
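The consensus-ledger idea can be sketched in a few lines. This is a drastically simplified illustration of supermajority voting among validators, not the actual XRP Ledger Consensus Protocol; the 80% quorum threshold and the dict-based validator model are assumptions made purely for the example.

```python
# Toy sketch of quorum-based consensus: a transaction is applied to the
# next ledger only if a supermajority of validators have seen it as valid.
# (Illustrative only -- the real protocol is far more involved.)

QUORUM = 0.80  # assumed fraction of validators that must agree

def reach_consensus(validators, proposed_txns):
    """Return the candidate transactions that pass the quorum vote."""
    agreed = []
    for txn in proposed_txns:
        votes = sum(1 for v in validators if txn in v["seen_valid"])
        if votes / len(validators) >= QUORUM:
            agreed.append(txn)
    return agreed

validators = [
    {"name": "v1", "seen_valid": {"pay A->B 10", "pay C->D 5"}},
    {"name": "v2", "seen_valid": {"pay A->B 10", "pay C->D 5"}},
    {"name": "v3", "seen_valid": {"pay A->B 10"}},
    {"name": "v4", "seen_valid": {"pay A->B 10", "pay C->D 5"}},
    {"name": "v5", "seen_valid": {"pay A->B 10", "pay C->D 5"}},
]

next_ledger = reach_consensus(validators, ["pay A->B 10", "pay C->D 5"])
print(next_ledger)  # ['pay A->B 10', 'pay C->D 5'] -- both reach >= 80% agreement
```

Because no single party decides alone, the shared ledger stays consistent even though it is maintained by many independent validators.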
The goal of the Ripple system, according to its official web portal, is to enable people to break free of the “walled gardens” of financial networks — i.e., credit cards, banks, PayPal and other institutions that restrict access with fees, charges for currency exchanges and processing delays.
How Ripple Works?
Let’s understand its working mechanism through a simple example. Here the Sender will send $500 to the Receiver; both parties involved in the transaction live in different cities.
Sender = Jim, Receiver = Rony, Sender’s Side Agent = Krish, Receiver’s Side Agent = Tim.
Jim gives Krish the money to send to Rony, along with a transaction password that Rony is required to answer correctly to receive the amount. Krish alerts Rony’s agent, Tim, about the transaction details: the amount of funds to be transferred to Rony and the password Tim must validate on Rony’s side. Rony has to respond with the right password; only then will Tim release the funds from his side. Once Rony does so, he gets the required amount. But now Krish owes $500 to Tim. Tim can record Krish’s debt as an IOU, which Krish will pay on an agreed date, or the two can make immediate counter-transactions to balance out the debt.
This explanation should give you some basic idea of how Ripple transaction systems work. The key here is the trust-based mechanism at work between the sender, the receiver, and the local agents involved as security agents.
What the original Ripple project sought to accomplish is effectively the democratization of this idea: everyone can be their own bank, issuing, accepting and acting as a conduit for loans all at the same time.
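The agent-and-IOU flow above can be modeled with a small sketch. The names (Jim, Rony, Krish, Tim), the passwords, and the in-memory dict ledger are purely illustrative assumptions; the real network tracks such obligations as trust lines on a distributed ledger.

```python
# Toy model of the IOU mechanism described above: funds are released only
# when the receiving side presents the correct password, and the agents'
# resulting debts are netted out by counter-transactions.

class IOULedger:
    """Tracks who owes whom, netting out counter-debts as they appear."""

    def __init__(self):
        self.debts = {}  # (debtor, creditor) -> amount owed

    def record_debt(self, debtor, creditor, amount):
        # If the creditor already owes the debtor, net the two positions
        # (this mirrors Krish and Tim balancing debts via counter-transactions).
        reverse = self.debts.get((creditor, debtor), 0)
        offset = min(reverse, amount)
        if offset:
            self.debts[(creditor, debtor)] = reverse - offset
            amount -= offset
        if amount:
            self.debts[(debtor, creditor)] = self.debts.get((debtor, creditor), 0) + amount

    def owed(self, debtor, creditor):
        return self.debts.get((debtor, creditor), 0)


def transfer(ledger, sender_agent, receiver_agent, amount, password, answer):
    # The receiver must answer with the correct password before funds release.
    if answer != password:
        return False
    # The receiver's agent pays out locally; the sender's agent now owes them.
    ledger.record_debt(sender_agent, receiver_agent, amount)
    return True


ledger = IOULedger()
ok = transfer(ledger, "Krish", "Tim", 500, password="tulip", answer="tulip")
print(ok)                           # True: Tim pays Rony locally
print(ledger.owed("Krish", "Tim"))  # 500: Krish now owes Tim

# A later counter-transaction in the other direction nets the debt out.
transfer(ledger, "Tim", "Krish", 500, password="rose", answer="rose")
print(ledger.owed("Krish", "Tim"))  # 0
```

Note how no money actually crosses the border in this model: only trusted obligations between agents move, which is exactly what makes the settlement instant.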
Ripple’s Use Cases:
Let’s discuss its use cases sector-wise:
1. Banks:

Ripple is going to revolutionize the way the banking system works. It will transform cross-border, inter-bank & intra-bank transactions. Many global banks are joining Ripple’s RippleNet product to process cross-border payments in real time with end-to-end tracking and certainty. Through RippleNet technology, banks can quite easily roll out payment offerings into new & emerging markets that are otherwise too difficult or expensive to reach.
Benefits It Offers To The Banking Sector:
A. Increase Revenue & Meet Global Customer Demands:
Ripple creates new revenue avenues for banks to increase their customer base globally and cater to financial transaction needs at the cross-border level. It allows banks to increase their global customer footprint at very low cost & at blazing speed.
B. Lower Expenses:
It helps banking institutions reduce total transaction costs with fewer liquidity requirements.
C. Amazing & Seamless Transaction Experience:
Ripple’s RippleNet allows banks to set protocols & governance practices which are state of the art, highly consistent, reliable & most importantly secure.
D. One Convergence Point
Plug in once to transact with any other RippleNet member bank.
2. Payment Providers:
Ripple helps various payment lenders attract customers across the globe. It is faster and more transparent & helps you cater in volume to every nook & corner of the world.
Benefits It Provides:
A. Cater To A Larger Customer Base:
Distribute payments across borders in large volume through its differentiated, real-time cross-border payment services.
B. One Integration For Greater Outreach
Integrate once to connect with all financial institutions across borders using its RippleNet technology.
C. Transparent & Predictable Payment Solutions:
Payments arrive in real time with full transparency about the fees deducted and the transaction rate, and settlement is seamless & hassle-free.
3. Digital Exchanges:
Enterprises looking to source liquidity in a more reliable & secure fashion can turn to Ripple's XRP-based solutions, which are well suited for large banks & payment providers that need liquidity for large payments.
Why XRP (Ripple's Protocol) for Payment Transactions:
1. It Is Stable:
To date, every ledger initiated has closed successfully, with no issues reported across 30 million transactions.
2. It Is Super Scalable:
Looking to scale your payments business? Ripple claims XRP can handle up to 50,000 transactions per second.
3. Robust Digital Payment Infrastructure & a Great Team:
Ripple's payment technology is scalable and settles assets smoothly. Its open-source codebase is maintained by a full-time team of engineers & architects, which makes the product line below reliable and consistent for financial transaction needs.
A. xCurrent: For Payment Processing
B. xRapid: For Liquidity Sourcing
C. xVia: For Sending Payments
4. It Is Decentralized:
Ripple currently boasts 55 global payment validators and claims to be growing fast.
How To Buy Ripple Coins?
Like other cryptocurrencies, XRP is available on several popular exchanges. According to Ripple.com, some of the authentic exchanges that deal in XRP are:
A. Bitstamp: Deals in XRP/EUR, XRP/USD, and XRP/BTC trading pairs
B. Kraken: Deals in XRP/BTC, EUR, USD
C. GateHub: Deals in XRP/USD, JPY, CNY, EUR, BTC, ETH
D. BTCXIndia: Deals in XRP/INR (for investors in India)
E. Coinone: Deals in XRP/KRW, BTC
F. Bitso: Deals in XRP/MXN, BTC
G. Coincheck: Deals in XRP/JPY, BTC
& Many More.
For comprehensive details, please visit: https://ripple.com/xrp/buy-xrp/
There you will find the full listing of exchanges that deal in buying & selling cryptocurrency. Follow the respective "How To Invest" link for your country and you will be able to get started quickly. It shouldn't be a daunting task.
Which Banks Are Using It & Why:
According to McKinsey's 2015 Global Payments Report, the value of global payments made in 2015 totaled $30.3 trillion and is expected to grow 6% per year over the next five years.
As per Ripple their technology is designed to lower total cost of payment settlement which can help banks to transact directly and with real-time certainty of settlement. By upgrading existing infrastructure, Ripple increases end-to-end processing efficiencies and can ultimately allow banks to make new business models economically viable through lower costs.
Ripple claims 60% cost savings for an average payment size of $500, which may be why many global banks are adopting its solution.
Some of them are:
- Axis Bank
- Yes Bank
- UniCredit
& Many More ..
Ripple's Current Market Details (Source: Ripple.com):
Market Cap – $47.13 Billion USD
Current Price (As of 15 Dec 2017):
XRP/USD = 0.73314
Ledger Close Time – 3.33 Seconds
Network Transaction Fee — $0.0000561 USD
Transactions Per Second — 13.36 / 1,000+ TPS
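These figures lend themselves to some quick back-of-the-envelope arithmetic. The sketch below (plain Python; the fee and close-time values are the ones quoted above, and the calculation itself is illustrative, not from any Ripple tool) estimates the total fee for a large payment volume and how many ledgers close in a day:

```python
# Back-of-the-envelope math using the figures quoted above.
fee_usd = 0.0000561          # network fee per transaction, USD
close_time_s = 3.33          # average ledger close time, seconds

# Cost of pushing one million payments through the network:
cost_million = fee_usd * 1_000_000

# Ledgers closed in a 24-hour day at the quoted close time:
ledgers_per_day = (24 * 60 * 60) / close_time_s

print(f"Fees for 1M transactions: ${cost_million:.2f}")
print(f"Ledgers closed per day:  {ledgers_per_day:,.0f}")
```

At the quoted fee, a million payments cost only about $56 in network fees, which is the kind of economics the article is pointing at.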
Some of my other technical articles that may interest you:
- NLP Fundamentals: Where Humans Team Up With Machines To Help It Speak
- Getting Started With Kotlin Language : A Beginners Guide
- Getting Started With Android & Kotlin : A Kickstarter Guide
- Getting Started With Node.js : A Beginners Guide
- All About Node.Js You Wanted To Know ?
- A Guide to MVP: Minimum Viable Product
The rapid advancement of technology has propelled artificial intelligence (AI) and machine learning (ML) to the forefront of innovation across industries.
As developers harness the potential of these technologies, ethical considerations have emerged as a critical aspect of the process. The decisions made by developers can shape not only the functionality of AI and ML systems but also their impact on society.
This article delves into the ethical considerations that developers face in the realm of artificial intelligence software development services, exploring the challenges and proposing strategies to navigate the complex landscape of AI and ML software development (SWD).
The Rise of AI and ML
The rise of artificial intelligence (AI) and machine learning (ML) is one of the most significant technological advancements of our time. AI and ML are already having a profound impact on our world, and their potential for future innovation is even greater.
AI and ML technologies have transformed the way businesses operate, enabling automation, predictive analytics, and personalized user experiences. From recommendation systems in e-commerce to diagnostic tools in healthcare, the applications of AI and ML are vast and diverse. The increasing integration of these technologies into our daily lives underscores the need for ethical considerations to ensure their responsible use.
Ethical Considerations in AI and ML Development
The rapid advancement of artificial intelligence (AI) and machine learning (ML) has brought about a wave of ethical concerns that need to be carefully considered to ensure the responsible and beneficial development and deployment of these technologies. Here are some of the key ethical considerations in AI and ML development:
1. Bias in Algorithms
The issue of bias in AI algorithms is a paramount concern. Machine learning models learn from historical data, and if these datasets contain biases, the AI systems can perpetuate and amplify existing inequalities. Developers must grapple with the challenge of identifying and mitigating bias in algorithms to ensure fair and equitable outcomes.
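A common first step in checking for bias is to compare positive-outcome rates between groups, often called the demographic parity gap. This is a sketch of one fairness metric, not a complete audit; the predictions and group labels below are made up for illustration:

```python
# Demographic parity gap: difference in positive-outcome rates between
# two groups of people. All data here is illustrative.
predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # model decisions (1 = approve)
groups      = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

def positive_rate(preds, grps, group):
    """Fraction of positive decisions for the members of one group."""
    members = [p for p, g in zip(preds, grps) if g == group]
    return sum(members) / len(members)

gap = abs(positive_rate(predictions, groups, "a")
          - positive_rate(predictions, groups, "b"))
print(f"Demographic parity gap: {gap:.2f}")    # 0.00 would be perfect parity
```

A large gap does not prove the model is unfair, but it flags a disparity that the development team should investigate.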
2. Transparency and Accountability
The lack of transparency in AI and ML algorithms poses a significant ethical dilemma. Users often have little understanding of how these systems arrive at decisions, raising concerns about accountability. Engineers must prioritize transparency, providing clear explanations of algorithmic decision-making processes to build trust and allow for external scrutiny.
3. Privacy Concerns
The massive amounts of data required to train AI and ML systems raise significant privacy concerns. Developers must establish robust measures to protect user data, implement privacy-preserving technologies, and adhere to regulations such as GDPR and HIPAA. Striking a balance between data utilization and user privacy is a delicate ethical consideration.
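One privacy-preserving technique alluded to here is differential privacy: calibrated random noise is added to an aggregate statistic before release, so that no individual record can be inferred from the output. A minimal sketch in plain Python (the epsilon value and records are illustrative):

```python
import math
import random

def dp_count(n, epsilon=0.5, sensitivity=1.0):
    """Release a count with Laplace noise calibrated to (epsilon, sensitivity)."""
    scale = sensitivity / epsilon
    u = random.random() - 0.5                  # uniform on [-0.5, 0.5)
    # Laplace sample via inverse-transform sampling.
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return n + noise

records = ["user1", "user2", "user3", "user4"]
print(f"Noisy count: {dp_count(len(records)):.1f}")   # near 4, randomized per release
```

Smaller epsilon values mean more noise and stronger privacy; real deployments also track a privacy budget across repeated queries.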
4. Impact on Employment
As AI and ML systems automate tasks traditionally performed by humans, ethical questions about job displacement arise. Developers must consider the societal impact of their creations, seeking to balance innovation with a social conscience. Initiatives that support reskilling and upskilling can contribute to mitigating the potential negative effects on employment.
5. Security Risks
The integration of AI and ML into critical systems introduces new dimensions of security risks. Developers must grapple with ethical questions related to safeguarding their creations from malicious use. Implementing robust security measures and staying informed about emerging threats is imperative to ensuring the responsible deployment of AI and ML technologies.
Striking a Balance: The Developer’s Dilemma
Developers find themselves at a crossroads, facing the ethical implications of their work. Striking a balance between innovation and responsibility is the essence of the dilemma. To address these ethical considerations, engineers can adopt the following strategies:
1. Diverse and Inclusive Development Teams
Forming diverse and inclusive development teams is a proactive step in mitigating bias in AI algorithms. A variety of perspectives ensures that the design and training of AI systems consider a broad spectrum of experiences, reducing the risk of unintentional biases.
2. Transparent Algorithmic Decision-Making
Prioritizing transparency in algorithmic decision-making is essential. Providing clear explanations of how AI systems arrive at conclusions fosters user trust and allows for external scrutiny, contributing to increased accountability.
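For simple models, transparency can be as concrete as reporting how much each input feature contributed to a decision. The sketch below does this for a linear scoring model; the feature names, weights, and values are invented for illustration:

```python
# Per-feature contribution report for a linear scoring model.
# In a linear model each feature's contribution is just weight * value,
# so the decision can be explained exactly.
weights  = {"income": 0.4, "debt": -0.7, "tenure": 0.2}
features = {"income": 2.0, "debt": 1.5, "tenure": 3.0}

contributions = {name: weights[name] * features[name] for name in weights}
score = sum(contributions.values())

print(f"Score: {score:.2f}")
for name, contrib in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name:>7}: {contrib:+.2f}")
```

For complex models such as deep networks, exact attributions like this are not available, which is why approximate explanation techniques and documentation practices matter so much.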
3. Privacy by Design
Embedding privacy considerations into the SWD process from the outset is crucial. Adopting a “privacy by design” approach ensures that data protection is not an afterthought but an integral part of the programming lifecycle.
4. Ethical Impact Assessments
Conducting ethical impact assessments during the development process can help identify and address potential ethical concerns. This proactive approach allows programmers to mitigate risks and make informed decisions about the impact of their creations.
5. Collaboration with Ethicists and Social Scientists
Engaging ethicists and social scientists in the SWD process can provide valuable insights into the potential societal impact of AI and ML technologies. Collaboration with experts in ethics and social sciences can help them navigate complex ethical dilemmas and ensure a more holistic understanding of the consequences of their work.
As AI and ML technologies continue to shape the future, developers bear a profound responsibility for the ethical implications of their creations. The developer’s dilemma is not a burden but an opportunity to craft a future where technology serves humanity ethically and responsibly. By adopting ethical practices, fostering inclusivity, and prioritizing transparency, programmers can navigate the complex terrain of AI and ML development with a conscientious approach. The choices made today will influence the trajectory of technological progress and its impact on society for years to come.
Frequently Asked Questions
1. How can developers address bias in AI algorithms?
To address bias, developers should ensure diverse and inclusive development teams, carefully curate training datasets, and implement ongoing monitoring and auditing processes to detect and rectify biases.
2. Why is transparency important in AI development?
Transparency is crucial in AI development to build user trust and ensure accountability. Clear explanations of algorithmic decision-making processes help users understand and scrutinize the technology they interact with.
3. How can developers balance innovation with ethical considerations?
Developers can strike a balance by adopting strategies such as diverse and inclusive dev teams, transparent decision-making, privacy by design, ethical impact assessments, and collaboration with ethicists and social scientists.
4. What role do privacy considerations play in AI development?
Privacy considerations are paramount in AI integration. Developers should embed privacy into the design process, adopt privacy-preserving technologies, and adhere to relevant privacy regulations to safeguard user data.
5. How can developers contribute to addressing the impact of AI on employment?
Developers can contribute by actively participating in discussions about the societal impact of AI, advocating for responsible AI use, and supporting initiatives that promote reskilling and upskilling to mitigate the impact on employment.
Methane (CH4) is a common and potent greenhouse gas with a major impact on global climate change. As environmental protection efforts deepen, methane has become an important indicator in environmental monitoring.
Because of their high sensitivity, real-time performance, and accuracy, methane detectors have become indispensable tools in environmental protection. Their introduction and widespread application have raised the scientific and technological level of the environmental protection industry.
Advantages of Methane Leak Detectors
The UAV laser methane detector is a lightweight device designed to be carried by drones. It offers high sensitivity and fast detection response at lower cost, and it reduces operational risk.
Laser methane gas detectors are based on semiconductor laser absorption spectroscopy technology. They can detect parameters such as methane gas concentration in various environments with high accuracy, fast responses, high reliability, and low operating costs. Compared to fixed gas leak detectors, drones are not only a more cost-effective way to solve the problem but also a more efficient one.
Highly sensitive, laser methane gas detectors can pick up even tiny leaks from a height of 300 meters. The sensor is only sensitive to methane, so there is no possibility of false readings due to the presence of other gases. In addition, drones equipped with methane detectors can penetrate small spaces that workers cannot enter or areas with a high-risk index to ensure the personal safety of workers.
In the realm of UAV-based methane measurement, two primary methodologies prevail. One method is the laser-based sensor (TDLAS – Tunable Diode Laser Absorption Spectroscopy), which gauges CH4 absorption within the air column between the sensor and the ground. The second is the “sniffer” approach, which analyzes methane concentration in air samples taken at or near ground level. While both methods offer accuracy, their data collection processes vary significantly.
When employing a “sniffer” sensor, the unit trails a tube directly on the ground or nearby, actively suctioning air during flight. This approach yields conventional methane concentration readings at designated mission locations, with a sampling rate of x measurements per y timeframe.
In contrast, laser-based sensors fly a meticulous grid pattern roughly 20 meters above ground level, using TDLAS technology for data acquisition.
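TDLAS instruments infer how much methane sits in the air column from how strongly the gas absorbs the laser light, following the Beer-Lambert law. The sketch below inverts that law; the absorption coefficient, path length, and intensities are illustrative placeholders, not calibrated values for any real sensor:

```python
import math

def concentration_from_absorption(i_measured, i_emitted, alpha, path_m):
    """Invert Beer-Lambert (I = I0 * exp(-alpha * C * L)) to recover C."""
    return math.log(i_emitted / i_measured) / (alpha * path_m)

# Illustrative numbers: a 30 m air column absorbing 5% of the light.
c = concentration_from_absorption(i_measured=0.95, i_emitted=1.0,
                                  alpha=0.002, path_m=30.0)
print(f"Estimated CH4 concentration: {c:.3f} (arbitrary units)")
```

Real TDLAS sensors sweep the laser wavelength across a specific CH4 absorption line, which is what makes them insensitive to other gases.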
The Challenges of UAV-Methane Detection
Utilizing UAV-based sensors poses several challenges, with safety being paramount. Both methodologies entail safety risks due to low-altitude UAV flights. However, employing a sniffer sensor introduces additional complexities. These include the risk of the unit snagging objects on the ground and difficulties maintaining altitude accuracy, particularly in areas with fluctuating elevations.
Moreover, sniffer drones cover less ground than laser-based drones. This can result in logistical hurdles such as battery management and prolonged project completion times. To ensure reliable methane detection data, it's imperative to minimize the data-collection timeframe so as to limit environmental influences.
Despite concerns, laser-based sensors offer distinct advantages. While pilots must exercise caution to avoid collisions with obstacles such as electric utility infrastructure, the absence of ground contact reduces certain risks. Furthermore, employing a laser altimeter enhances safety by maintaining a consistent altitude above ground level.
Methane leak detector data is georeferenced and seamlessly integrated into GIS systems. Utilizing inverse distance weighted interpolation, hot-spot maps highlighting areas of high methane concentration can be generated, with the option to overlay existing gas system networks from CAD files. Notably, UAV-based methane detection systems offer closer proximity to emission sources, minimizing the impact of wind speeds on data accuracy.
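Inverse distance weighted (IDW) interpolation, mentioned above, estimates the concentration at an unsampled grid cell as a weighted average of nearby readings, with weights falling off with distance. A minimal sketch (the coordinates and ppm readings are made up):

```python
def idw(x, y, samples, power=2):
    """Inverse-distance-weighted estimate at (x, y).

    samples: list of (sx, sy, value) measurement points.
    """
    num = den = 0.0
    for sx, sy, value in samples:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0:                      # exactly on a sample point
            return value
        w = 1.0 / d2 ** (power / 2)      # weight falls off as distance^-power
        num += w * value
        den += w
    return num / den

# Made-up methane readings (ppm) at grid coordinates.
readings = [(0, 0, 2.0), (10, 0, 8.0), (0, 10, 4.0)]
print(f"Estimate at (5, 5): {idw(5, 5, readings):.2f} ppm")
```

Evaluating this over a grid of points yields exactly the kind of hot-spot surface that GIS tools render as a heat map.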
Consideration of wind direction and speed is pivotal during data analysis. The wind vector data is important to avoid “double counting” methane plumes. Accounting for weather patterns helps ensure accurate assessments of methane emissions, facilitating targeted mitigation efforts.
The accompanying methane detection heat maps, while not thermal in nature, effectively pinpoint methane clusters. Furthermore, overlaying this data onto existing gas collection and control systems CAD facilitates the identification of methane leaks. This may empower site operators to implement timely mitigation measures.
UAV methane detection represents a paradigm shift in environmental surveillance. It presents a game-changing blend of cost-effectiveness, operational efficiency, and unparalleled resolution in methane emission detection.
As this technology matures, its transformative impact on identifying methane sources and orchestrating targeted mitigation strategies will be significant. Ultimately, it will help mitigate the environmental repercussions of methane emissions. With continual innovation and refinement, UAVs may revolutionize methane monitoring. As such, they may emerge as indispensable allies in the global fight against climate change, heralding a future marked by enhanced environmental stewardship and sustainability. | <urn:uuid:1afde45c-dd6e-417a-b200-75811642e3af> | CC-MAIN-2024-38 | https://www.iotforall.com/using-drones-for-methane-detection-enhancing-environmental-compliance | 2024-09-15T15:58:46Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651632.84/warc/CC-MAIN-20240915152239-20240915182239-00221.warc.gz | en | 0.910166 | 998 | 3.375 | 3 |
Creator: Nanjing University
Category: Software > Computer Software > Educational Software
Tag: analysis, content, Data, Other, python
Availability: In stock
Price: USD 29.00
This course (the English copy of the original Chinese-language course) is mainly for non-computer majors. It starts with basic Python syntax, then progresses level by level: how to acquire data locally and from the network, how to present data, how to conduct basic and advanced statistical analysis and visualization, and finally how to design a simple GUI to present and process data. The course as a whole is based on finance data and builds one popular case after another, letting learners vividly experience the simplicity, elegance, and robustness of Python. It also demonstrates Python's fast, convenient, and efficient data-processing capabilities in humanities and social science fields such as literature, sociology, and journalism, and in science and engineering fields such as mathematics and biology, in addition to business. The same techniques can be flexibly applied in other fields as well. The course has been updated.
Updates in the new version: 1) the whole course has moved from Python 2.x to Python 3.x; 2) manual webpage fetching and parsing have been added, along with web APIs; 3) the content order has been improved and details of some content enriched, especially for some of the practice projects. Note: videos are in Chinese (Simplified) with English subtitles. All other materials are in English.
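As a small taste of the kind of exercise such a course covers, here is a basic statistical analysis of finance-style data using only the standard library (the closing prices are made up):

```python
import statistics

# Made-up daily closing prices for a stock.
closes = [101.2, 102.5, 99.8, 103.1, 104.0, 102.2, 105.3]

# Daily returns, then their mean and volatility (standard deviation).
returns = [(b - a) / a for a, b in zip(closes, closes[1:])]
print(f"Mean daily return: {statistics.mean(returns):+.4f}")
print(f"Volatility:        {statistics.stdev(returns):.4f}")
```

The course builds from one-liners like these toward full pipelines that fetch, clean, analyze, and visualize real data.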
As Northern Virginia’s renowned data center hub, Ashburn, reaches capacity limits, Culpeper, Virginia, is stepping into the spotlight as a prime location for data center development. Historically significant and strategically advantageous, Culpeper offers an environment ripe for the needs of modern hyperscalers and data-driven enterprises.
Historical Significance of Culpeper
Agricultural Roots and Civil War Importance
Culpeper’s rich history is deeply rooted in its agricultural past coupled with its prominence during the Civil War, a period during which the region served as a crucial site. Its terrain and strategic locations played significant roles in wartime maneuvers and battles, laying the groundwork for the area’s eventual strategic importance. This historical backdrop provides insight into the transformative role Culpeper has played over the years, evolving from an agrarian community to a pivotal node in historical events.
The Civil War turned Culpeper into a significant battleground, with numerous skirmishes and engagements marking its lands. This heritage of strategic relevance and adaptability would later facilitate its emergence as a Cold War stronghold. The region’s deeply ingrained historical identity not only accentuates its cultural significance but also adds layers of strategic allure for modern technological ventures.
The Cold War Era: “Culpeper Switch”
During the Cold War, Culpeper became known for hosting the “Culpeper Switch,” a highly secure underground facility that was critical for the Federal Reserve. This installation ensured the integrity and security of the Fedwire system, which facilitated safe and uninterrupted electronic bank transfers. In the event of a nuclear war, the facility was designed to provide secure currency storage, underscoring Culpeper’s national security importance during a turbulent period.
The Cold War facility’s contribution didn’t fade away with the end of global tensions; instead, the infrastructure seamlessly transitioned to serve peacetime uses. Post-Cold War, the same secure site was repurposed into a key location for the Library of Congress, helping preserve valuable audiovisual materials. This adaptability in leveraging historical sites for modern needs shows Culpeper’s longstanding tradition of integrating past assets into future developments. This approach offers a unique advantage in today’s data-driven landscape.
Modern Development: Fiber and Connectivity
Historical Fiber Network
The robust fiber network that was established during the Cold War has evolved into a critical asset for modern data centers. This historical advantage has created a foundation of low-latency, highly redundant connectivity, which is crucial for contemporary data operations. Such connectivity capabilities make Culpeper a highly viable and attractive alternative to saturated hubs like Ashburn, linking them seamlessly by leveraging pre-existing infrastructure.
Moreover, the rich fiber network complements and enhances Culpeper’s appeal for developers seeking reliable and efficient options to expand their operations. Given Ashburn’s current limitations in capacity, Culpeper stands out with its capacity to offer the same high standard of connectivity. The region’s historical depth is not just a narrative but a tangible benefit that directly supports the ultra-modern needs of data centers, including hyperscalers focused on rapid and secure data transfers.
NAP of the Capital Region
The presence of the Network Access Point (NAP) of the Capital Region in Culpeper significantly boosts the town’s connectivity capabilities. This facility acts as a central node that elevates the quality of connections, ensuring high-speed and reliable communications necessary for data center operations. It’s a critical component that amplifies Culpeper’s technological landscape and places it firmly on the map for data center developers.
The influence of the NAP extends beyond mere connectivity—it enhances the region’s overall technological infrastructure. By providing essential high-speed links, the NAP of the Capital Region attracts enterprises that require robust and uninterrupted data flows. This infrastructure advancement reflects the strategic foresight embedded in Culpeper’s development plans, positioning it as a cornerstone for enterprises eager to expand their data operations in a more scalable and less congested environment.
Land and Power Availability
Abundant Land in the Culpeper Tech Zone
The Culpeper Tech Zone (CTZ) spans a vast 690 acres, offering ample land specifically designated for data center development. This extensive campus stands in stark contrast to overcrowded markets like Ashburn, presenting a golden opportunity for potential developers. Unlike more constricted data hubs, the CTZ allows for significant expansion with ample space to accommodate large-scale data center operations without the risk of imminent land scarcity.
The foresight in designating this area strictly for technological development denotes strategic planning to attract substantial investments. Developers find the space and freedom in Culpeper that allows for more innovative architectural designs and expansive facilities. This generous availability of land translates into fewer constraints and greater possibilities for data-driven enterprises willing to set up their operations in an advantageous and less congested locale.
Robust Energy Infrastructure
Culpeper’s energy infrastructure ensures stability and flexibility essential for data centers, making it a reliable option for high-demand operations. Dominion Energy oversees the transmission while Rappahannock Electric Cooperative and Dominion collectively manage distribution. This dual oversight ensures a level of redundancy and reliability in power supply that meets the rigorous demands of data centers. This resilient energy framework supports consistent operations, vital for data centers requiring uninterrupted power.
Such a robust energy framework is a critical asset for any data center developer contemplating investment in Culpeper. The ability to guarantee a reliable power supply significantly reduces operational risks and enhances performance metrics. Given the high energy consumption of modern data centers, the promise of stable and flexible power availability in Culpeper is a decisive incentive for enterprises evaluating potential development sites.
Educational and Workforce Development
Local Educational Initiatives
Culpeper’s local government takes a proactive approach in nurturing a skilled workforce by promoting relevant education pathways. Partnerships with educational institutions like the Culpeper Technology Education Center (CTEC) and Germanna Community College (GCC) equip local students with IT and trade skills essential for working in the data center industry. This educational collaboration ensures students are job-ready and possess the competencies that data center enterprises demand.
The presence of these institutions and the targeted learning they provide create a sustainable talent pipeline for future data center needs. By aligning educational curricula with industry requirements, Culpeper ensures its workforce remains competitive and equipped for high-tech roles. This emphasis on education not only fosters local economic growth but also establishes Culpeper as a forward-thinking community that values and invests in its human capital.
By fostering a robust educational pipeline, Culpeper secures a sustainable workforce that caters to the technical and operational needs of growing data center facilities. The deliberate focus on creating a skilled labor force attracts developers who seek not just space and power but also competent personnel. This local talent pool underpins the county’s strategic approach to economic development, making Culpeper an appealing choice for data center operations.
The workforce pipeline benefits both the industry and the community, creating a symbiotic relationship where growth begets growth. As data centers flourish, employment opportunities increase, stimulating the local economy. This virtuous cycle makes Culpeper not just a spot for technological development but a living, thriving community that can support and sustain technological growth for generations to come.
Economic Incentives and Strategic Positioning
Attractive Tax Incentives
To lure developers, Culpeper offers compelling economic incentives, including both county and state tax breaks. These financial benefits are part of a well-thought-out strategy aimed at fostering community and business growth concurrently. By reducing the tax burden on developers, Culpeper levels the playing field and makes itself an economically attractive option for data center investment.
These incentives offer more than just cost savings; they signify a collaboration-focused approach between the local government and businesses. Developers are more inclined to invest in regions where governments show clear support for their endeavors. Culpeper’s strategic economic planning ensures that the growth benefits both the community and the incoming businesses, creating a balanced and sustainable development model.
Geographic and Strategic Importance
Located just 60 miles south of Loudoun County, Culpeper’s proximity to densely populated regions makes it a convenient location for hyperscalers looking to expand beyond Ashburn. This strategic positioning benefits enterprises that seek to maintain low latency and high connectivity. Being within close range to a major data center hub while offering untapped potential, Culpeper provides the best of both worlds.
The geographical advantage ensures that data flows remain efficient, reducing latency issues and maintaining high-speed connectivity. This proximity is particularly crucial for enterprises that cannot afford network downtimes or delays. By situating in Culpeper, businesses can leverage both the advantages of a near-urban setting and the expansive resources of a less congested area, ensuring robust and scalable development options.
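The latency advantage of proximity is easy to quantify: light in optical fiber travels at roughly two-thirds of its vacuum speed, so 60 miles adds well under a millisecond each way. A quick sketch (the speed-of-light and fiber-slowdown figures are standard physics approximations; the distance is the one cited above):

```python
C_VACUUM_KM_S = 299_792      # speed of light in vacuum, km/s
FIBER_FACTOR = 0.67          # light in fiber moves at roughly 2/3 of c
MILES_TO_KM = 1.60934

distance_km = 60 * MILES_TO_KM
one_way_ms = distance_km / (C_VACUUM_KM_S * FIBER_FACTOR) * 1000

print(f"One-way fiber delay over 60 miles: ~{one_way_ms:.3f} ms")
print(f"Round trip: ~{2 * one_way_ms:.3f} ms")
```

Real-world latency is higher once routing, switching, and non-straight fiber paths are included, but the propagation floor stays comfortably sub-millisecond at this distance.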
Market Context and Future Prospects
Rising Demand and Low Vacancy Rates
Northern Virginia’s data center market has seen unprecedented growth, with 2023 marking the fourth consecutive year of record demand. The increased demand has driven vacancy rates below 2%, causing developers to explore alternatives like Culpeper for their expansion needs. This low vacancy rate underlines the pressing need for new development areas capable of accommodating burgeoning data demands.
The rising demand, coupled with limited available space in established hubs, makes Culpeper an increasingly attractive option. Its capacity to meet the modern needs of data centers while offering room for future growth positions it well in the current context. Developers will find in Culpeper the room to build expansive facilities without facing the space limitations prevalent in other data hubs.
Comparative Advantage Over Ashburn
As Northern Virginia’s well-known data center hub, Ashburn, hits its capacity limits, Culpeper, Virginia, is emerging as a new focal point for data center development. This change is not only due to the overflow from Ashburn but also thanks to Culpeper’s unique combination of historical significance and strategic advantages. The town is becoming increasingly attractive to large-scale data operations and businesses reliant on data due to its available resources and potential for growth.
In a world where data is king and the need for storage and processing power is constantly growing, Culpeper is stepping up to become a key player. The town boasts the infrastructure needed to support the high demands of modern hyperscale data centers, making it a prime location for companies looking to expand. Unlike the densely packed Ashburn, Culpeper offers more space and a community eager to embrace technological advancements. With its strategic location and historical roots, Culpeper is poised to meet the future of data storage and processing needs effectively.
Runtime data no longer has to be vulnerable data
Today, the security model utilized by nearly all organizations is so weak that the mere act of creating new data comes with the immutable assumption that such data will become public and subject to theft or misuse.
The issues related to data access
The industry has been the proverbial slow-boiling frog when it comes to data security. From the time computing systems were first able to store large amounts of data, individuals with no right to that data have accessed it. When connectivity and breaches were rare, nobody cared.
But as more people and computers became connected, waves upon waves of cybersecurity solutions and procedures have sprung up to help organizations keep their data out of reach of threats. Yet, despite the layers of security and processes implemented by IT, hackers and unauthorized insiders are still successful. If attackers gain access to a data center or network, they gain access to data.
That’s because all data—and the software security solutions that try to protect it—is subject to a fundamental flaw that comes from the era of limited connectivity but persists today: data cannot be simultaneously used and secured.
The reason is simple to understand. All data, including applications, algorithms, and cryptographic keys, must sit exposed and unencrypted in memory so that the CPU can use them. This means anyone or any process that can get access to the host—including administrators, bad actors, or malware—can get full access to this data. This unprotected memory, which often holds decrypted encryption keys, can be easily and covertly “dumped” by using commonly available tools. With keys revealed, stored data that is encrypted can be accessed.
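As a concrete illustration, the short Python sketch below shows that a decrypted key held by a running program can be recovered simply by reading raw process memory, which is exactly what a memory-dump tool does. It relies on CPython's 64-bit object layout (an assumption, not a portable guarantee), and the key name and value are invented for the demo.

```python
import ctypes

# A decrypted key held by a running program sits in plain text in
# process memory. On 64-bit CPython, id() returns an object's address,
# and the payload of a bytes object begins a fixed offset past the
# object header -- so a raw read of our own memory recovers the
# "secret" directly, just as an external memory dump would.
secret_key = b"AES-KEY-0123456789abcdef"

raw = ctypes.string_at(id(secret_key), 64)  # read 64 raw bytes at the object
print(b"AES-KEY" in raw)  # True: the key is visible in cleartext
```

Secure enclaves exist precisely to close this gap: inside an enclave, that same read would return only ciphertext.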
Data breaches continue to grow in number and severity. Today, algorithms and the results of data analytics often have even greater value than raw data, which makes their theft even more devastating. No amount of security software or IT processes can stop every attacker, and because data is fundamentally defenseless and insecure, data overexposure, breaches, and incursions are bound to happen sooner or later. Yes, expensive new layers of security might make a data breach harder to accomplish, but data exposure is still inevitable.
Many CISOs and business owners may not realize that this flaw can cost their enterprises opportunities. Data insecurity is one of the most often cited reasons for not migrating IT completely to the cloud, which is an option that can significantly reduce IT costs and data risks.
Secure enclave technology: A gold standard for secure data access
Feeling the heat, the IT industry has moved to implement a long-accepted solution for this fundamental data flaw: secure enclaves, in which an encrypted and isolated segment of host memory is rendered inaccessible to any other process outside of the enclave itself.
Secure enclave technology implementations are now featured in virtually every server being developed today. Cloud and CPU vendors supporting enclaves now include Intel, AMD, ARM, AWS, Azure, and others.
In 2019, a group of cloud and software vendors formed the Confidential Computing Consortium, chartered to define and promote the adoption of confidential computing and the use of secure enclaves to create a fully trusted data processing environment in the public cloud that should be virtually impervious to a data breach. More than 20 industry leaders have joined the group, including chip manufacturers such as Intel, ARM, and AMD, cloud service and virtualization providers such as AWS, Google Cloud, Alibaba Cloud, and VMware, and secure enclave software providers such as Anjuna Security.
Industry adoption has been broad. Secure enclave-enabled infrastructure is now available around the globe and being deployed by every major cloud service provider.
Market adoption has been limited
With all of these security advantages, you might think that CISOs would have quickly moved to protect their applications and data by implementing secure enclaves. But market adoption has been limited by a number of factors.
First, using the secure enclave protection hardware requires a different instruction set, and applications must be re-written and recompiled to work. Each of the different proprietary implementations of enclave-enabling technologies requires its own re-write. In most cases, enterprise IT organizations can’t afford to stop and port their applications, and they certainly can’t afford to port them to four different platforms.
In the case of legacy or commercial off-the-shelf software, rewriting applications is not even an option. While secure enclave technologies do a great job protecting memory, they don’t cover storage and network communications – resources upon which most applications depend.
Another limiting factor has been the lack of market awareness. Server vendors and cloud providers have quickly embraced the new technology, but most IT organizations still may not know about them. Some simply have not heard of secure enclaves and how they close the gap in data security. Others might assume that hardware-based security is going to be disruptive and difficult.
CISOs and software to the rescue
Simplifying secure enclave solutions is now the domain and responsibility of a group of new and emerging secure enclave software vendors. These vendors focus on taking the raw silicon-level capabilities implemented by hardware and cloud providers and transforming them into something that is simple to adopt and use – without the need to rewrite software. Several companies are targeting IoT, enterprise, and other more specialized use cases.
Targeted enterprise enclave solutions will deliver key features that fill out enclave capabilities and make them usable by enterprises both on-site and in the cloud. Foremost among these features is enclave technology independence, which eliminates the need for multiple application rewrites. Another extends enclave protections to storage and networked communications, allowing broad application support without requiring modification.
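The "extend enclave protections to storage" idea can be sketched as a thin wrapper that transforms data at the I/O boundary, so the application sees plaintext while the storage layer only ever sees sealed bytes. The XOR transform below is a deliberately toy stand-in for real authenticated encryption, and all names are illustrative:

```python
import io
import itertools

def xor_stream(data: bytes, key: bytes) -> bytes:
    # Stand-in for real encryption: XOR with a repeating key is NOT
    # secure, but it shows where a sealing layer intercepts the data.
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

class SealedFile:
    """Wraps a backing store so the application reads and writes
    plaintext while only transformed bytes ever reach storage."""

    def __init__(self, backing: io.BytesIO, key: bytes):
        self._backing = backing
        self._key = key

    def write(self, plaintext: bytes) -> None:
        self._backing.write(xor_stream(plaintext, self._key))

    def read_all(self) -> bytes:
        return xor_stream(self._backing.getvalue(), self._key)

disk = io.BytesIO()                      # stands in for untrusted storage
f = SealedFile(disk, key=b"enclave-sealing-key")
f.write(b"quarterly-forecast: confidential")

print(disk.getvalue() != b"quarterly-forecast: confidential")  # True
print(f.read_all())  # b'quarterly-forecast: confidential'
```

In a real enclave deployment the key would live only inside protected memory, so even an administrator dumping the disk or the host's RAM would see nothing usable.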
This will further enable a clean “hop” (or “lift and shift”) from existing on-site infrastructure to an even more secure “confidential cloud.” Most importantly, this will result in the highest data security available, the rationalization of security layers for massive savings, and the next generation of exciting new applications to come in the future. | <urn:uuid:c581afb2-6a4f-4d3a-b5c1-36ea5a337846> | CC-MAIN-2024-38 | https://www.helpnetsecurity.com/2021/02/04/data-access/ | 2024-09-18T05:36:12Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651836.76/warc/CC-MAIN-20240918032902-20240918062902-00021.warc.gz | en | 0.947275 | 1,230 | 2.875 | 3 |
Definition: VDI (Virtual Desktop Infrastructure)
VDI, or Virtual Desktop Infrastructure, is a technology that allows desktop environments to be hosted on a centralized server and accessed remotely by users over a network. In a VDI setup, virtual machines (VMs) run the user desktop environments on a central server, while users interact with these environments via thin clients, laptops, or other devices. This centralization of desktop management offers numerous benefits, including enhanced security, simplified IT administration, and greater flexibility for users.
Understanding VDI (Virtual Desktop Infrastructure)
Virtual Desktop Infrastructure (VDI) revolutionizes the way organizations manage and deliver desktop environments to their users. Instead of having individual desktops installed on physical machines, VDI enables these environments to be hosted virtually on a centralized server or data center. Users can access their personalized desktop environment from anywhere, as long as they have network access, making VDI a powerful tool for modern, distributed workplaces.
How VDI Works
At the core of VDI are virtual machines (VMs) that run on a hypervisor, which is installed on a central server or cluster of servers. Each VM contains an instance of a desktop operating system, such as Windows or Linux, and is isolated from other VMs running on the same hardware. Users access their desktop VM through a client device—such as a thin client, laptop, or tablet—using a connection broker to authenticate and direct them to their specific desktop environment.
The key components of a VDI setup include:
- Hypervisor: A software layer that allows multiple VMs to run on a single physical server.
- Connection Broker: A service that connects users to their specific desktop VM.
- Client Device: The end-user device used to access the virtual desktop. This could be a thin client, PC, or mobile device.
- Storage: Centralized storage systems that house user data and VM images.
- Networking: High-speed networks that ensure low latency and reliable connectivity between the server and client devices.
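The connection broker's core job can be sketched in a few lines: authenticate the user, then look up and return their assigned desktop VM. The pool layout, tokens, and host names below are invented for illustration:

```python
# Minimal sketch of a connection broker: authenticate a user, then
# route them to their assigned desktop VM. All names are hypothetical.

DESKTOP_POOL = {
    "alice": {"vm": "win11-vm-014", "host": "esx-node-02"},
    "bob":   {"vm": "win11-vm-027", "host": "esx-node-05"},
}

AUTHORIZED_TOKENS = {"alice": "tok-a1", "bob": "tok-b2"}

def broker_connect(user: str, token: str) -> str:
    # 1. Authenticate the user against the identity store.
    if AUTHORIZED_TOKENS.get(user) != token:
        raise PermissionError(f"authentication failed for {user!r}")
    # 2. Look up the user's assigned VM and return a connection target.
    session = DESKTOP_POOL[user]
    return f"{session['host']}/{session['vm']}"

print(broker_connect("alice", "tok-a1"))  # esx-node-02/win11-vm-014
```

Production brokers add load balancing, session reconnection, and integration with directory services, but the authenticate-then-route flow is the same.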
Types of VDI
There are two primary types of VDI deployments, each serving different organizational needs:
- Persistent VDI: In this model, each user is assigned a specific virtual desktop that is “persistent” or saved between sessions. Users can personalize their desktops, install software, and store files just as they would on a physical desktop. This type of VDI is ideal for users who require a consistent, personalized environment.
- Non-Persistent VDI: Here, virtual desktops are created on demand and are not saved between sessions. Each time a user logs in, they receive a fresh, standardized desktop environment. This model is useful in environments where multiple users need access to the same software and settings, such as in a call center or training lab.
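The behavioral difference between the two models can be sketched as a tiny session lifecycle, where a non-persistent desktop is rebuilt from a clean image on every login (a simplified model, not any vendor's actual implementation):

```python
class VirtualDesktop:
    def __init__(self, persistent: bool):
        self.persistent = persistent
        self.state = {}  # user files, installed apps, settings

    def login(self) -> dict:
        # Non-persistent desktops are rebuilt from a golden image on
        # every login, so any prior per-user state is discarded.
        if not self.persistent:
            self.state = {}
        return self.state

    def logout(self, changes: dict) -> None:
        self.state.update(changes)  # only survives if persistent

persistent = VirtualDesktop(persistent=True)
persistent.login()
persistent.logout({"wallpaper": "custom.png"})
print(persistent.login())  # {'wallpaper': 'custom.png'} -- state kept

kiosk = VirtualDesktop(persistent=False)
kiosk.login()
kiosk.logout({"wallpaper": "custom.png"})
print(kiosk.login())       # {} -- fresh desktop each session
```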
Benefits of VDI
Implementing VDI offers several significant benefits for organizations:
- Enhanced Security: By centralizing desktops in a data center, sensitive data never leaves the secure environment. This reduces the risk of data breaches, especially in cases where devices are lost or stolen.
- Simplified IT Management: With VDI, IT teams can manage and update all desktops from a single location. This centralization simplifies patch management, software deployment, and desktop troubleshooting.
- Cost Efficiency: Although the initial investment in VDI infrastructure can be significant, it often leads to long-term cost savings. Organizations can extend the life of older hardware by using thin clients, and energy costs can be reduced by consolidating computing resources.
- Scalability: VDI allows organizations to scale up or down quickly based on their needs. This flexibility is particularly beneficial for companies with seasonal workforces or those experiencing rapid growth.
- Remote Access and Flexibility: VDI supports the modern work-from-anywhere trend by enabling users to access their desktops from any location with an internet connection. This flexibility is critical for maintaining productivity in distributed or remote work environments.
Challenges of VDI Implementation
While VDI offers many benefits, there are also challenges that organizations must consider:
- Initial Setup Cost: The upfront investment in servers, storage, and network infrastructure can be high. Additionally, licensing costs for VDI software can add to the initial expenses.
- Performance Considerations: VDI requires a robust network infrastructure to deliver a seamless user experience. High latency or insufficient bandwidth can lead to lag, which can be frustrating for users.
- Complexity: Managing a VDI environment can be more complex than traditional desktop management. IT teams need to have expertise in virtualization, networking, and storage to effectively manage and troubleshoot VDI.
- User Experience: Ensuring a consistent and responsive user experience across all devices can be challenging, particularly in scenarios where users are connecting from different geographic locations with varying network conditions.
Use Cases for VDI
VDI is used in a variety of scenarios where centralized desktop management offers significant advantages:
- Education: Schools and universities can provide students with access to standardized desktop environments that include all necessary software, regardless of their device or location.
- Healthcare: In healthcare, VDI allows medical professionals to access patient records and applications securely from different locations within a hospital or clinic, enhancing mobility and efficiency.
- Financial Services: VDI supports strict regulatory compliance by keeping sensitive financial data secure in a centralized environment while enabling remote work for financial professionals.
- Government and Public Sector: Government agencies can use VDI to provide secure, standardized desktops to employees across multiple locations, reducing the complexity of desktop management and enhancing security.
- Call Centers: In call centers, non-persistent VDI allows agents to log in to any available terminal and receive a standardized desktop environment, ensuring consistency in service delivery.
Key Features of VDI
VDI offers several features that make it a compelling choice for desktop management:
- Centralized Management: All desktops are managed from a central location, simplifying administration and maintenance.
- User Mobility: Users can access their desktop from any device, enabling flexible work arrangements.
- Security: Data is stored centrally, reducing the risk of data breaches from lost or stolen devices.
- Resource Optimization: VDI enables efficient use of hardware resources by consolidating computing power in the data center.
- Disaster Recovery: With VDI, disaster recovery plans can be more straightforward to implement, as all desktops are centrally located and can be quickly restored in the event of a disaster.
Best Practices for VDI Deployment
To ensure a successful VDI implementation, organizations should consider the following best practices:
- Capacity Planning: Properly assess the number of users and their resource needs to ensure the VDI infrastructure can support the workload without performance issues.
- Network Optimization: Ensure that the network infrastructure is robust enough to handle the increased traffic from VDI, with low latency and high bandwidth to provide a seamless user experience.
- User Experience Testing: Before full deployment, conduct thorough testing to identify any potential issues with user experience, such as application compatibility or performance bottlenecks.
- Security Measures: Implement robust security protocols, including encryption, multi-factor authentication, and regular security updates, to protect the VDI environment.
- Scalability Planning: Design the VDI infrastructure with future growth in mind, allowing for easy scaling as the organization’s needs evolve.
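The capacity-planning step above often comes down to sizing hosts against whichever resource binds first, CPU or RAM. A rough sketch, assuming a 4:1 CPU overcommit ratio (a common starting point, not a universal rule):

```python
import math

def hosts_needed(users: int, vcpus_per_desktop: int, gb_per_desktop: int,
                 host_cores: int, host_gb: int,
                 cpu_overcommit: float = 4.0) -> int:
    """Estimate how many physical hosts a VDI pool needs.

    CPU is commonly overcommitted (several vCPUs per physical core),
    while RAM usually is not -- the 4:1 default here is an assumption.
    """
    by_cpu = users * vcpus_per_desktop / (host_cores * cpu_overcommit)
    by_ram = users * gb_per_desktop / host_gb
    # The binding resource (the larger requirement) decides the count.
    return math.ceil(max(by_cpu, by_ram))

# 500 knowledge workers, 2 vCPU / 8 GB each, on 64-core / 768 GB hosts:
print(hosts_needed(500, 2, 8, host_cores=64, host_gb=768))  # 6
```

Note that in this example RAM, not CPU, is the binding resource, which is typical for knowledge-worker VDI pools.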
Frequently Asked Questions Related to VDI (Virtual Desktop Infrastructure)
What is VDI (Virtual Desktop Infrastructure)?
VDI (Virtual Desktop Infrastructure) is a technology that allows desktop environments to be hosted on a centralized server and accessed remotely by users over a network. It provides centralized management, enhanced security, and flexibility for users to access their desktop environments from various devices.
What are the benefits of using VDI?
VDI offers several benefits, including enhanced security by centralizing data, simplified IT management, cost efficiency, scalability, and the ability to support remote work by providing access to desktops from any location with an internet connection.
What are the different types of VDI?
There are two main types of VDI: Persistent VDI, where each user has a dedicated virtual desktop that is saved between sessions, and Non-Persistent VDI, where users receive a fresh desktop each time they log in, ideal for environments where a standardized setup is required.
What are the challenges of implementing VDI?
The challenges of implementing VDI include the high initial setup cost, performance considerations due to network dependency, complexity in management, and ensuring a consistent user experience across different devices and locations.
How does VDI support remote work?
VDI supports remote work by allowing users to access their desktop environments from any device with an internet connection. This ensures that employees can work from anywhere while maintaining access to all necessary applications and data securely. | <urn:uuid:1014392b-da08-4706-8426-ff653e119ea0> | CC-MAIN-2024-38 | https://www.ituonline.com/tech-definitions/what-is-vdi-virtual-desktop-infrastructure/ | 2024-09-18T04:26:16Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651836.76/warc/CC-MAIN-20240918032902-20240918062902-00021.warc.gz | en | 0.916099 | 1,810 | 3.265625 | 3 |
A Working Definition of Internal Controls
For accounting, risk and audit, internal controls are a set of accounting best practices activities designed and implemented to minimize organizational risks and reduce legal and regulatory liability.
The internal control process promotes the observance of established rules, regulations, laws and policies to protect financial and other tangible and intangible assets, achieve accuracy in data and records, and generally safeguard the interests of an organization and its stakeholders.
The Importance of Internal Control Policies
Every organization has important goals it wants to accomplish in a timely and efficient manner. But for every desired objective, there are a hundred (or more) known and unknown risks waiting to disrupt well-laid plans and derail progress. Risks will always exist and can't be eliminated, but if conscientious accountants and auditors do their jobs properly, many can be avoided and their deleterious effects mitigated.
A sound, comprehensive internal control policy is a bulwark against common and costly risks such as the following:
- Internal and external fraud
- Insufficient regulatory reporting
- Inaccurate financial statements
- Cyberthreats (hacking)
- Waste, abuse and inefficiencies
- Theft of institutional, proprietary information
- Lack of management oversight
These common risks are far from being the only ones that a good internal control policy guards against. The best control policies are wide-ranging and dynamic; they can grow with a company and change with changing circumstances.
The Internal Control Policy as a Money Saving Device
When done right, developing suitable internal controls and organizing them into an understandable, easy-to-execute policy isn’t easy or inexpensive. It takes a significant commitment of time, money and human resources. In addition, integrating the policy into existing systems and implementing it on an institutional scale will involve its own set of challenges and expenses. In the long run, however, it will all be worthwhile.
The nitty-gritty of an internal control policy — all the templates, checklists, processes, assignment of tasks, audits, training and management review — might sound like tedious and pricy work, but it all combines to save the company money.
Accounting controls help avoid an untold number of small but potentially costly problems before they even happen, but the biggest savings are realized when monstrous financial or regulatory disasters are prevented by diligent compliance with well-thought-out, well-maintained controls. While regular internal audits, for instance, cost money and are inconvenient, an unannounced audit by the IRS, the SEC or some other government agency (that won't have your firm's best interest at heart) will probably cost a lot more and would definitely be more stressful.
When problems can’t be avoided altogether, control activities save money by catching issues early and bringing them to the attention of the right people. An error on a financial statement, as an example, is problematic at any stage in the reporting process, but it can’t be fixed until it’s found. The checks and balances that are the essence of risk and accounting controls exist to spot exactly these types of problems. It takes time and money to re-run numbers and come up with an accurate report, but the cost of correcting financials confidentially in-house is exponentially lower than publishing and distributing faulty numbers to the public and having to reissue critical reports after the fact.
Well-designed internal controls thoughtfully enumerated in a high-quality policy document — whether offensive (designed to detect problems) or defensive (designed to prevent problems), manually executed by people, or computer automated — simply make good business sense.
Important Policies Should Be in Writing
Internal control policies can’t just exist conceptually; they must be in writing and communicated to everyone in the organization who will implement them and throughout all levels of management. To this end, KnowledgeLeader has created an Internal Control Policy for you to use as a framework or outline in creating a customized internal control policy.
Judicious use of this tool will help accounting departments achieve several important objectives.
When things go wrong in a business — and, unfortunately, things will sometimes go wrong — and money is lost, questions will fly and fingers will be pointed. Professionals in the risk, audit and accounting divisions will be asked what they did to prevent the problem that caused the loss. And they will be expected to show their work.
Without a properly formatted, written control policy document that’s been published and widely circulated it will be nearly impossible to prove to management, regulators and stakeholders that control policies ever existed.
Continuity of purpose and process is critical to efficiency. Everyone needs to be pulling in the same direction, as the saying goes. The detailed assignment of process and examination (supervision) responsibilities along with reporting requirements need to be included in every control policy.
We’ve established that good internal controls can help protect assets, but they do more than just that. A program of high-quality, well-executed internal controls expressed in a published policy document will create a high level of confidence in company financial statements and other assertions.
Public and private investors, government regulators, the board of directors and C-suite executives are bound to have more faith in assertions by the accounting department if they know all numbers, projections and reports were run through a vigorous set of internal controls before being presented.
It’s the job and sometimes the fiduciary duty of personnel in risk, audit and accounting departments to report numbers that others can depend on. Shareholders and company executives will expect it. Government agencies will demand it.
A functioning system of internal accounting controls is imperative. A thoughtfully developed, written policy is very highly recommended. | <urn:uuid:bbbbb46d-b62e-42dc-af38-07fbccb2e94d> | CC-MAIN-2024-38 | https://www.knowledgeleader.com/blog/internal-controls-why-you-need-vigorous-policy | 2024-09-18T04:33:45Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651836.76/warc/CC-MAIN-20240918032902-20240918062902-00021.warc.gz | en | 0.946367 | 1,171 | 2.8125 | 3 |
If you’ve followed Aligned on these pages, through our social media accounts, or our public announcements, you already know that we’re a company focused on driving both organic and strategic growth. We’re also great believers in growing sustainably, reflected by our energy-efficient technology innovations, our commitment to expanding access to renewable energy resources, and even our historic sustainability-linked financing.
Simply put, Aligned has always taken an industry leadership position where it concerns environmental stewardship, a position that extends to the planet and the communities that host our growing portfolio of hyperscale data center campuses across the Americas. That’s why we’re excited to embark on two new partnerships that will ensure our world becomes just a little bit greener, even as we continue to grow along with our customers as their businesses demand.
Our next agenda?
That’s right. And we’ll explain.
But first, a little background.
Approaching Earth Day, a Red Flag
Healthy, strong trees act as carbon sinks, offsetting carbon and reducing the effects of climate change. A single, mature tree can absorb an average of 22 pounds of carbon dioxide per year. Trees not only help cool the planet by absorbing and storing harmful greenhouse gases like carbon dioxide, but they also release oxygen back into the atmosphere.
Trees also play a key role in capturing rainwater and reducing the risk of natural disasters like floods and landslides. Their intricate root systems act like filters, removing pollutants and slowing down water’s absorption into the soil. Plus, a single tree can be home to hundreds of species of mammals, insects, fungi, moss, and other plants.
As encouraging as that may be, since 1990, forests worldwide have decreased by 80 million hectares. That's more than 308,880 square miles, or the equivalent of the combined land mass of Colorado, Nevada, Utah, and New Hampshire. Between 2015 and 2020, the rate of deforestation was estimated at 10 million hectares per year, and tropical tree loss is now causing more emissions every year than 85 million cars would cause over their entire lifecycles.
On this Earth Day, the annual event that demonstrates world support for environmental protection, we’re announcing a new initiative focused on reforestation and fruit donation to local community food banks.
Planting Trees and Bearing Fruit
Today, Aligned announced a partnership with One Tree Planted and the Fruit Tree Planting Foundation to do our part to reverse deforestation, and thereby expand our commitment to fostering a greener, cleaner, and healthier planet.
In the last ten years, One Tree Planted has planted over 135.5 million trees with 378 partners across 82 countries in North America, Latin America, Africa, Asia, Europe, and the Pacific.
Aligned will join One Tree Planted in helping reforestation efforts in areas hurt by forest fires in the region surrounding our Hillsboro, Oregon, and Phoenix, Arizona, hyperscale data center campuses. Aligned will help One Tree plant and maintain more than 50,000 trees per year for the next five years, totaling more than a quarter of a million trees, which will breathe new life across 760 acres.
The Fruit Tree Planting Foundation has helped communities plant a collective total of 18 billion fruit trees — or roughly three fruit trees for every person on the planet –– while encouraging their growth under organic standards. On average, every year, the Fruit Tree Planting Foundation plants more than 60 orchards and distributes tens of thousands of fruit trees around the world, along with providing training and education.
Aligned will partner with the Fruit Tree Planting Foundation to plant two orchards in Phoenix, with future plans to plant two orchards a year in every Aligned data center market in the U.S. Not only will this program benefit the climate, but the fruit trees' yield will also provide sustenance to local community food banks.
Active involvement in tree planting programs can enhance a community's sense of pride and environmental responsibility. If you'd like to join Aligned in supporting either of these worthwhile 501(c)(3) tax-exempt, non-profit organizations, visit One Tree Planted at www.onetreeplanted.org, and the Fruit Tree Planting Foundation at www.ftpf.org.
Data loss can be incredibly damaging to an organization. Fines under regulations such as the GDPR can reach up to €20 million or 4% of annual global turnover from the preceding year, whichever is higher. It can also result in legal trouble; when Yahoo suffered data breaches in 2013 and 2014, customers quickly responded by slapping the company with more than 20 lawsuits. Data loss can even force a business to close, as with River City Media in 2017.
Fortunately, technologies are available to help organizations protect their data, namely, data loss prevention (DLP) solutions. DLP is a set of strategies, processes, and technologies designed to prevent the loss, leakage, or unauthorized access of an organization's sensitive data. Its primary goal is to identify, monitor, and control sensitive data throughout its lifecycle.
This article will explore the common causes of data loss and how DLP can help.
Common Causes of Data Loss
There are several primary causes of data loss. They include but are not limited to:
- Hardware Failure: Hard drive crashes, power surges, faulty hardware components, mechanical failures, or damage to storage devices.
- Software Corruption: System crashes, software bugs, incompatible software or drivers, malware infections, or improper software installations can corrupt data.
- Viruses and Malware: Malicious software can infect systems, delete or encrypt files, or cause other forms of damage resulting in data loss.
- Natural Disasters: Fires, floods, earthquakes, hurricanes, or other natural disasters can physically damage storage devices or data centers, leading to data loss.
- Theft or Loss: Stolen or lost devices like laptops, smartphones, external hard drives, or USB flash drives can result in data loss if the organization has not adequately backed up data.
- Power Outages: Unexpected power outages or electrical surges can lead to data loss if the computer or storage device is not protected by an uninterruptible power supply (UPS).
- Software or System Updates: In rare cases, software updates or operating system upgrades can lead to data loss if compatibility issues or errors occur during the update process.
- Accidental Formatting or Partitioning: Incorrectly formatting or partitioning a storage device can erase all the data.
- Physical Damage or Environmental Factors: Dropping or exposing a storage device to extreme temperatures, moisture, or magnetic fields can result in data loss.
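Several of these causes, notably software corruption and silent media failure, first show up as files whose contents changed without anyone noticing. A periodic checksum sweep against a manifest taken at backup time is a simple way to catch that early; the file names and contents below are illustrative:

```python
import hashlib
from pathlib import Path

def checksum(path: Path) -> str:
    # SHA-256 over the file contents; any silent corruption or
    # tampering changes this digest.
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify(manifest: dict) -> list:
    """Return files whose current digest no longer matches the manifest."""
    return [p for p, digest in manifest.items()
            if checksum(Path(p)) != digest]

# Build a manifest at backup time, then re-run verify() on a schedule:
target = Path("report.txt")
target.write_bytes(b"Q3 revenue: 4.2M")
manifest = {str(target): checksum(target)}

target.write_bytes(b"Q3 revenue: 0.0M")  # simulated silent corruption
print(verify(manifest))                   # ['report.txt']
```

A detected mismatch is the trigger to restore from a known-good backup before the corrupted copy propagates.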
How DLP Can Help
Data Loss Prevention (DLP) helps prevent data breaches by implementing various measures to protect sensitive information and mitigate the risk of data loss. Here's how DLP can help prevent data breaches:
- Data Classification: DLP solutions classify and categorize data based on its sensitivity. Organizations can apply appropriate security controls and prioritize protection measures by identifying and labeling sensitive data.
- Data Discovery and Monitoring: DLP solutions scan and monitor data across various systems, networks, and storage devices to identify instances of sensitive data. They can detect when individuals access, transmit, or store data in unauthorized locations or through insecure channels.
- Access Control and User Monitoring: DLP solutions enforce access controls to restrict unauthorized access to sensitive data. They can monitor user activity, detect suspicious behavior, and implement policies to prevent unauthorized data access or exfiltration.
- Data Encryption: DLP solutions often incorporate encryption mechanisms to protect sensitive data. Encryption converts data into an unreadable format, and only authorized parties with the encryption key can decrypt and access the information, preventing unauthorized access to sensitive data in the event of a data breach.
- Data Loss Monitoring and Prevention: DLP solutions actively monitor data flow and communication channels to prevent accidental or intentional data loss. They can identify and block attempts to send sensitive data via email, file transfers, or other communication channels that violate security policies.
- Data Masking and Anonymization: DLP solutions can apply data masking or anonymization techniques to protect sensitive information. These methods replace or modify sensitive data with fictional or non-sensitive values, allowing organizations to use and share data for various purposes without exposing actual sensitive information.
- Policy Enforcement: DLP solutions enable organizations to define and enforce security policies related to data handling, sharing, and storage. They can automatically detect and block policy violations, such as attempting to transfer sensitive data to external storage devices or unauthorized cloud services.
- Incident Response and Forensics: DLP solutions provide incident response and forensic analysis capabilities. In the event of a data breach, they can help identify the source and scope of the breach, assess the impact, and facilitate appropriate remedial actions.
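Two of these controls, pattern-based discovery and masking, can be sketched in a few lines of code. The patterns and masking format below are purely illustrative; production DLP engines use far richer detectors (checksum tests such as Luhn, contextual analysis, machine learning).

```python
import re

# Illustrative detectors only; real DLP products use far richer logic.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(\d{4})[ -]?\d{4}[ -]?\d{4}[ -]?(\d{4})\b"),
}

def discover(text):
    """Data discovery: count occurrences of each sensitive-data pattern."""
    return {name: len(pat.findall(text))
            for name, pat in PATTERNS.items() if pat.findall(text)}

def mask(text):
    """Data masking: keep only the first and last four digits of card numbers."""
    return PATTERNS["card"].sub(lambda m: f"{m.group(1)}-****-****-{m.group(2)}", text)

doc = "Mail jane@example.com, card 4111 1111 1111 1234, SSN 123-45-6789."
print(discover(doc))  # {'email': 1, 'ssn': 1, 'card': 1}
print(mask(doc))      # Mail jane@example.com, card 4111-****-****-1234, SSN 123-45-6789.
```

A real policy engine would log each hit, block the transfer, or route it for review rather than simply counting matches.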
In conclusion, data loss poses significant risks to organizations, including financial penalties, legal consequences, and reputational damage. However, data loss prevention (DLP) solutions offer practical strategies and technologies to safeguard sensitive data. By classifying and monitoring data, enforcing access controls, implementing encryption, and applying data masking techniques, DLP helps prevent unauthorized access, data breaches, and data loss incidents. DLP also enables organizations to enforce security policies, monitor user activity, and respond swiftly to incidents.
By investing in robust DLP measures, organizations can proactively protect their valuable data assets, maintain compliance with regulations, and mitigate the potentially devastating consequences of data loss. | <urn:uuid:570fdddf-c908-4682-8664-b1623dff2a50> | CC-MAIN-2024-38 | https://em360tech.com/tech-article/common-causes-data-loss-and-how-dlp-can-help | 2024-09-09T14:14:58Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651103.13/warc/CC-MAIN-20240909134831-20240909164831-00821.warc.gz | en | 0.883509 | 1,067 | 3.15625 | 3 |
March 28, 2005
Microsoft Dfs provides a great way to let users easily access data that's stored on multiple remote computers. Through Dfs, your users can view and access folders as a single set of shares through a familiar, unified folder hierarchy, even when those resources are in different domains or physical sites. If you've resisted using Dfs because you were afraid it might be complex, fear not: Setting up Dfs is a straightforward process, and using it is even easier. I'll help you get started with Dfs by explaining how it works and guiding you through a typical Dfs configuration. After you start using Dfs in your organization, you'll wonder how your users ever functioned without it.
How Dfs Works
The basic element of the Dfs structure is a share that represents the root of the Dfs hierarchy. Through Dfs, these shares form a single, contiguous namespace. Client systems use familiar methods—such as a mapped drive or a Universal Naming Convention (UNC) path—to connect to the Dfs root. After a client connects to the Dfs root, the Dfs structure appears to be a regular share that contains subfolders that users can navigate and search. Each subfolder that's displayed under the Dfs root is actually a link to a share (a link target) anywhere on the network. Dfs automatically redirects the client that's accessing the share to the data's actual location. As Figure 1 shows, the folders that the user sees represent Dfs's redirection of the user to alternate shares on servers A, B, and C. The link target can be any system that uses a network file system that you can access via a UNC path—such as Windows, Novell NetWare, and UNIX or Linux (i.e., NFS) machines.
Dfs provides two types of roots: standalone and Active Directory (AD)–integrated. These types differ in how they store Dfs data. With standalone roots, the Dfs hierarchy, which consists of the various links to network shares, is stored in the Dfs server's local registry. Because of the way the information is stored, it can't be replicated to other Dfs servers, which means that if the sole Dfs server hosting the Dfs root becomes unavailable, the entire Dfs hierarchy is unavailable to all clients on the network. Should the Dfs server become unavailable, clients can still access shares on the servers directly; they just can't use Dfs to access them. You must use standalone Dfs if your environment doesn't have AD or if the Dfs administrators aren't domain administrators and thus can't be granted the authority (i.e., given access to the DFS-Configuration object in the domain AD partition's System container) to manage Dfs.
Windows 2000 Server and later also support AD–integrated Dfs (aka domain-based or fault-tolerant Dfs). With AD–integrated roots, Dfs information is stored primarily in AD, although the actual Dfs servers still cache copies of the data in memory to minimize Dfs server requests to domain controllers (DCs), thereby reducing Dfs network overhead. You can use an AD–integrated Dfs root only when the Dfs server is a member of a domain; however, the Dfs server doesn't have to be a DC. Essentially, you should use standalone Dfs when you don't have an AD domain, have more than 5000 links, or have legacy clients on your network. For more information about the differences between standalone and AD–integrated Dfs, see the sidebar "Dfs Differences," page 59.
After you've decided which type of Dfs you'll use, you need to set up links and link targets, which contain the data that Dfs will make available to clients. As I mentioned earlier, a link target is the actual resource to which Dfs redirects the client when it accesses a link. A link can have multiple link targets, which helps to provide load balancing and fault tolerance: If a share on one server is unavailable, Dfs redirects the client to another copy of the data. The actual link target that the client uses depends mainly on the client's site. Essentially Dfs is a site-aware service, which means that, by default, when a link target exists in a client's local site, Dfs directs the client to that local target.
Now that you've got a handle on Dfs essentials, you're ready to start setting up Dfs. Your first task is to create a Dfs root. You have two methods for doing this: either by using the Microsoft Management Console (MMC) Distributed File System snap-in or by executing the dfsutil.exe command-line utility. Here we'll use the snap-in method, which is a bit easier than the command-line tool when you're just getting started with Dfs. As you become familiar with Dfs, you'll probably want to use dfsutil.exe—for example, in a script that populates your Dfs hierarchy with links. Note that on Windows Server 2003, Standard Edition and Win2K Server, a server can host only one Dfs root. Windows Server 2003, Enterprise Edition and Windows Server 2003, Datacenter Edition can host an unlimited number of Dfs roots.
To create a new Dfs root by using the Distributed File System snap-in, perform the following steps:
1. Start the Distributed File System snap-in (its shortcut is located in the Administrative Tools Start menu folder).
2. Right-click the Distributed File System heading in the snap-in's treeview pane and select New Root (if you're using the Windows 2003 version) or New DFS root (if you're using the Win2K Server version). The remaining steps use the Windows 2003 dialog boxes, although the process is basically the same for Win2K Server.
3. At the introduction screen, click Next.
4. Select the type of root you want to create (domain or standalone). Click Next.
5. If you selected a domain Dfs root, you'll need to enter the name of the domain that will store the Dfs information. If you selected a standalone Dfs root, enter the name of the server that will store the Dfs information. Click Next.
6. If you selected domain in step 4, you're now asked to select the server that will host the Dfs root. Select a server and click Next.
7. Enter the name for the new root and any comments to help identify the root, then click Next. When you enter the root name, you'll also see this name displayed as the share name and a preview of its corresponding UNC path, as Figure 2 shows. For example, for a domain-based Dfs share, the pathname is \\<domain name>\<share name>. If the share doesn't currently exist, you're prompted to select a local folder on the machine to use for the share. This share doesn't store any real data; instead, it contains link-type objects that point to where the data physically resides. Select a folder to use for the share and click Next.
8. At the confirmation window, click Finish.
At this point, clients can connect to the Dfs namespace by using the UNC path \\dfstest.test\shared; they don't need to know anything about which servers are hosting Dfs. Clients running Windows NT 4.0 with Service Pack 6a (SP6a) or later can connect to a domain-based Dfs namespace. However, clients running Windows 98 can access standalone Dfs namespaces but must have the AD client extension installed to access a domain-based namespace. Microsoft Windows Preinstallation Environment (WinPE) instances can access only standalone Dfs namespaces.
To benefit from a domain-based Dfs namespace's fault-tolerance capability, you need at least two Dfs servers to host the Dfs namespace. Follow these steps to configure a second Dfs host server:
1. In the Distributed File System snap-in, right-click the root you created and select New Root Target.
2. Enter the server name that will be the additional Dfs host for the namespace. Notice that the name of the share (e.g., shared) that Dfs will use to host this copy is set and can't be changed. Click Next.
3. If the share name doesn't exist on the target server, you're prompted to select the folder to use, or you can create a new folder on the server, then select it. Make your selection and click Next.
4. At the summary dialog box, click Finish.
The Dfs root will now display multiple servers that act as root targets for the namespace, as Figure 3 shows. Clients can now connect and will be directed to one of the Dfs namespace root targets. However, users who access a root target will see an empty folder because you haven't defined any links yet. The next step, then, is to add some links and link targets that will redirect clients to useful content.
At this stage, to finish your Dfs configuration, you need to create a list of the shares in your organization, determine which data is the same on the different shares, and decide how you want the data to be known to the client (i.e., the folder name and comment). After you've identified this information, you can create the links by performing these steps:
1. Right-click the Dfs root and select New Link from the context menu.
2. Enter the name of the link (i.e., the folder that clients will see) along with the share to which the link will redirect the client. (You can change this name or add names later.) You can also enter a comment and specify the amount of time that clients cache this redirection information before they requery the Dfs server, as Figure 4 shows.
Now when clients access the Dfs namespace, they'll see a folder. When users open the folder, they'll be redirected to a share and will see the content that's stored under the share.
Say you also have a documents folder on a server in a remote office. Instead of creating a separate link to the folder (e.g., LondonDocuments), you can add another link target to the existing link. Setting up multiple link targets is another means for providing fault tolerance. If one link target fails, Dfs can redirect clients to an alternate copy of the data. To add another link target to an existing link, follow these steps:
1. Right-click the link and select New Target from the context menu.
2. Browse to the new share that will also be a target for the link. You can optionally select the Add this target to the replication set check box, as Figure 5 shows. (For more information about replication, see the Web-exclusive sidebar "Setting Up Dfs-Based File Replication" at http://www.windowsitpro.com, InstantDoc ID 45622.)
When you view your link, you'll notice that two link targets are enabled. When clients browse to this link, Dfs directs them to one of the targets. You can now repeat the previous steps to configure all the links and targets you require to populate your Dfs structure.
As you've seen, multiple link targets can exist for a link. This capability poses an obvious question: Won't the content be different on the different link targets, meaning that Dfs could randomly redirect clients to different link targets, and thus the clients would see different files? Because multiple targets for a link are effectively separate shares on separate servers, no mechanism exists for keeping the targets' contents synchronized. Therefore, it's entirely possible for the various link targets to have different content, so that a client could browse to a folder, access data, return to the same folder later, but be redirected to an alternate link target and see an entirely different set of data. (However, this scenario is unlikely, as I explain in the Web-exclusive sidebar "Fine-Tuning Dfs Redirection" at http://www.windowsitpro.com, InstantDoc ID 45621.) Fortunately, Win2K Server and later Dfs implementations include File Replication Service (FRS), which DCs use to keep their Sysvol shares synchronized. Dfs uses FRS to synchronize the targets of a link that's part of a domain-based namespace. FRS provides various replication options, such as continuous replication, which allows replication of changes in near real time, and replication at certain times of the day. (Windows 2003 R2 will include a brand-new version of FRS just for Dfs.) I provide the steps for configuring Dfs-based file replication in "Setting Up Dfs-Based File Replication." If you have standalone Dfs and require synchronization, you need to use a file-synchronization tool such as the Windows resource kit Robocopy utility to provide that capability.
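To make the synchronization idea concrete, here is a minimal one-way mirror in Python, standing in for what a tool like Robocopy does (minus retry logic, NTFS permissions, and logging). The function name is illustrative and not part of any Windows API.

```python
import filecmp
import os
import shutil

def mirror(src, dst):
    """One-way mirror: copy new or changed entries from src to dst and
    delete entries that exist only in dst."""
    os.makedirs(dst, exist_ok=True)
    cmp = filecmp.dircmp(src, dst)
    # Copy anything that is only in src, plus files whose contents differ.
    for name in cmp.left_only + cmp.diff_files:
        s, d = os.path.join(src, name), os.path.join(dst, name)
        if os.path.isdir(s):
            shutil.copytree(s, d, dirs_exist_ok=True)
        else:
            shutil.copy2(s, d)  # copy2 preserves timestamps
    # Remove anything that exists only on the destination side.
    for name in cmp.right_only:
        d = os.path.join(dst, name)
        if os.path.isdir(d):
            shutil.rmtree(d)
        else:
            os.remove(d)
    # Recurse into directories present on both sides.
    for name in cmp.common_dirs:
        mirror(os.path.join(src, name), os.path.join(dst, name))
```

Running something like mirror(r"\\serverA\docs", r"\\serverB\docs") from a scheduled task would keep a second standalone link target roughly in step; Robocopy's /MIR switch is the production-grade equivalent.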
As you've seen, Dfs greatly simplifies access to network resources for end users and, with AD enabled, provides a measure of fault tolerance. To make sure Dfs works optimally for your organization, you'll need to decide what files need to be replicated and, if necessary, tweak Dfs's redirection. I've covered the essential information you need to get started with Dfs. For more in-depth information about Dfs, check out Microsoft's Distributed File System and File Replication Services Web site at http://www.microsoft.com/windowsserver2003/technologies/fileandprint/file/dfs/default.mspx.
Project Snapshot

PROBLEM: You need a seamless way to let users access folders on computers scattered across the network. Microsoft's Dfs offers a solution.
WHAT YOU NEED: A network that includes multiple servers running Windows 2000 Server or later; Active Directory (AD), if you want to use AD–enabled Dfs; Windows client systems.
DIFFICULTY: 2 out of 5.
Dfs Differences

Each Dfs type has its advantages and limitations. An important point to remember about Dfs is that unlike Active Directory (AD)–integrated DNS, domain-based Dfs doesn't have to be hosted on a domain controller (DC); it can be hosted on any domain member server running Windows 2000 Server or later. At startup time and at periodic intervals (by default, once every hour), the Dfs servers simply query the domain's PDC emulator to obtain the latest Dfs namespace data. This periodic querying can become a resource bottleneck. It also imposes a practical 16-root-replica limit on Dfs implementations, which means that you probably shouldn't have more than 16 Dfs servers per Dfs namespace, because synchronization between the Dfs servers becomes more complex each time the Dfs structure is changed (i.e., a new link or link target is added). (The exception to this limit is Dfs on Windows Server 2003, which has a new "root-scalability mode" that usually lets Dfs servers query any DC in the domain instead of only the PDC emulator.)

Another limitation of domain-based Dfs is that the entire Dfs structure (e.g., links, link targets, root servers) is stored as a single object that must be replicated to all DCs in the domain whenever anything in the Dfs structure is changed. (Does this remind you of Win2K Server's group membership replication?) Because of this replication behavior, Microsoft recommends a maximum Dfs object size of less than 5MB—about 5000 links. (An average Dfs implementation has only about 100 links.) If you need more than 5000 links, consider splitting the Dfs namespace into multiple namespaces or using standalone Dfs namespaces, which have a recommended limit of 50,000 links. Another way to minimize the space that Dfs uses in AD is to limit the number of comments you enter for the links, because comments are also stored in the Dfs AD object. Remember, though, that the Dfs namespace isn't likely to change frequently: after you've set up your initial Dfs configuration, it will remain fairly static and won't often be replicated.
As we’re entering 2020, we’re also plotting out our New Year’s resolutions. Instead of suggesting what you should do next year, however, let’s have a look at some cybersecurity mistakes you should avoid for a more secure 2020.
Denying you are a target
You’ve probably already brushed off this possibility with contempt, thinking the chances are slim to none. To quote Dwight from The Office, “False”. When it comes to the internet, you cannot anticipate if a breach will directly affect you. New malware may appear or a service that you use may get hacked and your password can be leaked. All of these are probabilities that you should be aware of, and prevention can go a long way in securing your connected presence.
Clicking on suspicious links
Receiving spam has become a part of everyday life. Sometimes it’s just a harmless ad, but every now and then it can be something more sinister. You might get an email coaxing you to click on a suspicious link to claim a prize you’ve won. Or an offer that sounds too good to pass up might appear in an ad. Whatever the case, if you have even a shred of doubt about it: avoid clicking on it at all costs. The link just may contain malware that may wreak all kinds of havoc on your computer.
Failing to patch
Is your computer nagging you for the umpteenth time to install that pesky update? Perhaps the latest patch for your smartphone’s OS has been released. You’ve probably hit the postpone button more times than you’ve snoozed your alarm. We can’t speak to your sleeping habits, but you should always keep your devices updated to the latest version of software available. It will probably save you from a headache in the long run. The infamous WannaCryptor malware spread due to devices not being patched.
Recycling your passwords
To simplify the arduous task of memorizing scores of passwords, some people resort to recycling: they reuse the same password or passphrase, perhaps varying a character or two or adding to it. This practice should be avoided, because if bad actors figure out one of your passwords, reuse allows them to guess the rest.
Not using 2FA
Two-factor authentication (2FA), also known as multi-factor authentication (MFA), is a simple way to add an extra layer of security to your accounts. The most common 2FA method used by popular online services is a text message with an authentication code sent to your phone. It is one of the most basic methods but use at least this one if you have no other option. If bad actors are missing one piece of the puzzle, they cannot get in until they overcome that hurdle, which might make them look for an easier challenge elsewhere.
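Text-message codes are only one flavor of 2FA; authenticator apps typically generate time-based one-time passwords (TOTP, RFC 6238). The sketch below, using only Python's standard library, shows how such a code is derived from a shared secret and the current 30-second time step, which is why a stolen code quickly becomes useless.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at t = 59s
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, for_time=59))  # 287082
```

A server verifying the code simply recomputes it for the current time step (often allowing one step of clock drift) and compares.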
Ignoring your router setup
When it comes to home interconnectivity, the router is the heart of your home. All your devices with an internet connection are linked to it, be it your smart TV, smartphone, personal computer or laptop. For convenience’s sake, a lot of people just go through the bare necessities when installing it or keep the default settings pre-configured by your ISP. You should always take steps to secure your router, so you can browse the internet safely.
Using unsecured public Wi-Fi
Most places like cafes, restaurants, and even shops offer complimentary Wi-Fi connections, which is a welcome alternative to using up your precious data plan. As convenient as such free connections might be, you should be careful what you connect to. An unsecured public Wi-Fi can lead to your private data being stolen or your device being hacked.
Not using a VPN
Besides using a Virtual Private Network (VPN) to connect to your work’s servers, there are other security reasons to use one in private. You can use VPNs to access your home network remotely, to limit your ISP from seeing what you are doing, or to browse safely on public Wi-Fi. Depending on what you want to do, there are various types of VPNs you can choose from to protect your communication.
Skimping on security software
The internet is a useful tool, no doubt, but to paraphrase G.R.R. Martin, it can be dark and full of terrors. Granted, this leans towards hyperbole, but you should always use reputable security software to protect your data. Clicking on the wrong link might lead to malicious code making its way to your computer. Security software provides multiple layers that can stop these threats in their tracks. Prevention is the mother of security; athletes in contact sports use mouthguards as a preventive measure because fixing their teeth is more expensive than protecting them. The same goes for your data.
Underestimating backup and encryption
If, due to some unforeseen circumstances, your computer kicks the can, having a backup comes in handy. Always back up your sensitive data and things you have been working on recently; thus, if something does happen, you can continue unhindered by the unfortunate loss of your device. The same goes for encryption. Never underestimate the value of having your data encrypted: if you get hacked, the bad actor will have a tough time getting to your data; if your device gets stolen, you have an extra layer of security in place before you remotely wipe it.
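A backup is only useful if it is intact. A minimal sketch of integrity checking is to hash each source file and compare it with its backup copy; the helper names below are illustrative, not from any particular backup tool.

```python
import hashlib
from pathlib import Path

def sha256_of(path):
    """Streaming SHA-256 so large files don't need to fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(source, backup):
    """Return relative paths whose backup copy is missing or differs."""
    problems = []
    for src_file in sorted(source.rglob("*")):
        if src_file.is_file():
            rel = src_file.relative_to(source)
            dst_file = backup / rel
            if not dst_file.is_file() or sha256_of(src_file) != sha256_of(dst_file):
                problems.append(str(rel))
    return problems
```

An empty result means every source file has a byte-identical backup copy; anything listed needs to be re-backed up before you can rely on it.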
If you just counted ten tips and not twenty, you would be right. So stay tuned, as tomorrow we’ll continue with tips that will be geared towards smartphones. | <urn:uuid:bd67c7c6-8b79-4f9b-9f44-6fe19956fdbe> | CC-MAIN-2024-38 | https://www.cxoinsightme.com/opinions/20-cybersecurity-mistakes-to-avoid-in-2020/ | 2024-09-12T02:32:21Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651420.25/warc/CC-MAIN-20240912011254-20240912041254-00621.warc.gz | en | 0.9382 | 1,148 | 2.59375 | 3 |
The rapid evolution of artificial intelligence has led us into an era where the integration of multiple streams of data can simulate the human process of assimilating information through senses. Unlike traditional unimodal AI systems, which operate within the confines of a single data format—be it text, images, or sounds—multimodal AI endeavors to understand and interact with the world by synthesizing inputs from various data types. This development promises an AI that doesn’t just understand a picture or a sentence in isolation, but has the profound ability to grasp the context by considering all the different types of input it has been designed to process.
The Advent of Multimodal Learning
With the advancement in machine learning and natural language processing, AI systems are now able to learn from a mixture of datasets, which enables them to grasp the nuances of human language and sensory experiences. Multimodal AI stitches together data from text, visuals, audio, and sometimes even tactile signals to gain a comprehensive understanding of the content. This interdisciplinary approach to AI development mirrors the multifaceted way humans perceive and interact with their surroundings, thus allowing AI systems to make more informed and relevant decisions based on a more colorful and detailed tapestry of information.
For instance, when examining a social media post, a multimodal AI can consider the text, emojis, images, and the tone of any accompanying audio to determine the sentiment behind the message. This allows for a level of insight that unimodal systems cannot achieve. By understanding the context in which language is used—along with the associated visual or auditory cues—multimodal AI systems can navigate through complex communicative scenarios, becoming more adept at understanding human intentions and reactions.
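As a toy illustration of this kind of "late fusion", imagine each modality has already produced its own sentiment score between -1 and 1; a weighted combination then yields an overall reading. The modality names, scores, and weights below are invented for illustration; real multimodal models typically learn a joint representation instead.

```python
def fuse(scores, weights):
    """Weighted late fusion of per-modality sentiment scores in [-1, 1]."""
    total = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total

# Hypothetical per-modality outputs for one social-media post: mildly
# positive text, very positive emoji, slightly negative vocal tone.
post = {"text": 0.2, "emoji": 0.9, "audio_tone": -0.4}
weights = {"text": 0.5, "emoji": 0.2, "audio_tone": 0.3}
print(round(fuse(post, weights), 2))  # 0.16
```

Even this toy version shows the point of multimodality: the text alone reads as near-neutral, but the combined signal leans clearly positive once the other channels are considered.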
Bridging Human-AI Interaction
The aspiration behind incorporating multimodality into AI is to create technology that is more empathetic and intuitive in its response to human input. By bridging the gap in understanding between AI and humans, these systems are anticipated to pave the way for more natural and engaging interactions. Multimodal AI seeks to not just parse data, but to interpret it the way humans do: by considering all the aspects of a message and the context it’s presented in.
In practical terms, this enables an AI system to assist users in ways that feel more personal and context-aware. For example, future virtual assistants might be able to understand a user’s mood through their speech and facial expressions, tailoring responses accordingly. In healthcare, multimodal AI could revolutionize diagnostics by combining patients’ verbal descriptions of symptoms with medical imagery and data derived from other sensory inputs, resulting in more accurate treatments.
Applications of Multimodal AI
Multimodal AI is making inroads in various sectors, reimagining how services and products are offered and consumed. In the realm of consumer electronics, smartphone features such as facial recognition, voice commands, and fingerprint sensors are all manifestations of multimodal technology. Simultaneously, the entertainment industry employs multimodal AI to create immersive virtual reality experiences where audio-visual elements are synced with haptic feedback.
In the sphere of safety and security, multimodal systems can significantly enhance surveillance by fusing video feeds with audio and other sensors to detect anomalies and potential threats promptly. The growing field of autonomous vehicles also stands to benefit from multimodal AI, as such vehicles must process visual, auditory, and sensory data to safely navigate complex environments.
Shaping the Future through Multimodal AI
The advancement of artificial intelligence (AI) is propelling us toward an age of multitasking machines. Moving past the era of unimodal AI, which is confined to a single data type such as text or images, multimodal AI stands out by its ability to process and understand diverse data forms simultaneously. This marks a significant leap where AI can contextualize information much like humans—bringing together text, visual, and auditory data to form a holistic picture. By combining different data streams, multimodal AI opens a window into a more comprehensive understanding of the world, mirroring human cognition in a way that single-mode AI never could. This evolution in AI technology promises more intuitive interactions and a deeper grasp of complex contexts, delivering an experience that’s more aligned with the way humans perceive and interpret their surroundings. | <urn:uuid:e26cc5f9-cf4d-4921-a592-5a80f322a548> | CC-MAIN-2024-38 | https://bicurated.com/ai-and-ml/multimodal-ai-integrating-senses-for-enhanced-interaction/ | 2024-09-18T07:58:15Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651886.88/warc/CC-MAIN-20240918064858-20240918094858-00121.warc.gz | en | 0.915561 | 883 | 3.4375 | 3 |
Do women cope better than men with stress or is gender irrelevant? This was a question I was asked by a leader the other day. It’s also something I’ve referred to often in my keynotes and media interviews. This blog answers the question.
The term ‘fight or flight’ (also known as ‘the fight-flight-or-freeze-fawn response,’ ‘hyperarousal’ or ‘the acute stress response’) was first coined by Walter Cannon in 1932 and is generally regarded as the way a person reacts to stress. For example, if someone sizes up a threat or predator and determines that they have a realistic chance of overcoming the predator, then it’s likely they’ll attack. But when the threat is perceived to be more dangerous, then flight is more probable.
But is this the actual truth, or is the research flawed? Some researchers believe it is flawed, because it’s based on research done before 1995, when women constituted only about 17% of participants in laboratory studies of physiological and neuro-endocrine responses to stress. As the gender balance was redressed over the five years that followed, a modified understanding emerged of how men and women respond to stress.
Today, evidence suggests that women and men react differently to stress–both psychologically and biologically. And women may cope better than men in high-stress workplace scenarios because they are biologically programmed to respond differently.
Given the ever-evolving threat landscape and the increasing complexity of data breaches, hacks, and compliance failures, it’s important to consider the biological advantages that females possess in highly stressful situations and how they could be an invaluable asset when facing complex cyber threats.
Biobehavioral Responses to Stress in Females: Tend-and-Befriend, Not Fight-or-Flight
According to researchers at the University of California (published by Shelley E Taylor and her team in 2000), it turns out that during times of stress, males respond with a ‘fight or flight’ response, designed to compete and conquer. Females, on the other hand, respond with a ‘tend and befriend’ response – a physiological reaction involving the release of oxytocin (the ‘love hormone’) associated with peer bonding, affiliation, and motherhood.
The researchers discovered that a female’s response is based on female reproductive hormones and endogenous opioid peptide mechanisms, and that…
“…by virtue of differential parental investment, female stress responses have selectively evolved to maximize the survival of self and offspring.”
As estrogen may play a key role in modulating social behaviours tied to oxytocin, such as friendly and caring tendencies, this suggests that a female’s level of nurturing and friendliness could be highest during her late luteal phase, less so in the follicular phase, and decreased once she enters menopause. See the diagram (fig 1.) below for information on the phases and how each phase affects people who menstruate.
Evidence from rhesus monkeys adds further credence to this hypothesis as females tend to be more sociable near ovulation or when given estrogen (Wallen & Tannenbaum, 1997) but further evidence needs to be gathered in order to better understand these dynamics.
How Women can Use their Monthly Cycles Strategically
The researchers’ findings suggest that females are better able than males to defuse potentially stressful, high-pressure situations through collaboration, communication, and cooperation – rather than by engaging in a zero-sum game or attempting to dominate the situation. The findings may also explain why males are more vulnerable to a broad array of stress-related disorders, including episodes of violence, such as homicide and suicide; dependence on stress-reducing substances, such as alcohol or drugs; stress-related accidents and injuries; and patterns of cardiovascular reactivity.
In humans, studies have shown that testosterone (which is found in much higher doses in men than women) increases in response to attractive women and acute stress, including high-intensity exercise and psychological stress. This effect varies depending on the type of stressor as well as individual differences.
In highly stressful situations, males appear to be more aggressive towards other men. They are more likely to use physical aggression in struggles for power within a hierarchy or to defend territory against perceived external enemies. While females tend to be less physically aggressive than males, researchers at the University of California say they often demonstrate similar, or even higher, levels of indirect aggression – for example, gossiping, spreading rumours, and convincing another person to act in a manner that puts a third party at a disadvantage. However, female verbal aggression still tends to be lower than male verbal aggression.
Insights from Game Theory
Insights from game theory also suggest that ‘tend and befriend’ reduces risk and may be preferable as a stress response to the traditional ‘fight or flight’. This is because the mathematics attaches a positive value to co-operation: it gives both parties an opportunity to test and learn more about one another. As the information generated increases the possible outcomes, it produces better solutions for any arising issue, something that isn’t achievable if the parties never interact again. In game theory, competition is only deemed the best strategy when the interaction is a one-off and the parties will never have contact with each other again, which rarely occurs in business environments.
Extensive biological research has shown that behaviour can have a significant impact on biology, from altering genetic expression to affecting how the body responds to stress. Similarly, biology is also known to influence behaviour in various ways. Whether females are better equipped biologically than males to handle high-pressure workplace scenarios is contentious. If they are, it lends more weight to the argument (and drive) for women to be included in cybersecurity, and certainly to be involved in crisis handling during data breaches, hacks, and compliance failures.
In accounting, understanding the difference between accrued expenses and accounts payable is crucial for managing a company’s financial health. Both terms refer to liabilities—amounts that a business owes—but they’re recorded differently on the balance sheet.
Accrued expenses represent costs incurred but not yet paid, like wages earned by employees that haven’t been disbursed. Accounts payable, on the other hand, are obligations to pay suppliers for goods or services that have already been received, but the payment hasn’t yet been made. Knowing how to distinguish between these two can help you maintain accurate financial records and make better-informed decisions.
Accrued expenses are costs that your company has incurred during a specific period but have not yet paid. These expenses are recognized on your income statement when they are incurred, even though the cash hasn’t left the company’s account. Common examples include salaries and wages earned by employees that will be paid in the next pay period, interest on loans that will be paid later, or utilities that have been used but the bill hasn’t been received yet.
For instance, if your company’s employees work during the last week of December, but payday falls in January, the wages for that week would be recorded as an accrued expense in December’s financial statements. This ensures that the expense is matched with the revenue generated during the period it was incurred, providing a more accurate picture of your company’s financial performance.
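To make the timing concrete, the December wage example can be written as two balancing journal entries. The sketch below uses hypothetical figures ($5,000 of wages) and is written in Python purely for illustration:

```python
# Hypothetical figures: $5,000 of wages earned in December, paid in January.
ledger = []

def post(date, debit, credit, amount):
    ledger.append({"date": date, "debit": debit, "credit": credit, "amount": amount})

# December 31: recognize the expense in the period it was incurred (accrual).
post("Dec 31", "Wage Expense", "Accrued Wages Payable", 5000)

# January payday: settle the liability when the cash actually leaves.
post("Jan 05", "Accrued Wages Payable", "Cash", 5000)

# The liability account nets to zero once the wages are paid.
credits = sum(e["amount"] for e in ledger if e["credit"] == "Accrued Wages Payable")
debits = sum(e["amount"] for e in ledger if e["debit"] == "Accrued Wages Payable")
print(credits - debits)  # 0
```

The first entry books the expense against December revenue; the second clears the liability without touching the expense again, which is exactly the matching behavior described above.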
Accounts payable, on the other hand, refers to amounts your company owes to its suppliers for goods or services it has already received. This liability is recognized when the invoice from the supplier is received, and it remains on the company’s balance sheet until the payment is made. Examples of accounts payable include payments due to vendors for goods and services the company has received but not yet paid for.
For example, if a business receives a shipment of raw materials on December 15th with payment due in 30 days, the amount owed will be recorded as an account payable. This entry reflects the company’s obligation to pay its supplier, even though the cash hasn’t yet been transferred.
Accrued expenses and accounts payable both represent obligations that a company must pay, but they differ in several key ways: accrued expenses are recognized in the period the cost is incurred, often before any invoice exists, while accounts payable are recorded for a known amount once a supplier’s invoice has been received.
Understanding these differences is essential for accurately tracking and managing a company’s short-term liabilities, ensuring that financial statements provide a true reflection of the company’s financial obligations.
Eftsure provides continuous control monitoring to protect your EFT payments. Our multi-factor verification approach protects your organisation from financial loss due to cybercrime, fraud and error.
How does an FM use the dew point temperature to control humidity?
This document explains how an FM uses the dew point temperature to control humidity.
All Cooling Units.
All Product models, all serial numbers.
When the FM systems belong to a group, the FM uses the dew point temperature to control humidity. Dew point is a function of both temperature and humidity, so you must take temperature into account when calculating humidification demand.
The dew point temperature setpoint for humidification is calculated from the humidification setpoint and the reheat setpoint (use cooling setpoint if reheat is not present). The dew point of the room is calculated from the return humidity and the return temperature.
The dew point temperature setpoint for dehumidification is calculated from the dehumidification setpoint and the cooling setpoint.
For example, if the reheat setpoint is 68 F and the humidification setpoint is 50%, the dew point setpoint is 49 F. If the room temperature is 74 F and the return humidity is 40% the dew point of the room is 48 F.
The humidifier demand is calculated as the difference between the actual room dew point and the desired room dew point divided by the sensitivity.
So in this example, Demand = (49 - 48) / 2 = 0.5, or 50% demand, and the humidifier output will be about 50%.
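The arithmetic above can be sketched in a few lines of Python. This is only an illustration: it uses the Magnus approximation for dew point, and the FM firmware's internal psychrometric routine may differ slightly in the decimals, so the dew points are rounded to whole degrees as in the example.

```python
import math

def dew_point_f(temp_f, rh_percent):
    """Approximate dew point (deg F) from dry-bulb temperature (deg F) and
    relative humidity (%) using the Magnus formula."""
    t_c = (temp_f - 32) * 5 / 9
    gamma = math.log(rh_percent / 100) + (17.62 * t_c) / (243.12 + t_c)
    td_c = 243.12 * gamma / (17.62 - gamma)
    return td_c * 9 / 5 + 32

# Worked example from above, rounded to whole degrees:
setpoint_dp = round(dew_point_f(68, 50))  # reheat 68 F / humidify 50% -> 49 F
room_dp = round(dew_point_f(74, 40))      # return 74 F / return RH 40% -> 48 F

sensitivity = 2.0
demand = (setpoint_dp - room_dp) / sensitivity
print(f"demand = {demand:.0%}")  # demand = 50%
```

This reproduces the 49 F setpoint dew point, the 48 F room dew point, and the 50% humidifier demand from the example.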
There is one other note. Versions of the FM controller firmware v6.7.0 and earlier do not calculate low and high humidity alarms correctly. This has been corrected in version 6.11.0 which has just been released. I'm in the process of putting the new version on the firmware download page of apc.com, but if you can't wait, send me an e-mail and I'll forward it to you.
COOL SETPOINT / DEHUMID SETPOINT = DEWPOINT HIGH LEVEL
REHEAT SETPOINT / HUMID SETPOINT = DEWPOINT LOW LEVEL | <urn:uuid:1b29731d-30b8-4ace-a329-6f79a0f42b8f> | CC-MAIN-2024-38 | https://www.apc.com/hr/en/faqs/FA165687/ | 2024-09-10T23:29:49Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651323.1/warc/CC-MAIN-20240910224659-20240911014659-00885.warc.gz | en | 0.875528 | 427 | 2.578125 | 3 |
Geospatially Enabled Augmented Reality
Are we living in a simulation? Maybe soon enough we will be! Before the walls went up and construction of IG Labs was made a reality, we were given a glimpse of the grand vision with virtual reality. Thanks to modern technology, we were able to walk through and see what would eventually become our own personal “Tech Shangri La”. But outside of being a neat toy for the digital age, what are the practical uses of this kind of technology?
One commercial company seeking to answer that in a meaningful way is XYZ Reality™. Their HoloSite augmented reality (AR) system is designed to take the virtual world of a designer’s program and overlay it onto the world we see in front of us. This technology allows construction and design crews to fuse the visual benefits of AR with the precision of Geographic Information Systems (GIS) to map designs to their physical locations with pinpoint accuracy. Using GIS, XYZ Reality™ can bring construction projects to life with millimeter accuracy in a 1:1 scale overlay in a real-time augmented reality projection displayed in their AR-enabled HoloSite helmet. These innovations have already proven their worth by allowing construction and inspections to be done proactively rather than reactively. This, of course, saves time and money for the companies employing this technology!
This technology has only just begun to be tapped for its full potential, and I believe that GIS and AR could begin to shape our daily lives and the way we interact with our surroundings. We have already seen automobile manufacturers leaning into this technology to introduce AR-enabled windshields that assist with navigation by displaying turns and lane recommendations based on your GPS. Perhaps in the future we will be able to use AR glasses and GPS locational data to read restaurant reviews for that bakery you just walked by. Perhaps we could use this technology to allow SCUBA divers to navigate reefs and shipwrecks as they dive. The possibilities are practically endless when it comes to how we can marry the technological benefits of AR and GIS.
However, beyond saving money and making day to day life more exciting, could a technology like this be used to save lives? With most building blueprints, water, electrical, and gas line diagrams, etc. being available through a city’s public works archives, could this information be incorporated into a similar augmented reality database that would enable a higher degree of safety for emergency services? Could a firefighter be able to view a heads-up-display (HUD) map of a building full of smoke and better navigate through an otherwise unknown labyrinth? Could that same firefighting crew examine a wall and identify the best place to cut through to avoid the electrical or gas lines and decrease the chance of further complicating a rescue?
What about the applications of this technology on the battlefield? With HUD displayed maps, and the addition of inputs from tactical ISR platforms, troops on the frontline could more quickly and accurately assess friendly locations as well as the best tactical routes for moving in a tense scenario. With first person shooter style video games being so popular, and the concept of having informational displays being the norm, is it really that big of a leap to imagine the soldier of the future using this technology to take the fight to the enemy?
Knowledge is power, but access to that knowledge is what turns that power into a tool. I believe that augmented reality, along with the geospatial data around us is going to be the next powerful tool in building the world of tomorrow and the future that we are already living in! I personally hope to be able to build that world as a safer more enabled place to live, and I am excited to be at the dawn of this technologic age. So, where do you see the future of AR mapping and precision overlayed reality? | <urn:uuid:39277b83-2737-4d87-a163-f4439e5cc91f> | CC-MAIN-2024-38 | https://intelligenesisllc.com/augmented-reality-2/ | 2024-09-12T06:49:23Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651422.16/warc/CC-MAIN-20240912043139-20240912073139-00785.warc.gz | en | 0.950693 | 780 | 2.515625 | 3 |
Enumeration techniques are a very fast way to identify registered users. With valid usernames, attackers can mount effective brute-force attacks to guess the passwords of user accounts.
Making sure no pages or APIs can be used to differentiate between a valid and invalid username
- Login:
Make sure to return a generic "No such username or password" message when a login fails. In addition, make sure the HTTP response and the time taken to respond are no different when a username does not exist and an incorrect password is entered.
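As a sketch of the idea (not production code; a real system would use a salted password hash such as bcrypt or Argon2, and the user store here is hypothetical), the handler below returns the same message for an unknown username and a wrong password, and hashes a dummy value when the user does not exist so both paths do comparable work:

```python
import hashlib
import hmac
import secrets

def hash_pw(pw: str) -> str:
    # Stand-in hash for illustration only; use bcrypt/argon2 with a salt in practice.
    return hashlib.sha256(pw.encode()).hexdigest()

# Hypothetical user store; in a real system this would be a database lookup.
USERS = {"alice@example.com": hash_pw("correct horse")}

def login(username: str, password: str) -> str:
    stored = USERS.get(username)
    # Hash against a random dummy when the username is unknown, so the
    # response time is similar whether or not the account exists.
    expected = stored if stored is not None else hash_pw(secrets.token_hex(16))
    ok = hmac.compare_digest(hash_pw(password), expected) and stored is not None
    # One generic message for both failure modes.
    return "Welcome" if ok else "No such username or password"
```

Note the use of a constant-time comparison (`hmac.compare_digest`) and a single failure message, so neither the response body nor the timing distinguishes a bad username from a bad password.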
- Password Reset:
Make sure your "forgotten password" page does not reveal usernames. If your password reset process involves sending an email, have the user enter their email address. Then send an email with a password reset link if the account exists.
- Account Registration:
Avoid having your site tell people that a supplied username is already taken. If your usernames are email addresses, send a password reset email if a user tries to sign up with a current address. If usernames are not email addresses, protect your sign-up page with a CAPTCHA.
- Profile Pages:
If your users have profile pages, make sure they are only visible to other users who are already logged in. If you hide a profile page, ensure a hidden profile is indistinguishable from a non-existent profile. | <urn:uuid:c535f33d-f999-4ac1-9ca4-a98c8f9dba72> | CC-MAIN-2024-38 | https://docs.ostorlab.co/kb/USER_ENUMERATION/index.html | 2024-09-15T23:03:17Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651668.26/warc/CC-MAIN-20240915220324-20240916010324-00485.warc.gz | en | 0.861663 | 269 | 2.609375 | 3 |
In an increasingly complex world, ensuring the safety and security of students and staff has become a top priority for educational institutions. While security discussions often revolve around cybersecurity, it’s equally important to address the physical security issues that schools face. These challenges can have a significant impact on the learning environment, making it crucial for schools to proactively identify and mitigate them.
Inadequate Access Control
Access control is the cornerstone of physical security. Schools frequently encounter problems related to unauthorized individuals gaining access to their premises. These can range from outsiders wandering onto campus to former students attempting to reenter without permission; as we all know, any unauthorized person can pose a serious danger to the safety of everyone at the school. Without proper access control, schools are vulnerable to a variety of threats, including vandalism, violence, and theft. Implementing measures like ID card systems, security guards, and secure entrances can help mitigate these risks.
Poor Perimeter Security
A weak or poorly monitored perimeter can make schools susceptible to unauthorized access and intrusions. A lack of fencing, broken gates, and insufficient lighting can create vulnerabilities that trespassers can exploit. Inadequate perimeter security not only compromises student safety but also jeopardizes the integrity of the learning environment. Regular inspections and repairs, as well as the installation of security cameras, can enhance perimeter security and deter potential threats.
Limited Surveillance Systems
Security cameras play a pivotal role in deterring criminal activities and providing valuable evidence when incidents occur. Unfortunately, many schools still lack comprehensive surveillance systems or have outdated equipment that hampers their effectiveness. The absence of proper surveillance makes it difficult to monitor entrances, hallways, parking lots, and other critical areas. Investing in modern surveillance technology, such as high-resolution cameras and video analytics, can significantly enhance a school’s ability to monitor and respond to security incidents.
Emergency Response Preparations
In today’s world, schools must be prepared to respond swiftly and effectively to emergencies such as active shooters, natural disasters, or medical crises. Without proper planning and training, chaotic situations can quickly spiral out of control. Inadequate emergency response preparations can result in confusion, delayed notifications, and inadequate communication with law enforcement and first responders. Regular drills, clear emergency protocols, and communication systems that facilitate quick notifications are essential to ensure the safety of everyone on campus.
Neglecting Security Education
While the physical security infrastructure is critical, educating students, staff, and parents about security measures and procedures is equally important. Many individuals may not be aware of the proper steps to take in case of an emergency or may not recognize suspicious behavior. This lack of awareness can hinder effective response efforts. Schools should prioritize security education through workshops, training sessions, and awareness campaigns to empower their community to contribute to a safer environment.
TRUST THE PROFESSIONALS AT ARK SYSTEMS
Located in Columbia, Maryland, ARK Systems provides unsurpassed quality and excellence in the security industry, from system design all the way through to installation. We handle all aspects of security with local and remote locations. With over 30 years in the industry, ARK Systems is an experienced security contractor. Trust ARK to handle your most sensitive data storage, surveillance, and security solutions. | <urn:uuid:4c6d3dd5-6852-4de5-b25a-f65d5bfecf6e> | CC-MAIN-2024-38 | https://www.arksysinc.com/blog/5-common-physical-security-issues-in-schools/ | 2024-09-15T22:13:23Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651668.26/warc/CC-MAIN-20240915220324-20240916010324-00485.warc.gz | en | 0.932434 | 654 | 3.1875 | 3 |
As their name suggests, biopesticides are pesticides derived from natural or bio-based materials such as plants, animals, bacteria, and certain minerals. Since biopesticides are derived entirely from natural materials, they do not have the adverse environmental impact of synthetic pesticides.
The adverse effects of synthetic pesticides on the environment and human and animal health are a major concern, which has prompted many countries to adopt stringent regulations regarding the use of synthetic pesticides. This is one of the most important reasons behind the increasing adoption of biopesticides across the world and this factor will continue to play a key role in the growth of the global biopesticides market.
The global biopesticides market is projected to grow at a high CAGR of 15.9% from 2019 to 2023. By the end of 2023, the market is expected to reach USD 6.4 billion. The growth of the market will be driven by increasing awareness of the harmful effects of synthetic pesticides and the growing adoption of organic farming. Increasing demand for organically grown food will give a major boost to the growth of the global market.
Biopesticides have become an important component of Integrated Pest Management (IPM) systems and simple residue management techniques. The consumption of biopesticides has increased considerably in vegetable farming, organic agriculture, and orchard crops & berries. This, in turn, is driving the growth of the global market for biopesticides.
Factors such as increasing investments by leading crop protection companies, growing environmental concerns related to the use of synthetic pesticides, and favorable government policies will affect the growth of the global market positively.
Registration of biopesticides usually takes less time than that of chemical pesticides, as biopesticides pose fewer risks than synthetic or chemical pesticides. Moreover, many countries, especially developing countries, have simplified the registration process for biopesticides in order to promote their use. This factor will also affect the market positively.
However, the size of the biopesticides market is still very small, and so is the profit margin. Further, the cost of manufacturing biopesticides is comparatively higher than that of synthetic pesticides. So, achieving the level of profitability needed to sustain itself in the global pesticides market can be a major challenge for the biopesticides industry. However, the rising adoption of biopesticides and advancements in manufacturing technology will provide growth opportunities in the coming days.
Presently, North America dominates the global biopesticides market, mainly due to increasing interest in green agricultural practices in the region, a simple registration process for biopesticides, growing demand for organic products, and new product development. The North American market is expected to maintain its dominance in the global market in the coming years as well.
The British parliament has approved a new provision that will ban the use of universal default passwords for Internet of Things (IoT) devices, which is expected to mitigate the risk of cyberattacks that exploit factory-default credentials.
This new bill, known as the Product Security and Telecommunications Infrastructure Bill (PSTI), requires tech companies to use unique passwords for home IoT devices. Julia Lopez, from the Ministry of Media, Data and Digital Infrastructure, believes that this is a good measure for the fight against hackers, who always try to break into this kind of systems.
The authorities will impose fines equivalent to up to $12 million USD on companies that violate this regulation, so the British government expects the technology industry to rush to take the necessary measures to comply with this law.
On top of that, the new law also requires tech companies to be more transparent regarding security patches and updates to their products for home environments. It should be noted that the bill further stated that only 20% of IoT companies are practicing transparency for their security updates, so dozens of companies will need to adopt new computer security policies.
The official also believes that this practice gives end users a false sense of security, so it is necessary for manufacturing companies to take the initiative by setting a sufficiently secure factory password. On the other hand, the consumer protection body Which? has been pointing out the serious security flaws in IoT devices for years, so they consider these efforts of legislators necessary.
On these security issues, experts note that there are currently around 12,000 known security flaws in IoT devices, so all efforts to mitigate their exploitation are welcomed by the cybersecurity community.
The exploitation of vulnerabilities in IoT devices is one of the main security problems in home environments. According to Symantec, more than 55% of these devices use default passwords such as “123456”, while 3% of these devices use the “admin” password. With this new law, these passwords will be a thing of the past.
To learn more about information security risks, malware variants, vulnerabilities and information technologies, feel free to access the International Institute of Cyber Security (IICS) websites.
He is a cyber security and malware researcher. He studied Computer Science and started working as a cyber security analyst in 2006. He is actively working as a cyber security investigator and has also worked for different security companies. His everyday job includes researching new cyber security incidents. He also has a deep level of knowledge in enterprise security implementation.
Many information security threats stem from vulnerabilities presented by outdated workstations, network devices, and mobile devices such as tablets and smartphones.
These devices are embedded with software, electronics, and sensors that enable them to transmit, store, and exchange information over the internet and enterprise networks.
However, support for these devices is limited, since vendors constantly release upgraded versions and then end support for older ones, are slow to release security patches and antivirus software updates, and fail to provide consistent application updates for systems and IoT devices.
Any weakness, such as the Central Processing Unit (CPU) flaws Meltdown and Spectre, can give malicious and determined attackers access to protected information stored in kernel memory, resulting in the disclosure of sensitive information like passwords, cryptographic keys, proprietary enterprise information, and sensitive email.
Increase of mobile workforce
The number of remote workers has increased exponentially due to COVID-19 safety measures taken by the federal government, state, territorial, and local governments, and corporations. Additionally, most of the workforce uses their home network to access organizational network infrastructure resources through a virtual private network (VPN). Using an unpatched organization-issued system, personal mobile device, or home computer may introduce vulnerabilities that hackers, whether nation-states, lone wolves, or organized cybercriminals, can exploit.
End-user behavior and gateway vulnerabilities are the key catalysts for BYOD and remote access vulnerabilities. The fundamental problem with VPNs is the trusted tunnel between the remote employee and the corporate network, which can be easily exploited through malware, ransomware, and remote access trojans (RATs). Additionally, VPN gateways are exposed to the entire world via the internet and require continuous monitoring and adequate security patching.
Poor Patching Practices and the Travelex Incident
Furthermore, poor patching practices by organizations can lead to vulnerabilities that are exploited. In December 2019, the UK-based organization Travelex paid a $2.3 million ransom to restore operations. However, the attack cost $30 million more in operational impact due to inadequate patching of its VPN gateway.
There are many security and privacy issues to consider when using outdated devices that vendors no longer support.
If you cannot update the operating system for your personal computer or mobile device, it is time to buy a new one or get a company-issued device with adequate security controls.
The financial impact, regulatory violations, and damage to the organization’s reputation from such incidents could be detrimental. Hence, solid vulnerability management and risk mitigation strategies, including robust security awareness training and continuous patching, should be the de facto standard for securing the enterprise network and client systems.
Dr. Daniel Harrison
Dr. Harrison is a Doctor of Computer Science in Information Assurance, Chief Information Security Officer (CISO), Chief Privacy Officer, and Executive Board Advisor. Dr. Harrison is US Army Combat Veteran with expertise in Local Government, Industrial Control systems, Laboratory Information Systems, DoD Information Systems, and Enterprise Network Security.
Dr. Harrison is a solution-oriented, transformational CISO with expertise across all information security facets. A cybersecurity expert with top US security clearances and a record of exemplary service building and leading multiple cybersecurity task forces across various US military branches, local government, and highly regulated industries. A change agent and servant leader who drives needed organizational transformations and turnarounds that optimize the security of mission-critical data, systems, and people and inspire individuals and teams to learn more, achieve more, and serve as a vessel for service excellence to others and the organization. | <urn:uuid:510a7667-a5bd-4ba5-bf7c-b2698f8f2ed3> | CC-MAIN-2024-38 | https://cisotimes.com/flaws-in-legacy-systems-byod-and-remote-access-how-they-impact-operations-security-of-the-enterprise/ | 2024-09-12T08:36:02Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651440.11/warc/CC-MAIN-20240912074814-20240912104814-00885.warc.gz | en | 0.928714 | 716 | 2.578125 | 3 |
Internet access is no longer an option; it has become a requirement for everyone. An internet connection has its own set of advantages for an organization, but it also allows the outside world to communicate with the organization’s internal network.
Visiting a website requires a connection to a specialized computer called a web server, which, like any other computer, can be targeted by hackers. Attackers who manage to connect to such a machine can potentially infect it with malware and launch DDoS attacks from it.
That is where a firewall becomes helpful.
What is Firewall?
A firewall is a type of network security device that monitors and controls incoming and outgoing traffic. It can be either hardware or software. It allows, rejects, or blocks specific traffic based on a predetermined set of rules. It protects the network from both external and internal threats.
How does a firewall work?
When traffic arrives, a firewall scans it and tries to match it against its defined set of rules. Once the traffic matches a rule, the action specified by that rule is applied. If the incoming traffic is determined to be a security risk, the firewall prevents it from entering the internal network.
The vulnerability of networks connected to the internet necessitates the use of firewalls. A third party can readily infiltrate an unprotected network and, once in control of a website or server, infect it with malware. Without a firewall, the network is also exposed to DDoS (Distributed Denial-of-Service) attacks, which can force a website or server to crash.
There are different ways a firewall can filter and control unauthorized traffic, such as:
- Packet Filtering
In this strategy, traffic is handled as packets, small pieces of data that the firewall inspects individually. Packets trying to enter the network are checked against a set of rules. Packets that match a known threat are quarantined, while the others are allowed to proceed to their intended destination.
This form of firewall has no way of knowing if the packet is part of an existing traffic stream. Packets can only be allowed or denied depending on their unique headers.
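A stateless packet filter can be sketched as an ordered rule list matched against header fields only. The Python below is purely illustrative (real firewalls implement this in the kernel or in hardware), and the addresses and rules are made up for the example:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    dst_port: int
    protocol: str

# Each rule matches on header fields only; a stateless filter has no
# notion of connections or traffic streams.
RULES = [
    {"action": "deny",  "src_ip": "203.0.113.7"},            # known-bad host
    {"action": "allow", "dst_port": 443, "protocol": "tcp"}, # HTTPS
    {"action": "allow", "dst_port": 80,  "protocol": "tcp"}, # HTTP
]

def filter_packet(pkt: Packet) -> str:
    for rule in RULES:
        fields = {k: v for k, v in rule.items() if k != "action"}
        if all(getattr(pkt, k) == v for k, v in fields.items()):
            return rule["action"]
    return "deny"  # implicit default-deny when nothing matches

print(filter_packet(Packet("198.51.100.4", "10.0.0.5", 443, "tcp")))  # allow
```

Rules are evaluated in order and the first match wins; anything that matches no rule falls through to the default deny.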
- Stateful Inspection
Stateful Inspection is a more advanced type of firewall filtering that looks at a variety of elements in each data packet and compares them against a database of trusted connection information. These elements include source and destination IP addresses, ports, and applications.
To be allowed to get through to the internal network, incoming data packets must have the required information.
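What distinguishes stateful inspection from stateless filtering can be sketched with a connection table keyed by the flow 5-tuple. The policy shown here (allow internally initiated flows and their replies, deny unsolicited inbound traffic) is just one example policy, not how any particular product behaves:

```python
# Connection table: flows already accepted, keyed by 5-tuple.
established = set()

def handle(src_ip, src_port, dst_ip, dst_port, proto, outbound):
    key = (src_ip, src_port, dst_ip, dst_port, proto)
    reverse = (dst_ip, dst_port, src_ip, src_port, proto)
    if key in established or reverse in established:
        return "allow"  # part of an existing traffic stream
    if outbound:
        # Example policy: internally initiated flows create state,
        # so the reply traffic will match `reverse` above.
        established.add(key)
        return "allow"
    return "deny"  # unsolicited inbound traffic
```

Because replies match the reversed tuple of a tracked flow, inbound packets belonging to a conversation the inside host started are allowed, while an unsolicited inbound connection attempt is denied.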
- Proxy Service
To safeguard network resources, the Proxy Firewall, also known as an Application Firewall or a Gateway Firewall, inspects incoming traffic at the application layer. It restricts the kinds of applications that a network can support, which improves security but reduces functionality and performance.
A proxy server acts as a go-between, preventing direct connections between the two sides of the firewall. Each packet must pass via the proxy, which determines whether traffic is allowed to pass or is blocked based on the rules set forth.
- Next-Generation Firewalls (NGFW)
Next-generation firewalls (NGFWs) are used to guard against modern security threats such as malware and application-layer assaults. Packet Inspection and Stateful Inspection are combined in NGFW. To protect the network from modern threats, it also contains Deep Packet Inspection (DPI), Application Inspection, malware filtering, and antivirus.
The Importance of Proper Firewall Configuration
A firewall is an important part of network security and must be configured correctly to protect a company against cyberattacks and data breaches. Hackers can obtain unauthorized access to a protected internal network and steal critical information if the firewall is configured incorrectly.
A properly configured firewall can protect an online server from harmful cyberattacks to the fullest extent possible.
Secure Ways to Configure a Firewall
A firewall setting is critical for ensuring that only authorized administrators have access to a network.
The following actions are required:
- Securing the Firewall to authorized personnel
Secure your firewall so only authorized personnel can access the internal network.
- Update your firewall to the latest firmware.
- A firewall should never be put into production without the proper configurations in place.
- Delete, disable, or rename the default accounts and use unique and complex passwords.
- Never use shared accounts managed by multiple administrators.
- Disable Simple Network Management Protocol (SNMP).
- Creating Firewall Zones and Establishing IP Addresses
Decide which assets need to be safeguarded and map out your network so that these assets can be grouped together and assigned to different networks or zones based on their functions and sensitivity levels. The greater the number of zones you construct, the more secure the network will be.
However, managing more zones takes more effort, which is why assigning zones to firewall interfaces and sub-interfaces requires establishing associated IP addresses.
- Configuring Access Control Lists (ACLs)
Access Control Lists are used by organizations to determine which traffic is permitted to pass or is banned (ACLs). ACLs are the rules that a firewall uses to determine what actions should be taken in response to unauthorized traffic attempting to access the network.
The actual source and destination port numbers as well as IP addresses should be specified in ACLs. Each ACL should have a “Deny All” rule to allow organizations to filter traffic. The interface and sub-interface should both be inbound and outgoing to guarantee that only allowed traffic reaches a zone.
- Configuring Firewall Services and Logging
Other services, such as an Intrusion Prevention System (IPS), a Network Time Protocol (NTP) server, and others, can be built within some firewalls. It’s critical to turn off any firewall-supported extra services that aren’t in use.
- Testing the Firewall Configuration
It’s crucial to test your firewall settings once you’re confident it’s correct. Testing such as Vulnerability Assessment and Penetration Testing (VAPT) is crucial for ensuring that the correct traffic is permitted to pass and that the firewall is working as intended. In the event that the firewall configuration fails during the testing phase, make a backup.
How Can Kratikal Help?
As a CERT-In empanelled cybersecurity solutions firm, Kratikal provides a complete suite of VAPT testing services, one of which is Network Security Testing, a method of evaluating the external and internal security state of a network to detect and illustrate flaws present within the network.
The Infrastructure Penetration Testing includes a variety of tasks:
- Identifying, prioritizing, and quantifying the threats within the network.
- Checking the control of security.
- Analyzing the defenses against network-based attacks such as brute-force attacks, and port scanning among others.
Kratikal also offers Firewall Auditing. The assessment methodology includes proper planning and execution.
The steps followed are:
- Security Configuration Review
- Firewall Rule-set Review or ACL Review
- Firewall Auditing Test Case
Depending on the business and technical requirements, we use industry-standard security testing tools such as Burpsuite, Nmap, Metasploit, and others throughout each IT architecture.
The relevance of firewall setup to the security of our networks cannot be overstated. Firewalls protect our IT infrastructure, but they, too, require regular maintenance in order to perform correctly. A functioning firewall ensures that our networks remain healthy as well.
What other configuration options do you see for a firewall? Let us know what you think in the comments section below! | <urn:uuid:12b22a4f-29ed-4bb9-86af-e8c51452e2a9> | CC-MAIN-2024-38 | https://kratikal.com/blog/5-secure-ways-to-configure-a-firewall/ | 2024-09-14T20:48:45Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651580.74/warc/CC-MAIN-20240914193334-20240914223334-00685.warc.gz | en | 0.921947 | 1,595 | 3.703125 | 4 |
Ethernet - Pause Frames
Ethernet pause frames are control frames used in Ethernet networks to implement flow control at the MAC (Media Access Control) level. They allow network devices to temporarily stop the transmission of data to prevent buffer overflow and manage network congestion.
The primary purpose of Ethernet pause frames is to manage traffic on a network, preventing data loss and ensuring smooth data transmission. They help in situations where a device or network segment is overwhelmed by the amount of data being received.
How They Work
- Transmission: When a network device (e.g., a switch or a network interface card) detects that its buffer is about to overflow, it sends a pause frame to the transmitting device.
- Content: A pause frame includes a "pause time" parameter, which specifies the duration (in units of 512-bit time slots) for which the sender should stop sending data.
- Reception: Upon receiving a pause frame, the transmitting device halts the transmission of data for the specified duration, allowing the receiving device time to process the buffered data and prevent overflow.
Fields in a Pause Frame
- Destination MAC Address: Typically a multicast address reserved for pause frames (01-80-C2-00-00-01).
- Source MAC Address: The MAC address of the device sending the pause frame.
- Type/Length: Specifies that the frame is a pause frame (usually 0x8808).
- Control Opcode: Indicates that this is a pause frame (usually 0x0001).
- Pause Time: Specifies the duration for which the transmitting device should pause.
- Congestion Management: In networks with varying traffic loads, pause frames help manage congestion and avoid packet loss.
- Full-Duplex Ethernet: Pause frames are typically used in full-duplex Ethernet environments where both ends of a connection can send and receive data simultaneously.
- Impact on Performance: While useful for preventing data loss, excessive use of pause frames can negatively impact network performance by introducing delays.
- Compatibility: Not all Ethernet devices support pause frames, which may limit their effectiveness in mixed-device environments.
Ethernet pause frames are defined in the IEEE 802.3x standard, which specifies the implementation of flow control in Ethernet networks. | <urn:uuid:d2e978e1-91f1-4b73-8bf4-6a955270345d> | CC-MAIN-2024-38 | https://notes.networklessons.com/ethernet-pause-frames | 2024-09-14T20:29:54Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651580.74/warc/CC-MAIN-20240914193334-20240914223334-00685.warc.gz | en | 0.862426 | 472 | 3.296875 | 3 |
How employee behavior impacts cybersecurity effectiveness
A recent OpenVPN survey discovered 25 percent of employees, reuse the same password for everything. And 23 percent of employees admit to very frequently clicking on links before verifying they lead to a website they intended to visit.
Sabotaging corporate security initiatives
Whether accidental or intentional, an employee’s online activities can make or break a company’s cybersecurity strategy. Take password usage as one example. Employees create passwords they can easily remember, but this usually results in weak security that hackers can bypass with brute force attacks. Similarly, individuals who use the same password to protect multiple portals — like their bank account, email and social media — risk compromising both their personal and work information.
To reinforce strong password habits, some employers have adopted biometric passwords, combining ease-of-use with security. A reported 77 percent of employees trust biometric passwords, and 62 percent believe they are stronger than traditional alphanumeric codes. But even among those who trust things like fingerprint scans and facial recognition, user adoption is lagging — just a little more than half of employees (55 percent) use biometric passwords.
Convenience also plays a factor in determining how employees approach cybersecurity behaviors. Unfortunately, some individuals are unwilling to trade the convenience of basic passwords and certain technologies for secure cyber habits. Employees are reluctant to abandon things like voice-activated assistants, for example, even though 24 percent of them believe it has the potential to be hacked.
In fact, only 3 percent of employees have actually stopped using their Alexas and Google Homes out of fear of being hacked. This signals to employers that even when employees know the security risks associated with a certain technology, they will ignore the warning signs and continue to use it because of its convenience.
Developing safe cyber hygiene practices
Employers have a responsibility to teach their employees good cyber habits to protect themselves and business operations from malicious actors. Simply telling people to avoid visiting infected websites isn’t enough — more than half (57%) of Millennials admit to frequently clicking on links before verifying they lead to a website they were intending to visit.
Unlike traditional approaches to cybersecurity, a cyber hygiene routine encourages employees to proactively think about the choices they make on the internet. In addition to thorough security education and clear communications, employers can implement the following tips to help employees develop good cyber habits.
Promote positive reinforcement when employees make smart decisions
Employees may be a company’s first line of security, but many fail to report cyber attacks out of fear of retribution. Instead of employing fear tactics to scare employees off weak passwords and phishing schemes, employers should consider rewarding or acknowledging individuals who embrace good cyber strategies. Employees are less likely to shy away from security training and are more incentivized to change their approach to cybersecurity when they are sent encouraging messages for safe internet behavior.
Offer continuous training on best practices. Hackers work year round to catch companies off guard, using tools like phishing to man-in-the-middle to DDoS attacks to breach defense mechanisms in place. While employers can’t predict what they will face next, they can offer routine training to employees to keep them up-to-date with the latest security threats. This can help employees recognize and deal with evolving threats like smishing, a fairly recent scam targeting individuals with smartphones and other mobile devices.
Building a work culture centered around good cyber hygiene takes time, but will ultimately protect companies in the long run from online threats. When smart online habits become second nature, both employers and employees can better prevent hackers from taking advantage of otherwise stagnant security environments. | <urn:uuid:237542e8-cf81-471e-b007-11887d08b48f> | CC-MAIN-2024-38 | https://www.helpnetsecurity.com/2018/06/12/employee-behavior-impacts-cybersecurity-effectiveness/ | 2024-09-14T21:09:21Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651580.74/warc/CC-MAIN-20240914193334-20240914223334-00685.warc.gz | en | 0.93421 | 733 | 2.640625 | 3 |
Cloud computing training refers to educational programs and courses that teach individuals or organizations the skills and knowledge needed to work with cloud computing technologies. Cloud computing involves using remote servers to store, manage, and process data and applications, rather than relying on local servers or personal computers.
Cloud computing training can cover a variety of topics, including cloud architecture, deployment models, security, and management. Training programs may focus on specific cloud platforms such as Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP), or they may provide a more general overview of cloud computing concepts and technologies.
Cloud computing training may be offered in a variety of formats, including online courses, in-person classes, and self-paced learning modules. Some training programs may be designed for beginners with little or no prior experience in cloud computing, while others may be geared towards professionals seeking to enhance their skills and knowledge in this area.
The benefits of cloud computing training include the ability to learn new skills and technologies that are in high demand in today’s job market, as well as the opportunity to improve productivity and efficiency in a variety of industries. Cloud computing training can also help organizations reduce costs and improve agility by adopting cloud-based solutions for data storage, processing, and application deployment.
Cloud based training is so relevant in today’s world that many colleges and universities have different courses covering a variety of cloud-related topics. Here is a list of topics that these cloud-based training courses offer:
Cloud computing training is on its way to become one of the most sought-after courses provided. There is further chance advance the cloud training certifications into recognized degrees.
While it is possible to teach cloud computing training with a traditional classroom setup, it can’t be denied that cloud training can be best performed on the cloud itself. Cloud-based LMS can be used to train participants easily with only internet and a machine. Such a system will ensure effective and accessible learning system for a company that is also cost-effective.
Here are the most striking advantages of online cloud computing training:
While traditional learning method has its own advantages, it makes more sense for cloud computing training to be performed online as students get to experience the cloud and its services in real-time. The added advantages like flexibility, scalability and cost-effectiveness also make a strong case. A company or a business that is keen on staying on top of all things technological cannot make the mistake of not choosing cloud services and offering cloud-based training.
Cookie | Duration | Description |
cookielawinfo-checkbox-analytics | 11 months | This cookie is set by GDPR Cookie Consent plugin. The cookie is used to store the user consent for the cookies in the category "Analytics". |
cookielawinfo-checkbox-functional | 11 months | The cookie is set by GDPR cookie consent to record the user consent for the cookies in the category "Functional". |
cookielawinfo-checkbox-necessary | 11 months | This cookie is set by GDPR Cookie Consent plugin. The cookies is used to store the user consent for the cookies in the category "Necessary". |
cookielawinfo-checkbox-others | 11 months | This cookie is set by GDPR Cookie Consent plugin. The cookie is used to store the user consent for the cookies in the category "Other. |
cookielawinfo-checkbox-performance | 11 months | This cookie is set by GDPR Cookie Consent plugin. The cookie is used to store the user consent for the cookies in the category "Performance". | | <urn:uuid:989cbed1-ff87-4f37-bc88-6d61dd350c98> | CC-MAIN-2024-38 | https://cloudlabs.ai/faq/what-is-cloud-computing-training/ | 2024-09-16T02:52:58Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651668.29/warc/CC-MAIN-20240916012328-20240916042328-00585.warc.gz | en | 0.93654 | 740 | 3.15625 | 3 |
Technology is one of the fastest growing fields of the modern age, with new innovations appearing almost daily. While it can feel overwhelming to try and keep with it all, there are a lot of things you can do as an IT professional to develop new skills and maintain a competitive edge in your industry. In this article, we’ll discuss 9 of the fastest growing tech skills in today’s world and how they can help you succeed in the IT industry.
Block chain is currently at the top of LinkedIn’s list for the hard skills that will be in highest demand in 2020. According to LinkedIn, Blockchain is in most demand in the UK, USA, Australia, Germany, and France.
Blockchain is a distributed ledger that stores digital information regarding transactions. Each block has a unique code that allows programmers to tell it apart from other blocks. After you make a purchase, a network of computers verifies this purchase and then stores the transaction information in a block, where it becomes publicly available to anyone—including you.
Ever since its dissociation with cryptocurrencies, Blockchain has seen increased demand on a global level. Last year, Indeed had very few job candidates interested in Blockchain, but now employers are showing a huge demand for this skill.
2. Big Data Analytics
Certifications and skills relating to big data analytics have grown in market value every quarter in the last two years. These analytics use advanced techniques to manage and process large and complicated data sets.
According to iTransition, experts predict that data volumes will continue to grow in the next few years, reaching 175 zettabytes by 2025. To put this in context, 1 zettabyte is equivalent to 1 trillion gigabytes. That’s a lot of data.
To help manage such large amounts of data, businesses and cloud companies will be looking to experts who understand big data analytics. This technology will become more critical than ever, as the demand for storage and backup continues to rise.
3. Artificial Intelligence
Artificial intelligence (AI) is a range of computer science whose job is to build smart machines. Machine learning is a subfield of AI involving computer systems and algorithms. These machines teach themselves how to make predictions and progress in their particular functions, all without being programmed to do so.
Artificial intelligence and machine learning have already led to advances in areas such as transportation, healthcare, and robotics. In coming years, these tools will only continue to move the tech industry forward.
Autonomous systems are certainly changing the future of the workforce, but that doesn’t mean you have to be out of a job. By learning how you can implement AI into your workplace, you can increase efficiency and develop skills that will make you a better employee and candidate for future jobs.
Cryptography utilizes code to limit access to sensitive information. Today, cryptography is most commonly associated with transforming plain text into ciphertext (known as encryption), and then back again (known as decryption).
Cryptography protects sensitive data online and makes it easier to send and receive information in a secure way. Certain cryptographic techniques also prevent spoofing and forgeries.
The most common forms of cryptography include single-key and symmetric-key encryption, whose algorithms create a block cipher with a secret key. The sender uses this key to encrypt the data, and the receiver uses the key to decipher it. The Advanced Encryption Standard is used today as a Federal Information Processing Standard (FIPS) to guard sensitive information.
According to Accenture, security breaches have increased by 11% in the last year. Security-based skills are in higher demand than ever before. Learning the ins and outs of cryptography is essential for organizations who need to store and transmit sensitive data. By learning this skill, you can take your organization’s cybersecurity to the next level.
5. Smart Contracts
- Financial derivitaves
- Insurance premiums
- Breach contracts
- Property law
- Credit enforcement
- Crowdfunding agreements
6. Deep Learning
Deep learning is a subfield of machine learning that focuses on neural networks, or algorithms inspired by brain function. Basically, deep learning draws on brain simulations to make learning algorithms better and easier to use.
Unlike older learning algorithms, deep learning doesn’t plateau after a certain amount of data. Instead, as larger neural networks are constructed and imbued with data, their performance continues to increase.
Deep learning has the potential to provide great value beyond traditional analytics techniques. Deep learning can increase a business’s efficiency by reducing errors, enhancing customer service, and improving the quality of a product or service.
When deciding whether to implement deep learning, consider how it could benefit your organization. Do your employees spend a lot of time on tasks that could be automated? Would you benefit from streamlining day-to-day tasks? Could AI speed up your data entry? If you answered yes to any of these questions, then it might be worth it to invest in deep learning.
7. Prescriptive Analytics
Prescriptive analytics predicts the best future course of action, based on the available data. Unlike predictive analytics, however, it focuses on actionable insights instead of just data monitoring.
Prescriptive analytics can help you make more informed decisions by first analyzing various possible outcomes. Manufacturers, for example, can use prescriptive analytics to model prices on a variety of factors before sending products to stores. Insurance agents can use prescriptive analytics in risk assessment models before providing pricing information to clients. Pharmacists can use prescriptive analytics in selecting the best test groups for clinical trials.
Regardless of your industry, prescriptive analytics will help you come out ahead in your decision making and business models.
8. Internet of Things
- Software development
- Data analytics
- Business intelligence
- Information security
DevSecOps integrates security within the DevOps process with ongoing collaboration between engineers and security teams. In the race to keep up with new forms of software development, DevSecOps adds the security component to a mindset of communication and continuity.
One of the greatest benefits to a DevSecOps approach is that you can integrate security during the development phase, instead of adding it on later. Additionally, DevSecOps leads to greater speed and agility for security teams, enhanced communication between team members, and early detection of vulnerabilities.
By bridging the gap between IT and security, you can enhance your organization’s current use of DevOps and deliver code in a faster and safer way than before.
Fastest Growing Tech Skills—What They Can do For You and Your Organization
Regardless of how long you’ve been in the tech industry, it’s never too early or too late to add some new skills to your reservoir of knowledge. By learning how to understand AI, or by implementing tools like DevSecOps into your skillset, you can benefit your organization and become a more valuable worker in your industry.
Here at CR-T, we take pride in providing enterprise-level IT services at prices that work for small businesses. Our team of experts can become your IT support department, responding to issues quickly, often before you even know about them. Covering everything from your servers and network infrastructure to your computers, workstations and mobile devices, we provide end-to-end solutions for all your technology needs.
Time and experience have helped us develop best practices and workflow procedures designed to keep your focus on your business, not your technology.
Blog & Media
Managed IT Support
Amazon Web Services | <urn:uuid:03626871-fa29-4072-920a-fed0d808812a> | CC-MAIN-2024-38 | https://www.cr-t.com/blog/9-fastest-growing-tech-skills/ | 2024-09-16T02:24:17Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651668.29/warc/CC-MAIN-20240916012328-20240916042328-00585.warc.gz | en | 0.937916 | 1,532 | 2.703125 | 3 |
Malware Protection in IRC
The Internet has revolutionized communications to an extent that, up until recently, was unimaginable. The worldwide web allows users around the globe to share all kinds of information or to simply chat a while about any topic. A large number of websites allow users to access chats and many different types of chats are available, but the most-widely used, due to its efficiency, is IRC (Internet Relay Chat).
IRC servers allow channels (or chatrooms) to be set up for different topics. Therefore, there could be channels about the economy, politics, society or any other topic of interest. Users can access these channels through an IRC client, an application that allows users to access the server hosting the channel they want to access.
Millions of users worldwide access the different IRC channels every day, the majority of them with the sole aim of chatting for a while. However, other users access the channels in order to annoy the rest of the users. This aim can be achieved through many different means, even though the end object is always the same: expel a user from an IRC chat channel.
The different techniques that can be used include nuke attacks. There are various types of nuke attacks, although the majority of them are based on sending specially-crafted data packets, so that the system that receives them cannot process them. As a result, the connection is lost and the user is disconnected from the IRC server. Most nuke attacks are carried out by exploiting flaws in operating systems when handling certain communications protocols.
Flood attacks are another type of attack that aims to expel users from chatrooms. These consist of sending information to the target computer, so that when it replies, it sends indiscriminate data to the server. As IRC servers limit the amount of information they can receive, when they detect a massive data flow, they automatically disconnect the computer that has sent it.
How to protect your computer when using IRC
– The majority of attacks via IRC can be prevented using firewalls, both hardware and software. If the firewall detects an unusual flow of data through any of the communications ports, it will immediately block it.
– It is also highly recommendable to keep updated about the new flaws detected in operating systems and the software installed on your computer that can be used to carry out attacks via IRC. To do this you should subscribe to a security bulletin. A good example is Oxygen3 24h-365d, a free e-bulletin published by Panda Software everyday, which gives up-to-the-minute information about the new vulnerabilities that have emerged. Similarly, you should also regularly visit the websites of the manufacturers of the software installed on your computer, where you will find all the patches needed to correct the security problems detected. | <urn:uuid:baba1e1f-39b9-4bda-b3c9-77a96c92b34b> | CC-MAIN-2024-38 | https://www.helpnetsecurity.com/2004/10/27/malware-protection-in-irc/ | 2024-09-18T15:29:29Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651899.75/warc/CC-MAIN-20240918133146-20240918163146-00385.warc.gz | en | 0.939976 | 564 | 3.03125 | 3 |
Data privacy means keeping customers’ information private and confidential. But third-party agreements about data sharing, data ownership, and data custody are still issues that confront chief information officers and the legal establishment.
How do you cover all those bases to ensure data privacy?
First, What Is Data Privacy?
One challenge is that definitions of data privacy vary, depending on whom you talk to. This makes protecting consumer information an uncertain practice.
In one case, data privacy is defined by Builtin as “the practice of protecting personal, private, or sensitive information, ensuring it’s collected with the proper consent, kept secure and used only for authorized purposes, while respecting both individual rights and existing regulations.”
However, data privacy is also defined by IBM as: “the principle that a person should have control over their personal data, including the ability to decide how organizations collect, store and use their data.”
In legal terms, an invasion of privacy is described by Cornell Law School as: “The infringement upon an individual’s protected right to privacy through a variety of intrusive or unwanted actions. Such invasions of privacy can range from physical encroachments onto private property to the wrongful disclosure of confidential information or images.”
How Regulators Protect Data Privacy
With clear definitions of data privacy being elusive, many companies have favored adoption of the data privacy protections enumerated in Europe’s General Data Protection Regulation (GDPR). The GDPR gives consumers the right to reduce the information trails they leave when they browse social media or use the internet. Individuals are also able to request the data that companies collect and hold on them, and demand that it be deleted. In short, the assumption is that individual consumers “own” their data, and that they have the right to decide how this data is to be appropriated and used.
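The GDPR rights described above (access to one's data, and deletion on demand) translate directly into engineering requirements. The following is a minimal Python sketch of how a service might honor both kinds of request; the in-memory store and all class, method, and field names are illustrative assumptions, not a real API.

```python
import json
from dataclasses import dataclass, field, asdict

# A hypothetical in-memory user store. All names here are
# illustrative assumptions, not part of any real framework.

@dataclass
class UserRecord:
    email: str
    location_history: list = field(default_factory=list)
    marketing_consent: bool = False

class DataSubjectRequests:
    """Sketch of two GDPR rights: access (Art. 15) and erasure (Art. 17)."""

    def __init__(self):
        self._users = {}

    def add_user(self, user_id, record):
        self._users[user_id] = record

    def export_user_data(self, user_id):
        # Right of access: hand the individual a portable copy of
        # everything held on them (JSON here).
        return json.dumps(asdict(self._users[user_id]))

    def erase_user_data(self, user_id):
        # Right to erasure: delete the record and report whether
        # anything was actually removed.
        return self._users.pop(user_id, None) is not None

store = DataSubjectRequests()
store.add_user("u1", UserRecord(email="a@example.com", location_history=["NYC"]))
exported = store.export_user_data("u1")  # the user's portable data copy
deleted = store.erase_user_data("u1")    # honoring a deletion demand
```

In a real system, the export would have to pull from every store holding personal data, and erasure would have to cascade to backups and any third-party processors.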
The GDPR is the most comprehensive guidance on data privacy to date, which is why many countries have expressed a desire to adopt it. In the US, however, opposition to regulation has stymied efforts to adopt similar stringent data protections, and this has resulted in lawsuits.
Google’s $392 million settlement with users in 2022 is one example. In that case, a group of US attorneys general sued Google on behalf of their constituents. The lawsuit alleged that Google was tracking user locations without clearly telling users that the tracking was taking place.
Google paid monetary damages as part of the settlement. It also had to inform users in more straightforward ways about how it was collecting their data, how users can delete it, and tell users how they can control whether they are being tracked.
The Google case brought major ramifications with it, since many companies besides Google were providing and selling information about their users to third parties. These companies were informing users about their data sharing practices in fine-print documents that were hundreds of pages long, and that the average user could not be expected to read. Nevertheless, if the user wanted to use a certain service or application, they had to tap the “AGREE to conditions” button, or be locked out.
This practice unfairly benefited companies. Under the law, a practice like this can be regarded as a “contract of adhesion,” meaning that one party to the contract has an unfair advantage over the other party because the advantaged party wrote the contract and created it in such a way that the other party could not easily read and digest it.
A contrarian legal viewpoint is that each individual has the responsibility to read and understand every word of a contract he agrees to, even the fine print ones.
The end result?
The legal community is torn on data privacy rights, and Congress hasn’t made much progress, either.
So, What’s the Best Approach to Data Privacy?
Adopting widely accepted data privacy practices is the obvious starting point. That said, doing so can be easier said than done.
There are the third-party agreements that upper management makes that include provisions for data sharing, and there is also the issue of data custody. For instance, suppose you choose to store some of your customer data on a cloud service, so you no longer have direct custody of that data. If the cloud provider then experiences a breach that compromises your data, whose fault is it?
Once again, there are no ironclad legal or federal mandates that address this issue, but insurance companies do tackle it.
“In a cloud environment, the data owner faces liability for losses resulting from a data breach, even if the security failures are the fault of the data holder (cloud provider),” says Transparity Insurance Services.
Transparity goes on to say, “State and federal data privacy laws in the US do not impose civil liabilities in the event of a cyber intrusion,” but that “Typically, liability is imposed if…An entity failed to remedy or mitigate the damage once the breach occurred.” It added that “Failure to timely notify the affected individuals under a state’s data breach notification statute may give rise to liability for civil penalties imposed by a state attorney general or other state enforcement agency.”
This is why it’s standard data privacy practice today for companies to immediately notify users and customers of a breach, to mitigate the breach, and to provide customers with free-of-charge security and monitoring services for a period of one year.
It should also be standard practice to obtain a cyber liability insurance policy, and to include cyber liability as a risk issue in the company’s overall risk management plan.
This cyber liability insurance coverage can vary in nature, depending upon what a company wants to cover and pay.
Most cyber liability insurance policies cover first- and third-party expenses, and they include customer and user notification costs, forensics to determine how a breach occurred, credit and fraud monitoring services for customers, and crisis management to mitigate damage to a company's reputation.
In a case where a cloud provider leaks your data, you may also be liable for judgments against that provider, and you may need to pursue litigation against the provider.
“A data breach claim for a cloud vendor is really an errors and omissions (E&O) claim,” says Woodruff Sawyer Law. “The cloud vendor usually has no direct liability to the individuals whose data has been breached, but there may be a claim from their customer for failing in their performance of services (in this case, keeping the customers' data secure). For this reason, errors and omissions and cyber coverage are generally bundled together in a single policy for technology companies.”
There is no universal agreement on what data privacy is. This makes data privacy management a challenge for companies and their CIOs.
In this environment, it’s important for CIOs to adopt as many widely accepted data privacy practices as possible, stay in touch with management and the legal team, include data privacy as a requirement in RFPs to vendors, and require QA for data privacy in every new application.
The research agreement, signed at the 2018 Farnborough International Airshow in the U.K., will support efforts meant to develop supersonic over-land passenger flights, NASA said Wednesday. Supersonic travel has been cited as a way to vastly reduce international travel time.
The partnership is NASA and ONERA’s 12th collaboration, establishing common verification cases and a forum for the exchange of technical data and insights.
Efforts are intended to facilitate the development of new technologies to mitigate the effects of sonic booms during air transportation.
NASA is currently testing the X-59 supersonic aircraft at Langley Research Center in Virginia. | <urn:uuid:7c4bc926-4fe2-418a-9569-14435d35bcca> | CC-MAIN-2024-38 | https://executivegov.com/2018/07/nasa-partners-with-french-aerospace-research-center-on-sonic-boom-site-prediction/ | 2024-09-08T23:40:17Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651035.2/warc/CC-MAIN-20240908213138-20240909003138-00385.warc.gz | en | 0.940234 | 136 | 2.796875 | 3 |
July 8, 2020
Nearly a decade ago, researchers made international news by demonstrating that an implantable medical device — an insulin pump — could be hacked via radio and controlled to the detriment of its user. Even earlier research indicated that heart monitors could be hacked to provide incomplete or erroneous data from their sensors. These two cases shared a trait also shared by most of the implantable medical device security research that has followed.
Most of the research has been focused on how the devices could be manipulated to harm the user. Alan Michaels tries to answer a very different question with his research: Could an implantable medical device be hacked to harm a third party?
Michaels, director of the Electronic Systems Lab at the Virginia Tech Hume Center, has a very specific third party in mind. He's interested in the threat a "rogue" implantable device might pose to a Sensitive Compartmented Information Facility (SCIF) — a special facility where sensitive and classified information can be worked on and discussed.
The issue, Michaels says, is that insulin pumps, continuous glucose monitors, heart monitors, and more are electronic devices that have radio communications capabilities. From the viewpoint of the SCIF, the fact that they're involved with human health is almost irrelevant.
"We're all paranoid, you know. I mean, we can pick up what could happen," Michaels explains, "but we also have to be practical about the true risk." And calculating that true risk is the focus of his research.
Part of that risk begins when the devices are manufactured. "We're most interested in the fact that devices are mostly not manufactured in the United States," Michaels says. For the reason location matters, he turns to the example of the popular DJI drones, which were found to have encrypted tunnels back to the manufacturer in China, through which the drones' telemetry data was communicated. "Why would I not believe that the same thing is occurring [in implantable devices]?" he asks.
Michaels is most interested in the possibility that a third party might hack into a user's device and use it as a point of entry for the SCIF — a point of entry that would then allow the third party to pivot to other, much more sensitive systems.
Attack pivots become more of a concern as devices become more capable. Michaels points to fitness trackers and similar devices as wearable systems that some SCIFs now allow because of the health benefits. And yet, "Basically it is a smartwatch that starts to look a lot like a personal computer. It really is very capable," he says.
And some of these wearables are more than just step counters that can easily be removed at the SCIF's door. "We found one called 'Adam' that's an asthma monitor," Michaels says. "And it basically gives you a predictive warning that you're about to have a major asthma attack." He points out, "That's harder to take off because you're not worried about your 10,000 steps — you may actually have an asthma attack while in the facility."
So far, SCIF administrators have dealt with implantable devices through one-off waivers for the wearers. That may be fine for a single employee in a single facility, but Michaels says his team has calculated that there could easily be more than 100,000 individuals with implantable devices who have a regular need to access a SCIF. That's a lot of waivers, he believes.
So far, Michaels says, there has been relatively little recognition of this as an issue in secure facilities, with existing rules driven by HR as much as cybersecurity. "We want to protect the information and support the individual. Yet there comes a point which you probably deny entry," he adds, and that point may be coming sooner than many people think.
The reason for the rapid arrival rests on medical devices in the research pipeline. Michaels mentions a "bionic eyeball" that might provide sight but also have sufficient intelligence to pose a threat.
Michaels' research points him to a broad conclusion: "Make sure you give the support to the individual to do their work, but we think there needs to be a balance with mitigations."
Michaels will be presenting results of his research at Black Hat, in a session titled "Carrying Our Insecurities with Us: The Risks of Implanted Medical Devices in Secure Spaces" at 10:00 a.m. on Wednesday, August 5.
This is one of the main challenges for Artificial Intelligence development: the design of a smart urban system capable of optimizing the management of resources. Every aspect of a city can be improved thanks to Artificial Intelligence, including security, public transportation, energy savings, and public health.
Monitoring mobility in cities
This technology can allow us to improve the design and construction of infrastructure. Sensor technology, alongside an algorithm capable of counting people, could revolutionize the way traffic and mobility are managed in cities, improving our understanding of civic behavior.
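The "algorithm capable of counting people" can be sketched very simply: count each detected device at most once per time window, so a phone that pings a sensor repeatedly is not mistaken for many people. The ping data, device IDs, and five-minute window below are illustrative assumptions, not a description of any real deployment.

```python
def count_people(detections, window_s=300):
    """Estimate foot traffic from (timestamp, device_id) sensor pings.

    A device is counted again only after `window_s` seconds have
    passed since it was last counted, filtering out repeat pings.
    """
    seen = {}   # device_id -> timestamp when it was last counted
    count = 0
    for ts, device in sorted(detections):
        if device not in seen or ts - seen[device] >= window_s:
            seen[device] = ts
            count += 1
    return count

pings = [(0, "aa"), (30, "aa"), (60, "bb"), (400, "aa"), (410, "cc")]
print(count_people(pings))  # "aa" counts at t=0 and t=400, plus "bb" and "cc" -> 4
```

Real systems add hashing of device identifiers for privacy and calibration against ground-truth counts, but the dedup-within-a-window idea is the core of it.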
Data centers are where the Internet lives. But the geography of the Internet will soon be expanding, creating challenges and opportunities for the data center industry. Silicon Valley hopes to bring wireless connectivity to the developing world, using everything from balloons to drones to satellites.
These wireless initiatives have major implications for where data centers are located and how they’re built and powered. They’re backed by some of the deepest pockets around, including Google, Facebook, Elon Musk, Bill Gates and Richard Branson. While the timetable for deployment of these systems isn’t yet clear, the support of these tech titans brings the horizon closer than we may imagine.
As new technology brings wireless access to untapped markets, Internet infrastructure will extend to new places, supported by facilities that may look very different than the data centers in existing technology hubs. The benefits could be enormous for both society and business.
Consider the changes the Internet has brought to America in just 20 years. In 1995, the Internet meant dial-up access to read static web pages and post messages on AOL or bulletin boards. In 2015, it means always-on broadband that delivers HD video and e-commerce to smartphones and tablets. A similar evolution lies ahead for many emerging markets.
How will the data center sector sort out this opportunity? At Data Center Frontier, we’re focused on the future of data centers and cloud computing (if you are as well, you’ll want to sign up for our weekly newsletter). We start with an overview of the major players and technologies. Here’s our take on wiring the unwired, and why it matters to you.
There are about 3.2 billion people using the Internet, but that’s just 38 percent of the global population, leaving an unwired population of more than 4 billion. This is primarily a problem in emerging markets, but the FCC estimates that 55 million Americans – about 17 percent of the population – lack access to advanced broadband services.
But the largest opportunities are in underserved markets like Africa, rural India and Indonesia, where local fiber is scarce and Internet access is focused on mobile.
“It is clear that the mobile Internet will play a key role in bringing the next billion users online,” writes the Internet Society. “Mobile Internet has already leap-frogged fixed access in many countries because of limitations in the coverage of the fixed network, and the availability of mobile Internet access significantly outpaces adoption today.”
Facebook, Google and Microsoft are all investing in efforts to boost access in under-served markets. At Facebook, the Internet.org initiative is supported by the company’s connectivity lab, which is working to deliver wireless Internet via high-altitude solar-powered planes and laser networking technology.
“There are some really neat ways to leverage the science and the physics that we’ve figured out inside the datacenter to potentially help us get people connected to the internet, where we can reduce the cost or increase the capacity by an order of magnitude or more,” says Jay Parikh, vice president of engineering at Facebook.
“The Connectivity Lab team is very focused on the technical challenges of reaching those people who are typically in the more rural, unconnected parts of the world,” he adds. “I think that we need to get them access. … My hope is that we are able to deliver a very rich experience to them, including videos, photos and — some day — virtual reality and all of that stuff. But it’s a multi-, multi-, multi-year challenge, and I don’t see any end in sight right now.”
Balloons: Project Loon
Google describes its Project Loon initiative as “balloon-powered Internet for everyone.” The name reflects the somewhat crazy idea at the heart of the project – using high-altitude balloons 20 miles up in the stratosphere to create an aerial wireless network that can bring high-speed Internet to rural and remote areas.
The balloons are about 50 feet high, and hoist a box that contains circuit boards to control the system, radio antennas to communicate with other balloons and ground stations, and lithium ion batteries. The Loon balloons are equipped with solar panels, which produce about 100 watts of power in full sun, enough to keep Loon’s electronics running while also charging the battery for use at night.
The balloons travel at the edge of space, where Google used software algorithms to guide them through layers of shifting winds. By moving with the wind, the balloons can be arranged to form one large communications network. Earlier this year Google said it has developed a way to pass high-frequency Internet signals from balloon to balloon in midair, creating a mesh network that can connect a large area using a relatively small number of ground stations.
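The balloon-to-balloon mesh idea can be illustrated with a simple reachability check: any balloon that can relay traffic, hop by hop, to a ground station is part of the connected network. The topology below is invented for illustration; Loon's actual wind-steering and routing were far more sophisticated.

```python
from collections import deque

def connected_nodes(ground_stations, links):
    """Breadth-first search over radio links: every node returned can
    relay traffic to at least one ground station."""
    neighbors = {}
    for a, b in links:
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)
    reached = set(ground_stations)
    queue = deque(ground_stations)
    while queue:
        node = queue.popleft()
        for nxt in neighbors.get(node, ()):
            if nxt not in reached:
                reached.add(nxt)
                queue.append(nxt)
    return reached

# Hypothetical topology: one ground station serving a chain of balloons.
links = [("ground-1", "balloon-a"), ("balloon-a", "balloon-b"),
         ("balloon-b", "balloon-c"), ("balloon-c", "balloon-d")]
print(sorted(connected_nodes({"ground-1"}, links)))
```

One station keeps all four balloons online, which is the economic point of the mesh; it also shows the fragility, since losing the single ground link would strand the whole chain, so real deployments overlap coverage.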
Project Loon may get its first real-world test-drive in the island nation of Sri Lanka in the Indian Ocean. Google has agreed to begin deploying Project Loon balloons in March 2016, providing supplemental high-speed connectivity to Internet providers, allowing them to reduce their operating costs.
Solar-Powered Drones: Project Aquila
Facebook’s ambitious Internet.org effort seeks to build partnerships with wireless providers while developing new technology to deliver wireless Internet access to remote areas. As part of this effort, Facebook has been building prototypes for a solar-powered drone, which can deliver connectivity while soaring high in the stratosphere.
The Aquila planes resemble a giant wing, as wide as a 737 but made of lightweight carbon fiber. When deployed, it will be able to circle a remote region for up to 90 days, beaming connectivity down to people from an altitude of 60,000 to 90,000 feet.
Meanwhile, Facebook’s communications team has tested a laser that can deliver data at tens of gigabits per second with precision across 10 miles. Parikh says the technology is “a significant performance breakthrough.”
“We are now starting to test these lasers in real-world conditions,” he reports. “When finished, our laser communications system can be used to connect our aircraft with each other and with the ground, making it possible to create a stratospheric network that can extend to even the remotest regions of the world.”
Facebook notes that its intent is not to operate these networks, but to create viable solutions and partner with mobile operators.
Spectrum White Spaces: Microsoft
Microsoft also has an ambitious initiative to connect unwired areas, but has taken a different tack, focusing on the use of unused spectrum to create powerful low-cost networks. The company is pioneering the concept in Kenya, where Microsoft has teamed with Mawingu Networks, Jamii Telecom Limited and the Kenyan government to deliver solar-powered Internet access and device charging to rural Kenya, where 80 percent of residents have no access to the Internet.
The initiative employs unused TV channels (also known as “white space”) to deliver broadband access, services, and applications. Networks and devices using TV white spaces work in much the same way as conventional Wi-Fi, but because TV signals travel over longer distances and can penetrate walls and other obstacles, they require fewer access points to cover the same area.
“A TV white space base station – even manufactured at small volumes – is only one-tenth the cost of an LTE base station,” writes Paul Garnett, Director of Microsoft’s Technology Policy Group, in a blog post. “TV white space transmitters can operate at low power while providing multi-kilometer point-to-multipoint connectivity – meaning it can be powered purely with renewable energy like the sun or the wind. And these technologies can support both high-bandwidth and low-latency applications, such as HD video streaming and Skype video conferencing.”
Access Satellites: Where Billionaires Duel
While Facebook and Google have brought their technology to the edge of space, there’s plenty of action a few miles higher. There are multiple companies mounting efforts to deliver wireless from space using satellites.
“Satellite technology is now advancing towards a critical point which could make it the predominant ‘leapfrog’ access mechanism for the billions around the world still not online,” writes Patrick Murphy, a technology investor with Universal Music Group. “Building communications infrastructure on the ground in a traditional manner is expensive and time-consuming. Avoiding wires and red tape by beaming straight from the clouds is the quickest way to get to market.”
But doing business in space is expensive, even by data center standards. Google and Facebook both considered satellite wireless initiatives, but were deterred by cost and have reportedly shelved those plans. But a number of deep-pocketed individuals and enterprises remain. Most of these projects focus on low-earth orbit (LEO) satellites rather than the geostationary orbits used by many communications satellites. LEO satellites have smaller wireless footprints, but can be deployed in networks to provide broader coverage.
Here’s a look at the leading players:
- SpaceX: Billionaire Elon Musk’s space exploration company has announced plans to deploy 4,000 satellites in low-earth orbit. “We’re really talking about something which is, in the long term, like rebuilding the Internet in space,” Musk said of the initiative. As a pioneer in low-cost launch vehicles, SpaceX is perhaps the most intriguing play, and has attracted $1 billion in backing from Google and Fidelity Investments. In an FCC filing, SpaceX describes a plan to launch six to eight satellites next year to begin testing for “a large constellation of small satellites for low-latency, worldwide, high-capacity Internet service in the near future.”
- OneWeb: OneWeb was founded by satellite pioneer Greg Wyler, and recently raised $500 million in funding from a consortium including Qualcomm, Airbus, Coca-Cola, several satellite makers and Branson’s Virgin Group, which will provide the launch vehicles. “Our vision is to make the Internet affordable for everyone, connecting remote areas to rest of the world and helping to raise living standards and prosperity in some of the poorest regions today,” said Branson. “We believe that OneWeb, together with Virgin Galactic’s LauncherOne satellite launch system, has the capability to make this a reality.” OneWeb plans a network of 648 satellites in low-earth orbit about 500 to 600 miles up. OneWeb is reportedly in the process of raising an additional $2.5 billion.
- O3b Networks: The incumbent in the space wireless world is O3b, which uses medium-earth orbit (MEO) satellites about 5,000 miles above the earth. The company has deployed an initial group of satellites that can provide coverage to 180 countries, and has 35 customers, including networks in many island nations such as the Cook Islands, American Samoa, Micronesia and Papua New Guinea. O3b’s investors include Google, Allen & Company, Northbridge Venture Partners and HSBC.
There’s also active investment in satellite antennas and ground stations. Bill Gates is among the backers of Kymeta, which focuses on improved antenna technology to improve satellite access, while Endaga is targeting low-cost facilities that can work in Earth’s remotest areas.
The Road Ahead
So what does this mean for data center operators? It’s early to say, but some industry observers see opportunities in the satellite wireless projects.
“For data center operators, there’s a short-term and longer term play,” writes Doug Mohney at Green Data Center News. “The short-term play is working with satellite operators to provide cloud services for all the information they want to store and analyze. Longer-term, properly engineered MEO/LEO-based constellations will offer the ability for northern latitude green data centers to more directly reach customers around the equator and in areas not currently served by fiber.
“A smart move for satellite operators would be to establish relationships with data centers, building software in the cloud, spinning it up when needed, storing data in multiple locations for redundancy and faster local access,” Mohney adds. “The industry recognizes satellites have generated a lot of raw data, but processing through it all has been more of an afterthought in the past.”
In some cases, improved wireless access may pave the way for traditional fiber access, enabling a business case that will support investment. But in many other scenarios, wireless-first Internet will require innovation to enable low-cost infrastructure.
Lean construction techniques, including modular designs and pre-fabricated enclosures, are likely to play a large role in these new markets.
An example of this can be seen in the success of Flexenclosure, which specializes in pre-fab data center infrastructure that can be deployed on rooftops and parking garages and powered by solar arrays. The company’s systems have been deployed by telecom operators in the Ivory Coast, Sierra Leone, Tanzania, Nigeria, Myanmar and Mexico.
We’ll be tracking this trend at Data Center Frontier. If you’re developing solutions for the wireless-first market, get in touch with us. | <urn:uuid:feac9a11-ac91-4795-99a1-a408a45fe178> | CC-MAIN-2024-38 | https://www.datacenterfrontier.com/cloud/article/11431507/connecting-the-unwired-world-with-balloons-satellites-lasers-and-drones | 2024-09-14T23:43:37Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651601.85/warc/CC-MAIN-20240914225323-20240915015323-00785.warc.gz | en | 0.93514 | 2,773 | 2.703125 | 3 |
Cybersecurity threats are ever-present, and as your business grows, you need to take steps to protect your systems.
Your endpoints, or the devices you use to access your network, are some of the biggest sources of cybersecurity vulnerabilities. However, monitoring a large network of endpoints can be very challenging.
This is where endpoint vulnerability management comes in. A vulnerability management strategy can help you find these small weaknesses before they spiral into larger concerns.
In cybersecurity, an endpoint is a device used to access a network. For example, desktop computers, servers, laptops, and smartphones are all endpoints. An endpoint vulnerability is a weakness in these devices that a bad actor could use to access your systems, such as software flaws or misconfigurations.
Endpoint vulnerability management is the process of monitoring systems to find these vulnerabilities and taking steps to fix them. It also involves documenting the vulnerabilities and prioritizing which endpoints to address first.
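The process just described — record each finding, then decide what to fix first — can be sketched in a few lines. The endpoints, issues, and CVSS-style severity scores below are hypothetical, not output from any particular scanner.

```python
from dataclasses import dataclass, field

@dataclass
class Vulnerability:
    endpoint: str    # device where the weakness was found
    issue: str       # short description of the weakness
    severity: float  # 0.0 (low) to 10.0 (critical), a CVSS-style score

@dataclass
class VulnerabilityRegister:
    findings: list = field(default_factory=list)

    def document(self, vuln):
        """Record a finding so nothing is lost between scans."""
        self.findings.append(vuln)

    def prioritized(self):
        """Most severe weaknesses first, so remediation starts there."""
        return sorted(self.findings, key=lambda v: v.severity, reverse=True)

register = VulnerabilityRegister()
register.document(Vulnerability("laptop-017", "outdated browser", 6.1))
register.document(Vulnerability("db-server-02", "default admin password", 9.8))
register.document(Vulnerability("router-01", "unchanged factory config", 7.4))

for v in register.prioritized():
    print(f"{v.severity:>4} {v.endpoint}: {v.issue}")
```

A production tool would also track remediation status and rescan dates, but the document-then-prioritize loop is the heart of the strategy.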
There are so many potential endpoint vulnerabilities to be aware of, and it’s possible for new vulnerabilities to develop as your technology stack grows. Here are some of the most common endpoint vulnerabilities to watch out for.
Software developers regularly release updates and patches for their programs to correct potential security threats they have identified. If you’re not keeping your software up-to-date, it could leave your entire system vulnerable to data breaches and other security issues.
Using outdated software is more common than you might think, as it’s very easy to let your software updates lapse. In fact, one study found that 95% of websites have at least one vulnerability caused by outdated software.
Scheduling regular software updates will help prevent you from falling behind with updates and patches. This also prevents software updates from disrupting your normal operations.
It’s possible for hardware to be compromised by malware, viruses, and other security issues for an extended period of time before someone realizes it. When your hardware is compromised, it can significantly disrupt your operations and even have serious financial consequences as well.
Regular vulnerability assessments identify these hardware compromises before they spiral out of control. This way, you can disconnect the compromised piece of hardware from your systems and take action to re-secure your data.
Misconfigurations happen when your team fails to configure your devices, applications, and networks for optimal security. For example, this might mean leaving the default settings on a router or server unchanged when they need to be customized for your operations.
Misconfigurations aren’t usually malicious, but they can be dangerous if they aren’t identified and corrected in a timely manner.
Authentication management is key to keeping unauthorized users out of your systems. Effective authentication management systems use strong passwords as well as two-factor authentication practices.
Unfortunately, many companies allow their team members to use weak passwords that are easy to guess and don’t have two-factor authentication strategies implemented. Even one weak password can leave your entire system vulnerable.
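A minimal screen for weak passwords might look like the following. This is only a sketch: real authentication systems should also check breached-password lists and enforce two-factor authentication, and the 12-character minimum here is an assumption, not a universal standard.

```python
import re

def password_is_strong(password, min_length=12):
    """Minimal policy: minimum length plus mixed character classes."""
    if len(password) < min_length:
        return False
    required = [
        r"[a-z]",          # lowercase letter
        r"[A-Z]",          # uppercase letter
        r"[0-9]",          # digit
        r"[^a-zA-Z0-9]",   # symbol
    ]
    return all(re.search(pattern, password) for pattern in required)

print(password_is_strong("password123"))        # too short, no uppercase or symbol
print(password_is_strong("T7!rainbow-kettle"))  # passes every check
```

Even a crude gate like this blocks the "one weak password" failure mode the paragraph above describes; pairing it with two-factor authentication closes most of the remaining gap.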
If your team works in an in-person office, you will need to be mindful of possible physical security breaches. This is when an unauthorized person gains access to your space and accesses your systems, often by tailgating a member of your team or by fabricating security credentials.
This is also a possible vulnerability for teams working remotely. For example, a remote worker could take their company-issued laptop to a coffee shop, only to have someone steal it or use it when they’re not looking.
A managed service provider, or MSP, is an organization that provides third-party IT and cybersecurity services on an ongoing basis. An MSP can supplement your in-house IT team or even function as a full IT team on its own.
Regular vulnerability scanning and remediation can be very time-consuming, and many small businesses don’t have the resources to do it effectively.
An MSP can take this important task off your plate. They will develop a risk-based vulnerability management process to keep your systems safe — here’s how.
An MSP can conduct 24/7 monitoring on your systems. This makes it easy to identify potential vulnerabilities in real-time so you can address them right away. This also gives your in-house IT team more time to focus on tackling other challenges.
Proactive monitoring ensures that you don’t miss issues that happen when you’re out-of-office. An efficient response is critical for addressing many common endpoint vulnerabilities.
Waiting even a few hours to address a problem could lead to more damage to your operating systems. Your MSP will conduct proactive monitoring to ensure that you’re not letting security vulnerabilities spiral out of control.
Monitoring also involves conducting more detailed risk assessments on a regular basis. These assessments look closely at every endpoint to find vulnerabilities that you may have overlooked in your day-to-day workflows.
Your MSP will then use the results of your risk assessment to provide security recommendations. These mitigation efforts could include installing firewalls, antivirus software, cloud security systems, and other endpoint security tools.
Installing patches, including OS, security, and application updates as soon as they are available is key to preventing common vulnerabilities and exposures. Developers release patches for both software and hardware to address newly-identified cybersecurity threats and to improve functionality.
An MSP will help you implement patch management as part of your broader vulnerability management program. They will keep track of all new updates and patches that apply to your systems and schedule a time to implement them without disrupting your systems.
If there are multiple new patches to install at a given time, your MSP will help you prioritize which ones to implement first based on the cyber threats your business faces.
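Prioritizing a backlog of pending patches can be as simple as sorting by exposure and severity. The patch names and scores below are invented for illustration; an MSP would feed in real scanner output.

```python
def prioritize_patches(pending):
    """Order pending patches by risk: internet-exposed endpoints first,
    then by severity score within each group."""
    return sorted(pending, key=lambda p: (p[2], p[1]), reverse=True)

# Hypothetical backlog: (patch, severity 0-10, internet-exposed?)
pending = [
    ("office-printer firmware", 4.0, False),
    ("web-server TLS library", 8.5, True),
    ("laptop OS update", 7.0, False),
    ("VPN gateway patch", 6.8, True),
]

for name, severity, exposed in prioritize_patches(pending):
    flag = "EXPOSED" if exposed else "internal"
    print(f"{flag:<9}{severity:>4}  {name}")
```

The ranking puts the internet-facing web server first even though other patches exist, matching the threat-based prioritization described above.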
Ideally, endpoint vulnerability management will prevent cybersecurity incidents from happening. However, even with an experienced security team and trusted vulnerability management solutions, data breaches, cyber attacks, and other security breaches are still possible.
Your MSP will help you develop a response strategy so that you’re prepared for any challenges that go your way. The response strategy could include restoring compromised data, communicating with partners and customers, and shutting off components of your system to contain the attack.
By working with your MSP to build a professional response strategy, you can prevent endpoint vulnerabilities from severely compromising your operations.
Many employees are unaware of what an endpoint vulnerability is or why they’re so dangerous. An MSP can provide training sessions for your employees to help them better understand security threats and best practices.
This type of training is important for everyone, regardless of whether they’re working in a technical role or not. This is because so many jobs today are done at least partially online.
During employee cybersecurity training, your MSP can help your entire team master important security best practices. This includes learning how to create a strong password and how to identify phishing and ransomware attacks on their devices.
Research has found that 32.4% of untrained users will fail a phishing test, which is why regular cybersecurity training is so important. | <urn:uuid:c9aa875d-6767-42fc-b00a-dbb048115382> | CC-MAIN-2024-38 | https://parachute.cloud/how-can-endpoint-vulnerability-management-protect-your-business/ | 2024-09-16T05:51:05Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651676.3/warc/CC-MAIN-20240916044225-20240916074225-00685.warc.gz | en | 0.950925 | 1,440 | 2.84375 | 3 |
Researchers say they may have found a way to use glowing bacteria and lasers to detect deadly explosives.
Researchers at the Hebrew University of Jerusalem say they may have found a way to remotely detect unexploded landmines by using a combination of lasers and molecularly engineered bacteria that glow in proximity to the explosives.
Buried landmines, which injure or kill 15,000-20,000 people each year, emit tiny quantities of explosive vapors which accumulate in the soil above them.
This observation prompted the Hebrew University researchers, led by Prof. Shimshon Belkin of the Alexander Silberman Institute of Life Sciences, to use bacteria that emit a fluorescent signal when they come into contact with these vapors to detect the mines.
About half a million people around the world suffer from mine-inflicted injuries and more than 100 million such devices are still buried in over 70 countries.
The major technical challenge in clearing minefields is detecting the mines.
The technologies used today are not much different from those used in World War II, requiring detection teams who risk life and limb by physically entering the minefields.
Accidents involving landmines occur in Israel once every few years and landmines laid in the 1950s and 1960s contaminate the Arava Valley, areas along the Jordan Valley and the Golan Heights, which Israel captured from Syria during the 1967 Six Day War.
The landmines have largely been demarcated by a network of fences and warning signs.
The Hebrew University researchers said that the signals from the bacteria can be recorded and quantified from remote locations; they believe their field test marks the first time landmines have been detected remotely.
“Our field data show that engineered biosensors may be useful in a landmine detection system,” said Belkin in a statement.
But to make this possible, more work needs to be done, he said, including enhancing the sensitivity and stability of the bacteria, improving scanning speeds to cover large areas, and making the scanning apparatus more compact so that it can be used aboard a light unmanned aircraft or drone.
The researchers presented their new system in the Nature Biotechnology journal.
The solid waste management industry is experiencing significant transformation driven by increasing urbanization, population growth, and environmental concerns. There is a growing emphasis on sustainable practices, with advancements in recycling technologies, waste-to-energy processes, and composting gaining traction. Governments and organizations are implementing stricter regulations and adopting circular economy principles to minimize waste and enhance resource recovery. Innovations in smart waste management systems, including IoT-enabled sensors and data analytics, are improving operational efficiency and waste tracking. Additionally, there is a rising focus on reducing single-use plastics and promoting biodegradable materials. As these trends evolve, the market is expected to expand with more investments in green technologies and infrastructure development.
When students need support from their teachers, it isn’t always obvious, especially when it happens over the internet. This presents a problem for teachers, because they are expected to look after the wellbeing, both physical and mental, of the students in their care under the doctrine of in loco parentis.
What is in loco parentis?
According to LawNow, in loco parentis is a Latin legal term that translates to “in place of a parent”, and it “grants individuals caring for children the same rights and responsibilities as a parent”. The definition of this doctrine has continued to evolve and varies depending on the country. In countries like Canada, individuals such as camp counselors, doctors, and teachers are expected to act in loco parentis.
What is in loco parentis in schools?
“The directors of education systems that you send your children to are under what’s called loco parentis – it’s part of their directorship – it means duty of care,” says Perry Roach, Founder and Chief Executive Officer of Netsweeper. “So, while your children are in school, like recess, you have a monitor – they need to monitor your children while they’re on the internet, so they don’t get into trouble.”
Let’s take the example of cyberbullying to help illustrate this. Unlike physical bullying (like stealing belongings) or verbal bullying (name calling and teasing), cyberbullying can be harder to escape for the victim, as negative content can follow a victim home, and can be left as a permanent reminder to the victim online, unless the service provider is willing to take the content down. There is a growing realization that there is much left to be done regarding the protection of students’ mental health while at school.
In addition to the difficulty of identifying students in need, there is the scope of the problem — according to the National Institute of Mental Health, 1 in 6 US youth experience a mental health disorder each year.
Why is in loco parentis important?
In loco parentis is important because it means that teachers are responsible for both the mental and physical needs of their students, which can be quite challenging to accomplish, especially with the emerging problems the internet presents. When schools don’t sufficiently monitor students’ internet activity and students can access harmful content on topics like self-harm and suicide, it can contribute to tragic outcomes.
Students are spending more time online than ever before, and it is online that the warning signs of danger can be found. Educators can now support their students using technology like onGuard that looks for warning signs of bullying, self-harm, suicide, and other harmful subjects they may be engaging with.
By Neal Bellamy, IT Director at Kenton Brothers
In the physical security world, we tend to think about security as locks, keys, access control, and maybe cameras. If you are an IT person, you might think security is more about firewalls, usernames, passwords, and encryption. Physical security people and IT security people really have the same goal: Protect the Business. Physical security teams and IT Security teams generally operate in different bubbles, but I think it’s time for another convergence.
Cybercrime has a cost, and when it happens, the cost is often high. In 2021, ransomware alone cost an estimated $620+ million. While ransomware is probably one of most everyone’s top concerns, there are other ways to make money from your data. Exporting your data or your customer’s data, or denying access to your data, could also have major impacts on your business.
The capabilities of computers continue to skyrocket.
Anyone in the computer field knows “Moore’s Law”. While not actually a law of physics, it was more of an observed phenomenon: the number of transistors on a chip, and with it effective computing power, roughly doubles every two years. This idea is bigger than I’ve given it credit for. You can see the phenomenon in more than just microchips. Since the beginning of programming, we’ve built software tools (by writing code) to make new coding efforts faster. Since the processing power of computers doubles every two years, software has access to more and more resources. Software’s capabilities continue to grow at an astounding rate. Machine learning, Deep learning, and AI are all outcomes of this growth. It’s terrifying and inspiring at the same time.
Here’s why software matters.
Once upon a time, a computer programmer made a program that could do something malicious. It might take that programmer months to code and tweak, then it would be released to the world. Sometimes it would make a big splash. Even if it was a major threat, anti-virus companies would find a way to identify it and stop it in a matter of days. Now, we have programs that make programs. Coding a virus can be done in minutes, but viruses are old hat.
What happens if we release an AI to attack a company? It could find the open ports, discover what software is behind those ports, look up vulnerabilities, and then try to exploit the vulnerabilities in seconds. Once an attacker is in, the game is over. It’s only going to get scarier from here.
What can you do?
First, take it seriously. As someone trying to protect your business, you should have several plans. A defense plan is important, but so are recovery plans. The basics still apply. Protect your perimeter, have at least two sets of backups, and use strong passwords (preferably with Multi-Factor Authentication). Anti-Virus alone is no longer good enough. You need to implement Endpoint detection and response (EDR / XDR / MDR) of some type.
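As one small, concrete piece of the strong-password advice above, Python's standard `secrets` module can generate cryptographically strong random passwords. This is only a sketch; the length and character set below are illustrative choices, not a prescription:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Build a password from a cryptographically secure random source."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a different 20-character password on every run
```

Unlike the `random` module, `secrets` is designed for security-sensitive uses, which is why it is the right tool here.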
How can Kenton Brothers Help?
While we can’t protect all of your networks (we have partners that do), we will do our part while we are on your network. We make sure that software and firmware are up to date when we install equipment, we use strong, randomly generated passwords for your security systems. This is standard in all of our implementations. If your IT company has further security, we will work side-by-side to complement it. Even though we are a physical security company, we understand and value IT security.
If you need help in the physical security world, we are always here to help. But if you need help in the cyber security world, I am also here to help. I have done a deep dive into the cyber world for the last six months and would be happy to share my knowledge. I’m certainly not an expert, but I can share my experiences trying to make KB more secure. I’ve also met some great people and companies along the way that would love to help you with your cyber security efforts. Just give us a call.
Tuesday May 22, 2018
What is The General Data Protection Regulation?
The General Data Protection Regulation (GDPR) is legislation passed by the European Union (EU) that will go into effect on May 25th, 2018. In the EU, citizens’ privacy data (name, IP address, date of birth, religion, health, etc.) has to be managed in a prescribed way. Any information that can be used to trace a person is considered privacy data. Under GDPR, the EU looks at privacy in a broader context than before, and the penalties for non-compliance will be higher. Key tenets of GDPR include a person’s right to have their information forgotten, among others.
How Does this Regulation Apply to US Companies?
Many companies in the US have business ties with EU citizens and may store or collect personal data from EU citizens. GDPR applies to any organization that processes EU citizens’ personal data, regardless of where that organization is based.
What are the Penalties if a Company does not abide by the Rules of the Regulation?
Article 58 of GDPR provides the supervisory authority with the power to impose fines. Various factors are considered when imposing fines, which can reach:
- The greater of €10 million or 2% of global annual turnover
- The greater of €20 million or 4% of global annual turnover
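The "greater of" rules above reduce to a simple calculation. Here is a quick sketch (the turnover figure in the example is invented):

```python
def max_gdpr_fine(annual_turnover_eur: float, tier: int = 2) -> float:
    """Upper bound of a GDPR fine for the given penalty tier.

    Tier 1: the greater of 10 million euros or 2% of global annual turnover.
    Tier 2: the greater of 20 million euros or 4% of global annual turnover.
    """
    if tier == 1:
        return max(10_000_000, 0.02 * annual_turnover_eur)
    return max(20_000_000, 0.04 * annual_turnover_eur)

# A hypothetical company with 1 billion euros in global annual turnover:
print(max_gdpr_fine(1_000_000_000))  # 40000000.0 (4% exceeds the 20M floor)
```

For smaller companies the fixed floor dominates, which is why even modest businesses cannot treat GDPR penalties as proportional to their size.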
How can Arbour Group Assist?
Although Arbour Group is not a legal firm, we can assist companies with the implementation of all the pillars of the GDPR regulations. We can also provide guidance in remaining compliant with the GDPR regulations. Arbour Group can support companies with the identification of the various privacy data in organizations and systems, consequently establishing the right processes and remediating any gaps to keep a company compliant. Arbour can also work with a company’s third-party compliance processes to address gaps in these areas.
What is Data Science?
A simple definition of data science is that it’s the study of analyzing information and predicting outcomes. The predictions are mainly made using machine learning, but just one model can take months of data extraction, cleanup, coding, and deployment. Data science requires much larger reservoirs of data than a standard application using basic algorithms. You can’t use a few dozen stored records to analyze data accurately. You need millions of records to build and test a model.

The first step for a data scientist working with any organization is to gather and clean data. You’ve probably heard of “big data” and may even use the technology in your current applications. Big data is unstructured, but it’s perfect for data science. Unstructured data technologies grab as many records as possible and store them in a database such as MongoDB. This data can be anything, but just as an example consider a website and each of its pages. A crawler finds pages on a website and stores its text, images, and links in an unstructured record. You can scrape an entire site and get its data without worrying about structuring the data as you crawl, as long as you use a database that supports unstructured formats.

The next step is to clean the data, which is probably the most tedious part of data science. Most scientists clean the data and load it into a CSV file, which is a comma-delimited list of values. These files are easy to import either into another database or into code, and any operating system supports them.

After collecting the data, it’s time to figure out its functions. The data scientist first analyzes the data and asks a question. For instance, maybe you want to know what products are more likely to attract customers. You could take data from your e-commerce store and use previous customer orders to determine which products are most popular and which ones could be popular during the holidays to improve your sales and focus marketing efforts.
Data science models could answer this question for you and make predictions using machine learning to contribute to improving your sales.
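As a minimal sketch of the extract-and-clean stage described above, the snippet below normalizes raw, inconsistent records (as they might come out of a document store) and writes them to a CSV file. The field names and values are invented for illustration:

```python
import csv

# Raw, inconsistent records as they might come out of a crawler or document store.
raw_records = [
    {"product": "  Widget ", "price": "19.99", "orders": "42"},
    {"product": "Gadget", "price": None, "orders": "17"},             # missing price
    {"product": "Widget", "price": "19.99", "orders": "notanumber"},  # bad value
]

def clean(records):
    """Drop unusable rows and normalize field types."""
    for rec in records:
        try:
            yield {
                "product": rec["product"].strip(),
                "price": float(rec["price"]),
                "orders": int(rec["orders"]),
            }
        except (TypeError, ValueError):
            continue  # skip rows that cannot be repaired automatically

with open("orders.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["product", "price", "orders"])
    writer.writeheader()
    writer.writerows(clean(raw_records))
```

Real cleanup pipelines are far larger, but they follow the same shape: validate, normalize, discard or repair, then export to a format the modeling code can consume.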
Building a Model
After the data scientist and the business determine the question to be answered, it’s time to build a model. A model is a unit of code that represents the “answer” to the question. The answer is usually represented in a graph to make it easier for the public to consume and understand the information. The visuals are typically part of a library imported into the project, but the data scientist must ensure that the analysis that transforms data into a graph is accurate.

One of two main programming languages is usually chosen to create the models. R is the language of math and statistics, so it is the likely choice if your scientists have a mathematics educational background. More people understand Python, which is suitable for other development projects and is more popular among data scientists. Colleges teach Python, and because of its wide use within programming circles, you might find it easier to implement with a smaller learning curve.

The data scientist creates the model with the question in mind. Using the e-commerce example, here’s how it works: the data scientist would review the data and set it up as rows and columns to import into Python code, which then calculates and displays it as a graph. The graph can be any number of plots or charts, and can even feed visualization tools such as Excel or PowerPoint. The visual output is used to present information to the business for them to sign off on the results. Once the analysis is shown to be accurate, the data scientist can move on to the next step, which is creating the model.

The foundation for the model is the logic code that takes the data stored in a CSV file and runs it through the data scientist’s algorithms. The algorithms could be open-source or custom made by the scientist. It’s not uncommon for a developer to also dive into the analytics to better understand what must be deployed. Although every model is different, you can just think of them as a module of code that represents the answer to a question.
The business asks the question, and the data scientist develops a solution in the form of a model.
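Using the e-commerce example, the heart of such a model, before any machine learning is layered on, can be as simple as counting which products appear most often in past orders. The order data below is invented:

```python
from collections import Counter

# Hypothetical past orders exported from an e-commerce store.
past_orders = [
    "widget", "gadget", "widget", "doohickey",
    "widget", "gadget", "widget",
]

def top_products(orders, n=2):
    """Return the n most frequently ordered products."""
    return [product for product, _ in Counter(orders).most_common(n)]

print(top_products(past_orders))  # ['widget', 'gadget']
```

A production model would add features such as seasonality (which products spike during the holidays), but the input and output stay the same: historical order data in, a ranked answer to the business question out.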
Integrating Data Science into Business Code
Building a new data team is costly, takes time, and there is a learning curve for your developers. The benefits far outweigh the disadvantages, and you can work with project managers and agencies to help get you started.
If you’ve thought of taking your business analytics to the next level, adding a data scientist to your current IT team is the way to go. Your developers will learn new skills, your business will make more money, and you can take advantage of the latest in code design and database storage technologies.
Virtualisation first hit compute in a big way in the late 1990s, fueling VMware’s growth. In a somewhat overlapping timeline, virtualised storage gave administrators the ability to chunk up disks in a similar way. But networking hung in there.
Almost every network on the planet today still has to balance collision domains, broadcast domains, subnet masking, and cabling in what can be a very labor-intensive set of design and maintenance tasks.
While advancements like vLANs helped ease that burden a bit, networking has been ripe for a new approach—and that comes in the form of application-defined networking.
What does that even mean? How did it get here? What happens to the livelihood of the network engineer? These questions will be explored, and more, below.
How did it get here? The OSI model
Published in its current form in 1984, the open systems interconnection model, more commonly known as the OSI Model, has dominated network thinking for more than 30 years.
A core concept of the OSI model is that each layer is largely isolated from the details of any other layer. While that has led to great independence — as, for example, an application developer doesn’t have to worry about whether or not there is copper or fibre optic cable being run at the physical layer — it has led to siloed workers that don’t necessarily appreciate the details of the work that goes into the other layers.
Traditionally, an application developer working at the top of the OSI model only cares about an IP address and a port number provided by the Network Layer, since that provides a specific place on the network where a client-server connection can be maintained.
But a whole lot of design, art and maintenance goes into setting up a set of routers and switches to make sure traffic doesn’t bottleneck between any two IP addresses. This means there’s a network engineer who spends a lot of time managing tickets that represent requests for changes to an existing network design.
Application-defined networking: the network ticket killer
When you look at a modern application and what its networking needs are, it typically boils down to a chain of VMs or containers that need to talk to each other. Consider a simple example: a three-tier web application with a virtualised load balancer, a web server and a database server.
From a network engineer’s perspective, the key here is to enforce a set of rules so that the load balancer can only talk to the web server and only over the ports needed to facilitate communication between the two. That makes the communication more secure and less open to attacks. Communication between the web server and the database server should be similarly restricted.
While there are plenty of great network administration tools available to help set this up, what happens when more web servers need to be added to the pool of existing ones at times of high load?
Generating tickets and configuring the network requirements there can be challenging and hold up deployments. Using concepts like vLANs, which extend the same 30-year-old model we’ve always had with TCP/IP networks, is better than manually stringing new cable, but they don’t take a fresh approach to what has emerged as a very common problem.
Application-defined networking seeks to simplify the problem by creating bubbles of connectivity around the different tiers of the application, and only allowing those bubbles to be pierced in certain situations.
Now if more web servers are needed, they are simply placed in the existing bubble where rules governing what traffic can pierce that bubble and in what direction are reused. From an application perspective, then, the network administrator simply sets up the bubbles or defines the parameters under which a developer can create or reuse one.
This eliminates the need to have a network engineer involved in every application deployment or change to one, leading to faster deployments and more of them, so that a development team can get more chances to innovate.
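A toy Python model (not any vendor's actual API) makes the "bubble" idea concrete: each host belongs to a named group, and traffic is permitted only where a rule explicitly allows it to pierce the boundary between two groups:

```python
# Toy model of application-defined "bubbles": rules are
# (source_group, destination_group, port) triples that pierce a boundary.
rules = {
    ("load-balancer", "web", 443),
    ("web", "database", 5432),
}

# Each host simply joins an existing bubble.
membership = {
    "lb-1": "load-balancer",
    "web-1": "web",
    "web-2": "web",        # a newly added web server reuses the web bubble
    "db-1": "database",
}

def allowed(src_host: str, dst_host: str, port: int) -> bool:
    """Permit traffic only if a rule explicitly pierces the bubble boundary."""
    return (membership[src_host], membership[dst_host], port) in rules

print(allowed("lb-1", "web-2", 443))   # True: the existing rule covers new servers
print(allowed("lb-1", "db-1", 5432))   # False: the load balancer cannot reach the database
```

The key property is that scaling out means adding a host to `membership`, not writing new rules or filing a network ticket; the bubble's rules are reused automatically.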
The livelihood of the network engineer
A common criticism of the application-defined networking approach is that it threatens the livelihood of network engineers. It doesn’t. Instead, it lets the network engineer automate interactions at higher levels of the OSI model so that they no longer have to deal with the whims of an application developer.
Instead, they can provide those constituents with a set of constructs they can work within in a more automated way. That frees up the network engineer to spend more time refining what they are best at anyway: layers 1-3 of the OSI model.
There is typically no love lost between application developers and network engineers. Application-Defined Networking eases the burden of that relationship on both parties, letting each focus on what they do best.
At the end of the day, an application developer only wants to be able to make a client connection from one machine to a server running on another machine.
No network engineer enjoys fielding tickets from application developers—they would much rather spend time optimising traffic throughout a network with more creative placement of switches and routers, setting up routing tables, and other details that are vital to the overall efficiency of a network.
Sourced by Pete Johnson, technical solutions architect for cloud in the Global Partner Organisation at Cisco Systems
Aspiring or active IT professionals who work as programmers can benefit from an improved understanding of what current and prospective employers want from people who are interested in this type of work. Programmers need complete and thorough knowledge of at least one or more major programming and scripting languages. An understanding of programming principles, along with knowledge of the ins and outs of software production, is also a must. In this article, we'll look at the relevant certifications, technical skills and knowledge, and subject matter expertise that are of greatest interest to employers today.
The 'Must Have' List: Essential Qualifications
Programmers have come a long way since the stereotype of "geeks" with pocket protectors and slide rules first appeared on the technology scene. You should know that we use the term "geek" with the utmost respect, because rumor has it that "the geeks shall inherit the earth"! That may be a paraphrase from another quote, but one thing is certain: The code that programmers write not only has changed the face of technology, but the way we communicate, conduct business, access information, manage our healthcare, and much more. Computer programmers (also known as software developers) are the cornerstone of the software technology revolution. Programmers provide the foundation on which all computing technology is built. In essence, without the skills of the programmer, the software industry couldn't exist; we'd still be using manual typewriters and carbon paper to make copies, and a simple transfer of documents might take weeks instead of seconds.
Obviously, programmers (particularly good programmers) are some real must-have IT professionals. So what does an employer look for when selecting a programmer? In a perfect world, an ideal programmer has the appropriate knowledge and skills in all of the following areas to meet the prospective employer's specifications:
- Programming languages
- Scripting languages
- Programming principles
- Production principles
- Web applications
- Education and experience
- Personal skills
The following sections consider each of these areas in detail, discussing the specific languages and principles that are likely to be most desirable to today's businesses.
If you only have limited resources, you want to use them well. Throwing your resources at whatever problem you can see is a decent approach, but gaining a comprehensive view of your problems and addressing the one that presents the greatest threat is probably a better use of your time.
That is the simple rationale behind the NIST Risk Management Framework (RMF). By following these guidelines for reducing risk in your enterprise, your organization will be led to identify its assets, ascertain which elements are most likely to bring them down, and come up with a plan to stop them based on the timebombs that are at risk of blowing up first.
What Is the NIST RMF?
The NIST Risk Management Framework, or RMF, is a voluntary 7-step process used to manage information security and privacy risks. It was jointly developed by NIST and the U.S. Department of Defense (DoD) for federal agencies, but its comprehensive foundation made it popular for the private sector. It links to other NIST guidelines which can help organizations meet the risk management program guidelines of the Federal Information Security Modernization Act (FISMA).
By following the NIST RMF, organizations can successfully implement their own risk management programs, maintain compliance, and address the weaknesses that present the greatest danger to their enterprise.
But before they do, companies need to understand the basics of risk management. While somewhat self-explanatory, they are:
Frame risk
What does it mean to manage risk? It means to identify how much risk your organization can tolerate (and still survive), the constraints you have to work with (budget, lack of skills, lack of policy), and what you’re going to do within your parameters to still cut down risk as much as possible.
Assess risk
Use vulnerability management and penetration testing to assess and prioritize the areas you’re weakest in, which CVEs would make material impact if exploited, which web applications are at risk, and which holes are hemorrhaging the fastest. Red teaming can also help to ferret out weaknesses the other two can’t catch, in process and personnel.
Respond to risk
Decide how you’re going to minimize those risks. What are you going to change? When will this happen, and where are you going to start? Suggestion: start small, address one area completely, and move on.
Monitor risk
Every time a new person, technology, or system appears in your enterprise, there are new risks to assess and manage. Stay ahead of new developments (and dangers) by repeating steps 2-3 regularly and monitoring for risk.
The Seven Steps of the RMF Process
NIST’s seven steps for implementing a successful risk management program for privacy and information security are as follows:
1. Prepare
Identify your most important assets, processes, and systems. Assign roles and responsibilities and establish your strategy and risk tolerance.
2. Categorize
Determine how much impact, if any, there would be if one of those identified assets, processes, or systems were compromised. Which ones would have a material impact if confidentiality, integrity, or availability were compromised? Use a matrix to categorize by severity and likelihood.
3. Select
Select an initial set of controls to address and protect all outlined systems, assets, and processes above. These may change, but you must start somewhere.
4. Implement
Put these plans into action. Remember to document the changes and be aware that feedback might change them.
5. Assess
How well do they work? If well, keep them. If poorly, alter them. This process may require check-ins over months.
6. Authorize
Have your finalized plans authorized by an accountable decision-maker who understands the risk appetite of the organization. If they work, they will need to be disseminated throughout the organization and followed religiously. Your best-laid plans are moot without official support.
7. Monitor
Keep an eye on your creations. What worked today may show weakness when a new CRM is introduced tomorrow. Practice continuous vulnerability management and monitoring, and plan for the correct incident response once your monitoring picks something up.
These steps should be taken in order, and none can be skipped to achieve a comprehensive, flexible risk-based approach.
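The severity-and-likelihood matrix used in the categorization step can be sketched in a few lines of Python. The 1-5 scales and the example assets are purely illustrative:

```python
# Illustrative only: severity and likelihood each rated 1 (low) to 5 (high).
assets = {
    "customer database":     {"severity": 5, "likelihood": 3},
    "public marketing site": {"severity": 2, "likelihood": 4},
    "internal wiki":         {"severity": 1, "likelihood": 2},
}

def prioritize(assets: dict) -> list:
    """Rank assets by risk score (severity * likelihood), highest first."""
    return sorted(
        assets,
        key=lambda name: assets[name]["severity"] * assets[name]["likelihood"],
        reverse=True,
    )

print(prioritize(assets))
# ['customer database', 'public marketing site', 'internal wiki']
```

The scoring function itself matters less than the discipline: every asset gets a number, and the team always knows which weakness to address next.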
Misunderstand Risk at Your Own Risk
Having a robust risk management program in place does more for your organization than help it align with NIST and check a few boxes. This is one of the single most beneficial tools for understanding which assets in your organization matter most (i.e., are mission-critical to keeping you online and in the black), and which weaknesses you simply can’t tolerate.
Your SOC comes to work every day looking to do the most good for your organization. When alerts pop up or something doesn’t look right, they investigate and address them in a prioritized manner. They only have so much time in a day, and they want to use it well. Get the most out of that time by knowing that the issues they are addressing are the most important ones they could be fixing at that time. So much needs to be done in an enterprise to keep it safe; it would be a shame to waste a week (and a chunk of the budget) investing in a great solution to a low-level problem.
The NIST RMF makes sure your security team is aligned with management and on the same page when it comes to where time should be spent, what it should be spent doing, and which priorities and objectives will bring the most benefit to your security posture at any given time. With endless alerts and cyber staffing issues, using the NIST RMF to always know where your security efforts should be focused is one of the wisest ways to invest your limited resources and get the most out of your strategy.
Navigating the intricate intersection of health research and privacy rights is no simple task, as it encompasses a spectrum of ethical and legal concerns. Health data security is central to the discourse, with a dual imperative to uphold individuals’ confidentiality while fostering the greater good through medical progress.
The ethical quandaries involved are profound. On one hand, the aggregation and analysis of health data can lead to breakthroughs in treatment and understanding of diseases, benefiting society at large. On the other hand, patients entrust their personal health information to researchers with the expectation that their privacy will be fiercely protected.
Legislation plays a critical role in this arena, outlining stringent criteria for data handling and stipulating penalties for breaches. Researchers are tasked with designing studies that adhere to these rules while ensuring data integrity and security. Meanwhile, technological advances introduce both opportunities and vulnerabilities – sophisticated encryption techniques can shield sensitive information, but as cyber threats evolve, so must defenses.
In striking the right balance, transparency is key. Participants must be well-informed about how their data will be used, the measures in place to safeguard it, and their rights in the research process. Only with a well-structured framework that respects individual privacy and delineates clear ethical guidelines can the harmony between health research gains and privacy be maintained.
Understanding Health Information Privacy
The Essence of Privacy in Health Information
In antiquity, the very crux of healthcare was patient confidentiality, an ideal enshrined in the venerable Hippocratic Oath. Yet, as we’ve ventured into the digital age, the essence of what we consider “privacy” in the medical realm has morphed significantly. Our current milieu, one where electronic sharing is near-instantaneous, renders patient privacy a far more complex affair.
Control over the distribution of one’s medical information is now a central focus, given the shifting sands of technology’s impact. On one hand, the digital movement ushers in unparalleled prospects for medical research advancements, offering the promise of insights and breakthroughs powered by the analysis of extensive data pools. On the other, it poses risks that were once unimaginable, challenging our ability to safeguard sensitive personal data effectively.
As a result, today’s approach to healthcare privacy must be multifaceted, straddling the line between embracing the potential of technology and respecting the sanctity of individual confidentiality. It is a delicate balancing act, requiring astute attention to privacy’s evolving dynamics and an ongoing conversation about how best to protect the intimate details of one’s health in a world ever more open and connected.
Confidentiality and Security Distinctions
Privacy, while commonly lumped together with confidentiality and security, actually stands distinct from these concepts. Privacy in health data can be seen as the right of individuals to keep their personal health information concealed from others, while confidentiality refers to the responsibility of healthcare professionals to uphold the secrecy of the information entrusted to them. Security, on the other hand, involves the technical and administrative safeguards that protect health data from unauthorized access, breaches, or theft. These three interrelated concepts form a triad that is crucial in the context of protecting health information within research. By distinguishing between them, we can begin to understand the layered approach required to handle sensitive health data responsibly.
Ethical and Social Implications of Privacy
The Intrinsic Value of Privacy
Privacy carries a weight beyond its functional role; it resonates with core human values of dignity, autonomy, and integrity. Respecting an individual’s health information privacy is inherently linked to the ethical principle of nonmaleficence, the avoidance of harm. This respect forms the basis upon which trust between patients and healthcare providers or researchers is built. In a wider context, privacy fosters confidence in health systems and contributes positively to the social fabric by enabling individuals to control personal information, thereby promoting voluntary participation in health research. This fundamental ethical consideration underscores the importance of maintaining privacy in the research domain to ensure the well-being of individuals and, concomitantly, the collective progress of society.
Trust and Public Attitudes Toward Health Data Use
Public attitude toward health data privacy often reflects a dichotomy of trust and skepticism toward medical institutions and research practices. On one hand, there is recognition of the potential for health data to lead to breakthroughs in medicine. On the other, there is an enduring concern over the misuse of personal information, particularly with frequent reports of data breaches in the media. These concerns emphasize the need for transparency and consent mechanisms that respect the individual’s desire to control the use of their health information. Understanding these public sentiments is key for researchers and policymakers alike—to ensure that health research can continue without compromising the trust and willingness of individuals to participate in studies that ultimately aim to benefit us all.
Legal Protections for Health Information Privacy
Historical Overview and HIPAA’s Role
The protection of health information in the United States has seen significant evolution over the past few decades. Initially, privacy was a matter of professional ethics, exemplified by the Hippocratic Oath. As technology evolved and digitization became commonplace, the need for formal legislation became evident. The pivotal legislation in this arena is the Health Insurance Portability and Accountability Act (HIPAA) of 1996. HIPAA’s Privacy Rule was a response to growing concerns around the privacy of health information, creating national standards for its protection. The rule mandates controls on the use and dissemination of protected health information (PHI), although it has faced criticism for not fully addressing the intricacies of digital information flow. Nevertheless, its introduction marked a significant advancement in the establishment of privacy rights concerning health data.
The Variability of Protections Across Jurisdictions
Navigating health information privacy laws in the U.S. is akin to traversing a complex maze due to the disparate legal protections across states. Beyond the federal HIPAA standards, states like California have gone a step further with laws like the Confidentiality of Medical Information Act, providing even stronger safeguards for personal health information. In contrast, some states might offer less substantial protections.
This uneven landscape poses significant challenges for various stakeholders. Healthcare professionals, researchers, and patients must acquaint themselves with the intricacies of differing privacy regulations, which can affect everything from consent procedures to data breach notifications and the right to access personal health records. This divergence reflects the struggle in creating unified, adaptive policies in an era where technological change is rapid and expectations of privacy are diverse. Balancing state-specific health privacy laws while maintaining a consistent national standard remains an ongoing puzzle for policymakers, as they strive to protect patient privacy without stifling innovation or healthcare delivery.
Risks and Challenges in Health Data Security
Potential Harms from Data Breaches
The transition to digital health records has significantly enhanced the management and analysis of patient data. However, this shift has also laid bare a host of security concerns. Cybersecurity incidents in the health sector can lead to a wide spectrum of negative outcomes, which include not only direct financial repercussions but also damage to institutional reputations and the mental well-being of the individuals whose data has been compromised. The more severe consequences of security lapses might involve prejudicial treatment or the marginalization of those affected.
As health data breaches can have such severe and diverse consequences, the urgency for robust and impervious digital defenses is paramount. Healthcare organizations and professionals must proactively keep pace with the ever-changing landscape of cyber threats. This involves deploying advanced security infrastructure and enacting stringent procedures to safeguard against unauthorized access and data loss, while concurrently supporting bona fide health research initiatives.
The balance between data accessibility for research purposes and privacy protection is delicate; hence, continuous vigilance is imperative. Security protocols must be dynamic, evolving in step with both technological advancements and emerging threats. Through these means, the integrity of health data can be preserved, and the trust of patients and the public can be maintained.
Gaps in Protection and Compliance
The current landscape for health information protection is riddled with shortcomings despite the HIPAA Security Rule’s intentions. Regulatory oversight often misses the mark, and the rapid evolution of technology exposes gaps in these frameworks—gaps that could be exploited to the detriment of patient privacy. To bridge these vulnerabilities, healthcare entities must go beyond basic security protocols. They are tasked with the imperative of continuously refining their defenses through thorough and regular audits, reviews, and system updates.
Yet, this burden of bolstering information safety isn’t for healthcare institutions to bear alone. Participation from diverse actors within the health research domain, inclusive of government bodies, academic and private research institutions, as well as the researchers themselves, is crucial. Unified efforts are required to navigate the complexities of health information security in an era where digital innovation outpaces regulatory adaptation.
Strategically addressing these inadequacies remains pivotal to uphold the confidence of the general public. The healthcare sector’s commitment to this never-ending improvement cycle is fundamental in ensuring that sensitive health data is sheltered from the myriad of threats it faces in the digital age. Only through such collective and rigorous dedication can we aim to secure the sanctuary of health information.
Technological Innovations and Privacy
Enhancing Data Privacy with Emerging Technologies
In the realm of health research, safeguarding patient privacy while leveraging data for insights is paramount. Cutting-edge technologies are at the forefront of this balancing act, offering novel methods to protect sensitive health information. Privacy-preserving data mining is one such innovation, enabling researchers to delve into data without compromising personal details. Meanwhile, advancements in personal digital health record systems grant individuals greater control over their medical information. These platforms not only secure their data but also allow for selective sharing for research or clinical purposes.
This technological stride towards secure, patient-empowered data management heralds a new era in health research. Individuals can now participate more actively in their health care journey, dictating who can access their data and for what reasons. Consequently, researchers can conduct studies with rich datasets while upholding strict privacy standards.
Employing these technologies requires meticulous implementation to ensure they serve their protective functions effectively. Only with robust safeguards in place will the promise of these emerging tools be fully realized, striking an optimal balance between research advancement and the preservation of privacy in the digital health age.
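As one small, concrete illustration of the privacy-preserving techniques mentioned above, the sketch below replaces a direct patient identifier with a keyed hash, so records can still be linked across datasets without exposing the raw ID. The key and record fields are hypothetical, and real de-identification regimes (for example, HIPAA's Safe Harbor method) require removing many more identifier types than this:

```python
import hashlib
import hmac

# Hypothetical key for illustration; in practice it would live in a key
# management system and be rotated under a documented policy.
SECRET_KEY = b"rotate-me"

def pseudonymize(patient_id: str) -> str:
    """Return a stable keyed-hash token in place of a direct identifier."""
    return hmac.new(SECRET_KEY, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

record = {"patient_id": "MRN-004217", "diagnosis": "E11.9", "age": 57}
research_record = {**record, "patient_id": pseudonymize(record["patient_id"])}
print(research_record["patient_id"])  # a token, not the raw medical record number
```

The same input always yields the same token, which preserves linkability for research while keeping the raw identifier out of the shared dataset.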
The Limitations and Considerations of Technology in Privacy
While technology offers new frontiers for health data privacy, it is not without its caveats. For instance, the complexity of privacy-preserving technologies may present a barrier to widespread adoption, especially in regions with limited resources. There is also the risk that enhanced security measures could inadvertently hinder the collaborative nature of health research. Furthermore, the rapid pace at which technology evolves necessitates continuous evaluation and adaptation of privacy measures. Careful consideration must be given to the balance between the long-term benefits and the immediate challenges these technological solutions present, with the overarching goal of prioritizing the individual’s right to privacy while advancing public health objectives.
In conclusion, ensuring the privacy and security of health data in research is a task that requires an inclusive approach, sensitive to the evolving technological landscape, the variability of legal protections, and the diverse concerns of the public. By understanding and addressing these factors, the research community can build a framework that not only safeguards privacy but also engenders trust and participation, which are essential for the continued progress of medical research.
Consistent breach and attack simulations provide crucial insights to help enterprises measure, manage, and improve their systems’ ability to defend effectively against cyberattacks. Breach and attack simulation also enables enterprises to identify security vulnerabilities early.
A cyberattack simulation emulates a real threat against an enterprise’s own network, infrastructure, and assets, mimicking the possible attack paths and the tactics, techniques, and procedures (TTPs) that malicious actors use to exploit known vulnerabilities.
Simulating cyberattacks offers many benefits. Here are some key advantages of conducting cyberattack simulations.
- Simulations help enterprises identify vulnerabilities and weaknesses in their systems, networks, and applications. By emulating real-world attack scenarios, they can proactively uncover potential security flaws that could be exploited by malicious actors. This allows them to patch vulnerabilities before they are exploited, reducing the risk of successful cyberattacks.
- Enterprises can assess their overall security posture and evaluate the potential impact of different attack types. By testing various attack vectors and techniques, enterprises can determine their vulnerabilities’ severity and prioritize mitigation efforts accordingly. This helps allocate resources effectively to address the most critical risks.
- Simulations provide an opportunity to test an enterprise’s incident response capabilities. By mimicking real-world cyberattacks, organizations can evaluate their ability to detect, respond to, and recover from such incidents. This allows them to identify gaps in their incident response plans, refine processes, and enhance coordination among different teams involved in cybersecurity.
- Enterprises can raise awareness among employees and stakeholders about potential threats and security best practices. Through simulated phishing campaigns or social engineering techniques, enterprises can educate their workforce about the risks associated with certain behaviors and provide training on how to identify and respond to suspicious activities. This helps foster a security-conscious culture.
- Many industries are subject to specific compliance standards and regulatory requirements pertaining to cybersecurity. Conducting cyberattack simulations can assist in meeting these obligations by demonstrating the enterprise’s commitment to security and validating its compliance measures. Simulations can also help organizations identify gaps in compliance and take corrective actions.
- Cyberattack simulations are not one-time exercises but rather ongoing processes. Regularly conducting simulations allows enterprises to monitor their security posture over time, track improvements, and identify emerging threats or vulnerabilities. By continuously testing and refining their defenses, organizations can stay ahead of evolving cyber threats and maintain a proactive cybersecurity approach.
- Simulating cyberattacks and demonstrating a robust security stance helps build trust with customers, partners, and stakeholders. By showing a commitment to protecting sensitive data and maintaining the confidentiality, integrity, and availability of systems, organizations can differentiate themselves as reliable and trustworthy entities in the digital landscape.
RidgeBot® ACE Botlets conduct security assessments using agents to simulate real-world cyberattacks without any harm or impact to the IT environment. IT and security teams use RidgeBot ACE to run assessment test scripts, whose scripted behaviors are carried out by the Botlet to simulate a specific attack or to validate security controls. RidgeBot ACE also reports a key measurement, the block rate: the ratio of blocked scripts to all assessment scripts executed during RidgeBot ACE testing. A test result with a higher block rate indicates better security controls.
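The block-rate measurement described above is simple to express in code. The sketch below illustrates the metric only; it is not RidgeBot's implementation, and the script counts are hypothetical:

```python
def block_rate(blocked, executed):
    """Share of simulated attack scripts that security controls blocked."""
    if executed == 0:
        raise ValueError("no assessment scripts were executed")
    return blocked / executed

# Hypothetical run: 42 of 50 simulated attack scripts were stopped by controls.
print(f"block rate: {block_rate(42, 50):.0%}")  # block rate: 84%
```

Tracking this ratio across repeated test runs shows whether changes to security controls are actually improving the defensive posture.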
RidgeBot ACE attack simulation scenarios
RidgeBot ACE supports multiple attack simulation scenarios, including:
- Endpoint Security – Simulates the behavior of malicious software, and downloads malware signatures to validate the security controls of the target endpoints.
- Data Exfiltration – Simulates the unauthorized movement of data from a server — such as personal, financial, and confidential data, software source codes, and more.
- Active Directory Information Recon – Simulates an attacker gathering useful resources in Windows Active Directory for elevated privilege, persistence, and plundering information.
ACE enterprise security validation
- Security Control Validation
- Continuous Measurement
- MITRE ATT&CK Framework
Click here to learn how RidgeBot ACE can proactively protect your enterprise assets and data.
Data center maintenance is the proactive process of preserving and optimizing the infrastructure within a data center facility. It involves routine inspections, repairs, and upgrades to critical components like servers, cooling systems, and power distribution to ensure uninterrupted and efficient operations. Data center maintenance is essential for preventing downtime, mitigating security risks, and controlling operational costs.
What is Data Center Maintenance?
Data center maintenance is the unsung hero of the digital age, silently ensuring the smooth functioning of the technology that powers our world. Let's dive into the nitty-gritty of data center maintenance, from its definition to its far-reaching benefits.
Definition of Data Center Maintenance
At its core, data center maintenance involves a series of proactive measures and routine tasks aimed at preserving the integrity and functionality of a data center's infrastructure. This infrastructure encompasses a broad spectrum of components, including servers, storage devices, networking equipment, power systems, cooling systems, and security measures.
The primary goal of data center maintenance is to prevent issues, optimize performance, and extend the lifespan of critical hardware and systems. Think of it as the regular check-ups and tune-ups your car needs to keep running smoothly.
The Scope of Maintenance Activities
The scope of data center maintenance activities is extensive, encompassing various aspects of the data center environment:
Hardware Maintenance and Upgrades
This involves inspecting, repairing, and replacing hardware components such as servers, switches, routers, and storage devices. Regular upgrades ensure that the data center keeps pace with evolving technology.
Cooling and HVAC System Maintenance
Data centers generate significant heat due to the constant operation of servers and other equipment. Maintenance of cooling and HVAC (Heating, Ventilation, and Air Conditioning) systems is crucial to prevent overheating and maintain optimal temperature and humidity levels.
Power Distribution and UPS Maintenance
Power is the lifeblood of data centers. Maintenance activities here include checking power distribution systems, ensuring redundancy, and inspecting Uninterruptible Power Supply (UPS) units to prevent power interruptions.
Security and Access Control Systems
In an era of increasing cyber threats, safeguarding data centers is paramount. Maintenance includes assessing and updating security measures, access controls, firewalls, and intrusion detection systems.
Environmental Monitoring
Monitoring environmental conditions like temperature, humidity, and air quality ensures a stable and safe data center environment.
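In practice, environmental monitoring often reduces to comparing sensor readings against agreed thresholds. The sketch below is illustrative only: the ranges shown are loosely based on commonly cited recommendations (such as ASHRAE's), and real setpoints should come from your equipment specifications and facility design:

```python
# Illustrative thresholds only; real setpoints should follow your equipment
# specs and published guidance for data center environments.
TEMP_RANGE_C = (18.0, 27.0)        # inlet air temperature, degrees Celsius
HUMIDITY_RANGE_PCT = (40.0, 60.0)  # relative humidity, percent

def check_reading(sensor, temp_c, humidity_pct):
    """Return a list of alert messages for one sensor reading."""
    alerts = []
    lo, hi = TEMP_RANGE_C
    if not lo <= temp_c <= hi:
        alerts.append(f"{sensor}: temperature {temp_c:.1f}C outside {lo}-{hi}C")
    lo, hi = HUMIDITY_RANGE_PCT
    if not lo <= humidity_pct <= hi:
        alerts.append(f"{sensor}: humidity {humidity_pct:.0f}% outside {lo}-{hi}%")
    return alerts

# Hypothetical readings: one healthy rack, one running hot and dry.
for sensor, t, h in [("rack-a1", 24.5, 45.0), ("rack-b3", 31.2, 38.0)]:
    for alert in check_reading(sensor, t, h):
        print(alert)
```

A real deployment would feed readings from facility sensors into checks like this continuously and route alerts to the operations team before conditions drift far enough to damage hardware.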
Key Objectives and Benefits of Regular Data Center Maintenance
Now that we've covered what data center maintenance entails, let's explore why it's indispensable:
Minimizing Downtime
Downtime in a data center can be catastrophic, leading to substantial financial losses and damage to a company's reputation. Regular maintenance helps identify and address potential issues before they cause downtime, ensuring uninterrupted operations.
Mitigating Data Loss and Security Risks
Data breaches and data loss incidents are nightmares for any organization. Maintenance not only safeguards against hardware failures but also reinforces cybersecurity measures, reducing the risk of data breaches.
Controlling Operational Costs
Proactive maintenance is often more cost-effective than reacting to emergencies. By identifying and addressing issues early, data center operators can avoid costly repairs and replacements.
Enhancing Efficiency and Performance
Optimized hardware and systems run more efficiently, reducing energy consumption and associated costs. Performance improvements also lead to better user experiences and faster data processing.
In conclusion, data center maintenance is the invisible hand that keeps the digital world turning. Its proactive approach to hardware upkeep, security enhancement, and efficiency optimization is essential for businesses of all sizes. To learn more about how Maintech can assist with your data center maintenance needs, visit Maintech.
The Consequences of Neglecting Data Center Maintenance
Neglecting data center maintenance can have far-reaching consequences, some of which may catch businesses off guard. Let's explore the potential fallout when data center maintenance takes a back seat.
Downtime and Its Impact on Businesses
Downtime is a dreaded word in the world of data centers. It refers to the period when essential systems or services become unavailable. Here's why it's such a big deal:
- Financial Losses: Downtime can cost businesses significant amounts of money. Every minute of downtime means potential revenue loss, especially in industries like e-commerce and finance.
- Reputation Damage: Extended downtime can erode trust in a company's reliability. Customers may lose confidence in your services, impacting your brand image.
- Productivity Hit: Employees can't work when critical systems are down, leading to lost productivity.
- Legal and Compliance Issues: In some industries, downtime can result in legal consequences, especially if it involves sensitive customer data.
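To make the financial-loss point concrete, a rough floor on the cost of an outage can be estimated from lost revenue and idle labor. All figures in the example are hypothetical, and real incidents also carry reputation and compliance costs that resist quantification:

```python
def downtime_cost(minutes_down, revenue_per_hour, employees_idle, hourly_rate):
    """Rough floor on outage cost: lost revenue plus idle labor.
    Reputation damage and compliance penalties are not modeled."""
    hours = minutes_down / 60
    return hours * revenue_per_hour + hours * employees_idle * hourly_rate

# Hypothetical: a 90-minute outage, $12,000/hour in online revenue,
# 40 employees idled at a $55/hour loaded labor rate.
print(downtime_cost(90, 12_000, 40, 55))  # 21300.0
```

Even this conservative arithmetic makes it easy to compare the cost of a single outage against the cost of the preventive maintenance that would have avoided it.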
Data Loss and Security Risks
Data centers store vast amounts of critical information, making them attractive targets for cyberattacks. Neglecting maintenance can expose your data to risks:
- Data Breaches: Outdated security measures can leave vulnerabilities open to exploitation, resulting in data breaches with severe legal and financial implications.
- Data Loss: Without regular maintenance, hardware failures become more likely, increasing the risk of data loss. Data recovery can be costly and may not always be successful.
- Compliance Violations: Many industries have strict data protection regulations. Neglecting data center security can lead to compliance violations and penalties.
Increased Operational Costs
Ironically, neglecting maintenance to cut costs can backfire:
- Emergency Repairs: When components fail due to lack of maintenance, you may incur higher costs for emergency repairs or replacements.
- Energy Inefficiency: Outdated hardware and inefficient systems consume more energy, resulting in higher utility bills.
Decreased Efficiency and Performance
Finally, the impact of neglecting data center maintenance on efficiency and performance:
- Sluggish Performance: Hardware and systems that aren't properly maintained can lead to slower response times and reduced efficiency.
- Inefficient Resource Allocation: Without optimization, resources may be underutilized or overtaxed, leading to resource inefficiencies.
- Shortened Equipment Lifespan: Lack of maintenance can shorten the lifespan of hardware, necessitating more frequent replacements.
In summary, neglecting data center maintenance is a risky proposition. It can lead to financial losses, reputational damage, data security risks, increased operational costs, and decreased efficiency. To mitigate these consequences, it's crucial to prioritize regular maintenance and partner with experts in the field. For more insights into data center maintenance and how Maintech can assist you, visit Maintech.
Tips for Choosing a Data Center Maintenance Partner
Selecting the right data center maintenance partner is a critical decision that can significantly impact your business's efficiency and security. Here are some key considerations to guide you in making this crucial choice:
Key Considerations When Selecting a Maintenance Provider
- Experience and Expertise: Look for a provider with a proven track record in data center maintenance. Experience and expertise matter when it comes to safeguarding your critical infrastructure.
- Comprehensive Services: Ensure that the maintenance provider offers a wide range of services, including hardware maintenance, security updates, and environmental monitoring. A one-stop shop can simplify your data center management.
- Proactive vs. Reactive Approach: Opt for a partner that takes a proactive approach to maintenance. Preventive measures are often more cost-effective than reactive fixes.
- Response Time: In the event of an issue, quick response times are crucial. Ask about their guaranteed response times and support availability, especially for critical incidents.
- Scalability: Your data center's needs may evolve over time. Choose a provider that can scale their services to accommodate your growth.
Why Maintech Stands Out as a Trusted Partner
When it comes to data center maintenance, Maintech shines for several reasons:
- Decades of Experience: Maintech boasts a 50-year track record, indicating a wealth of experience in the field.
- Global Reach: Operating in over 145 countries, Maintech offers a global presence, making them an excellent choice for businesses with international operations.
- Comprehensive Services: Maintech provides a comprehensive suite of services, including managed enterprise, managed services, managed cybersecurity, and managed cloud.
- Proactive Approach: Maintech's proactive support ensures issues are identified and resolved before they disrupt your operations.
- Security Focus: With cybersecurity threats on the rise, Maintech prioritizes robust security measures to protect your data and assets.
Importance of Tailored Solutions for Specific Business Needs
Every business is unique, and your data center maintenance needs should align with your specific goals and requirements. Maintech understands the importance of tailored solutions and works closely with clients to customize maintenance plans that fit their individual needs.
In conclusion, choosing the right data center maintenance partner is a decision that should not be taken lightly. Consider factors such as experience, service offerings, responsiveness, scalability, and a proactive approach. Maintech stands out as a trusted partner with a strong track record, comprehensive services, and a commitment to tailoring solutions to your business needs. To explore how Maintech can assist you, visit Maintech for more information.
Maintech Data Center Maintenance Services
Maintech is your trusted partner for comprehensive data center maintenance services. With a proven track record and a wide range of offerings, we ensure that your data center remains in top-notch condition. Let's explore what Maintech brings to the table:
Overview of Maintech's Expertise in Data Center Maintenance
Maintech has been a leader in the field of data center maintenance for over 50 years. Our extensive experience and commitment to excellence set us apart. Here's what you can expect:
- Proven Track Record: With a 50-year history, Maintech has a proven track record of delivering top-quality maintenance services to businesses worldwide.
- Global Reach: Operating in over 145 countries, we offer a global presence that can support your data center needs, no matter where you are.
- Comprehensive Services: Our services cover every aspect of data center maintenance, ensuring that your infrastructure is fully supported. From hardware maintenance to cybersecurity, we've got you covered.
- Proactive Approach: We take a proactive approach to maintenance, identifying and addressing issues before they become critical. This helps minimize downtime and keep your operations running smoothly.
- Security Focus: In today's threat landscape, security is paramount. Maintech prioritizes robust cybersecurity measures to safeguard your data and assets.
Comprehensive Services Offered
Maintech's comprehensive suite of services includes:
Hardware Maintenance and Upgrades
We ensure your critical hardware components, such as servers, switches, and storage devices, are well-maintained and up-to-date. Regular upgrades keep your data center current with the latest technology.
Cooling and HVAC System Maintenance
Maintaining optimal temperature and humidity levels is crucial for preventing overheating. Our experts ensure that your cooling and HVAC systems are in top condition.
Power Distribution and UPS Maintenance
Power is the backbone of your data center. We inspect power distribution systems and UPS units to prevent power disruptions and ensure redundancy.
Security and Access Control Systems
In an era of increasing cyber threats, data center security is non-negotiable. We assess and update security measures, access controls, and intrusion detection systems to keep your data safe.
Environmental Monitoring
Monitoring environmental conditions, such as temperature and air quality, is vital for maintaining a stable and secure data center environment.
Benefits of Choosing Maintech for Data Center Maintenance
When you partner with Maintech for data center maintenance, you gain access to numerous benefits:
- Uptime Guarantee: We ensure maximum uptime and operational continuity, minimizing disruptions to your business.
- Efficiency Optimization: Our focus on optimization and streamlining workflows leads to improved efficiency and reduced operational costs.
- Scalability: Maintech's solutions can scale with your business growth, adapting to your evolving needs.
- Robust Security: We implement comprehensive cybersecurity measures to protect your sensitive data and assets.
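Uptime commitments such as the guarantee above are usually expressed as availability percentages, and it is worth knowing how little downtime each extra "nine" permits. This small converter shows the arithmetic; the targets listed are common industry examples, not a statement of any particular contract:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def allowed_downtime_minutes(availability_pct):
    """Minutes of downtime per year permitted by an availability target."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for target in (99.0, 99.9, 99.99, 99.999):
    print(f"{target}% uptime allows {allowed_downtime_minutes(target):.1f} min/year")
```

Moving from 99.9% to 99.99% availability shrinks the annual downtime budget from roughly eight and a half hours to under an hour, which is why each additional nine demands markedly more rigorous maintenance and redundancy.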
In conclusion, Maintech is your go-to partner for data center maintenance. With a 50-year track record, a global presence, and a proactive approach, we are dedicated to ensuring the optimal performance and security of your data center. To learn more about how Maintech can assist with your data center maintenance needs, visit Maintech.
Essential Nature of Data Center Maintenance: Data center maintenance is crucial for the smooth operation of digital infrastructure. It involves routine inspections, repairs, and upgrades to critical components like servers, cooling systems, and power distribution units. This maintenance is vital for preventing downtime, mitigating security risks, and controlling operational costs.
Proactive Approach to Maintenance: The article emphasizes the importance of a proactive approach in data center maintenance. Regular check-ups and tune-ups, akin to those for a car, are necessary to maintain optimal performance and extend the lifespan of hardware and systems.
Comprehensive Scope of Maintenance Activities: Maintenance activities in a data center cover a wide range, including hardware maintenance and upgrades, cooling and HVAC system maintenance, power distribution, security and access control systems, and environmental monitoring. This comprehensive approach ensures all aspects of the data center are functioning efficiently.
Key Objectives and Benefits: Regular maintenance minimizes downtime, mitigates data loss and security risks, controls operational costs, and enhances efficiency and performance. These benefits are crucial for maintaining the reliability and effectiveness of data center operations.
Consequences of Neglecting Maintenance: Neglecting data center maintenance can lead to severe consequences like downtime, data loss, security risks, increased operational costs, and decreased efficiency and performance. Proactive maintenance is essential to avoid these risks.
Reminder of the Post’s Main Point: The main point of the article is to highlight the critical importance of data center maintenance in ensuring uninterrupted and efficient operations of digital infrastructure. It underscores the need for a proactive, comprehensive approach to maintenance to prevent potential issues and optimize performance. | <urn:uuid:6bac8e8f-d17b-428f-9c5c-2c682ee4b61e> | CC-MAIN-2024-38 | https://www.maintech.com/blog/what-is-data-center-maintenance | 2024-09-10T09:43:09Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651241.17/warc/CC-MAIN-20240910093422-20240910123422-00485.warc.gz | en | 0.908863 | 2,827 | 2.734375 | 3 |
The chief prosecutor of the International Criminal Court (ICC) has requested arrest warrants for Israeli Prime Minister Benjamin Netanyahu, Defense Minister Yoav Gallant, and three Hamas leaders: Yahya Sinwar, Mohammed Deif, and Ismail Haniyeh. These individuals are accused of committing war crimes and crimes against humanity during the recent conflict between Israel and Hamas.
The allegations against Netanyahu and Gallant include the use of starvation of civilians as a method of warfare and intentionally directing attacks against civilians in Gaza. The Hamas leaders are charged with crimes including murder, acts of sexual violence, hostage-taking, and other cruel treatments.
Critics argue that equating the actions of Israel, a democratic state, with those of Hamas, a terrorist organization, is inappropriate and undermines Israel’s legitimacy. U.S. President Joe Biden and other Western leaders have strongly opposed the ICC’s decision, asserting that there is no moral equivalence between Israel’s self-defense measures and Hamas’s terrorist activities. Israeli officials, including Prime Minister Netanyahu, have condemned the ICC’s actions, stating that comparing the Israeli Defense Forces (IDF) with Hamas detracts from Israel’s right to defend itself against terrorist attacks.
ICC chief prosecutor Karim Khan emphasized that the core of his ruling is the application of international law to all parties, demonstrating that no one—whether soldier, civilian, or elected official—can act with impunity. Khan clarified that his decision does not equate Israel with Hamas; instead, it scrutinizes the actions of each side individually to determine if they constitute crimes against humanity.
In 1998, the United Nations approved the Rome Statute of the International Criminal Court, an initiative aimed at preventing crimes against humanity in international warfare. The statute came into force in 2002, officially establishing the International Criminal Court (ICC) based in The Hague, Netherlands. The ICC is recognized by 124 member countries, although notable absentees include the United States, Russia, and China.
In March 2023, the ICC issued an arrest warrant for Vladimir Putin, accusing him of the forcible deportation of Ukrainian children from occupied areas of Ukraine to Russia. This marked the third time a sitting head of state has been indicted by the ICC. As Russia, Ukraine, the US, and other countries have not signed the Rome Statute, the arrest warrant primarily impacts Putin’s ability to travel to countries that are ICC members and have obligations under the statute to arrest him.
What Does the ICC Ruling Mean for Israel?
If arrest warrants are issued, Netanyahu and other Israeli officials could face severe travel restrictions, risking arrest in any of the 124 countries that are ICC member states, similar to Russian President Vladimir Putin. This would strain Israel’s diplomatic relations with these states, which are legally obliged to cooperate with the ICC. Domestically, Prime Minister Netanyahu has condemned the ICC’s actions, claiming they are politically motivated and affirming that Israel will continue to defend itself. Internationally, UN experts criticize threats against the ICC, arguing that they undermine international justice. The ruling could also impact Israel’s military operations in Gaza, as it faces scrutiny over alleged tactics like starvation and targeting civilian infrastructure. The ICC’s actions have ignited debates about international law and double standards, with critics urging balanced accountability for crimes by all parties, including Hamas.
The ICC’s ruling on Netanyahu differs from that on Putin in that Israel is a close American ally. Targeting its leadership may set a dangerous precedent for how Western countries are allowed to defend themselves. Additionally, it has provoked strong criticism from U.S. leaders, who have sharply condemned the ruling.
In response to the ruling, Netanyahu called ICC chief prosecutor Karim Khan one of the “greatest antisemites in modern times,” and compared him to Nazi judges who persecuted Jews. Along with Netanyahu, Israeli Defence Minister Yoav Gallant also condemned the ICC ruling, calling it “disgraceful” and “disgusting” to equate the leader of a terrorist organization with the State of Israel.
Interestingly, Hamas leadership also expressed their discontent with the ruling, asserting that Israel has not faced sufficient charges and objecting to the inclusion of Hamas leaders in any arrest warrants. Israel’s allies, the United States and the United Kingdom, have defended the Jewish state. US President Joe Biden, who has supported Israel since October 7, emphasized that there is “no equivalence – none – between Israel and Hamas.” In addition to Biden, UK Prime Minister Rishi Sunak also criticized the ICC, stating that these arrest warrants offer no practical help in ending the conflict. In Europe, opinions are split, with some governments applauding the ICC and others criticizing the moral equivalence placed on Israeli leadership and Hamas.
As the world anticipates a final verdict, the global community remains divided on the Israel-Hamas War and the ICC rulings. However, one certainty is that these events continue to hold significant attention. If the arrest warrants are issued, the potential global backlash remains unpredictable. Nonetheless, the US government has firmly stated its non-recognition of the ICC’s authority and its determination to defend its ally against any perceived defamation. | <urn:uuid:ba211b79-b371-461f-b3ae-4fd138e4081c> | CC-MAIN-2024-38 | https://www.interforinternational.com/what-does-the-icc-ruling-regarding-israel-and-hamas-mean/ | 2024-09-12T22:32:40Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651498.46/warc/CC-MAIN-20240912210501-20240913000501-00285.warc.gz | en | 0.957274 | 1,057 | 2.515625 | 3 |
Resistance wire is wire used for making electrical resistors. It comes in several types; the most common for heating purposes is nichrome, a non-magnetic 80/20 alloy of nickel and chromium, valued for its high resistivity and its resistance to oxidation at high temperatures. These wires are manufactured in a huge range of alloys, with diameters from 3.25 mm down to 0.06 mm. When used as a heating element, resistance wire is normally wound into coils. The performance and life of a heating element depend on the properties of its material: it should have a high melting point, high tensile strength, high resistivity, and a low temperature coefficient of resistance. Resistance heating wire is made from several materials, including Kanthal (FeCrAl) wire, nickel-chrome, cupronickel (CuNi) alloys, and various other alloys. The major applications of heating elements are in electric ovens, heaters, and toasters, as well as in industries such as petrochemicals and metallurgy & machinery.
Increasing use of resistance heating wire in industries such as petrochemicals & petroleum, metallurgy & machinery, and electronic appliances is one of the major factors driving the growth of the market. These wires are used at large scale in electrical home appliances such as electric ovens and water heaters, which may further propel market growth. Over the last few years there has been a significant increase in demand for consumer goods such as ovens; rising per capita income, growing demand for consumer goods, and changing lifestyles have fuelled the growth of the market. Resistance wire is also increasingly used for heating elements in electric heaters, toasters, and similar appliances, where the element converts electrical energy into heat through resistive heating. However, the presence of a large number of alternatives and the high cost of the material may slow the growth of the market.
Among all resistance heating wire types, nickel-chrome holds the leading position in terms of market share. Nickel-chrome is a non-magnetic alloy of nickel, chromium, and, in some grades, iron; it typically consists of 80% nickel and 20% chromium by mass and is the composition most commonly used as resistance wire. The properties of nickel-chrome vary with the alloy and are characterized by high resistivity and good oxidation resistance. Nickel-chrome wires are known for their high mechanical strength, good ductility after use, and excellent weldability. Its wide range of applications, including the manufacture of Monel with iron and steel, gears, and the production of stainless steel, helps the nickel-chrome segment hold the largest share.
The market research study “Resistance Heating Wire Market (Type: Kanthal (FeCrAl) Wires, Nickel-Chrome, Cupronickel (CuNi) Alloys, Other; Application: Petroleum & Petrochemicals, Metallurgical & Machinery, Ceramic & Glass Processing, Electronic Appliances, Others) - Global Industry Analysis, Market Size, Opportunities and Forecast, 2018 - 2025” offers detailed insights on the global resistance heating wire market and its different segments. The report presents market dynamics (drivers, restraints, and opportunities) together with their impact, and examines the market by type, application, and major geographic region. It covers basic development policies and the layout of technology development processes, as well as global market size and volume segmented by type, application, and geography, along with information on the companies operating in the market. The analysis spans the major regional markets, including North America, Europe, China, Southeast Asia, Japan, and India, followed by major countries; for each region, market size and volume are provided for the different segments. Europe is expected to hold a significant share of the market over the forecast period, owing to increasing demand for home appliances such as electric heaters and ovens, with rising government support a further driver in the region. Asia-Pacific is expected to witness strong growth over the forecast period owing to increasing expenditure and per capita income. LAMEA is expected to see only steady growth, held back by weaker economic conditions and limited government support.
Asia-Pacific sees rising demand from petrochemicals, electronic appliances, and other sectors, and China is among the largest manufacturers of resistance heating wire. The players profiled in the report include KANTHAL, Isabellenhutte, Sedes Group S.r.l., Danyang Xinli Alloy Co., Ltd., Mega Heaters, Eltherm, Furukawa, Xinghuo Special Steel Co., Ltd., Chongqing Chuanyi Metallic Functional Materials Co., Ltd., Changshu Electric Heating Alloy Material Factory Co., Ltd., TaiZhou JiShen Resistance Wire Co., Ltd., Omega, and others.
German and Belgian researchers have warned of potential attacks that break email encryption using Pretty Good Privacy (PGP) and secure multi-purpose internet mail extensions (S/MIME) by coercing clients into sending the full plaintext of the emails to the attacker.
PGP and S/MIME encryption are used by organisations because both add an additional layer of security to email communication and, if used properly, both technologies guarantee confidentiality and authenticity of email messages even if an attacker has full access to an email account.
The researchers said “Efail” describes vulnerabilities in OpenPGP and S/MIME that leak the plaintext of encrypted emails.
One of the researchers, Sebastian Schinzel, who runs the IT security lab at the Münster University of Applied Sciences, tweeted: “There are currently no reliable fixes for the vulnerability. If you use PGP/GPG or S/MIME for very sensitive communication, you should disable it in your email client for now.”
The research also prompted the Electronic Freedom Foundation (EFF) to issue a warning that encrypted messages sent in the past could be exposed through exploitation of the vulnerability.
The EFF also advised users to stop using PGP/GPG encryption until the issues are more widely understood and fixed, providing links on how to disable PGP/GPG (GNU privacy guard) encryption plugins in email clients.
Users should arrange for the use of alternative end-to-end secure channels, such as Signal, the EFF said in a blog post.
In summary, the researchers said the Efail attacks abuse active content of HTML emails – for example, externally loaded images or styles – to exfiltrate plaintext through requested URLs.
While this sounds alarming, the researchers admit that to create these exfiltration channels, the attacker first needs access to the encrypted emails, for example, by eavesdropping on network traffic, compromising email accounts, email servers, backup systems or client computers.
The attacker would then change an encrypted email in a particular way and send this changed encrypted email to the victim. The victim’s email client decrypts the email and loads any external content, thus exfiltrating the plaintext to the attacker.
Overreaction to Efail attacks
While the attack is “sneaky”, independent security advisor Graham Cluley is one of several experts who have said the significance and severity of the Efail attacks have been overstated.
The researchers clearly state: “The Efail attacks require the attacker to have access to your S/MIME or PGP encrypted emails. You are thus only affected if an attacker already has access to your emails.”
This fact makes successful exploitation of these newly discovered vulnerabilities an unlikely risk, according to Cluley.
“If a malicious hacker already has access to your email servers, networks and such like, there’s probably all manner of worse and less convoluted things they could be doing to make your life a misery, steal secrets and destroy your privacy,” he wrote in a blog post.
Cluley also highlighted that because Efail attacks rely on past encrypted emails being sent to the target, it is a visible and obvious attack method that could be easily identified using a script that scans incoming email for malformed IMG tags.
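A minimal sketch of the kind of screening script Cluley describes is shown below, using only Python's standard library. The 512-character threshold and the remote-URL heuristic are illustrative assumptions, not a production rule set:

```python
from html.parser import HTMLParser

class SuspiciousImgScanner(HTMLParser):
    """Flag <img> tags whose src looks like an Efail-style exfiltration
    URL: remote, with an unusually long path or query string that could
    carry captured plaintext back to the attacker."""

    MAX_REASONABLE_URL = 512  # illustrative threshold, not a standard

    def __init__(self):
        super().__init__()
        self.findings = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        src = dict(attrs).get("src", "")
        if src.startswith(("http://", "https://")) and len(src) > self.MAX_REASONABLE_URL:
            self.findings.append(src[:80] + "...")

def scan_email_html(html_body: str) -> list:
    """Return truncated copies of any suspicious image URLs found."""
    scanner = SuspiciousImgScanner()
    scanner.feed(html_body)
    return scanner.findings
```

A gateway could run this over the HTML part of each incoming message and quarantine anything with findings; legitimate marketing mail rarely embeds image URLs anywhere near this long.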
Read more about email security
- How to improve security against email attacks and for GDPR compliance.
- UK businesses exposed to email-borne cyber risks, survey shows.
- Email is the number one entry point for data breaches, which includes targeted email attacks such as business email compromise and spear phishing.
- How to ensure secure email exchange with external business partners.
- Russian cyber espionage highlights need to improve email security.
Cluley is among several security experts who have pointed out that Efail is not reliant on any inherent weakness in the PGP/GPG being used because it exploits users who have not told their email clients to stop remote or external content from being automatically rendered.
Keep email patches up to date
The researchers have called for the MIME, S/MIME and OpenPGP standards to be updated, saying the Efail attacks exploit flaws and undefined behaviour in these standards.
Cluley also pointed out that it is not a new problem – the root problem of mail clients attempting to display corrupted S/MIME messages has been known about since 2000.
Efail is not a good reason for users of PGP/GPG to disable it entirely, according to Cluley. However, he does point out that there are alternative end-to-end encrypted messaging solutions that do not face the same challenges.
While Efail is not a reason to panic, organisations are advised to keep all email clients updated with the latest security patches. Cluley said organisations should also consider disabling rendering of remote content until the issue is resolved, preventing automatic decryption of email messages and requiring users to manually request decryption instead to reduce the chances of data leakage via active content.
The researchers claim that they have disclosed their findings “responsibly” to international computer emergency readiness teams (Certs), GNU PG developers and the affected suppliers, which have applied (or are in the process of applying) countermeasures.
“Please note that, in general, these countermeasures are specific hotfixes and we cannot rule out that extended attacks with further backchannels or exfiltrations will be found,” they said.
The researchers also warned that even if all backchannels are closed, both standards are still vulnerable to attacks where the attacker can modify email content or inject malicious code into attachments which get executed in a context beyond email client. | <urn:uuid:6ab0848d-8038-47b2-bba8-761ce037a4d5> | CC-MAIN-2024-38 | https://www.computerweekly.com/news/252441102/No-need-to-panic-about-Efail-attacks | 2024-09-16T13:58:55Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651697.32/warc/CC-MAIN-20240916112213-20240916142213-00885.warc.gz | en | 0.939466 | 1,176 | 2.578125 | 3 |
What Is Phishing?
Phishing is the act of sending phony emails to trick recipients into revealing their user names and passwords. The sender pretends to represent an organization that has a plausible reason to request information. For instance, you might get a genuine-looking email that appears to be from PayPal, claiming that something is wrong with your PayPal account and asking you to click a link, log in, and change your password or take some other action. The link, however, leads to a spoofed copy of the PayPal site rather than the real one.
Should you follow their instructions? No! These are cyber thieves trying to get your personal and financial information. Their goal is to steal from you. Hacking has become big business around the globe.
Microsoft Office 365: How It Works
Office 365 is a longstanding software package developed by Microsoft. It includes Word, Excel, PowerPoint, and other commonly used software applications that are used in business. Office 365, unlike previous versions of these applications, operates through yearly subscriptions. Updates are performed regularly. You have access to the latest programs so you can work more efficiently both at home and in the office.
There are good reasons to operate under a yearly subscription plan using these programs. They’re always current and up-to-date and you never have to worry about maintenance issues. Office 365 is available for phones, tablets, computers and you can work from anywhere there’s an internet connection. Microsoft has consistently created the best security programs and systems to guard its products from hackers. And yet, cyber thieves are finding ways around these security protocols.
What Potential Is There For Phishing Scams?
Any hacker can potentially claim that they represent Microsoft. These thieves have been able to replicate an authentic email very well. People who are not paying attention might fall for one of these phony emails. These hackers can sound like they have a viable reason to request information or actions that could place the account holder at risk.
Using convincing business imagery and closely matching email addresses, these messages can fool recipients. Hackers can easily paste company logos into the emails to make them even more convincing, and users who fail to take a second look may inadvertently reveal log-in information, credit card numbers, or banking details.
Can Hackers Get Around Microsoft’s Security Features?
Recently, hackers have been using certain phishing methods to bypass the current Safe Links security features found in Office 365 software. Safe Links have been a basic aspect of the organization’s Advanced Threat Protection (ATP) program, which has been helping to protect businesses from receiving damaging links that are sent through phishing.
Safe Links scans each URL and attempts to match it against a stored blacklist, notifying the user of detections. By splitting malicious URLs using tags in the HTML header (reportedly the <base> tag), attackers have been able to bypass detection of blacklisted links.
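To see why literal URL matching is bypassable, consider this toy model of a blacklist check. The blacklist entry and function names are hypothetical, and this is not Microsoft's actual implementation; it only illustrates the reported weakness. If a malicious URL is split between an HTML `<base>` tag and a relative `href`, the scanner never sees the assembled link, while the mail client reassembles it at render time:

```python
# Hypothetical blacklist of known-bad URLs, matched as literal strings.
BLACKLIST = {"http://malicious.example/phish"}

def naive_safe_links_check(href: str) -> bool:
    """Return True if the literal href matches a blacklisted URL."""
    return href in BLACKLIST

# The full malicious URL is caught...
assert naive_safe_links_check("http://malicious.example/phish")

# ...but split across <base href="http://malicious.example/"> in the
# header and <a href="phish"> in the body, the relative href alone
# sails past the check, while the client still builds the full link.
assert not naive_safe_links_check("phish")
```

A more robust scanner would resolve each link against any `<base>` element before matching, which is essentially what subsequent fixes had to do.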
Many users of Outlook have been affected by this approach, and while Gmail’s security has evolved well enough to avoid these kinds of bypasses, similar updates are recommended for Microsoft. SecurityWeek explained that users now have the capacity to block URLs on gateways. While this has helped prevent attacks, software developers will be required to address all these new threats as they design their software programs.
Microsoft has been updating aspects of its security to increasingly protect against the improvements of phishing and hacking actions. Along with their central ATP features, which allow users to customize their account settings, users can create their own system-tailored anti-phishing policy. It will update across the range of datacenters within 30 minutes of activation.
These help but there is now a full range of phishing attack types, including ‘spearfishing’ and ‘whaling.’ These both target specific individuals in an organization—normally someone in a high position such as CEO and CFO.
One of the problems with Office 365 security is that ATP is not available with the basic subscription. Users must purchase the security feature or a different version of the software in order to get the best protection against the latest hacking schemes. Once the additional protection is purchased, users should optimize the wide range of settings available within advanced ATP features to get the best security.
According to The Hacker News, cyber thieves have been inserting words that are hidden from the user by reducing their font size to zero, in an effort to make phishing emails read normally to security code while bypassing its detection features. This method has been used to increase the success rate and frequency of phishing scams, and there is now greater demand for security features able to detect text rendered at a font size of zero.
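One way a filter could counter the zero-font trick is to reconstruct the text a human would actually see and compare it with what the scanner parsed. Below is a rough sketch using Python's standard-library HTML parser; it handles only inline `style` attributes, as an illustration, not full CSS:

```python
from html.parser import HTMLParser
import re

class ZeroFontStripper(HTMLParser):
    """Reconstruct the text a human would actually see by dropping any
    content inside elements styled with font-size: 0."""

    ZERO_FONT = re.compile(r"font-size\s*:\s*0", re.IGNORECASE)
    VOID = {"br", "img", "hr", "input", "meta", "link"}  # no closing tag

    def __init__(self):
        super().__init__()
        self.depth_hidden = 0
        self.visible = []

    def handle_starttag(self, tag, attrs):
        if tag in self.VOID:
            return
        if self.depth_hidden or self.ZERO_FONT.search(dict(attrs).get("style", "")):
            self.depth_hidden += 1

    def handle_startendtag(self, tag, attrs):
        pass  # self-closing tags cannot hide the text that follows them

    def handle_endtag(self, tag):
        if tag not in self.VOID and self.depth_hidden:
            self.depth_hidden -= 1

    def handle_data(self, data):
        if not self.depth_hidden:
            self.visible.append(data)

def visible_text(html_body: str) -> str:
    parser = ZeroFontStripper()
    parser.feed(html_body)
    return "".join(parser.visible)
```

A filter could then scan `visible_text(body)` instead of the raw markup; a large mismatch between the two versions is itself a signal that someone is trying to evade detection.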
CSO reported that there has been an increase in phishing scam types, and while the zero-font method may be addressed with security code improvements, another type of bypass method will likely be developed by hackers.
What Should I Do?
Microsoft recommends that users follow common practices in addition to optimizing program features to best protect against hacking scams. These may include: | <urn:uuid:f074940b-36e6-4547-a230-3afb249a719c> | CC-MAIN-2024-38 | https://www.fuellednetworks.com/how-to-reduce-vulnerability-to-phishing-when-using-office-365/ | 2024-09-16T12:36:06Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651697.32/warc/CC-MAIN-20240916112213-20240916142213-00885.warc.gz | en | 0.953696 | 1,032 | 2.9375 | 3 |
The COVID-19 pandemic has prompted many companies to enable their employees to work remotely and, in a large number of cases, on a global scale. A key component of enabling remote work and allowing employees to access internal corporate resources remotely is Remote Desktop Protocol (RDP), which allows communication with a remote system. In order to maintain business continuity, it is very likely that many organizations brought systems online quickly with minimal security checks in place, giving attackers the opportunity to enter them with ease.
RDP is a Microsoft protocol running on port 3389 that can be utilized by users requiring remote access to internal systems. Most of the time, RDP runs on Windows servers and hosts services such as web servers or file servers, for example. In some cases, it is also connected to industrial control systems.
RDP ports are often exposed to the Internet, which makes them particularly interesting for attackers. In fact, accessing an RDP box can allow an attacker access to an entire network, which can generally be used as an entry point for spreading malware, or other criminal activities.
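The exposure itself is trivial to verify: services such as Shodan essentially automate, at Internet scale, TCP handshake checks like the sketch below. Run such checks only against hosts you own or are authorized to test:

```python
import socket

def rdp_port_open(host: str, port: int = 3389, timeout: float = 2.0) -> bool:
    """Return True if a TCP handshake completes on the given port,
    i.e. something is listening there and reachable from here."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Completing a handshake on 3389 does not prove the service is vulnerable, only that it is reachable; but reachability is the precondition every attack described here starts from.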
As it can be such a powerful entry vector, McAfee Advanced Threat Research (ATR) has observed many underground markets emerge, offering RDP credentials at relatively low cost. For example, McAfee ATR uncovered access linked to a major international airport that could be bought for only US$10. Since March 2020, the number of exposed RDP ports has increased considerably.
McAfee Advanced Threat Research and the security industry have been aware of the risk of exposed RDP for many years and will continue to raise awareness as part of our global threat monitoring.
In this blog, we will discuss the risks of exposing the RDP protocol and the associated misconfigurations.
The number of RDP ports exposed to the Internet has grown quickly, from roughly three million in January 2020 to more than four and a half million in March. A simple search on Shodan reveals the number of RDP ports exposed to the Internet by country.
It is interesting to note that the number of RDP systems exposed is much higher for China and the United States.
Most of the compromised systems using RDP are running Windows Server but we also notice other operating systems, such as Windows 7.
For attackers, access to a remote system can allow them to perform several criminal actions such as:
- Spreading spam: Using a legitimate system for sending spam is very convenient. Some systems are sold especially for this purpose.
- Spreading malware: A compromised system provides a ready-to-use machine for easily distributing malware, or even pivoting to the internal network. Many ransomware authors use this vector to target organizations around the world. Another criminal option would be to implant a cryptominer.
- Using the compromised box as their own: Cybercriminals also use remotely compromised systems to hide their tracks by, for example, compiling their tools on the machine.
- Abuse: The remote system can also be used to carry out additional fraud such as identity theft or the collection of personal information.
This recent increase in the number of systems using RDP over the Internet has also influenced the underground. McAfee ATR has noticed an increase both in the number of attacks against RDP ports and in the volume of RDP credentials sold on underground markets.
As observed on Shodan, the number of exposed systems is higher for China (37% of total) and the United States (37% of total), so it is interesting to note that the number of stolen RDP credentials from the US (4% of the total) for sale is comparatively much lower than other nations. We believe this may be because the actors behind the market sometimes hold back RDP credentials without publishing their whole list.
How are Attackers Breaching Remote Systems?
Weak passwords remain one of the common points of entry. Attackers can easily use brute force attacks to gain access. In the below image we see the 20 most used passwords in RDP. We built this list based on information on weak passwords shared by a friendly Law Enforcement Agency from taken down RDP shops.
The diagram below demonstrates the number of compromised systems using the top 10 passwords. What is most shocking is the large number of vulnerable RDP systems that did not even have a password.
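A cheap defensive counterpart to these brute-force lists is a denylist check at password-set time. The entries below are illustrative, not the full law-enforcement data set; in practice the check would load a much larger list:

```python
# Hypothetical denylist built from commonly brute-forced RDP passwords,
# including the empty string, since many exposed systems had no password.
COMMON_RDP_PASSWORDS = {"", "123456", "password", "admin", "p@ssw0rd", "1234"}

def password_is_trivially_guessable(candidate: str) -> bool:
    """A brute-forcer tries lists like this first, so rejecting these
    values at password-set time removes the cheapest attacks."""
    return candidate.lower() in COMMON_RDP_PASSWORDS
```

Rejecting denylisted values does not replace strong password policy or multi-factor authentication, but it eliminates exactly the guesses an automated tool fires off in its first seconds.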
The RDP protocol also suffers from vulnerabilities and needs patching. Last year, we explained in detail the workings of the BlueKeep vulnerability that affects reserved channel 31, which is part of the protocol functionality, to allow remote code execution.
In early January 2020, additional flaws related to Remote Desktop Gateway were also patched.
These two vulnerabilities are similar to the BlueKeep vulnerability and allow remote code execution by sending a specially crafted request. We have not yet observed this vulnerability exploited in the wild.
To secure the RDP protocol, the following checklist can be a good starting point:
- Do not allow RDP connections over the open Internet
- Use complex passwords as well as multi-factor authentication
- Lock out users and block or timeout IPs that have too many failed logon attempts
- Use an RDP gateway
- Limit Domain Admin account access
- Minimize the number of local admins
- Use a firewall to restrict access
- Enable restricted Admin mode
- Enable Network Level Authentication (NLA)
- Ensure that local administrator accounts are unique and restrict the users who can logon using RDP
- Consider placement within the network
- Consider using an account-naming convention that does not reveal organizational information
For more details about how to secure RDP access, you can refer to our previous blog (https://www.mcafee.com/blogs/other-blogs/mcafee-labs/rdp-security-explained/)
As we discussed, RDP remains one of the most commonly used vectors for breaching organizations. For attackers, it is a simple way to quickly perform malicious activities such as spreading malware or spam, or committing other types of crime.
There is currently a whole business around RDP on the underground market and the current situation has amplified this behavior. To stay protected, it is essential to follow best security practices, starting with the basics, such as using strong passwords and patching vulnerabilities.
McAfee ATR is actively monitoring threats and will continue to update you on this blog and its social networking channels. | <urn:uuid:b1670d39-083f-464a-b46c-42f2d3510ed8> | CC-MAIN-2024-38 | https://www.mcafee.com/blogs/other-blogs/mcafee-labs/cybercriminals-actively-exploiting-rdp-to-target-remote-organizations/ | 2024-09-16T12:38:38Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651697.32/warc/CC-MAIN-20240916112213-20240916142213-00885.warc.gz | en | 0.942543 | 1,290 | 2.6875 | 3 |
Introduction: Rising Above the Flames
As the Mediterranean region faced the wrath of wildfires, Sicily bore witness to a tragedy that claimed lives and left devastation in its wake. The alarming incident serves as a stark reminder of the importance of critical communication and ensuring employee safety during emergency situations. To navigate uncertainty and protect their people, businesses must learn valuable lessons from the Sicily Wildfire and take proactive measures to establish robust emergency response and disaster preparedness plans.
Crisis Communication: The Key to Preparedness
Embracing the Lessons of Sicily Wildfire
The Sicily Wildfire, fueled by a heatwave and both human and natural causes, presents a crucial lesson for businesses worldwide. Understanding the impact of such natural disasters in vulnerable areas can be a game-changer in building resilience and safeguarding employees and assets. By analysing the situation in Sicily, businesses can identify potential threats in their own regions and design comprehensive disaster management strategies.
Crisis Communication and Employee Safety
During crises, clear and efficient communication is paramount. Employees rely on timely and accurate information to make informed decisions and ensure their safety. Critical communication must be a well-structured and orchestrated process, covering aspects like real-time updates, evacuation procedures, and available support systems. The lessons from Sicily highlight the need for companies to prioritise effective communication channels and ensure the well-being of their workforce.
Leveraging Technology for Crisis Communication
With advancements in technology, businesses can embrace innovative solutions to streamline critical communication and improve employee safety. Employing communication platforms, mobile apps, and automated alerts can empower organisations to disseminate critical information swiftly. Integrating such technologies with a comprehensive emergency response plan can significantly enhance preparedness and response during crises.
Disaster Preparedness: Building Defenses That Last
Identifying Vulnerabilities and Risks
Understanding the unique vulnerabilities of a geographical location is crucial for designing effective disaster preparedness plans. By studying the environmental factors, history of past incidents, and potential risks in their vicinity, businesses can tailor their defences accordingly. Learning from the Sicily Wildfire, proactive risk assessment and mitigation strategies can make all the difference.
Short-Term Crisis Mitigation
For immediate response to crises, businesses should have short-term crisis mitigation plans in place. These should cover emergency evacuation procedures, medical aid provisions, and temporary shelters for employees. Drills and simulations can help familiarise employees with the protocols, ensuring a swift and organised response during actual emergencies.
Long-Term Resilience Strategies
In addition to short-term plans, long-term resilience strategies are essential for sustained protection. Collaborating with local authorities, fire departments, and disaster management agencies can provide valuable insights into tackling potential catastrophes. Investing in infrastructure and resource development to minimise risks should be a top priority for businesses aiming to build long-term defences.
Ensuring Employee Safety: A Cornerstone of Crisis Communication
Employee safety is not only a legal and moral responsibility for organisations but also a crucial element in effective critical communication. During emergencies, the well-being of employees should be the top priority, and businesses must take proactive measures to ensure their safety. Let’s explore the key aspects of prioritising employee safety in crisis management.
Conducting Safety Assessments
To establish a strong foundation for employee safety, organisations should conduct comprehensive safety assessments of their workplaces. Identifying potential hazards, assessing risks, and implementing necessary safety measures are vital steps in minimising the impact of crises on employees. Regular safety inspections and audits should be conducted to ensure ongoing compliance with safety standards.
Developing Emergency Response Plans
An effective emergency response plan is the backbone of employee safety during crises. The plan should outline clear procedures for evacuations, medical emergencies, communication channels, and designated assembly points. Involving employees in the planning process and conducting drills to practice the response protocols can enhance preparedness and familiarise everyone with their roles.
Educating and Training Employees
Proper training and education are essential for employees to understand their responsibilities and respond effectively during crises. Employees should receive training on using safety equipment, understanding emergency alarms, and following evacuation procedures. Additionally, they should be educated about potential risks and how to report safety concerns promptly.
Establishing Communication Channels
During crises, seamless communication is crucial for coordinating responses and ensuring employee safety. Organisations should establish multiple communication channels to relay critical information to employees. These channels may include emails, text messages, emergency hotlines, and communication apps accessible on mobile devices.
Providing Mental and Emotional Support
Crisis situations can take a toll on employees’ mental and emotional well-being. As part of crisis communication, organisations should offer resources for mental health support and counseling. Establishing an Employee Assistance Program (EAP) or partnering with mental health professionals can provide employees with the support they need during challenging times.
Monitoring and Reviewing Safety Measures
Safety measures and emergency response plans should be regularly monitored and reviewed for their effectiveness. After each crisis or drill, organisations should assess the response and identify areas for improvement. Feedback from employees can also offer valuable insights into enhancing safety measures.
Embracing Technology for Employee Safety
Advancements in technology have significantly improved employee safety during crises. Businesses can leverage safety apps, wearables, and location-tracking systems to monitor employees’ well-being and locate them during emergencies. These technologies can aid in prompt assistance and evacuation procedures.
Promoting a Safety Culture
A safety culture that permeates throughout the organisation is vital for maintaining employee safety. Leaders should set an example by prioritising safety, encouraging open communication about safety concerns, and rewarding employees who actively contribute to a safe work environment.
Crises Control: Empowering Organisations for Safety and Confidence
Embracing Crises Control Solutions
Crises Control emerges as a powerful ally for businesses seeking comprehensive crisis communication and employee safety solutions. Its state-of-the-art technology enables real-time alerts, multi-channel communication, and seamless coordination during emergencies. By integrating Crises Control into their operations, organisations can enhance their ability to safeguard their people and assets.
Leveraging Automation for Swift Response
Crises Control’s automated features streamline the communication process, leaving no room for delays or errors. Pre-defined response protocols and customizable workflows ensure that critical information reaches the right recipients promptly. In moments of crisis, every second counts, and Crises Control’s automation can save lives and protect valuable resources.
Training and Support for Optimal Preparedness
Crises Control not only offers cutting-edge technology but also provides comprehensive training and support to organisations. Equipping employees with the knowledge to use the platform effectively empowers them to respond confidently during emergencies. The platform’s round-the-clock support ensures that organisations have assistance whenever needed.
Conclusion: Rising Above Challenges
In the face of uncertainty and natural disasters like the Sicily Wildfire, businesses must prioritise crisis communication and employee safety. Learning valuable lessons from such events can help organisations build robust defences, instil confidence in their workforce, and protect what matters most. With Crises Control’s cutting-edge technology and support, organisations can navigate through crises with efficiency and resilience, ensuring a secure future for their people and assets.
Don’t wait for a crisis to strike before taking action! Ensure your organisation’s preparedness and protect your most valuable assets – your employees. Request a live demo today to witness firsthand how Crises Control’s cutting-edge technology can transform your crisis management capabilities. Get in touch with our experts to discuss your organisation’s specific needs and find the best solution for your crisis communication and employee safety requirements. | <urn:uuid:d7df8c6b-47a6-4c6e-9f92-9b88cc0d2e74> | CC-MAIN-2024-38 | https://www.crises-control.com/blogs/crisis-communications-sicily-wildfire/ | 2024-09-17T20:18:45Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651829.57/warc/CC-MAIN-20240917172631-20240917202631-00785.warc.gz | en | 0.920677 | 1,566 | 2.71875 | 3 |
As the world becomes increasingly connected and digitally driven, the automotive industry is experiencing a remarkable transformation as new technologies allow for a more direct flow of data and information between automobiles, drivers and the infrastructure that surrounds them.
Indeed, the promise of smarter, safer and more efficient roads is no longer just a sci-fi fantasy, but an emerging reality brought about by cutting-edge technology and the desire to improve people’s lives. And a key component of this level of connected driving is vehicle-to-everything (V2X) communication, in which cars are equipped to talk to other cars, pedestrians, roads and more.
A system of connected things
Thanks to this exciting new technology, we’re headed for a future where vehicles will be able to communicate with traffic lights to optimize signal timing, reducing congestion and emissions, and where emergency response vehicles will be able to navigate through traffic more effectively thanks to real-time alerts and signals.
Imagine a situation where an ambulance needs to navigate a busy road to access a hospital. These days, many cars en route end up having to quickly pull over to let the ambulance pass. But instead of waiting to respond to the flashing blue and red lights, what if all the cars in the vicinity received a signal alerting them to the emergency so they could be ready to move out of the way? V2X can help create a safer and more efficient autonomous transportation system for everyone.
Rigorous testing is crucial
It’s easy to imagine the huge potential V2X technology has when it comes to enhancing the safety, efficiency and functionality of transportation systems, while leading us to a world of fully connected and autonomous cars.
However, echoing the proverb “with great power comes great responsibility”, there are significant challenges that need to be addressed, namely around ensuring that these systems are reliable, secure and seamlessly integrated with all the different elements of a transportation system.
Given the complexity of modern vehicle software, and the number of diverse systems that would need to be connected, there are several challenges that must be addressed to ensure safety and reliability of V2X. These fall into three main categories:
1. Technical challenges
V2X communication must be fast and accurate. Ensuring low latency and high reliability in V2X communication is critical, particularly for safety applications where milliseconds can mean the difference between a collision and a safe turn. Testing must simulate real-world conditions accurately, including urban environments with high interference and multiple variables.
In addition, V2X systems must be secure against cyberattacks, which requires a completely different set of comprehensive tests for vulnerabilities.
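One way to frame the low-latency requirement above is as a percentile-based pass/fail check over a test run's message latencies. The sketch below is illustrative: the 100 ms budget and the 99% pass threshold are assumed figures for the example, not values from any V2X standard:

```python
import statistics

# Hypothetical test criteria -- assumed for illustration only.
LATENCY_BUDGET_MS = 100.0  # end-to-end budget for a safety-critical message
PASS_FRACTION = 0.99       # fraction of messages that must meet the budget

def evaluate_latency(samples_ms):
    """Return a pass/fail verdict plus summary stats for a latency test run."""
    met_budget = sum(1 for s in samples_ms if s <= LATENCY_BUDGET_MS)
    fraction_ok = met_budget / len(samples_ms)
    return {
        "mean_ms": statistics.mean(samples_ms),
        "max_ms": max(samples_ms),
        "fraction_within_budget": fraction_ok,
        "passed": fraction_ok >= PASS_FRACTION,
    }
```

In a real campaign the samples would come from instrumented field or hardware-in-the-loop tests under the interference-heavy urban conditions described above, with separate budgets per message class.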
2. Environmental challenges
V2X systems must always perform reliably, irrespective of where and when they are. Therefore, testing and validation must be performed under various weather conditions, lighting scenarios and road environments.
Moreover, testing must account for dynamic driving scenarios, such as sudden stops and unpredicted pedestrian movements. Large-scale field testing is also necessary to validate V2X systems in real-world conditions. This requires significant resources and infrastructure.
3. Regulatory challenges
Compliance with diverse regulatory standards across different regions (such as the European ITS Directive and Action Plan) adds another layer of complexity to V2X testing. Ensuring that V2X systems adhere to national and international communication standards and spectrum allocations is essential.
The DXC Luxoft approach to V2X testing
The DXC Luxoft team recognizes the enormous benefits that effective V2X communication can bring to the future of transportation—and how extensive testing is paramount.
To that end, our System Test & Validation team offers comprehensive testing solutions that address the full spectrum of V2X challenges. This includes the creation of a test strategy that enables testing and validation of complex V2X scenarios. We provide rigorous testing to ensure low latency, high reliability and robust security in V2X systems. Our testing framework also considers environmental factors and regulatory compliance, ensuring that V2X technologies are ready for global deployment.
And thanks to our expertise in V2X standards and communication technologies, automakers can implement effective testing strategies for complex scenarios. For example, we collaborate with original equipment manufacturers (OEMs), tier 1 suppliers and standardization organizations to build a cohesive ecosystem focused on innovation and excellence.
By ensuring the reliability and security of V2X systems, we are helping pave the way for a smarter, safer and more connected future in transportation. | <urn:uuid:fd68027d-355c-482d-989d-80c1ba2e90ab> | CC-MAIN-2024-38 | https://dxc.com/us/en/insights/perspectives/blogs/the-need-for-testing-in-a-vehicle-to-everything-world | 2024-09-19T01:02:53Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651944.55/warc/CC-MAIN-20240918233405-20240919023405-00685.warc.gz | en | 0.933937 | 903 | 2.78125 | 3 |
The Overview of IEEE 802.3bt High Power PoE (Hi-PoE)
Power over Ethernet, also called PoE, is a significant technology that simplifies network deployment by enabling Ethernet cables to transmit electrical energy over a live data connection without the need for redundant power lines or outlets. Since ratifying the first PoE standard in 2003, PoE technology has been upgraded to support higher power delivery. So far, the latest PoE standard is the IEEE 802.3bt High Power PoE, which is also simplified as Hi-PoE. This article explains the concept of Hi-PoE and Hi-PoE switches, introduces the devices connected to Hi-PoE switches, and the benefits of using high-power PoE switches.
What Is High Power PoE?
To fulfill the ever-increasing demands of high-power devices, a new-generation PoE standard, IEEE 802.3bt, has been introduced to help build a high-power network system with ease. IEEE 802.3bt is the latest PoE standard and delivers power over all four twisted pairs of structured wiring. In IEEE 802.3af/at, only two twisted pairs are used for power delivery, while data can be carried over four pairs. For the first time, this new generation of PoE uses all 8 wires to transport power, minimizing power loss over transmission and achieving better power delivery with an increased power budget.
The IEEE 802.3bt standard introduces two new PoE types, namely Type 3 and Type 4. Type 3 is also known as PoE++ or UPoE, which can provide up to 60W at each PoE port to power devices like high-performance wireless access points and high-definition IP cameras. Commonly known as high-power PoE (Hi-PoE), Type 4 can supply a maximum power output of 90W to power devices like flat screens and LED lighting over Ethernet cables. IEEE 802.3bt Type 3 and Type 4 are backward compatible with the PoE and PoE+ standards. Detailed information about PoE vs PoE+ vs PoE++ vs Hi-PoE is shown in the following table:
| | Type 1 | Type 2 | Type 3 | Type 4 |
|---|---|---|---|---|
| Name | PoE | PoE+ | PoE++ / UPoE | High Power PoE |
| PoE Standard | IEEE 802.3af | IEEE 802.3at | IEEE 802.3bt | IEEE 802.3bt |
| Max. Power Per Port | 15.4W | 30W | 60W | 90W |
| Power to PD | 12.95W | 25.5W | 51W | 71.3W |
| Twisted Pairs Used | 2-Pair | 2-Pair | 4-Pair | 4-Pair |
| Supported Cables | Cat5e | Cat5e | Cat6a | Cat6a |
| Typical Application | IP Phone | Video Phone | MGMT Device | LED Lighting |
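The per-port figures above can be turned into a small selection helper: given a device's power draw, pick the lowest PoE type whose guaranteed power at the powered device (PD) covers it. The power figures come from the table; the function itself is just an illustrative sketch:

```python
# Per-port power figures from the table above (watts).
POE_TYPES = {
    "Type 1 (802.3af, PoE)":    {"pse_max_w": 15.4, "pd_min_w": 12.95},
    "Type 2 (802.3at, PoE+)":   {"pse_max_w": 30.0, "pd_min_w": 25.5},
    "Type 3 (802.3bt, PoE++)":  {"pse_max_w": 60.0, "pd_min_w": 51.0},
    "Type 4 (802.3bt, Hi-PoE)": {"pse_max_w": 90.0, "pd_min_w": 71.3},
}

def lowest_poe_type_for(device_draw_w):
    """Return the lowest PoE type whose guaranteed PD power covers the device.

    Compares against power at the PD, not at the PSE, since cable losses
    mean a device never sees the full per-port output.
    """
    for name, spec in POE_TYPES.items():  # dict preserves insertion order
        if device_draw_w <= spec["pd_min_w"]:
            return name
    return None  # device needs more than any 802.3 PoE type guarantees
```

For example, a 13W IP camera already exceeds Type 1's 12.95W guarantee at the PD and so needs at least PoE+, while a 90W PTZ camera (power measured at the PD) would exceed even Type 4's 71.3W guarantee and would need its draw at the PD confirmed against the budget.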
What Is A High Power PoE Switch?
Power sourcing equipment (PSE) are devices that transmit power and data to a linked powered device (PD) over a single Ethernet cable. PSEs are classified into two types: "endspan" and "midspan" devices. PoE switches, as an endspan method of adding PoE to a network, have become the most common appliance, combining the network side and power delivery in one device and supplying both Ethernet data and power to powered devices.
As the new PoE standard gains popularity, more and more high power PoE switches are emerging in the market, such as the FS S5860-24XB-U PoE switch, which supports auto-sensing IEEE 802.3af/at/bt power up to 8×90W. Forward-thinking adopters have come to regard high power PoE infrastructure as a key asset for enabling smart buildings and IoT. High power PoE PSE devices, notably high power PoE switches, are poised to optimize smart buildings and IoT devices and repurpose them to meet future needs.
The Devices Connected to High Power PoE Switch
As demands for new technology are growing with the expansion of newly diversified applications, high-power IP-based terminals such as intelligent LED lights, all-in-one cloud desktops, and intelligent IP audio appear in our lives. The power requirements of these devices range from 15W to 90W. As IEEE 802.3bt standard can be backward compatible with IEEE 802.3af/at standard, high power PoE switches can be used with various devices.
PoE LED Lights: High power PoE switches can provide 90W power to LED lights, saving a lot of energy for owners or managers in the field of intelligent buildings. The standard also reduces the minimum standby power (when the light is off) to 20 mW — one-tenth of the 200 mW allowed by other existing standards.
90W PTZ Cameras: PTZ cameras consume a lot of power, up to 90W in some cases, which the previous IEEE 802.3af/at standard can't provide. The adoption of IEEE 802.3bt PoE technology makes it possible for high-power PTZ cameras to be networked with a high power PoE switch, thus reducing wiring and maintenance costs.
Wireless Access Points: As a way for users of wireless devices (mobile phones, laptops, etc.) to access wired networks, wireless access points are widely used in office buildings, campuses, parks, warehouses, factories, and other places where wireless networking is needed. In some cases, they need more than 60W power to work properly. At this time, they can be connected to high power PoE switches to provide power for terminal equipment.
VoIP Phones, Video Interphones: With high power PoE switches, users can operate high-power video interphones, LED display screens, and VoIP telephone systems on the existing Ethernet infrastructure.
Benefits of Using the High Power PoE Switch
PoE itself has been utilized for typical home and office applications for some time. The newer high-power PoE standard is ratified at the perfect time for supporting smart buildings and the Internet of Things (IoT). Many emerging applications take advantage of high-power PoE, such as building infrastructure with LED lightning, PTZ network cameras, building access control systems, information kiosks, point-of-sale terminals, and other high-power consumption devices in smart buildings or IoT installations. Note that these are just some of the use cases that IEEE 802.3bt supports. Since there will be more new devices emerging in office spaces, manufacturing facilities, and campuses, more use cases will flourish.
With the prosperity of high PoE technology, we cannot ignore the application of high PoE network switches. High-PoE switch is the upgraded version of PoE switch with higher PoE wattage. The main advantages that high power PoE switches can bring to the future-proofing applications are:
Flexibility and Scalability: PoE edge devices may be simply installed in areas without power outlets. Because they no longer have to rely on a conventional outlet to operate, previously difficult-to-reach areas can now be served more easily. At the same time, with the 90W-100W PoE network solution, users can easily extend the network and power coverage. Hi-PoE devices such as high power PoE switches are backward compatible with 802.3af/at PoE standards, allowing powered devices compliant with 802.3af/at/bt standards to connect to switches flexibly. Also, standards-based PoE technology guarantees interoperability across vendors, which means it is feasible to configure PoE with various network applications as long as the PoE standards that devices support are compatible.
Time and Cost Saving: Without the need for additional power outlets or electric power cabling, a high power PoE switch can provide power for your other PoE powered devices, making installations much easier for scaling IoT networks. With the expenses for electric power cables and specialist electricians considered, the overall cost of a PoE network solution is less than a traditional power solution.
Reliability: Many enterprise-grade high power PoE infrastructures have remote power-management capabilities that support both IPv4 and IPv6 addressing, which allows simple and efficient monitoring and control over the powered devices. This feature brings reliability to networks, especially for smart buildings or IoT deployments as the network scale and complexity increase.
In summary, the emergence of high PoE technology is to meet higher equipment power requirements, such as the PoE switch. With this Hi-PoE technology, you can build a high-speed and reliable network. | <urn:uuid:ee1a894d-ec35-4783-812b-50654aec322f> | CC-MAIN-2024-38 | https://community.fs.com/article/ieee-802-3bt-high-power-poe.html | 2024-09-08T03:13:26Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650958.30/warc/CC-MAIN-20240908020844-20240908050844-00785.warc.gz | en | 0.909114 | 1,802 | 3.34375 | 3 |
In a world where sustainability is ever more prevalent, the continued use of diesel in data centres just won’t do. Customer demands, regulation and the outcry against GHG emissions may well make a switch from diesel standby to gas generation a timely move for those facilities still running on ‘dirty’ fuel, says Ed Ansett, founder and chairman at i3 Solutions Group.
On an almost daily basis media reports speak of changes to the global energy mix to deliver the electricity on which we all rely. Airwaves and websites are full of stories about countries reporting how much of their power for a previous quarter or month came from renewables. Government and non-government agencies regularly update forecasts on how much power will come from solar PV, wind or tidal by 2023, 2025, or 2030.
In essence, what they are all covering is the transition that is happening in the energy sector as it shifts from dirty, high CO2e content fossil fuels to zero or low GHG-emitting energy sources. This represents a revolution for the entire energy sector.
The energy used to power everything, including data centres, will increasingly come from renewables. In terms of the fuel mix for generating electrical power, many countries have already committed to phasing out the use of coal in favour of an increasing reliance on nuclear, gas and renewables.
“We can expect increased use of electricity in buildings, industry and transport to support decarbonisation. Investors are already shifting away from fossil fuels. Proven technologies for a net-zero energy system already largely exist today. The total energy transition investment will total US$131 trillion between now and 2050,” said the IRENA (International Renewable Energy Agency) World Energy Transitions Outlook Report.
For a high-power use sector such as data centres, the same forces that are changing attitudes to diesel as a fuel for transportation – air quality and emissions – will also drive the sector to evaluate alternative technologies rather than persist with a continued reliance on diesel fuel for on-site back-up and standby or secondary power generation.
Any shift away from a reliable, proven back-up solution has many implications. But as the grid supply shifts to renewables, it will be the data centres themselves that must find clean ways to ensure that an intermittent method of power generation can be handled by on-site equipment.
There are two imperatives at play:
- Power must continue to be made available for traditional back-up while data centres evolve as demand-side response suppliers of power back to the grid.
- Any kind of data centre energy generation for grid support will need to be low or zero carbon.
We need to talk about diesel
The question then becomes what should data centre operators do about diesel and diesel generators? This is not theoretical. The forces that will pull data centre operators away from diesel are growing.
Regulations covering the use of diesel generators have always existed and the use of diesel is already covered by directives on carbon emissions, air quality and noise. Now the scope of regulation is being expanded through directives such as the Medium Combustion Plant Directive (MCPD) and Specified Generator Controls.
“MCPD applies to all combustion plant between 1 and 50 MWth (equivalent to generators with output from 300kW to 20MW electrical),” says the TechUK Road Map for Data Centre Operators, Understanding Compliance Obligations for Combustion Plant Emissions.
“So, if you have any generators with an electrical output above about 300kW, then you are in scope. If you are a large installation where the aggregated generating capacity is over 50MWth, approx. 17MW electrical, then diesel power generation is considered under an Industrial Emissions Directive (IED) environmental permit (EP) which will relate to the generator combustion activities and the associated diesel storage.
“A bespoke EP will be required for operation of the generators and for the associated diesel storage.”
Already the use of diesel is being restricted to make it less economically attractive. The latest UK budget made use of red diesel, which carries lower tax than white diesel, prohibited by cutting the type of industries where it can be used. Data centres are off the list. This is according to Emma Fryer, who just happens to be a speaker at our upcoming DCR Live event.
According to the TechUK road map on using diesel generators, the regulations are already complex and many of the requirements are new territory for data centre operators.
For example, it says of thermal input (MWth/MW thermal): “Thermal input means the rate at which fuel can be burned at the maximum continuous rating of the appliance multiplied by the net calorific value of the fuel and expressed as megawatts thermal.
“This is an unfamiliar concept to many data centre operators because they think of generating capacity in terms of electrical output and not thermal input. We need to know the thermal input to understand the emissions associated with the generator.”
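The thermal-input definition quoted above is a straightforward multiplication, sketched below. The diesel net calorific value used (~42.6 MJ/kg) is a typical handbook figure and the 40% electrical efficiency is an illustrative assumption — neither comes from the directive itself:

```python
# Illustrative figures only: ~42.6 MJ/kg is a typical net calorific value
# for diesel, and 40% electrical efficiency is an assumed round number.
DIESEL_NCV_MJ_PER_KG = 42.6

def thermal_input_mw(fuel_rate_kg_per_s, ncv_mj_per_kg=DIESEL_NCV_MJ_PER_KG):
    """Thermal input (MWth) = max fuel burn rate x net calorific value.

    MJ/s is numerically identical to MW, so no further conversion is needed.
    """
    return fuel_rate_kg_per_s * ncv_mj_per_kg

def electrical_output_mw(thermal_mw, efficiency=0.40):
    """Approximate electrical output (MWe) for a given thermal input."""
    return thermal_mw * efficiency
```

Under these assumptions, a generator burning 0.1 kg of diesel per second at full rating represents roughly 4.26 MWth — comfortably above the 1 MWth floor at which the MCPD applies — while producing only around 1.7 MW electrical, which is why operators thinking in electrical output can underestimate their regulatory exposure.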
Depending on the location of the data centre, it seems unlikely that running diesel generators for hundreds of hours per year for on-site generation is a viable option. Diesel engines have high emission factors and therefore are unsuitable as an environmentally sustainable energy source for grid support.
Data centres moving from diesel to gas
Transitioning to gas engines may become the preferred option for data centres. Although gas-based engines are relatively new in the data centre sector, it is evident the industry is beginning to move towards gas as a replacement for diesel engines.
According to Infrastructure Sustainability Options and Revenue Opportunities for Data Centres, a report from i3 Solutions Group, “Substituting diesel engines with low-carbon alternatives such as gas reciprocating engines or turbines in conjunction with sustainable energy storage devices will enable many data centre owners to reduce their carbon footprint, and gain additional income derived from the various grid support schemes.
“Gas-driven generators have low NOx and SOx emissions, so they are generally permitted for unlimited use. Conversely, standby diesel generators are generally required to operate for only a few hours a year.”
There is a need to find ways to store sufficient energy for many hours of operation or invest in on-site generation with direct access to a low carbon fuel. Or both.
Battery technology is usually implemented to provide ride-through power during short duration utility power events, or minutes of UPS back-up power to support the transition to emergency power in the event of an outage.
To scale battery storage to make it a primary back-up power source would have significant cost, operational and environmental implications. Whether Li-Ion or VRLA, battery chemistries in the main have a significant environmental production cost, while end-of-life disposal and recycling challenges remain.
Transitioning to natural gas (including blended NG and H2) can provide comparable levels of reliability and availability to diesel for load support. Gas can also be used to generate low-carbon power which, when fed back into the grid, can reduce the strain at times of high or peak demand.
Of course, where the data centre wants to use its power generating capacity to deliver bi-directional power flow and sell low carbon energy back to the grid, it must comply with local grid codes.
According to a white paper published by Caterpillar, a leading manufacturer of industrial diesel engines, “Direct replacement of diesel generator sets with gas generator sets is an ideal solution.
“While it’s a common perception that gas units would fall behind in their load acceptance capabilities compared to their diesel counterparts, recent developments in gas engine technology have led to numerous breakthroughs in engine performance and have significantly improved their ability to accept load.”
Caterpillar also suggests a hybrid solution in which gas generation supports base loads while critical-load back-up remains diesel-based. Any conversion could be incremental, leading to either a hybrid diesel and gas environment or a full diesel-to-gas conversion.
The global context
The use of diesel as a vital fuel for freight and commercial transportation as well as other industries won’t end overnight. But diesel and even petrol driven cars will eventually all but disappear. The direction of travel is clear.
In terms of its future as a standby source of power for data centres, the road map for diesel looks like one of increasing restrictions on use, tougher tax regimes, lower emissions targets, improved air quality requirements and lower noise regulations. All point to a complete re-evaluation of diesel.
Given the continued need for availability that demands provision of the same level of back-up protection enjoyed today and the new opportunities to be part of the broader energy revolution, the transition from diesel to gas in the data centre should be considered.
Have you ever been browsing the internet when suddenly a pop-up came out of nowhere telling you in bold that your computer was “infected”? Or have you ever come across a site that triggered an “automatic virus scan,” assuring you that it has detected dozens of pieces of malware in your computer? If so, welcome to the world of scareware.
What is scareware? Everything you need to know
Scareware is not malware, spyware, or a virus, but it is potentially an open door to all of them. It is a technique used by cybercriminals and black hat hackers to trick you into taking an action you should not take.
By creating sophisticated pop-ups, notifications, well-crafted emails and messages, and even simulating antivirus operations, criminals try to trick users into clicking a link, buying a fake antivirus, or downloading malware. In this way, it is similar to phishing; both approaches use social engineering that preys on human behavior.
The main goal of scareware is to instill fear in users by falsely claiming that a computer, phone, or other device is infected with malware, or that a device is locked, slowed down, or damaged. Those who fall for this scam click on the pop-up and open the door for real malware and harm to come into their lives.
Clicking on a link or downloading a file from fake scareware messages can undoubtedly have serious consequences — these range from ransomware to identity or financial theft, browser hijackers, adware, and more. Scareware can be integrated into malicious sites that hackers design to rank high in organic search results. It can also spread through email, social media, and messaging apps. Criminals may also make scareware phone calls, impersonating security experts.
Who created scareware?
Scareware evolved in the early 2000s from malvertising, a form of malware distribution done through online advertising. The culprit behind the first scareware program remains unknown, but the first famous scareware attack came in 2006, when Microsoft and the Washington state attorney general filed a joint lawsuit against the software vendor Secure Computer, alleging that its Spyware Cleaner product, peddled to Microsoft users, was actually scareware.
By 2009, the trend was already defined and well-established. By 2010, it had affected millions of users. Despite the efficiency of modern pop-up blockers, it is still a popular technique among cybercriminal organizations. And while some scareware can affect Mac and PC users alike, others are developed to work on specific operating systems.
Examples of scareware on Mac
In 2010, the website of the Minneapolis Star Tribune newspaper served Best Western ads that directed readers to malicious sites, which ended up infecting their computers with malware. This was one of the first large scareware campaigns to unfold from pop-up ads. Users were told that their devices had been infected, and the scareware then tried to convince them to download an antivirus that cost $49.95. The campaign ended with the attackers’ arrest, but they still managed to make off with $250,000 by scaring users. This type of campaign, integrated into websites, can affect both PC and Mac users.
The first scareware specifically coded to target Mac users, and still famous today, is the Mac Defender case. Also known as Mac Protector, Mac Shield, and Mac Security, this scam first appeared in early 2011, when Mac users were redirected to fake websites that informed them that their computers were infected with a virus, offering an antivirus as the solution.
The main goal of Mac Defender was not to sell a fake antivirus but to obtain credit card information from users to use fraudulently. The extent of the campaign was so big that in 2011, Apple released a security software update to find and remove Mac Defender from computers.
Another infamous Mac-specific scareware was ChronoPay. In 2009, ChronoPay, a Russian online payment processor, targeted Mac users with scareware to trick them into buying fake antivirus software. Investigations later revealed that ChronoPay was a significant player in the fake antivirus and scareware global market.
How scareware works
As previously mentioned, scareware works by instilling fear into users by presenting an urgent and grave problem and later “selling” the solution. There are several techniques for scareware. These include ad pop-ups, push notifications, and phishing.
Pop-up scareware notifications usually look like trusted antivirus software that you have installed on your computer. This makes it difficult for users to discern whether they are getting a notification from their security solution or something foreign. Plus, the close button in these pop-up notifications is usually well hidden.
Scareware push notifications mimic trusted sources, such as Google, but do not appear to have originated from a website. Hackers can code these notifications to look like they are scanning for viruses when they are not, often using countdowns as an additional method of creating a sense of urgency.
Finally, scareware can reach you through emails or messages on social media. These direct messages may try to convince you that your computer has malware, viruses, or other serious threats. They may also be drafted to direct you to a site that triggers scareware notifications or pop-ups.
How scareware spreads
Cybercriminals are very good at creating websites and managing them to ensure that they rank high in search engines. The techniques they use can slip these sites past the algorithms of Google and other search engines, as well as the built-in protections of browsers such as Firefox and Safari. Scareware mainly spreads by being integrated into these sites.
Cybercriminals have also perfected the technique of drafting persuasive emails and social media messages. They can send millions of emails in one day, spamming users worldwide with their scareware campaigns.
Finally, while scareware phone calls were more common years ago than they are today, hackers still use this method. Phone calls can be very convincing and much more personal than an email or website, making them an effective way to scare people into taking action.
How to tell a scareware pop-up from a legitimate antivirus
There are several clear signs that you can look for to differentiate a fake, malicious pop-up or push notification from real antimalware.
- The close button (x) is hidden or is very hard to find or click.
- A visible close button (x) is placed in the ad but only to direct users to a malicious site when clicked.
- When the pop-up is closed, it repeatedly reopens.
- The message of the pop-up is over the top. Remember, real antivirus software will not seek to scare you.
- You are asked to download a file or click on a link, and you get multiple pop-ups.
- The pop-up looks like nothing you have seen before.
- There are misspellings or logos that do not look accurate.
- There is a countdown on the pop-up. Legitimate sites do not run countdowns.
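Those warning signs can be turned into a crude automated heuristic. The sketch below is purely illustrative; the marker list and weights are invented for this example and are not taken from any real security product:

```python
import re

# Hypothetical marker list and weights -- invented for illustration,
# not taken from any real security product.
URGENCY_MARKERS = [
    "infected", "act now", "immediately", "virus detected",
    "your files will be deleted", "call this number", "warning",
]

def scareware_score(message, has_countdown=False):
    """Count crude scareware signals in a pop-up's text; higher is more suspect."""
    text = message.lower()
    score = sum(1 for marker in URGENCY_MARKERS if marker in text)
    if has_countdown:                 # legitimate alerts do not run countdowns
        score += 2
    if re.search(r"!{2,}", message):  # strings of exclamation marks
        score += 1
    return score

print(scareware_score("WARNING!! 37 viruses detected, act now!", has_countdown=True))
print(scareware_score("Your backup completed successfully."))
```

A real security product would combine many more signals, such as sender reputation, certificate checks, and behavioral analysis, rather than relying on keyword matching alone.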
How to stop scareware pop-ups on your Mac or iPhone
You can do several things to stop this threat and keep your Mac, iPhone, or iPad safe.
- Keep your device updated.
- If you get a pop-up, close the browser window. Do not click on the pop-up’s close button.
- Avoid browsing sites that look suspicious.
- Don’t click links from sources you do not know, and don’t download files from unverified sites or people.
- Keep your browser updated and set to a high level of privacy and security.
- Use a trusted pop-up blocker.
- Use trusted search engines and browsers only.
- Make sure your firewall is active and updated.
- Run regular antimalware scans.
How to get rid of scareware on your Mac
Although you can remove any unwanted app from your Mac manually simply by trashing it, scareware can alter your computer’s configuration. It can also scatter temporary and hidden support files across the system, and it is good at hiding.
CleanMyMac X has a Malware Removal module, powered by Moonlock Engine. It detects malware and can help you remove scareware from your Mac.
To remove scareware with CleanMyMac X:
- Open CleanMyMac X.
- Choose Malware Removal from the sidebar.
- Press Scan.
- When the results of the scan appear, check all checkboxes and click Remove.
CleanMyMac X can also give you more details and information on the type of malware it found during the scan. To get this information, click on each category of malware that the scan found.
Additionally, with CleanMyMac X, you can turn on a malware monitor.
To enable the malware monitor:
- Open CleanMyMac X.
- Go to Menu by clicking the iMac icon in the menu bar.
- Hit the gear icon in the bottom right corner and select Preferences.
- Click on the Protection tab.
- Now check the boxes to enable the malware monitor and background scan.
- Close Preferences.
CleanMyMac X will now run in the background and monitor malware activity, alerting you if any action is necessary.
How to remove scareware from your iPhone
There are several steps you can take to remove scareware from your iPhone. Start by deleting any unwanted apps from the App Library.
To delete unwanted apps on your iPhone:
- Go to the App Library and tap the search field to open the list.
- Search for any app that is suspicious or that you did not intend to download.
- Touch and hold the questionable app icon, then tap on the Delete App (trash can icon).
- Tap Delete again to confirm.
You will now want to restart your iPhone and update your system’s software. Additionally, you should clear your browser data.
To clear data in Safari:
- Open Settings.
- Select Safari.
- Select Clear History and Website Data.
- Tap Clear History and Data.
If you still have a problem after doing all this, you can restore your phone to a previous backup.
To restore a previous backup of your iPhone:
- Go to Settings and tap General.
- Scroll to the bottom and select Transfer or Reset iPhone.
- Choose Erase All Content and Settings.
- Select Erase Now or Backup Then Erase.
- When the Apps & Data screen appears, select Restore from iCloud Backup.
- Sign in to iCloud and select the backup you’d like to use.
You may want to consider installing a professional, trusted iPhone antimalware tool that will run scans to remove anything that might damage your phone.
Scareware is one of the oldest tricks in the hacker’s book, and it can be more convincing than you might think. Overall, always keep updates set to automatic, avoid interacting with strange messages, websites, or links, and never download unverified files or attachments. And if you ever see a scary pop-up or notification message, think twice before clicking it.
The Total Dynamic Head in Pump Systems
The total dynamic head a pump must overcome is the sum of several factors. It includes the vertical distance from the water level on the suction side of the pump to the water level on the discharge side, as well as any friction and head loss encountered along the way. In fluid mechanics, total head is composed of elevation head and pressure head. While velocity head is usually ignored in groundwater flow, elevation head and pressure head are important components in determining the total dynamic head.
Understanding the concept of total dynamic head is crucial in designing and maintaining pump systems. By calculating the total dynamic head, engineers can determine the power requirements of the pump and ensure that it will be able to deliver water effectively from the suction side to the discharge side.
Friction and head loss should not be underestimated when calculating the total dynamic head. Friction occurs as water flows through pipes, valves, and fittings, causing a decrease in pressure and efficiency. Head loss, on the other hand, is the reduction in pressure due to factors such as turbulence and restrictions in the flow path.
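To make this concrete, total dynamic head can be estimated as static lift plus friction (and minor) losses. The sketch below uses the SI form of the Hazen-Williams friction formula; the flow rate, pipe length, diameter, and roughness coefficient are invented example values:

```python
def hazen_williams_loss(q_m3s, length_m, diameter_m, c=130.0):
    """Friction head loss in metres, SI form of the Hazen-Williams formula."""
    return 10.67 * length_m * q_m3s**1.852 / (c**1.852 * diameter_m**4.8704)

def total_dynamic_head(static_lift_m, q_m3s, length_m, diameter_m,
                       c=130.0, minor_losses_m=0.0):
    """TDH = elevation (static) head + pipe friction loss + minor (fitting) losses."""
    return (static_lift_m
            + hazen_williams_loss(q_m3s, length_m, diameter_m, c)
            + minor_losses_m)

# Invented example: lift water 20 m through 100 m of 0.1 m diameter pipe
# at 0.01 m^3/s (10 L/s), with roughness coefficient C = 130
tdh = total_dynamic_head(20.0, 0.01, 100.0, 0.1)
print(round(tdh, 2))
```

For these example numbers, friction adds roughly 2 m to the 20 m static lift. In practice, engineers would also add minor losses for valves and fittings and check the result against the pump's performance curve.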
By considering all these factors, engineers can select the right pump for a specific application and optimize its performance. Proper maintenance and monitoring of the pump system are also essential to prevent issues such as cavitation, which can occur when the total dynamic head is not properly managed.
In conclusion, the total dynamic head a pump must overcome is a comprehensive measure that takes into account various aspects of fluid mechanics. By understanding this concept and its components, engineers can ensure the efficient operation of pump systems in a wide range of applications.
With the advent and increasing integration of artificial intelligence (AI) in various aspects of everyday life, U.S. states are increasingly exploring ways to regulate or at least monitor its application. This is especially pertinent in the realm of health insurance, where AI’s potential to streamline administrative processes and enhance decision-making is counterbalanced by risks, including biased outcomes and omissions due to flawed algorithms. As AI becomes more embedded in the healthcare system, particularly in health insurance, the need for comprehensive regulation to ensure fair practices has become increasingly urgent.
AI in Health Insurance: Potential and Perils
Benefits and Risks of AI Implementation
Artificial intelligence promises significant improvements in the health insurance industry by automating administrative tasks, speeding up decision-making processes, and identifying patterns for better risk management. For example, predictive analytics can help insurers anticipate patient needs and optimize care pathways, reducing costs and improving outcomes. Insurers can also leverage AI to detect fraudulent claims more effectively, thereby protecting their bottom line and ensuring that resources are allocated to genuine cases. However, the integration of AI also brings considerable risks. Concerns include the potential for biased outcomes if algorithms are based on flawed data, and the possibility of erroneous denials of coverage due to insufficient oversight.
The lack of transparency in algorithmic processes can lead to decisions that are difficult to contest, raising ethical and legal questions. The opacity of AI systems means that policyholders may find it challenging to understand why a particular claim was denied or approved, undermining trust in the insurance process. Additionally, AI can perpetuate existing biases if the data used to train algorithms is not representative or is inherently biased. These issues underscore the importance of implementing robust oversight mechanisms to ensure AI-driven decisions are fair and transparent. Balancing the benefits and risks of AI in health insurance requires a nuanced approach that encourages innovation while safeguarding consumer rights.
Class Action Lawsuits Against Major Insurers
Major health insurers such as Humana, Cigna, and UnitedHealth are facing class action lawsuits accusing them of using AI-driven algorithms to unjustly deny healthcare claims. Investigative reports by organizations like ProPublica and STAT have revealed that these companies may have employed secretive internal rules and flawed algorithms, leading to improper care denials without appropriate human oversight. These findings have intensified calls for stricter regulations and greater transparency in the use of AI in health insurance. The lawsuits allege that the companies’ reliance on AI has resulted in a significant number of wrongful claim denials, causing financial and emotional distress to affected policyholders.
The implications of these lawsuits extend beyond the immediate parties involved, highlighting broader systemic issues within the health insurance industry. They serve as a wake-up call for regulators and policymakers to scrutinize the use of AI and ensure that its application does not compromise ethical standards and consumer protections. The outcomes of these legal battles could set important precedents for the future regulation of AI in healthcare, emphasizing the need for both technological and human oversight in decision-making processes. As the healthcare landscape evolves with the integration of AI, it is crucial to establish and enforce guidelines that protect consumers from the downsides of these emerging technologies.
Federal Regulation and Oversight
Role of the Centers for Medicare and Medicaid Services (CMS)
The Centers for Medicare and Medicaid Services (CMS) plays a crucial role in regulating federal health insurance programs. In January, CMS issued a final rule with new requirements regarding the management tools used for prior authorizations in federal programs. This rule aims to encourage secure innovation, leverage medical professional judgment, reduce administrative burdens, and ensure transparency in AI application. By setting clear guidelines, CMS seeks to balance the benefits of AI-driven efficiency with the need for ethical oversight and fairness. This regulatory framework aims to ensure that AI tools are used responsibly and that decisions impacting patient care are made with appropriate human involvement.
Moreover, in February 2024, CMS released a memo addressing AI use in Medicare Advantage plans. The memo underscores the need for insurer compliance with CMS rules, including non-discrimination mandates. This action highlights the federal government’s intent to promote responsible AI use in health insurance, focusing on avoiding biases and maintaining compliance with existing regulations. These initiatives by CMS are part of a broader effort to create a cohesive regulatory environment that encourages technological advancements while safeguarding public interests. The agency’s proactive approach reflects a growing recognition of the transformative potential of AI and the need for regulatory frameworks that protect consumers from its risks.
Federal Legislative Actions
Congress and the Biden administration have embarked on efforts to create a comprehensive framework for AI use in health insurance. This framework aims to safeguard against biased outcomes and ensure that AI-driven decisions are transparent and accountable. Legislative actions include bills and policy proposals designed to set clear standards for AI application, encouraging innovation while protecting consumer rights. These measures aim to strike a balance between fostering technological advancements and ensuring that AI technologies are used in a manner that is ethical, fair, and compliant with existing laws.
The proposed legislative framework seeks to address the complexities and challenges associated with AI integration in health insurance. It includes provisions for regular audits, transparency in algorithmic processes, and mechanisms for affected individuals to contest AI-driven decisions. By establishing a robust regulatory environment, the federal government aims to prevent the misuse of AI and protect consumers from potential harms. These initiatives represent a significant step toward creating a more equitable and transparent healthcare system, where AI serves as a tool for enhancing, rather than compromising, patient care.
State-Level Initiatives and Legislation
State-specific Regulations and Challenges
States are proactively taking steps to regulate AI in health insurance. According to the National Conference of State Legislatures (NCSL), at least 40 states considered or enacted legislation aimed at AI regulation in 2023 and 2024. Colorado’s Division of Insurance, for example, is working on applying rules designed to protect consumers from unfair AI-generated decisions, ensuring that algorithms do not discriminate against protected classes. This wave of state-level regulation reflects the growing recognition of AI’s impact on the healthcare landscape and the need for robust oversight to prevent misuse and ensure fairness.
However, varying state-specific regulations pose significant challenges. Insurers worry that inconsistent rules across states could complicate compliance and deter the use of beneficial AI tools. There is a growing call for a harmonized regulatory approach that balances state-specific needs with the necessity for consistent standards nationwide. A patchwork of differing regulations can create logistical hurdles for insurers operating in multiple states, leading to increased administrative burdens and potential disruption in service delivery. There is an ongoing debate on how to achieve both flexibility and uniformity in regulatory measures to foster innovation while ensuring consumer protection.
Leading States and Model Guidance
Several states have taken pioneering steps in the regulation of AI in health insurance. For instance, California, Georgia, Illinois, New York, Pennsylvania, and Oklahoma have introduced bills emphasizing human oversight of AI decisions, transparency, and fairness. These legislative efforts underscore the importance of maintaining human judgment in AI-driven processes to ensure that decisions are equitable and justifiable. By prioritizing transparency, these states aim to build public trust in AI technologies and mitigate concerns related to algorithmic opacity and bias.
In addition, states like Maryland, New York, Vermont, and Washington have adopted model guidance from the National Association of Insurance Commissioners (NAIC). The NAIC’s model bulletin, issued in December 2023, sets clear expectations for AI use in insurance, including standardizing definitions and promoting fairness. The NAIC is also developing a survey for health insurers to gain insights into AI’s role and impact, furthering efforts to create balanced and effective regulations. The adoption of model guidance signifies a collaborative approach to regulation, encouraging best practices and consistency across states. These efforts reflect a growing consensus on the need for comprehensive oversight to harness the benefits of AI while protecting consumer rights.
Scholarly Insights and Research
Responsible AI in Healthcare
Research by Marina Johnson and colleagues explores AI models to prevent insurance claim denials, emphasizing the principles of transparency, accountability, and privacy. Their study proposes high-accuracy models designed to identify and rectify errors before claim submission, showcasing how responsible AI can enhance fairness and efficiency. By focusing on transparency, the researchers aim to make AI processes more understandable and contestable, allowing policyholders to challenge decisions and ensuring that errors are promptly addressed. Their findings highlight the importance of accountability mechanisms in AI implementations, which can serve as critical safeguards against potential misuse and ensure that AI-driven decisions are ethically sound and transparent.
The study also stresses the need for ongoing monitoring and evaluation of AI models to ensure they remain effective and fair over time. This involves regular audits and updates to algorithms to address any emerging biases or inaccuracies. By embedding these principles into AI models, the research underscores the potential for AI to improve administrative efficiency in health insurance without compromising ethical standards. The proposed models could serve as exemplars for best practices in AI implementation, offering a blueprint for other insurers looking to integrate AI technologies responsibly.
Fair Regression Models in Health Insurance
Anna Zink and Sherri Rose’s research focuses on developing fair regression models to predict healthcare spending more equitably. They propose methodological changes to ensure that benefits are fairly distributed among underrepresented groups, offering potential improvements in group fairness without significantly impacting overall risk predictions. By addressing disparities in healthcare spending predictions, the researchers aim to create a more equitable system that better serves diverse populations. Their proposed changes to regression analysis methods could help mitigate biases and ensure that AI-driven decisions do not disproportionately disadvantage certain groups.
The research highlights the importance of considering fairness in statistical models used in health insurance. Traditional models often rely on historical data that may contain inherent biases, leading to skewed predictions and outcomes. By incorporating fairness measures into regression models, the researchers aim to correct these imbalances and ensure that AI technologies are used to benefit all policyholders equitably. Their work provides valuable insights into the technical challenges and potential solutions for achieving fairness in AI applications, paving the way for more inclusive and just healthcare systems.
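A simple diagnostic in the spirit of this work is to compare mean prediction error (residual) across groups: a model that systematically under-predicts spending for one group will under-compensate for that group's care. A minimal sketch with invented data:

```python
from statistics import mean

def mean_residual_by_group(actual, predicted, groups):
    """Average (actual - predicted) spending per group; a large positive
    value means the model systematically under-predicts that group's costs."""
    residuals = {}
    for a, p, g in zip(actual, predicted, groups):
        residuals.setdefault(g, []).append(a - p)
    return {g: mean(r) for g, r in residuals.items()}

# Invented example data: the model tracks group "A" well but
# under-predicts spending for group "B" by 500 on average
actual    = [1000, 1200, 3000, 3400]
predicted = [1050, 1150, 2500, 2900]
groups    = ["A", "A", "B", "B"]
print(mean_residual_by_group(actual, predicted, groups))
```

This is only a diagnostic; the fair regression methods discussed above go further by adjusting the model itself so that such gaps are reduced during fitting.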
Regulatory Oversight of Large Language Models (LLMs)
As artificial intelligence (AI) becomes more ingrained in various aspects of daily life, U.S. states are actively seeking ways to regulate or at least monitor its usage. This quest for regulation is particularly critical in the health insurance sector, where AI’s capacity to streamline administrative tasks and improve decision-making processes is balanced by potential risks such as biased conclusions or errors stemming from imperfect algorithms. The urgency for effective oversight and regulation has grown as AI continues to establish its presence within the healthcare system, especially in health insurance. Comprehensive regulations are increasingly necessary to ensure fair and equitable practices, protecting consumers from potential pitfalls while capitalizing on AI’s benefits. The duality of AI’s promise and peril necessitates not just monitoring but also legislative actions to create a framework that guarantees ethical standards and accountability. This is crucial for maintaining trust and fairness as AI technologies become indispensable tools in the health insurance landscape. By doing so, states aim to harness the advantages of AI while mitigating its risks, ensuring that the technology serves to enhance, rather than compromise, the integrity and fairness of health insurance systems.
We live in a society where the most important data – from social security numbers to bank account information – all lives online. With so much valuable information floating around in cyberspace, it is crucial for businesses to protect their networks, devices and data from hackers and other cyber threats.
Cybersecurity remains a top priority amongst organizations and the rapid advancement of technology only introduces more challenges each year. However, failure to adapt to these changes can be one of the most expensive mistakes any company can make.
As of 2021, there is a ransomware attack every 11 seconds, and since the pandemic began, cyber-attacks against businesses have increased by 400%.
Having comprehensive systems and checks in place to address and prevent cyberattacks will help your business in the long-run. While it may be an impossible task to eliminate all cybersecurity risks, there are defensive measures you can implement to help keep your organization and customer data safe. Understanding what the adversaries are aiming to do and having processes in place to stop them can be a massive asset to your cybersecurity efforts. The Cyber Kill Chain is one such model that presents how these attacks are accomplished.
What Is The Cyber Kill Chain?
Originally developed by Lockheed Martin in 2011, the Cyber Kill Chain unearths the stages of cyberattacks. The term “Kill Chain” was derived from a military concept that describes the structure of an assault. It includes identifying a target, dispatching forces, deciding actions, ordering an attack structure, and the ultimate destruction of the target.
The Cyber Kill Chain continues to evolve along with cyber attacks and techniques. Since the model was developed in 2011, cyber attackers have become more advanced in their techniques, and more sophisticated and brazen in their activities. Furthermore, with more powerful technology, they are capable of doing more damage to organizations than ever before.
How Does The Cyber Kill Chain Work?
The Cyber Kill Chain involves seven stages that depict what happens during a cyber-attack. It starts right from the reconnaissance phase, where an attacker is still collecting information about its target, all the way to the point where the intruder is deploying their strike. The Cyber Kill Chain is activated by all of your usual attack vectors, whether it’s phishing or the latest malware strain.
Regardless of whether the attack is internal or external, each stage is associated with a specific sort of action in a cyberattack. To properly understand the workings of the Cyber Kill Chain, it’s essential to understand the steps involved in it. Here are the stages of the Cyber Kill Chain:
- Reconnaissance: The beginning stage of the Cyber Kill Chain is where attackers evaluate the situation from the outside to identify targets and attack strategies. Typically, attackers will gather as much information as they can to find vulnerabilities in the system. Attackers may harvest information such as email addresses, names, phone numbers, and other information so they can understand their target.
- Weaponization: The attackers then use what they have learned during the previous phase to develop “weapons” to break into your system. They will develop malware that targets your security vulnerabilities and is engineered specifically for their objectives.
- Delivery: Attackers then deliver these weapons that they have developed into your company’s systems through unsuspecting methods. Some common delivery points are emails, download links, and websites. This stage is also the most important in stopping the attack from progressing.
- Exploitation: Once the weapon is inside the company’s systems then the malware begins to enact what it was designed to do. The malware spreads its code throughout the system and exploits vulnerabilities from the inside. It will begin to run scripts and install tools without the system’s consent.
- Installation: After exploiting the system, the malware will then install an entry point for the attacker, granting them control over the system and network. For many cybersecurity structures, this may be the last chance to stop the attack.
- Command and Control: In this stage, the attacker now has control within the organization’s system. With the access they have, they can attempt to steal valuable information, change permissions, and perform interior assaults.
- Actions on Objective: The command and control taken by the attacker now allow them to do what they originally wanted to do. They have full access at this point and the company’s data is at the attacker’s mercy. Once the attacker’s objectives are complete, they have successfully performed the cyber attack.
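Because the stages are strictly ordered, security teams sometimes encode them as an ordered enumeration and tag each alert with the stage it corresponds to, so the deepest stage observed shows how far an intrusion has progressed. A minimal sketch (the alert data is a made-up example):

```python
from enum import IntEnum

class KillChainStage(IntEnum):
    """The seven Cyber Kill Chain stages, in attack order."""
    RECONNAISSANCE = 1
    WEAPONIZATION = 2
    DELIVERY = 3
    EXPLOITATION = 4
    INSTALLATION = 5
    COMMAND_AND_CONTROL = 6
    ACTIONS_ON_OBJECTIVE = 7

def furthest_stage(observed):
    """Return the deepest stage seen, i.e. how far the attack progressed."""
    return max(observed)

# Made-up alerts, each tagged with the stage it maps to
alerts = [KillChainStage.DELIVERY, KillChainStage.RECONNAISSANCE,
          KillChainStage.EXPLOITATION]
print(furthest_stage(alerts).name)
```

Tagging alerts this way makes it easy to prioritize response: an alert at the installation or command-and-control stage is far more urgent than one at reconnaissance.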
CyberMaxx Can Protect Your Business From Attacks
The Cyber Kill Chain assists cybersecurity departments by accurately depicting the stages of a cyber attack.
From this, they can begin developing security measures to combat each stage thoroughly. Countermeasures and contingencies against attacks should be able to deal with the actions depicted in the chain. Lastly, they can efficiently train their members to safeguard the organization from potential vulnerabilities.
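As a rough illustration of planning against each stage, a security team might encode the chain as data and check countermeasure coverage per stage. The stage names below come from the list above; the example countermeasures and the gap-check helper are assumptions for illustration, not a CyberMaxx product feature:

```python
# Minimal sketch: map each Cyber Kill Chain stage to example countermeasures.
# Stage names follow the article; the countermeasures are illustrative assumptions.
KILL_CHAIN = [
    ("Reconnaissance", ["limit public employee data", "monitor for scanning"]),
    ("Weaponization", ["threat intelligence feeds"]),
    ("Delivery", ["email filtering", "block malicious downloads"]),
    ("Exploitation", ["patch management", "endpoint protection"]),
    ("Installation", ["application allow-listing", "EDR alerts"]),
    ("Command and Control", ["egress filtering", "DNS monitoring"]),
    ("Actions on Objective", ["data loss prevention", "network segmentation"]),
]

def coverage_gaps(deployed_controls):
    """Return the stages with no deployed countermeasure, i.e. planning gaps."""
    deployed = set(deployed_controls)
    return [stage for stage, controls in KILL_CHAIN
            if not deployed.intersection(controls)]

# A team running only these three controls still has four uncovered stages.
print(coverage_gaps(["email filtering", "patch management", "EDR alerts"]))
```

Walking the chain stage by stage like this makes it easy to see where a single missing control would let an attack progress unchecked.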
CyberMaxx provides a team of leading cybersecurity professionals whose sole focus is to protect customers every day of the year. With our MAXX MDR and MAXX Network services, CyberMaxx is prepared to help prevent, detect, and respond to cyber attacks effectively. Being proactive and reducing cybersecurity risk before an incident occurs is the best way to keep your systems and network safe from attackers.
When the 2020 global lockdown disrupted economic activity on an unprecedented scale, the resulting environmental gains were difficult to ignore, and consumers and businesses have since become much more aware of the impact industry has on the planet.
In northern India, the stark improvement in air pollution brought about by the first lockdown made the Himalayas visible from a distance for the first time in a generation, while nitrogen dioxide levels in London decreased by as much as 31%.
With the environmental impact of industry so evident to the public, simply returning to normal was not an option. Instead, global governments declared an environment-focused recovery from the pandemic, designed to tackle the climate and nature crises. While 2020 upended claims and accelerated timescales around net-zero, the green agenda once again ascended as a priority in 2021 with the United Nations COP26 summit. As the world moves toward key net-zero milestones, all sectors, including those considered critical, will be expected to make urgent progress in reducing carbon emissions.
A whole-of-building approach
A whole-of-building approach is where all elements of infrastructure and design interconnect to maximize efficiencies and carbon reductions.
Inherently, though, this approach hasn’t always been possible for certain subsystems, such as low-voltage (LV) switchgear, which have historically been challenging to make more sustainable. For example, the continued use of thick internal bare copper bars to distribute current causes AC losses and increases energy consumption due to the material’s skin effect.
For these reasons, subsystems like LV switchgear have long been excluded from whole-of-building efficiency programs. But, thanks to a period of intense technical innovation, LV switchgear and other subsystems can finally help drive efficiencies in mission critical facilities and should no longer be overlooked in the planning stages.
Innovations in subsystems
New technologies, advancements in component materials, and more sophisticated digitalization techniques are all driving innovations in sustainable subsystem design.
Therefore, it’s important that, when committing to a whole-of-building emissions reduction plan, energy managers are aware of the latest innovations to avoid missing potential energy reductions. By doing so, they will better understand which subsystems cannot be improved from a sustainability perspective and which ones should be considered in a whole-of-building emissions reduction plan.
If reviewing existing protocols, it’s also important to revisit subsystems that are currently discounted from a whole-of-building plan to see whether recent innovations could open the door to currently untapped efficiencies — there may be more sustainable options available now that didn’t exist at the time of the original plan.
Case in point
LV switchgear is a prime example of a subsystem that has long been exempt from energy reduction planning. Now, sustainability-centered innovation has transformed the performance of this subsystem into a candidate for any whole-of-building emissions reduction plans.
Laminated bus plate technology provides the performance integrity of traditional bus bars while negating the associated AC losses. The result is an overall energy efficiency improvement of up to 25%. A benchmark study of a traditional switchgear versus the latest switchgear solutions showed that the reduction in AC losses can save up to 9,000 kg of CO2 annually.
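To put figures like these in context, here is a hedged back-of-envelope sketch. The grid emission factor and the loss-reduction figure are assumptions for illustration, not values taken from the benchmark study:

```python
# Back-of-envelope estimate of CO2 avoided by reducing continuous AC losses.
# Both constants below are assumptions, not figures from the benchmark study.
HOURS_PER_YEAR = 8760
GRID_FACTOR_KG_PER_KWH = 0.4  # assumed grid emission factor, kg CO2 per kWh

def annual_co2_kg(avg_loss_reduction_kw):
    """CO2 avoided per year from a continuous reduction in switchgear losses."""
    return avg_loss_reduction_kw * HOURS_PER_YEAR * GRID_FACTOR_KG_PER_KWH

# Under these assumptions, a ~2.6 kW average reduction in AC losses
# corresponds to roughly 9,000 kg of CO2 per year.
print(round(annual_co2_kg(2.6)))
```

The point of the sketch is scale: because switchgear losses are continuous, even a few kilowatts of avoided loss compounds into tonnes of CO2 over a year.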
This approach can also significantly increase cooling efficiency in LV switch rooms, where conditioning systems are used to maintain the standard room temperature needed to ensure product performance and longevity. Primarily, this is because the absence of AC losses reduces the energy needed to condition the switchgear ambient temperature. Additionally, the units have a small footprint, which means there is less volume to be cooled.
The new innovative design also improves safety by eliminating hazardous exposure to live parts, using 92% fewer bus bar components than traditional switchgear. An arc ignition protected zone keeps maintenance personnel safer when performing routine maintenance work and repairs while also reducing the risk of arcs caused by mechanical failures — one of the most serious safety risks that switchgear operators encounter.
The latest switchgear evolution also offers connectivity capabilities for data analytics and data communication, making it the ideal solution for facilities working toward achieving Industry 4.0 standards. Some market-leading solutions can even deliver up to 30% lower overall operational costs compared to switchgear without digital capabilities, thanks to more efficient condition monitoring.
Of course, switchgear is a serious investment. However, achieving optimized energy efficiency requires a holistic, whole-of-building approach. By taking advantage of this new opportunity to introduce sustainable switchgear, mission critical environments can add commercial value throughout the life cycle of a project and improve their sustainability credentials amid a more eco-aware clientele.
Yes, it is a challenging time for the economy as it embarks on the journey toward decarbonization. But it’s also an opportune one. By taking advantage of the latest technological breakthroughs, it is possible to unleash greater carbon savings and benefit from the latest connectivity capabilities as the world looks ahead to a safer, smarter, and more sustainable future.
Independence Day is a U.S. holiday commemorating the country’s Declaration of Independence. Through this, the thirteen American colonies proclaimed they no longer considered themselves under the rule of Britain’s King George III. This bold move was about self-determination, freedom and being recognized.
CloudShare has long shared a similar zeal when it comes to training. This includes providing easy access to education for employees and hassle-free capabilities for trainers. With July 4th upon us, we couldn’t help but touch upon a few historical similarities and offer advice to supercharge your training efforts.
The Declaration of Independence contains 27 grievances against King George, including this one:
“He has called together legislative bodies at places unusual, uncomfortable, and distant…for the sole purpose of fatiguing them into compliance with his measures.”
Cloud-based virtual training is all about freedom and equal access.
Flying an entire staff out to meet with a trainer or putting a teacher on the road for constant face-to-face sessions exhausts budgets, fatigues personnel, and limits your reach. Virtual instructor-led training (VILT) courses function just as well as a physical classroom environment. All it takes is an internet connection to provide everyone with the same access, no matter where they are.
What’s more, if it’s not vital that everyone be available at a particular time, why force them? Instead, make recorded sessions available on-demand. Because everyone learns best at their own speed, consider offering self-paced training. With this, faster users can proceed as quickly as they wish, while those who want to can go back and review previous sections.
The Declaration of Independence was first published on July 6, 1776, in the Pennsylvania Evening Post by a printer in Philadelphia, the city where it was signed. If two days seems like a long time to wait for such critical news, check out this animated graphic showing how it took nearly a month for word to reach all colonists.
As for the document’s creation, it was developed by a Committee of Five, with Thomas Jefferson spearheading the writing. The original is preserved with changes made by its members, as well as recommendations by Congress. Initial reproductions needed to be closely scrutinized as they were distributed throughout the colonies, as well as to officials and leaders of the Continental Army.
Sounds like a lot of work and time, right? We believe America’s forefathers would have greatly appreciated CloudShare’s ability to unite personnel and facilitate collaboration.
For starters, whether you need to quickly get staff up-to-speed on a new product or urgent company development, virtual training brings them together at any time, regardless of the number of participants. It also enables trainers to collaborate and create new environments easily, ones that can be reused anytime. There are even ready-made templates that when updated, ensure all have access to the latest version, providing greater content control and continuity.
John Adams, a leader of the American Revolution, Committee of Five member and later U.S. president, wrote to his wife with thoughts on how Independence Day should be celebrated. Amongst his many recommendations were games and “illuminations from one End of this Continent to the other.”
Adams felt it was an event that needed to stand out – and so does your training.
There are many ways to make training memorable and shed light on even the most complex topics. For instance, multi-step classes enable virtual instructors to guide students between environments without the need for additional classes. Moving from level to level logically results in far greater comprehension. It also removes complexity for participants, while reducing instructor management time.
CloudShare offers unique transparency that enables instructors to see what students are working on in real-time, too. If they see a student needs help, they can directly interact – via a variety of interfaces like non-disruptive chat – to offer assistance.
Courses can be gamified to help content take hold as well. A hands-on experience is one of the most effective ways to learn. Through sophisticated gaming scenarios, students get experience in near real-world situations, all without fear of their mistakes impacting the business.
You want to make noise with your training; to create courses that demand attention and deliver content that is heard and retained by students through learning by doing. CloudShare offers the kind of capabilities that make training pop and shimmer like fireworks.
Not only will this make lessons more memorable for students, the ease and cost savings cloud-based virtual training offers will make a lasting impression with senior management, too.
Happy July 4 from CloudShare!
Does your training fizzle when you want it to pop? Get in touch with us and we’ll show you how to take your efforts to new heights! | <urn:uuid:574a5b42-1999-4db9-8b1c-3311fb1ba873> | CC-MAIN-2024-38 | https://www.cloudshare.com/blog/revolutionizing-training-independence-freedom-and-fireworks/ | 2024-09-08T06:05:54Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650960.90/warc/CC-MAIN-20240908052321-20240908082321-00885.warc.gz | en | 0.964963 | 988 | 3.109375 | 3 |
AI is disrupting business processes through small operational algorithms. AI for businesses was a major focus in 2019, from reducing human exposure to dangerous elements to eliminating repetitive tasks.
According to Webroot, 71% of US enterprises are planning to leverage more AI/ML tools for security, yet currently only 49% of IT professionals feel extremely comfortable using such tools. The change might be overwhelming for many current security experts; perhaps that’s why about 76% of employees don’t care whether their business uses them.
According to a recent survey by Adobe, currently, 90% of the IT leaders are forecasting the use of AI/ML increase in the future, while 41% of them are looking for technology that is currently powered by AI. One of the top factors contributing to their purchase decisions includes data security (47%), implementing AI and ML (40%), and driving and implementing new technology (40%).
Looking back at 2019, there was a steady rise in regulation as lawmakers tried to keep pace with the new technology. In 2018, the first commercial deepfake application was launched. The technology takes a person in an existing image or video and replaces them with someone else’s likeness, using artificial neural networks. Deepfakes combine and superimpose existing media onto source media using machine learning techniques such as autoencoders and Generative Adversarial Networks (GANs). The challenge with deepfakes is that most traditional tools cannot identify whether a given clip is fake or real. Deepfakes came into the news after they were found to be spreading fake news and misinformation about celebrities.
Boston Dynamics, the company that created Spot, a robot that was viewed across different media, gave a true sense of AI development. The four-legged robot was seen performing various tricks such as opening doors, climbing stairs like humans, and even functioning optimally in wet conditions. The general public voiced several opinions regarding the robot’s capacity to harm humans. The company then issued a statement clarifying that the robot’s functional capacity is strictly defined and that it cannot harm any human.
The functional efficiency of robots compared to humans is still very low. But do leaders and businesses understand that, and how exactly are we tackling the challenges of building AI? According to many research scientists working in the US, building AI will become easier over the next 10 years, but as humans build the technology, it will be as flawed as they are.
Let’s talk about the growing disconnect between AI research and business.
NeurIPS is the nickname of the Conference on Neural Information Processing Systems. Over the past 3 years, it has become one of the premier annual events for researchers working on AI. In 2019, the event saw one of its biggest turnouts, with over 13,000 attendees and more than 1,400 research papers presented. Many of the participants attended meetups held in 35 different countries. At the conference, there is active interest from businesses looking to find new talent for their R&D departments.
Some of the speakers at the conference said that deep learning systems don’t exhibit any human-like learning abilities. They used various examples to show that AI is still far from being able to master new tasks from just a handful of examples, learn new concepts, and use common sense. Researchers even suggested that the field was stumbling in the dark when it came to giving robots qualities like human efficiency and flexibility. Business leaders present at the conference, however, looked toward matching or exceeding human abilities on specific tasks. Researchers questioned whether businesses are setting any benchmarks before bringing an AI system to the level of human decision-making.
According to JP Morgan, investment in AI was expected to reach $35.8 billion in 2019. How much of that AI investment is going toward reskilling the workforce? Currently, only 3% of business leaders plan to invest significantly in reskilling over the next 3 years. Most organizations are failing to use the available time to ante up their growth while the industry is still building its base.
Some of the business leaders admitted that their primary motivation behind investing in AI was bringing efficiency and reducing the cost of a process rather than promoting growth. According to a survey of business leaders, 76% of the executives said that they had used AI to augment their tasks over the past 3 years, while 90% said that they had used technology to automate tasks.
The flaws that we might see in AI
Google recently unveiled a new AI technology called BERT (Bidirectional Encoder Representations from Transformers), which changed the way scientists build systems that learn how people write and talk. BERT was then deployed to various services, such as Google’s Internet search engine, to predict different search options and results. The technology, however, is still flawed at various levels because, like a child learning from its parents, it absorbs both the good and the bad. BERT is capable of replicating those flaws at every stage.
BERT is just one of many AI systems, such as Alexa and Google Assistant, that learn from the digitized information available. It treats everything present in digitized form as knowledge and absorbs it. BERT, for example, is more likely to associate men with computer programming than to remain gender-neutral. Recently, an AI system was used to predict the tone of news content. Everything written in one story was flattering, but President Trump’s name appeared in places, and the program rated the story as negative.
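The underlying mechanism is simple to illustrate: a model that learns word associations from text inherits whatever skew the text contains. This toy sketch uses an invented six-sentence corpus (real systems like BERT learn from billions of words) to show how skewed co-occurrence counts become skewed associations:

```python
from collections import Counter

# Toy corpus invented for illustration; a skewed corpus yields skewed associations.
corpus = [
    "he is a programmer", "he is a programmer", "he is a programmer",
    "she is a nurse", "she is a nurse", "she is a programmer",
]

def association_counts(sentences, occupation):
    """Count which pronoun (the first word) co-occurs with an occupation."""
    counts = Counter()
    for sentence in sentences:
        words = sentence.split()
        if occupation in words:
            counts[words[0]] += 1
    return counts

print(association_counts(corpus, "programmer"))  # skewed toward "he"
```

Nothing in the counting code is biased; the skew comes entirely from the data, which is exactly why systems trained on the open web reproduce the web’s flaws.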
The rise in AI investment is putting the industry under constant threat of being overly focused on growth alone. Leaders need to realize that even if they develop AI only to reduce the cost of, or bring efficiency to, a single process, they will eventually want growth from it over time. Building an AI solution for business requires insights for both the present and the future. To know more about AI solutions, download our latest whitepapers on Artificial Intelligence.
In just a few short years, artificial intelligence (“AI”) has gone from science fiction to a commercial (and legal) reality. Whilst the widespread use of driverless cars and the rise of robot butlers may still be some way off, the truth is that, in the tech sector, AI has gone beyond a buzzword and is now a key element of many businesses’ product offerings. This means that suppliers and consumers alike are already having to grapple with the unique commercial and legal issues it raises.
The term “AI” is a bit of a moveable feast. A purist would argue that true AI requires a combination of four things: machine processing; machine learning; machine perception and machine control (i.e. a powerful computer, which can teach itself, absorb information from the world around it and move within that world to aid its task). However, the term is more often used to refer to a system that has just the first two elements — a computer program that can analyse and process large quantities of data, extract and learn patterns from that data, and then use what it has learnt to provide “intelligent” responses to specific requests.
To a certain extent, therefore, recognising that AI is often just applied software helps one to understand how it can be monetised in business. However, the way AI systems are created and used does give rise to issues that “traditional” software licensing is ill-equipped to address.
One of the principal problems posed by AI in business is the difficulty surrounding ownership, both of the AI system itself and of its outputs.
A common approach to AI creation is a collaborative one, whereby an AI developer partners with an organisation that has the large quantities of relevant data needed to train the system. Whilst the developer brings the expertise, the system will go nowhere without that data. Generally speaking, the developer will “own” the resultant system, but will grant its data provider a right to use it in return. However, is the data provider happy for the developer to share the system with competitors? Will the data it has provided form part of the outputs of the AI? Does the data provider own all of the data it is providing? All of these issues need to be considered carefully at the outset of a project and ideally agreed in writing.
Another, increasingly common, scenario is where an AI developer offers a “finished” AI system as a commercial product and the customer uses its own data to train the AI system specifically for its business’s needs. Whilst this does not generally give rise to questions over the ownership of the system as a whole, issues can still arise over the specifically tailored AI. Is it a “new” system? Who has liability for the outputs if they infringe someone else’s rights? And what happens if the customer wants to move to a different AI provider — can it extract its data and/or the specifically “trained” element of the system? Again, there is no “right” answer, but all of these issues should be addressed as early as possible.
The other potential pitfall in implementing an AI solution is that of liability. As an AI system lacks legal personality, it cannot incur liability and brings with it the so-called “black box” problem. Unlike code written by a person, some AI systems store their information in a form that cannot easily be read by humans or reverse engineered. It can therefore be impossible to discover why a system made a particular decision or produced a particular output. In such cases, liability is likely to fall upon the person or entity who controls or directs the actions of the AI. However, as explained above, often that is not clear where one party has created the AI and another has decided what data to put into it or what questions to ask it.
Businesses should ensure that there are contractual indemnities in place for any actions of the AI that infringe copyright works. Similarly, it is important to be aware of where the data used in the AI system is coming from, to avoid infringing third parties’ IP rights or misusing confidential information.
AI and intellectual property: the future
Driven by the continual advancement of big data and ever-better computer processing capabilities, the use of AI is set to continue its exponential growth. This will undoubtedly be positive for business, with recent research by Accenture suggesting that it will boost productivity in developed economies by up to 40% by 2035. The Chancellor also promised a £1 billion investment in last year’s Budget in the hope that the UK will become a world leader in AI technology. However, businesses should also be mindful of the need for an accompanying increase in awareness of the legal and ethical issues AI presents.
Recycle with CompuCycle during the Holidays
TVs, cell phones, laptops, and tablets are common gifts that millions of people receive every holiday season. While getting new gadgets is exciting, have you stopped to think about what happens to your old electronics? Sadly, last year 384 million electronic devices ended up in landfills, exposing Americans to toxic chemicals.
If you plan on receiving new electronics this holiday season and want to retire your previous models, be sure to do so responsibly with CompuCycle! Check out the items we accept before dropping anything off, and you might be surprised by what we accept.
What to do With Your Old Electronics?
Once you’ve determined what items you will recycle, you can drop off your electronics at the CompuCycle facility during our normal business hours free of charge! We have two Saturday collection events left in December where you can drop off your items. See our collection events calendar for all of the details. You can also go to http://dropoff.houstontx.gov/ to see all of the participating storage facilities that we are partnered with as drop-off sites for electronics to be recycled. CompuCycle also guarantees all hard drive data is destroyed through our recycling process.
Please encourage your friends and family members to recycle their unwanted electronic items as well. Ensure they know that when electronics are not responsibly recycled, they can easily end up in landfills, releasing toxic elements into our air and water streams. Here are some reasons why you should avoid throwing away your old electronics.
Electronics are chock-full of toxic substances such as brominated flame retardants, halogenated polymers, chromium, beryllium oxide, sulfur, mercury, cadmium, lead, and arsenic. If thrown into the garbage and sent to a landfill, these substances contaminate soil, pollute the air, and leach their way into water sources. By leaking into our ecosystem, they harm plant and animal life and contaminate our food sources.
Some Electronics are Reusable
Beyond that, e-recycling creates employment opportunities and new streams of revenue in the economy. Often, the electronics that are of no use to you are still reusable and of great value to underprivileged communities and individuals who cannot afford new electronic devices. Furthermore, recycling electronics contributes to energy and natural resource preservation efforts. It takes 50 pounds of chemicals, 1.5 tons of water, and 500 pounds of fossil fuel to manufacture a monitor and a computer. Imagine that! You can save that many resources by dropping off a single reusable personal computer at the nearest CompuCycle drop-off facility.
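Those per-unit figures add up quickly. As a simple tally (the per-unit numbers are the ones cited above, with 1.5 tons converted to 3,000 pounds; the calculation itself is our illustration):

```python
# Manufacturing resources avoided per reused PC, using the per-unit figures
# cited above. 1.5 tons of water = 3,000 lb. The tally is illustrative.
PER_UNIT = {"chemicals_lb": 50, "water_lb": 3000, "fossil_fuel_lb": 500}

def resources_saved(units):
    """Total manufacturing resources avoided when `units` PCs are reused."""
    return {resource: amount * units for resource, amount in PER_UNIT.items()}

# Ten reused PCs avoid 500 lb of chemicals, 30,000 lb of water,
# and 5,000 lb of fossil fuel in manufacturing inputs.
print(resources_saved(10))
```

Even a small office refresh of ten machines, reused instead of trashed, keeps tens of thousands of pounds of manufacturing inputs in reserve.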
If you do not care about green living, there is still a reason to recycle your old electronics instead of trashing them: the privacy and safety of your data. You can protect your personal and confidential information by handing over your electronic assets to credible recycling companies, as they wipe the devices before destroying them. When you trash your electronics, you can never know who gets their hands on them and what they can do with your personal data. Even wiping your devices yourself is not always enough, as data can often be restored and used against you.
From embarrassing stuff making its way to the internet for all to see to information that can ruin your future professional and career opportunities to data that can be swiped to access your bank accounts, threats of the information in your electronics getting into the wrong hands are aplenty. CompuCycle’s refurbishing and recycling programs are incorporated with data-deleting services that ensure there is not even a single piece of information left on any of your electronic assets before they are recycled.
Another reason to be wary of trashing your electronics is that you could be breaking the law by doing so. Half of the United States has e-waste laws on the books. Texas e-waste laws are comprehensive and require individuals and businesses to follow certain measures for e-waste disposal. These laws and regulations are set to keep expanding and getting stricter as the problem becomes more serious.
CompuCycle makes it easy to recycle your electronics responsibly, safely, and securely. By offering your unneeded electronic items to us, you can attain peace of mind for doing the right thing. You will be saving natural resources, preserving a lot of energy, and helping less privileged communities carry out fruitful operations and benefit from technology that they could not otherwise have accessed. Furthermore, you will be preventing the spread of air, water, and soil contaminants, consequently aiding the health of natural life and the ecosystem.
By choosing CompuCycle, you will not have to worry about your data and personal information getting into wrong and potentially threatening hands. We are a company that is compliant with all local and international e-waste and e-recycling laws, and we are certified in this regard. So, get rid of your electronics by bringing them to us and feel proud of yourself! | <urn:uuid:823fdcb2-fff1-4537-8c7a-7acdf98fd95e> | CC-MAIN-2024-38 | https://compucycle.com/blog/recycle-with-compucycle-during-the-holidays/ | 2024-09-19T06:23:42Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651995.50/warc/CC-MAIN-20240919061514-20240919091514-00885.warc.gz | en | 0.954984 | 1,029 | 2.6875 | 3 |
As artificial intelligence continues to mature, we are seeing a corresponding growth in sophistication for humanoid robots and the applications for digital human beings in many aspects of modern-day life. To help you see the possibilities, we have pulled together some of the best examples of humanoid robots and where you might see digital humans in your everyday life today.
Even though the earliest form of humanoid was created by Leonardo Da Vinci in 1495 (a mechanical armoured suit that could sit, stand and walk), today’s humanoid robots are powered by artificial intelligence and can listen, talk, move and respond. They use sensors and actuators (motors that control movement) and have features that are modelled after human parts. Whether they are structurally similar to a male (called an Android) or a female (Gynoid), it’s a challenge to create realistic robots that replicate human capabilities.
The first modern-day humanoid robots were created to learn how to make better prosthetics for humans, but now they are developed to do many things to entertain us, specific jobs such as a home health worker or manufacturer, and more. Artificial intelligence makes robots human-like and helps humanoids listen, understand, and respond to their environment and interactions with humans. Here are some of the most innovative humanoid robots in development today:
Atlas: When you see Atlas in action (doing backflips and jumping from one platform to another), you can see why its creators call it “the world’s most dynamic humanoid.” It was unveiled in 2013, but its platform-jumping prowess was shown in a video released in 2017. Atlas was created to carry out search and rescue missions.
Ocean One: Stanford Robotics Lab developed Ocean One, a bimanual underwater humanoid robot. Since Ocean One can reach depths that humans cannot, it can be very instrumental in researching coral reefs and other deep-sea inhabitants and features when it explores. Its anthropomorphic design and resemblance to a human diver make it very manoeuvrable.
Petman: Boston Dynamics, the same company responsible for Atlas, also created Petman (Protection Ensemble Test Mannequin) to test chemical and biological suits for the U.S. Military. When you see bipedal Petman in motion, it’s easy to see its human-like characteristics.
Robear: Other humanoid robots such as Robear might look more cartoon than human, but their actions definitely mimic human movement. Robear was developed to possibly help with the shortage of caregivers in Japan as the population ages. As a result, this humanoid has very gentle movements.
Sophia: Developed by Hanson Robotics, Sophia is one of the most human-like robots. She is able to hold a human-like conversation and make many human-like facial expressions. She has been named the world’s first robot citizen and is the robot Innovation Ambassador for the United Nations Development Programme.
Digital Human Beings
Digital human beings are photorealistic digitised virtual versions of humans. Consider them avatars. While they don’t necessarily have to be created in the likeness of a specific individual (they can be entirely unique), they do look and act like humans. Unlike digital assistants such as Alexa or Siri, these AI-powered virtual beings are designed to interact, sympathise, and have conversations just like a fellow human would. Here are a few digital human beings in development or at work today:
Neons: These AI-powered lifeforms, created by Samsung’s STAR Labs, include unique personalities such as a banker, a K-pop star, and a yoga instructor. While the technology is still young, the company expects that, ultimately, Neons will be available on a subscription basis to provide services such as customer service or concierge duties.
Digital Pop Stars: In Japan, new pop stars are getting attention—and these pop stars are made of pixels. One of the band members of AKB48, Amy, is entirely digital and was created by borrowing features from the human artists in the group. Another Japanese artist, Hatsune Miku, is a virtual character from Crypton Future Media. Although she started out as an illustration to promote a voice synthesiser of the same name, she now draws her own fans to sold-out auditoriums. With Auxuman, artificial intelligence is actually making the music and creating the digital performers who perform the original compositions.
AI Hosts: Virtual copies of celebrities were created by ObEN Inc to host the Spring Festival Gala, a celebration of the Chinese lunar new year. This project illustrates the potential of personal AIs—a substitute for a real person when they can’t be present in person. Similarly, China’s Xinhua news agency introduced an AI news anchor that will report the news 24/7.
Fashion Models and Social Media Influencers: Another way digital human beings are being used is in the fashion world. H&M used computer-generated models on its website, and Artificial Talent Co. created an entire business generating completely photorealistic and customizable fashion models. And it turns out you don’t have to be a real-life human to attract a social media following: Miquela, an artificial intelligence “influencer,” has 1.3 million Instagram followers.
Digital humans have been used in television, movies, and video games already, but there are limitations to using them to replace human actors. And while it’s challenging to predict exactly how digital humans will alter our futures, there are people pondering what digital immortality would be like or how to control the negative possibilities of the technology. | <urn:uuid:4a8e4160-93e8-4c11-bafd-c19f15de260c> | CC-MAIN-2024-38 | https://bernardmarr.com/artificial-human-beings-the-amazing-examples-of-robotic-humanoids-and-digital-humans/?paged1119=2 | 2024-09-20T13:32:29Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652278.82/warc/CC-MAIN-20240920122604-20240920152604-00785.warc.gz | en | 0.947767 | 1,158 | 3.0625 | 3 |
Penetration testing is a common buzzword in the information security industry, but what does it mean? If you walk into a room of 10 security providers, you will probably hear 11 different answers. There is no single standard for penetration testing: some firms conduct vulnerability scans and call them penetration tests, while others put hands on keyboards and conduct attack emulations. This article will help educate and guide you around the topic of penetration testing.
What is Penetration Testing?
Penetration testing, or pen testing, is a comprehensive and systematic approach to identifying and exploiting vulnerabilities and weaknesses within your organization’s digital infrastructure. Unlike vulnerability scanning, which identifies potential security gaps, pen testing simulates real-world attacks to gauge how well your systems can withstand threats.
Think of vulnerability scanning as a home inspection, ensuring all the doors and windows are locked. Penetration testing asks someone to pick those locks, kick down doors and see what they can find.
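To make that distinction concrete, here is a minimal, hypothetical sketch (in Python, using only the standard library) of the reconnaissance step that a vulnerability scan automates — checking which "doors" are open. A real penetration test goes much further and attempts to exploit what it finds. Even a simple check like this should only ever be run against systems you own or are explicitly authorized to test.

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`.

    This is the 'home inspection' step: it reports which doors are
    unlocked, but never tries to walk through them.
    """
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Example: check a few common service ports on a host you are
# authorized to test.
# scan_ports("127.0.0.1", [22, 80, 443, 8080])
```

A penetration tester starts from output like this but then attempts to pick the locks — exploiting weak credentials, unpatched services, or misconfigurations behind the open ports.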
Who Should Conduct Penetration Testing?
Executing a successful penetration test requires specialized skills involving technical expertise and an in-depth understanding of cybersecurity threats. Organizations can hire skilled professionals internally or collaborate with reputable cybersecurity firms that specialize in conducting penetration tests. These experts are equipped with the knowledge and tools necessary to simulate diverse cyberattacks and assess the effectiveness of your defense mechanisms.
Where Should Penetration Testing be Applied?
The scope of penetration testing is broad and can encompass various facets of your digital infrastructure, including networks, web applications, mobile apps, cloud services, and even the physical security layer.
How Often Should Penetration Testing be Done?
Recognizing the optimal timing for penetration testing profoundly impacts your organization’s cybersecurity strategy. Here’s a suggested timeline:
1. Startup Phase: Initiate with basic vulnerability assessments to unearth easily fixable issues. Your customers or potential prospects will ask you about your security posture. Don’t jump to penetration testing until you have mastered the basics of vulnerability management. Often, your customer’s procurement department is just looking to ensure you have the concept of security on your roadmap and are progressing in securing their data.
2. Growth Phase: As your organization expands, ensure you have a comprehensive security foundation. Conduct risk assessments and put together due diligence packages. At this point, you need to maximize your security spending and ensure you cover all security areas, not just technical ones.
3. Pre-Maturity Phase: Before reaching full maturity, ensure your network and cloud environments are segmented and set up securely. You might be asked by this point to conduct some penetration testing depending on the industry you play in (covered later in the article), but if you can hold off on doing so, other projects might be a better use of your time.
4. Maturity Phase: You’ve made it. You have your information security policies in place, you have regular vulnerability assessments and you have sorted out the pesky task of network and cloud set up. Now is when you want to start integrating regular penetration testing into your cybersecurity strategy. Use this as a validation of the work you have previously done; it’s good to get an outside perspective and know how an attacker could access your organization’s data.
5. Enterprise Phase: Test internal systems, external applications, and cloud services, and consider advanced techniques like red teaming. At this phase, you should have an outside party take a whack at your environment on a continuous basis. A yearly penetration test isn’t going to keep up with your high-profile status.
Why is Penetration Testing Crucial?
1. Identifying Vulnerabilities: Pen testing uncovers vulnerabilities beyond automated scanning tools, offering a clearer view of your security posture.
2. Mitigating Business Risks: By proactively identifying and addressing vulnerabilities, you mitigate the risk of breaches, financial losses, and reputation damage. You don’t want someone with poor intentions finding your vulnerabilities first; when they do, you end up with fires to fight from a public relations and customer-management perspective. Those aren’t fun spots to be in.
3. Compliance Requirements: Certain industries are legally bound to perform regular pen testing to meet compliance standards. For instance, the finance sector adheres to PCI DSS, and healthcare providers must comply with HIPAA.
4. Staying Ahead of Attackers: Penetration testing aids in outpacing malicious actors by identifying weaknesses before they exploit them.
Industries Where Penetration Testing is Mandated:
Various industries are obligated to conduct penetration testing due to their adherence to specific compliance frameworks. Examples include:
Finance: The Payment Card Industry Data Security Standard (PCI DSS) necessitates regular testing for organizations handling credit card data.
Healthcare: Health Insurance Portability and Accountability Act (HIPAA) mandates healthcare organizations to perform regular tests to safeguard patient data.
Government: Government agencies dealing with sensitive information are often required to perform penetration tests to maintain national security.
Energy: Critical infrastructure operators face stringent cybersecurity requirements, making penetration tests crucial.
How Much Should a Pen Test Cost?
This is a piece of the puzzle that most companies don’t like to talk about, but as a potential buyer it’s important to know what you are looking at and how to spot a real penetration test from the initial quote. Two main factors drive pricing: scope and timing.
Scope: The larger your network, the more you will spend on a test. Internal network testing also costs more than external testing. Web applications that need testing add further cost, as personnel who can test applications without breaking them come at a premium.
Timing: If you need a penetration test done quickly, you’ll likely pay more for the expedited service. Qualified penetration testing organizations book out weeks to months in advance. The end of the year is also like tax season for pen testing, due to annual requirements in certain industries.
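As a purely illustrative sketch, the pricing factors above could be combined like this. Every multiplier below is a hypothetical placeholder, not a real market rate — actual quotes vary widely by firm, region, and engagement type.

```python
def estimate_pentest_cost(base_rate_per_day, days, internal=False,
                          includes_web_app=False, expedited=False):
    """Toy estimate combining the scope and timing factors discussed above.

    All multipliers are illustrative placeholders, not market rates.
    """
    cost = base_rate_per_day * days
    if internal:            # internal network testing costs more than external
        cost *= 1.5
    if includes_web_app:    # application-security testers command a premium
        cost *= 1.3
    if expedited:           # rush engagements pay for the expedited service
        cost *= 1.25
    return round(cost, 2)

# e.g. a 10-day internal engagement that includes a web application,
# at a hypothetical $2,000/day base rate:
# estimate_pentest_cost(2000, 10, internal=True, includes_web_app=True)
```

The point is less the numbers than the shape: scope factors multiply, so an internal test with web applications on a rush timeline can cost several times a basic external assessment.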
How Long Should a Pen Test Take?
These tests don’t happen overnight, even though the testing itself often occurs after hours. A thorough engagement takes weeks of testing; in some cases, a test will span three to six months depending on the size of the environment and the level of testing performed.
What is the ROI of Penetration Testing?
The return on investment (ROI) of penetration testing is a crucial consideration. Investing in robust cybersecurity through pen testing can prevent potentially devastating financial losses and reputational damage. Numerous studies and resources emphasize the positive ROI of pen testing:
– According to the Ponemon Institute’s 2020 Cost of a Data Breach Report, organizations that conduct regular penetration testing experience cost savings when responding to a data breach.
– NIST Special Publication 800-115 underscores the value of penetration testing in identifying vulnerabilities and preventing unauthorized access.
– The SANS Institute offers resources and whitepapers highlighting the ROI of pen testing, demonstrating its role in risk reduction and avoiding costly security incidents.
Penetration testing is a crucial component of any mature information security program, but make sure you do your homework on whether it’s the right time to perform one on your environment!
If you’re unsure where to start with penetration testing, reach out to Blue Team Alpha—an industry-leading cybersecurity firm with a track record of excellence. Their team of experts encompasses a nation-state level pen testing team that can provide insights into your organization’s security posture. With Blue Team Alpha by your side, you can fortify your defenses and navigate the complex world of cybersecurity with confidence.
To learn more about enhancing your company’s cybersecurity, speak with an expert from Blue Team Alpha!
Thank you for reading this article about cybersecurity. If you have any questions, please visit our site for more information!