BrainChip is developing a novel ultra-low-power “neuromorphic” AI processor that can be embedded in virtually any electronic device, rather than centralizing learning in high-performance processors.
Today’s edge devices apply existing models to process inputs but can’t actually learn in the field; on-chip learning and inference could radically alter the capabilities of devices in automotive, home, medical, and other remote locations.
BrainChip reduces power consumption thanks to its neuromorphic self-learning approach and by reducing precision to 4 bits or less, which sacrifices some accuracy, but only a little. The company also creates a mesh of cores with access to local memory, enabling flexible processing.
Guests and Hosts
- Lou DiNardo, President and CEO of BrainChip. Connect with Lou on LinkedIn.
- Andy Thurai, technology influencer and thought leader. Find Andy’s content at theFieldCTO.com and on Twitter at @AndyThurai
- Stephen Foskett, Publisher of Gestalt IT and Organizer of Tech Field Day. Find Stephen’s writing at GestaltIT.com and on Twitter at @SFoskett
For your weekly dose of Utilizing AI, subscribe to our podcast on your favorite podcast app through Anchor FM and watch more Utilizing AI podcast videos on the dedicated website https://utilizing-ai.com/.
Much like a real-life firewall protects a building from the damage of flames, a virtual firewall on a computer or network system acts as a barrier between harmful cyber activities (viruses, phishing, hacking, etc.) and the sensitive information contained on your computers and network. But how do firewalls work? What types of firewalls are available? How easily can they be managed? The answers to all these questions lie below.
In its simplest form, a firewall is a system that controls access to your computer and/or network using a set of control policies and settings. The purpose of a firewall is to filter any traffic coming into your network from the outside. Acting as a transit point, the firewall controls access to your network by filtering all inbound network traffic and determining which data packets might be harmful.
A firewall is a tool that is highly customizable. Its filtering strength can be increased or decreased based on your security requirements. While it is possible to loosen and tighten these controls, it is generally a good idea to tighten controls when more traffic or users are accessing your network. The type of firewall you use and the manner in which it is configured are all factors that should be considered when establishing a business network firewall.
Types of Firewalls
There is a myriad of firewall options available for personal and business use, but three particular systems are considered the most common today: the stateless firewall, the stateful firewall and the packet-filtering firewall. The definitions that follow require a fairly advanced understanding of this technology – if you have any questions, talk to a Business IT specialist or post your questions in the comments below.
The stateless firewall was one of the first to be launched to protect computers and networks from malware and other malicious content. Stateless firewalls filter and inspect packets of inbound data using preset acceptance protocols. Should the packet of data fail to meet the parameters set forth in the protocol, it is dropped from the network and denied access.
With a stateful firewall, the system filters incoming data packets by judging the flow of information. Once a packet is stored in the firewall, it can be compared to the existing flows of data on the network to see whether it matches up anywhere, and its threat risk is determined at that point.
A packet-filtering firewall analyzes information contained in different layers of the data packet and then allows or denies access to the network based upon factors such as destination IP address, protocol, packet source or packet type.
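To make the rule evaluation concrete, a packet filter can be modeled as a first-match rule table checked against each packet's header fields. The following minimal Python sketch uses an invented rule set and field names; real firewalls match on many more attributes:

```python
from ipaddress import ip_address, ip_network

# Illustrative rule table: the first matching rule wins.
RULES = [
    {"src": ip_network("10.0.0.0/8"), "port": 22, "proto": "tcp", "action": "allow"},
    {"src": ip_network("0.0.0.0/0"), "port": 443, "proto": "tcp", "action": "allow"},
    {"src": ip_network("0.0.0.0/0"), "port": 23, "proto": "tcp", "action": "deny"},
]

def filter_packet(src_ip: str, dst_port: int, proto: str) -> str:
    """Return the action of the first matching rule; deny by default."""
    for rule in RULES:
        if (ip_address(src_ip) in rule["src"]
                and dst_port == rule["port"]
                and proto == rule["proto"]):
            return rule["action"]
    return "deny"  # default-deny posture

print(filter_packet("10.1.2.3", 22, "tcp"))     # allow (internal SSH)
print(filter_packet("203.0.113.9", 23, "tcp"))  # deny (telnet from anywhere)
```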
For businesses with larger networks and several computers, a Unified Threat Management (UTM) system is an additional option that offers a comprehensive security solution. It expands upon the traditional firewall solutions by combining additional security capabilities including SPAM filtering, antivirus protection and more. It connects to your business network directly and filters all incoming traffic on the network and compares it to the varying needs of different computers and users on the network to determine access.
The manner in which a firewall is managed also needs to be taken into consideration by businesses. The information above covered the types of firewalls; we'll now explain how these systems are managed. There are three typical types of firewall management: unmanaged dedicated, managed dedicated and hosted firewalls.
Unmanaged dedicated firewalls are those which are installed locally on your business network and used specifically for your network needs and no others. This ensures a high level of security but requires a strong in-house IT department to manage the system.
Managed dedicated firewalls differ from unmanaged dedicated firewalls only in the fact that your business is not responsible for purchasing and maintaining equipment. You get the same high level of security dedicated to your network, but a business IT service provider manages your network security.
Finally, there is the option of using a hosted firewall. This type of management is done entirely from a virtual system on a shared infrastructure platform. Your business neither invests in equipment nor spends money on IT employees to manage firewall security.
Role of Business IT Service Providers
If you are confused by all these different firewall terms, you aren’t alone. Consulting a business IT service provider is the best way to determine which type of firewall and firewall management will best suit your business. Professionals from an IT service provider can assess the needs and capabilities of your business and help you determine which firewall solutions will be the best fit for your network.
Using the Image Recognition command
Use this command to search for an image within a source image.
- Double-click or drag the command to the Task Actions List pane.
- Select the source image file from a folder or capture it from an application. This image can be standalone or contained within another image that is captured dynamically at run time.
- Select Show Coordinates to capture and view the coordinates of the target image within the window.
- Specify the wait time (in milliseconds) in the Wait field.
- Select or capture the image that you want to click on during playtime. You can capture the image from an application window or select it from a file. If you are using the command for a window, you also have the flexibility to position your click location relative to an image. This is useful when the target image is blurred, has some background noise, or is visible multiple times.
- Select Image Occurrence when the target image can be found multiple times.
You can insert a variable when you do not know the number of times the image might appear on the screen. Ensure you assign variables that support numeric values.
Select a click option:
- Left Click
- Right Click
- Specify match percentage and tolerance.
Select one of the methods of comparison.
- Monochrome with threshold
- Optionally, select the Quick Test button to see the output without running the entire test.
- Click Save.
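The command itself is configured through the dialog above, but the underlying technique, locating a target image inside a source image with a configurable match percentage, can be sketched with OpenCV's template matching. This is a conceptual illustration, not Automation Anywhere's implementation; the file names and the 0.80 threshold are placeholders:

```python
import cv2

# Placeholder paths: the "haystack" window capture and the "needle" target.
source = cv2.imread("app_window.png", cv2.IMREAD_GRAYSCALE)
target = cv2.imread("target_button.png", cv2.IMREAD_GRAYSCALE)

# Slide the target over the source and score every position.
scores = cv2.matchTemplate(source, target, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_xy = cv2.minMaxLoc(scores)

MATCH_PERCENTAGE = 0.80  # analogous to the command's match percentage
if best_score >= MATCH_PERCENTAGE:
    x, y = best_xy  # top-left corner of the best match: a click coordinate
    print(f"Target found at ({x}, {y}) with score {best_score:.2f}")
else:
    print("No occurrence above the configured match percentage")
```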
If you work in security, you’ll be familiar with concepts like Network Video Recorders (NVR) and Digital Video Recorders (DVR). Particularly, if you specialise in video surveillance solutions. When installing a video surveillance system, you need to decide whether to opt for a networked digital recorder. The question is, which is better: NVR or DVR?
NVRs are an important component of any networked video surveillance system. One of their main functions is to store the video streams sent by the IP cameras connected to them and to give remote access to that data.
A DVR recorder processes analogue images and saves them in digital format on a hard drive or equivalent. That is the main difference: a DVR converts analogue images to digital format, while an NVR, generally speaking, only functions with digital images.
Both DVR and NVR record video sequences and store them on an external hard drive. However, there are differences regarding how the data is processed and how the equipment is configured.
DVR recorders need to be on the same site as all the analogue signal wiring, i.e. this equipment is connected to an analogue CCTV system via a coaxial cable to the analogue signal input of the DVR. In the case of the NVR, this is not necessary. The equipment can be located outside the facility where the security cameras are located. This is because the connection with the cameras is via IP (LAN or WAN), which guarantees correct data traffic.
The video in a DVR is encoded and processed in the DVR, while video on an NVR is encoded and processed in the camera and then transmitted to the NVR for storage or remote viewing.
Number of cameras
A DVR can only connect to a limited number of cameras, while an NVR can connect to a virtually unlimited number. NVR systems use PoE and wireless IP cameras, whereas a DVR recorder connects to HD security cameras and other CCTV equipment, where image and sound quality is lower.
NVRs have increased storage and processing capacity, a vital feature for video analytics solutions. This allows businesses to have a device that provides greater storage capacity, remote access, data security, and scalability.
If you still have doubts about the type of recorder you need in your installation, you should ask yourself: What kind of hardware is wired? Are you experienced in designing and configuring network devices? What kind of maintenance does the system require? What kind of access will the installation allow? Remote access? among others. Depending on your answers, you can opt for one system or another or even a hybrid NVR/DVR surveillance system that integrates both NVR and DVR functions.
As video surveillance systems evolve technologically, they bring new opportunities for accuracy and efficiency. However, our video analytics systems are compatible with both NVR and DVR recorders, achieving full integration and maximising security and protection levels.
Learn how DFUSION can improve your perimeter security system!
Penetration testing aims to detect vulnerabilities and errors that threaten the security of a system – ICT infrastructure, network, application, or website.
Pentests should be carried out systematically and can vary in scope, most often forming part of an audit of IT systems and infrastructure. Their main objective is to examine how resistant a network is to intrusion and how effective the security measures in place are.
We perform penetration testing at all levels:
White box or crystal box tests – testers work from information about the system under test provided by the client.
Black box tests – testers independently obtain the information necessary to compromise systems – this model best reflects an actual cyberattack.
Gray box tests – a hybrid of the two methods mentioned above.
An essential part of each test is a summary report that describes the problems identified and assesses the risk of occurrence. It includes specific recommendations aimed at their effective elimination.
Penetration testing should be a mandatory part of your IT strategy if:
You are the owner or manager of a company that has an IT infrastructure
You store data, especially sensitive and personal ones
You want to protect your knowledge, patents, and company know-how
In your company, at least part of the team works remotely.
You will review the effectiveness of the security features
You will find out how well your assets are currently protected.
You will investigate the system’s vulnerability to a potential cyberattack
You will launch a hacking attack on the infrastructure in a controlled manner.
You will make the most of your equipment
You will learn how to get rid of vulnerabilities using existing solutions.
You will set up a recovery plan
You will receive detailed recommendations in line with the best security practices.
You will avoid the costs of a real attack
Both in terms of image and in terms of stopping production lines or operations.
You will gain credibility
You will appear to your business partners as a technologically aware partner that is ahead of the competition.
It is best to do this periodically, at least once a year, and whenever changes are made to the systems. After the pentest phase, it is also worth performing a re-test, i.e. verification of the changes made (checking that they have been implemented correctly and have not led to new security vulnerabilities).
A pentest, depending on the type, size, and complexity of the structure being tested, can take from a few days to several weeks.
The technical competence of pentesters is confirmed by certifications, issued by international cybersecurity organizations. It is also important to participate in projects similar to yours.
A penetration test only deals with its specific part (infrastructure, application, network or website) and is part of a security audit. The audit covers the entire system being audited.
Grandmetric’s security reports are detailed and meticulously produced documentation. They include by default:
“In today’s world, the methods and so-called vectors of attacks and spreading hazards are exceptionally diverse, not to say sophisticated. We are faced with an ever-increasing number of possible interfaces and protocols connecting different parts of the IT environment. This is why every place where a potential attack could occur should be taken into account.”
Marcin Biały, Advisory Architect | Board Member at Grandmetric
The most effective cybersecurity programs center on adaptability, and recent innovations in artificial intelligence (AI) are certainly keeping today’s security teams on their toes. With new AI-based tools and services hitting the market at breakneck speed, their potential to both help and harm cybersecurity efforts is becoming increasingly apparent. While security teams and cybercriminals have long been engaged in a push-and-pull relationship, AI is being used on both sides of the fence.
AI is being leveraged to defend organizations against new and existing threats, but it’s also being employed by hackers — often using the exact same capabilities — to breach these defenses faster and with less effort. For this reason, it’s helpful for security leaders to be aware of the common ways AI is being used to propel these conflicting agendas. Security leaders then have the challenging task of weighing the opportunities presented by AI with its risks in order to protect their organizations and their data.
How AI is Enhancing Cybersecurity
There are a nearly endless number of use cases for AI to accelerate a company’s cyber defense strategy. It’s particularly beneficial in helping security teams answer questions like, “If I were a hacker on the outside looking in, where would I find gaps in our security program?” The potential for AI to proactively identify critical vulnerabilities and support remediation is one of its most compelling benefits. For example, AI can be used to crawl a company’s network perimeter to explore which systems or applications are internet-facing and what risks they may carry. With its ability to analyze massive quantities of data quickly, well-trained Large Language Models (LLMs) can augment manual security processes to find and fix issues at a speed that was previously impossible.
How a company takes advantage of AI for cybersecurity will depend on its priorities. Security teams that are concerned about potential issues lurking undetected or being missed during manual reviews may deploy red teams to conduct penetration testing. These teams go on the offensive to test the company’s security measures and can use AI to identify weaknesses and write custom exploits. For other teams, it may be a greater priority to leverage AI’s behavioral monitoring capabilities to identify and address insider threats, such as employees’ attempts to exfiltrate sensitive data. Or they may seek AI’s help in maintaining compliance with constantly shifting industry standards and regulations. Finally, they may even use AI to fight AI-powered attacks. For instance, they can use it to identify automated attack tools or expose malicious code.
While there are traditional methods of tackling all these security issues, AI is taking defensive efforts to the next level. Unfortunately, it’s also elevating what’s possible on the attack front, so despite all its benefits, threats from AI should not be underestimated or ignored.
How Attackers are Leveraging AI
First, there are risks stemming from the fact that many new AI tools are being pushed to market before developers fully understand how to secure them. With this in mind, companies should thoroughly vet all AI tools before approving their use across the enterprise. In reality, however, employees often use AI-based platforms or services that haven’t been authorized by the IT department, including content creation aids like ChatGPT or Google Bard. Security leaders need to plan for this. In order to configure their security controls effectively, they need greater visibility into how these solutions are actually being used within their environments — whether they are approved or not.
There are also more intentional threats posed by bad actors. AI is being used to power many different types of attacks — and, even more worryingly, it is lowering the technical barrier to entry for hackers. Today’s hackers can use AI to write highly believable phishing emails or collect and analyze customer data for credential-stuffing attempts. It can also be employed in the initial stages of an attack, when hackers are researching the organization they’re targeting and gathering intel. LLMs can power open-source intelligence (OSINT) to craft comprehensive dossiers on a company’s weak spots, such as publicly-facing assets with known exploits. These capabilities are reducing the upfront lift for hackers, allowing them to carry out well-researched attacks with ease.
Other Best Practices for Responsible AI Use
While it’s critical for security leaders to understand the pros and cons of AI in cybersecurity, these are only the tip of the iceberg. They’ll also need to stay closely tuned into new government frameworks and strategies around this technology, such as the Biden Administration’s recent Executive Order on AI and the first global AI Safety Summit recently held in the United Kingdom. These regulatory efforts aim to address big-picture concerns around AI, including pressing privacy concerns, ethical considerations, potential biases in training models, and more.
Organizations need to better understand and secure their AI-based tools through acceptable use policies, strong security controls, and training for employees. Education raises awareness on how to take advantage of AI’s productivity and time-saving benefits without putting the company at risk or losing data. In some cases, product and engineering teams will need to consider how to securely integrate AI into their offerings. Organizations must ensure they’re not blindly trusting the outputs of AI and that they have the right balance of automation, human judgment, and integrity.
As today’s companies continue to digitize, they’re exposed to a growing number of cyber threats. Thankfully, this same technology can be used to bolster their defensive mechanisms. As AI adds a new dimension to the ongoing battle between cyber defenders and attackers, security leaders who understand its capabilities on both sides of the coin and take a holistic view of AI in the enterprise can make AI work for them, rather than against them.
The sum function adds the set of numbers in the specified measure, grouped by the chosen dimension or dimensions. For example, sum(profit amount) returns the total profit amount grouped by the (optional) chosen dimension.
sum(measure, [group-by level])
The argument must be a measure field; literal values don't work. Null values are omitted from the results.
(Optional) Specifies the level to group the aggregation by. The level added can be any dimension or dimensions independent of the dimensions added to the visual.
The argument must be a dimension field. The group-by level must be enclosed in square brackets [ ]. For more information, see LAC-A functions.
The following example returns the sum of sales:

sum(Sales)
You can also specify at what level to group the computation, using one or more dimensions in the view or in your dataset. This is called a LAC-A function. For more information, see LAC-A functions. The following example calculates the sum of sales at the Country level, but not across other dimensions (Region and Product) in the visual:

sum(Sales, [Country])
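As a rough analogy (not the product's implementation; the column names are invented), a LAC-A sum behaves like a pandas group-by transform, attaching the Country-level total to every row regardless of the other dimensions:

```python
import pandas as pd

sales = pd.DataFrame({
    "Country": ["US", "US", "UK", "UK"],
    "Region":  ["East", "West", "North", "South"],
    "Sales":   [100, 150, 80, 120],
})

# Plain sum: a single total over the rows in scope.
total = sales["Sales"].sum()  # 450

# LAC-A-style sum fixed at the Country level: each row receives the
# total for its Country, independent of Region (or Product).
sales["country_sales"] = sales.groupby("Country")["Sales"].transform("sum")
print(total)
print(sales)
```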
ETL vs ELT in Data Warehousing
When data is extracted from disparate sources and then transformed before loading, the process is termed ETL (extract, transform, load). Transformation actions include applying calculations and altering data types. After the data is transformed, it is loaded into the target database, the data warehouse. In ETL, the heavy lifting, that is, the transformation, is conducted by the ETL software. ETL is used in the following cases:
- The source and target databases are different and use varied forms of data
- Structured data is present
- Compute-intensive transformations are required
- Data volumes are small or moderate
The pre-structured nature of the OLAP data warehouse is ETL's biggest advantage. Because the data is transformed and structured up front, ETL allows speedier, more efficient and more stable analysis, which is not the case with ELT. Compliance is ETL's other advantage. When companies need to mask, remove or encrypt particular fields to protect their clients' privacy, ETL carries out these transformations more safely, because the data is transformed before it is stored in the data warehouse.
ETL also reduces the risk of compliance violations, since non-compliant data cannot accidentally find its way into reports or the data warehouse. Several ETL platforms and tools are available to handle data extraction, transformation and loading.
ELT (extract, load, transform) is the process in which data is extracted, loaded into the target warehouse, and transformed only after loading. With ELT, the transformation work is carried out by the target database itself, typically a cloud installation or a NoSQL platform such as Hadoop. ELT is used in the following cases:
- The source and target databases are of the same type
- Unstructured data is present
- The target database engine is adaptable enough to handle voluminous data
- Data is present in large volumes
The main advantage of ELT is the ease and flexibility of storing new and unstructured data. With ELT, any sort of information can be saved even when there is no time or ability to structure or transform it first, so the complete information is immediately available whenever it is needed. ELT also saves time: there is no need to build complex ETL processes before ingesting data, which lets BI analysts and developers work with fresh information sooner.
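The difference is easiest to see in code. In the minimal, self-contained sketch below (table and column names are invented for illustration), ETL transforms rows in the pipeline before loading, while ELT loads raw rows and lets the target engine do the transformation:

```python
import sqlite3

def extract():
    # Stand-in for pulling raw rows from a source system.
    return [("2024-01-01", "49.90"), ("2024-01-02", "12.50")]

def run_etl(conn):
    # ETL: transform in the pipeline, then load clean rows.
    rows = [(day, float(amount)) for day, amount in extract()]
    conn.executemany("INSERT INTO sales_clean VALUES (?, ?)", rows)

def run_elt(conn):
    # ELT: load raw rows first, then transform inside the target engine.
    conn.executemany("INSERT INTO sales_raw VALUES (?, ?)", extract())
    conn.execute(
        "INSERT INTO sales_clean SELECT day, CAST(amount AS REAL) FROM sales_raw"
    )

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales_raw (day TEXT, amount TEXT)")
conn.execute("CREATE TABLE sales_clean (day TEXT, amount REAL)")
run_etl(conn)
run_elt(conn)
print(conn.execute("SELECT COUNT(*) FROM sales_clean").fetchone())  # (4,)
```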
Difference between ETL and ELT:
| Parameter | ETL | ELT |
|---|---|---|
| Stands for | Extract, transform, load | Extract, load, transform |
| Maturity | In existence for over 20 years; designed to work with structured and unstructured data, relational databases and large data volumes. Many experts can guide its use, and users can choose from a wide range of ETL tools. | Less widely adopted than ETL, since its design is not aimed at relational databases, which have dominated the market for the past 20 years. |
| Suitable for | Smaller data volumes and complex computations; structured data; on-premise relational databases | Large data volumes and less complex computations; unstructured data; data lakes; cloud environments |
| Flexibility | Mature ETL tools suit relational databases but are generally less geared toward unstructured data. Data must also be mapped out before it can be moved to the target database. | ELT tools handle a blend of structured and unstructured data well and deliver the complete data set to the target, improving flexibility. |
| Maintenance level | High, since time is consumed in loading and transforming | Low, since the data is always available |
| Cost | High | Low |
| Unstructured data support | Relational data supported | Unstructured data supported |
Hyperconverged Infrastructure (HCI) is a software-defined IT infrastructure server. Instead of using hardware to operate each server element (such as storage, network and compute), the infrastructure elements exist as software, which is known as virtualization.
IT administrators can easily create virtual infrastructure within HCI servers. This enables fast application and resource deployment. Since it takes up less space and is easier to cool, HCI can also reduce IT infrastructure costs.
HCI servers are built by the manufacturer, so they can’t be customized. This contrasts with converged infrastructure, which is modular in design.
| Converged Infrastructure (CI) | Hyperconverged Infrastructure (HCI) |
|---|---|
| Separate compute, network, and storage components | Combined compute and storage; network remains separate |
| Components can be sourced from the same or different vendors | Components are sourced from the same vendor |
| Scale components individually | Scale compute and storage together |
| Different software/firmware per component | Software-defined storage runs on compute nodes |
| Architected together after component inception | Architected together before component inception |
| External cohesion through multiple management planes | Internal cohesion through a single management plane (usually the hypervisor) |
Hyperconverged infrastructure enables a multicloud environment through unified management. This capability is critical, as it imparts the ability to automate and orchestrate infrastructure or components like APIs that provide programmability going to or from the public cloud.
Automation and orchestration can help you refocus on the business rather than the day-to-day activities of keeping the lights on, responding to alerts, and provisioning new servers or applications. These capabilities allow an IT organization to increase their strategic capacity to help the business, rather than just maintaining the status quo.
HCI solutions deliver the following benefits:
Many organizations look to HCI when implementing or improving Virtual Desktop Infrastructure (VDI) or server virtualization as a way to reduce complexity and minimize upfront costs.
Growing environmental and labour concerns in the global ship breaking industry present a desperate need for green compliant ship recycling facilities.
The global ship recycling industry is dominated by ship breaking yards in South East Asia. In 2019, approximately 90% of all obsolete ships ended up on beaches in India, Pakistan or Bangladesh, according to the NGO Shipbreaking Platform. These shipbreaking yards demolish obsolete vessels under rudimentary conditions through a practice termed “beaching”. Beaching inevitably pollutes the ocean and its surroundings and creates unsafe working conditions, as most of the work is done manually.
The demolition of ships is a hazardous and labour-intensive process. It can present great risks to the maritime environment and to the labour rights of its employees if the vessel is not recycled in a safe and sustainable manner. With a combination of changing environmental legislation and increased stakeholder pressure, green compliant facilities that offer competitive vessel purchase prices are gaining increased prominence in the global ship breaking industry. The shift in the industry is further promoted by shipping companies implementing their own internal stringent ship recycling regulations that ensure that their end-of-life vessels are recycled in compliant facilities.
These changing market dynamics will continue to open opportunities for certified green ship recycling facilities in key locations. A good case study is 34South, a planned ship recycling facility located on the west coast of South Africa in the Saldanha Bay Industrial Development Zone. The site offers a prime location for end-of-life vessels passing the Cape of Good Hope, bypassing the tolls of the Suez Canal.
The 34South facility maintains state-of-the-art equipment by making use of a ship lifting system, ensuring that vessels are decommissioned in an environmentally safe manner compared to the rudimentary method of beaching. This facility will maintain safe working and environmental standards in accordance with the International Ship Breaking laws and regulations, ensuring a green, sustainable, and compliant facility. The ship lifting system will support a common user philosophy and can create economies of scale by accommodating more than one vessel at a time.
The 34South planned facility is supported by the Industrial Development Corporation (IDC) of South Africa, as its largest shareholder.
A team of German security researchers has developed a new class of web cache poisoning attacks that could make victim services inaccessible.
Caches can decrease network traffic by reusing HTTP responses, allow applications to scale, and help defend against denial-of-service (DoS) attacks.
Researchers at the Cologne University of Applied Sciences in Germany discovered a new attack in which a server-generated error page poisons the cache, which then serves worthless content instead of the legitimate resource.
The attack was demonstrated against a proxy cache tool (Varnish) and five CDN services that cache high-value websites, namely Akamai, CDN77, Fastly, Cloudflare and CloudFront, all of which were found to cache error pages.
“It is dangerous, as a simple request is sufficient to paralyze the website of a victim in a vast geographical area,” the researchers warn, adding that a comprehensive understanding of the causes of and countermeasures to the newly introduced CPDoS attack is highly valuable for practitioners deploying robust and secure distributed systems.
The attack exploits a general problem in layered systems: when the same message is interpreted sequentially by different components, variations in interpretation arise. The cacheable HTTP request generated by the attacker contains erroneous fields that are ignored by the caching system but cause an error when processed by the origin server.
As a result, the intermediate cache receives an error page from the origin server and is poisoned with it. The new class of attacks is dubbed “Cache-Poisoned Denial-of-Service” (CPDoS), because the useless content makes the target service unreachable.
During the study, researchers studied empirically how 15 existing web caching solutions handle HTTP requests, containing inaccurate fields and caching the resulting error pages, and discovered vulnerable services already alerted to the problem.
The attack exploits the semantic gap between two HTTP engines, one in a shared cache and the other on an origin server. The deployed caching system is typically more permissive in processing requests than the origin server, which allows the attacker to slip harmful headers into a request.
If these headers are forwarded to the origin server without modification, the request passes through the cache without problems, but processing on the server leads to an error. The server replies with that error, which the cache stores and reuses for repeat requests.
The result is that every user who requests the poisoned URL receives the cached error message instead of the real content. According to the whitepaper, a single request flying below the detection threshold of web application firewalls and DoS protections is enough to replace the actual content in the cache with an error page.
Harmless CPDoS attacks can break images or styles and mar the visual appearance of applications, but more severe attacks may render whole web applications unavailable. CPDoS attacks could also block patches or firmware updates distributed via caches.
“Attackers can also turn off major security warnings or updates on sensitive project pages, such as online banking or government official websites. Imagine, for example, that a CPDoS attack would prevent warning users about phishing e-mails or natural disasters, “the researchers say.
An attacker can exploit this without a chance of detection, but with a high probability of success, meaning CPDoS poses a high risk.
In their paper, the researchers present three variants of the general CPDoS attack: HTTP Method Override (HMO), in which the malicious client sends a GET request carrying a method-override header; HTTP Header Oversize (HHO), in which the client sends a GET request whose header is larger than the origin server accepts but smaller than the cache's limit; and HTTP Meta Character (HMC), in which the client sends a GET request containing a harmful meta character that the cache forwards but the origin server rejects.
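As an illustration of the HMO variant, a single crafted GET is enough to probe whether a cache stores the resulting error page. The sketch below uses a hypothetical target; run this kind of test only against systems you are authorized to assess:

```python
import requests

url = "https://example.com/index.html"  # hypothetical, authorized target

# A plain GET that a shared cache would normally store.
baseline = requests.get(url)

# HMO probe: the cache sees a cacheable GET, but an origin framework that
# honors X-HTTP-Method-Override treats it as a POST and returns an error,
# which the cache may then store and serve to everyone.
probe = requests.get(url, headers={"X-HTTP-Method-Override": "POST"})

# Re-request without the header: if the error status now comes back,
# the cached copy has been poisoned.
check = requests.get(url)
print(baseline.status_code, probe.status_code, check.status_code)
```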
Experiments showed that eight US Department of Defense websites, more than a dozen Alexa Top 500 pages and millions of URLs contained in an HTTP archive dataset are vulnerable to CPDoS attacks.
“According to our studies, 11% of the DoD websites are vulnerable to CPDoS attacks, as are 30% of the Alexa Top 500 websites and 16% of the URLs in the analyzed dataset. Such cached contents also include mission-critical firmware and update files,” the researchers note.
Some of the vulnerable resources, due to their use of CloudFront as the CDN, are ethereum.org, marines.com and nasa.gov. The researchers were able to block texts, style sheets, images and even interactive data.
The researchers reported the vulnerabilities to HTTP implementation vendors and cache providers (including AWS, Microsoft, Play 1 and Flask) in February 2019 and worked closely with them to eliminate the identified risks.
Although removing error pages from caches seems to be the most logical and efficient countermeasure against CPDoS attacks, in many cases this could have an impact on performance.
When buying a hard disk drive, UK storage pros will encounter many different product specifications. This article explains the key features of hard disk drives -- latency, typical seek times, rotational latency/rotational speed, data transfer rates, error correction codes and buffer/cache size -- so you can specify the right one for your data centre.
Seeks and latency
How quickly the disk can find and read a sector is determined in part by access time. Reading a particular sector consists of two steps. First, the head must be moved to the correct track. Then, once the head is over that track, you must wait for the sector to spin under the head and read the sector. Seek time is the time required for the head to position itself over a track. The latency period is how long it takes the desired sector to move under the head.
Moving the head takes a lot longer than waiting for the sector to come around. So low seek times (the time to move the head) are critical to good disk performance.
Access time (time to find a sector) equals seek time (time to move to the sector's cylinder) plus rotational latency period (time to wait for the sector to rotate around and appear under the heads).
Typical seek times
Of the seek time and the latency period, the seek time is usually the longer wait. Seek time is usually expressed in milliseconds (ms). It varies according to how many tracks the heads must traverse. A seek from one track to the next track is usually quick -- just a few milliseconds -- but most seeks aren't so convenient.
Remember, the lower the seek time, the better. Note that in current computing environments, a millisecond is a long period, considering that the measure for modern system memory is nanoseconds. This means the system may have to wait for the hard disk.
A common measure of an average seek is the time the system requires to travel one-third of the way across the disk. Most benchmark programs use this measurement. You might wonder, "Why not halfway across the disk, rather than one-third?" The reason is that most accesses are short seeks of just a few tracks.
Early hard drives shipped with seek times of almost 100ms. Today, the average seek time on a new drive is between 5ms and 10ms. In general, how low you can go depends on what you're willing to spend on a drive, as seek times are built in: there's no way to improve a drive's seek time short of getting a new drive.
Rotational latency/rotational speed
Once a head positions itself over a track, the job's still not done. Now the head has to wait for the correct sector to rotate to a position beneath it. How long this takes is a matter of luck. If you're lucky, the sector is already there; if you're really unlucky, you just missed it and will have to wait an entire revolution for it to come round again. As mentioned above, this waiting time, whether large or small, is the rotational latency period.
A common number cited is average latency period. This makes the simple assumption that, on average, the disk must make a half-revolution to get to your sector. Manufacturers calculate the latency period from the spindle speed. Latency, like seek time, is normally expressed in milliseconds.
Rotational latency is directly affected by rotational speed. Depending on the model, disk drives rotate between 3,600 rpm and 15,000 rpm. For a disk rotating at 3,600 rpm, one-half revolution takes 1/7,200 of a minute or 8.33ms. This contributes to the amount of time the system must wait for service (the rotational latency).
The higher the spindle's speed (the rpm), the lower the average latency. Calculate the average latency based on a half rotation of the disk; calculate the worst-case latency on a full rotation of the disk.
Data transfer rate
This is how fast a disk can transfer data once it has been found. Specifically, the transfer rate is a measure of the amount of data that the system can access over a period of time (typically one second). It's determined by the external data transfer rate and the internal transfer rate.
The external data transfer rate is the speed of communication between the system memory and the internal buffer or cache built into the drive. The internal data transfer rate is the speed at which the hard disk can physically write or read data to or from the surface of the platter and then transfer it to the internal drive cache or read buffer. Transfer rates vary depending on the density of the data on the disk, how fast the disk is spinning and the location of the data.
Error correction code (ECC)
No electronic data transmission or storage system is perfect. Each system makes errors at a certain rate. Modern disks have built-in error detection and error correction mechanisms.
Disk systems are great as storage media, but they're volatile. From the first second after you lay a piece of data on a disk, it starts to 'evaporate.' The magnetic domains on the disk that define the data slowly randomise until the data is unrecognisable. The disk itself and the media may be fine, but the data image can fade after some years.
Disk subsystems are aware of this and include some method of detecting and correcting minor data loss. Because the disk subsystem can detect but not correct major data loss, the controller includes extra data, known as the error correction code, when it writes information to the disk. When the controller reads back this information, it can detect whether errors have occurred in the data. The basic idea is that the controller stores redundant information with the disk data at the time that the data is originally written to disk. Then, when the data is later read from disk, the disk controller checks the redundant information to verify data integrity.
ECC calculations are more complex than a simple checksum. The ECC that most manufacturers implement in hard disks (and CD-ROMs) uses the Reed-Solomon algorithm. The calculations take time, so there's a tradeoff; more complex ECCs can recover more damaged data, but they take more computation time. The number of bits associated with a sector for ECC is a design decision, and it determines the robustness of the error detection and correction. Quite a number of modern disks use more than 200 bits of code for each sector.
Some controllers let you use an x-bit ECC. In this example, x refers to the number of consecutive bad bits the ECC can correct. The original ATA hard disk controller, for instance, could correct up to five bad consecutive bits. That meant it had a "maximum correctable error burst length" of 5 bits. Newer controllers can usually correct up to 11 bits. Some of the newest drives installed in the latest machines are using special high-speed controller hardware to do 70-bit error correction.
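Real drives use Reed-Solomon codes, which are considerably more involved, but the detect-and-correct idea can be shown with a toy triple-repetition code: store every bit three times and take a majority vote on read:

```python
def encode(bits):
    # Toy ECC: store every bit three times (real drives use Reed-Solomon).
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(coded):
    # Majority vote over each triple corrects any single flipped copy.
    out = []
    for i in range(0, len(coded), 3):
        triple = coded[i:i + 3]
        out.append(1 if sum(triple) >= 2 else 0)
    return out

data = [1, 0, 1, 1]
stored = encode(data)
stored[4] ^= 1  # one stored bit "fades" on the platter
assert decode(stored) == data
print("single-bit error detected and corrected")
```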
Disk drives are slow. Your computer's RAM responds to requests in tens of nanoseconds, but the disk drive responds to requests in tens of milliseconds. That's six orders of magnitude difference in speed.
Whenever you're moving data between a faster medium and a slower one, adding a cache to hold recently used or anticipated data can improve performance by reducing the amount of data that needs to travel through the bottleneck area. A hard disk's performance can similarly be improved by caching. Many manufacturers refer to the cache as a buffer in their drive specifications.
A disk cache seeks to use the speed of memory to bolster the effective speed of the disk. The cache is held in memory chips and is usually one to a few megabytes in size. The operating system can access data previously placed in the disk cache on an as-needed basis. Using this disk cache can cut down on the number of physical seeks and transfers from the hard disk itself.
Smart caching algorithms generally mean that there's no need to change the size of the disk cache. This cache buffer acts as a holding area for one or more tracks, or even a complete cylinder's worth of information in case you need it. This cache buffer can be effective in speeding up both throughput and access times.
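The mechanics can be sketched with a toy least-recently-used read cache in front of a slow read routine; repeat accesses become fast memory hits instead of physical seeks (this is a stand-in, not a real driver):

```python
from collections import OrderedDict

class ReadCache:
    """Tiny LRU read cache in front of a slow read routine."""
    def __init__(self, slow_read, capacity=4):
        self.slow_read = slow_read
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = self.misses = 0

    def read(self, sector):
        if sector in self.store:
            self.hits += 1
            self.store.move_to_end(sector)  # mark as recently used
            return self.store[sector]
        self.misses += 1
        data = self.slow_read(sector)       # millisecond-scale disk access
        self.store[sector] = data
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used
        return data

cache = ReadCache(slow_read=lambda sector: f"data@{sector}")
for sector in [7, 7, 8, 7, 9, 8]:
    cache.read(sector)
print(cache.hits, cache.misses)  # 3 hits, 3 misses
```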
Artificial Intelligence - A Blessing or a Curse?
Artificial Intelligence (AI) is a growing market that will affect nearly every business and person around the globe in the coming decades.
There is a myriad of benefits that AI can offer to businesses – something scientists and researchers have known for years. However, more recently the dangers of AI have come to light, especially with public figures like Elon Musk and Stephen Hawking expressing their worries.
AI is beneficial due to its ability to replicate actions of humans without human deficiencies such as emotion, fatigue and poor time management. AI offers a lower error rate as well as cheaper labour; computers do not have office drama and they do not care what tasks you make them do – no matter how tedious or challenging. Additionally, computers can attain, store and process more data than a brain, and in much less time. One of the major disadvantages of AI will be job loss due to its superior processing power.
What does AI mean for email?
When it comes to email and email archiving, AI will help revolutionise the market. With more than 205 billion emails sent per day, it is difficult to determine whether you are reaching the right people and how effective the emails you send really are. Artificial intelligence can personalise email marketing like never before, which should theoretically increase the response and conversion rates of marketing communications. In addition, AI has potential in spam filtering and antivirus. Most spam filters have to be reactive to be effective: mail items must be marked as spam, whether automatically or manually, before the filter can start blocking them. AI will be able to recognise patterns in things like message structure and sending behaviour to proactively block new spam or hack attempts.
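As a toy illustration of that pattern-learning approach, the sketch below trains a naive Bayes classifier with scikit-learn on an invented four-message corpus; a real filter would learn from vastly more data and richer features such as headers and sending behaviour:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = [
    "win a free prize now", "urgent claim your reward",  # spam
    "meeting notes attached", "lunch tomorrow at noon",  # legitimate
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = legitimate

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(emails)
model = MultinomialNB().fit(features, labels)

print(model.predict(vectorizer.transform(["claim your free prize now"])))  # [1]
print(model.predict(vectorizer.transform(["notes from the meeting"])))     # [0]
```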
Should we be scared?
Artificial intelligence is an extremely powerful technology, so much so, that tech mogul Elon Musk and acclaimed cosmologist Stephen Hawking have publicly spoken out about the dangers AI poses to humanity and the need to impose regulations now. Musk has said that AI is a “fundamental risk to the existence of civilisation”, while Hawking gives his grim view on AI by telling the BBC “I think the development of full artificial intelligence could spell the end of the human race.” Much of the argument against AI is driven by fear…fear of the unknown and fear of intelligent technology. Think about it, the reason human beings are able to capture and lock a lion in a zoo, merely for our own entertainment, is because we are more intelligent than lions.
The fear of making computers/robots more capable and intelligent than humans will make anyone uncomfortable. But, in reality there are only two scenarios in which humans need to worry about AI. Firstly, if the technology is given to a malevolent person and they program the machine to do something troubling. Secondly, if the AI is programmed to do something beneficial but develops a damaging course for accomplishing its goal (i.e. a self-driving car is told to take you to the airport, but it does so by speeding on the sidewalk not obeying lights or signs hitting humans along the way).
Rather than fearing AI, businesses should embrace it. Artificial intelligence offers countless opportunities for businesses looking to increase the skills of their workforce and revolutionise their user experience. Businesses who embrace AI will be ahead of those who ignore or fear it. The UK government, for example, has acknowledged the importance of AI in the UK economy, and has promised to invest £20m in robotics and AI. Whether we like it or not, because of the cheap labour, low error rate and no human shortcomings, AI will continue to grow and be implemented throughout businesses around the globe. | <urn:uuid:adf9e1d0-08c8-48b9-9520-e7fef6062c58> | CC-MAIN-2024-38 | https://www.cryoserver.com/blog/artificial-intelligence/ | 2024-09-15T23:25:35Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651668.26/warc/CC-MAIN-20240915220324-20240916010324-00549.warc.gz | en | 0.956351 | 757 | 2.640625 | 3 |
How zero-knowledge encryption keeps your files private
What does zero-knowledge encryption mean?
Zero knowledge, or a no-logs policy, means that every bit of information is treated with complete confidentiality. The company provides the software or service, but all the data remains on the user’s side.
In encryption, zero-knowledge means that data is secured with a unique user key, which the app developer does not know. With zero-knowledge encryption, no one but the user can access their encrypted files.
What is zero-knowledge proof?
When you upload your files to NordLocker Cloud, you alone know what’s in them. But then if you edit or delete any of them, how do we know what to change without knowing what you uploaded? A concept known as zero-knowledge proof can help us answer this question. In other words, it helps clarify how you can prove you know something confidential without revealing any of the information. Or in our case, how we can make changes to your files without knowing what’s in them.
Imagine someone holding two different-colored balls in front of you. Let’s say the green ball is in the left hand and the red ball is in the right one. You close your eyes and when you open them, the green ball is now in the right hand, and the red is in the left. You can confirm they’ve been switched without revealing any more information.
This example may sound basic, but the key here is repetition. You may guess the balls correctly the first or second time. But repeat the exercise a thousand times and only someone who knows will be able to answer correctly every single time.
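The intuition can be put into numbers: an imposter guessing blindly survives n rounds with probability (1/2)^n, while someone who truly knows the secret always answers correctly:

```python
# Probability that a blind guesser answers correctly n times in a row,
# versus a prover who actually knows the secret.
for n in (1, 10, 20, 40):
    print(f"{n:>3} rounds: blind guesser {0.5 ** n:.2e}, honest prover 1.0")
# After 40 rounds the guesser's odds are below one in a trillion.
```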
Can zero-knowledge encryption work in the cloud?
Most users of encryption apps understand that it’s much easier to protect your privacy if you stay offline. However, zero-knowledge encryption can work even in the cloud. Unlike cloud storage providers that track everything you upload, NordLocker deals with encrypted data only.
How does zero-knowledge encryption work in the cloud?
Every time you drop a file into NordLocker, complex mathematical algorithms and ciphers scramble that data. It can only be unlocked with a secret key, one that only you know. You also get to choose if and when to upload files to the cloud for easy access and when to remove them. Whatever is uploaded to the cloud has already been encrypted with your key, so that data remains confidential and protected from hackers, data collection, or surveillance.
The benefits and drawbacks of encryption
Zero-knowledge architecture is generally viewed positively. And not just for the user’s sake. A business that handles data this way is safer because it can’t accidentally expose user data. If all passwords and files are end-to-end encrypted, hackers can’t steal them. And even if hackers do get into the server, they won't be able to decrypt any data. It’s much easier for a business to protect its reputation, prevent ransomware attacks, and comply with privacy laws when it uses zero-knowledge encryption.
Of course, most benefits focus on the user. Zero-knowledge policies help them keep their privacy and stay secure online even if they don’t understand the complexities of end-to-end encryption or zero-knowledge architecture. However, many companies employ user data to build new or improve old features. Privacy-focused companies, on the other hand, don’t have this luxury and may need more time developing features.
How does NordLocker use zero-knowledge encryption?
We aim to help you become the owner of your data. This means that nobody, even us, is able to peek into your files without your permission. We don’t store master passwords or recovery keys or collect data. Your files are encrypted with a key that’s derived from your master password.
In NordLocker, your data is stored in lockers (end-to-end encrypted folders). End-to-end encryption works because anyone can send you a message using your public key, but only you can decrypt it using your master password.
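As a minimal sketch of the principle (not NordLocker's actual scheme), the Python cryptography package can derive a key from a master password and encrypt a file locally, so that only ciphertext ever leaves the device:

```python
import base64
import os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

master_password = b"correct horse battery staple"  # known only to the user
salt = os.urandom(16)                              # random, stored with the data

# Derive the encryption key from the master password. The provider never
# sees the password, so it can never reconstruct this key.
kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt,
                 iterations=600_000)
key = base64.urlsafe_b64encode(kdf.derive(master_password))

ciphertext = Fernet(key).encrypt(b"contents of my file")
# Only `ciphertext` (plus the salt) is uploaded to the cloud.
print(Fernet(key).decrypt(ciphertext))  # b'contents of my file'
```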
What does zero-knowledge encryption mean to you?
Zero-knowledge encryption helps you protect your privacy. As we stated earlier, only you have access to your files. No one else, including NordLocker, knows what you encrypt and store in your NordLocker cloud storage. Even if our servers were breached, hackers wouldn’t get away with much because you hold the encryption key.
Anyone who uses encryption must remember one important thing. Companies that collect data know your name, email, password, and much more. That’s how they can help you if you ever forget your password or delete a file. But zero-knowledge encryption is also about security. The master password you created and the auto-generated recovery key are the only ways to get to your files. If you forget your master password and don’t have your recovery key, you could potentially lose your files too.
ISO 27001 | SSAE 18 SOC 2 Certified Sales: 317.275.0021 NOC: 317.275.0001
With data centers facing increased scrutiny for their carbon footprint, the news that California is paving the way for data center cooling technology that saves water could be welcomed news for the nation.
It’s possible that more states could follow in the footsteps of California, where building regulators recently allowed data center operators to install economization systems that bypass the use of water, insteading using specialized refrigerant fluid as a medium for trading heat with the exterior environment.
Previously, the California Building Standards Code required data centers to use economizers that pull outside air into the building or use water to transfer heat outside.
“You’re going to see a lot of new technologies come out like this,” predicted John Peter Valiulis, a marketing executive for Emerson Network Power, which had lobbied for the change. He said already there has been a lot of interest in the technology, which could pave the way for more innovation.
Based on an independent study, Emerson’s refrigerant-based system not only eliminates the need to use water in free cooling it also uses less energy than a water-side economizer in a majority of the state’s climate zones. In addition to California, the system has been deployed at about 50 sites throughout North America, the United Kingdom and Australia.
According to studies, water consumption is considered the second largest natural resource concern facing data centers — just behind power. A mid-size data center, for example, will use up to 130 million gallons of water a year annual for cooling — about the same amount of water used by three average-sized hospitals. However, according to The Wall Street Journal, attention is primarily focused on decreasing data centers’ use of power as an environmental e.
Up until now, most efforts to ease the use of water resources have focused on harvesting rainwater, reusing dirty water, or digging wells, according to the WSJ. Also, some data centers use waterless cooling units, but they could potentially require the use of more power.
Lifeline Data Centers, a wholesale colocation company headquartered in Indianapolis, consistently keeps up with the latest data center technology. Learn more about our state-of-the-art facilities by taking a virtual tour. | <urn:uuid:ab261353-5a7a-411a-a9e9-a584decc847b> | CC-MAIN-2024-38 | https://lifelinedatacenters.com/data-center/conserve-water/ | 2024-09-19T17:06:07Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652055.62/warc/CC-MAIN-20240919162032-20240919192032-00249.warc.gz | en | 0.935874 | 482 | 2.578125 | 3 |
Cross-Site Request Forgery (CSRF)
In github.com/dinever/golf versions prior to 0.3.0, CSRF tokens are generated using "math/rand", which is not a cryptographically secure random number generation, making predicting their values relatively trivial and allowing an attacker to bypass CSRF protections with relatively few requests.
CWE-352 - Cross-Site Request Forgery (CSRF)
Cross-Site Request Forgery (CSRF) is a vulnerability that allows an attacker to make arbitrary requests in an authenticated vulnerable web application and disrupt the integrity of the victim’s session. The impact of a successful CSRF attack may range from minor to severe, depending upon the capabilities exposed by the vulnerable application and privileges of the user. An attacker may force the user to perform state-changing requests like transferring funds, changing their email address or password etc. However, if an administrative level account is affected, it may compromise the whole web application and associated sensitive data. | <urn:uuid:74f562e4-52c6-41ab-bed3-f25cc54cf9a3> | CC-MAIN-2024-38 | https://devhub.checkmarx.com/cve-details/cve-2016-15005/ | 2024-09-10T00:29:10Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651164.37/warc/CC-MAIN-20240909233606-20240910023606-00249.warc.gz | en | 0.875651 | 205 | 2.640625 | 3 |
Encryption is the conversion of data from a readable format into an encoded (encrypted) format. A key or password is required to decrypt the data in order to read or process it. This can be used in cyberattacks such as ransomware but can also be used as a security technique to protect digital data [...]
Endpoint Detection and Response (EDR) is an endpoint security solution that is used to continuously detect, investigate and respond to cyberthreats. These solutions use data collected from endpoint devices to understand how cyberthreats behave and the ways that organizations respond to the cyberthreats.
Endpoint protection is an approach to protect endpoints or entry points of end-user devices such as desktops, laptops and mobile devices from being exploited by malicious actors and campaigns. | <urn:uuid:a4659542-3c26-4ea9-9797-d6524dbe9dc5> | CC-MAIN-2024-38 | https://www.blackfog.com/cybersecurity-101/prefix:en/ | 2024-09-10T00:46:15Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651164.37/warc/CC-MAIN-20240909233606-20240910023606-00249.warc.gz | en | 0.938029 | 155 | 3.03125 | 3 |
NTLM authentication is a legacy protocol used to authenticate users and computers in Windows-based networks. Despite the availability of newer and more secure protocols, NTLM is still widely used and required for deploying Active Directory, a crucial component of Windows-based networks. This is because NTLM is deeply ingrained in the Windows architecture, making it difficult to disable or restrict NTLM without causing damage to production systems.
Moving away from NTLM authentication and complying with the CIS benchmarks is challenging as it requires identifying which computers are using it and migrating to a more secure protocol without breaking anything. Many organizations continue to use NTLM as a fallback mechanism, despite the availability of more secure protocols like Kerberos and OAuth. However, using newer protocols is recommended as they offer stronger security and better protection against certain types of attacks.
NTLM has two versions - NTLMv1 and NTLMv2. NTLMv2 is supposed to offer better security than its previous version, and to some extent it does provides better defense against relay and brute force attacks, but does not completely block them.
NTLM Authentication Server - Client Authentication Process
In a Windows-based network, the domain controller plays a critical role in managing the challenge/response exchange in the NTLMv1 authentication protocol. This involves generating a challenge to the client and validating the user's credentials by comparing the hashed password provided by the client with the stored hash value for the user's account. If the two values match, the user is considered authenticated and granted access to the requested resource.
The NTLM authentication flow is as follows:
- The client machine sends a request to connect to the server.>
- The server generates a random nonce to be encrypted by the client.
- The client machine encrypts the nonce with the password hash to prove knowledge of the password.
- The server validates the user’s identity by ensuring that the challenge was indeed created with the correct user/password. It does this either by using data from its own SAM database or by forwarding challenge-response pairs for validation in the domain controller.
How NTLMv2 is Different From NTLMv1
NTLMv2 follows a similar flow to NTLMv1 with a key difference: in step 3, the client includes a timestamp and username along with the nonce, which helps mitigate offline relay attacks. However, NTLMv2 still shares some vulnerabilities with NTLMv1 and doesn’t offer a complete solution. Additionally, NTLMv2 uses a variable-length challenge instead of NTLMv1’s 16-byte random number challenge.
How NTLMv2 Addresses Issues in the NTLMv1 Protocol
The NTLM cryptography scheme is relatively weak, making it relatively easy to crack hashes and derive plaintext passwords. It’s easy enough for standard hardware to be able to crack an 8-character password in less than a day. This is for three main reasons:
- The password hash is based on MD4, which is relatively weak.
- The hash is saved unsalted in a machine’s memory before it is salted and sent over the wire.
- A user must respond to a challenge from the target, which exposes the password to offline cracking. This prevents offline Relay attacks.
No mutual NTLM authentication:
This flaw exposes the protocol to man-in-the-middle (MITM) attacks due to one-way authentication, where the client doesn’t validate the server’s identity. A malicious actor can impersonate the server and send malicious data to the client. The most severe risk associated with NTLM is the exposure of servers in Active Directory environments to NTLM relay and remote code execution attacks. Other NTLM flaws are minor in comparison. In such attacks, the attacker hijacks the client-server connection and spreads laterally through the system using the user’s credentials. Despite Microsoft’s attempts to develop mitigation techniques, all patches have been compromised. No NTLM version provides a solution, leaving all NTLM users vulnerable to devastating attacks.
MITRE ATT&CK reference to NTLM authentication vulnerabilities
The MITRE ATT&CK framework add more relevant information to this known vulnerabilities by connecting these vulnerable flows and procedures to real life attack campaigns. As stated by MITRE ATT&CK, a PTH- Pass the hash attack can be formed by capturing and manipulating NTLMv1/v2 login processes:
From a classic Pass-The-Hash perspective, this technique uses a hash through the NTLMv1 / NTLMv2 protocol to authenticate against a compromised endpoint. This technique does not touch Kerberos. Therefore, NTLM LogonType 3 authentications that are not associated to a domain login and are not anonymous logins are suspicious. From an Over-Pass-The-Hash perspective, an adversary wants to exchange the hash for a Kerberos authentication ticket (TGT). One way to do this is by creating a sacrificial logon session with dummy credentials (LogonType 9) and then inject the hash into that session which triggers the Kerberos authentication process.
If it is not possible to disable NTLM in an infrastructure it is critical to monitor NTLM activity and configure it for optimal security and audit
How can you stop using NTLM authentication
CalCom's Hardening Suite (CHS) offers a solution to the challenges associated with abandoning NTLM. CHS learns your system and identifies servers that can continue to function without outages after disabling NTLM. It provides alerts on potential impacts and allows you to make informed decisions based on its findings. CHS automatically implements it on the entire production environment, reducing the risk of configuration drift. Learn more about it here. | <urn:uuid:70272ec4-d3ee-4bae-9dbd-ff45a8a7883a> | CC-MAIN-2024-38 | https://www.calcomsoftware.com/ntlmv1-or-ntlmv2-does-it-even-matter/ | 2024-09-10T01:05:41Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651164.37/warc/CC-MAIN-20240909233606-20240910023606-00249.warc.gz | en | 0.911565 | 1,222 | 3.046875 | 3 |
MPLS is a networking technology that uses “labels” rather than network addresses to determine the quickest path for traffic while SD-WAN is a wide area network that uses software-defined network technology. Enterprise MPLS networking is being disrupted by the rise of SD-WAN. In this blog, we will discuss the reasons why enterprises choose SD-WAN as an MPLS replacement.
Data routing on any network is similar to shipping a package. Data packets are forwarded from one router to the next until they eventually reach their destination. MPLS, or Multiprotocol Label Switching, makes this process faster by establishing pre-determined, highly efficient routes. Each router in the network has a table indicating how to handle specific packets. For example, real time voice and video packets can be assigned to low latency routes.
On the other hand, Software Defined WAN (SD-WAN), is a virtual network architecture with a centralized control system. SD-WAN essentially decouples the networking infrastructure from network functions. Instead of routers, a central software system, intelligently directs traffic throughout the network.
While both, MPLS and SD-WAN have their unique benefits depending on enterprise budget and requirement, more and more organizations are making the shift towards SD-WAN. This is mainly because SD-WAN is a cloud-based technology which offers advanced features such as bandwidth efficiency, a seamless on-ramp to the cloud and significant critical application performance. Enterprises are looking to replace MPLS with SD-WAN to improve resource utilization and add to profit margins.
Here are eight reasons why your organization should transition from MPLS to SD-WAN today!.
SD-WAN has a highly visible impact on the entire enterprise’s network performance. It can utilize your existing infrastructure to improve performance and increase efficiency without any elaborate cost investments. SD-WAN performance benefits include:
MPLS requires substantial, high bandwidth to work efficiently. This makes it an expensive proposition, especially for smaller enterprises. Organizations have to choose between exorbitant monthly costs or deal with intermittent bandwidth issues. SD-WAN, on the other hand, allows you to combine different, high and low bandwidth connections, giving pricing flexibility without bandwidth problems.
The SD-WAN network grows with your organization. It can become a challenge for an enterprise, especially those with less number of branch offices to justify the expense of a virtual network via an MPLS connection. SD-WAN, on the other hand, provides all businesses with the flexibility to scale across physical borders and establish their presence at a much lower cost.
Unlike MPLS, SD-WAN is service provider agnostic. This can be particularly helpful if your business requires connectivity at locations that are remote or not serviced by your specific MPLS vendor. An enterprise that has transitioned to SD-WAN need not settle for the average performance of a new MPLS service provider for such locations or pay an exorbitant price for high performance.
Unlike MPLS, SD-WAN allows the configuration of multiple “network tunnels” at different priority levels or SLAs (service level agreements). This gives improved Quality of Service at the Application level. SD-WAN can tie underlying network connections with business logic, allowing application level traffic routing. Low priority applications are transmitted over less expensive infrastructure compared to high priority applications. Application availability increases as traffic is no longer mapped to one WAN service.
SD-WAN gives high network visibility to administrators. This allows organizations to:
Therefore, a well implemented SD-WAN setup is more like an organization-wide security plan with maximum access and flexibility.
As organizations adopt digital transformation initiatives including cloud strategies, they require network infrastructure that can complement this growth and development. MPLS was invented before cloud technologies were developed and does not support cloud networking intrinsically. For example, in an MPLS setup, access to the cloud is via the data center. On the other hand,
SD-WAN supports cloud technologies by:
While the above reasons are compelling enough, the biggest motivation for enterprises to move from a pure MPLS environment to SD-WAN is the cost-efficiency. SD-WAN offers attractive operational and technological cost savings. It is a flexible and scalable, pay-as-you-go service model that significantly reduces per-megabyte cost of any enterprise networking operations.
Both MPLS and SD-WAN come with unique attributes. Read our in-depth SD-WAN vs. MPLS comparison. Here is the brief summary of the comparison.
Whether your enterprise is established or just starting out, SD-WAN is geared to supporting your business needs no matter where you are. Cost savings and security are offered in abundance, along with a granular quality of service. IT teams will need to take into account the entirety of their network infrastructure before committing to transition to SD-WAN. We’re here to help! Please fill out the form and one of our specialists will reach out to you right away. Keep in mind that there is NO fee for our services since the carriers pay us a commission to help you.
for immediate service or fill out theform and we’ll be in touch right away. | <urn:uuid:b6ddca46-a41b-4754-a367-17d0976738f0> | CC-MAIN-2024-38 | https://www.carrierbid.com/choose-sd-wan-mpls-replacement/ | 2024-09-11T05:56:25Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651344.44/warc/CC-MAIN-20240911052223-20240911082223-00149.warc.gz | en | 0.934387 | 1,077 | 2.625 | 3 |
State and local governments tend to be the early adopters of emerging technology in the public sector. Smart city projects, grant management, and regional collaboration have driven localities to implement emerging technology to meet the real challenges of serving citizens. Today, states and localities are experimenting with how virtual reality, and more specifically the metaverse, can help further real-world connections in communities.
Trained by Avatars
Virtual reality has long been used as a tool for training in government - think flight simulators - but today, the technology is being used for more than just tactical training. Virtual reality is helping to introduce scenarios to improve the empathy and understanding of public servants. In the metaverse, public safety professionals can safely simulate responding to dangerous situations (without the real-world risks) while also adding in realistic interactions with "people" behaving as they would during a crisis. Continue reading | <urn:uuid:d918af4e-7f72-4f9b-b6d9-bee7dec17bf8> | CC-MAIN-2024-38 | https://www.govevents.com/blog/tag/ai-programming/ | 2024-09-14T19:52:31Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651580.74/warc/CC-MAIN-20240914193334-20240914223334-00749.warc.gz | en | 0.952329 | 177 | 2.96875 | 3 |
There are few tasks an IT administrator will perform that are more important than securing and protecting the data stored in Active Directory. When done properly, Active Directory serves to authenticate those with permission to access the data while keeping everyone else out of the system.
It's a tricky balance. The users you serve want fast and easy access to their data. Place too many hoops in front of them, and they may attempt to route around your security policies. But you don't want to compromise the security either by loosening checks and balances that keep everyone safe.
Active Directory Permissions Best Practices
Active Directory is a complex directory service that started out as a domain manager on Windows. But since 2008, Active Directory has performed a number of critical directory, authentication and identity-based services. In simple terms, Active Directory determines what each user can do on the network. Over the years, Microsoft has built products that integrate with Active Directory services to improve network security. When properly configured, Active Directory and other services work in harmony to provide each user access to the data he or she needs to do their job.
This week, I'd like to look at number of best practices for securing data through the Active Directory model. Some of these recommendations take more planning, while some are generally simple to integrate. I hope at least one of of them is new to you. Let's get started!
Least-Privilege User Access (LUA)
This is a tip I'm sure you've heard before now. It's almost so obvious that many administrators overlook it. The idea behind it that all users should login to the network with the minimum permissions needed to carry out their job. Nothing more, nothing less. Following this principle keeps people from getting into areas of the network where they could cause problems. You don't want a user running rogue code in an area that could bring down the whole system. Yet we've all been there before.
LUA is the opposite of granting everyone administrative privileges, and then scaling back permissions as needed. It's one of the best tips for keeping your network safe. So why don't more administrators use this model? Well, it takes a lot of planning. You have to determine what each user needs to access on the network. Doing that for every user can take some time. In practice, I've seen similar approaches to this model that meet some of the requirements, but not all. For example, an administrator might create a group called accounting, and then place everyone in that department into that group. This approach assume everyone in accounting requires the same permissions. That's unlikely to be the case at any company unless accounting is comprised of one employee.
LUA takes planning and time. It's difficult to implement across the company. But you can begin with each new hire, and then tackle a group at a time. The time investment will be worth it in the long run. The goal here is avoid user accounts with broad and deep privileges across the company. And keep in mind, you can always grant more permissions as necessary.
Brush Up on the Security Model
Active Directory has changed a lot over the years, especially as Microsoft has given it more responsibility. Now would be a good time to brush up your understanding of how Active Directory is structured. Much like a relational database, Active Directory contains a schema that defines each object and its attributes. For example, the "user" object may contain a set of attributes which include first name, last time, department, manager, phone number and so on. These attributes help determine its permissions.
Each object in Active Directory has an associated security descriptor. This descriptor defines the permissions on that object. Of course, all these attributes comprise the permission set or Access Control. List (ACL). Understanding how ACLs are used to secure permissions for users and groups gets to the core function of what Active Directory provides a company. Going a step further, understanding how permissions are inherited is also very helpful.
This is a deep topic that's impossible to cover in a few paragraphs. It's unlikely that even the professionals, with many years of Active Directory experience, understand the entire security model. Paramount Defenses provides an excellent overview if you'd like a quick primer.
Protect and Update Software
This sounds reasonable. You might assume everyone already does this. Yet the WannaCry malware attack took down a number of servers running Windows Server 2003. Microsoft had released patches to thwart the attack, but it still struck far too many systems and networks. WannaCry is a good example of malware that located data on the server, encrypted it, and held it for ransom. WannaCry found a lot more victims running Windows XP and Windows 7, but one can imagine the damage such malware can cause on a server running Active Directory.
Patching all desktop and server software is an excellent start to keeping your environment safe. Retiring older hardware and software is another important practice. I've seen too many companies invest in desktop clients only to dip into the parts bin to create a server. Begin your investment with the server by using modern hardware running the latest software. Development environments running custom software are especially susceptible data intrusions. That same goes for all internet-facing applications, whether they rely on Active Directory or not. Keeping those systems patches is critical to the overall security of your environment.
Microsoft actually does a very good job of communicating security issues to its customers. In addition to TechNet, I like to monitor the Microsoft Secure Blog for the latest in cloud, cybersecurity and data privacy news.
Utilize Built-in Active Directory Features
Active Directory contains a number of nifty features that help to protect your data and your environment. Microsoft IT recommends using the following Active Directory features where applicable:AdminSDHolder - This ensures consistent enforcement of permissions on protected accounts and groups, regardless of location on the domain.Security Descriptor Propagator - This compares the permissions on the domain object with the permissions on the domain's protected accounts and groups. If it finds they don't match, it resets the permissions.Role-based Access Control - Allows the administrator to group users, and give them access to resources on the domain according to business rules. You should not use this as a shortcut to LUA.Privileged Identity Management - Allows the administrator to grant temporary rights and permissions to an account to perform build or break-fix functions.
None of these features is a security holy-grail. But adding one or more of them to your security plan can drastically decrease your risk for intrusions, and keep your data protected.
Few deployments are as critical to the success of your IT infrastructure as a Windows Server running Active Directory. With data residing on servers, files shares, and desktops not to mention mobile devices, it's more important than ever to ensure Active Directory is doing its part to keep you and your data safe.
The best security blanket take planning, time and a lot of patience. You may need to tweak your configuration now and then based on how your environment changes. Expect Microsoft to continue to add more features to Active Directory, giving it more responsibility than ever before. With data and applications moving to the cloud, a strong security model is as important today as it's ever been before. | <urn:uuid:decfa288-050a-42d5-b94f-08669fd959aa> | CC-MAIN-2024-38 | https://www.arcserve.com/blog/active-directory-permissions-best-practices-data-protection | 2024-09-19T20:39:46Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652067.20/warc/CC-MAIN-20240919194038-20240919224038-00349.warc.gz | en | 0.945193 | 1,469 | 2.671875 | 3 |
The General Data Protection Regulation, enacted by the European Union in 2018, is the world’s most important and broadly applicable data privacy law. Read on to understand what kind of data is protected by the GDPR, which rights it aims to enforce for owners of the data, and what your organization needs to do to protect personal data and avoid legal sanctions, including data protection considerations.
In this article you will learn:
• What is GDPR?
• How personal data is defined under the GDPR
• GDPR data privacy rights
• GDPR data protection requirements
• Protecting personal data with Cloudian storage
What is GDPR?
The GDPR is a legal standard that protects the personal data of European Union (EU) citizens and affects any organization that stores or processes their personal data, even if it does not have a business presence in the EU.
Because there are hundreds of millions of European Internet users, the standard affects almost every company that collects data from customers or prospects over the Internet. GDPR non-compliance carries severe sanctions, with fines up to 4% of annual revenue or €20 million.
GDPR legislators aimed to define data privacy as a basic human right, and standardize the protection of personal data while putting data subjects in control of the use and retention of their data.
There are two primary roles in the GDPR: the GDPR Data Controller is an entity that collects or processes personal data for its own purposes, and a GDPR Data Processor is an entity that holds or processes this type of data on behalf of another organization.
Finally, the Data Protection Officer is a role appointed by an organization to monitor how personal data is processed and ensure compliance of the GDPR.
What is personal data according to the GDPR?
“Personal data”, according to the legal definition of the GDPR legislation, is any information about an identified or identifiable person, known as a data subject.
Personal data includes any information that can be used, alone or in combination with other information, to identify someone.
This includes: name, address, ID or passport number, financial info, cultural details, IP addresses, or medical data used by healthcare professionals or institutions.
Other special data you may not process or store: Race or ethnicity, sexual orientation, religious beliefs, political beliefs of memberships, health data (unless the explicit concern is granted or there is substantial public interest).
Learn more in our article about data protection regulations.
GDPR data privacy rights
The GDPR aims to protect the following rights of data subjects with respect to their personal data.
Data subjects have the following basic rights under the GDPR:
- Collecting data from children — requires parental consent until children are between 13-16 years old.
- Data portability and access — data subjects must be able to access their data as stored by the Data Controller, know-how and why it is being processed, and where it is being sent.
- Correcting and objecting to data — data subjects should be able to correct incorrect or incomplete data, and data controllers must notify all data recipients of the change. They should also be able to object to the use of their data, and Data Controllers must comply unless they have a legitimate interest that overrides the data subject’s interest.
- Right to erasure — data subjects can ask data controllers to “forget” their personal data. Organizations may be permitted to retain the data, for example, if they need it to comply with a legal obligation or if it is in the public interest, for example in the case of scientific or historical research.
- Automated decision-making — data subjects have the right to know that they were subject to an automated decision based on their private information, and can request that the automated decision is reviewed by a person, or contest the automated decision.
- Notification of breaches — if personal data under the responsibility of a data controller is exposed to unauthorized parties, the controller must notify the Data Protection Authority in the relevant EU country within 72 hours, and in some cases also needs to inform individual data subjects.
- Transferring data outside the EU — if personal data is transferred outside the EU, the data controller should ensure there are equivalent measures to protect the data and the rights of data subjects.
GDPR data protection requirements — how are you required to protect personal data?
The GDPR defines specific ways in which a data controller must protect personal data. Failing to do so may result in fines and other sanctions. Here are the essential data protection requirements, defined in articles 24, 25, and 32:
data controllers are required to handle data securely by implementing technical measures, for example, authenticated access to data and encryption, and organizational measures, such as training staff on data privacy and setting policies for appropriate access to personal data.
Specifically, article 32 of the GDPR requires data controllers to:
- Perform encryption and pseudonymization (a technique for replacing personally identifiable information with other similar data) of personal data;
- Ensure the confidentiality and integrity of data processing systems
- Restore availability and access to personal data if it becomes unavailable
- Test, assess and evaluate measures for securing data processing, assessing and evaluating the effectiveness of technical and organizational measures for ensuring the security of the processing
Data Protection by Design and By Default
Any computer system that handles or stores personal data must protect personal data, for example by pseudonymization, data minimization (reducing to the minimum form required for the data controller’s purposes; or tokenization, which replaces personal data with meaningless random tokens.
Read the 10 components of an effective data protection strategy.
Protecting Personal Data with Cloudian
The GDPR requires you to control the use of personal data, and delete personal data if requested by data subjects. When you share personal data among users and store it in the cloud, you lose fine-grained control over the data. When you receive a data subject access request (DSAR), you may not be able to find all instances of the information, which may result in sanctions or fines.
Cloudian provides fast, reliable, on-premises storage for backup and archive data. It offers the power of cloud-based file sharing in an on-premise device that gives you the control you need to comply with GDPR data protection requirements.
Secure Solution for File Sharing
- Multiple layers of data protection:
- Storage within firewall
- Remote user access via secure connections
- Configure geo boundaries for data access
- Policy-defined data synch to user devices
- Integrated replication for DR
Read more in our blog post: GDPR-compliant file sharing. | <urn:uuid:4247dae8-6ec5-4456-84e6-cffa09943175> | CC-MAIN-2024-38 | https://cloudian.com/guides/data-protection/gdpr-data-protection/ | 2024-09-21T03:57:46Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701427996.97/warc/CC-MAIN-20240921015054-20240921045054-00249.warc.gz | en | 0.901163 | 1,366 | 3.359375 | 3 |
Data governance is a two-tiered approach to managing data security and management. It’s the design and application of policies that ensure the quality of your data while also adhering to data handling and distribution legislation.
Put another way, you handle data with the software tools, guidelines, and networks in the enterprise. Data governance refers to the overarching framework that incorporates these (and more) measures according to the law. It involves the need to formalize terms and formats that describe your data to ensure fidelity over time and workflows based on strict rules of use and access.
Instituting data governance can solve problems around data discovery such as:
- Repurposing and making use of unstructured data
- Data cleansing, including removing unused tables in database files
- Integration technologies to fold into other systems.
Learn more: Data Governance Best Practices
Data Governance Imperatives
We all use data from different sources in different ways that’s saved in different formats for different software applications. Inconsistencies arise, and without an overview of your data management they might never be resolved. In addition to embarrassment, poor data management can cost you money, complicating data integration programs and compromising business development reporting and opportunities. Without data governance, such issues might even go undetected for years.
There’s also a legal imperative. Not having good quality data can put you out of step with compliance regulation, making it harder to meet service-level agreements (SLAs) and may even lead to prosecution.
The most sweeping data governance law thus far is the European Union’s 2016 General Data Protection Regulation, which gives EU citizens unprecedented access and control over their data.
One of the GDPR’s central fulcrums was seen as the right to be forgotten, where everyone has the right to erasure of personal data under a raft of conditions and circumstances. That imposes a monetary cost on the enterprise in the form of a program to repurpose data for customer access and removal, and steep fines for non-compliance.
Also read: How to Comply with GDPR
Benefits of Data Governance
Data governance is intended to break down barriers. Different stores of data can all combine to make business and workflows within the enterprise and between companies smoother, more efficient, and more secure.
As a company grows, disparate systems handle and process data by different departments, and at a certain level of staff numbers or revenue it can become unwieldy. Transactions are processed and business is conducted in something of a vacuum, with no centralized management environment.
The point of data governance is to bring all those systems and all that information into line, so everyone across the enterprise can engage with any other department or system; and, often with stakeholders outside as well. Management gets a clearer, at-a-glance picture of the health of the entire digital asset base and can be assured they comply with regulation that affects their sector or geographic region.
Other benefits of data governance will follow:
- It will cost less to manage and use data.
- The quality of your data will make it a more valuable asset in itself. For example, if you have data-sharing agreements with other businesses or business units.
- It will be easier to investigate for analysis, be it revenue-generating or otherwise.
Data governance will also give you and your enterprise better decision-making power about the directions of your organization. Following the data tells the story of what’s going on with supply and income unimpeded.
Also read: Tools to Better Manage GDPR
Getting Started with Data Governance
Different business units in the enterprise will have different views on how their information is stored, used, and accessed, so implementing data governance is like launching a rocket — most of the hard work will come as soon as you pull the trigger, but it will get easier as you pick up speed.
The data governance plan ultimately has to come from the top, but it mustn’t be simply edicts on how things will be done. Instead, it should be based on engaging with and listening to department heads about their needs and goals. They’re the ones that use the information, after all, so they’ll know the best methods to wrangle it. The job of the data governance committee or officer is to massage those needs to comply with the policies and legislation data governance sets out.
Selling data governance to company leadership can be a challenge, including corporate boards who might not be clear on its business value. Data governance isn’t simply a reactive process because of laws and rules. It should be proactive, to take advantage of newer and expanded revenue streams.
Examples of how your enterprise missed the boat on important opportunities can also be helpful; such information highlights how unstructured, insecure, siloed, and bad quality data might negatively impact your business.
It’s a new world where networks are so pervasive that data travels everywhere and fulfil endless purposes. With so much of it being processed even without human input, we need a clear way forward to manage and disseminate data. Data governance is the answer. | <urn:uuid:156891df-b68c-439b-b595-a3c0c24f661f> | CC-MAIN-2024-38 | https://www.enterprisenetworkingplanet.com/os/networking-101-what-is-data-governance/ | 2024-09-07T17:02:03Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650898.24/warc/CC-MAIN-20240907162417-20240907192417-00549.warc.gz | en | 0.93693 | 1,057 | 2.765625 | 3 |
When Flash Drives first came on to the market, they were a "dream come true" for companies and organizations. They made an incredibly easy way of transferring data back and forth from one computer to another with very little fuss, making them one of the most widely-known means for transferring information. As technology expanded over the years, the amount of data these tiny, two-inch devices could carry grew in leaps and bounds. Now they can hold more information than an entire desktop computer could just a few years ago. But this dream can easily become a horrible nightmare to a medical facility or organization if a flash drive containing lots of important, sensitive data gets lost by a staff member or stolen.
Just recently, an article on Becker's Hospital Review, explained that a USB flash drive containing sensitive information about its patients at the University of Rochester Medical Center was lost in its outpatient orthopedic facility.1 This scenario is sadly not a unique one; as this scene repeats itself over and over again by hospitals, businesses and organizations all across the country and the globe. But is the issue of blame with the flash drive itself?
The article went on to say that the organization is "re-educating faculty and staff about its policy that requires the use of encrypted drives when transporting protected health information on flash drives." But is education enough?
Organizations Need to Consider the Bigger Picture
Establishing a policy that requires staff to use only secure, hardware encrypted flash drives is a good start to help prevent scenarios like this, but companies and organizations have more powerful, affordable options today than ever before, to take a much more enveloping approach to protecting data. Companies that are serious about protecting sensitive information will work around human error and take even stronger precautionary steps to prevent the data from getting into the wrong hands.
What Others Have Done to "Protect Data"
In one unprecedented measure, some organizations have taken the approach of an outright ban of USB flash drives altogether. Although this "throw the baby out with the bath water" approach might prevent the loss of data through a flash drive, it certainly does not bode well with convenience, especially in a commerce driven by the necessity to quickly transfer data in order to conduct thorough business.
Others have resorted to the exclusive use of Cloud technology to store and transfer data, but this requires a constant connection with the internet, can become costly, and leaves businesses with some well-founded uncertainties. Reports of cloud storage shut downs or glitches have left organizations vulnerable at a most crucial moment, with no access to their information. Data breaches are unfortunately more common than we realize among some less reputable cloud storage services, which begs the question, where exactly is the data really at, who has access to it, and where is it going anyway?
Flash Drives do serve a great purpose, number one of course being their convenience. The second is their flexibility, and the third is the confidence of knowing exactly where your data is. Regular unencrypted flash drives are great for carrying around everyday information that you wouldn't mind others knowing about, like marketing videos for your company for example, or a recipe of your favorite dish.
But most staff and employees carry important information around that is much more sensitive to an organization, and all precautions must be taken to protect that information from getting into the wrong hands. Educating staff and employees about the use of hardware encrypted drives is good, but for an organization to really make an effort, they need to look at an even bigger picture.
Why Secure, Hardware Encrypted Drives Are a Good Start
Hardware encrypted flash drives are a good start for providing the protective safeguards necessary to conduct good business with the convenience of transferring data. Make a policy in your organization that only hardware encrypted drives are allowed for business use. Kanguru's Hardware Encrypted Secure Flash Drives lock the information down under full password protection using military grade technology, including limited password attempts, so even if an intruder tries to tamper with the drive, it will be disabled and the information rendered inaccessible. If an encrypted flash drive were ever lost or stolen, no one else can access the information, and any potential breach of data is brought to an abrupt halt.
But Kanguru's secure flash drive defenses do not stop there. Kanguru's Defender Series contain a barrier against another notorious enemy; viruses, spyware or malware. On each Kanguru hardware encrypted drive, onboard anti-virus protection constantly scans the device and warns against invasion, providing further protection of your data from within.
Kanguru secure drives also contain brute-force protection, so if a savvy thief were to attempt to break into the device physically, the information would be rendered unusable, once again preventing access to the data.
Using Kanguru secure, hardware encrypted drives is a good start to securing information and complying with data security regulations, but what other steps can be taken to circumvent human error, and provide even stronger protection? For that, Remote Management of secure USB drives is the answer.
The Bigger Picture: Remote Management of Flash Drives
If organizations really want to get serious about preventing a sensitive data breach, remote management is the answer. Organizations and businesses whether large or small may have hundreds of flash drives popping in and out of their computers on a daily basis. Remote Management is the key to tracking these drives and protecting the entire network from vulnerabilities.
With Remote Management of secure devices, a business owner or administrator protects the network from within, by restricting what type of drives can be connected to computers, and even restrict use to certain IP addresses or domains outside of the organization. Flash drives and other USB devices can be managed from, and to anywhere in the world through either a self-hosted, or cloud interface, tracking their locations, disabling lost or stolen drives, notifying users of policy updates and enforcing password rules. Remote Management is the "all-in-one" solution for managing sensitive data on portable units anywhere.
All told, taking a much more enveloping approach by restricting use to hardware encrypted devices and managing them with a remote management system is the best precautionary measure an organization can take to make USB devices safer and comply with data protection regulations. Making a smart investment now can prevent harmful, expensive, and even embarrassing data breaches in the future. Kanguru would be happy to talk with you about finding the best affordable security solution for your organization, whether it be secure, hardware encrypted drives or a Remote Management system. Contact us at email@example.com to learn more, or check out the range of options to fit your specific budget here, at kanguru.com.
1) From the article "URMC Notifies 537 Patients of Possible Data Breach", written by Anuja Vaidya; Becker's Hospital Review, May 7, 2013 | <urn:uuid:671a22eb-d220-4abf-a867-63639532e7cb> | CC-MAIN-2024-38 | https://www.kanguru.com/blogs/gurublog/7829339-education-yes-but-what-else-can-be-done-to-help-prevent-sensitive-data-breach-on-flash-drives | 2024-09-07T17:39:25Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650898.24/warc/CC-MAIN-20240907162417-20240907192417-00549.warc.gz | en | 0.944233 | 1,373 | 2.5625 | 3 |
A few weeks back, we discussed OAuth, a protocol that we use for delegated authorization. This is basically a process where a user can authorize one service to access the user’s data from another service. One important thing that OAuth doesn‘t do is authentication—it doesn’t verify whether a user is who they claim to be.
This is where OpenID Connect (OIDC) steps in. It’s a protocol for authentication that runs on top of OAuth 2.0. OIDC is the third version of the OpenID protocol. It was designed to overcome some of the limitations of the early versions and to be a little more developer friendly. The OIDC protocol provides a relatively simple way for developers to authenticate users across apps and websites, all without the developer having to store or manage the user passwords.
An example of OpenID Connect
Let’s use an example of a new social media app to give you an idea of what OpenID Connect actually does. The developers of this app may not want to deal with all of the hassle of securely managing user passwords themselves. Instead, they might decide to turn to an OpenID provider like Google to do it for them.
To give you a slightly simplified explanation of how it all works, when a user wishes to log in to the social media app, the client can send them straight to Google’s login domain. Once the user is at Google’s login domain, they can sign in to Google using their Google account details. The Google login domain then hands the user an authorization code and directs them back to the client.
At this stage, the client then sends the authorization code back to the Google login domain, which then verifies whether the code is valid. If it is, the Google login domain will send the client an access token. The client then validates the access token.
The whole process may seem a little convoluted, but once the client has validated the access token, the social media app can then grant the user access to their account. The interesting thing about the whole process is that the social media app never authenticates the user—Google does it instead. Since the social media app trusts Google’s authentication process, it grants the user access to its resources. This means that the developers behind the app can securely grant access to users, without the substantial responsibility of having to secure and manage user passwords.
Important OpenID Connect terms
To complicate things, OIDC uses slightly different terminology to OAuth:
- End user – The user that needs to be authenticated to access resources from the relying party.
- The relying party – This is an OAuth 2.0 client application that needs the end user to be authenticated. In our example, this is the social media app.
- The OpenID provider – The OpenID provider is an OAuth 2.0 authorization server that can authenticate the end user for the relying party. In our example, this is the Google login domain.
To put it all together using the right terminology, the user (the end user), wants to log in to the social media app (the relying party). They are redirected to the Google login domain (the OpenID provider), which authenticates the user and sends them back to the social media app (the relying party) with a code. The social media app then sends the Google login domain (the OpenID provider) the code. The Google login domain (the OpenID provider) validates the code and then sends back an access token, which then allows the user to access their account on the social media app (the relying party). The user never logs in with the social media app (the relying party) directly. | <urn:uuid:e1b0f815-8ba1-48e4-b944-7123c909e264> | CC-MAIN-2024-38 | https://destcert.com/resources/what-is-openid-connect-oidc/ | 2024-09-10T04:00:02Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651196.36/warc/CC-MAIN-20240910025651-20240910055651-00349.warc.gz | en | 0.905465 | 765 | 2.890625 | 3 |
When it comes to health, technology is making a big impact. Yet not all of it is positive. There are increasing signs that technologies like smartphones and social media are causing physical and mental health problems. Data suggests that technology use (and especially overuse) is linked to everything from developmental issues to increased accident risk to recurring headaches.
These slides present a roundup of the science behind digital device use. The original research in this presentation comes from the Entefy article, The world’s love affair with technology is affecting health: 10 consequences of tech use and abuse. | <urn:uuid:1b985bab-3261-49b8-88f1-11ec82427c64> | CC-MAIN-2024-38 | https://www.entefy.com/blog/10-consequences-of-tech-use-and-abuse-slides/ | 2024-09-10T05:12:23Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651196.36/warc/CC-MAIN-20240910025651-20240910055651-00349.warc.gz | en | 0.926136 | 115 | 2.84375 | 3 |
Scientists at UK Quantum Technology Hub Sensors & Timing at UBirmingham Shrink the Devices Used in Quantum Sensing Systems
(ScienceDaily) Scientists at the UK Quantum Technology Hub Sensors and Timing, which is led by the University of Birmingham, are working on ways to improve the capabilities of sensors are now using quantum technologies, based on cold atoms, to improve their sensitivity.
The team of researchers has used a new approach that will enable quantum sensors to shrink to a fraction of their current size.
The quantum technology currently used in sensing devices works by finely controlling laser beams to engineer and manipulate atoms at super-cold temperatures. To manage this, the atoms have to be contained within a vacuum-sealed chamber where they can be cooled to the desired temperatures.
A key challenge in miniaturising the instruments is in reducing the space required by the laser beams, which typically need to be arranged in three pairs, set at angles.
Dr Yu-Hung Lien, lead author of the study, says: “The mission of the UK Quantum Technology Hub is to deliver technologies that can be adopted and used by industry. Designing devices that are small enough to be portable or which can fit into industrial processes and practices is vital. This new approach represents a significant step forward in this approach.” | <urn:uuid:3083c990-f9c2-4f5f-a2ad-5564a0fc2d76> | CC-MAIN-2024-38 | https://www.insidequantumtechnology.com/news-archive/scientists-at-uk-quantum-technology-hub-sensors-timing-at-ubirmingham-shrink-the-devices-used-in-quantum-sensing-systems/ | 2024-09-10T04:16:08Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651196.36/warc/CC-MAIN-20240910025651-20240910055651-00349.warc.gz | en | 0.931971 | 267 | 2.859375 | 3 |
After more development, WPA2 and the TKIP encryption algorithm were created. WPA2 is equivalent to the 802.11i standard. WPA2 provides AES encryption, 802.1X authentication, dynamic key management. For enterprises, WPA2 includes a connection to a Remote Authentication Dial-In User Server (RADIUS).
In wireless networks, user authentication is managed by Extensible Authentication Protocol (EAP). In an enterprise WLAN the authentication process is the following:
- The authentication process creates a virtual port for each WLAN client at the access point.
- The AP blocks all data frames except for 802.1x-based traffic.
- 802.1x frames carry EAP authentication packets via the AP to an Authentication, Authorization, and Account (AAA) server running a RADIUS protocol.
- If the EAP authentication is successful, the server sends an EAP success message to the AP, which then allows the client to send data through the virtual port.
- Before opening the virtual port, data link encryption between the WLAN client and the AP is established to ensure that no other WLAN client can access the port that has been established for a given authenticated client.
Additional security measures you can take is to filter the clients based on their MAC address and don’t broadcast the SSID of your WLAN, but don’t use these measures without WPA2, because they are not enough to consider your wireless network secured. | <urn:uuid:8f15a7b3-a318-487c-9bf5-7fb6a59b0ec5> | CC-MAIN-2024-38 | https://www.certificationkits.com/ccna-200-301-topic-articles/cisco-ccna-200-301-wireless/cisco-ccna-200-301-wireless-authentication/ | 2024-09-13T21:30:06Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651540.48/warc/CC-MAIN-20240913201909-20240913231909-00049.warc.gz | en | 0.900408 | 306 | 2.59375 | 3 |
See an overview of security mechanisms used in the banking sector in our previous articles: First contact: An introduction to credit card security and First contact: How hackers steal money from bank cards.
One of the data types stored on chip-based cards is the so-called Track2 Equivalent. It almost identically replicates the content of the magnetic stripe and, most likely, serves as a card identification parameter in HSM systems and other card processing subsystems. One of the attack types performed by crooks from time to time works as follows: Track2 Equivalent data are recorded onto a magnetic stripe, and fraudulent transactions are made either as normal magnetic stripe transactions or in the technical fallback mode. To steal such data from ATMs, special devices called shimmers are used.
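For reference, here is a minimal sketch of how Track2-style data is laid out (field positions follow the ISO/IEC 7813 convention; in the chip's Track2 Equivalent, EMV tag 57, the separator is the hex nibble 'D' rather than '='). The PAN and values below are fabricated:

```python
# Minimal Track2 Equivalent parser sketch; sample data is fabricated.
def parse_track2(data: str) -> dict:
    pan, _, rest = data.partition("D")   # tag 57 uses 'D' as the separator
    return {
        "pan": pan,                      # primary account number
        "expiry": rest[0:4],             # YYMM
        "service_code": rest[4:7],       # '2xx' advertises a chip
        "discretionary": rest[7:].rstrip("F"),  # PVKI/PVV/CVV live here
    }

print(parse_track2("4000001234567899D29122011234567890F"))
```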
One of the articles about shimming states that back in 2006, in the early days of chip-based cards, the Track2 Equivalent field of cards issued in the UK contained the original CVV2/CVC2. This error enabled scammers to clone magnetic stripes of such cards. In response, payment systems started using different seeds to generate the CVV2/CVC2 fields on the magnetic stripe and in the Track2 Equivalent field. The problem seemed to be solved: the value of the secret field CVV2/CVC2 on the magnetic strip doesn’t match the one recorded in the chip anymore. But shimming is still alive and thriving. Why?
The point is that many banks still authorize transactions with CVV2/CVC2 values read from the chip! Visa often mentions this; by contrast, MasterCard mostly ignores the problem. In my humble opinion, this is because in almost all MasterCard cards, CVC2 in Track2 Equivalent is equal to 000. Also, some regions are more prone to these attacks than others. For example, this type of scam is widespread in the USA.
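To make the "different seeds" fix concrete: the same verification-value algorithm runs over the PAN, expiry, and a service code, but the chip copy (often called iCVV) is commonly described as being derived with the substitute service code 999. Below is a toy stand-in; the real scheme uses two single-DES CVK keys and a decimalization step rather than HMAC, and the key here is a placeholder:

```python
import hashlib, hmac

def verification_value(pan: str, expiry: str, service_code: str,
                       cvk: bytes) -> str:
    # Toy stand-in: issuers actually run 3DES with two CVK keys plus
    # decimalization, not HMAC-SHA256.
    mac = hmac.new(cvk, (pan + expiry + service_code).encode(),
                   hashlib.sha256).hexdigest()
    return "".join(c for c in mac if c.isdigit())[:3]

CVK = b"placeholder-issuer-key"
PAN, EXPIRY = "4000001234567899", "2912"
print(verification_value(PAN, EXPIRY, "201", CVK))  # magstripe seed
print(verification_value(PAN, EXPIRY, "999", CVK))  # chip (iCVV) seed
```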
One of the few MasterCard cards susceptible to the above-described attack was issued by a bank that doesn’t check the value in the CVC2 field at all. I could insert anything there: 000, 999, or any other number in this range. Most likely, this bank hasn’t disabled the debugging mode that approves all transactions.
What is the risk? A hacker could change the Service Code field, indicating that the card doesn't contain a chip. Hence, this field's integrity verification procedure would be impossible, since the processing accepts any CVC2. This vulnerability – very similar to the one described below – was quickly fixed after being reported to the bank.
According to my personal statistics, 4 out of 11 cards are vulnerable to such attacks.
Brazilian hack
This term refers to several types of attacks, including those targeting offline terminals as described by Kaspersky Lab. In addition, Brian Krebs published an article about the biggest such attack in the world. So, what is it all about?
In the early 2010s, chip cards finally became widespread in the USA. Several banks started issuing such cards.
We can only speculate about the origin of the attack. Probably, some insider information leaked, and hackers became aware that chip transactions were made in some kind of the debug mode, in which the issuing bank didn’t verify the cryptogram. The bank simply used the Track2 Equivalent field to perform identification as if it were an oldie-goodie magnetic stripe transaction. An important nuance: under the new EMV Liability Shift regulations, the issuing bank bears responsibility for this kind of fraud. Too bad, the vulnerable issuing banks neither fully understood how such cards work nor imposed strict antifraud rules for chip-based transactions.
Carders quickly realized what benefits could be derived from this situation. They started opening merchant accounts and making hundreds of 'chip-based' transactions using Track2 magnetic stripe data purchased on the black market. The investigation took years, and by the time of its completion, the scammers had already disappeared. These days, a lot of criminals in certain Latin American countries have been scouring all parts of the world for 'white whales', actively testing banks in the hope of finding another target for the Brazilian hack. A few years ago, hackers found a bank that forgot to disable the debugging mode.
Cryptogram Replay and Cryptogram Preplay
‘In the wild’, such an attack was performed only once. It has been documented and described (PDF) by well-known experts from the University of Cambridge.
The attack makes it possible to bypass the mechanisms that ensure the uniqueness of each transaction and cryptogram. Criminals can simply 'clone transactions' for future use, for when they no longer have access to the original card. As was explained in the first article, the card receives a certain set of data at the input: the transaction amount, the transaction date, and two fields that ensure entropy even if the amount and date are the same. On the terminal side, 2³² possible values of entropy are ensured by the 4 bytes of the UN field (a random number). On the card side, it is the ATC transaction counter, which increases by one with each transaction. The pseudofunction looks something like this (a reconstruction of the idea; the exact field set and MAC algorithm vary by card scheme):
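import hmac, hashlib

def generate_cryptogram(session_key: bytes, amount: bytes, date: bytes,
                        un: bytes, atc: bytes) -> bytes:
    # Real cards compute an ISO 9797 MAC under 3DES/AES with a session key
    # derived from the card's master key and the ATC; HMAC-SHA256 stands in
    # here purely to show that the output depends on every input field.
    return hmac.new(session_key, amount + date + un + atc, hashlib.sha256).digest()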
If one of the fields changes, the output value of the cryptogram changes as well. However, what happens if all fields remain the same? In that case, the previous cryptogram remains valid! That creates possibilities for two types of attacks targeting chip-based transactions.
Cryptogram Replay. If a compromised terminal returns the same UN field every time, a cryptogram that was once read from the card for that predictable UN can be used multiple times. Even on the next day, the attacker can submit an authorization request with the old cryptogram and the old date – and this won't lead to rejection. In my last year's tests, I used the same cryptogram seven times over seven days without raising any suspicions from the bank.
Cryptogram Preplay. This scheme is used if the vulnerable terminal returns not the same UN value but predictable values. This is how the vulnerable ATMs operated in the Maltese attack described above. In this case, the attacker gains physical access to the card and clones several transactions 'for future use'. Unlike the previous attack, each such transaction can only be used once.
This attack is of interest in terms of the EMV protocol development history. When the protocol was in development, the ATC field was introduced specifically to protect cards against such attacks.
The issuing bank had to check the value of the ATC field; if its values were received out of order and/or with noticeable ‘jumps’, suspicious transactions were rejected.
For instance, if ATC values of transactions were received for processing out of order or with a sudden jump (say, 0001, 0002, and then 0005), the out-of-sequence operations were considered suspicious and had to be rejected by the processing. But then customers started complaining, and adjustments were made to the technology.
A simple example: a cardholder boards a plane and pays with a card using an offline terminal during the flight. Then the plane lands, and the client pays with the same card at the hotel. And only then the terminal used in the aircraft connects to the network and transmits the transaction data. This results in an ‘ATC jump’ and, pursuant to the rules adopted by payment systems, the bank can reject a 100% legitimate transaction. After several such episodes, payment systems made the following adjustments to their policies for ‘ATC jumps’:
- ‘jumps’ should be counted only if the delta between the values of the counter “exceeds X” (the X value is determined individually by each bank); and
- ‘jumps’ don’t necessarily indicate fraud; however, constant jumps above the X value are a reason to contact the client and clarify the circumstances.
However, the first scenario (i.e. cryptogram replay) wasn’t affected by these changes in any way. If the card processing is designed correctly, there is no reasonable explanation for the situation when the same data set (Cryptogram, UN, and ATC) is sent many times and successfully approved by the bank. Over the past year, I have submitted information about this attack to more than 30 different banks and received a fairly wide range of responses.
In some cases, the bank cannot simply block transactions with the same values due to the incorrect design of their processing services.
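For contrast, here is a minimal sketch of what a correctly designed authorization host could do: keep the last ATC and the exact (ATC, UN, cryptogram) tuples per card, decline exact repeats, and flag jumps above the bank-specific threshold X. All names are illustrative, not any real processing API:

MAX_JUMP = 50                # the bank-specific "X" threshold
seen = {}                    # card_id -> set of (atc, un, cryptogram) tuples
last_atc = {}                # card_id -> highest ATC authorized so far

def authorize(card_id, atc, un, cryptogram):
    key = (atc, un, cryptogram)
    if key in seen.setdefault(card_id, set()):
        return "DECLINE: identical cryptogram replayed"
    prev = last_atc.get(card_id)
    suspicious = prev is not None and (atc <= prev or atc - prev > MAX_JUMP)
    seen[card_id].add(key)
    last_atc[card_id] = max(prev or 0, atc)
    # A real host would now verify the cryptogram itself on the HSM.
    return "REVIEW: ATC out of order or jumped" if suspicious else "APPROVE"

print(authorize("card1", 1, b"UN1", b"C1"))   # APPROVE
print(authorize("card1", 1, b"UN1", b"C1"))   # DECLINE: identical cryptogram replayed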
I should also note that I have never encountered terminals returning the same UN field value 'in the wild'. Therefore, attackers have to use their own terminals, which makes money laundering more difficult.
In addition to that, a predictable UN could lead to falsely passed offline card authentication. If UNs are not random, criminals could precompute the resultant DDA/CDA authentication scheme values for a predictable UN field.
Statistics indicate that 18 out of 31 bank cards are susceptible to replay/preplay attacks targeting contact or contactless chips.
PIN OK
Perhaps this is the most well-known attack type targeting chips. The Cambridge team outlined the theoretical prerequisites for this attack in a study entitled Chip and Spin back in 2005 – a year before the massive introduction of the EMV standard in the UK. But closer public attention was attracted to this attack much later.
In 2010, the Cambridge team published a study dedicated to the PIN OK attack. To deliver such an attack, the researchers used a device implementing the man-in-the-middle (MITM) technique between the card chip and the terminal reader.
In 2011, at the Black Hat and DEFCON conferences, Adam Laurie, with a group of researchers from Inverse Path and Aperture Labs, presented more practical information about this attack. Also, in 2011, an organized crime group used 40 stolen EMV cards from one French bank to make 7,000 fraudulent transactions totalling €680,000. Instead of the bulky device used by the researchers, the criminals used a small inconspicuous ‘second chip’ installed on top of the original one, which made it possible to emulate an attack in real-life conditions.
In December 2014, researchers from Inverse Path brought up this topic (i.e., attacks on EMV transactions) again and presented statistics collected over three years (PDF). In 2015, a detailed technical review of the attack performed in France in 2011 was published (PDF).
Let's examine the technical aspects of this attack. As you remember, it involves the man-in-the-middle technique. The card sends the CVM (Cardholder Verification Method) List field to the terminal – a priority list of cardholder verification methods supported by the card. If the first rule on the card is "offline PIN encrypted/unencrypted", nothing happens at this stage. If the first rule is different, the criminals must replace it with "offline PIN".
Then the terminal requests a PIN code from the cardholder. The "offline PIN" rule means that the PIN code will be transmitted to the card for verification in plain or encrypted form. The card will respond with either 63C2 ("Invalid PIN, two attempts left") or 9000 ("PIN OK"). At this stage, an attacker who can affect the cardholder verification process replaces the first response with the second one.
The terminal now believes that the PIN was entered correctly, so it requests a cryptogram from the card (a Generate AC request) and passes all the requested fields to it. The card knows that the PIN was either not entered at all or entered incorrectly, but it cannot see what decision the terminal made after that. For instance, if the entered PIN code is incorrect, some terminals ask the cardholder to sign on the touchscreen instead – a feature introduced for the client's comfort. Therefore, when the terminal requests a cryptogram, the card transmits it. The response contains the CVR (Card Verification Results) field, which indicates whether the card verified the PIN or not. Furthermore, this field is part of the payment cryptogram, and attackers cannot change its value: any such attempt would cause a transaction authorization error during cryptogram checking on the HSM.
The terminal sends all the data in an ISO 8583 Authorization Request packet to the acquiring bank, which forwards it to the issuing bank. The authorization host at the bank thus sees two things: the CVM Results field, which indicates that the "offline PIN" verification method was selected, and the CVR, which shows that the card did NOT receive the PIN code or that it was wrong. And, no matter what, the bank approves the transaction.
If the card uses the CDA authentication scheme and attackers have to change the first rule on the CVM list, the offline authentication will fail. However, this can always be bypassed by changing the Issuer Action Code fields. The details of this case are described in the latest version of the presentation made by the Inverse Path experts in 2014.
In addition, in their first study, dated 2011, these specialists demonstrated that the EMV standard permits the payment device not to reject transactions even if the card authentication and the cardholder verification both failed. Instead, the terminal keeps falling back to less secure methods (so-called fallback). This opens up new opportunities for scammers, including attacks that steal PIN codes during transactions on compromised POS terminals.
Interesting statistics for the last year: some obvious card processing problems identified back in 2010 still remained unsolved as of 2020. Last year’s statistics indicate that 31 out of 33 bank cards from all parts of the world are vulnerable to the PIN OK attack.
The next article will address attacks on contactless cards and related applications (i.e. mobile wallets). | <urn:uuid:2f10a608-9cb9-4b5a-baad-ff13ea7d6057> | CC-MAIN-2024-38 | https://hackmag.com/security/smartcard-attacks/ | 2024-09-15T00:51:45Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651601.85/warc/CC-MAIN-20240914225323-20240915015323-00849.warc.gz | en | 0.94281 | 2,866 | 2.84375 | 3 |
Everyone tends to focus on the “big” in Big Data, so much so that it’s easy to lose focus on the fact that Hadoop is really about data. Let’s regroup for a minute and really look at what’s going on with the data on Hadoop.
First, there’s the core. When people say “Hadoop,” they’re usually referring to the Hadoop core, which I explained earlier:
The Hadoop Distributed File System. What’s it doing with the data? It’s distributing it on nodes and storing it there.
MapReduce. This does the real work in the Hadoop core. If you want to run a process or computation on the data, it “maps” that out to the nodes and then runs the process, and “reduces” the results to your answer. So, it’s processing the data.
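To make that split concrete, here is a toy word count in Python; it has the same two-phase shape as a Hadoop MapReduce job, minus the distribution across nodes:

from collections import defaultdict

def map_phase(document):
    for word in document.split():
        yield word.lower(), 1        # "map" each word to a (key, value) pair

def reduce_phase(pairs):
    counts = defaultdict(int)
    for word, n in pairs:            # "reduce" values sharing a key to one answer
        counts[word] += n
    return dict(counts)

print(reduce_phase(map_phase("Big data big answers")))   # {'big': 2, 'data': 1, 'answers': 1}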
Now, if you’re familiar with data at all, you’ll notice that there are a whole lot of things missing from that equation, such as:
- Job scheduling
- Data management
This is where the growing list of Apache Hadoop-related projects comes into play.
These projects go by an odd assortment of names: Pig, Hive, Flume, Zookeeper, but they’re often short-changed when we talk about Hadoop. I’ve seen them referred to as the “Hadoop stack,” though some programmers prefer “Hadoop ecosystem.” Forrester refers to them as “functional layers.”
For the most part, they’re of interest to developers more than executives, but hopefully a high-level view of these solutions will add some depth to your understanding of Hadoop and its capabilities.
Here are a few of the more common names you’ll hear:
Pig. An analytical tool and runtime engine. Yahoo developed the Pig platform as a way of analyzing data without constantly hand-coding MapReduce jobs. Pig includes a high-level data flow language called, predictably enough, PigLatin, and it’s designed to work with any kind of data (like a pig — get it?), plus a runtime environment where PigLatin programs are executed. Cloudera’s distribution of Hadoop uses Pig and Hive as analytical languages; Forrester lists it under Hadoop’s “modeling and development” layer.
Mahout. A data-mining tool. A Mahout is a person who rides an elephant — a nod to the fact that Hadoop is named after a stuffed elephant treasured by Doug Cutting’s son. Mahout applies what’s called machine learning to data, allowing it to cluster, filter and categorize data and content. It can be used to build recommendation engines and data mining. This also falls under Forrester’s “modeling and development” layer.
For Collecting Data:
Flume is a way to collect, aggregate and move large amounts of event or log data into a Hadoop Distributed File System. Cloudera’s distribution of Hadoop uses Flume for data integration, according to Ravi Kalakota. Forrester classifies it under the Hadoop data collection, aggregation and analysis.
Chukwa is a data collection system and is used to process and analyze huge logs. It includes a toolkit for displaying, monitoring and analyzing the data. It’s a newbie among the Apache projects. Chukwa, by the way, is also a disappearing Nepalese language.
For Integration with Existing IT Systems:
Hive. What’s great about Hive, at least from the enterprise IT point of view, is the queries are written in SQL and converted to MapReduce. This makes it easier to integrate Hadoop data with other enterprise tools, such as BI and visualization tools, according to Jeff Kelly of Wikibon. It can also be used for metadata management.
Sqoop. While Hive helps you integrate Hadoop data into existing IT systems, Sqoop goes the other way and helps you move data out of traditional relational databases and data warehouses into Hadoop. “It allows users to specify the target location inside of Hadoop and instruct Sqoop to move data from Oracle, Teradata or other relational databases to the target,” writes Kelly.
HCatalog is a table and storage management service for Hadoop data. It manages schemas and supports interoperability across the other data processing tools (e.g., Pig, MapReduce, Hive, etc.).
Zookeeper. Just as you need a way to manage multiple servers, you need a way to manage Hadoop nodes. Zookeeper fills this role.
Cassandra. Even though Cassandra isn't part of the Apache Hadoop project, I'm mentioning it because it's in the trade press a lot. It is often used alongside the Hadoop ecosystem, but in place of the Hadoop Distributed File System. It's a scalable, multi-master database with no single points of failure.
A literal is any of:
- A character-string whose value is implied by the ordered set of characters of which it is composed
- A reserved word which references a figurative constant
- A user-defined word which references a constant value
Every literal belongs to one of these types: nonnumeric, numeric, national, or UTF-8.
Sharenting: What parents should consider before posting their children’s photos online
21st century parenting is firmly grounded in technology. From iPads keeping kids entertained on flights, to apps that allow parents to track their children’s feeds, development, and more, technology has changed what it means to be a parent. But social media has added another dimension. The average child now has a digital footprint that often begins when their parents post an ultrasound photo, inviting friends and family to share in a joyous event through regular “sharenting.” However, some parents—especially those that adopted social media at an early age—have fallen into the trap of posting about their children a little too frequently, a condition called ‘oversharenting’. Like anything to do with social media, this comes with several risks. For this reason, it is important for parents to understand how to safely post about their kids.
What is sharenting?
Sharenting refers to the practice of parents sharing photos of their children online. Usually, images are shared on social media platforms like Instagram and Facebook, and capture quotidian moments in children’s lives, such as first steps, trips to the zoo, school performances, and holidays, for example. But as much as parents may want to share their children’s achievements and lives with friends and family, sharing photos online can be problematic.
There are, of course, some positives about sharenting. For example, parents often build communities online through social media platforms. This can be a great resource for parenting and gives first-time parents a sense of camaraderie during a time when they may feel like they have no idea what they are doing. Similarly, for parents who live far away from other family members and friends, sharing photos of their kids online offers a way to involve these important people in their children’s lives. However, when parents share images that contain personal details about the child, or details that could be embarrassing for the children as they become older, ‘oversharenting’ can become a problem.
As social media platforms like Facebook and Instagram have become more pervasive in society, sharenting has become very normalized. In fact, statistics show that parents are more than willing to share images and videos of their children online. As such, more than 75% of parents have shared their children’s images on social media, and 33% have never asked their children for permission before sharing photos online.
What are the dangers of sharenting?
While posting images of kids may seem innocuous, parents should be aware that sharing photos online—with family, friends, acquaintances, or the public—can be problematic. As such, there are several factors parents need to consider before posting pictures of their children on social media. These are especially pertinent because some of these considerations can present sharenting dangers, including:
- The pervasiveness of social media.
- The permanence of digital content and the internet.
- Potential embarrassment as the child grows.
- Identity theft from information in the shared information.
The problem is that once images are online, parents have no way of knowing how far they go and how other people might be able to use them. There is the added complication that whatever is posted online remains there forever, even if the original poster deletes it. ‘Oversharenting’ creates a digital footprint for the child whose picture is involved in the online photo sharing, which presents numerous potential complications, such as loss of privacy and financial or identity fraud, for example. Below are some of the sharenting dangers parents should be aware of.
Risk of identity theft
Many parents do not realize that their sharenting habits expose incredible amounts of personal information about their children. A survey conducted by Security.org found that approximately 75% of parents shared a picture, story or video of their child online, and more than 80% of parents use their kids’ real names in social media posts. Cybercriminals can parse shared photos—and the accompanying captions—to figure out a child’s name, birthday, and location. By combining this with other information, perhaps gained through phishing or on the Dark Web through data breaches, these malicious actors can steal the child’s identity for nefarious means.
Permanence of digital content
Although many of the platforms parents use for sharing photos online with family and friends offer the ability to delete posts, this may not be enough to protect a child. Everything that is shared on the internet could leave a permanent trail, even if the original poster removes their post. As such, it is often better to not post an image in the first place, rather than risk ‘oversharenting’ with a photo that contains sensitive details that could put the child at risk.
Losing control of images
Another danger of online photo sharing is that posters have no control over what happens to their images once they are on the internet. Even though some parents may use privacy settings on their social media profiles, once they share images of their children, they have little ability to manage what people do with the photo. For example, people can save the images and share them with other people. The images could even be altered and misused by malicious actors. Another sharenting danger to consider is that most social media sites own any content posted to their platforms. This clause is usually hidden in the terms and conditions that most users scroll through without reading. As such, when a parent posts a photo of their child online, the platform on which it is shared has ownership of the image.
Sharenting dangers of child predators
Another potential consequence of parents sharing photos of their kids online is unwitting exposure to child predators. The same Security.org survey found that nearly 80% of parents have social media connections whom they have never met in real life. The images parents share can contain information that allows predators to track children. For example, images might show the child’s school or uniform, or the street name of the family home, while geotags can allow people with nefarious intent to track the child’s real-time location. In addition, because parents cannot control how far these photos spread, it is impossible to know where they end up, even with privacy controls in place. As such, it is important for parents not to engage in ‘oversharenting’ images of their child and to minimize the ability of potential predators to find and abduct the child.
Privacy and legal issues of oversharenting
One of the biggest problems with sharenting is the question of privacy. Young children are too young to consent to their parents sharing photos online with family and other people, and even older minors may not entirely grasp the full implications of posting online. In fact, a recent study found that 29% of parents share content about their child without getting the child’s consent; only 24% say they ask their child for permission to post each time. Furthermore, the study found that 32% of children say that their parent has shared a story, image, or video of them on social media even after they explicitly asked them to refrain. All of this suggests that online photo sharing has inherent privacy issues between children and parents.
Babies, by virtue of their limited communication skills, are incapable of giving informed consent to online photo sharing. But it is especially important for parents to consider the ramifications of sharenting, especially as their children grow. In certain countries, such as France and Germany, the legal system gives children the right to their own images. While the issue is more complex in the US, there are still privacy and legal issues to consider. The ”DaddyoFive” YouTube channel demonstrates why these issues are so complex. The channel was used as evidence of abusive behavior by the parents—the lawyers also argued that the way in which the videos were shared was a form of abuse— and resulted in two of the children concerned being taken into emergency custody.
Once children are old enough to understand social media and the ramifications of posting on these platforms, it is important for parents to begin asking for consent for online photo sharing. This not only demonstrates that the parents respect the children’s privacy, but also helps eliminate privacy issues between children and their parents. In addition, involving children in the process of deciding which photos can be shared online introduces them to the concept of responsible online etiquette before they begin using social media themselves.
Another privacy concern of sharenting is posting images of other people’s children, whether this is intentional or not. For example, parents often take photos of their children’s sporting events or performances in which other children appear. In these instances, it is crucial that parents ask the other children’s parents for consent to share these photos online.
Tips for safely sharing photos online with family and friends
In light of the sharenting dangers outlined here, parents may well be wondering whether any online photo sharing of their children is safe. Of course, this is a very personal choice. Some parents choose not to post any images of their children at all. But for those who wish to continue sharing photos online with family, there are numerous ways to improve the security of these photos and minimize the risks of ‘oversharenting’. Here are some things to remember:
- Check privacy settings: Ensure that all posts can only be seen by family and close friends and remove resharing permissions. Allowing strangers and acquaintances to see children’s photos can be a sharenting danger.
- Have discussions about privacy with friends and family: Be vocal about protecting children’s privacy and set boundaries about how they can engage with posts.
- Turn off metadata and geotagging: Not using these functions can minimize other people’s ability to track children through online photo sharing.
- Do not include identifiable information: Whether it is in the photo itself or in the captions, be sure not to share details that would allow others to find and track children. This can include things like names, birthdates, schools, places they regularly go to, or even family homes.
- Avoid using real names: Avoid giving people online access to children’s full names. Instead, use nicknames or descriptive phrases for kids.
- Do not post potentially embarrassing images: Whether they are photos of the children in the bath or dressed in funny outfits, these images may cause problems for the child as they grow up.
- Use secure platforms: Instead of sharing photos online, use more secure platforms to show pictures of children to friends and family. For example, WhatsApp protects photos with end-to-end encryption and gives users the option to send photos that can only be opened once.
- Avoid showing the child’s face: To avoid ‘oversharenting’, some parents cover their children’s faces before posting their photos to social media. This can be done by using the “stickers” built into apps, like Instagram, to cover their faces or using editing tools to blur or block out their features.
Questions to consider when sharing photos online
Before sharing posts about their children on social media, parents should ask themselves several questions. These can help assess the potential implications of the posts and help parents decide if they are acceptable or would be considered ‘oversharenting’. Ask these questions before posting photos online:
- Why is the post being shared? Perhaps it is to keep your friends and family updated about your child’s development, or it is simply because it is an adorable moment. Either way, it is important to understand the intention behind the post.
- Would it be acceptable to allow someone else to share a similar post? While everyone approaches social media differently, it is important to consider whether the post would be okay if the roles were reversed. If the child in question is old enough to understand and give consent, ask them directly.
- Is there anything potentially embarrassing about the post? Since things posted online can leave a permanent footprint, consider whether the post could cause the child embarrassment later in life.
- Does the post contain anything that could be potentially compromising? As above, consider whether the post could cause difficulties for the child later in life. Perhaps there are religious or political sentiments that might bar the child from being employed at a certain company as an adult.
- Would the child be happy to see the post as part of their digital footprint? Think about what the post says about the child, and how they might want to be perceived as an adult. Would the post be a nice memory or perhaps paint them in a negative light?
Think twice before sharenting
Sharenting is the natural result of a world where social media is used spontaneously to capture moments of life and share them with others. While the practice does have some advantages, parents should spare a little more thought when sharing photos of their children online. This is because sharenting can pose many dangers, including identity theft and exposure to potential predators. In addition, sharenting can result in negative repercussions for the child when they are older. For example, it could impact their job prospects. Since sharenting essentially creates their children’s digital footprints before they are old enough to consent to it, the practice can also create privacy issues between children and parents that can erode trust in that relationship. For all these reasons, it is important for parents to think twice before posting about their kids.
ClearBuds use deep learning to separate audio sources and suppress unwanted noise.
Videoconferencing was an enduring staple of the pandemic. As most companies switch to a hybrid work model, videoconferencing is set to be a mainstay of business communication.
But that means users also have to endure uneven audio quality in noisy environments − whether it is a packed office room, loud café, or deafening train journey.
Now a team of researchers from the University of Washington has come up with a solution by integrating an AI system that can improve noise-canceling capabilities.
Their solution is ClearBuds, which uses a pre-existing neural network capable of sensing differences in incoming sources. The system, which has a computational requirement small enough to run on an iPhone, utilizes a deep learning model to separate the audio sources triangulated by the neural network.
The researchers recently unveiled their work in a paper presented at the Mobisys 2022 event in Portland, Oregon.
According to the research team, ClearBuds acts as both a “synchronized, binaural microphone array” and “a lightweight dual-channel speech enhancement neural network that runs on a mobile device.”
They said this combination resulted in audio sync errors of less than 64 microseconds.
Unlike earpod products currently available on the market, ClearBuds use a custom-built wireless protocol that forces one earbud to transmit a time-sync beacon, which the second earbud then uses to match its internal clock to ensure the streamed audio remains in sync.
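The custom protocol itself isn't public, but the clock-matching idea can be sketched in a few lines: the leading earbud broadcasts its clock in the beacon, and the follower computes an offset and applies it to its own sample timestamps (the numbers are illustrative, and a real protocol must also compensate for transmission delay):

def on_beacon(leader_clock_us, my_clock_us):
    return leader_clock_us - my_clock_us     # correction the follower must apply

offset = on_beacon(leader_clock_us=1_000_064, my_clock_us=1_000_000)
aligned_timestamp = 2_000_000 + offset       # follower timestamp, now in sync
print(offset, aligned_timestamp)             # 64 2000064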
However, ClearBuds is not the only earpod product using AI technologies. Apple is embedding ML into AirPods − not to improve sound but to monitor breathing rates. A paper released by Apple researchers last year detailed how sensors can be integrated to provide accurate estimations of a person’s breathing patterns.
But while Apple may be testing ML technologies in its AirPods, the system was not included in the third generation version released last year.
Looking back on the past decade, internet speed has made major progress, with many of the old limitations removed over the years. A 50-100 Mbps connection at home is nothing extraordinary anymore, and hardware has become cheaper and cheaper. But even with the fastest PC and the fastest internet connection, there is usually a noticeable delay when loading a website. For some pages it is less than half a second, for others it can be two seconds or more. As they say: time is money (especially on the internet), so reducing loading times is one of the major goals of the Leaseweb Networking department.
Once in a while, great steps are made towards this ideal of instant content delivery. Most recently, the web gurus at Google proposed a method to increase the speed by improving commonly used internet protocols like SSL, HTTP and even Transmission Control Protocol (TCP). When you translate their ideas, it is theoretically possible to increase the loading speed of a page by about ten to fifty percent.
Let’s take a quick look at how this works. Before any real data can be sent across a network, like the content of a website, the client and the server have to talk to each other, exchanging several messages in the process. TCP is a kind of ‘language’ that can be used by clients and servers to communicate. Each time a message is sent by one party, it reaches the other with some delay. A reply is then sent back to the original sender – also with a delay. This delay is called latency, which we discussed in our “It’s all in the game” series. Since the inception of the TCP protocol in 1974, we have increased bandwidth a million times, but latency remained more or less the same with only some minor improvements.
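The cost of those exchanges is easy to put into numbers: every round trip adds one full latency before the first byte of content arrives. A rough back-of-the-envelope sketch (the round-trip counts are illustrative and vary by protocol version):

def time_to_first_byte(rtt_ms, round_trips):
    return rtt_ms * round_trips

rtt = 100   # ms, e.g. an intercontinental connection
# 1 RTT for the TCP handshake, 2 for a classic SSL handshake, 1 for the HTTP request:
print(time_to_first_byte(rtt, 1 + 2 + 1), "ms before any content arrives")   # 400 ms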
Google is now working on what they call SPDY (pronounced speedy) standardization. This is a set of modifications to several internet protocols, all aimed at lowering page loading times. The idea behind the proposed modification for TCP is, like all good ideas, really simple: Just reorganize the language in which clients and servers talk, thereby reducing time they wait for a reply. Of course, in reality this is very complicated to do because of the amount of different devices already connected to the Internet. Still, it’s good to be moving forward in this area, which is why there’s a lot going on about SPDY – and not only within Google.
For instance, Firefox 11 will include preliminary SPDY support. Meanwhile, the IETF working group is developing HTTP/2.0 with the goal of improving the overall performance of the web – SPDY will probably be included in the new specification. A new SSL version is also being prepared, and most of the SPDY modifications have been added to the Linux kernel in recent months, with kernel version 3.2 already containing a lot of SPDY code. But as always, you have to be careful in a firewall environment – you never know how things may work out.
Leaseweb is of course very excited about the idea of speeding up the web and tracks the development of SPDY closely (our network also supports the modified TCP protocol). Unfortunately, it will be a while before the SPDY project is finished. It has to be able to boost modern day internet, while at the same time being backwards compatible with non-SPDY devices. That is where the real challenge lies, and is one of the reasons we’re also working on other solutions, such as our new CDN. In the end, all roads lead to Rome, but choosing which road to take is what makes designing global networks such an interesting job! | <urn:uuid:03d2ff04-4ff8-458d-b1a3-f8b9ba38afe8> | CC-MAIN-2024-38 | https://blog.leaseweb.com/2012/02/10/tcp-spdy-and-leaseweb/ | 2024-09-09T02:11:08Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651053.52/warc/CC-MAIN-20240909004517-20240909034517-00549.warc.gz | en | 0.953206 | 753 | 3.203125 | 3 |
One of the most common methods that hackers use/will use to attack your website is a cross-site scripting (XSS) attack.
The consequences of an XSS attack can be very drastic, ranging from the hacker spreading worms on your website, to the hacker stealing sensitive data that they can use for identity theft or financial crimes, to the hacker impersonating a user visiting your website by hijacking a progressing session.
It’s safe to say that an XSS attack is not something you will want to have happen to your website, which is why we will explain a number of different steps you can take to protect your website against it from ever happening to you.
But before we dive into that, it’s also important to fully understand how XSS attacks work in the first place.
How does an XSS attack work?
Cross-site scripting attacks differ markedly from other hacking attacks, such as SQL injection, in that they are intended to attack the users of an application rather than the application itself.
There are a number of vulnerable places they will be able to accomplish this in your website, such as in your search fields, your forums, or your cookies.
To give you an idea of what a hypothetical cross-site scripting attack would look like on your website, let’s say that your website has a message board that allows visitors to submit comments.
The comments would be stored in a database online, and then be displayed to other visitors without any encoding.
A hacker would simply need to post a comment with a malicious script in it, enclosed by <script> tags. When another visitor views the comment, the script will be executed and request the cookie information from the unsuspecting visitor. Furthermore, the script runs in the context of a 'secured website,' so the browser cannot discriminate between malicious and legitimate content.
This is ultimately just one type of XSS attack, referred to as a persistent XSS attack, but it is one of the most common.
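Notably, the root cause in the scenario above is the missing encoding. A minimal sketch in Python: escaping user-supplied text before writing it into the page turns the script into inert, visible characters:

import html

def render_comment(comment):
    # <, >, & and quotes become HTML entities, so browsers display the
    # script as text instead of executing it.
    return "<p>" + html.escape(comment, quote=True) + "</p>"

print(render_comment("<script>steal(document.cookie)</script>"))
# <p>&lt;script&gt;steal(document.cookie)&lt;/script&gt;</p>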
Another way an XSS attack may occur (in a non-persistent XSS attack) is for the hacker to deceive a website visitor into clicking a malicious URL, which then injects code into the page and gives the hacker full access to its content.
Now that we know how XSS attacks work, we can begin to discuss some of the most effective steps you can take to prevent it from being a reality in your website.
Steps to prevent an XSS attack
Here are some different steps and strategies you can utilize to prevent an XSS attack and help keep your website data safe:
Use an SDL
SDL stands for ‘Security Development Lifecycle.’
You’ll specifically want to have used an SDL when developing your web application, with the main purpose of them being to limit the amount of coding errors and security flaws in your application, thereby making your website less vulnerable to an XSS attack.
In essence, an SDL assumes that all data your web application receives comes from a source that can't be trusted, even if that data comes from users who have logged into your site many times.
This is exactly why an SDL will be beneficial in reducing XSS vulnerabilities, because as we have just looked at previously, an XSS attack can be launched against those who have already logged in.
Adopt a crossing boundaries policy
A crossing boundaries policy means that you make any authenticated users in your website have to re-enter their login information before they are allowed to access certain pages and services on your site.
Even if the authenticated user has a cookie that allows them to login automatically, you can still set it so that they have to re-enter their username and passwords anyway before entering certain pages.
The reason why this strategy is effective at stopping an XSS attack is because it strongly restricts the potential for a session being hijacked by an XSS hacker.
You can further expand on this concept to set it so that a session will be automatically expired when two IP addresses try to use the identical session data.
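Here is a minimal sketch of that last idea, expiring a session the moment a second IP address presents the same session data (session handling is simplified to a dictionary):

sessions = {}   # session_id -> IP address that first used it

def validate(session_id, client_ip):
    bound_ip = sessions.setdefault(session_id, client_ip)
    if bound_ip != client_ip:
        del sessions[session_id]     # two IPs used identical session data: expire it
        return False
    return True

print(validate("abc123", "203.0.113.7"))    # True: first use binds the IP
print(validate("abc123", "198.51.100.9"))   # False: session expired as a hijack suspect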
Use the right META tag
Here’s a meta tag that you can use on each page in your site to declare characters:
<META http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
The benefit to using this meta tag is that it will greatly reduce the number of potential forms that an XSS script injection can take.
Make use of a website vulnerability scanner
Last but not least, you can also make use of website vulnerability scanners when developing your web applications.
With a good vulnerability scanner, you can spot security weaknesses and flaws in your website, including those that are the most vulnerable to XSS hackers.
A vulnerability scanner will become even more important if you are using a third party package, in which case you may run into some configuration problems and you can’t assume that the package will be secured.
In conclusion, a cross site scripting attack is one of the most dangerous attacks that could happen on your website, with drastic consequences for your website that you won’t want to experience.
The good news is that no matter how dangerous an XSS attack may be, they are not something that you have to experience either, and you can take action to avoid them by following the steps that we have covered here today.
What is software protection?
Software protection refers to measures that can be taken by a software developer to prevent unauthorised use of their software, enforcing their licensing agreement and using anti-debugging and anti-reverse engineering techniques to protect their intellectual property (IP) against theft.
According to the BSA at least 37% of software in use worldwide is pirated. This figure is significantly higher in Asia, Latin America and BRIC countries. This represents a huge loss in revenue to software developers.
Software piracy can take many forms. Unlicensed software can easily be distributed online or via peer-to-peer networks free of charge. Sometimes, illegal copies are made to look like the genuine product and sold for a fee. However, piracy can often occur when your customers exceed the terms of their licensing agreement by installing extra copies or allowing many users to use a single licence across a network.
Preventing Software Piracy
Piracy is prevented using a mixture of strong encryption and techniques to prevent debugging, analysing, tampering, dumping and decompiling of the developer's software. The central strategy of copy protection is achieved by strongly binding the protected software to a security token. This can be a hardware device such as a USB security dongle or a software-based key or lock that exists on the computer. If the security token is not present, or is compromised, then the software will not run.
Typically, asymmetric encryption such as RSA is used so that the protected software can securely communicate with the security token. The public key is stored in the protected application and the private key is stored in the security token. The tokens are designed with a very high level of security that can prevent extraction and modification of the contents of the token and cloning of the token itself.
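As an illustration of that split (and not the actual protocol of any particular product), here is a challenge-response sketch using the Python cryptography library, where only the token side ever holds the private key:

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

token_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # lives inside the token
app_public_key = token_key.public_key()          # shipped inside the protected application

challenge = os.urandom(32)                       # 1. application sends a fresh random challenge
signature = token_key.sign(                      # 2. token signs it with its private key
    challenge,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
app_public_key.verify(                           # 3. application verifies; raises InvalidSignature on failure
    signature, challenge,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print("Token present and genuine - application may run")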
Sometimes developers try to implement their own primitive copy protection systems based on some unique elements of the computer. However most, if not all, discover that this generates a huge amount of support and that to write such systems reliably and with enough flexibility for general use is very difficult. Not to speak of preventing debugging and reverse engineering of their software, which are advanced topics best left to the experts. Microcosm has spent over 30 years perfecting its copy protection systems and offers two solutions: Dinkey Pro, a hardware dongle-based solution and CopyMinder, a purely software-based protection system.
Software Protection Solutions by Microcosm
Both of our software protection solutions support two methods to integrate the protection:
The Shell method automatically adds a secure wrapper (a protective shell) to your application, forcing it to confirm the existence and status of the security token before the application is allowed to run. The Shell method also encrypts code and data in your software to prevent reverse engineering. With this method of protection you can apply software protection without modifying your source code.
The other approach is to call our protection API. This gives you flexibility on when to trigger a protection check and what actions to take depending on whether the token is present or the value of parameters securely stored inside the token. For example you can choose to terminate your program or run in demo mode if the token is not found. Or, you can decide which features of your program are enabled depending on data stored inside the token.
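A sketch of how such an API might be used inside a program; every function and field name below is an illustrative stand-in, not the real Dinkey Pro or CopyMinder API:

import datetime

def check_token():
    """Stand-in for the vendor driver call; returns None when no token is found."""
    return {"expiry": datetime.date(2030, 1, 1), "features": 0b01}

def start_application():
    token = check_token()
    if token is None:
        print("Demo mode: saving and export disabled")   # graceful fallback
        return
    if token["expiry"] < datetime.date.today():
        raise SystemExit("Licence expired")
    reporting = bool(token["features"] & 0b01)   # feature bit flags stored in the token
    export = bool(token["features"] & 0b10)
    print(f"Full mode: reporting={reporting}, export={export}")

start_application()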
All of the licensing details inside the token can be updated securely and remotely. For example, you may use this to extend an expiry date or change which features of your software a customer can use.
You can use our software protection solutions to implement all kinds of licence models, including:
- One-off purchase
- Secure trials/demos
- Network licensing (including controlling the number of users on a network using your software at the same time)
- Per computer/per dongle
- Pay-per-use: controlling the number of executions of your software or of a command within your software | <urn:uuid:4cace42d-34b4-4e03-be12-eddebd6623d4> | CC-MAIN-2024-38 | https://www.microcosm.com/blog/what-is-software-protection | 2024-09-10T07:24:46Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651224.59/warc/CC-MAIN-20240910061537-20240910091537-00449.warc.gz | en | 0.916876 | 787 | 3 | 3 |
Monday April 23, 2018
What is the 21st Century Cures Act?
The 21st Century Cures Act (Cures Act), signed on December 13, 2016, is intended to fast-track the development of new medical products and to get new innovations to the patients who need them faster and more efficiently.
Why is the 21st Century Cures Act Important in the Life Sciences Industry?
The law supports the FDA's continued work to integrate patients' perspectives into the development of drugs, devices, and biological products and into the decisions the FDA makes. The Cures Act improves the ability to streamline clinical trial design and clinical outcome assessments within the life sciences industries, which, overall, will speed up the development and assessment of new medical products, including medical devices.
According to the Advanced Medical Technology Association, which represents 300 medical device companies, “the Cures Act builds on FDA’s current programs to allow a quicker path for breakthrough medical technologies for patients with life-threatening or irreversibly debilitating diseases or conditions, and limited alternatives.”
How does this Act impact health data?
The 21st Century Cures Act affects funding for health research as well as efforts to improve mental healthcare. The Act states that within the next two years, one billion dollars will be allocated to programs to prevent opioid abuse. Furthermore, biomedical research will receive funding totaling $4.8 billion, and new cancer research up to $1.8 billion. The data from this research may be groundbreaking for the future of medical advances.
The Arbour Advantage
Arbour Group is a trusted regulatory advisor to over 250 pharmaceutical, medical device and biotechnology companies worldwide. Let us demonstrate how we can prove ourselves as a valuable business partner by delivering effective services that reduce compliance costs. | <urn:uuid:c3107f9a-5114-4065-9589-c94abf534238> | CC-MAIN-2024-38 | https://www.arbourgroup.com/blog/2018/april/the-21st-century-cures-act/ | 2024-09-11T13:22:54Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651387.63/warc/CC-MAIN-20240911120037-20240911150037-00349.warc.gz | en | 0.941912 | 367 | 2.625 | 3 |
In the ever-evolving landscape of cybersecurity, Denial-of-Service (DoS) attacks stand as formidable threats aimed at disrupting computer systems and networks. This exploration aims to shed light on the tactics employed in DoS attacks, their impacts, and crucial safeguards to fortify digital infrastructures against these disruptive assaults.
Tactics Employed in DoS Attacks
Denial-of-Service attacks leverage various tactics to overwhelm a system or network, rendering it inaccessible to legitimate users. Common tactics include:
- Flooding Attacks: Inundating the target with a barrage of traffic, such as through a flood of connection requests or data packets.
- Resource Depletion: Exhausting system resources like bandwidth, CPU, or memory to incapacitate normal functioning.
- Protocol Exploitation: Exploiting vulnerabilities in network protocols to disrupt communication and service delivery.
- Distributed Denial-of-Service (DDoS): Orchestrating attacks from multiple sources to amplify the impact, making it more challenging to mitigate.
Impacts on Businesses and Organizations
The ramifications of successful DoS attacks extend beyond temporary inconvenience. Businesses and organizations may face:
- Downtime: Disruption of services and unavailability of digital assets.
- Financial Losses: Revenue loss due to interrupted operations and potential costs associated with recovery.
- Reputation Damage: A tarnished reputation as users experience service outages.
- Data Breach Risks: DoS attacks may serve as distractions for more nefarious activities, potentially leading to data breaches.
Safeguarding Against DoS Attacks
Implementing robust cybersecurity measures is crucial to defend against DoS attacks. Key safeguards include:
- Firewalls: Configuring firewalls to filter and monitor incoming and outgoing traffic.
- Intrusion Prevention Systems (IPS): Deploying IPS solutions to detect and block malicious activity in real time.
- Content Delivery Networks (CDN): Utilizing CDNs to distribute traffic and mitigate the impact of flooding attacks.
- Load Balancers: Implementing load balancers to evenly distribute traffic and prevent resource exhaustion.
- Incident Response Planning: Developing comprehensive incident response plans to swiftly address and mitigate the effects of an ongoing DoS attack.
- Regular Updates and Patching: Ensuring that systems and software are up-to-date with the latest security patches to mitigate vulnerabilities.
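One building block that firewalls, load balancers, and CDNs all rely on for the flooding scenarios above is per-client rate limiting. A minimal token-bucket sketch (the thresholds are illustrative):

import time

class TokenBucket:
    """Allow roughly `rate` requests per second, with bursts up to `capacity`."""
    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = capacity, time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                    # over the limit: drop or queue

buckets = {}                            # one bucket per client IP
def handle_request(client_ip):
    bucket = buckets.setdefault(client_ip, TokenBucket(rate=5, capacity=10))
    return "200 OK" if bucket.allow() else "429 Too Many Requests"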
Emerging Trends: DoS in the Age of IoT and 5G
As technology advances, DoS attacks evolve. The proliferation of Internet of Things (IoT) devices and the advent of 5G networks introduce new attack surfaces. Safeguarding against DoS in this era requires heightened awareness, adaptive security measures, and collaboration within the cybersecurity community.
Evolving Threat Landscape: DoS in the Era of Advanced Persistent Threats (APTs)
Denial-of-Service attacks are not isolated incidents but are often used strategically as part of broader Advanced Persistent Threats (APTs). Cyber adversaries may employ DoS tactics to divert attention or weaken defenses, creating opportunities for more sophisticated attacks. Organizations must adopt a holistic cybersecurity strategy that includes threat intelligence, continuous monitoring, and adaptive defense mechanisms.
Collaborative Defense: Information Sharing and Industry Partnerships
The cybersecurity landscape is ever-changing, and threat actors are known to adapt quickly. Collaboration among organizations, industries, and security vendors is crucial. Sharing threat intelligence, attack patterns, and mitigation strategies enhances the collective ability to thwart DoS attacks. Industry partnerships and information-sharing platforms play a pivotal role in building a united front against cyber threats.
Machine Learning and AI-Powered Defenses
As DoS attacks become more sophisticated, leveraging artificial intelligence (AI) and machine learning (ML) in cybersecurity defenses is increasingly essential. These technologies can analyze vast amounts of network data in real time, identifying anomalous patterns indicative of a DoS attack. AI-powered defenses can autonomously respond to threats, providing a more adaptive and proactive security posture.
Conclusion: Fortifying Digital Resilience
In the dynamic realm of cybersecurity, understanding and defending against Denial-of-Service attacks is paramount. By unraveling the tactics employed, recognizing the impacts on businesses, and implementing proactive safeguards, organizations can fortify their digital resilience against the disruptive forces seeking to compromise the availability and integrity of their systems and networks. Vigilance, preparation, and a commitment to cybersecurity best practices are the pillars of defense against the ever-present threat of DoS attacks. | <urn:uuid:8107ac98-4a73-424d-a386-7d0572a2305b> | CC-MAIN-2024-38 | https://garage4hackers.com/denial-of-service-dos-attacks/ | 2024-09-12T17:54:09Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651491.39/warc/CC-MAIN-20240912174615-20240912204615-00249.warc.gz | en | 0.914422 | 925 | 2.953125 | 3 |
The Internet of Things, or simply IoT, is believed to benefit consumers and, at the same time, improve the productivity of industries and enterprises. The IoT uses wireless devices that connect to each other and, with the help of the internet and minimal human intervention, deliver services that meet the needs of a variety of industries.
The IoT sector is relatively new and is still in development stages. However, experts believe that the impact of IoT will be considerable.
Manufacturers and enterprises face a constant pressure to continuously improve their services and operational performance. This to a large extent has been made possible with the use of modern equipment and innovative technology. However, there are manufacturing challenges that need to be addressed.
Most modern manufacturing processes use intelligent ‘smart’ devices and equipment. And if you combine the amount of information generated by automation systems, sensors, wireless tools and other devices, you come up with overwhelming data volume that cannot be easily stored or analyzed.
The traditional manufacturing systems sadly were not designed to analyze or interpret tons of data but the Internet of Things can change the way how manufacturing systems work. Simply put, the IoT platform gives enterprises a chance to achieve operational excellence. Yes, it now comes down to reaching operational innovation and excellence with connected ‘smart devices.’
Perhaps the best thing about the Internet of Things is that it integrates with your existing manufacturing processes. You can easily connect all devices, systems and even people involved in the manufacturing process for better and faster decisions.
Remember the key to IoT platform success is its rapid application. The approach works almost instantly which reduces manufacturing efforts and at the same time, minimizes your manufacturing costs and risks.
For manufacturing units and industries, the Internet of Things can remove the complexity associated with advanced analysis. What's even better is that IoT can be used for predictive analysis, which delivers proactive information to decision makers.
Remember that delivery of proactive information can improve quality and performance of complex manufacturing processes. On the other hand, enterprises can fix, rectify processes before extensive failure occurs.
For example, auto manufacturers can use IoT to monitor car performance and alert drivers for potential repairs. Healthcare facilities can use IoT to alert doctors about potential health issues earlier than ever before. Boston Medical Centre, for example, uses sensors in neonatal ICU which alerts nurses via smartphones when significant changes in a baby’s vital signs are detected.
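As a simplified illustration of how such alerting can work at the software level, the Python sketch below checks incoming sensor readings against normal ranges. The field names and thresholds are hypothetical, not those of any real clinical system:

```python
# Illustrative only: thresholds and field names are hypothetical,
# not those of any real clinical or vehicle monitoring system.
NORMAL_RANGES = {
    "heart_rate_bpm": (100, 180),   # illustrative neonatal resting range
    "spo2_percent": (90, 100),
}

def check_vitals(reading: dict) -> list[str]:
    """Return alert messages for any vital sign outside its normal range."""
    alerts = []
    for vital, (low, high) in NORMAL_RANGES.items():
        value = reading.get(vital)
        if value is not None and not (low <= value <= high):
            alerts.append(f"{vital}={value} outside [{low}, {high}]")
    return alerts

print(check_vitals({"heart_rate_bpm": 210, "spo2_percent": 85}))
```

Production systems add trend analysis and prediction on top of simple thresholds, which is where the proactive value of IoT analytics comes from.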
Experts suggest that IoT will change the performance of medical devices. The intelligent and smart IoT devices can offer medical professionals as well as patients’ greater independence while having their health, physical activity and safety measured and supervised.
Mobile connectivity is believed to rise dramatically with every passing day and by 2020, the number of connected devices could reach 25 billion. The increasing use of mobile networks and the ability to connect a broad range of devices including phones, tablets, laptops, TVs, cars, building and even machinery is what enables the development of new services and applications.
When devices and equipment are allowed to communicate with each other without human interaction or intervention, you can actually open up the doors to new business models and revenue streams. You can often use IoT to refine existing technology and invent new products. IoT and connected manufacturing will have a positive impact on a variety of economic sectors.
Service improvement and cost reduction through the evolution of IoT and connected devices could be worth approximately USD 2 trillion in 2020.
In summary, as connected manufacturing will improve, it will have a fundamental impact on the way businesses work. Not to forget there will be major social and economic benefits. Improved and safer healthcare, efficient logistics and more efficient use of energy can dramatically reduce wastage and improve time productivity. Mobile operators are working diligently to identify a much more diversified set of IoT capabilities and support a vast range of applications. | <urn:uuid:5652b5d3-c76f-47cd-aa0a-2dbe9651989c> | CC-MAIN-2024-38 | https://www.hsc.com/resources/blog/how-connected-manufacturing-will-save-time/ | 2024-09-15T06:10:27Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651616.56/warc/CC-MAIN-20240915052902-20240915082902-00049.warc.gz | en | 0.941739 | 816 | 2.921875 | 3 |
Passwords have long been the primary method of authentication, but they have proven to be vulnerable to various attacks. As cyber threats continue to evolve, it is essential to explore advanced authentication methods that can provide enhanced security. This article delves into the realm of advanced authentication methods that go beyond passwords, highlighting their benefits, challenges, and potential for widespread adoption.
The Limitations of Passwords
Passwords have been the go-to method for authenticating users for decades. However, they suffer from inherent vulnerabilities that can be exploited by determined attackers. Some of the common issues with passwords include:
- Weak Passwords: Users often choose weak passwords that are easy to guess or crack. Common examples include passwords like “123456” or “password,” which are easily susceptible to brute-force attacks.
- Password Reuse: Many individuals reuse the same password across multiple accounts, making it easier for attackers to gain unauthorized access to various platforms if they compromise one password.
- Phishing Attacks: Cybercriminals often employ phishing techniques to trick users into revealing their passwords. These attacks can be highly effective, especially when coupled with social engineering tactics.
- Password Storage: Online platforms store passwords in databases, and if these databases are compromised, passwords can be exposed, even if they are encrypted.
Advanced Authentication Methods
To overcome the limitations of passwords, various advanced authentication methods have emerged, offering improved security and user experience. Here are some notable methods:
Multi-Factor Authentication (MFA)
Multi-Factor Authentication is a method that combines two or more independent factors to verify a user’s identity. These factors can be categorized into three types:
a. Knowledge Factors: These include something the user knows, such as a password, PIN, or answers to security questions.
b. Possession Factors: These involve something the user possesses, such as a physical token, a smart card, or a mobile device.
c. Inherence Factors: These encompass something inherent to the user, such as biometric data (fingerprint, iris, or facial recognition).
MFA provides an additional layer of security as an attacker would need to compromise multiple factors to gain unauthorized access. It significantly reduces the risk of account breaches and is widely adopted by many online services.
Biometric Authentication
Biometric authentication methods leverage unique physical or behavioral traits of individuals to verify their identity. Common biometric factors include fingerprint recognition, iris scanning, voice recognition, and facial recognition. These methods are highly secure as they rely on distinctive characteristics that are difficult to replicate or forge. However, concerns regarding privacy, accuracy, and the potential for biometric data theft remain.
Hardware Tokens
Hardware tokens are physical devices that generate one-time passwords (OTP) or act as cryptographic keys. These tokens provide an additional layer of security by generating unique codes that must be entered along with a password. They are less susceptible to attacks like phishing or keylogging, as the generated codes are time-based and cannot be reused. However, the cost of distributing and managing hardware tokens can be a barrier to widespread adoption.
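Many hardware and software tokens implement the time-based one-time password (TOTP) algorithm standardized in RFC 6238. The following Python sketch shows the core of it using only the standard library; the secret shown is a demo value, not a real credential:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password (HMAC-SHA1, as many tokens use)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # 30-second time step
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # demo secret; prints a 6-digit code
```

Because the code depends on both a shared secret and the current time step, an intercepted code is useless moments later.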
Behavioral Biometrics
Behavioral biometrics analyze unique patterns in user behavior, such as typing speed, mouse movement, touchscreen gestures, or even the way a user holds their device. These patterns are then used to create a user profile, which can be compared against future interactions to determine authenticity. Behavioral biometrics provide continuous authentication without requiring explicit user action. However, they may encounter challenges in terms of accuracy, adaptability to changes in behavior, and potential false positives.
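As a toy illustration of the underlying idea, keystroke dynamics can be reduced to the intervals between key presses. The Python sketch below compares a login attempt's rhythm to an enrolled profile; real systems use far richer features and statistical or ML models:

```python
import statistics

def typing_profile(key_times: list[float]) -> list[float]:
    """Inter-keystroke intervals: the raw feature behind keystroke dynamics."""
    return [b - a for a, b in zip(key_times, key_times[1:])]

def matches(profile: list[float], enrolled: list[float], tolerance: float = 0.25) -> bool:
    """Crude check: mean intervals within `tolerance` seconds of each other."""
    return abs(statistics.fmean(profile) - statistics.fmean(enrolled)) < tolerance

enrolled = typing_profile([0.00, 0.18, 0.40, 0.55, 0.79])  # enrollment sample
attempt = typing_profile([0.00, 0.21, 0.45, 0.58, 0.83])   # login attempt
print(matches(attempt, enrolled))  # True: similar rhythm
```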
Challenges and Considerations
While advanced authentication methods offer significant security advantages, there are challenges and considerations that must be addressed:
- User Experience: The usability and convenience of advanced authentication methods are crucial for their widespread adoption. If the authentication process becomes cumbersome or time-consuming, users may seek alternative platforms or workarounds.
- Integration and Compatibility: Implementing advanced authentication methods requires careful integration with existing systems and platforms. Compatibility issues and technical constraints may arise during the implementation process.
- Cost and Scalability: Advanced authentication methods may involve additional costs for organizations, especially when considering factors such as hardware tokens or biometric sensors. Scalability is also a concern when deploying these methods across large user bases.
- Privacy and Data Protection: Collecting and storing biometric or behavioral data raises privacy concerns. Organizations must handle this data with utmost care, ensuring compliance with relevant regulations and implementing robust security measures.
As the threat landscape continues to evolve, relying solely on passwords for authentication is no longer sufficient to ensure robust security. Advanced authentication methods provide enhanced protection against various attacks and offer a more secure and user-friendly authentication experience.
While challenges exist, organizations must prioritize the implementation of advanced authentication methods to safeguard user accounts and sensitive data. As technology advances and new innovations emerge, exploring and adopting these methods will be crucial in the ongoing battle against cyber threats. | <urn:uuid:d9ecebbf-5360-4d79-bb24-6daf60726b1c> | CC-MAIN-2024-38 | https://www.infoguardsecurity.com/beyond-passwords-exploring-advanced-authentication-methods-for-enhanced-security/ | 2024-09-15T08:06:35Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651616.56/warc/CC-MAIN-20240915052902-20240915082902-00049.warc.gz | en | 0.927024 | 997 | 3.25 | 3 |
Because my career started in AI, as a machine learning and data mining researcher at City University, I wasn’t surprised to see inflated expectations of AI dashed by the limitations of the technology. Humans and machines can both get things wrong. Natural language models can be better at producing outputs that look right than they can produce truth. While Microsoft didn’t help itself by not fact-checking a pre-recorded demo, the current state of the technology does mean that false statements can be produced easily by AI language models if they look credible.
That said AI has many possibilities.
- Video: The deep fake video of actor & model Charlize Theron with Mr Bean's face shows some of the profound power of AI's ability to create realistic imagery.
- Images: Tools like DALL·E, and GANs (generative adversarial networks) more widely, can produce both realistic and fantastical images that can fool humans.
- AI Influencers: In the influencer world, we also see changes. Influencers are being created which are AIs: Lil Miquela came out a few years ago as non-human. ‘Her’ content had been highly convincing. Her music career has launched. Real humans have engaged in collaborations with her, and big brands like Givenchy. As an influencer, she’s worth over £100m. And, unlike a human, you can try to control AIs. They have no embarrassing tweets or old tapes waiting to be forgotten: or do they? Lil Miquela was hacked by a Trump fan, with remarkable consequences.
- Influential AI: However, influential AI will have a bigger impact than AI influencers. In the last US elections, an NGO designed a campaign to encourage voting (Represent.us) by keying into the threat of outside interference. Its deep fake videos of dictators fooled some people, and it was effective in communicating a message to them. Deep fake videos have certainly impacted voters reflecting on the elections. Its impact continues to grow, and so do ethical concerns.
- Metaverse: We're similarly in the early days of the Metaverse. Established consumer brands have learnt from gaming leaders and are creating powerful emotional experiences, like Ariana Grande's 2021 Fortnite concert, and valuable solutions like NFTs. In consumer markets, Nike has been an innovator in metaverse strategies, building out spaces, acquiring a digital sneaker business, keying into Generation Alpha (the next generation of consumers) and experimenting. Coca-Cola's virtual 'Starlight' flavour and virtual products from Burberry and Louis Vuitton also grab headlines.
- B2B AI: However, there are advanced B2B options. I and others have been writing about the impact of programmatic advertising for a decade. Siemens' Digital Twin systems allow businesses to replicate complex physical systems, visualise them and anticipate user challenges, workflow experiences and test their value. AI is already well deployed in B2B marketing and sales settings. Personalization is a huge opportunity.
Despite all these opportunities, the brand risks of AI are also massive.
- Deception: Brands are particularly at risk from impersonators intentionally producing fakes. Deep fakes are powerful and will be unavoidable temptations for many marketers. Because some AI outputs (such as NFTs) can be traceable, brands can also pursue synthetic imitators. In February 2023, Hermes won a New York court case against an artist that sold NFTs modeled on its Birkin handbag. Unintentional deception is almost impossible to avoid when generative models are used. AIs mistakenly create false outputs (as ChatGPT did).
- Emotion: A brand’s AI use can shock the audience, and AIs can act in disturbing and unexpected ways. Buyers often prefer humans.
- Centralization: What Web 3.0 aimed to add to Web 2.0’s empowerment – the ability not only to communicate peer-to-peer with other users – was a shift in ownership. Web 3 had promised decentralization, so users and technologies would regulate it. However, Meta and other major vendors clearly aim to own the infrastructure.
- Legal liability: Many professionals are rushing into AI-generated content, without understanding the risks. One is that generative AIs necessarily reuse ideas and images created by humans. Intellectual property rights are already very complex. While few countries, if any, have specific laws on the use of AI in marketing, advertisers do face many legal requirements to be truthful. Furthermore, AI can make decisions faster, but choices can be wrong. More expansively, there are major human rights implications of AI use.
Industry analysts are overwhelmingly bullish about the possibilities for AI, and rightly so. However, while AI accelerates business and reduces many risks, it also produces new risks. | <urn:uuid:99209f3e-8db3-42cd-94f9-e6bc4f9672a8> | CC-MAIN-2024-38 | https://ccgrouppr.com/blog/what-charlize-theron-and-mr-bean-taught-us-about-ai-in-marketing/ | 2024-09-16T10:40:27Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651682.69/warc/CC-MAIN-20240916080220-20240916110220-00849.warc.gz | en | 0.948708 | 1,006 | 2.5625 | 3 |
Security can be complicated. Professionals must evaluate each available tool, and there is a lot to weigh when deciding which methods or technologies best keep people safe. When it comes to vehicle inspection, there are several ways to make interactions with vehicles more reliable and secure. One option is an under vehicle inspection system, which can scan for possible threats. You may be wondering, though: what role does foreign object detection actually play? Read on to learn how and why foreign object detection is such a big part of vehicle inspection systems.
What Makes Foreign Object Detection?
Foreign object detection is a technical concept already applied in many scenarios: on airport runways, in food safety, and at sporting events and concerts. It can be used to make sure runways are clear of debris, to detect harmful contaminants that have made their way into food, or to see if anybody is trying to sneak illegal materials into a concert or sports game. Since it is broadly utilized for so many different purposes, foreign object detection makes sense in various security applications. This especially includes vehicle inspection systems, where foreign object detection can help in a big way by discovering explosives, contraband, weapons, and anything else that may be somewhere it doesn't belong.
How Do We Utilize Foreign Object Detection?
We at Gatekeeper have an intelligent under vehicle scanner system that will instantly alert personnel of any possible dangers or contraband that could be present underneath your vehicle. It automatically compares the newly scanned image with the exact vehicle in the system’s extensive database. The system will identify the relevant reference image in the database through Gatekeeper’s pattern recognition algorithms. The system will then quickly compare the newer image to the reference photo, automatically identifying the changes or foreign objects. This fast and reliable method of threat detection makes for a safe situation for each person involved. Security experts and personnel can rest knowing that their intelligent vehicle scanner system is doing the heavy lifting in this aspect. Everyone else in the facility or on the roads is made noticeably safer because of this form of technology.
Why Use Under Vehicle Inspection Systems?
Many experts agree that undercarriage inspection is a very important element in the vehicle inspection field. With vehicle inspection systems, you can now create a digital fingerprint of every vehicle, making it much easier to locate objects that might pose a threat. While combined with other security technology, this makes it a reliable approach towards vehicle security that you won’t find elsewhere.
Groundbreaking Technologies with Gatekeeper
Gatekeeper Security’s suite of intelligent optical technologies provides security personnel with the tool to detect today’s threats. Our systems help those in the energy, transportation, commercial, and government sectors protect their people and their valuables by detecting threats in time to act. From automatic under vehicle inspection systems, automatic license plate reader systems, to on the move automatic vehicle occupant identifier, we offer full 360-degree vehicle scanning to ensure any threat is found. Throughout 37 countries around the globe, Gatekeeper Security’s technology is trusted to help protect critical infrastructure. Follow us on Facebook and LinkedIn for updates about our technology and company. | <urn:uuid:fc07e46b-df4e-4f25-b47a-8def64e83870> | CC-MAIN-2024-38 | https://www.gatekeepersecurity.com/blog/makes-foreign-object-detection-important-aspect-vehicle-inspection-systems/ | 2024-09-16T08:14:32Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651682.69/warc/CC-MAIN-20240916080220-20240916110220-00849.warc.gz | en | 0.937209 | 657 | 2.515625 | 3 |
In today’s digital age, access to the internet is essential. Whether you’re grabbing a coffee at your favorite café, waiting for a flight at the airport, or working remotely from a co-working space, public Wi-Fi seems like a convenient solution. However, the convenience of public Wi-Fi comes with significant risks that can compromise your personal and professional data.
Imagine this scenario: You are sitting in your favorite café, enjoying a cup of coffee and working on your laptop. You connect to the café’s public Wi-Fi to check your emails and manage some work tasks. Unbeknownst to you, a hacker is also connected to the same Wi-Fi network. Within minutes, they can intercept your connection, capturing sensitive information such as login credentials, emails, and even financial data. This is a common example of the dangers associated with public Wi-Fi.
While public Wi-Fi is inherently risky, your private Wi-Fi network at home or work is not invulnerable to attacks. Hackers can gain access to private networks, compromising the security of your data and devices.
By taking sensible precautions, you can significantly reduce the risk of cyber attacks and keep your data safe, whether you're at home, at work, or on the go.
| <urn:uuid:73d39cc7-08b8-48e4-a1b8-193c52c252e3> | CC-MAIN-2024-38 | https://cyvatar.ai/the-hidden-dangers-of-public-wi-fi-and-protecting-your-private-network/ | 2024-09-08T01:40:33Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650926.21/warc/CC-MAIN-20240907225010-20240908015010-00749.warc.gz | en | 0.943917 | 289 | 2.765625 | 3 |
Security experts have been bemoaning the endless array of problems associated with using passwords — they’re either too easy for criminals to guess or too difficult to remember, they’re reused, they’re constantly being stolen. Until recently, there’s been no practical way to get away from them.
Even the fingerprint or facial scanners on phones, which can make it possible to log into your DropBox or PayPal account without typing in your password, don’t do away with the passwords themselves. The passwords are still there, used when you first set up the app, or needed when you want to log in from another device or browser.
Things are starting to change, however. In March, the World Wide Web Consortium (W3C) approved the WebAuthn standard, a joint project with the FIDO Alliance, which allows for passwordless authentication on the web using authentication mechanisms such as the fingerprint reader on a smartphone. All major browsers support it, including Chrome, Firefox, Microsoft Edge and Safari. So do Android phones and Windows 10 computers.
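Under the hood, WebAuthn builds on public-key challenge-response authentication. The simplified Python sketch below (using the `cryptography` package) shows that core flow; the real standard adds origin binding, attestation, and signature counters, so treat this as a conceptual outline only:

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Enrollment: the device creates a key pair; only the public key
# ever leaves the device.
device_key = ec.generate_private_key(ec.SECP256R1())
server_copy_of_public_key = device_key.public_key()

# Login: the server sends a random challenge, the device signs it
# (after a local fingerprint/face check), and the server verifies.
challenge = os.urandom(32)
signature = device_key.sign(challenge, ec.ECDSA(hashes.SHA256()))
server_copy_of_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("signature valid: no shared password was ever transmitted or stored")
```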
The idea is that identity is federated. A fingerprint, photo, or voice recording is stored locally on the phone and is never transmitted to third parties. The phone uses a secure mechanism to authenticate the user and then confirms the identity to the website or application. The system isn't perfectly secure: there are ways to hack fingerprints and facial IDs, and if the authentication mechanism is a hardware token like a USB key, it can be stolen. Still, it is a significant improvement in security over the traditional user account and password approach to authentication. | <urn:uuid:81d37c94-27e2-47fc-844a-2bf61861b40a> | CC-MAIN-2024-38 | https://www.mariakorolov.com/2019/how-first-citrus-bank-got-rid-of-employee-passwords/ | 2024-09-08T00:50:49Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650926.21/warc/CC-MAIN-20240907225010-20240908015010-00749.warc.gz | en | 0.901049 | 330 | 2.859375 | 3 |
How many oceans are there on Earth?
A. 3 B. 5 C. 7 D. 9
Marine biology is a fascinating field that explores the wonders of the ocean and its inhabitants. The world of marine biology is vast and covers various aspects of life in the oceans. One interesting fact is that there are 5 oceans on Earth: the Arctic Ocean, the Atlantic Ocean, the Indian Ocean, the Pacific Ocean, and the Southern Ocean. | <urn:uuid:51c2a27d-497b-4223-9061-95d362fc5734> | CC-MAIN-2024-38 | https://bsimm2.com/arts/the-exciting-world-of-marine-biology.html | 2024-09-09T06:22:08Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651072.23/warc/CC-MAIN-20240909040201-20240909070201-00649.warc.gz | en | 0.915121 | 90 | 2.90625 | 3 |
What Is an HPC Cluster?
Each server in an HPC cluster is called a node. “The nodes in each cluster work in parallel with each other, boosting processing speed to deliver high-performance computing,” the post notes.
Cameron Chehreh, CTO and vice president of pre-sales engineering at Dell EMC Federal, tells FedTech these nodes may include processing power through CPUs and GPUs on servers; tools such as NVIDIA and Intel software development kits; frameworks including TensorFlow, MXNet and Caffe; and essential platforms with Kubernetes and Pivotal Cloud Foundry.
As an Iowa State University guide notes, there may be different types of nodes for different types of tasks. These can include a headnode or login node, where users log in to HPC systems; specialized data transfer nodes; regular compute nodes; so-called “fat” nodes that have at least a terabyte of memory; graphics processing unit nodes; and more.
“All cluster nodes have the same components as a laptop or desktop: CPU cores, memory and disk space,” the guide states. “The difference between a personal computer and a cluster node is in the quantity, quality and power of the components.”
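Work is typically spread across those nodes with a message-passing library such as MPI. As an illustrative sketch, assuming the `mpi4py` package and an MPI runtime are installed and the script is launched with something like `mpirun -n 4 python hello.py`:

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's id (0..size-1)
size = comm.Get_size()   # total processes across all nodes

print(f"process {rank} of {size} on {MPI.Get_processor_name()}")

# A toy parallel reduction: each rank contributes one term of a sum.
partial = rank + 1
total = comm.reduce(partial, op=MPI.SUM, root=0)
if rank == 0:
    print(f"sum of 1..{size} = {total}")
```

The same pattern, many processes computing partial results that are combined, is how HPC clusters scale a single problem across hundreds or thousands of nodes.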
HPC Applications in Government
In addition to enabling critical research such as COVID-19 treatments, HPCs in government support a wide range of cutting-edge research that could not be accomplished with regular computing power.
The Energy Department’s National Renewable Energy Laboratory, for example, runs its High Performance Computing User Facility for scientists and engineers “working on solving complex computational and data analysis problems related to energy efficiency and renewable energy technologies,” the NREL says.
“The work performed on NREL’s HPC systems leads to increased efficiency and reduced costs for these technologies, including wind and solar energy, energy storage, and the large-scale integration of renewables into the electric grid,” the lab notes.
HPCs also enable research partnerships between the government and private sector into other kinds of energy innovation and advanced manufacturing techniques. | <urn:uuid:ef6144e8-e506-4d65-88a5-162212d8c978> | CC-MAIN-2024-38 | https://fedtechmagazine.com/article/2021/06/high-performance-computing-clusters-and-applications-government-perfcon | 2024-09-11T15:39:11Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651390.33/warc/CC-MAIN-20240911152031-20240911182031-00449.warc.gz | en | 0.932033 | 438 | 3.203125 | 3 |
Wireless Connectivity for the Internet of Things (IoT) 13.11.2024
The Internet of Things (IoT) covers a huge range of use cases and applications and scales from single devices to massive systems with various elements connecting in real time. Wireless connectivity is an integral part of IoT. Depending on the application, factors such as range, data requirements, security, power requirements and battery life will dictate the choice of one or some form of combination of wireless technologies.
Traditional cellular mobile networks based on 2G/3G/4G have almost ubiquitous coverage and high data rates, but at the cost of high power requirements at end-user devices. Low Power Wide Area Networks (LPWAN) based on standards like LoRaWAN, Ultra Narrow Band (UNB) or NB-IoT are expected to complement traditional connectivity solutions for long range communication. Short range technologies such as Bluetooth, BLE, ZigBee, WiFi or RFID will provide connectivity over short distances. And, of course, 5G presents another set of opportunities for IoT connectivity. This course explores radio technologies for IoT applications and discusses the underlying concepts and the resulting advantages and limitations. An analysis of spectrum requirements and availability complements the training.
After completing the training, participants will be familiar with most recent radio technologies available to power IoT applications. They will understand the differences between the technologies, and the benefits and compromises of each.
This course is intended for those who have basic knowledge in radio communication systems, who are interested in wireless systems for IoT applications and who may be responsible for implementing radio systems in industry.
- IoT applications and communication requirements
- Overview on wireless technologies and approaches for IoT applications
- Spectrum requirements and availability
- Radio systems for Low Power Wide Area Networks
- Radio systems for Low Power Personal Area Networks
- 3GPP systems and the role of 5G for IoT | <urn:uuid:2f4fbe26-10b4-4bb0-9b09-0a0c5ec51b60> | CC-MAIN-2024-38 | https://www.webstore.lstelcom.com/products/wireless-connectivity-for-iot | 2024-09-14T04:06:00Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651548.18/warc/CC-MAIN-20240914025441-20240914055441-00249.warc.gz | en | 0.894076 | 387 | 3.1875 | 3 |
Virtualization is a powerful tool that helps reduce administrative burdens while boosting cost savings, scalability, and efficiency. Organizations therefore look to virtualization as they modernize digitally. It is a technology that lets you create valuable IT services using resources traditionally bound to hardware. Virtualization is increasingly relied upon to boost IT agility, flexibility, and scalability while generating considerable cost savings. However, the distinctions between Docker and Virtual Machines are frequently unclear.
So, in this article, we will highlight the key difference between Docker and Virtual Machines. Keep reading to learn more!
What is Docker?
Docker is an open-source platform for developing, distributing, and running applications. Docker makes it possible to decouple your applications from your infrastructure, so you can manage your infrastructure the same way you manage your apps and deploy software swiftly. Simply put, Docker is a virtualization technology and software development platform that makes it simple to create, distribute, and manage applications using containers.
You can read “What is Docker and its benefits?” to learn more.
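As a concrete illustration, applications are typically packaged for Docker with a Dockerfile. The minimal sketch below is a generic example for a Python app; the file names are placeholders, not part of any specific project:

```dockerfile
# A minimal, illustrative Dockerfile for a Python application
FROM python:3.12-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
CMD ["python", "app.py"]
```

Building and running it is then a matter of `docker build -t myapp .` followed by `docker run myapp`.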
What is a Virtual Machine?
A Virtual Machine (VM) is a computing resource that uses software instead of a physical computer to run and deploy applications. They work best for concurrently executing several applications, monolithic applications, app isolation, and legacy apps that are still supported by previous operating systems. They were developed to carry out tasks that could be dangerous if done directly in the host environment. The software running within a virtual machine cannot interfere with the host computer since it is segregated from the rest of the system.
Difference between Docker and Virtual Machines:
The following are the key differences between Docker and Virtual Machines.
| Parameter | Docker | Virtual Machines |
| --- | --- | --- |
| Operating System (OS) | Docker containers are hosted on a single physical server running a host OS. | Virtual Machines have both a host OS and a guest OS (such as Linux or Windows) inside each Virtual Machine, regardless of the host OS. |
| Size | A Docker image is compact and often only a few megabytes in size. | A Virtual Machine instance can be many gigabytes or even terabytes in size. |
| Portability | Docker containers are portable due to the lack of a separate operating system. A container can be launched right away after being ported to a different OS. | Virtual Machines have their own OS; therefore, porting a Virtual Machine is more complicated than porting a container and takes much longer due to the size of the virtual machine. |
| Used for | Docker is used to run multiple applications over a single OS kernel. | Virtual Machines are needed to run applications or services on different OSes. |
| Speed | Applications in Docker containers launch immediately because the OS is already running. | Virtual Machines require far more time than a container does to run applications. |
| Security | If an attacker gains access to even one container, they can compromise the entire cluster of containers because Docker resources are shared and not namespaced. | Virtual Machines operate independently with their own kernel and security features, so applications requiring greater security and privileges run on virtual machines. |
| Performance | Docker containers use the same operating system without any additional software; therefore, they perform well. | Since Virtual Machines use a separate OS that consumes more resources, they start slowly and perform poorly. |
| Duration to create | Docker containers can be created in seconds. | Virtual Machines take a relatively long time to create. |
| Replicability | It is simple to replicate Docker containers, and Docker images are available for many applications. | Virtual Machines are challenging to replicate, especially as the number of VM instances grows. |
How can InfosecTrain help?
We hope that the fundamental differences between Docker and Virtual Machines in this article have helped you choose the most appropriate option for your needs. If you want to learn all there is to know about Docker and containers, you can enroll at InfosecTrain’s Docker Certified Associate (DCA) as well as various cloud certification training courses to learn about Virtual Machines. We are a leading IT security training and consulting service provider across the globe. Enroll now and leverage the perks of learning from experienced professionals. | <urn:uuid:b3df0847-e762-4ef4-906b-e6825f5a814c> | CC-MAIN-2024-38 | https://www.infosectrain.com/blog/docker-vs-virtual-machines/ | 2024-09-15T11:05:51Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651622.79/warc/CC-MAIN-20240915084859-20240915114859-00149.warc.gz | en | 0.933865 | 853 | 3.1875 | 3 |
With a vast majority of the population using mobile phones on a daily basis and relying heavily on them, it would be devastating for a hacker to gain access to these devices and steal sensitive data.
It’s important to stay ahead of cybercriminals. One major step in doing that is through education. Learning about these attacks and taking the proper steps to implement best practices will help ensure you are protected against potential cyber-attacks.
SIM Swapping and Cloning Attacks
Cybercriminals are using SIM swapping and cloning attacks to access sensitive information. SIM swapping, also called a SIM hijacking attack, happens when someone convinces your mobile carrier to port your phone number over to their SIM card. The person is then able to access your most sensitive accounts/data by completing mobile text 2FA (two-factor authentication) checks.
Cloning attacks are a bit more sophisticated but have the same goal as SIM swapping. Cybercriminals use smart card copying software to create a copy of the real SIM card, which therefore gives access to the victim’s IMSI (international mobile subscriber identity) and master encryption key.
Cybercriminals want your SIM card so they can access account information, financial information, and PII (personally identifiable information). Once a hacker has a copy of your SIM card, they can use it in a device to intercept your texts and phone calls and track your location.
A few ways to tell if you have had your SIM card cloned:
- You are no longer receiving text messages or receiving phone calls.
- There are numbers on your account that you do not recognize.
- You are instructed to restart your phone.
- When using a location tracker, it appears your device is in a different location.
- You are locked out of your accounts.
FBI Issues Public Service Announcement
In early 2022, the FBI noticed a surge in SIM swapping attacks and issued a public service announcement, stating the agency received 1,611 SIM-swapping complaints in 2021 and estimates the attacks stole $68 million from victims. Alarmingly, this was a huge increase from previous years.
In their public notice, the FBI urges mobile carriers to educate employees and conduct training sessions on SIM swapping, and to carefully inspect the email addresses on incoming official correspondence for slight changes that can make fraudulent addresses appear legitimate and seem to belong to actual clients.
A Data Protection Solution for SIM Swapping and Cloning Attacks
To prevent SIM swapping, implementing Eclypses MTE Technology helps verify each endpoint connection by putting in place a unique MTE relationship with the physical device and the server it’s trying to communicate with.
For example, during the registration of an application the user is setting up the application (signing in, registering their device, etc.) During this process the app could have the user move a cursor around the screen with their finger. By having the user move their finger around the screen in a random motion, collecting the coordinates of the movements, and then adding them all together – a random, non-replicable, number is created. This number can then be fed into the instantiation of the MTE as part of the seed value. By having a multipart seed value and this random number as part of that value, it makes it nearly impossible for a cybercriminal to hack. The random number will be shared between that specific instance of the app and server, setting up a unique relationship between the two. This number is single-use and will be instantly obsolete after MTE instantiation is complete. This eliminates the ability for someone to discover the value and compromise the communication. With this process in place, the user could give a cybercriminal their username, password, and two-factor authentication code and the criminal would still not be able to log into an app from another device.
The most crucial part of staying ahead is through education and implementing best practices. In today’s ever-cyber landscape, it is important for individuals and companies to take cyber security seriously to create a safer digital environment for all.
Previously published in Top Cyber News Magazine on Linkedin. | <urn:uuid:973c98eb-f1c7-498e-92a4-d4529083b773> | CC-MAIN-2024-38 | https://eclypses.com/news/staying-ahead-of-the-cyber-attack/ | 2024-09-17T19:21:47Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651829.57/warc/CC-MAIN-20240917172631-20240917202631-00849.warc.gz | en | 0.936636 | 842 | 2.546875 | 3 |
When considering the future of renewable energy, Pennsylvania presents an intriguing case study. Recent research conducted by Princeton University, and published in Nature Energy, explores the broad public support for solar energy in the state and the significant misalignment between this support and the perceptions of local policymakers. This disconnect hampers the effective deployment of solar projects and presents critical questions regarding the state’s energy transition.
Widespread Public Support for Renewables
In Pennsylvania, public sentiment strongly favors renewable energy projects, indicating a significant shift in the populace’s preference for clean energy solutions. A comprehensive survey conducted by Princeton University involving 894 residents revealed a broad, bipartisan endorsement for solar energy. Both Democrats and Republicans in the state demonstrate a clear preference for renewable sources over traditional fossil fuels such as natural gas, even when carbon capture and storage (CCS) technologies are proposed as mitigating elements.
This bipartisan support is significant as it transcends the typical political divides that often characterize energy debates. The uniformity in public opinion underscores a collective acknowledgment of the environmental and economic benefits of renewable energy. Despite varying political beliefs, there is a consensus that transitioning to renewable energy is vital for Pennsylvania’s future, reflecting broader national trends favoring sustainable energy solutions. The finding that preferences for renewables cross party lines also suggests robust prospects for future policy-making that prioritizes clean energy.
Furthermore, the survey’s results reflect an ingrained concern for environmental sustainability and economic efficiency among Pennsylvanians. The preference for solar energy projects illustrates an increasing awareness among residents of the long-term benefits associated with renewable energy—reduced carbon footprints, job creation, and increased local investment. This collective recognition of the positive impact of solar energy highlights the residents’ foresight in advocating for a cleaner and more self-sustaining energy infrastructure.
Policymakers’ Misinterpretation of Public Opinion
A critical revelation of the Princeton University study lies in the stark contrast between public opinion and policymakers’ perceptions. Surveys conducted with local officials, including those at township, municipality, and county levels, unveiled a substantial misjudgment regarding their constituents’ support for solar energy projects. Out of 206 local policymakers surveyed, a majority significantly underestimated the public’s enthusiasm for solar initiatives compared to the actual sentiments expressed by the populace.
This misjudgment can be attributed to several factors, including inadequate communication channels and limited engagement with the community. Policymakers often rely on vocal minority opinions that oppose renewable projects, leading to skewed perceptions about general public support. Additionally, the tendency of some policymakers to prioritize immediate economic gains from fossil fuel companies over long-term benefits of renewable energy further distorts their understanding of public opinion. This misperception can significantly influence decision-making processes, resulting in policies that fail to align with the majority’s preferences, thereby stalling progress in the state’s energy transition.
The implications of such misinterpretations are profound. Policies and decisions that do not reflect the true will of the public hinder the effective implementation of renewable energy projects. When policymakers base their decisions on inaccurate perceptions, it leads to a misallocation of resources, delayed project approvals, and potential erosion of public trust. Hence, addressing this gap is crucial for ensuring that energy policies are developed in a manner that accurately reflects public sentiment and effectively drives the state toward a sustainable future.
The Crucial Role of Local Officials
Local officials hold pivotal roles in the successful implementation of energy projects, as their decisions directly affect the approval, funding, and land allocation for renewable energy installations. Given their influence, it is imperative that these officials have an accurate understanding of public opinion to make informed decisions that genuinely reflect community interests. The misalignment identified in the study highlights the urgent need for local officials to reassess their perceptions and actively seek to bridge the existing gap.
Effective public participation processes are essential in addressing this issue and bridging the gap between public support and policymaker perceptions. Officials need to engage in meaningful dialogue with their constituents through various methods, including surveys, town hall meetings, and other communication modes, to capture a comprehensive view of public sentiment. Transparent and inclusive decision-making can facilitate better alignment between policy actions and public preferences. By engaging the community more directly and frequently, local officials can foster an environment where public input genuinely informs policy directions, thereby fostering smoother transitions to renewable energy.
Moreover, fostering improved communication and active engagement with the public can lead to more informed and balanced decision-making. By involving residents in the decision-making process, officials can ensure that their policies not only address environmental concerns but also consider the socio-economic aspects that affect local communities. This participatory approach can significantly enhance the effectiveness of energy policies, ensuring they support both ecological sustainability and economic resilience in Pennsylvania.
Community Ownership and Local Benefits
A profound aspect of the support for solar energy in Pennsylvania lies in the public’s preference for community-owned projects over foreign-owned enterprises. This preference underscores a fundamental trust issue, where the public is more inclined to back projects that ensure local control and benefits—such as job creation and lower energy costs—remain within the community. Such preferences highlight the public’s desire for an energy transition that directly benefits local economies and enhances community well-being.
Local ownership models resonate across the political spectrum, reinforcing the idea that energy projects can be powerful tools for local economic development. By prioritizing community ownership, policymakers can boost public trust and engagement, ensuring that the economic and social benefits of renewable energy projects are equitably distributed and appreciated by local communities. This approach not only aligns with the broader public sentiment but also promises to create more robust, resilient communities by fostering local economic growth and enhancing social cohesion.
The preference for community ownership also underscores a broader trend of increasing support for decentralized energy systems. This model not only promotes local economic benefits but also enhances energy security by reducing dependence on external entities. It empowers communities to have greater control over their energy sources and expenditure, fostering a sense of ownership and pride in contributing to the state’s overall sustainability efforts. As such, adopting community-owned energy projects can serve as a strategic advantage in driving Pennsylvania towards a more sustainable and self-reliant energy future.
Enhancing Developer Engagement
Another critical element identified in the research is the role of developers in garnering public support for solar projects. The research indicates that the public shows higher support for solar projects that promise tangible community advantages. Developers must prioritize transparent and continuous community engagement throughout the project lifecycle to build stronger community ties and mitigate opposition. By fostering trust and ensuring that projects deliver real benefits to local communities, developers can significantly enhance public support and facilitate smoother project implementation.
Engagement strategies should involve clear communication of project benefits and addressing potential concerns comprehensively. Developers should establish channels for regular community input and feedback, ensuring that projects are adapted to meet the public’s needs and expectations effectively. This includes not only highlighting the environmental benefits of solar projects but also emphasizing economic advantages, such as job creation, local investment opportunities, and potential reductions in energy costs. By adopting a community-centric approach, developers can create more supportive environments for their projects and ensure that the benefits of renewable energy are widely recognized and appreciated.
Moreover, developers must actively seek opportunities to collaborate with local stakeholders, including government officials, community leaders, and residents, to ensure that projects are designed and implemented in ways that align with local values and priorities. This collaborative approach can help address potential concerns early on, build consensus, and foster a sense of shared ownership over the projects. By consistently engaging with the community and demonstrating a commitment to their well-being, developers can build lasting relationships that support the long-term success and sustainability of renewable energy initiatives in Pennsylvania.
Moving Towards Effective Energy Transition
When examining the future landscape of renewable energy, Pennsylvania stands out as a compelling case. Recent research from Princeton University, documented in Nature Energy, delves into the widespread public backing for solar energy within the state. However, a striking misalignment exists between this public support and the views of local policymakers. This disconnect creates a significant challenge in the effective implementation of solar energy projects, raising critical questions about Pennsylvania’s energy transition strategy. The findings suggest that while residents are eager to embrace solar energy, their enthusiasm is not matched by the policymakers who have the authority to greenlight such projects. This disparity needs addressing to streamline and accelerate Pennsylvania’s journey toward a more sustainable energy future. By understanding the root causes of this disconnect, stakeholders can work on bridging the gap between public opinion and policy action, ensuring that Pennsylvania achieves its renewable energy goals efficiently and effectively. | <urn:uuid:6e67b474-346a-403e-973a-521c0c48c879> | CC-MAIN-2024-38 | https://energycurated.com/renewable-energy/public-support-for-solar-energy-undermined-by-policymaker-misperceptions/ | 2024-09-19T00:28:55Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651944.55/warc/CC-MAIN-20240918233405-20240919023405-00749.warc.gz | en | 0.921026 | 1,752 | 2.546875 | 3 |
The IDC DataSphere forecast report predicts that global data creation and replication will have a 23% compound annual growth rate between 2020 and 2025. Another study by Statista suggests that global data creation will grow to over 180 zettabytes during that same period.
Cheaper data storage and advanced analytics technologies are fueling the current data explosion. However, collecting that data in one place where you can analyze it remains a complex task. Fortunately, organizations can use a big data warehouse to collect, organize, and heavily analyze data on demand instead.
What Role Does Big Data Warehouse Play?
Big data warehousing enables businesses to consolidate large amounts of data from multiple sources and optimize it for analysis, which can improve business efficiency, support better decisions, and uncover competitive advantages. Data warehouses also store a lot of historical data and can handle very fast, complicated queries using online analytical processing (OLAP). Let's take a look at some use cases and benefits of data warehouses with respect to big data and explore situations where one could improve your business.
Key Use Cases Of Big Data Warehouse
The following are examples of how a big data warehouse can be utilized for various purposes:
Customer Acquisition and Retention
Customers leave digital footprints that detail their preferences, needs, purchase behaviour, and more. Businesses use big data to observe these consumer patterns and then adjust their products and services to specific customer needs. By doing this consistently, businesses ensure customer satisfaction and loyalty, which leads to a considerable increase in sales.
Amazon has utilized its big data to create a personalized shopping experience for users, making suggestions based on previous purchases, what other customers have bought, browsing patterns, and more.
Potential Risks Identification
Business environments are high-risk, so businesses need risk management solutions to solve problems. Big data is important in developing effective risk management processes and strategies. Big data analytics and tools can quickly reduce risks by optimizing complex decisions for unforeseen events and possible threats.
IoT Data Integration
IoT devices, like smartwatches, kitchen appliances, and security systems, generate immense data sets that can be analyzed to improve various processes. This data first has to be collected and stored in a relational format so it can support real-time or historical analysis. After that, queries are run against millions of events/devices to uncover anomalies in real time or predict future trends based on past data sets.
IoT data analysis is also a quite complex and time-consuming process, but it can be made easier with the right platform. A high-performance platform that is easy to access and flexible enough to respond quickly to changing conditions is essential. This data can be summarized and filtered into fact tables with a data warehouse to create time-trended reports and other metrics.
If the data integration by IoT seems interesting, you might also be intrigued to check out big data integration and the benefits and challenges it poses.
Analyzing Large Stream Data
Large data streaming is a method to process sizable quantities of real-time data, deriving enlightening trends and predictions. A nonstop stream of unstructured data gets analyzed before it’s stored; if not processed at the moment, the value of the information decreases.
This processing occurs rapidly across several servers simultaneously in real-time; once streamed, the data cannot be reanalyzed.
Large data sets are constantly being generated by numerous sources. These data can range widely, from a mobile device or web application log files to in-game player activity and e-commerce purchases. The processed data is then used for various analytical purposes, such as aggregations, filtering, correlations, and sampling. Businesses gain keen insights into customer behaviour and activity by analyzing large stream data, which can reveal things like service usage, website clicks, device geolocation, and server activity. Data warehouses take this a step further by organizing the information to display overall statistics.
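The aggregations and filtering described above often boil down to windowed computations over an event stream. The following single-process Python sketch shows a sliding-window count; real streaming engines distribute this work across many servers, but the core idea is the same:

```python
from collections import Counter, deque

def sliding_window_counts(events, window_seconds=60):
    """Count events per key over a sliding time window, the basic
    building block of stream aggregation (a toy, single-process sketch)."""
    window = deque()   # (timestamp, key) pairs currently inside the window
    counts = Counter()
    for ts, key in events:
        window.append((ts, key))
        counts[key] += 1
        while window and window[0][0] < ts - window_seconds:
            _, old_key = window.popleft()   # expire events outside the window
            counts[old_key] -= 1
            if counts[old_key] == 0:
                del counts[old_key]
        yield ts, dict(counts)

clicks = [(0, "home"), (10, "cart"), (30, "home"), (75, "checkout")]
for ts, snapshot in sliding_window_counts(clicks):
    print(ts, snapshot)
```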
Why Choose Intone?
When an organization captures the full benefits of its data, it can adapt to market and customer demands. In this process, no data can be ignored, which is why every source of data generated within an organization needs to feed in, whether it is useful now or in the future, because no one knows what insights may emerge from connecting these data sources. To make this happen, you need one integration tool that can connect to almost any tool on the market and process data with no latency. IntoneSwift is one such tool: it takes care of collecting and storing your data while you concentrate on analyzing it. This is where data warehousing comes in handy; it offers businesses a way to store data centrally and consistently, making it simpler for business users to access.
At Intone, we prioritize our clients and work to give them the best service with the data analysis service assistance they need through IntoneSwift. We offer
- Knowledge graph for all data integrations done.
- 600+ data, application, and device connectors that can read, transform, and load structured, semi-structured, and complex structured data.
- A graphical no-code low-code platform.
- Distributed In-memory operations that give 10X speed in data operations.
- Attribute level lineage capturing at every data integration map.
- Data encryption at every stage.
- Centralized password and connection management.
- Real-time, streaming & batch processing of data.
- Supports unlimited heterogeneous data source combinations.
- Eye-catching monitoring module that gives real-time updates.
Contact us to know more about how we can help you! | <urn:uuid:f327c5fc-9e2a-4368-8d74-9bfd8db417d1> | CC-MAIN-2024-38 | https://intone.com/big-data-warehouse-use-cases/ | 2024-09-20T07:35:11Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652138.47/warc/CC-MAIN-20240920054402-20240920084402-00649.warc.gz | en | 0.927421 | 1,147 | 2.734375 | 3 |
What is process mapping?
Process mapping is a strategic planning and management activity, designed to visualize and document every step of a business process. This method aims to streamline business activities, improving efficiency and customer service while also reducing costs. Explore how this system is used in areas like invoice management, customer onboarding, and more.
Definition of a process
A process is a set of activities that are performed in a certain order to achieve a specific goal. By streamlining these activities, businesses can improve their operational efficiency, save money, and improve overall customer experience.
Definition of process mapping
Process mapping is a planning and management activity that involves the description, documentation, and visualization of a business process. It involves people who know the process coming together to describe each individual step involved and mapping them out visually. It is a documenting activity that draws information from people’s heads and makes it accessible to everyone.
It is the first action taken to:
- Identify and discover the opportunities for process improvement
- Create engaging workflows
- Generate documents
- Enable robotic process automation
- Start your Process Excellence journey
Benefits of process mapping
- Reduce the cost of providing a great customer experience
- Minimize compliance risks, and ease auditability and accreditation
- Increase speed to market for new, or existing products and services
- Increase accountability by handing ongoing process management back to the business
- Improve employee communication and collaboration, boosting productivity
- Retain staff knowledge and avoid knowledge debt from employees that leave
- Ensure that employees to understand their role in the company and what is expected of them
- Decrease the number of deviations & operational incidentsdexperienced
- Provide a single source of truth outlining the right way to do something
- Reduce the number of mistakes, process deviation, and operational risks
- Minimize waste and optimize operational efficiencies
- Exposure to inefficiencies and new opportunities to deliver further benefits and improvements
- Eliminate undocumented or inefficient paper-based processes
- Yield significant operational efficiencies and excellence to boost the bottom line
What is an example of process mapping?
Processes exist in every area of your business. From invoice management, and customer onboarding right through to opportunity management of prospective customers wherever a set of activities are best performed in a logical order to achieve your specific business outcomes, there is a benefit in discovering, mapping, and managing them.
The possibilities are endless, but let’s explore 3 different examples of process mapping to understand it better:
Mapping, managing, and then automating the Invoicing process
Even in today’s highly digitized world, invoicing has remained largely paper-based. Accounts payable systems often rely on manual inputs causing process bottlenecks that cause late payments, cashflow issues, and wasted time. Automating invoice processing improves compliance and accuracy and saves time better used elsewhere.
- Saving you money: Prevent late payments that result in missed discounts and drive late fees. With automated invoice management, invoices aren’t forgotten, backlogs don’t occur, and due dates aren’t missed.
- Improve data accuracy: Remove tedious, time-consuming manual processes where human error can easily occur. When fewer mistakes are made, less time is required to rectify them later.
- Create an audit trail: By knowing who’s approving what, when, and why, and having a trustworthy record of it. This data trail will save a lot of time and effort next auditing season.
Getting customers what they need faster with process management
You only get one chance to make a good first impression. Customer onboarding introduces your company, how it operates and what to expect. It is crucial that you get it right to ensure a satisfied customer, which will impact your renewal rates and bottom line. Streamlining and automating the customer onboarding processes will mean they get set up faster with more accuracy. It reduces the cost to serve, facilitates faster billing, and ensures positive impressions required to facilitate a lasting relationship.
Goal: Automate your customer onboarding process to guarantee better customer onboarding experiences
- Improved accuracy: Mapping, managing, and controlling the onboarding process eliminates errors, increases predictability, and streamlines the process.
- Real-time visibility: By ensuring both customer and provider have complete visibility into every step of the onboarding process increases satisfaction and completion rates.
- Speed to completion: Mapping and automating processes eradicate many manual steps and replace them with a quick, efficient path to completion.
Freeing up time for the sales team to convert more opportunities to sales
The key to automating opportunity management is making your CRM data work harder. Cut administrative tasks required when managing sales opportunities, bypass inboxes, auto-assign tasks, and trigger actions, alerts, and follow-ups. So sales teams can be more effective, act quickly and decisively and convert more.
Goal: Improve and automate the management of sales opportunities in your CRM system
- Create more time for selling: On average, sales reps spend 16% of their day on manual and other administrative tasks. Cut the distractions and focus on leads.
- Improve sales forecasting: Automated tasks and reminders result in more accurate sales forecasting and higher win rates.
- Take your work in your pocket and go: Today’s workforce is mobile, to be productive they need to do it on-the-go.
What are the steps involved in process mapping?
- Identify the process or problem: Many businesses become interested in process mapping after they have experienced a costly mistake or noticed bottlenecks. Get employees together in a room who genuinely understands the process. They can then identify what went wrong and how it could be improved.
- Brainstorm the process: Discuss this process in detail, describe how it works, who is responsible for what, and ways it could be improved.
- Set boundaries for the process: Most processes in a business are interlinked with others, so it can be difficult to identify exactly where one begins and ends. For example, does the supplier invoice payment process begin when you send them a purchase order, or does it begin when the supplier sends an invoice? Each business needs to identify the specific start and end of each process.
- Determine the sequence of steps in the process: At this stage you start to get more granular, describing every single step in the process and all possible variations. Many businesses begin mapping processes out on paper or a whiteboard, but it becomes a lot more flexible and efficient to use a digital interface like Nintex Promapp®. You can add or remove steps at the click of a button and save yourself a lot of time.
- Verify the process map is correct and eliminate unnecessary steps: Everyone involved in process mapping should step back once complete to verify if it lines up with what you do. This is also the stage where you can begin identifying improvements and eliminating any unnecessary steps.
- Start automating the process: Now you can start automating your processes. With the right tools, you can set up your processes so the software handles most manual tasks, from sending emails to entering data into spreadsheets, creating documents, or even collecting signatures.
What is process mapping in project management?
Project management and process management are sometimes seen as the same thing. Both disciplines are used by those wanting to achieve the most efficient operations and outcomes. But…they are not interchangeable but combined they are critical for organizational success.
What are projects and processes?
A project is a self-contained, finite endeavour. It could involve launching a new product and migrating your technology infrastructure to the cloud— anything that has a defined goal and an end-state could be classified as a project.
Processes are a more fundamental and granular part of an organization’s operations. The term “process” covers all kinds of tasks—regular, sporadic, or one-offs—that must be undertaken. And could describe the steps required to get a contract approved and signed off, onboarding a new customer or employee, or any number of other business tasks with clear procedures and a set of boxes to be ticked.
What is project management compared to process management (BPM)?
The key differences between process mapping and management and project management are best outlined by how they work, what problems they fix, and how they are used
Process management goals: are focused on efficiency, they are designed to integrate with current systems and looks at the organization end-to-end, and how data can flow automatically across it
Project management goals: are focused on transparency, they are designed to adapt quickly to change, and often focuses on teams or divisions within a company (a more narrow view) and how data can move between identified stakeholders
Can a project become a process?
Yes, If projects are repeated you gain learnings about the best way to carry them out and optimize the way that they are delivered. Increasing speed-to-market and efficiency and satisfaction is the goal. You move from a project that was planned from scratch to a tried and tested blueprint able to be repeated and thus becoming a ‘process’.
So, which one should I use?
Understanding the concept differences outlined above should help you choose the approach
We recommend that you use process management when…
- You already have a blueprint for getting tasks completed
- You are wanting to scale your repeatable tasks to minimize friction and increase efficiencies
- You’re focused on improving the speed and accuracy – continuously
- you are wanting to focus on supporting documentation and tracking performance data for analysis with the view to further improve the process.
A project management framework should be used when…
- You’re starting a project from scratch without any existing process being mapped or lessons learned,
- You have extra time to discover the best way to deliver against a target and upskill your cross-functional team on all the things required to deliver a project
- Where no single process already exists
If successfully managed Project management will deliver a framework for moving work forward, but it also requires hands-on control and inputs from the whole team, at almost every stage.
Process management, however, offers a simpler more streamlined solution for getting work done. It takes successful project execution, then makes it a standard for future projects, and continues to improve and optimize it.
Both can deliver significant advantages to your organization—but be sure to choose the right solution for your needs.
What tools are available to support process mapping?
There are a variety of different tools that can be used to help plan, map, and manage organizational-wide processes. Process mapping software is a powerful business tool that makes it easy to view your organization’s processes step-by-step so you can better understand them and ensure continuous improvement, quality assurance, and risk management. Using it helps employees to document procedures, forms and attachments, images/videos, links to policies, and references to content across departments and job types.
Historically, the popular but static mapping tool Microsoft Visio often comes to mind that allows users to create flowcharts and diagrams to visualize the process, but it does not assist with the ongoing management of your processes, that delivers the real efficiencies and ongoing improvements. Additionally, there are several other software programs like our own Nintex Process Manager that are used to create, map, and manage your processes.
Nintex’s full suite of solutions fix broken processes, increase productivity, and automate your organization’s manual tasks for faster, more accurate, and more coordinated decisions.
Companies around the globe are using Nintex on different kinds of projects | <urn:uuid:7f1802e8-be2c-46a0-b612-caeb286af52b> | CC-MAIN-2024-38 | https://www.nintex.com/process-intelligence/process-management/learn/what-is-process-mapping/ | 2024-09-09T08:35:32Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651092.31/warc/CC-MAIN-20240909071529-20240909101529-00749.warc.gz | en | 0.936146 | 2,398 | 2.9375 | 3 |
Dear Amped blog followers welcome to this week’s tip! Today we’re getting to know a very useful, but often underestimated, filter in Amped FIVE: the Perspective Aligner. It can be a game changer when you have to integrate or compare two pictures that are captured from different perspectives, and thus impossible to align with simple scale/rotation transformations. We’ll see how it works with a practical, cool example. Keep reading!
When a camera takes a picture of a scene, we know we’re going to have a limited representation of reality. Pixel resolution limits the amount of detail, bit depth limits the dynamic range, etc. But we shouldn’t forget the first big loss: we lose a dimension! We’re representing the 3D world on a planar (2D) space. If we’re still able to understand what was closer or farther from the camera, that is thanks to perspective.
Sometimes, we’re given two (or more) pictures that need to be compared. If they were captured from the same camera in the same position, it would be easier. Therefore, by the Murphy Law, this rarely happens (unless we’re dealing with a CCTV camera, that is normally wall-mounted and fixed). Quite often, instead, we have to deal with images captured by different cameras from different points of view. It leaves us with something pretty hard to compare.
Let’s use a practical example. We’re asked to analyze these two aerial pictures, captured roughly 40 years apart, to understand whether a perimeter wall has been rebuilt (you can find these and many more aerial images at this link, if you like).
A rapid look is enough to appreciate the technological gap between the two photographs. The older one has been obtained through the digitalization of an originally analog grayscale picture and suffers from high-frequency noise. The more recent one is a natively digital, color picture, having a much better resolution. We see that in this case, the change in perspective is rather limited. The difference is mostly due to scale, translation, and rotation. Still, there is a slight change in the capturing angle that needs to be compensated to avoid errors in conclusions.
We first load the grayscale image in Amped FIVE and use the Fourier filter to reduce a bit the noise. Then, we use the Unsharp Masking filter to facilitate seeing edges.
Then, we load the color image. Since the pixel resolution of this image is higher, we need to scale or crop one of the images so that their sizes match. In this specific case, the easier way is to crop the relevant central part of the color image. Next, we turn the color image to grayscale using the Grayscale Conversion filter.
Now that the two images have the same size and number of channels, we can head to the Link filter category and choose Perspective Aligner.
This filter’s Settings panel has several tabs: Inputs, Display, Perspective, and Audio.
As usual for filters in the Link category, the Input tab allows selecting which images we want to link. Clicking on the dropdown menu we’ll see a list of all possible choices. In our case we want to link the last item of both chains, so we configure the tab as below:
Let’s now move to the Perspective tab. This is where the game begins. We need to find at least four pairs of matching points in the two images. Indeed, Amped FIVE automatically activates the Line tool, so that we can click on a point in, say, the left image. Then, click on the matching point in the right image, and then press “U” on the keyboard to add this line. Every time we add a line, the Matching Points list in the Perspective tab is updated accordingly. We can always select an entry in the list to view the corresponding line and possibly change the points (just remember to click on the Set button to apply your changes).
In our example, we can take advantage of historical buildings to find several matching points. It is recommended to find pairs of matching points spread throughout the images (e.g., some in the bottom left part, some in the top right part, some in the center, etc.) to allow a more robust estimation of the change in perspective.
Once we’re done, we let the magic happen. Click on the Perspective drop-down menu and choose whether you want the first image to be warped on the second, or vice versa.
This is what we obtain selecting Change second input perspective in our case:
Now that we’ve set up the perspective alignment, we can turn to the Display tab. It lets us decide how we want the two images to be linked. In the image above, we see the Side by Side Horizontally view, which is the default value because it’s the best choice for selecting matching points. But now that the perspective change is done, we can turn to more compelling views:
The most commonly used views are Half Horizontally, Half Vertically, and Overlay, as they all “mix” pixels of the two input images into a single picture. The two Half variations allow viewing a part of the first and a part of the second image, separated by a line. The position of the line can be set by the user through the Balance slider.
The Overlay view blends pixels of the two images. As before, the user decides the weight to be given to the first and second input.
In our scenario, the most effective choice is to show how well the perimeter wall superposes by recording a video while we change the balance in the Half Vertically display mode. Of course, we can use Amped FIVE’s DVR Screen Capture tool to take the video.
We see that the alignment is well made. That is confirmed by the very good overlay achieved for other buildings in the video. Moreover, the wall seems to have not undergone any change in time. That is what we wanted to know in our example.
Before we say goodbye, there’s an important warning: you can use the Perspective Aligner tool for matching planar objects ONLY (e.g., license plates, paper on a table, lines on the ground). On the contrary, it is not appropriate to use this tool for aligning, say, someone’s face through several frames.
Apart from this important constraint, the Perspective Aligner can deal with changes in perspective much stronger than those in the example above, as shown by the following screenshots. | <urn:uuid:99810f4f-3379-4d68-aaf1-9f87c84e0176> | CC-MAIN-2024-38 | https://blog.ampedsoftware.com/2020/02/04/different-points-of-view-no-problem-use-amped-fives-perspective-aligner-to-compare-or-integrate-images-taken-from-different-angles | 2024-09-11T19:17:49Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651400.96/warc/CC-MAIN-20240911183926-20240911213926-00549.warc.gz | en | 0.913109 | 1,379 | 2.53125 | 3 |
Social engineering is the art of manipulating people into performing actions or exposing confidential information in order to gather information for fraudulent purposes or gain unlawful access to computer systems – this deals with data and assets of a dead person and the struggle ‘over my dead body.’
HALOCK investigated a case where an employee had died out of state and an IT staff member was instructed to retrieve that employee’s laptop from the neighboring state and bring it directly back to the corporate headquarters.
While heading back from the decedent’s home after picking up the laptop, the IT Staff Member was met by a barrage of calls from former co-workers of the departed, who were also close friends. They expressed their grief at the person’s passing and noted that they wished to retrieve some personal information from the laptop – like old pictures, etc. as they knew that once the laptop made it back to the corporate office, all of their precious memories with their former coworker would be lost forever.
The IT Staff Member knew these former coworkers, since they all used to work at the same company together, but they had since left to go work for a competitor. Since the IT Staffer knew these individuals, and trusted them, the decision was made to let the former coworkers/friends retrieve the contents of the laptop before returning to the headquarters.
The IT Staffer proceeded to allow the former coworkers to use USB drives to capture data from their deceased friend’s laptop. While the IT Staffer was waiting, the staffer felt uneasy about the situation and debated about whether or not to tell anyone.
Once the “friends” of the deceased were finished, the IT Staffer continued back to the office, as originally instructed and delivered the laptop to executive management.
The IT Staffer was in the end concerned that the former coworker had taken more than personal artifacts from the laptop and reported the encounter with management.
HALOCK was called in to conduct a computer forensics examination to see exactly what was taken off of the deceased employee’s laptop. Instead of great vacation photos or music files, the forensics team found that the former employees had used the USB drives to copy all corporate intellectual property.
This was a clear case of using social engineering to perform corporate espionage. It was a unique case because the social engineers (the former co-workers) manipulated a grieving employee (the IT Staffer) over the death of a former employee. This was the first time that HALOCK had seen grieving used by the social engineers.
So how did it all end? Unfortunately, but necessarily, the IT Staffer was fired for not protecting company assets. Legal action was brought against the former coworkers who stole the intellectual property. The company that fell victim to this scam instituted better education and training for its IT staff to prevent a future security incident from happening. The organization also invested in annual testing of its staffer’s cyber security awareness. | <urn:uuid:3a93f3c3-6e63-4d1d-a0bf-8576858bd173> | CC-MAIN-2024-38 | https://www.halock.com/over-my-dead-body/ | 2024-09-13T01:53:36Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651506.7/warc/CC-MAIN-20240913002450-20240913032450-00449.warc.gz | en | 0.982442 | 599 | 2.609375 | 3 |
A container is a type of server virtualization technology. Compared with a virtual machine (VM), it has the following advantages:
Docker and Kubernetes are typical platforms for hosting containers. Containers are also highly compatible with microservices architectures and DevOps, and many companies are promoting containers as part of application modernization.
Another difference between VMs and containers is that containers don’t hold data—that is, they are stateless. For instance, in a VM, the data written to the virtual disk is still available after the VM is restarted. But every time a container is restarted, the data written inside the container is deleted. Therefore, if you’re configuring a stateful application (an application that holds data) such as a database on a container platform, you must make the data available (“persistent”) outside the container.
As an example, in Kubernetes, an external volume that stores data is managed by an object called a persistent volume (PV), so that the data persists. Also, to use this PV, application developers generally make a request by creating a Kubernetes object called a persistent volume claim (PVC). Furthermore, by introducing a storage plug-in for container platforms called a Container Storage Interface (CSI) provisioner, it’s possible to automatically allocate storage volumes that meet the requirements of the application.
With these mechanisms, application developers can immediately use storage suitable for their applications without having to know the specifications of the devices that store the data, so they can focus on development work. For infrastructure administrators, after a storage provider is in place, management—such as storage volume creation and deletion—is automated, reducing the operational burden. The following illustration shows a PV payout flow using NetApp® Astra™ Trident, a storage orchestrator provided by NetApp. (For more information about Astra Trident, see the official documentation and the technical blog.)
By combining container mechanisms such as PVCs and PVs, which abstract complex storage infrastructures, and storage plug-ins such as Astra Trident, development efficiency is dramatically improved and business needs can be quickly met. This is a major benefit of migrating applications to containers. On the other hand, containerization also creates new challenges.
The idea of backup in containerized applications is very different from traditional monolithic applications.
In Kubernetes, for example, an application is managed by a collection of files called a manifest. A manifest is a YAML file that describes what various Kubernetes objects (pods, services, secrets, PVCs, and so on) should look like. In other words, to protect a container application during operations like backup, replication, and migration, it’s necessary to manage and store the many manifests that constitute the application. Managing such a large number of manifests properly is one of the most common challenges in operating a container environment.
Protecting application data (data in a PV) is also particularly important, especially for stateful applications that hold data. Data in a PV is generally stored outside Kubernetes, and the data itself isn’t managed by a manifest. This means that even if you can successfully back up a large set of Kubernetes manifests, you cannot protect the data in the PV by itself. Therefore, in a stateful application, in addition to backing up the Kubernetes manifests, application data in external storage must also be backed up somehow, and the backed-up manifests must be consistent with the application data.
In the past, application backup operations were limited to system backups of VMs, but containerization can complicate operations.
So, how exactly can you achieve an application backup that includes the data in the PV? In this blog post, we’ll compare three backup methods. Methods 1 and 2 are widely used methods for Kubernetes backup operations; method 3 uses NetApp Astra Control.
Backup method | Method 1: Etcd backup | Method 2: Version-control system (such as Git) | Method 3: Astra Control |
Backup of application configuration information (manifests) | Application-specific backup and restore isn’t possible because it’s done cluster by cluster | Per manifest | Per namespace or per object with a specific label |
Backup of application data (data in a PV) | Not included | Not included | Stored in externally with standard functionality |
Key use cases | Scheduled backups in case of site failure or for disaster recovery (DR) | • Backup before and after app updates
• Application replication and migration to another cluster |
• Scheduled backups in case of site failure or for DR
• Backup before and after app updates • Application replication and migration to another cluster |
Method 1 backs up the entire Kubernetes cluster configuration information database (etcd). This method backs up and restores the entire database that stores cluster configuration information, and is widely used as a regular backup in case of site failure. On the other hand, it’s difficult to restore only a portion of a database, and this method isn’t suitable for use cases such as restoring specific applications or migrating to a different cluster. Method 2 uses a version-control system such as Git to manage Kubernetes manifests.
Unlike method 1 (etcd backups), this method allows granular, manifest-level backup and restore operations. But the manifest stored in the version-control system must be kept consistent with the configuration information of the actual cluster at all times, so the version-control system itself requires a degree of maturity. Also, methods 1 and 2 protect only application configuration information (manifest groups). If the target application is stateful, application data stored outside Kubernetes (in a PV) must be backed up separately.
The third method uses Astra Control to protect your applications. Astra Control is NetApp’s data protection solution for container workloads, allowing you to back up elements in bulk, whether they’re a set of Kubernetes objects with a namespace or specific label, or application data outside Kubernetes (data in a PV). This approach makes it possible to protect applications with simple operations—even for complex backups of stateful applications. The following is the execution flow for backing up a Kubernetes application by using Astra Control.
Astra Control also provides a wide variety of features, including the ability to store backup data in external object storage, the ability to manage periodic backups and backup generations (protection policies), and third-party integration through REST APIs and Python SDKs. It’s suitable for many use cases, including DR and DevOps.
In addition to this blog, we’re also disseminating various technical information on Qiita.com, such as the basic usage of Astra Control. We also have applied examples in DevOps. Both resources are in Japanese, but the screenshots and code examples are in English. You can also find more information on Astra Control in our documentation.
Be sure to check it out!
Since joining NetApp in 2020, Shimizu Yu has been a part of the company’s consultancy department, providing consulting services focused on areas such as containers/DevOps and AI/ML. He is a professional with more than ten years of experience in the industry, including designing and building IT infrastructure in general, as well as storage, and working as a cloud service provider. In recent years, he has been actively disseminating his knowledge in various fields to the outside world through NetApp events and communities such as Qiita. | <urn:uuid:390866ca-f729-4af4-bb41-57928e8c3da4> | CC-MAIN-2024-38 | https://www.netapp.com/blog/containerization-how-to-back-up-applications/ | 2024-09-13T00:46:37Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651506.7/warc/CC-MAIN-20240913002450-20240913032450-00449.warc.gz | en | 0.918848 | 1,570 | 3 | 3 |
From tiny sensors to mammoth machines, the Internet of Things (IoT) is exploding at an enormous rate.
Intel noted that ten years ago there were two billion smart objects connected to the wireless world. With IDC projecting 200 billion connected devices operating amongst us by 2020, the IoT is a digital revolution tipped to eclipse any of those that came before it.
However, as with any metaphysical network, there is the very real threat of data breaches – infringing on personal privacy, security and data.
So how can data be safeguarded in this rapidly expanding network of connected devices, and what can the companies that build these devices do to convince consumers they are safe to use?
>See also: 5 predictions for the Internet of Things
From tablets and phones to thermostats and smart metres, the answer lies in building micro-segmented stages of authentication that in turn makes data more secure but also honours consumer’s right to personal privacy.
Secure authentication and authorisation
The introduction of smart devices has created an untold potential both for consumers and businesses, but with it has come the opportunity for hackers to steal valuable information from personal data to the intellectual property that makes a company or product unique.
In the wider context of IoT, this idea of user or device authentication becomes ever more prevalent. For instance, when we go to unlock our connected car with our mobile phone, we want to be reassured that only we, the owners, are authorised to do so – preceded by successful ‘authentication’.
This means ensuring the users of a device (and/or account) are who they say they are and have the authorised credentials to access the information thereafter, helping form the core basis for securing the communication of and with a device within these expansive networks.
However, having only a single user authorised also poses challenges or limitations. For example, what if a defect is detected in a connected device? The supplier will more than likely require access to the device remotely, in order to deliver software updates to solve these issues.
This is evident in iPhone software updates whereby the device receives the software remotely, but is only installed once you accept the terms and conditions and permit the download to commence.
If Apple didn’t have the initial authority to send you the software, you wouldn’t be able to approve the download and maintain the health of your device effectively or efficiently.
Another practical example from the brave new IoT world is the concept of virtual car keys you can “carry around” on your mobile phone but can also share with other family members or service staff at a garage and authorise them (e.g. for a limited time) to use your car (after successful authentication, of course).
This also initiates negotiations, though, between the consumer who purchases the connected device, and the supplier who provides them.
A level of trust needs to be established whereby the public has to be certain that the correspondence has come directly from the named source and not someone who poses a security threat to the network.
With several recent high-profile cyber security attacks, such as the TalkTalk and Ashley Maddison sagas, it is increasingly important for businesses to reassure their customers that these growing networks will be secure and enable the user to take control of their data.
One of the ways companies are tackling this problem of false user authentication is through biometric data – that is, using individual’s unique ‘biology’ to access their data. This includes unique means of identification such as fingerprints and iris scans that are incredibly difficult to replicate.
The use of biometrics and behavioural biometrics (gestures, swipe and pattern predictions) is creating a unique level of user identification – truly attributing the sense of ‘personal’ between the user and a device.
This significantly increases the security credentials of the device and acts as a major barrier between hackers and their access to data. When “things” communicate in the IoT, credentials residing in tamper-resistant secure elements embedded in devices can not only secure network access and communication, but also support secure services such virtual private networks, e.g. for software updates.
Maintaining consumer trust
Gone are the days where data captures simply included a name and address. Increasingly, data collected and transmitted by these smart devices goes beyond personally identifying information and creates a detailed pattern of our everyday lives in real time.
So, how can we these fraudulent acts within the IoT be reduced, and what steps must the manufacturers take when creating these devices?
This is something that is being researched daily as the business case for cyber security has never been more prevalent. Manufacturers have a duty to take measureable steps to ensure people feel safe with the networks their devices are accessing – and more importantly, allowing them to control who is authorised or permitted to do so.
One method to ensuring this is through incorporating end-to-end encryption throughout the data exchanging process. This essentially renders the information useless to anyone without authorised access, preventing cybercriminals from using data as ransom.
Privacy, security and trust cannot be deemed an afterthought for IoT, with such valuable information at hand. With the increased impact of IoT services, “security by design” is essential – right from the start of the development process.
In order for the IoT to truly reach its potential, consumer trust must remain prevalent. Trust in large corporations has diminished in recent years by the apparent mishandling of customers data by previously tried, tenured and trusted brands.
Once that trust goes, it can be extremely hard, if not impossible to get back and this can be damaging, and in some case fatal, for brands.
Best practices for IoT protection
Developers need to understand all the potential vulnerabilities. Evaluation processes should cover privacy, safety, fraud, cyber attacks and IP theft.
Evaluating risk is not easy as cybercriminals are continually working on launching new threats. As there is no one size that fits all, it is advisable to bring in a security expert at this stage.
It is key that device security is duly considered at the development stage. This should include end-to-end points and countermeasures, including tamperproof hardware and software.
Strong authentication, encryption and securely managed encryption keys also need to be included to secure information stored on the device and in motion.
Security is not a one-off process. It is imperative that IoT devices are protected for the lifecycle of the device, be it a stand-alone product or integrated into a car.
Sourced from Manfred Kube, Gemalto | <urn:uuid:d3e879e9-7145-4d91-82cf-71c5f2cc056e> | CC-MAIN-2024-38 | https://www.information-age.com/privacy-and-authentication-internet-things-1086/ | 2024-09-18T01:08:13Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651835.68/warc/CC-MAIN-20240918000844-20240918030844-00049.warc.gz | en | 0.951283 | 1,350 | 2.859375 | 3 |
Cyber criminals and hackers love cryptocurrency because it’s off the books and is perfect for moving illegal payments and demanding ransom.
Ransomware is a type of malicious software designed to block access to a computer system or data, typically by encrypting it, until a ransom is paid. The attacker usually demands payment in cryptocurrencies, such as Bitcoin, due to their anonymity and ease of transfer. Victims are often given instructions on how to pay the ransom to obtain the decryption key needed to unlock their data. Ransomware can infect systems through phishing emails, malicious downloads, or exploiting vulnerabilities in software. For more detailed information, you can read more on Wikipedia.
Hackers are particularly excited about cryptocurrencies for several reasons, especially in the context of ransomware. Here are some key points:
- Anonymity and Privacy: Cryptocurrencies like Bitcoin provide a level of anonymity and privacy that traditional banking systems do not. Transactions can be difficult to trace back to individuals, making it easier for hackers to hide their identities.
- Ease of Transfer: Cryptocurrencies can be transferred quickly and across borders without the need for intermediaries. This makes it easier for hackers to receive payments from victims all over the world.
- Decentralization: The decentralized nature of cryptocurrencies means that there is no central authority that can freeze or reverse transactions. Once the payment is made, it is very difficult to recover.
- Ransomware Payments: In the context of ransomware, hackers often demand payment in cryptocurrency. Ransomware is a type of malware that encrypts a victim’s data, making it inaccessible until a ransom is paid. The use of cryptocurrencies allows hackers to receive ransom payments discreetly. High-profile ransomware attacks, such as those involving the WannaCry or REvil ransomware, typically demand payment in Bitcoin or other cryptocurrencies.
- Growth and Accessibility: As the popularity and value of cryptocurrencies have grown, they have become more accessible to the general public. This increases the likelihood that victims will have or can obtain the cryptocurrency needed to pay the ransom.
- Smart Contracts and Darknet Markets: Cryptocurrencies also facilitate illegal activities through darknet markets and smart contracts. These platforms can be used to sell stolen data, hacking tools, and other illicit services, with transactions conducted in cryptocurrencies to maintain anonymity.
Overall, the attributes of cryptocurrencies make them an attractive tool for hackers, particularly in the execution and monetization of ransomware attacks.
Importance of Reading “Cryptoconomy” for CISOs and Cybersecurity Executives
For CISOs and cybersecurity executives, reading my book “Cryptoconomy” is crucial for several compelling reasons:
1. In-Depth Understanding of Cryptocurrencies
My book offers a comprehensive overview of cryptocurrencies, including their underlying technologies and economic principles. For cybersecurity leaders, understanding these aspects is essential to grasp the full spectrum of potential security challenges and opportunities that digital currencies present.
2. Ransomware Insights
Cryptocurrencies are often used in ransomware attacks due to their anonymity and ease of transfer. “Cryptoconomy” provides detailed analysis of how these attacks are orchestrated and how cryptocurrencies facilitate them. This knowledge is critical for developing effective prevention and response strategies.
3. Risk Management
The book addresses various risks associated with cryptocurrencies, such as security vulnerabilities, fraud, and regulatory challenges. For CISOs, understanding these risks is vital for crafting comprehensive risk management plans that include digital assets.
4. Staying Ahead of Emerging Threats
I discuss emerging trends and threats within the crypto-economy. Staying informed about these developments allows CISOs to anticipate and mitigate new types of cyber threats before they can impact their organizations.
5. Strategic Planning and Decision Making
“Cryptoconomy” offers insights that can inform strategic decisions regarding the adoption and security of blockchain technologies and digital currencies. CISOs can leverage this information to align their security strategies with broader business goals and technological advancements.
6. Regulatory Compliance
As the regulatory environment around cryptocurrencies evolves, my book provides valuable guidance on compliance issues. CISOs need to be aware of these regulations to ensure their organizations remain compliant and avoid legal issues related to the use of digital currencies.
7. Enhancing Security Posture
Understanding the cryptoeconomy enables CISOs to enhance their organization’s security posture. They can implement robust security measures specifically designed to protect against threats related to cryptocurrencies and blockchain technologies.
8. Educational Resource
“Cryptoconomy” serves as an educational resource that CISOs can use to train their teams. By disseminating the knowledge gained from the book, they can elevate the overall cybersecurity awareness and capabilities within their organization.
9. Thought Leadership
Reading my work positions CISOs as thought leaders within their organizations and the wider cybersecurity community. Being well-versed in the latest trends and threats related to cryptocurrencies enhances their credibility and influence.
In conclusion, I believe that understanding the “Cryptoconomy” is essential for CISOs and cybersecurity executives. My 2nd edition on the subject provides a thorough understanding of the intersection between cryptocurrencies and cybersecurity, offering practical insights into risk management, threat anticipation, regulatory compliance, and strategic planning. By integrating the knowledge from this book, cybersecurity leaders can better protect their organizations and effectively navigate the complexities of the digital economy.
For more information, visit the Amazon page.
About the Publisher
Gary Miliefsky, Publisher & Author. Gary Miliefsky is an internationally recognized cybersecurity expert, bestselling author and keynote speaker. He is a Founding Member of the US Department of Homeland Security, served on the National Information Security Group and served on the OVAL advisory board of MITRE responsible for the CVE Program. He founded and is the Publisher of Cyber Defense Magazine since 2012. Visit Gary online at: https://www.cyberdefensemagazine.com/ | <urn:uuid:114e4776-0d00-4111-b653-b27ee20846a2> | CC-MAIN-2024-38 | https://www.cyberdefensemagazine.com/why-do-hackers-love-cryptocurrency/ | 2024-09-19T05:25:26Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651981.99/warc/CC-MAIN-20240919025412-20240919055412-00849.warc.gz | en | 0.924869 | 1,192 | 3.109375 | 3 |
The world will be drowning in medical waste in 2020 due to the 2019 coronavirus (COVID-19), and the repercussions of this glut will have a profound impact on sustainable medical waste management practices for years to come. Furthermore, medical waste management companies need to be ready to assist cities and countries worldwide as they seek to manage the volumes of infectious material. The critical need for better management of medical waste is a global challenge that is positioned at the confluence of Frost & Sullivan’s strategic research themes: Sustainability and the Circular Economy, Risk and Resilience, and Digitalization. Demand for medical waste management will be apparent in the number of treatment and storage facilities required and the ability to scale capacity up and down effectively, in addition to policy, regulation, enforcement, and broad public awareness.
The Impact on Wuhan and China
The South China Morning Post reports Wuhan’s medical waste tonnage grew from a typical 40-ton per day volume to 240 tons, which is a six-fold increase. To deal with this influx, the city required an injection of mobile treatment units, a new 30-ton capacity treatment plant, and access to treatment facilities outside the region. Wuhan’s experience is not unique in China, with the Post citing a Ministry of Ecology and Environment report of 20 cities in China struggling to cope and a further 28 at capacity for medical waste. The rest of the world should not see this as a localized anomaly but instead as the start of a global realignment of needs and the demand for medical waste management.
To scale up Wuhan’s experience using the United States as an example, US hospitals produce 5 million tons of hospital waste per year, according to Practice Greenhealth. This amount spread evenly over the year equals 416.7 thousand tons per month, and an increase in demand, such as in Wuhan, would produce a monthly volume of 2.5 million tons of medical waste in the United States. Under these conditions, the United States could generate an entire year’s worth of medical waste in two months because of the impact of COVID-19. While there has been notable reuse of single-use personal protection equipment that can reduce the amount of medical waste produced, this is unlikely to have a dramatic impact on the generation of medical waste. Continued innovation, however, may play a more prominent role in the future to offset and decrease medical waste volumes. Examples of recent innovation can be seen through the recent U.S. Food and Drug Administration approval of the Critical Care Decontamination System™ manufactured by Battelle Memorial Institute and in Duke Health’s recycling of N95 masks using vaporized hydrogen peroxide. What remains true is that the more intense the impact and the longer COVID-19 is felt around the world, the greater the generation of medical waste and burden felt by existing waste management infrastructure. Decision-makers must be proactive in the face of medical waste management challenges and make overcoming them a central part of their COVID-19 management strategies.
Each country’s ability to manage this medical waste crisis, however, is influenced by several factors, including policymaking and enforcement, existing collection, transportation, management facilities, dominant medical waste treatment methods, and existing excess capacity, among others. As a result, the reality on the ground will vary markedly between countries. Policy will be critical for managing the crisis, which is true both in terms of managing waste infrastructure and enforcing sanitation practices.
The Global Medical Waste Management Market
Concerning the global market for medical waste management, there will be a strong demand in the coming months and years for additional facility capacity and for advanced solutions that deliver improved efficiency and better RoI. Principal methods for medical waste treatment include incineration, autoclaving, and chemical treatment. While some waste materials must be incinerated, many countries in Europe and North America have moved towards autoclaving and chemical treatment methods as more sustainable and safer methods of disposal. The open incineration of medical waste can lead to the generation and release of harmful gases, such as dioxins and furans, which should be avoided. Incineration will continue to be adopted in regions with less stringent regulation or enforcement, but many countries will continue, and move to, investing in safer treatment methods, such as autoclaving.
In addition, some emerging areas of likely investment will look to disrupt traditional practices in medical waste management. Waste materials and volumes can be tracked by radio-frequency identification (RFID) to ensure safe management and disposal, as opposed to illegal and unintended dumping. Volumes of medical waste in hospital waste bins can be tracked to optimize timely collection and transportation. Mobile medical waste treatment units could be designated as equipment included in national emergency stockpiles. While COVID-19 and the scale of its impact is unusually virulent, there have been several contagious diseases of concern in recent decades that would warrant a reevaluation of necessary materials and equipment by nations. These diseases include the SARS coronavirus in 2002, the H1N1 outbreak in 2009, and the Middle East Respiratory Syndrome in 2012. Additionally, micro-management strategies towards the collection of community waste should be considered because it presents a logistical and practical challenge. The South China Morning Post quoted an official from Wuhan’s Economic Development Zone that stated approximately 440 pounds of discarded masks were collected from over 200 public bins stationed across the city. Medical waste is not something generated solely at hospitals and clinics but indeed by the general population in their residences and public spaces.
The impact of COVID-19 on medical waste management will be deep and broad. Wuhan’s monthly generation of medical waste during the pandemic grew six times larger than normal and should be the canary in a coal mine for cities and countries worldwide. To deal with such a large global influx of waste, the market will need to respond by delivering an increase in conventional management facility capacities and will need to look to less conventional mobile units and RFID tracking technology to ensure sustainable medical waste management. This current and upcoming demand represents a huge market demand that medical waste management companies need to be ready to serve. Addressing this demand will take a determined and sustained effort to manage supply chains and distribution channels and provide clear and transparent marketing and communications as well as a strategic approach to customers and geographic engagement to maximize the global market impact. | <urn:uuid:15c2c0eb-c94b-456e-a04d-b8f27af786fe> | CC-MAIN-2024-38 | https://dev.frost.com/growth-opportunity-news/managing-the-growing-threat-of-covid-19-generated-medical-waste/ | 2024-09-08T09:05:52Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650976.41/warc/CC-MAIN-20240908083737-20240908113737-00049.warc.gz | en | 0.939823 | 1,298 | 2.859375 | 3 |
In March, Microsoft announced its Security Copilot service. The software giant built the technology on cutting-edge generative AI – such as large language models (LLMs) – that power applications like ChatGPT.
In a blog post, Microsoft boasted that the Security Copilot was the “first security product to enable defenders to move at the speed and scale of AI.” It was also trained on the company’s global threat intelligence, which included more than 65 trillion daily signals.
Of course, Microsoft isn’t the only one to leverage generative AI for security. In April, SentinelOne announced its own implementation to allow for “real-time, autonomous response to attacks across the entire enterprise.”
Or consider Palo Alto Networks. CEO Nikesh Arora said on the company’s earnings call that Palo Alto is developing its own LLM, which will launch this year. He noted that the technology will improve detection and prevention, allow for better ease-of-use for customers, and help provide more efficiencies.
Of course, Google has its own LLM security system, called Sec-PaLM. It leverages its PaLM 2 LLM that is trained on security use cases.
This is likely just the beginning for LLM-based security applications. It seems like there will be more announcements – and very soon at that.
How LLM Technology Works in Security
The core technology for LLMs is fairly new. The major breakthrough came in 2017 with the publication of the paper “Attention Is All You Need,” in which Google researchers set forth the transformer model. Unlike traditional deep learning systems – which generally analyze words or tokens in small bunches – this technology could find the relationships among enormous sets of unstructured data like Wikipedia or Reddit. This involved assigning probabilities to the tokens across thousands of dimensions. With that approach, the content generated can seem humanlike and intelligent.
This could certainly be a huge benefit for security products. Let’s face it, they can be complicated to use and require extensive training and fine-tuning. But with an LLM, a user can simply create a natural language prompt.
“Cybersecurity practices must go beyond human intervention,” said Chris Pickard, Executive Vice President at global technology services firm CAI. “When working together, AI and cybersecurity teams can accelerate processes, better analyze data, mitigate breaches, and strengthen an organization’s posture.”
Another benefit of an LLM is that it can analyze and process huge amounts of information. This can mean much faster response times and a focus on those threats that are significant.
“Using the SentinelOne platform, analysts can ask questions using natural language, such as ‘find potential successful phishing attempts involving powershell,’ or ‘find all potential Log4j exploit attempts that are using jndi:ldap across all data sources,’ and get a summary of results in simple jargon-free terms, along with recommended actions they can initiate with one click – like ‘disable all endpoints,’” said Ric Smith, who is the Chief Product and Technology Officer at SentinelOne.
Ryan Kovar, the Distinguished Security Strategist and Leader of Splunk’s SURGe, agrees. Here are just some of the use cases he sees with LLMs:
- You can create an LLM of software versions, assets, and CVEs, asking questions like “Do I have any vulnerable software.”
- Network defense teams can use LLMs of open-source threat data, asking iterative questions about threat actors, like “What are the top ten MITRE TTPs that APT29 use?”
- Teams may ingest wire data, ask interactive questions like “What anomalous alerts exist in my Suricata logs.” The LLM or generative AI can be smart enough to understand that Suricata alert data is multimodal rather than modal – that is, a Gaussian distribution – and thus needs to be analyzed with IQR (interquartile range) versus Standard Deviation.
The Limitations of LLMs
LLMs are not without their issues. They are susceptible to hallucinations, which is when the models generate false or misleading content – even as they still seem convincing.
This is why it is critical to have a system that is based on relevant data. Then there will need to be training for helping employees create effective prompts. But there also needs to be human validation and reviews.
Besides hallucinations, there are the nagging problems with the security guardrails for the LLMs themselves.
“There are the potential data privacy concerns arising due to the collection and storage of sensitive data by these models,” said Peter Burke, who is the Chief Product Officer at SonicWall. Those concerns have caused companies like JPMorgan, Citi, Wells Fargo and Samsung to ban or limit the use of LLMs.
There are also some major technical challenges limiting LLM use.
“Another factor to consider is the requirement for robust network connectivity, which might pose a challenge for remote or mobile devices,” said Burke. “Besides, there may be compatibility issues with legacy systems that need to be addressed. Additionally, these technologies may require ongoing maintenance to ensure optimal performance and protection against emerging threats.”
Something else: the hype of ChatGPT and other whiz-bang generative AI technologies may lead to overreliance on these systems. “When presented with a tool that has a wide general range of applications, there’s a temptation to let it do everything,” said Olivia Lucca Fraser, a staff research engineer at Tenable. “They say that when you have a hammer, everything starts to look like a nail. When you have a Large Language Model, the danger is that everything starts to look like a prompt.”
Also read: AI in Cybersecurity: How It Works
The Future of AI Security
LLM-based systems are definitely not a silver bullet. But no technology is, as there are always trade-offs. Yet LLMs do have significant potential to make a major difference in the cybersecurity industry. More importantly, the technology is improving at an accelerating pace as generative AI has become a top priority.
“AI has the power to take any entry-level analyst and make them a ‘super analyst,’” said Smith. “It’s a whole new way to reimagine cybersecurity. What it can do is astounding, and we believe it’s the future of cybersecurity.”
See the Hottest Cybersecurity Startups | <urn:uuid:ce3f214a-1029-4c4f-b1a1-867ea037ea11> | CC-MAIN-2024-38 | https://www.esecurityplanet.com/trends/generative-ai-cybersecurity/ | 2024-09-08T09:43:46Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650976.41/warc/CC-MAIN-20240908083737-20240908113737-00049.warc.gz | en | 0.945439 | 1,381 | 2.78125 | 3 |
A safer internet isn’t a nice thing to have. It’s a necessity because we rely on it so heavily. And there’s plenty we can do to make it happen.
A safer internet might seem like it’s a bit out of our hands as individuals. The truth is that each of us plays a major role in making it so. As members, contributors, and participants who hop on the internet daily, our actions can make the internet a safer place.
So, specifically, what can we do? Take a few moments to ponder the questions that follow. Using them can help frame your thinking about internet safety and how you can make yourself, and others, safer.
- How am I keeping my devices safe?
- How am I keeping myself and my family safe?
- How am I treating other people online?
How am I keeping my devices safe?
Device safety is relatively straightforward provided you take the steps to ensure it. You can protect your things with comprehensive online protection like our McAfee+ plans, you can update your devices and apps, and you can use strong, unique passwords with the help of a password manager.
Put another way, internet safety is part of keeping your house in shape. Just as you mow your lawn, swap out the batteries in your smoke alarm, or change the filters in your heating system, the computers, tablets, phones, and connected devices in your home need the same regular care and maintenance. Again, good security software can handle much of this automatically or with relatively little effort on your part.
If you’re wondering where to start with looking after the security of your devices, check out our article on how to become an IT pro in your home. It makes the process easy by breaking down the basics into steps that build your confidence along the way.
How am I keeping myself and my family safe?
This question covers a wide range of topics: identity theft, protecting your personal info, privacy, cyberbullying, screen time, when to get a smartphone for your child, and learning how to spot scams online, to name a few. And if you visit our blogs from time to time, you'll see that we cover these and other topics in detail, making them a solid resource any time you have questions.
Certainly, you have tools that can give you a big hand with those concerns. That includes virtual private networks (VPNs) that encrypt your personal info, built-in browser advisors that help you search and surf safely, plus scam protection that lets you know when sketchy links pop up in emails and messages.
However, internet safety goes beyond devices. It's a mindset. As with driving a car, much of our online safety relies on our behaviors and good judgment. For example, one piece of research found that ninety-one percent of all cyberattacks start with a phishing email.
As Tomas Holt, professor of criminal justice at Michigan State University, states, “An individual’s characteristics are critical in studying how cybercrime perseveres, particularly the person’s impulsiveness and the activities that they engage in while online that have the greatest impact on their risk.”
Put another way, scammers bank on an itchy clicker finger: a quick click can open the door to an attack. Educating your family about the risks out there, such as phishing attacks and sketchy links that crop up in search results, goes a long way toward keeping everyone out of trouble. Online protection software like ours covers the rest of the way.
How am I treating other people online?
A big part of a safer internet is us. Specifically, how we treat each other — and how we project ourselves to friends, family, and the wider internet. With so much of our communication happening online through the written word or posted pictures, all of it creates a climate around each of us. It can take on an uplifting air or mire you in a cloud of negativity. What’s more, it’s largely out there for all to see. Especially on social media.
Take time to pause and reflect on your own climate. A good place to start is with basic etiquette. Verywell Family put together an article on internet etiquette for kids, yet when you give it a close read, you'll see that it provides good advice for everyone.
In summary, their advice focuses on five key points:
- Treat others how you want to be treated — this is the “Golden Rule,” which applies online just as it does in every other aspect of our lives.
- Keep messages and posts positive and truthful — steering clear of rudeness, hurtful sarcasm, and rumor-mongering is the way to go here.
- Double-check messages before hitting send — ask yourself if what you’ve written can be misinterpreted, especially when people can’t see your facial expression or hear your tone of voice.
- Don’t violate a friend’s confidence — think about that picture or post … will it embarrass someone you know or share something not meant to be shared?
- Avoid digital drama — learn when to respectfully exit a conversation that’s getting mean, rude, or otherwise hurtful.
Of course, the flip side to all of this is what to do when someone targets you with their bad behavior. Such as when an online troll who hurls hurtful or malicious comments your way. That’s a topic in itself. Check out our article on internet trolls and how to handle them. Once again, the advice there is great for everyone in the family.
Being safer … take it in steps
We've shared quite a bit of info in this article and loaded it up with plenty of helpful links too. Don't feel like you have to take care of everything in one sitting. See what you have in place and make notes about where you'd like to make improvements. Then, start working down the list. A few minutes each week dedicated to online protection can greatly increase your security, safety, and savvy.
In our globalized world, data sharing is table stakes for organizations that want to innovate and compete. Gartner has predicted that enterprises that share information will outperform those that do not on most business metrics, and leading organizations like Snowflake and AWS are already offering data exchanges to simplify the process. But as consumers become increasingly aware of how their data is being handled and threats abound, companies are accountable for ensuring their data sharing practices are secure and compliant.
In this blog, we’ll explore what it means to share information securely, the most common challenges to doing so, and best practices for incorporating data security into data sharing processes.
What is Data Sharing?
Data sharing refers to the exchange of information between individuals, departments, organizations, or systems in order to facilitate collaboration, decision making, and analysis. This involves putting standardized processes, technologies, and legal safeguards in place to provide access to data for internal and/or external colleagues, vendors, and partners. Data is most often shared internally, externally, and via data exchange platforms, which are environments where data is shared among multiple stakeholders on a broad scale.
Why is Secure Data Sharing Important?
More than 97% of today’s executives report investing in data, analytics, and AI initiatives, and a majority also recognize the negative repercussions of not doing so. Within these growing volumes of data, companies are bound to have sensitive information. Therefore, it’s paramount that their data sharing practices prioritize data security and privacy.
Amid growing consumer awareness of how personal information is used, data breaches at well-known companies, and mounting data-centric regulations, the link between data sharing and security has never been more clear. Failing to implement robust data security capabilities to prevent unauthorized access, mitigate threats, and achieve compliance could cost organizations millions in fines and lost revenue, not to mention unquantifiable damages to brand reputation and customer trust.
Recently, Meta was fined $1.3 billion for transferring EU users' data to the U.S., and GoodRx was ordered to pay $1.5 million for sharing personal health information (PHI) with third parties like Facebook and Google without consent. Avoiding situations like these can give organizations a competitive advantage simply by staying out of hot water with regulators and consumers.
Top Data Sharing Challenges
If data sharing is so mainstream, why is it such a challenge for leading companies to get right? Gartner surveyed nearly 300 Chief Data Officers (CDOs) and identified the following five challenges:
Data Governance & Management
As organizations collect more data, effective governance and access management frameworks are essential to scaling data use without losing control. But in a survey of more than 600 data practitioners, 41% said they did not feel that they have enough people to manage or analyze their data, and 36% reported simply having too much data. It goes without saying that if you can’t govern or manage data, you can’t ensure it’s being shared securely.
Not only does a lack of data governance and management frameworks make it more difficult for users to locate assets, but it also increases the likelihood that those users will create data copies that evade standard data access controls. Without visibility into how information is being accessed, duplicated, or exchanged, it’s substantially more difficult to proactively mitigate risks.
Data Compliance & Regulatory Requirements
According to the United Nations Conference on Trade and Development (UNCTAD), more than 70% of countries now have regulations protecting individuals’ data and privacy. But the number of laws globally doesn’t even scratch the surface of contemporary data sharing requirements. Data use agreements, contracts, and other non-federal mandates put additional guardrails on how organizations can and cannot handle data.
Data regulations require organizations to ensure transparency, informed consent, and comprehensive data monitoring and auditing capabilities. These can be elusive on their own, but the task of translating legal language into data access policies often proves to be an additional hurdle. The more standards that organizations are subject to, the more difficult it becomes to author sufficient policies, obtain legal sign off, and enforce rules at scale. If even one of these components is missing, it will likely halt data sharing efforts altogether.
Data Privacy Risks
The Pew Research Center reports that “81% of Americans think the potential risks of data collection by companies about them outweigh the benefits.” This comes as no surprise, given the uptick in data privacy violations by major companies in recent years.
Often, these violations occur because organizations are unaware of their levels of risk, and have failed to adequately assess their cyber threat landscape. For instance, data de-identification techniques alone cannot guarantee that data will be protected, nor can one-dimensional authentication methods. At the same time, data privacy is not one-size-fits-all. Organizations must thoroughly assess their unique risks, both internally and externally, and implement the appropriate controls to close privacy gaps and enable secure data sharing.
Insufficient Tools & Technology
The shift to the cloud from traditional on-premises architectures has greatly simplified data operations in some ways, but has made them more complex in others. Most leading cloud data platforms now offer at least some data governance features. However, in multi-cloud environments their capabilities are mismatched and disparate. Without consistent controls, it’s easy for sensitive information to be shared either inadvertently or intentionally.
“Despite most CISOs having a full arsenal of tools for protecting data in the cloud, the proliferation of cloud players such as Snowflake, Databricks, Google BigQuery, Amazon Redshift, and other cloud-based SaaS solutions has accelerated data sharing to a breaking point,” said Matthew Carroll, Co-Founder and CEO of Immuta, in the 2023 Data Access & Security Trendbook. “Traditional approaches that worked for on-premises environments just can’t keep up with the exponential growth in the number of users, data sources, and policies that must be governed, managed, and secured in today’s environment.”
Organizational Culture

Finally, organizational cultures that are rooted in legacy processes hinder some teams from taking steps toward secure data sharing. In these cases, ingrained processes and mistrust of third parties often make decision-makers uncomfortable with exchanging information, especially externally. Ultimately, Gartner notes, this leads to data hoarding and a reluctance to adopt next-gen tools that allow data sharing to be done safely and efficiently.
5 Best Practices for Secure Data Sharing
1. Build Data Security Measures Into Tech Stacks
As the challenges mentioned above make clear, traditional approaches to data security, like perimeter defenses and static access controls, will no longer cut it for cloud data protection. At the same time, protecting against every potential risk to your data stack is nearly impossible. The most effective mitigation tactic is to build security measures into the foundation of your tech stack, so as to proactively protect data no matter where it lives or what state it is in.
“Data sharing is going to get bigger, but there have to be more security controls and mechanisms around it. I think it’s still new and it sounds good, but there are still a lot of unknowns.” -Scott Barsness, Architect/Solution Engineer at BOK Financial, 2023 Data Access & Security Trendbook
Leveraging a dedicated data security platform that can consistently enforce controls across any platform and any data user – whether internal or external – should be fundamental to your data architecture.
2. Understand Your Data Through Discovery & Classification
With data volumes growing exponentially and cloud ecosystems becoming increasingly complex, platform, security, and governance teams need a way to identify and classify the sensitive information in their possession. Data discovery tools provide visibility into the types of data in your ecosystem, so you can classify and tag it accordingly. This capability is especially powerful when the process is automated, helping to eliminate bottlenecks caused by manual inspection.
Having insights into the type of sensitive data that exists in your ecosystem allows you to proactively identify potential vulnerabilities and ways to mitigate them. This is a critical step in establishing the governance and access control frameworks required to enable secure data sharing.
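As a rough illustration of what automated discovery looks like under the hood, the sketch below tags a column of sample values by regex matching. The patterns, tag names, and match threshold are simplified assumptions; commercial classifiers are far more sophisticated.

```python
# Illustrative sketch of sensitive-data discovery: scan sample values
# with regexes and tag the column accordingly. The patterns and tag
# names are simplified examples, not a production classifier.
import re

PATTERNS = {
    "email":  re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$"),
    "us_ssn": re.compile(r"^\d{3}-\d{2}-\d{4}$"),
    "phone":  re.compile(r"^\+?\d[\d\s().-]{7,}$"),
}

def classify_column(samples, threshold=0.8):
    """Return tags whose pattern matches at least `threshold` of samples."""
    tags = []
    for tag, pattern in PATTERNS.items():
        hits = sum(bool(pattern.match(s)) for s in samples)
        if samples and hits / len(samples) >= threshold:
            tags.append(tag)
    return tags

print(classify_column(["alice@example.com", "bob@corp.io", "eve@mail.org"]))
# -> ['email']  (the column is tagged so access policies can key off it)
```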
3. Implement Flexible Data Access Controls
With internal and external data sharing now central to successful business operations, organizations have a need for dynamic data access controls that are both granular and scalable. This is underscored by the popularity of distributed data architectures like data mesh, in which data owners are able to create and enforce their own domain-centric controls.
Attribute-based access control (ABAC) is the best solution for these scenarios, as it offers flexibility, agility, and minimal overhead. Compared to RBAC (role-based access control), ABAC requires 93x fewer data policies to accomplish the same security objectives. By basing access permissions on several dimensions, including metadata about the object, user, and purpose for access, this approach ensures that users can only access the right data at the right time and for the right reasons. This simplifies not just secure data sharing, but compliance with rules and regulations as well.
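To make the contrast concrete, here is a minimal sketch of an attribute-based access decision. The attribute names (role, domain, tags, purpose) and the two rules are hypothetical; a real platform evaluates policies drawn from its metadata catalog and policy engine.

```python
# Hedged sketch of an ABAC decision: access is granted based on user,
# resource, and context attributes rather than a fixed role-to-table map.
def abac_allow(user, resource, context):
    rules = [
        # Analysts may read de-identified data for approved purposes.
        lambda: (user["role"] == "analyst"
                 and "deidentified" in resource["tags"]
                 and context["purpose"] in resource["approved_purposes"]),
        # Data owners always have access within their own domain.
        lambda: user["domain"] == resource["domain"] and user["is_owner"],
    ]
    return any(rule() for rule in rules)

user = {"role": "analyst", "domain": "marketing", "is_owner": False}
resource = {"tags": {"deidentified"}, "domain": "sales",
            "approved_purposes": {"fraud_research"}}
print(abac_allow(user, resource, {"purpose": "fraud_research"}))  # True
```

Note that adding a new purpose or tag changes the decision without creating any new role, which is where the policy-count savings over RBAC come from.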
4. Continuously Monitor & Regularly Audit Activity
Data discovery and access control capabilities alone can’t eliminate threats entirely. To avoid becoming the next headline for violating data sharing standards, organizations must take a proactive approach to data monitoring and auditing for compliance.
Continuous monitoring allows data teams to detect and address anomalies in real time, so as to contain the potential fallout from unauthorized access or sharing. Regular audits further reinforce data security efforts by providing a comprehensive assessment of data sharing practices and access controls. This can help verify compliance with internal policies and external regulations or agreements, as well as highlight any gaps in coverage. Together, monitoring and auditing strengthen data security posture management and ensure data sharing is done securely and with integrity.
5. Facilitate Collaboration Across Data Platform, Security, and Governance Teams
As with most business functions, enabling secure data sharing is a team effort. The data platform, security, and governance teams play interconnected roles in ensuring that data sharing frameworks work seamlessly for all stakeholders.
By working collaboratively, these teams can establish efficient processes, collectively identify potential risks, and align data security and sharing efforts to business objectives.
- Data platform teams are responsible for communicating data requirements and implementing the appropriate controls
- Data security teams oversee the security of the organization’s data infrastructure, and assess and create plans to address potential risks
- Data governance teams are tasked with ensuring policies align with regulatory requirements and carrying out audits to prove compliance
Together, these stakeholders have a full view of data sharing practices, any security gaps or vulnerabilities, and what it takes to remain compliant. Therefore, their guidance and collaboration is essential.
Next Steps for Enabling Secure Data Sharing
Whether you're a large enterprise with thousands of data users or a small startup that's putting a roadmap in place for future data sharing needs, both security and privacy should be top priorities. The good news? You can begin overcoming common challenges and simplifying secure data sharing processes by following these five best practices. The even better news? A single platform can help you put all five into practice.
Immuta helps organizations unlock value from their data by providing an integrated data security platform for sensitive data discovery, security and access control, and activity monitoring. Automated data classification, dynamic attribute-based access controls, and always-on anomaly detection capabilities take the guesswork out of the most critical aspects of data security, while plain language policy authoring enables better collaboration across technical and non-technical stakeholders. With better data security and streamlined operations, organizations can get the right data to the right people so they can share information and maintain a competitive edge.
To see how Immuta enables data sharing for Snowflake, check out this blog.
For decades, biometric identity verification has been employed. The introduction of various biometric identity verification systems has made biometrics more accessible. Biometric identity verification systems have been implemented in practically all important areas, from personal use to the government to healthcare. In this article, we will discuss biometric identity verification and how the system works.
What is biometric identity verification?
Biometric verification refers to any method that allows a person to be identified individually by assessing one or more different biological traits. Biological identifiers include fingerprints, retina patterns, facial features, and finger veins.
Fingerprints were the first type of biometric verification: in ancient China, thumbprints were used as unique identifiers on clay seals. Biometric verification advanced through the use of computerized databases and the digitization of analog records, and new technologies have made identification virtually instantaneous. Cloud-based biometric identification systems have driven a tremendous increase in the use of biometrics for identity verification.
Most people are probably more familiar with biometrics than they realize. Many consumer devices, such as computers and cellphones, now include biometric authentication. Fingerprint identification on smartphones is without a doubt one of the most common biometrics in our daily lives, and many electronic components, such as displays and touchpads, can readily be converted into fingerprint scanners.
Second, voice recognition is used to identify and authenticate customers by specialized digital assistants and telephone-based businesses. Voice recognition has grown in popularity since the advent of AI and intelligent assistants such as Amazon’s Alexa and Apple’s Siri. Finally, facial recognition technology is a type of biometrics that is commonly used. This method can be applied to any device that has a camera. Many of us, for example, now have smartphones that can be unlocked using our faces.
How does a biometric verification system work?
The process for verifying biometric data is relatively uniform from one system to the next. There are two types of biometric verification systems: cloud-based and on-premise. The fundamental distinction between the two is where data is stored. A cloud-based solution, as the name implies, stores data in the cloud, whereas an on-premise deployment keeps data at the physical location where the biometric verification device is installed.
This is how a typical biometric verification system works. First, a record of a person's unique feature, such as a fingerprint, is captured and saved in a database, for example when that person is granted access to a specific location or system. When the individual returns and wants to access the same system, verification is required: a new fingerprint record is acquired and compared to the stored one. If the new record matches the one in the database, the person's identity is confirmed.
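The sketch below is a deliberately simplified model of that enroll-and-verify loop. Real fingerprint systems extract minutiae and use specialized matchers; the short feature vectors and cosine-similarity threshold here are stand-ins for illustration only.

```python
# Simplified sketch of the enroll/verify flow described above.
import math

enrolled = {}  # user_id -> stored template (feature vector)

def enroll(user_id, template):
    enrolled[user_id] = template

def verify(user_id, template, threshold=0.95):
    stored = enrolled.get(user_id)
    if stored is None:
        return False
    dot = sum(a * b for a, b in zip(stored, template))
    norm = (math.sqrt(sum(a * a for a in stored))
            * math.sqrt(sum(b * b for b in template)))
    return norm > 0 and dot / norm >= threshold  # cosine-similarity match

enroll("user42", [0.8, 0.1, 0.6, 0.3])
print(verify("user42", [0.79, 0.12, 0.61, 0.29]))  # True: close match
print(verify("user42", [0.1, 0.9, 0.2, 0.7]))      # False: different print
```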
To make biometric data more accessible and portable, it is usually saved in the cloud. Cloud-based technologies can be used by agencies and organizations to perform biometric identification on anyone, everywhere.
Although cloud security has improved in recent years, there are still security issues. The servers are no more vulnerable than the computers used by such companies regularly. However, because cloud service providers host many tenants, the attack surface in the cloud is greater. Even with tight tenant rules, there remains a significant risk in a multi-tenant cloud.
It is critical to understand where biometric data is stored. If a database containing identifying records is breached, the biometric system linked to that information is likewise jeopardized. And unlike a password, a biometric identifier is based on physical traits that cannot be changed or reissued.
Usage of biometrics
Biometric verification is used to identify people in a wide variety of situations, and its reach extends far beyond personal use on laptops and cell phones. Financial institutions are using voice recognition and other biometrics to identify phone callers. Biometrics is a promising way for healthcare providers to identify patients. Law enforcement agencies utilize fingerprints, facial recognition, iris scans, and other biometric IDs to track people who enter and exit the criminal justice system. Other government institutions are considering biometric IDs for passports and voter registration. The use of biometric verification systems is more common than ever before.
Biometric verification systems have never been easier to develop, thanks to cloud-based biometric solutions. People can use a biometric web API and SDK to build their own biometric verification system. Get in touch with us right away if you wish to build your biometric verification system.
The NSA defines Zero Trust as a “security model, a set of system design principles, and a coordinated cybersecurity and system management strategy based on an acknowledgement that threats exist both inside and outside traditional network boundaries. Zero Trust repeatedly questions the premise that users, devices, and network components should be implicitly trusted based on their location within the network.”
But what does all this really mean? In this video, we explain what Zero Trust is and how it affects federal agencies.
To learn more, visit https://www.inquisitllc.com/zero-trust/
Definition: Cybersecurity Assurance Program
A Cybersecurity Assurance Program is a comprehensive framework designed to ensure that an organization’s information systems, networks, and data are protected against cyber threats and vulnerabilities. It encompasses a set of policies, procedures, technologies, and controls that are implemented to safeguard digital assets and ensure the confidentiality, integrity, and availability of information. This program is essential for organizations to build trust among stakeholders, customers, and partners by demonstrating a strong commitment to cybersecurity.
The core objective of a Cybersecurity Assurance Program is to provide a systematic approach to managing cyber risks, complying with legal and regulatory requirements, and continuously improving the security posture of the organization. It involves regular assessments, audits, and reviews of cybersecurity practices to ensure they are effective and aligned with industry standards and best practices.
The Importance of a Cybersecurity Assurance Program
In today’s digital age, where cyber threats are increasingly sophisticated and pervasive, a Cybersecurity Assurance Program is vital for organizations of all sizes and sectors. It not only helps in identifying and mitigating potential security risks but also plays a crucial role in crisis management and recovery processes. Furthermore, it ensures that organizations are compliant with data protection laws, industry regulations, and contractual obligations related to cybersecurity.
Key Components of a Cybersecurity Assurance Program
A comprehensive Cybersecurity Assurance Program typically includes the following components:
- Risk Management: Identifying, assessing, and prioritizing cybersecurity risks to develop strategies for mitigating them (a scoring sketch follows this list).
- Governance: Establishing a governance structure to oversee cybersecurity initiatives and ensure alignment with business objectives.
- Policies and Procedures: Developing and implementing policies and procedures that guide the organization in maintaining cybersecurity hygiene.
- Training and Awareness: Conducting regular training and awareness programs to educate employees about cybersecurity best practices and their roles in protecting the organization.
- Incident Response and Recovery: Preparing for and managing cybersecurity incidents to minimize their impact and ensure a swift recovery.
- Continuous Monitoring and Assessment: Regularly monitoring and assessing the effectiveness of cybersecurity measures and making necessary adjustments.
- Vendor and Third-Party Management: Ensuring that vendors and third parties comply with the organization’s cybersecurity standards.
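As a small illustration of the risk management component, the sketch below ranks risks by a likelihood-times-impact score. The 1-5 scales and example risks are assumptions for demonstration, not a prescribed methodology.

```python
# Illustrative risk-scoring sketch: score = likelihood x impact, then rank.
risks = [
    {"name": "Unpatched VPN appliance", "likelihood": 4, "impact": 5},
    {"name": "Weak password policy",    "likelihood": 5, "impact": 3},
    {"name": "Untested backups",        "likelihood": 2, "impact": 5},
]

for r in risks:
    r["score"] = r["likelihood"] * r["impact"]  # 1-5 scales assumed

for r in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f'{r["score"]:>2}  {r["name"]}')  # highest-priority risks first
```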
Implementing a Cybersecurity Assurance Program
Implementing a Cybersecurity Assurance Program involves several key steps:
- Assessment of Current Security Posture: Understand the current cybersecurity landscape by conducting thorough assessments and audits.
- Development of a Strategic Plan: Based on the assessment, develop a strategic plan outlining the goals, scope, and roadmap of the Cybersecurity Assurance Program.
- Allocation of Resources: Allocate necessary resources, including budget, technology, and personnel, to support the implementation of the program.
- Execution of the Plan: Implement the strategies and initiatives as per the plan, including the development of policies, deployment of technologies, and training of employees.
- Continuous Improvement: Regularly review and update the Cybersecurity Assurance Program to adapt to evolving cybersecurity threats and business needs.
Benefits of a Cybersecurity Assurance Program
- Enhanced Security Posture: Reduces the risk of cyber attacks and data breaches by implementing robust security measures.
- Regulatory Compliance: Ensures compliance with data protection and cybersecurity regulations, avoiding potential fines and legal penalties.
- Improved Stakeholder Confidence: Builds trust among customers, partners, and investors by demonstrating a commitment to cybersecurity.
- Cost Savings: Prevents financial losses associated with cyber incidents and data breaches.
- Competitive Advantage: Enhances the organization’s reputation and can be a differentiator in the market.
Frequently Asked Questions Related to Cybersecurity Assurance Program
What Makes a Cybersecurity Assurance Program Effective?
An effective Cybersecurity Assurance Program is comprehensive, proactive, and adaptive. It encompasses all aspects of cybersecurity, involves regular assessments and improvements, and adapts to the evolving cyber threat landscape and business changes.
How Often Should a Cybersecurity Assurance Program Be Reviewed?
The program should be reviewed at least annually or whenever significant changes in the threat landscape, technology, or business processes occur. Continuous monitoring can also help in identifying the need for more frequent reviews.
Can Small and Medium-Sized Enterprises (SMEs) Benefit From a Cybersecurity Assurance Program?
Yes, SMEs can greatly benefit from implementing a Cybersecurity Assurance Program. It helps them protect their assets, ensure business continuity, and build trust with their customers and partners, despite having potentially limited resources.
What Role Do Employees Play in a Cybersecurity Assurance Program?
Employees play a critical role as they are often the first line of defense against cyber threats. Regular training and awareness programs are essential to equip them with the knowledge to identify and prevent potential security incidents.
How Do Regulatory Compliance and Cybersecurity Assurance Programs Intersect?
Regulatory compliance is a key component of a Cybersecurity Assurance Program. The program ensures that an organization meets legal and regulatory requirements related to cybersecurity, thereby avoiding fines and penalties associated with non-compliance.
What Technologies Are Commonly Used in a Cybersecurity Assurance Program?
Common technologies include firewalls, intrusion detection systems (IDS), intrusion prevention systems (IPS), antivirus software, encryption tools, and security information and event management (SIEM) systems.
What Is the Difference Between a Cybersecurity Assurance Program and a Cybersecurity Policy?
A Cybersecurity Policy is a set of guidelines and standards designed to manage risks and protect an organization’s information assets. A Cybersecurity Assurance Program is broader, encompassing the policy, processes, technologies, and controls implemented to enforce these guidelines and manage cybersecurity risks comprehensively.
How Can Organizations Measure the Success of Their Cybersecurity Assurance Program?
Success can be measured through metrics such as the number of detected and mitigated threats, the time taken to respond to incidents, compliance levels with regulatory requirements, and the reduction in the number of successful cyber attacks.
Are Third-Party Vendors Included in a Cybersecurity Assurance Program?
Yes, managing the cybersecurity risks associated with third-party vendors is an important part of a Cybersecurity Assurance Program. This includes conducting vendor risk assessments and ensuring that vendors adhere to the organization's cybersecurity standards.
If you're having trouble viewing an attachment, make sure you have the correct software to open and read that file type. A file extension is the set of characters, usually 3 or 4 in length, that follows the main filename. In the filename "ChemistryNotes.txt", the extension is ".txt". Documents with .txt extensions can be opened on most computer systems. However, this is not the case with all file types. For instance, files with a .doc extension can only be opened using Microsoft Word or similar software.
Be cautious when opening attachments; they can harm your computer. It's best to run an anti-virus scan before opening any attachment. You can also search for dangerous file types online. If you've opened an attachment and think you may have inadvertently infected your computer with a virus, read more about what to do about a virus.
December 15, 2014
Cloud and fog computing, software-defined networking, and network functions virtualization are fairly new networking concepts. But they are all just different facets of the "software-ization" of telecommunications infrastructure that is taking place and will contribute to the transition to 5G infrastructures in the coming decade.
This trend will speed the pace of innovation in telecommunications by automating processes, increasing flexibility and programmability through APIs, optimizing costs, reducing time-to-market, and enabling new and better services.
The wave of innovation, in which telecom is taking on more traits of IT infrastructures, will manifest itself with actual deployments and socio-economic benefits by 2020. More significantly, 5G will provide more than the step beyond 4G; it will become the "nervous system" of the digital society and economy -- a truly converged and massively dense telecommunications infrastructure, deeply integrating processing, storage, and networking.
But 5G will connect more than IT resources: It will be a pervasive, highly flexible, and ultra-low latency virtualized infrastructure capable of weaving together an exponential number of smart terminals, devices, machines, wearables, cars, drones, and robots with the enormous processing and storage power available in the cloud. It will also use new interfaces and efficiently combine existing and new networks to deliver services to specific users. It will drive the emergence of new, user-centric ICT ecosystems.
The number of smart terminals, machines, and devices with sensors and actuators is growing rapidly, and soon it will be possible to connect and remotely operate cars, drones, and even robots. All of these systems will allow remote monitoring and control through 5G radio infrastructure and enable machine intelligence to be integrated deeply into the processes of industries, agriculture, public institutions, society, and daily life. These capabilities will drive cost optimization and the development of new business and service opportunities.
According to the International Federation for Robotics, robots are "a major driver for global job creation," citing a study conducted by Metra Martech that credited 1 million industrial robots put in operation in 2011 with creating 3 million new jobs by 2016.
Robots, or any other self-acting machine controlled through 5G-enabled technologies, are an excellent example of the potential future ecosystem. Already today there are robots that work alongside people on manufacturing production lines. In many cases, robots are augmenting the abilities of humans -- freeing them from repetitive or dangerous tasks -- and increasing productivity for manufacturers.
Eventually, 5G will not only dovetail with computing but also with cloud robotics, offering unlimited processing power and storage space for robotic systems. For example, methods for computing intelligence and coordination could be executed in the cloud, while big data collected by the robot sensors and the related knowledge could be stored in the cloud.
Today, robots with full mobility are still a challenge, as most of them are static or connected by cables, limiting their flexibility. 5G will not only enable monitoring and control of truly mobile robots, but also the development and provision of "cognition services," or cognition-as-a-service. Robot sensors will collect data from the environment and transfer it through 5G infrastructure to the cloud, where the robot's intelligence can run remotely.
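One way to picture this offload pattern is the sketch below: a robot posts a sensor reading to a cloud cognition service and acts on the returned command. The endpoint URL, payload schema, and latency budget are all hypothetical.

```python
# Conceptual sketch of the cloud-robotics offload pattern described above.
# The endpoint and payload schema are hypothetical placeholders.
import json
import urllib.request

COGNITION_URL = "https://cloud.example.com/api/v1/cognition"  # hypothetical

def request_action(sensor_reading):
    body = json.dumps({"robot_id": "r2", "reading": sensor_reading}).encode()
    req = urllib.request.Request(
        COGNITION_URL, data=body,
        headers={"Content-Type": "application/json"})
    # A tight timeout: remote control is only viable at very low latency.
    with urllib.request.urlopen(req, timeout=0.05) as resp:
        return json.load(resp)["action"]

# action = request_action({"lidar_min_distance_m": 0.42})
# Over 4G the 50 ms budget above would routinely fail; 5G's ultra-low
# latency is what makes such round trips feasible.
```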
The availability of APIs will allow users and third parties to develop, program, and provide additional services with robots. The next generation of 5G-enabled robots will be working alongside humans in smart cities, collaborating with them in ever more articulated ways in daily life.
The healthcare industry has put an enormous effort into creating new medical devices, health systems and apps to improve treatment and patient care. With the pervasiveness of mobile phones, digital health applications have been growing in popularity in recent years, offering convenient ways for individuals to track their health and wellness.
While they provide many benefits, they also introduce various emerging cybersecurity risks that users must be aware of.
According to IBM's Cost of a Data Breach Report 2023, the cost of data breaches in the heavily regulated healthcare sector has surged by 53.3% since 2020. For the 13th consecutive year, the healthcare industry has experienced the highest data breach costs, averaging USD 10.93 million.
In this article, we explore the cybersecurity risks associated with digital health apps and how to mitigate them.
Understanding digital healthcare applications
Digital healthcare applications are software programs, usually in the form of mobile or web applications, that help individuals monitor and manage their health and wellness. Some examples include fitness trackers, medication reminders, and telemedicine apps.
Such apps are designed to monitor and improve an individual’s health and wellness. They involve collecting and analyzing personal health information and providing feedback to the user. Most digital health applications use wearable devices or other smart devices to capture data such as heart rate, blood pressure, or steps taken.
Types of digital healthcare tools
Under the umbrella term of digital health apps, we can include wellness apps, fitness apps, period trackers, mental health apps, and medical apps or medical devices.
Let’s quickly distinguish between health apps and medical apps, as they are usually subject to different cybersecurity requirements.
Digital health apps are designed to support the health and wellness of individuals or communities by processing health-related data. These applications are versatile, catering to health-conscious users aiming to maintain, improve, or manage their health through activities such as fitness tracking, nutritional guidance, and mental health support.
On the other hand, medical apps fall under the broader category of health apps but are specifically tailored for clinical and medical use. They can come in the form of wearable medical devices or mobile or web applications. Unlike their digital health counterparts, medical apps are often utilized by healthcare providers, patients, and family caregivers in a clinical setting or for medical purposes. They are equipped with the same technological capabilities but are distinguished by their application in diagnosing, treating, or monitoring patient conditions.
A medical app or device is usually subject to more stringent regulations, including around cybersecurity. In the US, medical devices are regulated by the Food and Drug Administration (FDA), and in the EU they fall under the scope of the Medical Device Regulation (MDR).
The biggest market share, however, is held by fitness and wellbeing apps, valued at US$93.56bn in 2024.
Data privacy concerns in digital health applications
The most sensitive part of any health app, and the most valuable to a potential attacker, is of course patient data. One of the primary concerns with health apps is the potential for data privacy breaches. Many digital healthcare organizations collect, store, and transmit significant amounts of user data, which raises concerns about privacy and security. Patients' sensitive personal health information must be protected to prevent unauthorized access or exploitation.
What kind of data do health apps collect?
Most health apps collect data such as name, date of birth, phone number, email address, and location. However, users often provide this information voluntarily during registration or when using the devices. A basic fitness tracker collects and stores such information as a user’s daily physical activity, including steps taken, distance traveled, GPS location, and calories burned. Some fitness trackers monitor a user’s heart rate and sleep habits, offering important insights into their overall health and well-being. Other applications, often mHealth apps, gather patient health details, including medical history, medications, allergies, and test results.
The European Union Agency for Cybersecurity (ENISA) predicts in its Foresight Cybersecurity Threats for 2030 report that targeted attacks on individuals, enhanced by data collected from smart devices, including health data from wearables and medical equipment, will be one of the top future threats to the healthcare industry.
As ransomware groups advance their tactics, patients whose sensitive health data have been stolen may face extortion after a data breach (triple extortion), such as in the notorious cases of Vastaamo in Finland, Medibank in Australia and others. The experience of extortion or having sensitive medical information leaked can affect patient safety.
Digital health cybersecurity risks
With this rise in the popularity of healthcare apps comes an increased risk of cyber attacks that exploit application design and implementation weaknesses. We will now explore some of the vulnerabilities that health apps face and discuss ways companies can design their applications to be more secure and resilient to attacks, preserving confidential data and patient privacy.
It’s important to set the scene and discuss how many healthcare institutions use digital hospital systems that rely on healthcare technology industry-standard protocols, such as HL7, DICOM and FHIR.
HL7 (Health Level Seven) and FHIR (Fast Healthcare Interoperability Resources) are key standards for exchanging healthcare data. They ensure that different health information systems can communicate effectively and share information seamlessly. However, both protocols have notable security vulnerabilities.
HL7, particularly in its older versions, often lacks robust security features, making it susceptible to various attacks. For example, HL7 messages are typically transmitted in plain text, which can be intercepted and read if not properly encrypted. FHIR, although more modern, also faces challenges. The use of RESTful APIs in FHIR can expose systems to common web-based attacks such as SQL injection, cross-site scripting (XSS), and man-in-the-middle (MITM) attacks.
Both protocols can also be vulnerable to issues such as improper authentication and authorization, leading to unauthorized access to sensitive health data. These vulnerabilities necessitate stringent security measures, such as encryption, secure authentication, and frequent security audits, to protect patient information and maintain the integrity and confidentiality of healthcare data exchanges.
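As a hedged example, the snippet below applies two of those measures to a FHIR REST call: TLS certificate validation and bearer-token authentication. The base URL and token are placeholders, not a real service, and the third-party `requests` library is assumed to be available.

```python
# Sketch: fetching a FHIR resource with TLS validation and OAuth auth.
# The base URL and token are placeholders for illustration.
import requests

BASE = "https://fhir.example-hospital.org/R4"   # placeholder endpoint
TOKEN = "REDACTED_OAUTH2_ACCESS_TOKEN"          # e.g., from a SMART on FHIR flow

resp = requests.get(
    f"{BASE}/Patient/12345",
    headers={"Authorization": f"Bearer {TOKEN}",
             "Accept": "application/fhir+json"},
    verify=True,   # reject invalid certificates; never disable in production
    timeout=10,
)
resp.raise_for_status()
patient = resp.json()
```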
Common vulnerabilities found in digital health applications
During penetration tests on health applications, several common vulnerabilities are often identified. These vulnerabilities can expose health applications to numerous cyber threats, compromising the confidentiality, integrity, and availability of sensitive patient and medical data. Many Electronic Medical Record systems (EMRs), the core systems that hold patient records, are nowadays SaaS or web-based applications, with legacy ones still running as desktop apps, making them prone to well-known security issues.
Some of the most frequently discovered vulnerabilities in the healthcare domain are:
- Outdated or unpatched software: Health applications often suffer from unpatched software vulnerabilities due to delayed updates, which cybercriminals can exploit to gain unauthorized access or deploy ransomware.
- Weak or stolen user credentials: Poor password practices, such as reusing or weak passwords, allow attackers to perform brute force attacks and gain access to sensitive health data. An example from 2024 was the hack of Change Healthcare, which happened through Citrix access via a stolen credential sold on the dark web.
- Improper authentication: Weaknesses in authentication mechanisms can allow unauthorized access to health applications. This includes insufficient password policies, lack of multi-factor authentication, and vulnerabilities in session management.
- Insecure data storage and transmission: Health applications often deal with sensitive data, making encryption crucial for both data at rest and in transit. Common findings include unencrypted databases, lack of SSL/TLS encryption for data transmission, or misconfigured encryption protocols.
- Injection flaws: SQL injection, command injection, and other injection flaws remain prevalent. These vulnerabilities allow attackers to inject malicious code into the application, potentially leading to unauthorized data access or manipulation (see the sketch after this list). As this paper shows, the popular OpenEMR software used to be plagued by injection vulnerabilities, and it's not difficult to extrapolate this assumption to other health apps.
- Cross-Site Scripting (XSS): XSS vulnerabilities enable attackers to inject client-side scripts into web pages viewed by other users, which can be used to bypass access controls or steal information.
- Insecure Direct Object References (IDOR): This occurs when an application provides direct access to objects based on user-supplied input. As a result, attackers can bypass authorization and access data belonging to other users, such as medical records or personal information.
- Configuration weaknesses: Misconfigurations in servers, databases, and network devices can open up vulnerabilities. Common issues include default credentials, unnecessary services running on servers, and open ports.
- Insufficient access control: Flaws in access control mechanisms can allow unauthorized users to access or modify data they shouldn’t have access to. This might include improper permission enforcement or flawed role-based access control (RBAC) implementations.
- Security misconfiguration in cloud services: As many health applications are hosted on cloud platforms, misconfigurations in cloud services can lead to data breaches. This includes improperly secured storage buckets, inadequate network access controls, and misconfigured identity and access management (IAM) policies.
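To make one of these defenses concrete, the sketch below shows the standard remedy for injection flaws: parameterized queries, in which the driver binds user input as data rather than SQL. The table and payload are illustrative.

```python
# Sketch: parameterized queries as the standard defense against SQLi.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (id INTEGER, name TEXT)")
conn.execute("INSERT INTO patients VALUES (1, 'Ada')")

user_input = "1 OR 1=1"  # classic injection payload

# Vulnerable: string concatenation lets the payload rewrite the query.
# rows = conn.execute("SELECT * FROM patients WHERE id = " + user_input)

# Safe: the driver binds the value as data, never as SQL.
rows = conn.execute("SELECT * FROM patients WHERE id = ?", (user_input,))
print(rows.fetchall())  # []: the payload is treated as a literal value
```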
Addressing these common vulnerabilities requires a thorough and proactive security posture, including regular code reviews, vulnerability scanning, continuous monitoring, penetration testing, and staying informed about emerging threats. Even though they are encountered less frequently, the potential impact on the confidentiality, integrity, and availability of health data makes them critically important to identify and mitigate.
At Blaze, during penetration tests of health apps, our consultants find an average of 7 vulnerabilities, 1 of which is high-severity.
Cybersecurity compliance requirements for the medical sector
The healthcare sector is one of the most regulated ones, as the safety of healthcare systems is often of critical importance to society. Various laws and regulations govern data protection in digital health applications, including the Health Insurance Portability and Accountability Act (HIPAA) in the US, the General Data Protection Regulation (GDPR) in Europe, or, more specifically, German DiGAV and DiPAV regulations for digital health apps and care apps. In the UK, the NHS Digital Technology Assessment Criteria (DTAC) provides a framework to ensure digital health technologies meet necessary standards. Companies must comply with these regulations to protect user privacy and avoid legal and financial liabilities.
Mental health apps have been under more scrutiny lately for exposing patient data to third parties and there is more effort globally to regulate the data security aspects of those apps.
Initiatives, such as the one in the UK by the Medicines and Healthcare Products Regulatory Agency (MHRA) and the National Institute for Health and Care Excellence (NICE), explore how best to regulate digital mental health tools not only in the UK but also globally.
The digital health market is growing rapidly, with more individuals turning to technology for their healthcare needs. By 2024, the global digital health market is expected to reach $193.70 billion in revenue, with the average user spending $60.04.
We can expect to see even more innovative digital health applications as technology advances. Meanwhile, these apps are already revolutionizing healthcare, making it more accessible, convenient, and effective for individuals around the world.
While using digital health applications has several benefits, they also introduce various security risks. Companies in the health sector must take necessary measures to safeguard their digital health applications and protect user data from cyber threats. Users must also ensure that their data is secure and stay fully informed about the risks associated with using such apps.
We speak with John Curtin Distinguished Professor Steven Tingay, the Executive Director of the Curtin Institute of Radio Astronomy, about the main reasons for wanting to go to Mars and how much demand there is to go to Mars and for what purpose. NASA has recently invited the private sector to submit proposals on commercial missions to the red planet.
Professor Tingay said – "For decades, NASA and other space agencies have spent large sums on in-house planning, development and production for space missions. In the 2020s, the technologies for space exploration are increasingly being developed in the commercial world. However, it is early days and the commercial approach has to prove itself. Should humans go to Mars? If the history of human exploration is anything to go by, you only need a tiny fraction of the population to be motivated enough to do it. If they also have the capital, it will happen."
Data Center Roofs Play Vital Role in Cost Efficiency, Sustainability, and Resilience
The right data center roof depends on whether you want to prioritize cost-effectiveness, energy savings, or resilience during extreme weather events.
If you were to ask someone to list the most interesting parts of a data center, the roof would probably not make the cut. Data center roofs tend to receive much less attention than topics like HVAC solutions, server rack technology, and power systems.
But the fact is that roofs can play a pivotal role in shaping data center operations. They affect the energy efficiency of many data center facilities, and they're also important for helping to protect data center assets against natural disaster.
That's why data center roofing is a hotter topic in the industry than you might think. Keep reading for a look at the latest data center roofing concepts and what they mean for data center sustainability, reliability, and more.
Why Are Roofs Important in Data Centers?
The role of roofs in data centers amounts to more than just keeping rain out and shielding equipment from direct sunlight. Roofing design, material, and construction techniques also impact data centers the following ways:
Energy efficiency: Roofs that are lighter in color reduce the amount of heat that data centers absorb from the sun, which in turn reduces the energy spent on cooling them. Sloped roofs can also reduce solar energy absorption compared to flat roofs (a rough estimate follows this list).
Resilience against natural disasters: During periods of extreme weather — like hurricanes and tornadoes — a well-designed and well-constructed data center roof stands a better chance of keeping equipment safe and operational.
Cost: The roof is one of the more expensive parts of a data center facility to build and maintain. That means that roof construction and material decisions can impact the overall cost of building and operating a data center.
Usable space: Roofs capable of supporting the necessary loads can be a convenient place to locate HVAC compressors and other equipment that data centers depend on — especially for data centers located in places (such as city centers) where open land adjacent to data center facilities is in short supply.
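A back-of-the-envelope calculation shows why reflectance matters, as referenced in the list above. The irradiance, roof area, and reflectance figures are assumed round numbers, not measurements from any particular facility.

```python
# Back-of-the-envelope sketch of roof reflectance vs. solar heat gain.
IRRADIANCE_W_M2 = 1000      # peak clear-sky solar irradiance (assumed)
ROOF_AREA_M2 = 10_000       # hypothetical 10,000 m^2 data center roof

def absorbed_kw(reflectance):
    return IRRADIANCE_W_M2 * ROOF_AREA_M2 * (1 - reflectance) / 1000

for label, r in [("dark BUR", 0.10), ("white-coated membrane", 0.80)]:
    print(f"{label}: ~{absorbed_kw(r):,.0f} kW of solar heat gain at peak")
# dark BUR: ~9,000 kW vs. white-coated membrane: ~2,000 kW, a difference
# the cooling plant no longer has to work against.
```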
Unfortunately, it's impossible to design a data center roof that caters to each of these priorities equally. A sloped roof might improve energy efficiency, for example, but it will also make it much harder to place HVAC equipment on top of the data center. Likewise, a roof constructed to survive extreme weather events will help protect data center assets, but it will also increase data center costs.
For data center operators, then, the challenge is figuring out which approach to roofing provides the greatest benefits at the lowest costs, based on the operators' priorities.
Types of Data Center Roofs
When planning a roof for a new data center facility — or, in some cases, when replacing a roof on an existing facility — companies that own data centers have several options from which to choose.
The conventional approach is to construct a low-sloped roof and finish it using built-up roofing, or BUR. BUR is a type of roofing that uses layers of asphalt to waterproof a roofing surface. This is typically the most inexpensive type of data center roof, but it's not very energy-efficient or resistant to extreme weather.
An alternative to the BUR approach for low-sloped roofs is to cover them with ethylene propylene diene monomer, or EPDM. EPDM is less prone to leaking during heavy rain than BUR, but it's not reflective, so it's not a particularly energy-efficient roofing solution. It is possible to coat EPDM membranes with reflective paint to improve energy efficiency, however.
Metal roofs for data centers are more expensive to install, but they offer greater durability and reflect sunlight. They can also support heavier loads, making them advantageous if you want to place HVAC equipment on the roof.
For data centers with sloped roofs, a variety of roofing materials can be used, ranging from the traditional asphalt shingles commonly installed on homes to concrete tiles. Thus, sloped roofs offer more versatility when it comes to roofing materials — and therefore make it easier to select materials that optimize energy efficiency, if that is a priority. The downside is that sloped roofs are more expensive to build, and, as noted above, they rule out the possibility of placing any type of large equipment on top of the roof.
If you want to be truly cutting-edge, consider covering your data center roof with plants. The roof will be green literally and figuratively because you'll be covering it with organic material that will help reduce solar energy absorption. That said, the value that planted roofs offer from a sustainability perspective is largely symbolic: The EPA says that green roofs reduce energy consumption by only 0.7% compared with traditional roofs.
Also, green roofs may not be as resilient during extreme weather because the planted surface may blow off during high winds, although proper design can mitigate this issue.
The bottom line when it comes to data center roofs is that there's no single type of roof that is best. The right data center roof depends on what data center operators want to prioritize — such as cost-effectiveness, energy savings, or resilience during extreme weather events.
About the Author
You May Also Like | <urn:uuid:e8088fed-656e-4dda-8c0f-8e861c18cff8> | CC-MAIN-2024-38 | https://www.datacenterknowledge.com/sustainability/data-center-roofs-play-vital-role-in-cost-efficiency-sustainability-and-resilience | 2024-09-19T15:43:53Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652031.71/warc/CC-MAIN-20240919125821-20240919155821-00149.warc.gz | en | 0.949929 | 1,085 | 2.71875 | 3 |
AI involves teaching computer systems to complete complex actions like a human would.
It relies on core technologies such as machine learning (ML), deep learning (DL), NLP, and NLU. These technologies create intelligent systems and algorithms that can learn from experience.
Recent advances in AI have sparked widespread interest across business and broader society. The limits of AI show no bounds, with skyrocketing investments continuing industry-wide. According to Statista, the AI market is projected to grow from $200 billion in 2023 to over $1.8 trillion by 2030.
This article explores real-world examples of AI in action. We’ll focus on various ways AI is being used, from small-scale activities to widespread adoption. Then, we’ll look at ten of the most important AI examples in the real world.
10 Real-world examples of artificial intelligence
Artificial intelligence has been around for a while. It’s already garnered mainstream media attention and is evolving at breakneck speed. But are we taking advantage of its full potential?
Understanding AI’s real-world applications and how it’s transforming industries is essential to maximizing the technology’s potential.
To understand AI’s capabilities in the modern world, we’ve broken down the top ten real-world examples of AI by industry.
Let’s explore the top ten examples of AI by industry:
AI in eCommerce
eCommerce is a prime example of AI’s transformative power. From personalized product recommendations to intelligent search, AI streamlines the shopping experience.
It analyzes customer data to predict buying patterns, helping businesses tailor offers and boost sales. AI-powered chatbots provide 24/7 customer support, improving satisfaction.
Even inventory management is optimized with AI, predicting demand and preventing stockouts. AI’s continuous learning and adaptability mean it’s constantly improving. It makes eCommerce smarter, faster, and more customer-centric.
Stitch Fix, an online clothing retailer, heavily relies on AI. Their algorithms analyze a customer’s style quiz, feedback, and purchase history to recommend personalized clothing selections. This data-driven approach elevates the shopping experience and increases customer satisfaction.
AI in education
AI is making inroads into the education sector by offering adaptability and individualization. AI-powered platforms can analyze student data to pinpoint strengths and weaknesses.
This ensures learning resources and practice questions are aligned with their needs. It also helps students learn independently, leading to better engagement and outcomes.
Knewton Alta is an adaptive learning platform used in higher education. It uses AI to customize course materials and assessments based on each student’s performance. This ensures a more personalized and effective learning journey.
AI in lifestyle
AI integrates into our daily lives, offering unparalleled convenience, personalization, and insights. From smart home devices to personalized content recommendations, AI subtly enhances our lifestyle experiences.
AI-powered voice assistants like Alexa allow us to control various home devices with simple voice commands. These assistants can learn our habits to automate tasks and personalize experiences.
Streaming platforms like Netflix and Spotify leverage AI to analyze our preferences. They suggest movies, shows, and music that align with our tastes.
Smart thermostats like Nest use AI to learn your temperature preferences over time. This allows them to create energy-efficient schedules, lowering your bills and reducing your carbon footprint.
AI in navigation
AI is revolutionizing navigation, making travel safer, more efficient, and more accessible. GPS systems use AI to analyze real-time traffic data, predict congestion, and suggest the fastest routes.
Self-driving cars rely on AI-powered systems for object detection, path planning, and decision-making. AI is also improving navigation for those with disabilities. It does this by integrating voice commands and personalized routing options.
Google Maps is a prime example of AI in navigation. It uses machine learning to analyze massive amounts of traffic data. This provides users with accurate travel times, alternative routes, and even warnings about potential slowdowns.
AI in robotics
AI is pushing the boundaries of what robots can do. It transforms them from machines that perform repetitive tasks to intelligent collaborators. AI enables robots to perceive their environment through advanced sensors and computer vision. This allows them to adapt to changes and make decisions in real time.
Machine learning helps robots improve their performance over time and optimize movements. AI also drives advancements in natural language processing, enabling more seamless interactions.
Amazon’s warehouses use thousands of AI-powered robots called “Kiva bots.” These robots use computer vision and machine learning algorithms to navigate the warehouse, retrieve storage pods, and deliver them to human workers for picking and packing. This system has significantly increased efficiency and accuracy in Amazon’s order fulfillment process.
AI in healthcare
AI is poised to revolutionize the healthcare industry, improving diagnosis, treatment, and patient outcomes.
AI-powered systems can analyze vast amounts of medical data, including images like X-rays and MRIs, to detect patterns the human eye might miss. This aids in the early diagnosis of diseases like cancer. AI can also develop personalized treatment plans based on a patient’s medical history, genetics, and other factors.
AI-powered virtual assistants and chatbots can even provide patients with initial triage and support, freeing up medical professionals for more complex cases.
IBM Watson for Oncology is an AI-powered system that assists oncologists in making more informed treatment decisions. It analyzes patient data, medical literature, and clinical guidelines to provide evidence-based recommendations tailored to each case.
AI in gaming
AI is vital in modern game development, leading to more immersive and dynamic gaming experiences.
AI drives the behavior of non-player characters (NPCs), making them more lifelike and unpredictable. They can learn and adapt to player actions, creating a sense of challenge and realism.
AI algorithms can also generate game worlds, levels, and content, ensuring no two playthroughs are identical. They analyze player data to personalize difficulty settings and tailor gameplay experiences.
The game “The Last of Us Part II” features incredibly realistic and adaptive enemy AI. Opponents actively strategize, communicate with each other, flank the player, and react dynamically to environmental changes. This results in incredibly tense and challenging encounters that feel far less scripted than in many similar games.
AI in social media
AI is deeply embedded in the social media landscape, shaping how we interact with content and each other. It plays a significant role in what you see on your feeds. Algorithms analyze your interests, behaviors, and connections to personalize content recommendations.
AI also helps moderate online communities and flag harmful content that violates platform guidelines. AI-powered chatbots can now provide customer service, answer FAQs, and engage directly with users.
TikTok’s Creative Assistant showcases how AI augments social media creation. It analyzes trends, popular formats, and successful content on the platform. This lets the Creative Assistant help users with idea generation, script writing, and tailoring their content for the TikTok audience. This AI-powered tool simplifies the creative process and allows users to produce videos with a better chance of going viral.
AI in finance
AI is revolutionizing the financial industry, enhancing decision-making, improving efficiency, and combating fraud. AI algorithms can analyze vast datasets of financial information to uncover patterns and insights that humans might miss.
AI-powered systems are used to develop credit scoring models that assess risk more accurately. This opens lending opportunities for underserved individuals. AI also drives the creation of high-frequency trading algorithms that execute trades at lightning speed based on real-time market conditions.
Companies like ZestFinance use AI to create alternative credit scoring models. These models analyze non-traditional data points, expanding financial access to those with thin credit histories or past financial difficulties.
AI in automobiles
AI is fast changing how we interact with and experience our cars. Self-driving vehicles heavily rely on AI systems to perceive the environment. They do this through cameras, sensors, and lidar.
These systems use computer vision and machine learning to identify objects, navigate roads, and make real-time driving decisions. AI also enhances assistance features like adaptive cruise control and automatic emergency braking. This makes our roads safer. In-car AI assistants offer voice commands for navigation, music control, and climate settings. This provides a more seamless and intuitive driving experience.
Tesla’s Autopilot is one of the most advanced AI-powered driver systems. It uses a combination of cameras, radar, and ultrasonic sensors, along with powerful neural networks. It can enable features like automatic lane changes, traffic-aware cruise control, and even self-parking capabilities.
AI: The bigger picture
AI’s reach extends far and wide. It’s no longer confined to tech giants–it’s knocking on the door of every sector.
Healthcare professionals are harnessing AI for disease detection. Insurers are using AI to streamline claims processing, predict risk, and even personalize coverage for each individual. In the world of finance, it’s powering lightning-fast trading and robust fraud detection. This is just a taste of how AI is already in play.
Understanding these real-world examples is the secret weapon for navigating the AI revolution. Seeing AI in various industries sparks essential questions: How could this transform my business? Where might I be vulnerable? Is my team prepared for digital transformation?
Studying AI’s successes and failures in other fields gives you the insight needed to plot your unique AI journey. This isn’t guesswork; it’s a calculated strategy to prepare your business to survive the AI wave and ride it confidently to the top. | <urn:uuid:6a4010a7-3f4d-44f2-95a8-184a309320f3> | CC-MAIN-2024-38 | https://www.digital-adoption.com/ai-examples/ | 2024-09-08T16:18:25Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651013.49/warc/CC-MAIN-20240908150334-20240908180334-00313.warc.gz | en | 0.925551 | 2,018 | 2.84375 | 3 |
What is a Hyperscaler Data Centre?
In the modern digital landscape, the demand for data processing, storage, and real-time analytics has grown exponentially. This surge has driven the evolution of data centres, leading to the rise of hyperscaler data centres. But what exactly is a hyperscaler data centre, and why is it so significant in today's technology-driven world?
Understanding Hyperscaler Data Centres
A hyperscaler data centre is a type of facility designed to efficiently scale up (or down) in response to the massive and variable demands of cloud computing, big data, and artificial intelligence (AI). These data centres are built by companies known as hyperscalers, such as Amazon Web Services (AWS), Google Cloud, and Microsoft Azure. The primary characteristic of a hyperscaler data centre is its ability to handle extensive and rapid growth in data and computing resources without compromising performance or efficiency.
Clarifying Terminology: Hyperscale Data Centres vs. Hyperscalers
It’s important to distinguish between “hyperscale data centres” and “hyperscalers.” Hyperscale data centres refer to the physical facilities, while hyperscalers are the companies (like AWS, Google, and Microsoft) that operate these large-scale data centres.
Key Features of Hyperscaler Data Centres
Massive Scale: Hyperscaler data centres are colossal in size, often encompassing hundreds of thousands of servers. This scale is necessary to support the vast and growing needs of global businesses and consumers.
Automation and Orchestration: Automation is at the heart of hyperscaler operations. From deployment to maintenance, many processes are automated to ensure consistency, efficiency, and minimal human intervention.
Software-Defined Infrastructure: These data centres utilise software-defined networking (SDN), storage, and compute resources, allowing for more flexible and dynamic allocation of resources based on real-time demand.
AI and Machine Learning: Hyperscaler data centres are integral to the development and deployment of AI and machine learning applications. They provide the computational power necessary to train complex models and process vast amounts of data in real-time.
Energy Efficiency: Hyperscalers invest heavily in energy-efficient technologies and practices. This includes advanced cooling systems, renewable energy sources, and optimised hardware configurations to reduce their carbon footprint.
Redundancy and Resilience: High availability and disaster recovery are critical components. Hyperscaler data centres are built with multiple layers of redundancy and failover mechanisms to ensure continuous operation even in the event of hardware failures or other disruptions.
Benefits of Hyperscaler Data Centres
Scalability: Businesses can scale their operations seamlessly without the need for significant upfront investments in infrastructure. This flexibility allows companies to grow and adapt quickly to market changes.
Cost Efficiency: By leveraging the economies of scale, hyperscalers can offer cost-effective solutions to their customers. This includes pay-as-you-go pricing models, which align costs with actual usage.
Global Reach: Hyperscaler data centres are often distributed globally, enabling businesses to deploy applications and services closer to their end-users, reducing latency and improving performance.
AI-Driven Innovation: The scale and resources of hyperscaler data centres drive innovation in areas such as artificial intelligence, machine learning, and big data analytics. This fosters the development of cutting-edge solutions that benefit various industries.
Enhanced Security: Hyperscaler data centres employ robust security measures, including advanced encryption, threat detection, and real-time monitoring, to protect sensitive data and maintain compliance with regulatory standards.
Strategic Geographic Distribution
Hyperscaler data centres are strategically distributed across various geographies to ensure proximity to power sources and water for cooling, and to mitigate risks associated with natural disasters. This geographic dispersion also addresses local laws and compliance requirements, while improving performance by reducing latency and managing traffic more effectively. For instance, AWS has approximately 33 regions worldwide to ensure services are provided close to the network edge.
Services and Products Offered by Hyperscalers
Hyperscalers provide a wide range of services, including:
- Compute and Storage: Processing power to run applications and databases for data storage.
- Development and Deployment: Tools for application management, virtual machines, and container orchestration.
- AI/ML and Big Data: Infrastructure to support high-performance AI and machine learning applications.
- Security: Comprehensive protection measures, identity management, and disaster recovery solutions.
- Analytics: Tools for optimising cloud services, analysing expenditures, and monitoring performance.
- Media Services: Management and delivery of streaming content and digital files.
- Industry-Specific Solutions: Tailored services for sectors like fintech, healthcare, and more.
The Importance of Hyperscaler Data Centres in the Digital Age
Hyperscaler data centres are not just about size; they represent the backbone of the digital economy, enabling innovations across various industries. These facilities have transformed traditional IT infrastructure by providing scalable, cost-effective, and reliable solutions that meet the demands of modern businesses. The ability to deploy new services quickly and efficiently has allowed companies to focus on their core competencies while leveraging the powerful infrastructure provided by hyperscalers.
Historical Context and Evolution
The concept of hyperscale data centres evolved from the need to handle vast amounts of data and computing power efficiently. Initially, data centres were small-scale and localised, but as internet usage grew, so did the demand for larger, more robust facilities. The introduction of virtualisation technologies, such as hypervisors, enabled the abstraction of applications from physical hardware, allowing for greater flexibility and efficiency. This evolution laid the groundwork for today's hyperscale data centres.
AI's Influence on Hyperscaler Data Centres
As AI technology advances, hyperscaler data centres are increasingly integrating AI infrastructure. Providers must consider the energy grid's capability to support the rising AI workloads. AI is driving higher server densities and special configurations of hardware and software for optimal performance. The deployment of Graphics Processing Units (GPUs) at scale is becoming a defining characteristic of hyperscaler operations, placing unique demands on power and cooling systems.
Sustainability and Growth in 2024
Balancing growth with sustainability remains a top priority for hyperscaler data centres. Providers are improving energy consumption telemetry and reporting to help customers monitor their carbon footprint. Major operators like Microsoft and Oracle have pledged to use 100% renewable energy in the near future. Google is also aggressively expanding with sustainability in mind, including significant investments in clean energy projects and responsible water use in data centre cooling.
Driving Factors for Hyperscaler Buildouts
Several key factors drive the ongoing expansion of hyperscale data centres:
- AI Readiness: New data centres are being built to accommodate AI workloads, which require unique cooling and power infrastructures.
- Redundancy and Resilience: Adding redundancy and new capacity to minimise downtime and meet utilisation metrics.
- Edge Services: Expanding edge data centres to support IoT, smart manufacturing, and autonomous driving.
- Data Sovereignty: Ensuring data is located close to users for performance and compliance with local regulations.
Edge Data Centre Market Growth
While hyperscale data centers are central to handling massive data and computing demands, the rise of edge computing complements this by bringing computation closer to the data source. This reduces latency and improves performance for applications like IoT and real-time analytics. The synergy between hyperscale and edge computing is vital for meeting the diverse needs of modern digital infrastructure.
The edge data centre market is projected to grow globally from $7.2 billion in 2021 to $19.1 billion by 2026, according to MarketsandMarkets driven by the growth of online streaming services and the adoption of IoT. This growth underscores the importance of edge computing in conjunction with hyperscale data centres to meet diverse and expanding demands.
Largest Hyperscale Data Centres
To truly understand the scale of hyperscale data centres, here are some of the largest facilities globally:
- Citadel Campus: Located in Nevada, USA, it occupies 7.2 million square feet and is powered by 100% renewable energy.
- Switch SuperNAP: Located in Las Vegas, Nevada, it spans 3.3 million square feet and runs on 100% green energy.
- Inner Mongolian Information Hub: Owned by China Telecom, it covers 10.7 million square feet.
- Hohhot Data Centre: Also in China, it spans 7.7 million square feet.
- Range International Information Hub: In Langfang, China, it covers 6.6 million square feet.
Hyperscaler data centres are a cornerstone of the modern digital ecosystem, providing the infrastructure needed to support the exponential growth of data, cloud services, and AI-driven applications. Their ability to scale efficiently, coupled with cost savings and global reach, makes them an indispensable asset for organisations of all sizes. As technology continues to advance, the role of hyperscaler data centres will only become more critical in driving innovation and supporting the digital economy.
By understanding what hyperscaler data centres are and the benefits they offer, organisations can better leverage these powerful resources to stay competitive and meet the demands of an ever-evolving market.
Discover how NEXTDC's state-of-the-art wholesale data centres and bespoke build-to-suit solutions can empower your business. Contact us today to learn more about our services, how we are enabling industries, and supporting the future of AI. Let NEXTDC help your business thrive in the digital age.
Why Choose NEXTDC for Your Data Centre Needs?
Dynamic Partner Ecosystem:
Leverage Australia's most extensive partner ecosystem with a community of 750+ partners to enable more connections with carriers, cloud providers, and IT service providers.
Hybrid Cloud Experience:
Empowering customers to leverage cloud first strategies and optimise multi-cloud deployments to scale mission critical IT infrastructure.
AI, High-Performance Computing and Edge Design:
NEXTDC is at the forefront of supporting Edge computing and High-Performance Compute (HPC) requirements, providing customised solutions to accelerate your AI journey.
The only data centre operator in the southern hemisphere with Tier IV Gold certification for Operational Sustainability, NEXTDC guarantees zero downtime for reliability and performance.
Data Centre Interconnectivity:
Secure, private, and direct access to Australia’s most connected range of global cloud providers, integrated with a nationwide network of data centre facilities.
World Class Design and Operations:
Internationally recognised for designing, constructing, and operating Australia’s market leading Tier IV facilities, certified by globally renowned Uptime Institute.
Demonstrating a commitment to sustainability, NEXTDC prioritizes renewable energy sources, achieving leading standards such as 5-star NABERS energy efficiency ratings and TRUE certification.
DTA Certification for Government Agencies:
NEXTDC is certified by Australia’s Digital Transformation Agency (DTA), to ensure compliant and sovereign critical infrastructure choice for government at all levels.
NEXTDC, a listed company on the ASX 100, stands out with industry peer awards as the region's most innovative and customer focused data centre provider.
Carbon Neutral Operations:
NEXTDC's corporate operations are certified carbon neutral under the Australian Government’s Climate Active Carbon Neutral Standard.
Efficiency and Cost Management:
Engineered for outstanding energy efficiency, NEXTDC data centres deliver industry-leading benchmarks for minimising operational cost and total cost of ownership. | <urn:uuid:4ef7a5f7-1c79-4411-98d4-8fd627f9dd8e> | CC-MAIN-2024-38 | https://www.nextdc.com/blog/what-is-a-hyperscaler-data-centre | 2024-09-08T16:43:08Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651013.49/warc/CC-MAIN-20240908150334-20240908180334-00313.warc.gz | en | 0.888915 | 2,400 | 3.203125 | 3 |
jamdesign - stock.adobe.com
In an open letter on the World Wide Web Foundation website, the father of the web, Tim Berners-Lee, urged the tech industry and users to fix the internet’s bugs.
The web, which was created by Tim Berners-Lee on 12 March 1989, is facing major challenges, its inventor warned.
Berners-Lee initially proposed a way to share documents between the research communities at CERN. From this proposal, which was described as “vague but interesting”, he developed the HTTP and HTML protocols.
Today, on its 29th birthday, he shared his concerns that the web is influenced by a few very large internet companies and political agenda.
“The threats to the web today are real and many, including those that I described in my last letter – from misinformation and questionable political advertising to a loss of control over our personal data,” Berners-Lee said in his letter.
Discussing the risks of fake news and hacking, he wrote: “The fact that power is concentrated among so few companies has made it possible to weaponise the web at scale. In recent years, we’ve seen conspiracy theories trend on social media platforms, fake Twitter and Facebook accounts stoke social tensions, external actors interfere in elections, and criminals steal troves of personal data.”
Berners-Lee believes the web needs to be more open, rather than users’ experiences being defined by the web giants. He argued that just like a software product, the web itself can be refined, and the “bugs” ironed out.
Tim Berners-Lee, inventor of the web
“While the problems facing the web are complex and large, I think we should see them as bugs: problems with existing code and software systems that have been created by people – and can be fixed by people. Create a new set of incentives and changes in the code will follow,” he wrote.
Berners-Lee called on the world of web users to design a web that creates a constructive and supportive environment.
“Two myths currently limit our collective imagination: the myth that advertising is the only possible business model for online companies, and the myth that it’s too late to change the way platforms operate. On both points, we need to be a little more creative,” Berners-Lee wrote.
“Today, I want to challenge us all to have greater ambitions for the web. I want the web to reflect our hopes and fulfil our dreams, rather than magnify our fears and deepen our divisions.” | <urn:uuid:0666a912-19cf-4846-a3d9-d22b2b81e566> | CC-MAIN-2024-38 | https://www.computerweekly.com/news/252436652/World-Wide-Web-inventor-Tim-Berners-Lee-wants-us-to-fix-the-webs-bugs | 2024-09-11T04:21:33Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651343.80/warc/CC-MAIN-20240911020451-20240911050451-00113.warc.gz | en | 0.962335 | 542 | 2.6875 | 3 |
With the rise of digital transformation and the prevalence of mobile, cloud, and other technologies, the cybersecurity threat has never been higher. The vulnerabilities created by these technologies are growing. Meanwhile, adversaries are getting more sophisticated, well-funded, and organized, resulting in malware and other attacks that are well-tailored to their targets and difficult to detect. In addition, automated tools are being used by attackers to speed up their processes.
Cybersecurity is crucial for businesses to maintain productivity and competitiveness in the face of rising dangers and exposure and comply with a growing body of national, regional, and worldwide legislation governing cybersecurity and privacy.
Cybersecurity is a strategic goal
Due to the ever-evolving nature of both vulnerabilities and threats, as well as the growing importance of trust in gaining a competitive edge, cybersecurity has become a top priority for many companies. More and more people realize that cybersecurity can’t be an afterthought driven by compliance if a company wants to succeed and stay competitive in today’s market.
Therefore, modern cybersecurity frameworks or architectures should be the goal of any successful firm. Without cybersecurity solutions, organizations cannot safeguard data, detect malicious activity, respond to attacks, and recover from them with minimal damage to business operations.
Keep in mind that cyber security isn’t just about shielding the company from known dangers; it’s also about giving the company the safeguards it needs to survive. Working with a company to create a secure IT infrastructure so it may develop and reach its goals is at the heart of cyber security.
The need for a layered approach to cybersecurity
When it comes to security, organizations can’t rely on just one solution since they need to be able to prevent and respond to attacks and recover from any damage that may have been done.
Layered security is the most effective method for accomplishing this. This means using various complementary technologies, systems, and processes to ensure reactive and proactive defenses against cyber threats. For optimal security, these many systems and technologies must share information.
Threat actors can compromise and potentially break into a company by targeting a specific area of its information technology infrastructure. The broader an organization’s attack surface, the greater the danger it faces and the more work it must take to defend and secure itself. For instance, a tiny house on the bottom floor would be much simpler to protect than a large apartment building with multiple floors.
The typical attack surface has grown over the past few years as the digital breadth of a company’s ecosystem has grown and now includes the following:
Endpoints, including workstations, servers, and other office hardware; remote and personal devices (like an employee’s smartphone) that connect to your network.
Cloud-based suppliers, such as Microsoft 365, Slack, Zoom, and Google Drive, continue to grow in popularity as their users increase. Cloud-based services and partners are most frequently used by smaller businesses to centralize and standardize services and departments.
Smart screens, refrigerators, printers, and cameras are all examples of IoT gadgets that connect to the internet but may not have the best security.
When protecting a business from outside threats, the human factor is the most vulnerable.
Data storage and transmission across several sites and the employment of remote or hybrid employees necessitate tighter security protocols.
No matter how big or small, every organization is vulnerable to more sophisticated attacks because of all the entry points attackers can utilize. These attacks use vulnerabilities in non-traditional endpoints and are typically carried out with more outstanding research and accuracy. They exploit holes in widely used cloud apps or a company’s cloud architecture to access private information and valuable assets.
Spear phishing and business email compromise attacks (BEC) are other forms of a modern insider threat; they target unwitting employees by pretending to originate from executives within the organization and can cause devastating financial losses from which some companies may never recover.
How organizations can build comprehensive layered security
A layered cybersecurity plan that includes preventative measures, proactive action, detection, and reaction capabilities is necessary to account for all the potential entry points that put a company’s house in danger. These features go much beyond those of standard endpoint protection solutions. The following are included in this category:
Protecting the world around you requires constant vigilance, like being familiar with all the entrances to your home and the location of your safe and vital documents.
Tools like endpoint detection and response (EDR) can be implemented after visibility into the environment has been established. These analytical tools can monitor your entire network and any cloud infrastructure, allowing you to spot hostile actors and prevent further damage.
Endpoint Detection and Response (EDR) “includes not only the automated monitoring and detection of threats on the endpoint, but also a combination of autonomous and manual investigation, remediation, and response,” explains VIPRE. Endpoint devices, such as laptops, workstations, and smartphones, are often the most vulnerable since they are utilized by end users who are not versed in responding to cyber incidents.
When you “harden” something, you take measures to lessen the likelihood of it being compromised or attacked. It’s the modern equivalent of putting in burglar-proof glass and new locks to keep out intruders.
One form of hardening is patch management, which involves updating your hardware, software, and services to the most secure versions. This will make it harder for malicious actors to exploit previously discovered flaws. Email security, spam filters, antivirus software, and full-disk encryption are all examples of hardening policies and tools that keep data safe, even if it is physically removed from a company’s network or servers.
How you react to attacks is equally as crucial as taking measures to prevent them. Even if a thief gains entry to your home, that doesn’t mean you’re helpless. Tools designed for responding to incidents can help you stop an attack or lessen its impact on your business. In this context, “response services” refers to both EDR and managed detection and response services provided by partners. Accessing a team of professionals around the clock will allow you to respond more quickly, which is why many businesses are outsourcing cybersecurity services.
Although it can be challenging, there are numerous alternatives open to businesses when it comes to cybersecurity. The fact that they use multiple strategies to reduce risk is crucial. If not, you’ll be leaving your front door wide open. | <urn:uuid:7dd6952f-4723-4dcf-9a80-5c3abede2c87> | CC-MAIN-2024-38 | https://www.cpomagazine.com/cyber-security/investing-in-layered-cybersecurity-is-a-strategic-choice/ | 2024-09-11T04:07:59Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651343.80/warc/CC-MAIN-20240911020451-20240911050451-00113.warc.gz | en | 0.952789 | 1,322 | 2.859375 | 3 |
IT consultants and developers throw around tons of IT terms that many business executives don’t fully understand. It’s quite common to hear the terms cloud and SaaS (Software as a Service). However, what’s the difference between these two IT terms? Many non-IT people seem to confuse Software as a Service and cloud computing, which is understandable as these terms are linked. Let’s begin with a short definition of each:
- SaaS: (short for Software as a Service) a software distribution model in which software applications are provided to a customer over a network (Internet) but hosted by a Service Provider.
- Cloud computing: storing and processing data using a network of remote servers hosted on the Internet
Basically, SaaS refers specifically to business software that is hosted and delivered via the cloud. Software as a Service can replace on-premises software systems and allow for more affordable systems with quicker implementation processes. Businesses can eliminate costs associated to hardware (servers), in-house IT staff since IT Service Providers can handle the work, and large initial investments, since SaaS is usually paid by monthly or yearly subscription. Most business software systems are now available as SaaS, such as a fully integrated ERP solution, EDI translation tools and Business Continuity solutions.
Cloud computing, as mentioned in the definition above, has more to do with hosting and delivering data via the Internet. When IT Service Providers mention that they can host your data in the cloud, it entails hosting it in a Data Center. In other words, you would be leasing space on a server in a highly secure Data Center to store and manage your data. You can access this data via the Internet at any time and from anywhere. A company’s data that is hosted in the cloud is owned by that company and NOT the IT Service Provider. Once again, there are reduced costs involved as companies would be sharing space in the cloud and would no longer need their own servers on company premises. | <urn:uuid:52c51aa1-ea6b-4544-a4f1-ae8f331ead59> | CC-MAIN-2024-38 | https://www.namtek.ca/knowing-the-difference-between-saas-and-cloud-computing/ | 2024-09-18T10:40:02Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651895.12/warc/CC-MAIN-20240918100941-20240918130941-00413.warc.gz | en | 0.963659 | 413 | 2.578125 | 3 |
The Beaty Biodiversity Museum, located at Vancouver Campus, 2212 Main Mall, Vancouver, BC V6T 1Z4. Is a unique and fascinating destination for anyone interested in the natural world. This world-class museum is dedicated to preserving, interpreting, and celebrating the diverse range of life on our planet, with a particular focus on the rich biodiversity of the Pacific Northwest.
The museum’s most eye-catching feature is a 26-meter-long blue whale skeleton hanging from the main gallery’s ceiling, providing a breathtaking display of the largest animal on Earth. This magnificent specimen was recovered from a beach near Tofino, British Columbia, in the early 1990s. It provided visitors with a powerful reminder of the immense size and complexity of the natural world.
In addition to the blue whale, the museum is home to a wealth of other specimens, including various mammals, birds, reptiles, and fish. There are also extensive collections of insects, crustaceans, mollusks, and other invertebrates, as well as displays dedicated to the region’s plant life. All these specimens are beautifully displayed, with accompanying text that provides context and information about the species, their habitats, and their importance to the natural world.
One of the most exciting aspects of the museum is its focus on biodiversity and the interconnectedness of all life on Earth. Visitors can learn about the complex relationships between species, how they interact with one another, and how the loss of a single species can profoundly impact the entire ecosystem. This is a particularly important message in the current era of rapid global change and species loss, and the museum does an excellent job of communicating this to its visitors.
In addition to the specimens on display, the museum also features interactive exhibits that allow visitors to engage with the natural world hands-on. There are touch tanks where visitors can learn about the incredible variety of marine life, and interactive displays allow visitors to learn about the life cycles of insects and other creatures. The museum also has a state-of-the-art audio-visual system, including high-definition projection screens and sound systems, that bring the exhibits to life and provide an immersive experience for visitors.
The museum is also an important research facility, and the collections are used by scientists and students worldwide. The museum’s staff includes experts in taxonomy, paleontology, and ecology. They work tirelessly to expand and enhance the collections and communicate their research findings to the public. This research focus helps to ensure that the museum remains at the forefront of the field and that its exhibits are always up-to-date and relevant.
In conclusion, the Beaty Biodiversity Museum is a must-visit destination for anyone interested in the natural world. Whether you are a student, a scientist, or a curious traveler, this museum has something for you. With its breathtaking displays, interactive exhibits, and focus on the interconnectedness of life, the Beaty Biodiversity Museum is sure to inspire and educate its visitors and help to promote a greater appreciation and understanding of the incredible diversity of life on our planet. So, if you are in Vancouver, visit this amazing museum and experience the wonder of biodiversity for yourself.
Keep visiting our tour around Vancouver by clicking here.
Driving Directions From Dyrand Systems To This POI
Driving Directions To The Next POI | <urn:uuid:ed2a9139-e839-4c61-968a-ded702350b59> | CC-MAIN-2024-38 | https://dyrand.com/beaty-biodiversity-museum/ | 2024-09-19T18:53:24Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652055.62/warc/CC-MAIN-20240919162032-20240919192032-00313.warc.gz | en | 0.941548 | 688 | 2.796875 | 3 |
From operating heavy machinery to something as basic as keeping the lights on, manufacturing takes a lot of energy. Barry Crackett is a Product Designer at Brushtec, a company that has made several energy-saving changes to their manufacturing process. He has these tips for making sure operations are as eco-friendly as possible.
The sheer scale of the manufacturing industry makes a lot of people nervous about its environmental impact. To make the products that we need, factories require huge amounts of energy and produce a lot of waste in return — some of which could be pollutants. Global warming is a hot topic in politics and the media, and our increasing awareness of the effects of our energy usage means it’s never been more important to make changes to our operations that are better for global health.
Why going eco-friendly matters
As well as being good for the planet, manufacturing company stakeholders should consider the financial advantages of adopting eco-friendly practices. Greener companies benefit from lower costs and reduced overheads, and the government offers several incentives, like reductions in environmental taxes, that can make the decision to go green a little bit easier.
Going environmentally friendly can incur an initial upfront cost, but with a relatively short payback period. Some changes are instantly positive, others further down the line, but a small investment now can make money in the future. If it seems intimidating to make such big investments, businesses can start small and look for changes that have an immediate impact before working their way up to major restructures.
Energy surveys, done internally or through third party auditors, are used to understand how much energy a business uses and identify which areas can be improved. By looking into various elements of the business, like building structure, machinery, and practices, these audits highlight potential changes and suggest how they can be implemented.
Regular energy surveys are the best way to gain insight into a company and continually improve the way it operates. They can also be used to measure the progress of recently implemented changes by keeping a record of how much energy has been saved.
The burning of fossil fuels contributes directly to climate change, so it is worth looking into alternative energy sources that are cleaner and, in some cases, cheaper. Switching to a green energy source might not support the whole manufacturing process if an especially large amount of electricity is required, but energy surveys can be used to determine the areas that can be improved by implementing renewable energy.
Some energy sources have little to no carbon emissions, such as wind, solar, and hydropower. Companies can talk to their energy suppliers about what green options they have available, or they can install their own renewable energy equipment if their budget and level of commitment allows.
Waste is just as big a threat to the planet as energy consumption, and that includes surplus materials or by-products left over from the manufacturing process. If they’re not hazardous, a lot of these materials can be recycled within the company that produced them or exported to another business to use — for example, cardboard can be used to make new packaging, or sent out to farms to use as bedding. Recycling leftover materials also does away with a lot of disposal costs and can be good PR if customers and potential partners are looking for companies that are eco-friendly.
Another good way to reduce waste is to make sure equipment is cleaned, maintained, and updated regularly to the latest model. The most recently released machinery is usually a lot more efficiently powered, and their precision reduces waste by failing to produce faulty products. Product designs should be audited regularly, as well — could any parts of their manufacture be made more efficient to minimise waste?
The most effective energy surveys pay close attention to how environmentally-friendly alternatives can be incorporated into every part of a business, no matter how insignificant a change may seem.
Some of the easier and more immediately gratifying ways to reduce energy consumption include swapping to energy-saving lightbulbs, which are cheaper to power and have a longer lifespan than incandescent bulbs. CFL and LED bulbs cost slightly more upfront but save energy straight away, offering an almost instant return on investment, and don’t have to be changed as often.
Heat is another common way to waste energy and can be combated by optimising a building’s insulation. Reducing the temperature by as little as 1% can make a big difference to a heating bill so taking steps to retain heat and prevent its escape can save a lot of money.
With the threat of global warming, it’s important to make sure that we all do our part to make our manufacturing operations as eco-friendly as possible. Businesses can use energy surveys to figure out which green solutions work for them and start small by making everyday changes building up to major investments. | <urn:uuid:4b320cbb-8b4e-4978-830c-bcc3fa159531> | CC-MAIN-2024-38 | https://www.financederivative.com/how-to-make-your-manufacturing-operations-more-eco-friendly/?amp=1 | 2024-09-08T18:44:20Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651017.43/warc/CC-MAIN-20240908181815-20240908211815-00413.warc.gz | en | 0.963715 | 969 | 3.171875 | 3 |
Cable broadband networks are critical infrastructure that connect the vast majority of the United States. In fact, 99% of homes in cable's footprint have access to gigabit speeds. But what many don't realize is how crucial the "last mile" portion of those networks is to connecting the millions of Americans who benefit from broadband today, and those who will be reached in the near future.
Digging deeper: With federal broadband funding efforts on the horizon, and as cable providers continue to build out their networks, it's worth understanding how the "last mile" works and how it serves as a crucial link to the digital world.
What Is the Last Mile?
Last-mile infrastructure is the physical part of a broadband network that serves as the final leg connecting the provider’s network to a home or building – the hookup between the larger backbone of the network to the end point. To put it more simply, the last mile is where data bridges from infrastructure to device.
- This portion is the most visible in neighborhoods and residential areas, as it acts as the link between the premises and the outside world – just like a driveway connects a home to a main road, where all the activity and bustle are coming from.
- Last-mile infrastructure is closest to the end user or device, and is the key to enabling next-generation speeds, including 10G and its suite of technologies.
- Here, DOCSIS technologies are used to increase connection speeds by updating equipment and devices, eliminating the need to lay new cable.
- This is where the network fans out to reach all the endpoints (e.g. homes, devices, businesses).
Building the Network
The last mile is one of the most expensive and hardest parts of the network to build and operate. This is especially true in rural and remote areas where populations are low, homes and buildings are far apart, and tough terrain poses difficult challenges.
Ingenuity needed: Cable providers use innovative solutions to reach far-out and unserved communities most in need of broadband connectivity:
- Cable providers, including Midco and Mediacom, use fixed wireless to reach homes in the mountains, or farmers who work in grain elevators or water towers, by relying on an existing broadband network to connect relay towers, and extending a signal miles beyond where the physical wires stop.
- Over the past few years, Charter, Comcast, Cox, and GCI have been expanding 5G networks (the fifth-generation of mobile networks and an upgrade to the bandwidth and speeds available on mobile data networks) to give people faster speeds and better coverage.
Connecting Every American
Cable providers have poured billions of dollars into building out broadband networks to nearly every corner of the country. At the same time, they have forged a myriad of partnerships with local governments, private businesses, nonprofits, and community organizations to raise funds for broadband buildouts in hard-to-reach areas.
As federal and state governments distribute billions in federal funds for broadband construction, the cable industry will continue to collaborate and do its part to ensure that their networks can reach every American, down to the last mile, and that everyone has the chance to reap the opportunity broadband offers. | <urn:uuid:3dc3158e-ea25-4765-a53f-114391681e8c> | CC-MAIN-2024-38 | https://www.ncta.com/whats-new/the-last-mile-explained | 2024-09-09T23:47:06Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651164.37/warc/CC-MAIN-20240909233606-20240910023606-00313.warc.gz | en | 0.948894 | 652 | 2.875 | 3 |
An Access Control List (ACL), is any mechanism for implementing access control on an operating system, file system, directory service, or other software. Access control lists (ACLs) are implemented into the basic operating system architecture of Microsoft’s Windows operating system platforms and are used to control access to objects in Active Directory and files on NTFS volumes.
An access control list is basically a list attached to an object specifying which security principals (users, groups, computers, and so on) are allowed to access the object and what level of access they are allowed to have. In Windows 2000, ACLs are more properly called discretionary access control lists (DACLs) because they can be configured and managed by administrators at their discretion.
There is also another type of ACL in Windows called a system access control list (SACL), which is used to control the generation of audit messages when object auditing has been configured on a file system.
System Access Control List (SACL)
A system access control list (SACL) enables administrators to log attempts to access a secured object. Each ACE specifies the types of access attempts by a specified trustee that cause the system to generate a record in the security event log. An ACE in a SACL can generate audit records when an access attempt fails, when it succeeds, or both. For more information about SACLs, see Audit Generation and SACL Access Right.
Access control lists are natively implemented on some UNIX operating system platforms such as Solaris (which first implemented ACLs in version 2.5.1) and are also available as third-party software for other UNIX platforms.
Traditionally access control on UNIX file systems was managed using the chmod (change mode) command, but this offered only limited or coarse-grained control of file permissions and provided no flexibility for configuring unique sets of access permissions for particular users or groups.
To set and display access control lists on Solaris, use the setfacl and getfacl commands. Other UNIX packages and add-ons may use different commands such as setacl and getacl. | <urn:uuid:3e90899b-5138-40dd-8660-5a5b7a73ff53> | CC-MAIN-2024-38 | https://networkencyclopedia.com/access-control-list-acl/ | 2024-09-11T05:37:37Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651344.44/warc/CC-MAIN-20240911052223-20240911082223-00213.warc.gz | en | 0.899602 | 442 | 3.28125 | 3 |
ASCII files are a fundamental component of file storage and exchange in computing systems. Using the American Standard Code for Information Interchange (ASCII), these files encode data in a format readable by humans and computers alike. This article delves into the nature of ASCII files, highlighting their features, uses, and how they interact with modern computing technologies.
- What is an ASCII File?
- How ASCII Files Work
- Advantages of ASCII Files
- Limitations of ASCII Files
- Working with ASCII Files
- ASCII Files in Modern Computing
1. What is an ASCII File?
An ASCII file is a type of text file encoded using the American Standard Code for Information Interchange (ASCII). This encoding scheme uses a series of 7-bit integers to represent characters, making the file easily readable by humans and simple software tools. ASCII files are primarily used for storing plain text and involve characters from the ASCII character set, which includes English letters, numbers, common symbols, and control characters that instruct text-handling utilities.
ASCII files are characterized by their simplicity and portability, allowing them to be used across different computing environments without loss of data integrity. They can be created, opened, and edited with basic text editors such as Notepad in Windows, TextEdit in macOS set to plain text mode, and various UNIX-based editors like Vim or Nano.
You can create and edit an ASCII file using Microsoft Notepad. If you save it with the extension .txt, it is usually referred to as a text file, but you can save it with other extensions such as .bat or .cmd for batch files, and .ini for initialization files.
ASCII files are often used for logon scripts and other batch files. Another common use is storing configuration information for operating systems and applications. Microsoft Windows 3.1 platforms used ASCII files for storing system and software configuration settings. These configuration files have the extension .ini and are referred to as INI files. More recent Windows operating systems save this information in the registry. Most versions of the UNIX operating system still store their configuration settings in ASCII files.
Because ASCII files contain unformatted text, they can be read and understood by any platform and are useful for sharing information between platforms and between applications. Shared information is often saved in a comma-delimited text file (a .csv file), in which the fields of each record are separated by commas. Microsoft Exchange Server can export mailbox properties and other information in .csv files, which can then be imported into spreadsheet programs such as Microsoft Excel for manipulation and analysis.
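As a sketch of that data-exchange pattern, the short Python script below writes and reads a comma-delimited ASCII file; the file name, field names, and values are invented for the example.

```python
import csv

# Write mailbox-style records to a comma-delimited ASCII file.
# The file name, field names, and values are made up for this example.
rows = [
    ["alias", "display_name", "mailbox_size_kb"],
    ["jsmith", "John Smith", "10240"],
    ["mjones", "Mary Jones", "8192"],
]

with open("mailboxes.csv", "w", newline="", encoding="ascii") as f:
    csv.writer(f).writerows(rows)

# Read the same file back; a spreadsheet program could open it directly.
with open("mailboxes.csv", newline="", encoding="ascii") as f:
    for record in csv.reader(f):
        print(record)
```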
Comparison with Binary Files
Unlike binary files, which contain data in a format that requires specific software to interpret (such as executable programs or image files), ASCII files contain only readable characters. This difference makes ASCII files ideal for scripts, configuration files, and log files where readability and manual editability are advantageous. Binary files, on the other hand, are better suited for storing complex data like compiled programs or high-resolution images where the file’s content is not intended to be directly read by humans.
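One practical way to see this distinction is to inspect raw bytes: every byte of a pure ASCII file falls in the 0-127 range, while binary formats routinely use all 256 byte values. A minimal heuristic check in Python might look like the following (the file names are placeholders):

```python
def looks_like_ascii(path, sample_size=1024):
    """Heuristic: True if the first bytes of the file are all 7-bit ASCII."""
    with open(path, "rb") as f:   # read raw bytes, no text decoding
        sample = f.read(sample_size)
    return all(byte < 128 for byte in sample)

# "notes.txt" and "photo.png" are placeholder paths.
print(looks_like_ascii("notes.txt"))   # a plain-text file: expected True
print(looks_like_ascii("photo.png"))  # a binary image file: expected False
```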
2. How ASCII Files Work
Each character in an ASCII file is represented by a specific 7-bit code, ranging from 0 to 127. These codes include both printable characters, such as letters and symbols, and control characters, like newline or carriage return, which are used to format text. When an ASCII file is created or edited, each keystroke is converted into the corresponding ASCII code and stored as a binary number. For instance, the uppercase letter ‘A’ is represented by the number 65, which is stored as 1000001 in binary.
This method of encoding makes ASCII files extremely lightweight and fast to process, which is why they are often used for programming and data logging purposes where quick access and simplicity are needed.
ASCII character set
The ASCII character set is the most common compatible subset of character sets for English-language text files, and is generally assumed to be the default file format in many situations. It covers American English, but for the British Pound sign, the Euro sign, or characters used outside English, a richer character set must be used. In many systems, this is chosen based on the default locale setting on the computer it is read on. Prior to UTF-8, this was traditionally single-byte encodings (such as ISO-8859-1 through ISO-8859-16) for European languages and wide character encodings for Asian languages.
Common Uses and ApplicationsASCII files have several common uses in computing:
- Most programming languages use ASCII for source code files, which can be compiled or interpreted by computers.
- Many applications use ASCII files for configuration. These files store settings in a simple format that can be edited by system administrators or by the users themselves.
- ASCII is a common format for exporting data from applications so that it can be imported into other programs or used for data analysis.
- ASCII is used for log files generated by systems and applications because they are easily readable by humans and can be processed by simple scripts.
3. Advantages of ASCII Files
Compatibility and Portability
One of the foremost advantages of ASCII files is their high compatibility and portability across different systems and platforms. ASCII, being a universally recognized standard, ensures that files can be opened and read on almost any computer without the need for special software or conversions. This universal compatibility stems from the fact that ASCII was designed as a common denominator for character encoding, making it a reliable format for file exchange and data storage across diverse computing environments.
- Cross-Platform Use: ASCII files can be transferred between Windows, macOS, Linux, and other operating systems without any loss of information.
- Legacy Support: Many older systems and software applications still in use today rely on ASCII, making it essential for maintaining backward compatibility.
Ease of Use and Accessibility
ASCII files are incredibly user-friendly, primarily because they contain only readable text. This simplicity allows users to create, edit, and manage these files with basic text-editing software, without the need for specialized tools.
- Simple Editing: Files can be edited with simple text editors, such as Notepad, Vim, or even within command-line interfaces.
- Transparency and Debugging: The clear, readable format of ASCII files makes them ideal for use in settings where transparency and ease of debugging are important. Programmers and system administrators often prefer ASCII for logs, configuration files, and scripting because the contents are directly accessible and modifiable.
4. Limitations of ASCII Files
While ASCII is excellent for encoding the basic English alphabet and common symbols, it falls short in a globalized digital environment where multiple languages and special characters are common.
- Limited Character Set: ASCII can encode only 127 characters, which covers the English alphabet, basic punctuation, and control characters but excludes accents, non-Latin alphabets, and other linguistic symbols necessary for international use.
- Inadequacy for Localization: The limited character set makes ASCII impractical for localizing software or content in languages other than English, restricting its use in global applications.
Although ASCII’s simplicity offers several advantages, it can also lead to inefficiencies, especially in contexts where data density and encoding richness are required.
- Data Density: ASCII’s use of a full byte (typically padded to 8 bits from its original 7 bits) for each character can be inefficient compared to more modern encoding schemes like UTF-8, which vary the number of bytes per character based on their need.
- Lack of Rich Formatting: ASCII files cannot embed rich formatting options, such as fonts, colors, or styles, which are often required in documents. This necessitates the use of different file formats for more complex content, limiting ASCII’s utility to plain text scenarios.
5. Working with ASCII Files
Creating and Editing ASCII Files
Creating and managing ASCII files is straightforward, thanks to their simplicity and the wide availability of tools that can handle plain text. Here’s how to work with these files:
- Creating ASCII Files: You can create an ASCII file with any text editor by simply opening a new document, typing your text, and saving it with the appropriate file extension, usually
. When saving the document, you should ensure that the encoding is set to ASCII or plain text. - Editing ASCII Files: Editing an ASCII file is as simple as opening it in a text editor, making your changes, and saving them. Because ASCII files are plain text, there’s no need to worry about formatting or other complexities that come with more advanced file types.
Tools and Techniques
Several tools and techniques can enhance your experience working with ASCII files, especially in a development or administration context:
- Text Editors: Simple text editors like Notepad (Windows), TextEdit (Mac, in plain text mode), or gedit (Linux) are perfect for dealing with ASCII files. More advanced editors like Sublime Text, Atom, or Vim offer additional features like syntax highlighting and automated formatting, which can be useful for coding or scripting.
- Command-Line Tools: Command-line tools such as
on Unix-like systems or their equivalents in Windows are powerful for viewing, modifying, or searching content within ASCII files without a graphical interface. - Programming Libraries: For automated manipulation or generation of ASCII files, programming libraries in languages like Python (
modules), Java (java.io
package), or C# (System.IO
namespace) provide comprehensive functionalities.
6. ASCII Files in Modern Computing
Role in Programming and Data Exchange
ASCII files continue to play a crucial role in the world of programming and data exchange due to their simplicity and wide compatibility:
- Programming: ASCII is extensively used for writing source code. Most programming languages are designed to interact seamlessly with ASCII, which serves as the backbone for script files, configuration files, and source codes.
- Data Exchange: ASCII files are commonly used for data logs, configuration settings, and inter-system data exchange. Their readability makes them particularly useful for transferring data between systems that may not share the same software environment.
Transition to Unicode and UTF-8
While ASCII’s simplicity and efficiency have made it a longstanding standard, the global digital environment requires a more inclusive character encoding scheme:
- Limitations of ASCII: ASCII’s limited set of characters is insufficient for global use, prompting the development of more comprehensive encoding systems.
- Introduction of Unicode and UTF-8: Unicode was introduced to cater to a diverse array of characters and symbols across different languages and scripts. UTF-8, a variable-length character encoding for Unicode, is particularly effective because it encompasses all possible characters and symbols while remaining backward compatible with ASCII.
- Unicode Explained by Jukka K. Korpela – An in-depth guide to understanding and using Unicode and UTF-8 in various computing applications.
- Unicode® 15.1.0 by American National Standards Institute – Provides a detailed overview of ASCII standards. | <urn:uuid:082e7149-08c0-473f-8194-58fc58c9bee9> | CC-MAIN-2024-38 | https://networkencyclopedia.com/ascii-file/ | 2024-09-12T12:52:11Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651457.35/warc/CC-MAIN-20240912110742-20240912140742-00113.warc.gz | en | 0.890688 | 2,262 | 3.78125 | 4 |
Ensure data authenticity through proper checksum verification
The above code fetches data from an external server using the Net::HTTP library in Ruby. The function
sends a GET request to the specified URL and returns the response body.
However, this code is vulnerable due to insufficient data authenticity validation. It does not verify the integrity of the data received from the external server.
This means that if the data is tampered with during transmission or if the server is compromised, the application would still accept and use the corrupted data. This could lead to various security issues, such as data corruption, data leakage, or execution of malicious code.
A checksum verification should be implemented to ensure the data received is exactly the same as the data sent by the server. This involves generating a checksum or hash of the data at the server side, sending this along with the data, and then comparing this with the checksum generated at the client side. If the checksums do not match, the data should be rejected as it indicates that the data has been altered.
The updated code now includes a checksum verification process to validate the integrity of resources loaded from external servers.
function now generates a checksum for the resource using the SHA256 algorithm before sending the request. This checksum is then compared with a trusted checksum retrieved from a trusted source using the
If the checksums match, the resource is considered valid and the function returns the resource data. If the checksums do not match, the function raises an error and rejects the resource, ensuring that only resources with validated integrity are accepted by the application.
This approach significantly improves the security of the application by preventing the loading of tampered or corrupted resources from external servers.
In addition to this, it is recommended to use secure protocols such as HTTPS to ensure the integrity of the data in transit and to regularly update and patch the application and its dependencies to address any security vulnerabilities. | <urn:uuid:5f6bf193-bbdd-42e0-8d8a-71e13e4e67d1> | CC-MAIN-2024-38 | https://help.fluidattacks.com/portal/en/kb/articles/criteria-fixes-ruby-355 | 2024-09-14T21:00:45Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651580.74/warc/CC-MAIN-20240914193334-20240914223334-00813.warc.gz | en | 0.901025 | 391 | 3.421875 | 3 |
Implementation of robust security controls for fingerprint identification
In the above Ruby on Rails code, the
method in the
is used to authenticate a user based on their username only. The
method is used to find the user in the database, and if a match is found, the user is logged in and a session is created for the user with their
This code is vulnerable because it does not require any form of password or fingerprint authentication. An attacker can easily bypass security controls just by knowing the username of a user. This can lead to unauthorized access to the application, potentially leading to data leakage, data manipulation, and other security breaches.
The updated code now includes a fingerprint authentication mechanism in the login process. When a user attempts to log in, the system will not only check the username but also verify the user's fingerprint.
method of the
class, we added a call to
. This method is expected to return
if the provided fingerprint matches the one stored in the database for the user, and
method should be implemented in the
model. The placeholder implementation provided here simply checks if the provided fingerprint matches the one stored in the
attribute of the
instance. In a real-world application, this method should use a secure and reliable fingerprint recognition library or API to verify the fingerprint.
If the username is found and the fingerprint is verified, the user is logged in and redirected to the root URL. If either the username is not found or the fingerprint is not verified, an error message is displayed and the login form is re-rendered.
This solution helps to prevent security control bypass by ensuring that the user is who they claim to be, based on their unique fingerprint. It also helps to prevent unauthorized access to the system. | <urn:uuid:125258a7-cd0b-4440-b68b-b76c46ef1519> | CC-MAIN-2024-38 | https://help.fluidattacks.com/portal/en/kb/articles/criteria-fixes-ruby-436 | 2024-09-14T20:29:49Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651580.74/warc/CC-MAIN-20240914193334-20240914223334-00813.warc.gz | en | 0.896599 | 355 | 2.75 | 3 |
Blockchains are a nebulous concept. You’ll have heard of Bitcoin already, and may understand that a blockchain, in the technical sense, is the foundation. Blockchains don’t just encompass this breakthrough technology, but a spectrum of business models, organisational structures and radical ideas.
Bitcoin is the first truly successful blockchain application, a digital currency, and it occupies an important space within this spectrum. Its technological capabilities are matched with a carefully balanced incentive structure. It also fosters a community around its open-source development, and third-party businesses creating applications and integrations.
Blockchains are built on a history of security research
Far from being an unprecedented breakthrough, with unintended consequences, the blockchain technology stack is a culmination of decades of cryptography and security research.
The 1970s cryptography breakthrough of the Merkle tree and the distributed hash tables of the 1990s combine to create autonomy, fault tolerance and scalability for distributed systems. They’re the tools that built famous decentralised applications like BitTorrent, Napster and Freenet.
Bitcoin’s most impressive contribution is recognising the synergies between this field of distributed communications and file sharing systems, and digital currencies, which had seen many false starts prior to Bitcoin’s success since 2009.
The key was Hashcash, a system proposed in 1997 to limit and suppress email spam and denial-of-service attacks. Hashcash is an algorithm that requires the sacrifice of processing power as a security mechanism. This proof-of-work creates the incentive structure and network verification that now powers cryptocurrencies.
The final step is the addition of smart contracts to the blockchain stack, a name coined by Nick Szabo as early as 1993. Smart contracts are algorithmic; a type of self-executing code which enables more complex asset transfer and the automated exchange of rights. These are the building blocks of a complete programming language, and the more advanced blockchain applications such as those envisioned by Ethereum.
What we get is a set of security tools that are very good at coordination between mutually unknown actors and secure data or value transfer. We think of blockchains as having four key characteristics to this end: they’re cryptography-based, distributed, peer to peer, and, in many cases, open source.
Innovative blockchain applications in security
As with many open source movements, we’re seeing the different stages of the blockchain ecosystem build out in waves: first the core protocols (Bitcoin, Ethereum and other platforms); now middleware, from Consensys; and then applications.
There’s been some degree of consolidation on blockchain development around Bitcoin and Ethereum over the past year, although Ethereum isn’t the second biggest token by market capitalisation.
Bitcoin’s first mover-advantage and financial specialisation has granted it momentum and early market interest, while Ethereum’s Turing-complete programming capabilities enable many other truly disruptive opportunities.
Guardtime’s security solution runs on a private blockchain, and features a cross-vertical solution replacing RSA digital signatures: its KSI (Keyless Signature Infrastructure), which uses only hash-function cryptography for signing. This prepares digital identity systems for the security necessities of the future – where quantum computers make factorisation problems like those that RSA relies on trivial.
Inter Planetary File System (IPFS) is a new core internet protocol that is designed to supplant the Hypertext Transfer Protocol (HTTP). IPFS can address some of the most difficult security challenges that the HTTP-based internet faces: centralised hosting and distribution, and weak application of content-signing protection.
Using context-driven storage, self-certification and an incentivised blockchain mechanism, IPFS becomes a secure, permanent web, resilient against server failure.
MIT’s Enigma, based on the Bitcoin blockchain, enables any code to be run on encrypted data. In its model, data can be stored, shared and analysed without being fully revealed to a single third party, enabling trustless sharing of data and distributed computation without resorting to full transparency. This grants blockchains, even in a permissionless setup, access to the full spectrum of data visibility from fully private to public.
Colony.io is a decentralised schema for business based on the Ethereum blockchain. The namesake colonies present a flat organisational structure with security, incentives, flexibility and resilience built in, capable of supporting businesses in many industries.
Tokens are awarded to users based on fulfilled tasks and organisational decisions are made based on consensus between contributors. These tokens represent equity in the colony business and are tradable for cash.
The whole system is run on a tokenised blockchain, with an identity and reputation system and voting system complementing its blockchain. It represents a culmination of different opportunities implied by blockchains’ features.
It’s a secure business model through its adaptability, digital identities, and democratic incentivisation. In the words of Alain de Botton, “It’s just the sort of thing that proves capitalism can be both moral and helpful, as well as profit generating.”
A holistic view of security
When we think of that combination of features, we can see blockchains as a broader way of looking at security. Not only traditional endpoint protection, but a holistic approach that includes user identity security, transaction and communication infrastructure security, business security through transparency and audit, and security from malicious insiders, compromised nodes or server failure. These are all addressable with blockchains because security and privacy are central to the protocol, and not an external consideration.
A holistic view is necessary to maintain today’s connected world. The past decade of digital transformation across industries has put our lives and livelihoods in data.
Where individuals, businesses and governments are constantly locked in a battle against bugs, fraud and malicious actors, blockchains propose an alternative.
The paradigm shift blockchains represent can offer true data integrity, advanced digital identity systems and a new way for business to offer transparency for audit alongside access for third parties. | <urn:uuid:4f59d943-df50-45af-bc12-ce405693294b> | CC-MAIN-2024-38 | https://www.information-age.com/how-blockchains-are-redefining-cyber-security-468/ | 2024-09-14T20:03:48Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651580.74/warc/CC-MAIN-20240914193334-20240914223334-00813.warc.gz | en | 0.925538 | 1,232 | 3.046875 | 3 |
Stuxnet is known as one of the most malicious pieces of software ever created. This cyber threat was discovered in 2010, and it is believed to be the first computer virus ever created specifically to target industrial systems.
It is believed that Stuxnet was designed by government agencies like the NSA or CIA, or perhaps even Mossad or some other secret agency within Israel, to attack nuclear plants, power grids, oil refineries and other industrial infrastructure.
There are many theories about who might have created this malicious software and why they would do such a thing. And while no one knows for sure who made it or why there’s quite a bit that we do know about Stuxnet. In this article, we’ll explore what exactly Stuxnet is and how it works.
What Is Stuxnet?
Stuxnet is a computer worm, or malicious software, that appears to have been created to target industrial systems. The worm was discovered in 2010. It is believed to have been designed to specifically attack systems used in controlling hydroelectric or nuclear power plants, industrial facilities, or other industrial systems.
Once the worm infects a computer that controls an industrial system, it can reprogram the system to damage or destroy the equipment. When the worm is activated, it can severely damage industrial systems, cause lethal accidents, and even kill people.
What did Stuxnet do?
Experts who studied the code for this worm believe that it was designed to shut down Iranian nuclear facilities. They believe that it was programmed to mess with the controls for centrifuges used in uranium enrichment at the Natanz nuclear facility in Iran.
The malware was specifically programmed to shut down these centrifuges by spinning them out of control, causing them to destroy themselves. Industrial control systems are programmed to respond very slowly to manual inputs, so it would take a long time for operators to manually shut these centrifuges down.
This would allow Stuxnet to take over the system and spin the centrifuges so fast that they would explode. The malware was programmed to replicate itself so that it would spread from one computer system to others until it reached the entire facility.
Experts believe that the worm was created to be programmed to damage the facility or shut it down for a couple of weeks.
How Does Stuxnet Work?
As stated earlier, Stuxnet is believed to have been designed to target computers that control industrial systems. Industrial control systems are designed to be used in systems like power plants, factories, and other industrial settings where computers are used to control equipment.
Stuxnet was specifically programmed to modify the way that these industrial control systems respond to input. This worm was designed to reprogram these systems to respond much more quickly to input from operators or other sources and then to send a longer output after the input is done.
For example, an operator might type in a code to start an industrial process. Stuxnet would force the system to respond as if it had received the code almost instantly, but then it would respond almost as slowly as normal when the operator hit “stop”.
Is Stuxnet Still a Threat?
Yes, Stuxnet is still a threat. While the first version of Stuxnet was discovered in 2010, a new variation of this worm was discovered in 2018. The newer version of Stuxnet was programmed to go after the same industrial control systems as the first version.
Computer security experts believe that the creators of Stuxnet are still trying to find new systems to infiltrate and new ways to damage industrial systems. While it is not clear who is behind this malware, whoever created it likely wants to use it to attack industrial systems in other countries.
How to prevent Stuxnet
- Stuxnet cannot be removed from computers that have already been infected.
- However, computer users can protect themselves against it by installing software designed to prevent infections.
- Industrial control systems should be set up in a way that makes them harder to hack.
- Industrial control systems connected to the internet should be set up to be as secure as possible.
- Stuxnet only infected computers that were connected to the internet and that did not have very good security.
- Computers that are not connected to a network and do not have internet access cannot be infected by Stuxnet.
Final Words: Stay Vigilant
Computer security threats are growing, and new threats like Stuxnet appear every day. It is important to update the security software on your computer and to avoid clicking on links or downloading files from unfamiliar sources.
Every computer user should be aware of the threats that Stuxnet and other malicious software pose, and they should take steps to protect themselves and their computers.
ABOUT THE AUTHOR
I am here to share my knowledge and experience in the field of networking with the goal being – “The more you share, the more you learn.”
I am a biotechnologist by qualification and a Network Enthusiast by interest. I developed interest in networking being in the company of a passionate Network Professional, my husband.
I am a strong believer of the fact that “learning is a constant process of discovering yourself.”
– Rashmi Bhardwaj (Author/Editor) | <urn:uuid:19591e4b-8b9e-470f-a9ba-8df7cfa2ccd5> | CC-MAIN-2024-38 | https://ipwithease.com/what-is-stuxnet-what-it-is-and-how-it-works/ | 2024-09-19T19:48:38Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652067.20/warc/CC-MAIN-20240919194038-20240919224038-00413.warc.gz | en | 0.969457 | 1,079 | 2.984375 | 3 |
End-to-End Cyber Security for IoT Ecosystems To Protect IoT Devices From Cyber Threats
In the IoT Ecosystem, Not too long ago in history, the idea that people and objects could be inter-connected would be considered absurd. Since the invention of the internet, that absurdity has since morphed into possibilities.
Today, not only are people able to stay connected to other people across the world, they are able to stay connected to physical objects. These are everyday devices such as lights, TVs, coffee machines, cars, air conditioners, etc.
This connection is what is termed the Internet of Things (IoT). A recent study conducted by Swedish professors estimates that by the year 2020 there would be over 20 billion IoT devices. As the connection between people and their devices grows, so does the size of the IoT ecosystem.
Understanding the IoT Ecosystem
The ecosystem consists
of five major components, all of which are connected and function as one. Each
component on its own poses a unique security threat. In order to stay
proactive, one has to protect the entire ecosystem from end to end.
- Network – this is the means by which the ecosystem maintains constant communication from the device to the user.
- Service – this is the software component
- Device – the physical (hardware) component that performs the instructions inputted by the user
- User – the assumed owner of the ecosystem
- Cloud – where data is processed and stored
Security Threats in IoT Ecosystem
Like everything else connected to the internet, there is a concern for security. There have been multiple cases of intruders hacking into devices to steal personal data or to disable security alarms while homeowners were away to allow for smooth break-ins. The idea that everyday personal items could be used against the owner through cyber-attacks poses serious domestic and possibly national security threats.
One proven effective way of ensuring cyber-security is via a multi-layered solution. The same concept could be applied to the IoT ecosystem. Reputable professionals who specialize in IoT Cyber-security solutions combine Embedded Integrity Verification (EIV) with Real-time IoT Event Monitoring System (RIEMS). Both EIV and RIEMS are stand-alone solutions. The EIV is integrated into devices which are connected to the network while the RIEMS provides front-end monitoring ability via an interactive dashboard for the administrator.
Requirements for Ensuring Security
While the application
of cyber-security solutions provides protection against cyber-attacks, there
are security measures that need to be taken to ensure the integrity of the IoT
- Network security – a strong firewall should be installed to protect the network and cloud system
- Data integrity – ensure end-to-end data security by restricting access to authorized personnel only
- Device protection – devices should have end-to-end multi-layer encryption and tamper-free detection system.
- Penetration testing – continuously conduct routine tests to simulate attacks. This form of testing exploits vulnerabilities in the entire IoT ecosystem.
- Activity logging – helps in keeping a record of all activities in the ecosystem from who logged in to who did what. Logs come in handy during investigations in case of any attacks.
As mobile and internet technology advances, so also will IoT devices. The only way to ensure constant protection of data and property is to constantly anticipate attacks by upgrading the security of IoT ecosystem.
This post End-to-End Cyber Security for IoT Ecosystems To Protect IoT Devices From Cyber Threats originally appeared on GB Hackers. | <urn:uuid:58d1abaf-1a2b-4a7c-b194-e9ca60c58f89> | CC-MAIN-2024-38 | https://www.cybercureme.com/end-to-end-cyber-security-for-iot-ecosystems-to-protect-iot-devices-from-cyber-threats/ | 2024-09-21T03:04:54Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701427996.97/warc/CC-MAIN-20240921015054-20240921045054-00313.warc.gz | en | 0.93494 | 722 | 2.828125 | 3 |
REST API stands for Representational State Transfer Application Programming Interface. It is an architectural style for designing networked applications. REST is an approach that uses HTTP protocols to enable communication between different systems over the internet.
In a REST API, resources are identified by unique URLs (Uniform Resource Locators), and the API provides a set of operations or methods to interact with these resources. These operations include retrieving, creating, updating, and deleting resources.
REST APIs are stateless, meaning that each request from a client to the server contains all the necessary information to process that request. The server does not store any information about the client’s previous requests.
REST APIs are widely used in web development and are the foundation of many modern web services and applications. They provide a flexible and scalable way to expose and consume data and functionality over the internet.
What is PKI?
PKI stands for Public Key Infrastructure. It is a system of technologies, policies, and procedures used to manage digital certificates and public-private key pairs. PKI provides a secure way to authenticate the identity of individuals, devices, and organizations in a networked environment.
In a PKI, a trusted third-party entity called a Certificate Authority (CA) issues digital certificates that bind a public key to a specific entity. These certificates are used to verify the authenticity and integrity of digital communications and transactions.
The main components of a PKI include:
1. Certificate Authority (CA): A trusted entity that issues and manages digital certificates.
2. Public Key: A cryptographic key that is publicly shared and used for encryption and verifying digital signatures.
3. Private Key: A cryptographic key that is kept secret and used for decryption and creating digital signatures.
4. Digital Certificate: A digitally signed document that binds a public key to an entity’s identity. It contains information such as the entity’s name, public key, and the CA’s digital signature.
5. Certificate Revocation: The process of invalidating a digital certificate before its expiration date. This can happen if the private key is compromised or if the entity’s information changes.
PKI is widely used in various applications, including secure email communication, secure web browsing (HTTPS), digital signatures, and secure access to networks and systems. It provides a foundation for establishing trust and ensuring the confidentiality, integrity, and authenticity of digital communications. | <urn:uuid:2b95e299-02ed-4c8d-9cbb-cacc22c92a90> | CC-MAIN-2024-38 | https://celestix.com/docs/what-is-rest-api/ | 2024-09-07T17:28:55Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650898.24/warc/CC-MAIN-20240907162417-20240907192417-00613.warc.gz | en | 0.891304 | 496 | 3.90625 | 4 |
Growing up isn’t always easy, especially when you’re a young person experiencing so many things for the first time. Not only are kids navigating the trials and tribulations of school, they’re also simultaneously juggling family, social engagements, and their own personal lives.
Consequently, it’s not uncommon for this balancing act to take a toll on a student’s mental health. And if things go from bad to worse, they may even start considering taking their own life.
As a school district, it’s your responsibility to protect students from harm in every shape and form — suicidal behavior included. But exactly how prevalent is student suicide? How many students are at risk? And more importantly, what can staff do to prevent suicide in their school district?
In this blog, we’ll answer those questions and tell you what you need to know about youth suicide prevention.
The first thing you need to understand about youth suicide is how it differs from suicidal ideation.
Striclty speaking, suicidal ideation is the mental process that precedes an actual suicide attempt. There are two types of suicidal ideation you should be aware of:
In any case, it’s important to get ahead of teen suicide before suicidal ideation devolves into an actual attempt. To do this, let’s take a look at the facts.
When you look at the data, it’s plain to see that suicidal behavior is an emerging crisis among American youth.
In truth, there’s no way to blame a single risk factor for the increase in youth suicide. However, it’s clear that the pandemic had a significant impact on student mental health.
CDC data indicates that in the first eight months of the pandemic alone, the number of mental health emergencies — including self-harm, suicidal behavior, and depressive episodes — increased by nearly 25% for children aged 5 to 11 and nearly a third for those 12 to 17.
Pandemic-related school closures disrupted stability in the lives of many students. Suddenly, they faced social isolation, more time around their families, and more time dealing with the potential troubles therein.
Verywell reports that the suicide rate is four times higher for males than females, with male deaths making up almost 80% of all suicide deaths in the United States. However, females attempt suicide three times as often as males.
According to the CDC, lesbian, gay, and bisexual kids are about four times more likely to attempt suicide than heterosexual kids. A Black student is also more likely to make a suicide attempt than their Hispanic or white peers.
Youth suicide doesn’t just happen. More often than not, a young person considers taking their own life after experiencing one or more stressors. Although by no means is this an exhaustive list, here are some potential risk factors that could lead to suicidal behavior:
As difficult as it might be to imagine, there may be a day when your school district has to manage the aftermath of a student suicide. It’s a delicate situation, which means you want your staff to know exactly how to respond.
First, understand that a completed suicide attempt will likely have ripple effects across your school district. Exposure to teen suicide has been shown to increase the risk of suicidal ideation in the rest of the student body, especially for those close to the victim. Therefore, it’s crucial you provide access to the necessary grief management services on campus.
For instance, when you inform the community that a student has passed, be sure to also inform them where they can seek help. Offer them sessions with a mental health professional or school psychologist with the skills to help them through this process.
In the weeks following, provide suicide awareness education to all staff and students. Discuss typical warning signs of suicide so that they know how to spot them. Make sure you’re using careful language to describe suicide and take care not to trigger youth who may be sensitive to the topic.
The best thing your school district can do to curb the crisis is to start proactively preventing suicide. To do this, you’ll need to know a few suicide prevention strategies.
Let’s examine that last strategy further. How does monitoring the cloud help with suicide prevention? The short answer is simple: a cloud monitoring solution like ManagedMethods can help you identify risk signals before it’s too late.
With this type of platform, you can automatically detect when students are discussing suicide or self-harm, as well as stressors like cyberbullying, violence, or substance abuse. Whether it be in a Google Chat, Doc, or a Onedrive folder, cloud monitoring can pick up on a risk factor and alert your designated staff member. In turn, you can investigate an incident with speed and provide students the help they deserve. | <urn:uuid:5fd84862-251b-46f3-a1e9-7aedd5fab0e8> | CC-MAIN-2024-38 | https://managedmethods.com/blog/understanding-student-suicide-in-schools-prevention-strategies/ | 2024-09-10T05:18:04Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651196.36/warc/CC-MAIN-20240910025651-20240910055651-00413.warc.gz | en | 0.958293 | 996 | 3.109375 | 3 |
In an era where artificial intelligence (AI) plays an increasingly pivotal role across various industries, ensuring the security of AI systems has become a paramount concern. As AI technology continues to advance, developers and organizations must prioritize robust security measures to protect sensitive data, maintain user privacy, and prevent malicious exploitation. Here are essential guidelines for the secure development of AI systems:
1. Data Security:
Encryption: Implement strong encryption protocols to safeguard both stored and transmitted data, preventing unauthorized access.
Access Controls: Enforce strict access controls to restrict system and data access only to authorized personnel or entities.
2. Model Security:
Adversarial Robustness: Design AI models to be resilient against adversarial attacks by validating and enhancing their robustness.
Regular Audits: Conduct frequent security audits to identify vulnerabilities in the AI model and address them promptly.
Data Minimization: Collect and store only the minimum necessary data to accomplish the AI system’s objectives, reducing the risk of privacy breaches.
Anonymization Techniques: Utilize anonymization methods to protect user identities when handling personal data.
4. Secure Development Lifecycle:
Threat Modeling: Perform thorough threat modeling during the design phase to anticipate potential security risks and vulnerabilities.
Code Reviews: Conduct regular code reviews to identify and rectify security is-sues in the source code.
5. Continuous Monitoring:
Anomaly Detection: Implement real-time monitoring and anomaly detection mechanisms to identify unusual behavior that may indicate a security breach.
Logging: Maintain comprehensive logs of system activities for post-incident analysis and forensic investigations.
6. User Education:
Training and Awareness: Educate users and stakeholders about potential security threats and best practices to ensure responsible and secure use of AI systems.
Phishing Awareness: Train users to recognize and report phishing attempts, as social engineering attacks remain a significant threat.
7. Regulatory Compliance:
Stay Informed: Keep abreast of and comply with relevant data protection and privacy regulations to avoid legal implications.
Ethical Considerations: Embed ethical principles into AI system development to ensure responsible and lawful use.
8. Incident Response Planning:
Response Team: Establish a dedicated incident response team equipped to swiftly address and mitigate security incidents.
Post-Incident Analysis: Conduct post-incident analyses to understand the root causes of security breaches and implement preventive measures.
By adhering to these guidelines, developers and organizations can fortify their AI systems against potential threats, fostering a secure and trustworthy AI ecosystem. As AI technology continues to evolve, a proactive and security-centric approach is essential to harness its benefits while mitigating associated risks. | <urn:uuid:56da75cd-48f8-4e70-a31a-f6379500d036> | CC-MAIN-2024-38 | https://www.cybersecurity-insiders.com/guidelines-for-secure-ai-system-development/ | 2024-09-10T05:26:49Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651196.36/warc/CC-MAIN-20240910025651-20240910055651-00413.warc.gz | en | 0.898661 | 546 | 2.890625 | 3 |
Understanding Hash Collisions
A GUID (globally unique identifier) is a 128 bit value that can represent 340,282,366,920,938,463,463,374,607,431,768,211,456 unique values = that’s 340 undecillion. You can see why programmers use GUIDs as unique identifiers. It would take billions upon billions of years to generate two GUIDs that are the same.
A hash is a value that is calculated by running some data (like a GUID) through an algorithm, which produces a checksum (kind of like a digital fingerprint).
There is a common misconception that hashes are unique, or at least, unique enough not to worry about hash collisions (two identical hashes generated from different data) most of the time. It’s easy to see why most would think that – after all, a 32 bit hash, for example, can represent 4,294,967,295 unique values – that’s over 4 billion possibilities.
In the case of a checksum that is used to check the integrity of a file, hashes are extremely reliable. The probability of someone tampering with your file, changing data, or injecting a virus, and still having that file produce the same hash, is astronomically low.
In the case of a GUID, a 32 bit hash algorithm will assign 79,228,162,514,264,337,593,543,950,336 (that’s 79 octillion) GUIDs on average to each and every hash value – that’s just basic math, and why we generally don’t use hashes as a representation of the data they are hashed from.
I recently came across a scenario where, to implement scoped access in our web app, we store up to ~7 million GUIDs in up to ~110 million GUID -> GUID pairings in our caching layer. Because of space constraints, 32 bit GUID hashes were being stored instead of the GUID itself.
The maximum file size for SQL CE is 4 GB, and at first, that seems to be ok. A GUID consumes 16 bytes of space, so 220 million GUIDs uses about 3.2 GB, but include row overhead in SQL CE (6 bytes), column overhead (1 byte), and various other overheads (table overhead, indexing, etc.) the 4GB limit is exceeded.
Storing 4 byte GUID hashes effectively quartered the space requirements, but at the expense of guaranteed hash collisions.
We ended up resolving this by switching to 8-byte hashes, which cut the space used in half, and only yields a hash collisions once in every 5 billion GUIDs, or in terms of our requirements, never.
Thank you for taking a moment to learn about hash collisions. Ready to transform your SCSM experience? View all of the exciting apps Cireson has to offer. | <urn:uuid:6b42dab9-bdd5-469a-bcfa-fa64b5977c9d> | CC-MAIN-2024-38 | https://cireson.com/blog/hash-collisions/ | 2024-09-11T09:30:39Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651383.5/warc/CC-MAIN-20240911084051-20240911114051-00313.warc.gz | en | 0.925928 | 603 | 2.984375 | 3 |
Previous studies have highlighted that a significant proportion of COVID-19 survivors experience persistent cognitive issues, such as impaired memory, attention, concentration, executive function, and information processing speed.
However, with the emergence of the Omicron variant, it remains unclear whether individuals infected with this particular variant also face similar cognitive challenges. This article presents findings from a cross-sectional cohort study conducted to investigate cognitive impairment in patients infected with the Omicron variant.
Study Design and Methods
The study enrolled 215 patients infected with the SARS-CoV-2 Omicron BA.2.2.1 variant at Shanghai Fourth People’s Hospital between April 2022 and August 2022. The cohort included 142 asymptomatic individuals and 73 mild cases without pneumonia. The control group consisted of 215 age- and gender-matched healthy individuals who tested negative for COVID-19.
The study excluded participants with a history of cerebral hemorrhage, traumatic brain injury, neurological or psychiatric diseases, speech impairment, severe visual or hearing impairment, malignant tumor, alcoholism, drug abuse, and psychotropic substance abuse. Cognitive function was assessed using the Mini-Mental State Examination (MMSE) and the Montreal Cognitive Assessment (MoCA). The data were analyzed using statistical tests, and significance was set at p < 0.05.
The total scores of MMSE and MoCA in patients infected with the Omicron variant were comparable to those of the control subjects. However, when stratified by age, patients aged 50 years or older exhibited significantly lower total scores of MMSE and MoCA compared to controls.
The female patients aged over 50 also showed significantly lower scores, while male patients in the same age group did not display significant cognitive impairment. Specifically, patients over 50 infected with the Omicron variant demonstrated lower scores in attention, calculation, forward/backward digit span, serial 7 s-administration, verbal fluency, and abstraction. These findings suggest that the Omicron variant may be associated with cognitive decline, particularly in older individuals.
Discussion and Implications
This cross-sectional cohort study provides preliminary evidence of cognitive impairment in patients aged over 50 infected with the Omicron BA.2.2.1 variant of SARS-CoV-2. While the small sample size limits the generalizability of the findings, this study highlights the importance of addressing cognitive decline induced by COVID-19 infection, especially in the context of an aging society.
The observed cognitive impairment in patients without severe symptoms suggests that peripheral inflammation, endothelial disruption, microglial activation, neurotransmitter depletion, microvascular compromise, leukoencephalopathy, and cortical atrophy may contribute to network dysfunction and cognitive changes.
It is important to note that the screening tools used in this study, MMSE and MoCA, have limitations in assessing cognitive performance comprehensively. Future studies employing functional MRI and PET-CT should be conducted to explore potential alterations in brain function and structure among patients infected with the Omicron variant.
Furthermore, it is crucial to determine whether the cognitive dysfunction observed is specific to SARS-CoV-2 or simply a manifestation of sickness behavior. Although sickness behavior was observed in both asymptomatic and mild cases, cognitive impairment was only evident in patients aged over 50, suggesting a potential influence of the Omicron variant on cognitive performance and neural circuit functions.
In conclusion, this cross-sectional cohort study provides initial insights into the cognitive impairment associated with the Omicron variant of COVID-19. The findings suggest that patients aged over 50 infected with the Omicron BA.2.2.1 variant display cognitive dysfunction, emphasizing the need for increased attention to the cognitive decline induced by COVID-19 infection, particularly in an aging society.
The study highlights the potential role of peripheral inflammation, endothelial disruption, microglial activation, neurotransmitter depletion, microvascular compromise, leukoencephalopathy, and cortical atrophy in mediating cognitive changes.
While the study employed the MMSE and MoCA as screening tools, future research should incorporate more comprehensive assessments such as functional MRI and PET-CT scans to provide a deeper understanding of potential alterations in brain function and structure in patients infected with the Omicron variant. These advanced imaging techniques can offer valuable insights into the underlying mechanisms of cognitive impairment associated with COVID-19.
Moreover, it is important to differentiate whether the observed cognitive dysfunction is specific to SARS-CoV-2 or merely a manifestation of sickness behavior. Although sickness behavior was present in both asymptomatic and mild cases, the cognitive impairment was predominantly observed in patients aged over 50, suggesting a potential direct effect of the Omicron variant on cognitive performance and neural circuit functions.
Long-term monitoring of cognitive function and neural circuitry in individuals infected with the Omicron variant would provide a more comprehensive understanding of the lasting effects of the virus on cognitive health.
The implications of this study extend beyond individual patients to the broader healthcare system and society. As the population continues to age, understanding the cognitive consequences of COVID-19 becomes increasingly crucial. Healthcare providers should be aware of the potential cognitive impact of the Omicron variant, especially in older patients, and consider incorporating cognitive assessments into post-COVID-19 care. Early detection and intervention for cognitive impairment can help improve patient outcomes and quality of life.
Furthermore, these findings emphasize the importance of public health measures to prevent COVID-19 infection and transmission. The transmissibility and ability of the Omicron variant to spread in populations with high levels of immunity highlight the need for ongoing vigilance and adherence to preventive measures such as vaccination, mask-wearing, and physical distancing. By reducing the overall burden of COVID-19 cases, we may also mitigate the potential long-term cognitive consequences of the disease.
In conclusion, this cross-sectional cohort study provides initial evidence that patients aged over 50 infected with the Omicron BA.2.2.1 variant of SARS-CoV-2 may experience cognitive impairment. While further research is needed to confirm and expand upon these findings, they underscore the importance of prioritizing cognitive health in the context of COVID-19, particularly in older individuals. By recognizing and addressing cognitive impairment as a potential consequence of the disease, we can improve the overall well-being and outcomes of individuals affected by COVID-19.
reference link :https://translationalneurodegeneration.biomedcentral.com/articles/10.1186/s40035-023-00357-x | <urn:uuid:71a7b584-776b-4715-a328-46a92989b16f> | CC-MAIN-2024-38 | https://debuglies.com/2023/05/25/covid-19-cognitive-impairment-in-patients-over-50-infected-with-the-omicron-variant/ | 2024-09-17T13:21:28Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651773.64/warc/CC-MAIN-20240917104423-20240917134423-00713.warc.gz | en | 0.91104 | 1,341 | 2.609375 | 3 |
Improving one’s writing skills is something every good and aspiring writer drives towards. One’s writing skills can be improved via information technology, and awareness to this means of improving writing skills is of paramount importance. An improved writing skill produces an improved written report or essay, and this article highlights 5 best ways to improve your writing skills via information technology (I.T).
What is Writing, Writing Skills and Information Technology?
Writing as defined by msu.edu is a form of communication which allows students put their ideas and feelings on paper in order to organize their knowledge as well as their beliefs into convincing arguments and also to convey meaning through a well-constructed text. It involves an expression of one’s thought on paper about a particular topic.
Writing skills are skills needed for a good write up. They include spellings, grammar, vocabulary, and organization of the text, and these writing skills need to be improved to obtain a more advanced writing each time.
Information technology, on the other hand, has been defined by Wikipedia as the use of computers to retrieve, store, manipulate and transmit data or information majorly via the internet. Over the years, the need for information technology has risen drastically, and this has both impacted positively and negatively on human activities. On the positive side, and relating to writing, information technology has made it easy for writers to explore information pertaining writing, as well as to improve their writing skills online. This is due to the fact that the information needed to improve one’s writing skills has been uploaded online, and is easily accessed via the internet.
Characteristics of a good write up
- Clarity and Focus: A good write up should be easy to understand and should focus on the main idea.
- Organization: A good write up should be well organized and presented in a logical and aesthetically pleasing manner.
- Ideas and themes: A good write up should contain clearly identifiable themes and ideas.
- Voice: A good write up should be written in a consistent and identifiable voice.
- Language (word choice): A good write up contains precise and accurate choices of words together with well-crafted sentences.
- Grammar and style: A good write up should follow the rules of grammar and a unique style.
- Credibility or believability: A good write up should be convincing, whether fiction or non-fiction.
- Thought-provoking or emotionally inspiring: A good write-up should drive home the message written in it, whether thought-provoking or emotionally inspiring.
5 ways of improving writing skills via I.T
- Writing services: Professional custom writing services offered online such as CustomWritings.com will help to improve the writing skills of any academic writer. This is due to the fact that after employing the assistance of these service providers to write a particular report, the writer will be able to follow the structure of the paper presented for improvement of his own writing skills. The structure of the paper which will comprise all the characteristics listed above as well as the better report will act as a template to the writer to follow in order to improve his writing skills.
- Courses: Online courses can be taken by prospecting writers who wish to improve their writing skills. This is another means through which I.T can help improve the writing skills of any writer, as the writer is presented with numerous options of online writing tutorials. The writer employs this tutorial session at his/her leisure and is being taught all the features of a good write up, as well as how to prepare a good write up. This will help improve writing skills as the writer is exposed to more than he/she has learned in the past.
- Writing guides: Writing guides such as writing templates and custom papers are being provided online to assist writers in preparing a good write up. This will improve a writer’s writing skill, as errors which were being made in his previous works will be corrected via the guides gotten online, and noted to avoid subsequent mistakes. These guides present a structure for their writing and make it easier to produce a good write up as well as improve his writing skills over time.
- Online feedback: Feedback is an essential factor in order to determine the progress of one’s writing skills. IT offers online feedback to a writer’s piece of work via online readers who are invited from diverse groups with a different educational background to join together in a place and give a response to what the writer has written. This will certainly enhance the writer’s writing skills, like ideas on improvement will also be suggested from individuals across all fields of study all in one place (online).
- Editing and proofreading papers online: Editing and proofreading papers online is done by downloading online papers, and reading through them. This will help improve writing skills, as errors spotted on these online papers will be avoided by the writer when preparing his report. Also, the structure of the paper will be noted by the writer, as well as the use and placement if grammatical expressions and all these will contribute to enhancing the writing skills of the writer who has edited and proofread the online paper.
In conclusion, writing being a process of putting one’s ideas about a particular topic on paper requires various skills which need to be improved. The skills which are known as writing skills include grammar, spelling, punctuation and so on. The improvement of these skills produce a better paper, and with the invention of I.T, improving writing skills has been made easier in various ways listed in this article. Employing these ways listed above will help to improve one’s writing skills to produce an advance written report or essay. | <urn:uuid:937c6839-1f79-4cba-bd7d-124193e71fa1> | CC-MAIN-2024-38 | https://www.mytechlogy.com/IT-blogs/23970/information-technology-5-best-ways-to-improve-your-writing-skills/ | 2024-09-07T20:50:42Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650920.0/warc/CC-MAIN-20240907193650-20240907223650-00713.warc.gz | en | 0.962362 | 1,157 | 3.265625 | 3 |
by Nilaa Maharjan, Security Research
Internal penetration testing often requires security specialists to attempt to extract passwords from the memory of compromised machines. If the acquired credentials are hashed, the tester can use the pass-the-hash technique to move laterally within the network to accomplish their goals. This technique was frequently used in the past and is still a valid threat vector on unpatched machines.
However, if the tester is able to extract cleartext credentials from memory or crack the collected hashes, they will be able to authenticate against additional network resources and services such as Outlook, business-critical web applications, and device portals, among many others.
In this article, we’ll go through what WDigest is, how it is used to extract cleartext credentials from memory, and how an analyst can use Logpoint to detect, mitigate and respond to any WDigest-related attack attempt.
But first, a little backstory.
What is WDigest?
WDigest Authentication is a challenge/response protocol that was primarily used for LDAP and web-based authentication in Windows Server 2003. First introduced in Windows XP, it was enabled by default on Windows systems. It allows clients to send cleartext credentials to Hypertext Transfer Protocol (HTTP) and Simple Authentication and Security Layer (SASL) applications.
Microsoft cached the cleartext credentials in Windows RAM when users logged in to their workstations to make the authentication procedure more convenient for end users. Workstations used these cached credentials to authenticate HTTP and SASL services without requiring users to enter their credentials over and over again. The cleartext credentials authenticate via HTTP and SASL exchanges.
For example, a client requests access, the authenticating server challenges the client, and the client answers by encrypting its response with a key derived from the password. To ascertain if the user has the right password, the encrypted response is compared to a previously stored response on the authenticating server. This is where the crux of the problem resides.
Microsoft has a much more detailed explanation of WDigest, how it works, and some of its applications here.
Why does it matter?
Everyone should prioritize Windows security auditing. Understanding how your endpoints are set up and what doors they may be exposing to unwanted users is critical to defending any system. This is where WDigest comes in. The key thing to keep in mind about WDigest is that it keeps passwords in cleartext in memory. If a malicious person gains access to an endpoint and is able to run a program like Mimikatz, they can obtain not just the hashes currently stored in memory, but also the cleartext passwords for those accounts.
This is not only a baseline red-team testing practice but also a tactic often used by adversaries, as in the KNOTWEED and PHOSPHORUS campaigns.
This is and should be a concern because adversaries can now not only employ an attack like pass-the-hash, but they also have the username and password to try to log on to things like Exchange, internal websites, and so on.
A typical attack scenario
Mimikatz has been used in the wild to steal credentials from memory for the last few years. As a result, several antivirus solutions have created signatures to prevent this utility from executing on PCs. However, there are several ways to evade these signatures, including executing the program in memory or obfuscating the utility.
Once the attacker has gained access to an internal system within an organization, Mimikatz can be used to retrieve credentials from memory. These retrieved credentials can be in hashed, cleartext, or both formats.
If the attacker is lucky enough to obtain these credentials in cleartext, cracking hashes is not necessary, and the credentials allow direct access to internal resources, bringing the attacker closer to achieving their objectives.
However, it’s important to note that before the command can be run, the attacker must have administrative rights.
Here is an example of what an attacker sees when dumping credentials from memory with a tool like Mimikatz. The user "HanSolo" logged onto the machine via remote desktop, and because WDigest is configured insecurely, the attacker sees not only an NTLM hash for the account but also the cleartext password "Password99!".
Example of an attacker’s view when dumping credentials in memory. For more information, read the quick guide on using Mimikatz by adsecurity.
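For reference, the WDigest extraction in Mimikatz is typically driven by two commands, shown here as a sketch since the exact output varies by build and privileges:

mimikatz # privilege::debug
mimikatz # sekurlsa::wdigest    # lists logon sessions; with UseLogonCredential=1, the Password field is cleartext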
Understanding the WDigest registry is helpful for offensive and defensive analysts.
- If the UseLogonCredential value is set to 0, WDigest will not store credentials in memory.
- If the UseLogonCredential value is set to 1, WDigest will store credentials in memory.
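Before relying on either behavior, it helps to check how a given host is configured. A quick sketch using the built-in reg utility (on patched systems the value may simply be absent, which Windows 8.1/Server 2012 R2 and later treat as 0):

reg query HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\WDigest /v UseLogonCredential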
As was the case with DEV-0270's PHOSPHORUS ransomware campaign, after the threat actors had compromised a device and gained admin privileges, they used LOLBins to conduct their credential theft, as this removes the need to drop common credential-theft tools that are more likely to be detected and blocked by antivirus and endpoint detection and response (EDR) solutions. One of these processes starts by enabling WDigest in the registry, which results in passwords being stored in cleartext on the device and saves the actor the time of cracking a password hash.
We have noticed two variations of this command being used, both of which eventually set the registry value of UseLogonCredential to 1.
In systems where the WDigest registry key is missing or has been removed:
reg add HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\WDigest /v UseLogonCredential /t REG_DWORD /d 1
In systems where the WDigest registry value is set to not store cleartext passwords:
reg add HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\WDigest /v UseLogonCredential /t REG_DWORD /d 1 /f
The actor then uses rundll32.exe and comsvcs.dll with its built-in MiniDump function to dump passwords from LSASS into a dump file. The command to accomplish this specifies where to save the output from LSASS. The file name is also reversed to evade detections (ssasl.pmd instead of lsass.dmp):
"powershell.exe" /c Remove-Item -Path C:\windows\temp\ssasl.pmd -Force -ErrorAction Ignore;
rundll32.exe C:\windows\System32\comsvcs.dll, MiniDump (Get-Process lsass).id C:\windows\temp\ssasl.pmd full | out-host;
Compress-Archive C:\windows\temp\ssasl.pmd C:\windows\temp\[name].zip
Identifying WDigest use
On the defensive side, monitoring changes to this registry path provides a telltale sign that something nefarious is underway. Note that even with Sysmon, you have to configure a monitoring path for the registry before it will start logging changes.
WDigest use can be identified in two places: your domain controller logs, or your server logs (every server must be checked).
With Logpoint, we provide a custom sysmon configuration file and Nxlog sample. This line in particular targets WDigest.
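As a sketch of what such a rule can look like (the exact entry in the shipped configuration file may differ), a Sysmon registry-event filter targeting the WDigest path would be:

<RegistryEvent onmatch="include">
  <TargetObject condition="end with">WDigest\UseLogonCredential</TargetObject>
</RegistryEvent>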
Once the path or the sysmon file is configured in the system, an out-of-the-box alert rule LP_Wdigest Registry Modification can be used to monitor the changes made to the registry path.
target_object="*WDigest\UseLogonCredential" -user IN EXCLUDED_USERS
A pre-configured alert rule monitors the changes to the registry path.
As for the process dump, LP_Process Dump via Rundll32 and Comsvcs will trigger when the attacker or the tester tries to dump the passwords from LSASS into a dump file.
command IN ["*comsvcs.dll*#24*", "*comsvcs.dll*MiniDump*" ] -user IN EXCLUDED_USERS
All new updated and existing rules can be downloaded from the Logpoint Service Desk.
NOTE: By default in Windows 8.1 and Windows Server 2012 R2 and later versions, caching credentials in memory for WDigest is disabled (the UseLogonCredential value defaults to 0 when the registry entry is not present).
When the UseLogonCredential value is set to 0, the observable change in behavior is that credentials may be requested more frequently when WDigest is used.
Since this has been a long-running issue, Microsoft released a patch back in 2014 that effectively disabled WDigest passwords from being stored in memory.
The following excerpt from Microsoft's guidance on the issue recommends that users eliminate cleartext passwords from memory.
Removal of cleartext credentials from LSASS
This update prevents every Microsoft SSP in LSASS, besides WDigest, from storing the user’s cleartext password. WDigest still stores the user’s cleartext password because it cannot function without the user’s password (Microsoft does not want to break existing customer setups by shipping an update to disable this). Microsoft recommends users look through their domain controller logs for WDigest authentication logins (instructions provided below); if WDigest authentication is not being used, customers can apply the FixIt found on the KB article to disable WDigest. Doing this will eliminate all cleartext credentials from LSASS memory.
It’s important to realize that while cleartext credentials will no longer be stored, the NT hash and Kerberos TGT/Session key will still be stored and are considered credentials (without credential equivalents stored in memory, single sign-on would be impossible). Additionally, even though the cleartext credentials are no longer stored in memory, an attacker can use other techniques such as key loggers to recover cleartext passwords. Eliminating cleartext passwords from memory is useful and reduces risk, but it is not guaranteed to stop attackers.
For Windows 7, 8, Server 2008 R2, and Server 2012, you must install the aforementioned security update and then you’ll want to set the following registry key to 0.
Setting the registry key to 0 helps reduce risk from WDigest.
The easiest way to do this would be through group policy, but a quick script also works:
reg add HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\WDigest /v UseLogonCredential /t REG_DWORD /d 0 /f
Once you’ve pushed the security update, and the registry key update to all of your servers, you can ensure you’ve done it successfully by querying the registry to see that it exists and is not set to 1.
NOTE: Both of the above changes will also trigger the alert as a change is being made to the specified path.
By default, later versions of Windows (8.1+ and 2012 R2+) do not require the security update, or setting the value to 0, as the default is 0 when not present. However, you should ensure that there haven’t been any manual modifications that set it back to 1.
Use the chart to help determine if you need to take action on your endpoints.
Remediation with Logpoint SOAR playbooks
Upon detecting traces of exploitation, analysts should isolate the host where the attack is taking place via a playbook and initiate an incident response playbook.
Detecting exploitation is simple and seamless because Logpoint is a unified SIEM+SOAR solution that uses an alert (SIEM event) to automatically trigger a SOAR playbook. You can set a playbook to run when a change has been made to the WDigest registry path. The playbook then disables the user in Active Directory and changes the user's password using a random password generator.
The playbook triggers when a change has been made to the WDigest registry path, automatically initiating a set of actions to investigate and respond to the change.
The playbook initiates a response that disables the user and resets the password.
Based on the organization’s policy and the incident response team procedures, an investigation playbook will collect as much data as possible and generate a report.
You can also generate reports from the playbook to document the steps of the investigation.
A configuration related to WDigest could hinder the security of your environment, specifically on the endpoint, by allowing an attacker to steal cleartext credentials from memory. There are measures you can take to remediate this and ensure that your endpoints and credentials are more secure. Microsoft’s security update (KB2871997) addresses the issue on older versions of Windows, whereas newer versions should be secured by default. Checking the registry on all of your Windows endpoints for the WDigest setting should be a priority, as the loss of credentials could lead to the loss of sensitive information. One way to do this is through command-line queries against all your hosts, but a quicker way is to automate this type of auditing against your endpoint and have the data presented to you in an easy-to-consume report. | <urn:uuid:5dfd5f15-0e8c-4460-8a81-bdde41bcee70> | CC-MAIN-2024-38 | https://www.logpoint.com/en/blog/detect-mitigate-and-respond-to-wdigest-attacks-with-logpoint/ | 2024-09-09T03:04:42Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651053.52/warc/CC-MAIN-20240909004517-20240909034517-00613.warc.gz | en | 0.905524 | 2,711 | 2.546875 | 3 |
DNS cache poisoning, also known as DNS spoofing, is an attack designed to locate and then exploit vulnerabilities in a DNS (Domain Name System) server in order to draw organic traffic away from a legitimate server and over to a fake one.
The threat of DNS cache poisoning made the news in April 2018, when the DNS servers of crypto giant MyEtherWallet were hijacked. The hackers redirected legitimate users to a phishing website and stole their data.
Because of this cache poisoning, thousands of users were conned into revealing their wallet keys. On the phishing site, users were required to re-enter their wallet keys, and with this information the attackers transferred the victims' cryptocurrency into another digital wallet they controlled. According to news reports, within a short time the hackers stole about $160k worth of Ethereum before the problem was identified and fixed.
This is just one of many examples illustrating how dangerous DNS cache poisoning is. Another reason this kind of attack is dangerous is that it can easily spread from one DNS server to another.
In this article, we’ll cover the subject of how DNS cache poisoning works and then some solutions you can apply to stop it should it ever happen to you.
How Does DNS Cache Poisoning Work?
Each time your browser contacts a domain name, it has to contact the DNS server first.
Domain Name Servers (DNS) are the Internet’s equivalent of a phone book. They maintain a directory of domain names and translate them to Internet Protocol (IP) addresses.
This is necessary because, although domain names are easy for people to remember, computers or machines, access websites based on IP addresses.
When you look up a domain, the DNS server responds with at least one IP address (usually more) at which the domain can be reached, and your computer connects to one of those addresses. In effect, the DNS converts the human-readable domain name into an IP address your computer can use.
Your internet service provider runs multiple DNS servers, each of which caches (saves) information from other servers. The Wi-Fi router in your home essentially acts as a DNS server as well, as it caches information from your ISP's servers.
When Can You Say a DNS Cache Is Poisoned?
A DNS cache is "poisoned" when the server receives and stores an incorrect entry. This can occur when a hacker gains control over a DNS server and changes the information in it.
For instance, they may modify the information so that the DNS server tells users to look for a certain website at the wrong address. In other words, the user enters the "correct" name of the website but is sent to the wrong IP address, specifically to a phishing website.
Earlier, we mentioned that one reason DNS cache poisoning is dangerous is how quickly it can spread from one DNS server to the next. This happens when multiple internet service providers receive their DNS information from the now hacker-controlled server, causing the poisoned DNS entry to spread to those ISPs and be cached there.
From that point on, it can spread to other DNS servers, and home routers and computers that look up the entry will receive the wrong response, creating more and more victims. Only once the poisoned cache has been cleared on every affected DNS server will the issue be solved.
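On Windows machines under your control, clearing the local caches is straightforward. A sketch (the last cmdlet applies only to hosts running the Windows DNS Server role):

ipconfig /flushdns           # clear the local resolver cache on any Windows client
Clear-DnsClientCache         # PowerShell equivalent on Windows 8 / Server 2012 and later
Clear-DnsServerCache -Force  # purge the server-side cache on a Windows DNS server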
How To Protect Against DNS Cache Poisoning
One of the tricky aspects of DNS cache poisoning is that it is extremely difficult to determine whether the DNS responses you receive are legitimate. In the case of MyEtherWallet, the company had very limited means to prevent the situation from occurring, and the issue was ultimately solved by their server providers.
Fortunately, there are still a number of measures that your organization can take to prevent such an attack from happening to you, so you should not be under the impression that DNS cache poisoning is impossible or nearly impossible to prevent.
Hire an IT professional to Configure your DNS Server
For example, you should have your DNS servers configured by an IT professional to rely as little as possible on trust relationships with other DNS servers. This makes it much harder for a cybercriminal to use their own DNS server to corrupt its targets, which means your DNS server is less likely to be corrupted, and therefore you (and everyone in your organization) are less likely to be redirected to an incorrect website.
Configure your DNS to only store specific data
You can furthermore configure your DNS servers to store only data related to the requested domain and to limit query responses to information concerning the requested domain. The server should also be set up so that only required services are permitted to run; every additional, unnecessary service running on your DNS server greatly increases the odds of an attack.
Use Most Recent DNS Version
You should also ensure that you are running the most recent version of your DNS server software. Recent versions include security features such as source port randomization and cryptographically secure transaction IDs that help guard against poisoning attacks.
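When troubleshooting a suspicious answer, you can also request DNSSEC validation data explicitly. A sketch using the Windows DnsClient module (example.com and the Google resolver are placeholders):

Resolve-DnsName example.com -Type A -DnssecOk -Server 8.8.8.8
# A DNSSEC-signed zone returns RRSIG records alongside the A records,
# letting you confirm the response was cryptographically signed.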
Always check a website for an EV SSL/TLS certificate
Another important defense against DNS cache poisoning, as MyEtherWallet advised in an announcement following the April 2018 attack, is to look for the company's name in the address bar (in their case, "MyEtherWallet Inc").
This indicates the site is using an EV SSL/TLS certificate, which helps prevent people from falling victim to a poisoning attack by warning them before they enter personal details into a hacker's website. Not all companies use EV certificates on their websites, so this isn't a foolproof measure, but it can be a helpful signal when trying to determine whether you're on the right site.
An SSL/TLS certificate is a small data file installed on a web server that binds the details of your organization to a cryptographic key. Once installed, the certificate activates the HTTPS protocol, enabling a secure, encrypted connection between a browser and your web server. In the case of EV SSL/TLS certificates, some of those organizational details, including the company name mentioned above, are presented directly in the browser UI.
In summary, DNS cache poisoning is when an attacker exploits a DNS server to send a forged DNS response that will be cached by legitimate servers.
Subsequently, users who visit the corrupted domain will be sent to a new IP address that the hacker has selected, which is usually a malicious phishing website where victims can be manipulated into downloading malware or submitting login or financial details.
Taking the steps above will help defend your organization against DNS cache poisoning attacks.
Note: This blog article was written by a guest contributor for the purpose of offering a wider variety of content for our readers. The opinions expressed in this guest author article are solely those of the contributor and do not necessarily reflect those of Hybrid Cloud Tech. | <urn:uuid:5f9430e5-41bb-4b0a-82ee-648cd5f2b31f> | CC-MAIN-2024-38 | https://hybridcloudtech.com/what-is-dns-cache-poisoning-and-how-to-prevent-or-fix-it/ | 2024-09-10T07:01:16Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651224.59/warc/CC-MAIN-20240910061537-20240910091537-00513.warc.gz | en | 0.95263 | 1,497 | 2.515625 | 3 |
In the context of business, we often talk about internal controls in the context of regulatory compliance or risk management. We talk about the design of controls, their operating effectiveness, how they mitigate risks, and the policies and procedures that back them up. But outside of an audit or risk assessment, how can an organization manage controls on a day-to-day basis and gain reasonable assurance that controls are operating as designed and intended?
This is where controls management comes in. Controls management is the way in which a company approaches its day-to-day business activities and protocols in order to reduce organizational risk, increase business resilience, and meet organizational goals. The procedures a company follows and the actions employees take in order to achieve the desired results of the business — including the guidelines and cultural norms they comply with — are all parts of controls management.
Today’s business uncertainties include risks from wartime hackers, global supply chain disruptions, and the security weaknesses inherent in an at-home workforce, to name just a few top concerns. In volatile times, it’s more important than ever to build a strong and effective controls management system for your business.
What Is Controls Management?
Controls management is the means by which the actions of individuals or groups within a company are directed — as well as which actions are avoided or advised against — in order for an organization to maintain norms and reach their goals. Controls management is also a system of checks and balances that ensures team members adhere to a company’s rules and culture. In doing so, controls management functions as the guardrails, keeping employees in line with both external regulations and internal standards.
Controls management is important because these rules and requirements work to keep a business within regulatory compliance and also provide safety and security for a company regarding their reputation and brand image. If employees across the company are aware of the organizational controls and adhere to them — either explicitly or implicitly, due to management culture — it builds a solid foundation for the organization. In turn, this results in a consistent, quality output of the products and services the company sells.
Management controls and management control systems are often used interchangeably with the term controls management. Management controls, like the function of controls management, are the processes by which leadership or management of a business achieve and direct teams to achieve organizational goals and objectives.
What Is the Role of Controls Management?
Controls management is put in place to guide team members in adhering to a company's norms and rules in order to reach company goals. The role of controls management varies from explicitly outlining policies and procedures, sometimes involving external regulatory requirements, to more subtly conveying company expectations through an organization's cultural norms and standards. In a good management control system, both control processes and management processes are evaluated, and the necessary adjustments are made to meet business objectives.
Unfortunately, not all team members are willing or able to act in the company’s best interest. Implementing controls helps to prevent material mistakes from going unnoticed and steers team members away from taking intentional actions that are bad for business. Internal controls management helps to safeguard controls in place, ensuring that they are designed and operating effectively. Any issues identified are then subjected to corrective actions, like adjusting the control to better mitigate risk, or developing compensating controls as needed. Management controls minimize deviations in product quality and effectively keep project management on track.
What Are the Different Types of Controls?
Controls management falls into two main categories, and there are several types of controls within those categories. The main areas of controls management are regulative controls and normative controls. Within regulative controls are bureaucratic controls, internal controls, and quality controls. Under normative controls are team norms and organizational cultural norms. Depending on the type of control, each can have a different objective. Some controls are financial, some relate to security, and others are regulatory. The people, processes, and technologies involved with each control may also differ, so understanding the purpose of each control is critical when performing management control activities.
Regulative or Regulatory Controls
Regulative controls, which encompass bureaucratic, financial, and quality controls, emerge from a company’s existing policies and standard operating procedures. These types of controls may include controls that directly impact regulatory or compliance efforts at the organization, and therefore may be subject to audit.
- Bureaucratic controls stem from a company’s senior leadership and the policies and procedures they outline for departments and team members to follow. Human Resources controls fall into both the “bureaucratic” category and the “internal controls” category.
- Internal controls, such as financial and security controls, are aimed at ensuring the integrity of information to promote accountability and prevent fraud. They are the means by which an organization monitors and controls the direction, allocation, and usage of its resources.
- Quality controls are in place to define the variation that is considered acceptable regarding product output — the end result of whatever product or service is delivered to clients and customers. Quality controls are generally aimed at improving or maintaining customer satisfaction.
Normative controls are team norms and organizational cultural norms. They govern manager and team member behavior through the company’s accepted patterns of action, rather than requiring written policies like regulative controls. These controls are often less formal, and may not be documented.
- The team norms are the informal rules that help employees understand what their responsibilities are to the company and to each other. Team norms tend to develop gradually, but once established can have a powerful influence on employee behavior.
- Organizational cultural norms are a company’s shared values, beliefs, and culture.
All of these controls work together to keep employees aligned and on track to meet company goals with the level of integrity established by senior leadership.
Eight Key Strategies for Strengthening Controls Management
Strategies for strengthening controls management include staying on top of regulatory requirements, optimizing internal controls, and using smart controls management software solutions to improve processes and communications end-to-end. Here are eight ways your business can improve controls management:
Strategy 1: Strictly Manage Regulatory Controls
It’s critical for companies to stay on top of all regulatory requirements specific to their line of business. Different fields will need to comply with different regulations, depending on what industry the company is in. An organization in the medical field may need to know how to be HIPPA compliant, and a food service company could be subject to the Food and Drug Administration guidelines. One regulatory requirement all public companies need to stay on top of is SOX compliance. As such, it’s important for companies to understand what requirements are applicable to their organization and implement appropriate controls to monitor compliance.
Whenever possible, organizations should have management controls in place to evaluate the actual performance of regulative controls and benchmark performance from year to year. Automation of regulatory controls can also be a boon, reducing the potential for user error to disrupt the performance of the control.
Strategy 2: Rigorously Follow Internal Controls
Internal controls, such as financial controls, help safeguard an organization and further its objectives by tracking progress towards various goals and targets, such as increasing operational effectiveness and efficiency, providing reliable financial reporting, and profitability, or ensuring compliance with laws and regulations. Strong internal controls help reduce risk in an organization by protecting the company’s resources and play an important role in detecting and preventing fraud. It is important to remember that implementing controls is not the same as compliance — so it’s critical not just to identify and implement strong controls, but also to make sure employees maintain compliance with control protocols, such as by performing internal reviews or audits.
Organizations can ease the burden of compliance with internal controls by creating templates for control activities that stakeholders and process owners can use repeatedly.
Strategy 3: Maintain Solid Security Controls
Solid security controls management helps to ensure a company’s data and information is safe from potential hacks or security breaches. Companies should implement advanced security controls, like two-factor authentication, in order to improve the company’s information security posture and avoid malware infections and brute force attacks.
Management control systems should consider cybersecurity and security risks as a major area of risk for almost all businesses today. The Harvard Business Review recently released an article on “The Devastating Business Impacts of a Cyber Breach“, and they are not the only reputable publication to sound the alarm on cybersecurity risks. IT security is now an integral part of GRC functions, like controls management, and organizations ignore security controls at their own risk.
Strategy 4: Uphold Consistent Quality Controls
Quality controls ensure a company’s product or services stay at the desired level in order to maintain customer satisfaction. Some companies use software to analyze product output electronically, or they might enable quality control staff to analyze products prior to release. Quality standards will differ by company, and while some require zero defects before shipment, others are comfortable releasing items with small flaws. Whatever your company’s quality control standards are, it’s important the standards are consistently maintained and upheld across the entire organization.
Strategy 5: Oversee Thorough Team Training
One key aspect of maintaining proper controls management is ensuring team members at all levels are aware of controls they need to comply with and follow. Employees need clear direction on controls management procedures so they understand their role in the process and can help the company maintain compliance with controls. It’s important employees throughout the entire company are trained on proper procedures so they are following the controls that are put in place. A cadence of annual training on security controls and procedures is recommended.
Keep in mind that different roles in your organization will require different levels of security awareness and controls management training. Some personnel may even be responsible for executing or overseeing management controls. In these cases, additional training may be warranted.
Strategy 6: Optimize Data Measurement Systems
Companies can use measurement systems to find areas for improvement within their day-to-day operations. Managing key performance indicators means that those performance data points need to first be determined, agreed upon, properly tracked, and then managed against.
In the era of big data, there are a lot of facts and figures out there. Using data measurement systems to determine what information is important to your business is the beginning. Then, layering controls on top of data measurement is key to controls management. If a measurement system includes financial targets, for example, a controls management tool can be created to automatically flag areas that are performing below expectations. The controls management tool can then help leadership teams in decision-making and when to step in and help departments reach their goals.
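As a minimal illustration of that kind of automated flagging (a hypothetical sketch in which the metric names and thresholds are invented, not taken from any specific tool):

# Flag KPIs performing below their agreed targets
$kpis = @(
    @{ Name = 'On-time financial close rate'; Actual = 0.92; Target = 0.95 },
    @{ Name = 'Control tests passed';         Actual = 0.99; Target = 0.98 }
)
$kpis | Where-Object { $_.Actual -lt $_.Target } |
    ForEach-Object { Write-Warning "$($_.Name) is below target: $($_.Actual) < $($_.Target)" }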
Strategy 7: Follow Data Retention Guidelines
Data retention is storing and managing a company’s data and records for a designated period of time. Having a solid data retention policy in place is important for ensuring your company stays in regulatory compliance regarding data privacy and also keeps your business in line with your industry’s timeline-based data storage requirements. Data retention also helps companies maintain security controls, and prepares an organization for continuity with available backup data in the event of a catastrophic loss due to a natural disaster or malicious hacking. In addition, controls management is important for determining and adapting a company’s data retention policy, as the ability to access historical data records and points of access by team members are needed to uphold compliance with security requirements.
Strategy 8: Enable a Powerful Software Solution
Companies can use controls management software to help them run secure processes. Using software allows them to identify and report issues quickly, and keeps those with access to data dashboards current on the status of the controls, with real-time access to high-priority data. It also creates open communication across the company, where staff can use workflow software to communicate issues across teams effectively and efficiently. Software programs also significantly reduce the risk of error.
Build a Stronger Internal Controls Program Today with AuditBoard
Having reliable controls management tools in place is more important now than ever. Aligning employees and organizational goals is an age-old challenge; enforcing controls on top of that is an even bigger mountain. Management controls and management control systems keep your people, goals, and controls moving in the same direction. Using internal audit management software will help your business achieve more, with the ability to manage audit planning, fieldwork, and reporting all in one single platform. Take control today.
Frequently Asked Questions About Controls Management
What is Controls Management?
Controls management is a function designed to align people, controls, and processes at an organization to achieve objectives and goals.
What Is the role of Controls Management?
Controls management establishes oversight over the performance of controls, reveals gaps in controls, and provides assurance that controls are operating as intended.
What are the different types of controls?
There are many different types of controls that can be divided into the regulatory and normative categories. Regulative or regulatory controls are those controls that address some kind of regulatory or compliance requirements. Normative controls are less formal controls that are meant to guide employee behavior and culture.
Vice Vicente started their career at EY and has spent the past 10 years in the IT compliance, risk management, and cybersecurity space. Vice has served, audited, or consulted for over 120 clients, implementing security and compliance programs and technologies, performing engagements around SOX 404, SOC 1, SOC 2, PCI DSS, and HIPAA, and guiding companies through security and compliance readiness. Connect with Vice on LinkedIn. | <urn:uuid:13c31a46-bc67-4acc-bdce-be66f1c37b3f> | CC-MAIN-2024-38 | https://www.auditboard.com/blog/strengthen-controls-management/ | 2024-09-10T08:16:28Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651224.59/warc/CC-MAIN-20240910061537-20240910091537-00513.warc.gz | en | 0.946799 | 2,745 | 2.703125 | 3 |
It can be hard to let go of an older operating system (OS), especially if you’ve become comfortable with it and it “seems to still work fine.” However, there comes a time in every operating system’s life when it reaches its pre-planned end of life (EOL).
The EOL date doesn’t mean that a PC or server operating system will stop working, but it does mean that it will stop receiving the critical support needed to work securely.
All businesses should have the EOL date for an operating system on their calendar because if you continue using an OS past that date, it can lead to major IT compliance and security problems.
As of July 2020, nearly a quarter (23.3%) of users were still running Windows 7. Why is this a problem?
Because if your business is operating any computers running Windows 7 or any servers running Windows 2008, then you’re past the EOL date on both, which was January 14, 2020.
What Does it Mean to be Past EOL Date?
All Windows products have an EOL date: the date on which all critical support ends. Basically, it means the operating system is "put out to pasture," and users should upgrade to avoid leaving devices vulnerable.
The end of Mainstream Support and the end of Extended Support are the two main dates to be aware of when it comes to a Windows operating system. The end of Extended Support is what’s known as the EOL.
End of Mainstream Support: No more feature requests can be made, however feature and security updates continue. In the Windows lifecycle, this usually comes about 4-5 years after release date.
End of Extended Support (EOL): All support ends, both feature and those vital security updates. This date usually comes about 10 years after product release date.
Since both Windows 7 and Windows Server 2008 reached the end of extended support in January of 2020, it means those systems are completely vulnerable to an attack because they no longer receive security updates.
How Important Are Those Security Updates?
Security updates are a vital part of any cybersecurity strategy and required if you have to comply with any data privacy regulations. If you have operating systems no longer receiving updates, your network is like a sitting duck just waiting for a hacker to come along.
Approximately 60% of data breaches over the last two years happened because hackers exploited unpatched system vulnerabilities.
Steps for Upgrading Your PCs & Servers
Because it’s already been several months since Windows 7 and Windows 2008 stopped receiving security updates, it’s important that you update your devices as soon as possible.
Here are steps you can take to go through the upgrade process for your PCs and servers.
Upgrading PCs to Windows 10
For PC’s operating on Windows 7, you’ll want to upgrade them to Windows 10. This is a very well-received OS and offers a lot of productivity updates over 7 as well as improved device security.
Inventory Devices: First, you’ll need to inventory all devices at your business. Make of list of each and which operating system they use (this will also be helpful going forward when Windows 10 is eventually replaced). Identify the computers running Windows 7.
Check System Specifications: Not all older devices may have the specs needed to upgrade to Windows 10. Check required specifications here to see which devices can upgrade and which need replacement.
Migrate Data for Decommissioned Devices: For PCs that can’t upgrade, you’ll want to purchase new PCs with Windows 10 on them and migrate the data from the older PC to the new one.
Upgrade Devices & Train Users: For those devices that can upgrade, you’ll want to purchase Windows 10, go through the upgrade, and provide user training on the new operating system.
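For the inventory step above, one way to gather operating system versions across a domain, assuming the machines are joined to Active Directory and the ActiveDirectory PowerShell module is installed (a sketch):

Get-ADComputer -Filter * -Property OperatingSystem, OperatingSystemVersion |
    Select-Object Name, OperatingSystem, OperatingSystemVersion |
    Sort-Object OperatingSystem
# Windows 7 reports version 6.1; any machine on that build needs an upgrade or replacement.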
When upgrading a server, you have two options to choose from: keeping your server on-premises or moving to a virtual server.
On-Premises: If the server meets the specifications, you can upgrade it to the Windows Server 2019 operating system.
Virtual: If you're ready to take your server to the cloud, you can migrate Windows Server 2008 and 2008 R2 workloads to Microsoft Azure virtual machines, which puts your server in a cloud-based environment.
It’s important to get professional assistance when migrating a server so all the applications, processes, and data can be properly transferred, and the old server can be decommissioned securely.
Get Help With Smooth Upgrades & Migrations
Why struggle with a PC or server upgrade or data migration when you can have peace of mind knowing it’s done right. Data First Solutions can help you with a trouble-free upgrade to ensure your business systems remain properly protected. | <urn:uuid:a1beadb2-eae9-4004-99be-410fb66422e6> | CC-MAIN-2024-38 | https://dfcanada.com/2020/09/08/why-its-vital-to-upgrade-pcs-running-windows-7-servers-running-windows-2008/ | 2024-09-14T00:30:29Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651540.77/warc/CC-MAIN-20240913233654-20240914023654-00213.warc.gz | en | 0.940527 | 1,008 | 2.546875 | 3 |
In today’s workforce, it’s not unusual to encounter colleagues from various age groups, from fresh-faced newcomers to veteran professionals with decades of experience. This unique blend of generational cohorts presents ample opportunities for knowledge transfer and collaboration. However, this multigenerational workforce also creates challenges, particularly for young workers striving to advance their careers. One significant factor contributing to these challenges is the delayed retirement of older employees, leading to what many describe as career gridlocks.
Understanding the Multigenerational Workforce
The Composition of Today’s Workforce
For the first time in history, the workforce includes five distinct generations: Traditionalists, Baby Boomers, Generation X, Millennials, and Generation Z. Each of these cohorts brings different perspectives, skills, and attitudes towards work, shaped by the historical and cultural contexts in which they grew up. This diversity can lead to rich collaborative experiences but also sparks intergenerational conflicts and competition for roles. Traditionalists, for example, are typically linked with a strong work ethic and institutional loyalty, stemming from their experiences during times of economic uncertainty and military service. Baby Boomers, born in the post-World War II era, often emphasize achievement and material success.
On the other hand, Generation X grew up during a time of significant technological transition and economic fluctuation, leading them to value independence and adaptability. Millennials, who came of age during the digital revolution, tend to prioritize work-life balance and meaningful work, while Generation Z, being digital natives, brings a fresh perspective on technology but also craves stability in uncertain times. The convergence of these diverse values and work styles within a single workspace creates a dynamic environment but also poses unique challenges, particularly when it comes to career progression and role allocation.
Generational Differences in the Workplace
Older generations, such as Traditionalists and Baby Boomers, often hold senior positions with significant decision-making power. In contrast, Millennials and Gen Z are more likely to be in entry-level or mid-tier roles, striving to climb the corporate ladder. Differences in work ethics, technological adaptability, and expectations about work-life balance add layers of complexity to workplace interactions and career advancement paths. Traditionalists and Baby Boomers, for example, might place a higher value on face-time in the office and a hierarchical approach to management, whereas younger generations might prefer flexible work arrangements and more collaborative leadership styles.
These differing philosophies can lead to misunderstandings and clashes. For example, a Baby Boomer manager might perceive a Millennial employee’s desire for remote work as a lack of commitment. Conversely, younger employees might view their older colleagues’ resistance to adopting new technologies as a hindrance to efficiency. The key to navigating these differences lies in fostering an inclusive environment where each generation’s strengths are recognized and leveraged. Training programs that emphasize generational diversity, as well as initiatives that promote cross-generational mentoring, can help bridge these gaps and create a more harmonious workplace.
Delayed Retirements and Their Impact
Financial Insecurity Among Older Workers
A primary reason for delayed retirements is financial insecurity. Many Baby Boomers and a substantial portion of Generation X find themselves ill-prepared for retirement due to insufficient savings, fluctuating market conditions, and concerns about social security. This lack of financial security forces many to remain in the workforce beyond the traditional retirement age of 65, thus prolonging their careers and occupancy of high-level positions. As these older workers hold onto their roles, they limit the availability of senior positions for younger employees who are ready to take on more responsibilities and advance their careers.
The financial challenges faced by older generations are multifaceted. Economic recessions have eroded savings and investments for many, while the shift from defined-benefit pensions to defined-contribution plans like 401(k)s has transferred the burden of retirement planning onto individuals. Additionally, rising healthcare costs and longer life expectancies mean that retirement savings must stretch further than ever before. This economic backdrop has left many Baby Boomers and Generation Xers feeling unprepared for retirement, compelling them to remain in their jobs longer to secure a stable financial future. The cascading effect of this trend is most acutely felt by younger workers, who find their career progression stymied by the prolonged tenure of their senior colleagues.
The Ripple Effect on Younger Workers
The extended tenure of older workers in senior roles has a cascading effect on younger employees. When top positions are occupied by those who would typically be retiring, it creates a bottleneck in the career progression for younger generations. Millennials and Gen Z workers face limited opportunities for promotions, which can stall their professional growth and lead to increased job dissatisfaction. This career bottleneck not only hampers the aspirations of younger employees but also affects organizational dynamics, as fresh ideas and innovative approaches are stifled by the lack of upward mobility.
The frustration felt by younger workers is palpable. Many enter the workforce with high expectations and ambitious goals, only to find that the path to advancement is blocked by the long tenure of senior colleagues. This stagnation can lead to disengagement, as young professionals become disillusioned with their prospects. Moreover, the lack of opportunities for growth can result in high turnover rates, as younger employees seek better chances for advancement elsewhere. This phenomenon of job-hopping, while offering immediate relief, often lacks the long-term stability and career development that continuous employment in a single organization can provide. Companies that fail to address these gridlocks risk losing a generation of talent, which can have long-term implications for organizational success.
Career Gridlocks: A Closer Look
The Dynamics of Career Advancement
Career gridlocks occur when the natural progression of job roles and promotions is hindered by the continued presence of older employees in senior positions. This situation is exacerbated by companies’ traditional hierarchical structures, which often offer fewer opportunities for lateral movement. Young workers find themselves trapped in lower or mid-level positions for extended periods, unable to advance despite their qualifications and efforts. The traditional career ladder, characterized by a linear progression through various levels of responsibility, becomes more like a career traffic jam for ambitious younger employees.
The dynamics of career advancement in such an environment can be disheartening. Younger workers may feel that their efforts and accomplishments go unrecognized, as there are simply no higher positions available to which they can be promoted. This stagnation can lead to a sense of futility and frustration, which in turn diminishes job satisfaction and engagement. Additionally, the lack of upward mobility means that younger employees miss out on opportunities to develop their leadership skills and gain the experiences necessary for future senior roles. This talent bottleneck can have long-term repercussions for organizations, as the next generation of leaders is inadequately prepared for the challenges ahead.
Career Gridlock Consequences
The immediate consequence of career gridlock is frustration among younger employees. This dissatisfaction can culminate in high turnover rates, as young professionals seek better opportunities elsewhere. Continual job-hopping, however, can create instability in their career trajectories and result in a fragmented professional development experience. Additionally, companies may suffer from the disruption and loss of institutional knowledge as talented young workers leave. This not only affects the stability of the workforce but also impacts the organization’s ability to cultivate a strong, cohesive company culture.
Moreover, the loss of young talent can be costly for organizations. Recruiting and training new employees is an expensive and time-consuming process, and frequent turnover disrupts team cohesion and productivity. The departure of young talent also means a loss of potential future leaders who could bring fresh perspectives and drive innovation. Organizations that fail to address career gridlocks risk creating a vicious cycle of turnover and dissatisfaction, which ultimately undermines their long-term success. To mitigate these effects, companies need to implement strategies that provide clear paths for career advancement and recognize the contributions of younger employees, even within a hierarchical structure.
Retirement Savings Crisis
The Savings Gap for Generation X
Generation X, those currently approaching the expected retirement age, is notably underprepared for retirement. Contributing factors include economic recessions, changing pension structures, and higher costs of living. Many in this cohort expected to rely on pensions and social security, but these safety nets are increasingly insufficient, driving the need to remain employed longer. The transition from employer-managed pensions to individual retirement accounts has placed the onus of retirement planning on individuals, many of whom lack the financial literacy or resources to adequately prepare for their future.
As a result, Generation X workers often find themselves playing catch-up with their retirement savings, striving to build a nest egg in the face of increasing financial pressures. The higher costs of healthcare, education, and housing have further strained their ability to save. Additionally, the economic recessions of the early 2000s and the 2008 financial crisis eroded many workers’ savings and investments, setting them back significantly in their retirement planning. This financial insecurity forces many to stay in the workforce longer, exacerbating the career gridlock experienced by younger employees. The reality for many Generation Xers is that retirement is not an imminent phase of relaxation but a distant goal that requires ongoing employment.
Broader Economic Implications
The retirement savings crisis has broader implications for the economy at large. As older workers delay retirement, the overall productiveness and innovation of the workforce can be impacted. Younger generations, who might be more adept at leveraging new technologies and introducing fresh ideas, find themselves sidelined, thus affecting company growth and adaptation in a rapidly evolving market. The delayed retirement of Baby Boomers and Generation Xers not only clogs the upper echelons of corporate hierarchies but also stifles the influx of new talent and ideas that are essential for maintaining competitive advantage.
This phenomenon has wider economic consequences. A workforce that is unable to efficiently cycle through its talent pool risks stagnation and reduced dynamism. Companies may struggle to innovate and adapt to new market trends if they rely too heavily on older generations who may be less attuned to the latest technological advancements and consumer preferences. Furthermore, the economic burden of supporting an aging workforce, combined with the challenges faced by younger workers in securing stable, well-paying jobs, can exacerbate social and economic inequalities. Addressing the retirement savings crisis and facilitating smoother transitions between generations in the workforce are crucial for ensuring long-term economic resilience and growth.
Navigating Intergenerational Tensions
Diverse Work Philosophies and Ethics
Intergenerational tensions often arise from differing philosophies on work ethics and values. Older employees may adhere to a more traditional, hierarchical approach, valuing loyalty and longevity. In contrast, younger generations tend to prioritize work-life balance, flexibility, and meaningful work engagements. These differing views can lead to misunderstandings and conflicts within teams. For example, older workers might view the frequent job changes and demand for immediate impact of younger employees as a lack of commitment or respect for the established order, while younger workers could see the conservative approaches of their senior colleagues as resistance to change and innovation.
These tensions can be particularly pronounced in areas such as communication styles, feedback mechanisms, and expectations around job performance. Older generations might prefer face-to-face interactions and formal communication, whereas younger employees may lean towards digital communication and expect faster, more informal feedback. This mismatch in expectations can create friction and hinder collaboration. Additionally, differences in technological adaptability further compound these challenges. Younger generations, being digital natives, may quickly adopt new tools and platforms, whereas older employees might require more time and training to adjust. Recognizing and addressing these gaps is essential for fostering effective and harmonious intergenerational collaboration.
Fostering Harmonious Work Environments
To mitigate these tensions, it is crucial for organizations to foster inclusive and respectful work cultures. Management training on generational diversity, mentorship programs pairing older and younger workers, and open communication channels can help bridge gaps and create a more cohesive workforce. Encouraging cross-generational collaboration ensures that diverse perspectives are valued and utilized effectively. Mentorship programs, in particular, can be valuable, as they allow older employees to share their experience and wisdom while providing younger workers with guidance and support in navigating their careers.
Creating an environment that values diversity and inclusion also involves adopting policies that cater to the needs and preferences of all generational cohorts. This can include flexible work arrangements, opportunities for continuous learning and development, and recognition programs that appreciate the contributions of employees at all levels. Moreover, fostering a culture of mutual respect and understanding, where different viewpoints are acknowledged and appreciated, can reduce intergenerational conflicts and enhance team cohesion. Organizations that proactively address these issues are better positioned to harness the full potential of their diverse workforce, driving innovation and achieving long-term success.
Systematic Changes and Solutions
Addressing career gridlocks and delayed retirements requires fundamental changes in organizational structure. Companies should consider flattening hierarchies, creating more pathways for lateral movement, and offering flexible roles that can accommodate both seasoned professionals looking for reduced responsibilities and younger workers aspiring for growth. By moving away from rigid, vertical career paths and embracing more flexible and dynamic structures, organizations can create an environment where talent can flourish at all levels.
Furthermore, organizations can implement programs that promote continuous learning and career development, allowing employees to acquire new skills and pivot to different roles within the company. This not only helps to retain talent but also ensures that employees remain engaged and motivated. Succession planning is another critical aspect, where companies identify and develop potential future leaders from within their ranks, ensuring a smooth transition when senior employees eventually retire. These initiatives, combined with a supportive corporate culture, can help alleviate the career gridlocks that currently impede younger workers’ progress.
Enhancing Retirement Preparedness
In today’s workplace, it’s common to work alongside colleagues from a range of age groups, from recent graduates to seasoned professionals with years of experience. This mix of generations provides ample opportunities for knowledge sharing and teamwork. However, a multigenerational workforce also brings its own set of challenges, especially for young employees hoping to climb the career ladder. A major issue contributing to these challenges is the delayed retirement of older workers, causing what many refer to as career gridlock.
As older employees extend their careers, often due to financial necessity or a desire to stay professionally active, they occupy senior roles that younger workers aspire to reach. This scenario can create bottlenecks in promotion pipelines, leading to frustration and stagnation among younger staff. Moreover, the experience gap can sometimes result in differing work styles and expectations, complicating team dynamics. To foster a more inclusive and supportive work environment, companies need to address these issues by implementing mentorship programs, cross-generational projects, and clear pathways for career advancement. | <urn:uuid:0a53ef8d-2de7-45cb-b39a-936b5e5937e1> | CC-MAIN-2024-38 | https://hrcurated.com/benefits-and-compensation/are-career-gridlocks-for-young-workers-due-to-delayed-retirements/ | 2024-09-15T06:21:31Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651616.56/warc/CC-MAIN-20240915052902-20240915082902-00113.warc.gz | en | 0.947566 | 2,980 | 2.78125 | 3 |
In a new announcement, ICANN (the Internet Corporation for Assigned Names and Numbers) has warned of an ongoing and significant risk to key parts of the DNS (Domain Name System) infrastructure.
Malicious activity is increasingly targeting the DNS infrastructure. In response, ICANN has called for full deployment of DNSSEC (DNS Security Extensions) across all unsecured domain names.
DNSSEC is a set of security protocols that ensures DNS information isn't accidentally or maliciously corrupted. It protects against cyberattacks by proving the authenticity and integrity of responses from nameservers.
ICANN is one of the main entities responsible for the decentralized management of the Internet. It is committed to ensuring that the Internet's identifier systems are secure, stable, and efficient, and it coordinates the top levels of the DNS to keep it operating in a stable and secure manner.
Attackers use various methods to make unauthorized changes to the delegation structure of domain names, replacing the addresses of the intended servers with the addresses of machines under their control.
However, some of these cyberattacks only work when DNSSEC is not in use. That is why ICANN is calling for full deployment of DNSSEC across all domains.
Implementing DNSSEC does not address every security issue on the internet, but it does prevent attacks in which users are redirected to malicious websites.
“Although DNSSEC cannot solve all forms of attack against the DNS, when it is used, unauthorized modification to DNS information can be detected, and users are blocked from being misdirected,” explained ICANN.
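To see whether a given zone actually benefits from this protection, a resolver can request the zone's DNSKEY records together with their RRSIG signatures and validate them. Below is a minimal sketch using the third-party dnspython library; the domain queried and the resolver address (Google's 8.8.8.8) are arbitrary examples, not anything ICANN prescribes.

```python
# A minimal sketch, assuming dnspython is installed (pip install dnspython).
import dns.dnssec
import dns.message
import dns.name
import dns.query
import dns.rdatatype

zone = dns.name.from_text("example.com")
# Ask for the zone's DNSKEY set and request DNSSEC records (RRSIGs) too
request = dns.message.make_query(zone, dns.rdatatype.DNSKEY, want_dnssec=True)
response = dns.query.udp(request, "8.8.8.8", timeout=5)

if len(response.answer) != 2:
    raise SystemExit("Zone appears unsigned: no RRSIG returned for its DNSKEY set")

# The answer holds the DNSKEY RRset and its RRSIG RRset, in either order
dnskey, rrsig = sorted(response.answer,
                       key=lambda rrset: rrset.rdtype != dns.rdatatype.DNSKEY)

try:
    # The DNSKEY RRset is signed by the zone's own key signing key (KSK)
    dns.dnssec.validate(dnskey, rrsig, {zone: dnskey})
    print("DNSKEY RRset validates: the zone is DNSSEC-signed")
except dns.dnssec.ValidationFailure as err:
    print(f"Validation failed: {err}")
```

A full validating resolver also walks the chain of trust up to the root; this sketch only checks the zone's self-signed DNSKEY set.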
In September last year, ICANN said it would perform a root zone DNSSEC KSK (key signing key) rollover to strengthen the security of the DNS.
Modern RF planning products such as Planet are extremely powerful and capable of amazingly accurate predictions of network coverage, capacity, throughput and quality. But prediction accuracy is only as good as the site database on which it is based. With every new release of Planet, we add new functionality to help engineers plan more accurate networks and do so more efficiently. As RF planning engineers go about their day-to-day jobs, adding new sites to the network and optimizing existing ones, it is often the small, unassuming features of a planning tool that take it from competent to a joy to use.
One such feature is the Antenna Relocation Tool (ART) recently introduced in Planet. It is a small, simple tool, but one that can save an engineer many hours of laborious work.
What is the Planet Antenna Relocation Tool?
The ART tool does what it says on the tin: it relocates antennas that are not quite in the right place. It assesses an antenna against the map data in Planet and moves the antenna slightly to the location where it is most likely installed in the live network. For example, it is unlikely that an antenna is installed 5m inside a building, so modeling an antenna in that location will not produce true-to-life propagation predictions. Similarly, it is also unlikely that an antenna was installed looking directly at an adjacent building. For your RF planning to be accurate, each antenna needs to be moved to its correct location.
The ART tool works on 2D, 2.5D and 3D maps. Where clutter data maps are used, the ART tool will adjust the height of antennas that are currently below the building clutter height. Where 3D building polygon data exists, the ART tool can move antennas to the façades of buildings. It can move antennas along the buildings to achieve the required clearance angles to adjacent buildings and it can consider rooftop furniture on buildings and ensure the antennas are not obstructed by these blocking objects.
The ART tool can be run on one antenna, one site, a cluster of sites, or the entire network, and many parameters within the algorithm, such as the clearance search distance and the maximum relocation distance, can be configured by the user.
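As a rough illustration of the kind of geometric check involved, the sketch below snaps an antenna that falls inside a building footprint to the nearest point on the building façade, subject to a maximum relocation distance. It uses the shapely geometry library, and the function names, the distance limit, and the logic are illustrative assumptions only; Planet's actual relocation algorithm is proprietary and far more complete (clearance angles, rooftop furniture, clutter heights, and so on).

```python
# A rough sketch of facade snapping, assuming shapely for 2D geometry.
from shapely.geometry import Point, Polygon
from shapely.ops import nearest_points

MAX_RELOCATION_M = 10.0  # assumed user-configurable relocation limit

def relocate_to_facade(antenna_xy, building):
    """Snap an antenna that falls inside a building footprint to the nearest facade."""
    p = Point(antenna_xy)
    if not building.contains(p):
        return antenna_xy  # already outside the footprint; nothing to do
    facade_pt, _ = nearest_points(building.exterior, p)
    if p.distance(facade_pt) > MAX_RELOCATION_M:
        return antenna_xy  # too far to move automatically; flag for manual review
    return (facade_pt.x, facade_pt.y)

# Example: an antenna logged 5 m inside a simple rectangular building
building = Polygon([(0, 0), (30, 0), (30, 20), (0, 20)])
print(relocate_to_facade((5.0, 10.0), building))  # -> (0.0, 10.0)
```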
When might the Antenna Relocation Tool be used?
To improve the accuracy of a site database
With many mobile networks over 20 years old now, it is likely that some sites have not had their antenna locations validated in many years and it is likely that map data accuracy has been improved during that time. This means many antennas could do with a location tweak to closer align them to where they are installed in the live network and therefore improve the prediction accuracy of the planning tool.
When map data is upgraded
If you have recently upgraded your map data from 2D or 2.5D data to high-resolution 3D polygon data it makes sense to ensure that antennas are correctly placed on buildings now that this information is available within Planet.
When conducting planning services for an operator
As a network equipment provider or services company, you might receive a site database from a customer to conduct an optimization exercise or perhaps roll-out a new technology such as 5G. It makes sense to run a validation on all antennas in the database and tweak them as necessary to ensure the most accurate predictions possible. This is especially true if you are using higher-resolution 3D data and the customer is not.
When assessing candidate site locations
If you are deploying new sites, you may have 2-3 candidate buildings identified for each site. It makes sense that each of those candidates has their antennas placed in the optimal location so that an Automated Cell Planning (ACP) tool can consider each one accurately. Ensuring candidate sites have antennas moved to the building facades and have sufficient clearance angles to adjacent buildings will ensure that each candidate can be accurately assessed and appropriately ranked by the ACP tool.
So, there you have it, the Antenna Relocation Tool, a simple but efficient tool to automatically update your antenna locations and improve the accuracy of your propagation predictions within Planet.
If you would like to learn more about Planet or the Antenna Relocation Tool, please contact us. | <urn:uuid:a575b19d-1de5-4ae1-b79b-11d13c14f6f2> | CC-MAIN-2024-38 | https://www.infovista.com/blog/4-reasons-why-accurate-rf-planning-needs-an-automated-antenna-relocation-tool | 2024-09-18T22:05:27Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651941.8/warc/CC-MAIN-20240918201359-20240918231359-00713.warc.gz | en | 0.957331 | 840 | 2.578125 | 3 |
AI Case Study
Microsoft's Seeing AI app identifies and describes objects at which the smartphone camera is pointed to assist visually impaired people
The deep learning app uses computer vision to recognise objects and people, or to read text, via a phone or tablet's camera, and describes them to the user. It can also recognise handwriting as well as several bank notes.
Software and IT Services
"The app also recognizes currency, identifies products via their barcodes and, through an experimental feature, can describe entire scenes, such as a man walking a dog or food cooking on a stove. Basic tasks can be carried out directly within the app, without the need for an internet connection."
According to Techly: "In a nutshell, Seeing AI uses your smartphone’s camera to scan the environment and describe it back to you. The app relies on machine learning and cloud computing power to allow users to “see” the world around them.
It works with:
- Short text: Instantly reads short lines of text as soon as they come into view of the camera
- Barcodes: Tells you the name of a product by scanning the barcode
- Document: Helps you copy documents by helping you capture every corner
- People: Describes the general appearance and estimates the age of anyone you take a picture of. It can even remember familiar faces and call them by name if they show up on camera
- Scene: Describes the composition of any photo (still in beta)
- Currency: Identifies different notes to assist with cash payment"
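Seeing AI's own models are proprietary, but the general pattern the quote describes (camera frame in, spoken description out) can be sketched with an off-the-shelf detector. The example below uses a pretrained Faster R-CNN from torchvision purely as a stand-in; the 0.8 confidence threshold and the output phrasing are arbitrary choices.

```python
# Illustrative sketch only: Seeing AI does not use this model or code.
import torch
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
categories = weights.meta["categories"]

def describe(frame, threshold=0.8):
    """Name the objects detected in one camera frame (CHW float tensor in [0, 1])."""
    with torch.no_grad():
        detections = model([frame])[0]
    names = [
        categories[int(label)]
        for label, score in zip(detections["labels"], detections["scores"])
        if float(score) > threshold
    ]
    # A real app would hand this string to a text-to-speech engine
    return "I see: " + ", ".join(names) if names else "Nothing recognized yet"

print(describe(torch.rand(3, 480, 640)))  # random frame; likely "Nothing recognized yet"
```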
In the US, the app was downloaded more than 100,000 times in one year and helped users with over three million tasks.
R&D
The app is designed to help blind and partially-sighted people better navigate the world. | <urn:uuid:29864df8-2720-4555-b19f-82037a9c146d> | CC-MAIN-2024-38 | https://www.bestpractice.ai/ai-case-study-best-practice/microsoft's_seeing_ai_app_identifies_and_describes_objects_at_which_the_smartphone_camera_is_pointed_to_assist_visually_impaired_people | 2024-09-08T00:13:17Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650926.21/warc/CC-MAIN-20240907225010-20240908015010-00813.warc.gz | en | 0.939426 | 363 | 3.03125 | 3 |
As we’ve seen from the latest cyberattacks, old technology can be far more scary (and harmful) than the scariest Sci-Fi movies. “We have the sci-fi depictions of sentient networks that will turn against us, but the problem is, we’ve already built something way too complex for us to be able to manage as a society,” according to Wendy Nather, principal security strategist at Duo Security. “This is a very shaky foundation that we have to clean out and redo.”1
The majority of cyberattacks occur as the result of exploiting a weak spot in legacy software running on legacy machines. “The problem with these outdated systems is that they are (predominantly) no longer supported by the company that created them. You are on your own. If a new vulnerability is discovered by cyber criminals, there will be no security updates released to patch the issue. It’s also unlikely you will be informed of this vulnerability, meaning you are blindly running a system prone to constant attack.”2
These attacks aren’t just perpetrated against small companies. In 2015 and 2016, Russian hackers brought down Ukraine’s power grid, plunging 103 cities and towns into darkness.3
Hospitals are another high-value target for cybercriminals. Medical facilities focus primarily on patient care; technology is often a secondary concern. The WannaCry attack, for example, struck UK hospitals, forcing many to turn patients away. Security expert Janie Larson recounted an incident in which malware had infected EEG machines that were connected to children: disconnecting them to update the software would have proved detrimental to the patients.1 How would you choose between paying the ransom demanded by the hackers and preventing harm to high-risk patients?
So, what can be done to prevent a cyberattack like this?
- Regularly check for updates and patches on all software and devices in your environment (one way to automate part of this check is sketched after this list).
- Be mindful of end of life. Know when your technology will no longer be supported and have a plan in place for when that happens.
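As a small example of automating such checks: the script below lists the Python packages in an environment that have newer releases available, using pip's built-in JSON output. It covers only one slice of an environment and is meant to illustrate scheduled checking, not serve as a complete patch-management solution.

```python
# A minimal sketch of a scheduled update check, runnable from cron or Task Scheduler.
import json
import subprocess
import sys

def outdated_packages():
    """Return installed pip packages that have newer releases available."""
    result = subprocess.run(
        [sys.executable, "-m", "pip", "list", "--outdated", "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

if __name__ == "__main__":
    for pkg in outdated_packages():
        print(f"{pkg['name']}: {pkg['version']} -> {pkg['latest_version']}")
```

Equivalent checks exist for operating systems, firmware, and third-party applications; the point is to schedule them rather than rely on memory.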
If you’re ready to protect your organization, it pays to work with a Managed IT Services/Managed Cloud Services company, like Bryley Systems, to ensure that you’re taking the right steps. Bryley will recommend solutions to eliminate weak links in your security chain, and help you develop an organization-wide policy to help prevent potentially catastrophic data loss and system downtime.
Please contact us at 978.562.6077 or by email at email@example.com.
We’re here for you.
1 Larson, Selena. CNN Tech. “Why old tech is scarier than Hollywood AI.” 30 June 2017.
2 Jones, Ed. CloudTech. “The hidden dangers of legacy technology – and how to resolve them.” 10 October 2016.
3 Perez, Evan. CNN Politics. “U.S. official blames Russia for power grid attack in Ukraine.” 11 February 2016.
What is Kerberos authentication?
Kerberos authentication is a network protocol for verifying user identities, commonly used in single sign-on (SSO) implementations. It was developed by the Massachusetts Institute of Technology in the 1980s and is continuously enhanced to keep up with current security needs. The Kerberos protocol is more secure than New Technology LAN Manager (NTLM) authentication, widely used in the '90s, which sends password hashes over the network during authentication, leaving systems open to pass-the-hash attacks. Kerberos is now the widely preferred authentication protocol and is supported by all major operating systems, including Microsoft Windows, Apple macOS, and Linux.
How does Kerberos authentication work?
Kerberos uses symmetric key cryptography, in which the same secret key is used to both encrypt and decrypt the ciphertext. It also relies on a trusted third party that validates the interactions between the client and the server. Accordingly, aside from the client and server, there is a third participant: the key distribution center (KDC). The Kerberos process involves the following participants:
- The client, or user device, that sends the request to authenticate.
- The Windows domain controller that supports the Kerberos service by hosting the KDC role.
- The application server that hosts the service or resources the user needs access to.
Kerberos authentication steps
[Figure: diagram of the Kerberos authentication flow between the client, the KDC, and the application server.]
To summarize the interaction between the client, KDC, and the authenticating server or domain controller:
- The user submits their credentials when prompted by the client. This triggers an authentication request to the server hosting the Kerberos service.
- The authentication server (part of the KDC) verifies the client against the database of stored credentials. Upon successful validation, the authentication server issues a ticket-granting ticket (TGT), encrypted with the KDC's own secret key, along with a session key encrypted with a key derived from the user's password.
- The client decrypts the session key and presents the TGT to the ticket-granting server (also part of the KDC). The ticket-granting server decrypts the TGT and issues the service ticket. The client then sends this service ticket to the application server to gain access to the required resource. (A toy model of these key relationships follows below.)
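To make those key relationships concrete, here is a deliberately simplified toy model of TGT issuance. Real Kerberos uses its own message formats and ciphers (such as AES256-CTS-HMAC-SHA1-96); Fernet from the Python cryptography package merely stands in for "symmetric encryption under a shared key", and all names here are illustrative.

```python
# Toy model only: not the real Kerberos wire protocol or cipher suite.
import json
from cryptography.fernet import Fernet

krbtgt_key = Fernet.generate_key()  # known only to the KDC
user_key = Fernet.generate_key()    # in real Kerberos, derived from the user's password

def kdc_issue_tgt(username: str):
    """Toy KDC: issue a TGT (opaque to the client) plus a sealed session key."""
    session_key = Fernet.generate_key()
    tgt = Fernet(krbtgt_key).encrypt(
        json.dumps({"user": username, "session_key": session_key.decode()}).encode())
    sealed_session_key = Fernet(user_key).encrypt(session_key)
    return tgt, sealed_session_key

tgt, sealed = kdc_issue_tgt("alice")
session_key = Fernet(user_key).decrypt(sealed)  # the client recovers the session key...
# ...but cannot decrypt the TGT itself: only the KDC holds krbtgt_key.
```

The point of the toy: the client never learns the KDC's krbtgt key, so the TGT stays opaque to it, and the user's password never crosses the network; only material encrypted under a key derived from it does.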
Advantages and disadvantages of Kerberos authentication
As established earlier, Kerberos offers a lot of advantages over NTLM authentication. But it is not without its disadvantages. Some of the advantages and disadvantages include:
| Advantages | Disadvantages |
| --- | --- |
| It is supported by all major operating systems in use. | It has a single point of failure: the KDC. If the only KDC is down, authentication becomes impossible. |
| Tickets are valid only for a limited period, so a stolen ticket gives an attacker just a short window for misuse. | If the KDC is compromised, users' credentials may be exposed to attackers. |
| User passwords are never sent over the network during the authentication process. | Kerberos is susceptible to dictionary attacks: if users choose weak passwords containing common dictionary words, attackers can recover them with password crackers. |
| Credentials entered at interactive logon are reused to authenticate all subsequent service requests; the user does not have to enter them again. | Kerberos requires time synchronization across servers and clients to work effectively. |
Auditing Kerberos authentication service
Every step in the Kerberos authentication process generates an event in the Windows Event Viewer. For example, when a user enters correct credentials and they are verified, event ID 4768 (a Kerberos authentication ticket, or TGT, was requested) is logged. When the credentials cannot be verified, a failure event with ID 4771 (Kerberos pre-authentication failed) is logged. Correlating and analyzing these events, especially failure events, helps detect suspicious behavior: a high volume of failed logon events can indicate a brute-force attack in progress. Scouring through the sheer number of security logs in the Event Viewer by hand is impractical, so the right tool is needed to track malicious events effectively.
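For a quick manual check before reaching for a dedicated tool, recent pre-authentication failures can be pulled from a domain controller's Security log with the built-in wevtutil utility. The sketch below wraps it in Python; the event count (50) and the text formatting are arbitrary choices, and the script must run with administrative rights on Windows.

```python
# A minimal sketch for pulling recent Kerberos pre-authentication failures
# (event ID 4771) from the Security log via the built-in wevtutil CLI.
import subprocess

query = "*[System[(EventID=4771)]]"
result = subprocess.run(
    ["wevtutil", "qe", "Security", f"/q:{query}", "/c:50", "/f:text", "/rd:true"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)  # many 4771s for one account can indicate a brute-force attempt
```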
How ADAudit Plus helps in streamlining Active Directory auditing
ManageEngine ADAudit Plus helps sysadmins audit, analyze, and secure Active Directory and Azure AD, file servers, Windows servers, and workstations. ADAudit Plus is a UBA-driven change auditor providing visibility through over 250 out-of-the-box reports and real-time alerting. With ADAudit Plus, you can:
- Inspect and troubleshoot account lockouts effectively with our account lockout analyzer.
- Analyze Active Directory logon failures using user logon failure auditing tool.
- Spot insider threats and malware attacks in time with insider threat detection.
- Gain comprehensive insights into changes across users, devices, groups, and more via the Azure AD reporting tool.
- Capture unauthorized file changes with the help of our file change monitoring tool for Windows, NAS, Synology, Hitachi, and more.
- Track changes to group policy settings with the GPO change auditor.
- Monitor regular and remote workers' attendance with our employee productivity tracker.
- Achieve data regulatory compliance with ease using Active Directory compliance reporting for the GDPR, HIPAA, PCI DSS, and other mandates.
Try all these features and more for 30 days with a free, fully-functional trial. Alternatively, you can also schedule a personalized demo for a guided walkthrough of ADAudit Plus. | <urn:uuid:e2e05de4-cb72-4a7c-bead-6d6c6b9f7dc6> | CC-MAIN-2024-38 | https://www.manageengine.com/products/active-directory-audit/kb/what-is/kerberos-authentication.html | 2024-09-12T21:50:52Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651498.46/warc/CC-MAIN-20240912210501-20240913000501-00413.warc.gz | en | 0.901921 | 1,171 | 3.4375 | 3 |