Reduced Demand for Coal This Summer
According to the June 2023 “Short Term Energy Outlook,” published by the U.S. Energy Information Administration (EIA), the largest increases in U.S. electricity generation this summer (June, July, and August) will come from solar, wind, and natural gas-fired power plants because of new generating capacity coming online. “The rising generation from these sources will likely be offset by reduced generation from coal-fired power plants,” said the report.
The report noted that natural gas remains the primary source of generation in the electric power sector, and the EIA expects that U.S. natural gas-fired generation will grow by three percent, or 16.7 terawatthours (TWh), this summer compared with last year. Additional natural gas-fired generating capacity and favorable fuel costs are the primary drivers of the EIA’s forecast increase in generation from natural gas this summer.
A large share of the new generating capacity built in the United States over the past few years is powered by solar or wind. The U.S. electric power sector added an estimated 14.5 gigawatts (GW) of solar generating capacity and about 8.0 GW of wind capacity during the 12 months ending May 31, 2023.
Wind power has been the leading source of new renewable electricity generation in recent years and is an especially important component of the generation mix for some regions during the spring months. “We forecast that U.S. wind-powered generation this summer will be 7% (5.8 TWh) higher than last summer,” said the EIA.
Much of the solar-powered generating capacity that has been installed in recent months is concentrated in Texas and California, and the EIA expects that new solar capacity will lead to a 24 percent (10.8 TWh) increase in solar generation this summer compared with last summer.
Many solar projects are also being built with associated battery storage systems to help provide power when solar and wind resources are low. “The electric power sector has added an estimated 5.3 GW of battery capacity in the past 12 months, a nearly 90% increase,” said the EIA.
In addition to the continuing growth in generation from renewable energy sources, the EIA forecasts 4.5 TWh more nuclear generation this summer than in summer 2022, as a result of the planned opening of a new reactor at the Vogtle nuclear power plant. “In contrast to this newly added nuclear capacity, a number of reactors at other nuclear plants have retired in recent years,” said the report.
As noted, the EIA expects the increase in summer generation from solar, wind, and nuclear power to contribute to reduced generation from coal-fired power plants. Between June 2022 and May 2023, about 11 GW of U.S. coal capacity retired, and the EIA expects 15 percent (36.0 TWh) less U.S. coal-fired generation this summer compared with last summer.
Transformative Diagnostic Techniques
The Early Cancer Institute at the University of Cambridge has recently taken a significant stride forward thanks to an £11m anonymous donation, fueling research aimed at detecting cancer at its most nascent stage. Under the stewardship of Prof. Rebecca Fitzgerald, the institute is steering a groundbreaking project that targets the latent phase of cancer—often a period spanning decades—with the intent of revolutionizing how the disease is treated before symptoms even surface. The cytosponge stands as a testament to their innovation, offering a non-invasive method to detect early markers of diseases like oesophageal cancer and demonstrating the prospective reach of the screening tools being developed.
This proactive approach is augmented by the rebranding of the institute as the Li Ka-shing Early Cancer Institute, following substantial support from the globally recognized philanthropist. Amplifying the potential of early detection, researchers are diving into an expansive pool of blood samples, estimated at 200,000 strong, initially amassed for ovarian cancer screenings. A key discovery by Jamie Blundell and his peers has been the identification of genetic precursors to blood cancers such as leukaemia, a considerable length of time before their clinical manifestation. This pivotal breakthrough underscores the viability of early therapeutic action to potentially halt cancer in its tracks.
Pioneering Research for Longevity
At the vanguard of cancer research, the institute sees Harveer Dev’s ambitious work on prostate cancer biomarkers. These markers could be pivotal for identifying aggressive forms early on, in line with the institute’s goal to comprehend cancer genetics, improve risk assessment, and ensure equitable treatment access.
In a tangential yet connected pursuit, longevity research at the institute is invigorated by a bequest from a centenarian donor—highlighting the aim to bolster not only lifespan but also life quality by reducing cancer jeopardy. This more encompassing view aligns with the medical consensus that early detection is key for transformative patient outcomes, catching cancer when it’s most vulnerable. The Cambridge Early Cancer Institute is at the forefront, employing tools like the cytosponge and analyzing blood-based data, setting the stage for a revolution in how we approach cancer screening and early intervention.
By pairing an air-cooled, magnetic bearing chiller and mission-critical air handling units with the sophisticated AI capabilities of open-source digital platforms, data centers can reduce energy use by allowing for a dynamic – rather than static – chilled water setpoint.
The primary objective for any data center is flawless data processing; critical to achieving that objective is high reliability and maximized uptime. However, to mitigate the effects of climate change, another attribute has become equally important – minimized environmental impact. Since maintaining uptime has traditionally required significant energy and resources, these goals seem to be at odds.
Data centers are one of the most energy-intensive types of buildings, consuming 10 to 50 times the energy per floor space of standard commercial office buildings and collectively using about two percent of the nation’s total electricity consumption. As technology leaders make significant net-zero and water commitments, they are seeking new innovations to reliably reduce energy and resource use.
Aside from the electricity that servers consume, HVAC equipment is responsible for as much as 40 percent of electricity use in data centers. To operate sustainably and profitably, it’s critical these facilities optimize HVAC energy efficiency while ensuring data center uptime.
The latest innovations in HVAC and smart building technology make this outcome possible, cutting energy and water usage, carbon emissions, and costs while ensuring the highest reliability. Mission-critical, computer room air handling units paired with an air-cooled, magnetic bearing chiller, digital solutions, and building automation technology can significantly improve data center sustainability while maintaining an environment that supports reliability and uptime.
Maintaining cold aisle temperatures
The temperature of the cold aisle determines how aggressively HVAC equipment and server fans must work, and therefore how much power they consume, to ensure the proper volume of air moves through servers to remove heat. A higher cold aisle temperature results in lower chiller power consumption. A lower cold aisle temperature results in a smaller volume of airflow being needed and less fan power consumption for the air handling unit fans and server fans.
To prevent hot spots in a data center’s white space and ensure uptime, data center cooling strategies have historically favored lower cold aisle temperature and higher air flows — even beyond what’s needed. This excess airflow acts as a buffer for an application that is close to the edge of the requirement. For example, if a certain server has a higher-than-usual load on it, it may starve the cold aisle of cold air and may overheat. The extra airflow provides a safety net, but it also wastes energy.
Advances in server technology have made it possible for the latest generation of servers to operate at high ambient conditions in warmer cold aisles. Servers that can operate in a greater temperature range make it possible for a broader range of acceptable cold aisle temperatures.
By optimizing the cold aisle temperature, a data center can consume the minimum power required to cool it. Currently, this temperature is a static number. However, research shows that a dynamic cold aisle temperature can deliver optimum cooling matched to the ambient conditions and data center load at any given time – and the technology to do it is available now. For instance, when it’s very cold outside, there is an opportunity to use economization or free cooling and simultaneously lower the chilled water setpoint and lower the cold aisle temperatures.
This reduces airflow and power consumption from the computer room air handler as well as server fans. When a data center uses a static chilled water setpoint and a static cold aisle temperature, this opportunity is lost.
On the hottest afternoons of the year, the chiller power consumption is highest because the lift on the chiller is high. Chiller lift refers to the difference in pressure between the refrigerant in the condenser and the refrigerant in the evaporator. At higher lifts, the compressor consumes higher amounts of power to drive the thermodynamic cycle.
The lift may be reduced by raising the chilled water setpoint and the cold aisle temperature for a few hours in the afternoon. This reduces the power consumption by the chiller compressor(s). The industry uses the term “temporary excursion,” from the ASHRAE Thermal Guidelines for Air Cooling of IT Equipment, for this practice.
The pairing of an air-cooled, magnetic bearing centrifugal chiller and mission-critical, computer room air handlers with an open-source digital platform and building automation system can drive cold aisle temperatures that suit data center loads at any given moment. A dynamic chilled water setpoint, and a dynamic cold aisle temperature overall, help optimize a data center’s power consumption without risking the uptime of the data center. This method of continuous optimization could lead to the best real-time energy efficiency of the data center while providing cold aisle temperatures that help maintain uptime.
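The control logic behind a dynamic chilled water setpoint can be sketched compactly. The following Python snippet is a simplified, hypothetical illustration of the concept described above – all temperature thresholds and the load bias are assumptions for demonstration, not vendor logic:

```python
def chilled_water_setpoint_f(outdoor_temp_f, it_load_fraction,
                             min_setpoint_f=55.0, max_setpoint_f=75.0):
    """Return a dynamic chilled water setpoint in degrees Fahrenheit.

    Hypothetical illustration: raise the setpoint on hot afternoons to
    reduce chiller lift, and lower it when cold outdoor air makes free
    cooling cheap. All thresholds here are assumptions, not vendor values.
    """
    if outdoor_temp_f <= 40.0:
        # Free cooling is available: a low setpoint costs little energy
        # and lets cold aisle temperatures drop, reducing fan power.
        setpoint = min_setpoint_f
    elif outdoor_temp_f >= 95.0:
        # Peak heat: raise the setpoint to cut compressor lift,
        # relying on a modest fan ramp-up in the air handlers.
        setpoint = max_setpoint_f
    else:
        # Interpolate linearly between the two regimes.
        fraction = (outdoor_temp_f - 40.0) / (95.0 - 40.0)
        setpoint = min_setpoint_f + fraction * (max_setpoint_f - min_setpoint_f)

    # A heavily loaded white space needs colder water; bias downward.
    setpoint -= 5.0 * max(0.0, it_load_fraction - 0.8)
    return max(min_setpoint_f, min(max_setpoint_f, setpoint))

print(chilled_water_setpoint_f(outdoor_temp_f=98.0, it_load_fraction=0.9))
```

In practice, an AI-based supervisory layer would learn these relationships from telemetry rather than rely on fixed thresholds, but the principle is the same: the setpoint follows conditions instead of sitting at one static value.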
Innovative technology for an evolving industry
Historically, data centers have used chillers and other HVAC equipment that were designed for comfort cooling, not data centers. In comfort cooling, chilled water setpoints are around 44 degrees Fahrenheit. However, server manufacturers are becoming more comfortable with processors and motherboards operating at higher temperatures, which means they can be cooled with chilled water upwards of 80 degrees Fahrenheit.
Innovations in chillers for data center applications make it possible for chilled water setpoints to be anywhere from 70 to 80 degrees Fahrenheit, and sometimes even higher. This reduces power consumption and increases the number of annual hours when free cooling can be used to significantly reduce the amount of power that is consumed by data centers throughout the year.
Designed specifically for data centers, air-cooled, magnetic-bearing centrifugal chillers are optimized for increased temperatures inside the white space and the lifts that are prevalent in the data center industry today. They can deliver chilled water temperatures that are upwards of 80 degrees Fahrenheit and cater to a low lift, resulting in greater energy efficiency.
While most data centers use air-cooled chillers that have free cooling coils to benefit from lower ambient conditions, air-cooled, magnetic-bearing centrifugal chillers can operate at inverted conditions and provide free cooling without additional free cooling coils. Free cooling coils that are added to the condenser of the chiller can lead to inefficiencies and additional pressure drops, as well as heavier equipment and a larger carbon footprint. The weight they add to the chiller is embodied carbon, from the metal that makes up the coils, to heavier shipping and rigging weight, to the need for a building structure that inherently has more steel in it to support additional weight on the rooftop. Using a chiller that is lighter and provides inverted-operation-free cooling positively impacts the carbon footprint of the building itself in many dimensions.
The friction-free, magnetic drive benefits uptime, as well. If power is interrupted, a typical chiller can take up to 10 minutes to restart. In comparison, magnetic bearing centrifugal chillers have much faster compressor restart times and can return to full load in as few as three minutes after power is restored. Because air-cooled, magnetic-bearing centrifugal chillers use a variable-speed drive, there is no inrush current. This means a fast, controlled return to full capacity and setpoint.
To further improve data center sustainability, air-cooled, magnetic-bearing chillers produce notably less sound than many screw chillers, and some use R-1234ze, a refrigerant with ultra-low global warming potential (GWP).
When connected to an AI-based solution, air-cooled magnetic bearing centrifugal chillers combined with highly efficient mission-critical computer room air handlers designed with electronically commutated motors (ECM) can match cold aisle temperature with the real-time load and optimize energy use from moment to moment. Having a dynamic chilled water setpoint and cold aisle temperature optimizes energy use without risk to data center uptime.
Optimizing energy use based on real-time conditions
Intelligent digital services, like those provided by an artificial intelligence (AI)-based solution, integrated with air-cooled magnetic bearing centrifugal chillers and high-efficiency mission-critical computer room air handlers, provide the most optimized energy solution. Coupling these with a dynamic chilled water setpoint and a routine chilled water reset strategy offers even further energy savings. These solutions optimize airflow based on real-time conditions and can significantly reduce a data center’s energy use.
As part of a digital platform, an AI-based solution can serve either an advisory or a supervisory function sitting on top of the building management system (BMS). There, it ensures that data center personnel can evaluate real-time data center loading and requirements in the context of ambient conditions, as well as review historical loading patterns and trends. Equipped with this valuable information, facility managers can ensure the system is operating as efficiently as it can.
A chilled water reset strategy can help reduce energy use during peak demand periods in data centers that experience high ambient temperatures. A chiller’s power consumption depends on lift, and lower lift means less energy use. During the hottest afternoons of the year, a low chilled water temperature is required to cool the data center. To achieve this, lift and power consumption are typically high. However, the chilled water setpoint can be adjusted to a higher temperature for four or five hours in the afternoon to improve system energy efficiency while relying on a slight ramp-up of the high-efficiency ECM fans in the computer room air handlers.
This chilled water reset deviates from standard conditions and isn’t permitted by some service-level agreements. To improve overall efficiency and data center sustainability, it’s important to include chilled water reset for a set number of hours per year in service-level agreements.
Using historical trends, an intelligent cooling system can anticipate and prepare for the next loading change. For example, if a data center consistently generates a lot of heat around 8 AM, the system can be automated to gradually ramp up capacity starting at 7 AM rather than running at 100 percent capacity at 7:59 AM. This gradual ramp-up minimizes system spikes, improves energy efficiency, and can even extend equipment life.
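A minimal sketch of that anticipatory ramp-up logic might look like the following – the load profile, lead time, and times of day are all invented for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical hourly load profile learned from historical trends
# (fraction of peak cooling load, indexed by hour of day).
historical_load = {6: 0.35, 7: 0.45, 8: 0.90, 9: 0.95, 10: 0.95}

RAMP_LEAD = timedelta(hours=1)   # begin ramping one hour early

def target_capacity(now: datetime) -> float:
    """Return the cooling capacity fraction to command right now.

    If history predicts a load jump within the lead window, start
    ramping toward it gradually instead of spiking at the last minute.
    """
    current = historical_load.get(now.hour, 0.5)
    upcoming = historical_load.get((now + RAMP_LEAD).hour, current)
    if upcoming > current:
        # Spread the increase over the lead window: at 7:00 begin
        # stepping toward the 8:00 peak rather than jumping at 7:59.
        minutes_in = now.minute / 60.0
        return current + (upcoming - current) * minutes_in
    return current

print(target_capacity(datetime(2024, 1, 15, 7, 30)))  # midway through the ramp
```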
A combination of digital solutions, connected equipment, and building automation technology can make data centers smarter and more sustainable. These solutions allow facility managers to continuously monitor equipment health and energy consumption in real-time while automating key processes. Some solutions also offer easy-to-read dashboards that display trends and notify assigned personnel when set parameters deviate from assigned values. That allows facility teams to address issues, identify opportunities for energy savings, and drive outcomes that matter most.
Improving energy efficiency and uptime – simultaneously
Technology leaders have very strict sustainability goals with aggressive deadlines to reach them. It’s critical that data centers are equipped with innovative solutions to help achieve those goals as quickly as possible. Purposefully designed air-cooled, magnetic bearing centrifugal chillers combined with mission-critical, computer room air handlers driven by artificial intelligence can optimize sustainability according to real-world, white space conditions and significantly improve data center efficiency while maintaining uptime.
As servers become more capable of operating at high ambient conditions and data center owners and operators become more comfortable with warmer cold aisles, it’s essential that HVAC equipment be ready to operate at higher chilled water setpoints and higher cold aisle temperatures. This shift in a long-held design mindset presents the opportunity to create a smarter, more sustainable data center architecture that supports overall energy efficiency and reliability. Air-cooled, magnetic-bearing centrifugal chillers, mission-critical computer room air handlers, and a logic-based BMS can grow and evolve with data centers, providing continuous improvement today and tomorrow.
In 2014, a refrigerator was implicated in a spam attack involving the distribution of over 750,000 e-mails! The botnet had incorporated about 100,000 devices as part of the attack. This was framed as the first documented attack involving Internet of Things (IoT) devices. In 2015, researchers exposed security holes in Wi-Fi enabled Barbie dolls and Jeep Cherokees. Fast forward to 2016, and an attack that exploited IoT device vulnerabilities and poor network architecture resulted in major brands like Netflix and Twitter being severely impacted. This pattern shows that as long as vulnerabilities exist and bad actors persist, such attacks will likely continue to grow in frequency and impact. (You can read my colleague and DNS guru’s perspective on his blog entitled “How to Defend Against the Next DDoS Attack.”)
The impact of such attacks has grown from a nuisance (spam) to real bottom-line impact – for companies that make their money on advertising or e-commerce, network downtime translates immediately to lost revenue. As the use of such devices grows in segments like healthcare and mining, the impact goes beyond money – it could affect human life! While one could argue that eliminating attacks entirely is a myth, it is incumbent upon all of us to use every possible avenue to prevent such attacks, detect them quickly when they do happen, and be set up to respond rapidly when we discover them.
On Wednesday, November 2nd, Infoblox’s Chief DNS Architect, Cricket Liu, will share his insights on the recent DDoS attack and discuss best practices during two webinars: one at 3 p.m. GMT (8 a.m. Pacific Time) and the second at 5 p.m. GMT (10 a.m. Pacific Time), entitled “Don’t Be the Next DDoS Attack!”
Cricket will discuss:
- Best practices for deploying a DNS architecture
- The role DNS security plays in your network infrastructure
- The pitfalls you should avoid
He will continue the conversation with a Live TweetChat from 11:15-11:45am PDT. You can ask questions or follow the conversation by using the hashtag #DontBeNextDDoS.
In today’s digital landscape, the security of application code is paramount to protect sensitive data, prevent unauthorized access, and safeguard against cyber threats. As technology advances, so do the techniques used by malicious actors to exploit vulnerabilities in software. Therefore, developers must implement robust security measures to fortify their application code against potential attacks.
Here are some best practices and strategies to enhance the security of application codes:
1. Secure Coding Standards: Adhering to secure coding standards is the foundation of building secure applications. Developers should follow established guidelines such as OWASP (Open Web Application Security Project) Top 10 and CWE (Common Weakness Enumeration) to mitigate common vulnerabilities like injection attacks, cross-site scripting (XSS), and insecure deserialization.
2. Input Validation and Sanitization: Validate and sanitize all user inputs to prevent injection attacks, such as SQL injection and XSS. Use input validation techniques such as whitelisting and regular expressions to ensure that only expected data formats are accepted, thereby reducing the risk of malicious input (see the code sketch after this list).
3. Authentication and Authorization: Implement strong authentication mechanisms, such as multi-factor authentication (MFA) and OAuth, to verify the identity of users accessing the application. Additionally, enforce proper authorization controls to restrict access to sensitive resources based on user roles and privileges.
4. Data Encryption: Encrypt sensitive data both at rest and in transit to prevent unauthorized access. Utilize strong encryption algorithms and secure key management practices to safeguard data confidentiality. Implement Transport Layer Security (TLS) protocols for secure communication between the application and its clients.
5. Secure Configuration Management: Maintain secure configurations for all components of the application stack, including web servers, databases, and third-party libraries. Disable unnecessary services, apply patches promptly, and configure security settings according to industry best practices to reduce the attack surface.
6. Secure Development Lifecycle (SDLC): Integrate security into every phase of the software development lifecycle, from design and development to testing and deployment. Conduct regular security assessments, code reviews, and penetration testing to identify and remediate security vulnerabilities early in the development process.
7. Dependency Management: Monitor and manage dependencies on third-party libraries and components to mitigate the risk of supply chain attacks. Keep dependencies up-to-date by applying security patches and conducting periodic vulnerability scans to detect and remediate known vulnerabilities.
8. Error Handling and Logging: Implement robust error handling mechanisms to gracefully handle exceptions and prevent information leakage that could aid attackers. Utilize centralized logging and monitoring solutions to track and analyze application logs for signs of security incidents or abnormal behavior.
9. Security Training and Awareness: Provide security training and awareness programs for developers to educate them about common security threats and best practices. Foster a security-conscious culture within the development team to prioritize security throughout the software development lifecycle.
10. Continuous Improvement: Embrace a culture of continuous improvement by regularly evaluating and enhancing the security posture of the application code. Stay informed about emerging security threats and evolving best practices to adapt and respond effectively to new challenges.
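As a concrete illustration of the input validation guidance in item 2, here is a brief, hypothetical Python sketch that combines whitelist validation with a parameterized SQL query. The table schema and username policy are invented for demonstration:

```python
import re
import sqlite3

# Whitelist pattern: usernames may contain only letters, digits, and
# underscores, and must be 3-32 characters long (an assumed policy).
USERNAME_PATTERN = re.compile(r"^[A-Za-z0-9_]{3,32}$")

def find_user(conn: sqlite3.Connection, username: str):
    """Validate input against a whitelist, then query with a
    parameterized statement so the value is never interpreted as SQL."""
    if not USERNAME_PATTERN.fullmatch(username):
        raise ValueError("invalid username format")
    # The ? placeholder binds the value safely; building the statement
    # by string concatenation would reintroduce the injection risk.
    cursor = conn.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    )
    return cursor.fetchone()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
conn.execute("INSERT INTO users (username) VALUES ('alice')")
print(find_user(conn, "alice"))        # (1, 'alice')
# find_user(conn, "x' OR '1'='1")      # rejected by the whitelist
```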
By incorporating these best practices and strategies into the development process, organizations can significantly enhance the security of their application code and mitigate the risk of security breaches and cyber attacks. Remember, security is not a one-time effort but an ongoing commitment to protecting sensitive data and preserving the integrity and trustworthiness of applications in an increasingly interconnected world.
Net operating assets (NOA) are the operating assets of an organization minus its operating liabilities. NOA is determined by reformatting the balance sheet to separate all operating activities from financing activities, so an organization’s balance sheet is the starting point for calculating it.
Net operating income is calculated by subtracting operating expenses from gross income. These expenses include the cost of paying employees as well as the cost of the goods and services the business purchases in the course of its operations.
Gross income itself can be measured as total revenues minus the direct costs of generating them; deducting operating expenses from that gross income then yields net operating income.
A related productivity measure, gross income per employee, is calculated by dividing gross income by the number of people in the company.
Capitalization is used to identify the value of a business’s assets. It is determined by allocating costs to fixed assets or other identifiable assets. Fixed assets include items such as equipment, property, and buildings, while current assets include items such as accounts receivable.
Goodwill is an intangible asset, reflected in the difference between the total value of a corporation and its net tangible assets, which include cash and accounts receivable as well as tangible fixed assets such as land and buildings.
The capitalized cost of assets is the cost of producing a tangible asset, including the cost of land, buildings, equipment, machinery, and supplies. The total cost of production is the amount of money spent on raw materials, labor, and overhead charges. Total cost of sales is the total cost of producing the goods and services an organization sells.
Net worth is total assets less total liabilities: the value of everything an organization owns minus the value of everything it owes.
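A simple worked example in Python (all figures invented for illustration):

```python
# Hypothetical balance sheet figures, in thousands of dollars.
total_assets = 850          # cash, receivables, property, equipment
total_liabilities = 430     # loans, payables, accrued expenses

net_worth = total_assets - total_liabilities
print(net_worth)            # 420 -- the organization's net worth (net assets)

# Net debt looks only at borrowings, net of liquid holdings.
total_debt = 300
cash_and_equivalents = 120
net_debt = total_debt - cash_and_equivalents
print(net_debt)             # 180
```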
Business assets fall into three categories: fixed assets, variable (current) assets, and identifiable intangible assets. Fixed assets are tangible holdings such as land, buildings, and equipment.
Securities issued by an organization include equity instruments such as stock, which carry rights to dividends, and debt instruments such as bonds, debentures, treasury notes, and mortgage notes, which carry rights to interest earned.
A business’s net worth, also called its net assets, is calculated by subtracting total liabilities from total assets. Net debt is a narrower figure: the organization’s total debt minus its cash and liquid holdings. Comparing net assets against net debt shows how much of a company’s value would remain if its borrowings were paid off.
Net assets are used by financial institutions to determine the ability of a firm to pay its debts. They are also used by insurance companies as a tool to determine the credit rating of a firm.
It is important to note that two types of assets constitute a firm: tangible and intangible. Intangible assets are those that have no physical form, such as patents, trademarks, and goodwill.
When the value of its tangible assets decreases, a firm’s ability to meet its liabilities weakens. When a firm’s net assets increase, its ability to pay its debts strengthens.
Antivirus software is a critical component of security for safeguarding devices and data in the modern digitally-connected landscape. As malware and cyber threats grow more advanced and ubiquitous, robust antivirus protections provide a fundamental layer of defense.
A. Brief overview of antivirus software
Antivirus software refers to programs designed specifically to detect, block, and remove various forms of malicious software or “malware” including viruses, worms, trojans, spyware, adware, ransomware and more. It utilizes techniques like signature-based scanning, heuristic analysis, emulation, and more to identify threats and prevent infection or exploitation.
Antivirus protects devices like desktops, laptops, smartphones and servers by scanning files, memory, boot records, firmware, and other potential infection points on a regular basis to flag malware. When threats are discovered, the antivirus will attempt to quarantine, delete or clean the associated files or system alterations to remove the infection. As long as the antivirus signatures and security definitions stay updated, it serves as an effective shield against most common malware.
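At its core, signature-based scanning is a lookup problem: compute a fingerprint of each file and compare it against a database of known-bad fingerprints. The Python sketch below is a deliberately minimal illustration of that idea – production engines use far richer signatures and heuristics than whole-file hashes. The digest shown is the widely published SHA-256 of the harmless EICAR antivirus test file:

```python
import hashlib
from pathlib import Path

# Tiny stand-in for a signature database of known-malware SHA-256 digests.
KNOWN_MALWARE_HASHES = {
    "275a021bbfb6489e54d471899f7db9d1663fc695ec2fe2a2c4538aabf651fd0f",
}

def scan_file(path: Path) -> bool:
    """Return True if the file's hash matches a known signature.

    Reads the whole file for simplicity; real scanners stream data.
    """
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest in KNOWN_MALWARE_HASHES

def scan_directory(root: Path) -> None:
    """Walk a directory tree and report any signature matches."""
    for path in root.rglob("*"):
        if path.is_file() and scan_file(path):
            # A real product would quarantine, clean, or delete here.
            print(f"Threat detected: {path}")

scan_directory(Path("."))  # example: scan the current directory tree
```

The key limitation discussed later in this article is visible in the structure itself: a file whose hash is not already in the database passes untouched, which is why signature-only protection lags novel threats.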
B. Importance of antivirus software in today’s digital landscape
With the average cost of a data breach now reaching $4.35 million in 2022, plus potential reputational damage and operational disruption, protecting infrastructure through security tools like antivirus has become imperative for organizations and individuals alike.
Cybercriminal efforts are ever-evolving, with 350,000 new malware samples observed daily. Phishing, drive-by downloads, malvertising and other social engineering ploys uniquely target human vulnerabilities rather than purely technical defenses. Rigorous security hygiene through antivirus, firewalls, access controls and user education serves as a crucial safety net mitigating risk and preventing incidents in the face of this relentless onslaught of attacks leveraging malware as a preferred infection vector.
Failing to maintain comprehensive antivirus protections leaves digital assets dangerously exposed at a time when data security carries elevated importance, tangibly impacting bottom lines and even national security interests.
II. Personal Antivirus Software
Antivirus tools geared toward home users and designed to run locally on consumer devices tend to prioritize ease of use, minimal impact on system performance, and integration with other security utilities over advanced protection capabilities more common in enterprise-level software suites better resourced to handle elevated complexity.
A. Scope of protection
Personal antivirus solutions focus on safeguarding individual devices most vulnerable to threats like laptops, desktops, tablets, and smartphones rather than expansive networks of systems. Protection revolves around scanning storage media, blocking malicious sites/files, and monitoring system behavior for signs of infection or exploitation activity.
1. Individual devices (laptops, PCs, smartphones)
Personal antivirus runs locally on devices, shielding operating systems from infection via scans of internal and external storage media, email attachments, downloads, and even web traffic in some cases. Protection scales to the hardware capabilities of each device.
B. Deployment

Rather than centralized management, home antivirus tools get installed or activated directly on each device through self-service apps or activation using license keys. Cloud-linked dashboards may provide visibility across protected endpoints for monitoring and maintenance.
1. Installed on each device
Users download antivirus apps or suites onto laptops, phones, and other consumer gadgets independently, maintaining software updates and otherwise configuring preferences device-by-device.
C. Security features
Basic protections akin to malware/virus scanning, email/download monitoring, firewall activity blocking and web filtering typically comprise the core capabilities of consumer-grade antivirus platforms as opposed to the full-fledged endpoint detection and response stacks seen in business contexts.
1. Basic malware and virus protection, firewall
At minimum, home antivirus will scan files/applications on disk and memory for malware signatures, block connectivity for suspicious traffic patterns, inspect web URLs accessed through browsers, and match other basic indicators of compromise associated with commodity malware and mass threats. But more advanced protections largely remain absent.
III. Business Antivirus Software
Enterprise-focused antivirus solutions trade enhanced protection breadth, depth and manageability for substantially greater cost and configuration complexity – a tradeoff the much higher stakes of organizational malware disruption and data theft warrants.
A. Scope of protection
Rather than just safeguarding a handful of consumer devices, business antivirus solutions secure potentially thousands of networked endpoints across entire companies including servers, user workstations, remote systems, cloud infrastructure, mobile/IoT gear and more under a “security umbrella”.
1. Networked devices (servers, desktops, laptops, mobile devices)
Robust enterprise antivirus leverages domain integration, group policy administration controls and centralized dashboards to monitor and enforce advanced malware protections for sometimes tens of thousands of business devices simultaneously including mission-critical servers alongside employee laptops, mobile phones enrolled via MDM and everything in between.
B. Deployment

Managed through server-based centralized consoles, enterprise antivirus gets pushed to endpoints across the network, allowing for remote installation, updates, configuration changes, scan scheduling and security management en masse rather than piecemeal.
1. Centralized management and deployment, often cloud-based
IT administrators send out antivirus platform updates, rule changes and installation commands through unified portals as opposed to local self-service, enabling consistent security policy enforcement under centralized authority and oversight at enterprise scope. Cloud-hosted management capabilities further aid unified control of distributed environments.
C. Security features
Business-oriented antivirus platforms incorporate advanced detection techniques like machine learning-driven behavioral anomaly detection, deceptive sandbox environments, firmware scanning, and deep integration with other terminal security tools to identify sophisticated threats which trip up consumer-grade protections.
1. Advanced features like sandboxing, real-time protection, and remote management
Commercial antivirus graduates beyond basic signature scanning to add proactive capabilities purpose-built to flag zero-day exploits like:
- Cloud-augmented malware intelligence updating protections against new attack patterns in real-time
- Memory injection interception stopping stealthy in-memory payloads
- Decoy sandbox environments tricking behavior-based threats into revealing themselves
- Encrypted traffic inspection defusing HTTPS-masked infections
- Full disk and firmware scanning unmasking deeply embedded rootkit infections
- Remote containment allowing immediate isolation of infected nodes
- Device control policies checking unauthorized peripheral usage
- Security activity event centralization and automated alerting
These enterprise-level features recognize and halt advanced threats consumer antivirus misses, though at a proportionally elevated cost and skill investment to operate effectively.
IV. Free vs Paid Antivirus Software
Weighing whether to invest in premium antivirus capabilities requires examining the constraints of freeware against the expanded protection horizons commercial suites unlock to make the right choice per individual tolerance of risk versus cost.
A. Key differences between free and paid antivirus solutions
Free antivirus protection leverages signature scanning alongside cloud intelligence about prevalent threats to identify and isolate common malware strains while paid options incorporate advanced heuristics, machine learning and other enhanced techniques to catch sophisticated threats zero-day freeware fails to recognize.
B. Pros and cons of free antivirus software
Free antivirus strikes an appealing balance for cash-conscious consumers…with some substantial caveats:
1. Limited protection, mostly reactive
Freeware antivirus relies heavily on static signature libraries to pinpoint only previously documented malware strains. Until a threat’s signature gets identified and added to definitions, zero-day exploits often slide right by. Protection lags threats rather than proactively intercepting.
2. Cannot detect unknown threats
Without robust heuristic scanning logic, sandboxing environments, malware analytics or other advanced detection mechanisms, novel evasive malware outside cybercriminal commodity kits easily defeats free antivirus lacking the context to flag such threats as suspicious.
C. Pros and cons of paid antivirus software
Paid antivirus delivers markedly expanded security scale yet carries a recurring financial cost factoring into the value proposition:
1. Advanced security features, proactive protection
Commercial suites contain a diversity of complementary detection approaches from attack pattern analytics to behavioral anomaly monitoring which expose even unique zero-day threats exhibiting the hallmarks of malware without matching any specific signature. Prevention occupies a top priority.
2. Protection from unknown threats
Going beyond surface level scans, multi-layered paid endpoint protection platforms leverage isolation environments, deep packet inspection, process DNA mapping and other techniques to reveal novel threats before they have a chance to spread or trigger catastrophe.
D. Choosing the right antivirus software for your needs
Ultimately both free and paid business and consumer antivirus options bring distinct advantages and disadvantages. Prioritizing cyber incident protection investments based on specific risks posed by potential malware disruption offers the most effective way to navigate the complex modern threat landscape.
Evaluating factors like sensitivity of accessible data, regulatory compliance burdens, frequency of networked access granting infection vectors entry, human vulnerability to social engineering, effectiveness of complementary security controls like firewalls or backups, and overall tolerance for malware-linked business disruption guide wise investment.
Home users face far lower stakes around potential malware incidents relative to heavily networked enterprise environments with extensive sensitive data stores and mission-critical infrastructure to defend. As such, paid solutions make obvious sense for organizations but potentially overkill for cautious individual consumers even given limitations of freeware. Properly weighing these tradeoffs determines ideal antivirus posturing.
V. Anti-Malware vs Antivirus Software
While the terms get used interchangeably, some subtle feature differences exist between anti-malware and antivirus software in terms of malware scope, protection capabilities and deployment – distinctions that can inform specialized security tooling choices.
A. Differences between anti-malware and antivirus software
Traditionally antivirus tools focus specifically on targeting computer viruses in particular while anti-malware solutions take a broader approach to combatting viruses alongside worms, trojans, spyware, adware, ransomware, rootkits and other threats under the wider malware umbrella. Otherwise anti-malware apps closely resemble antivirus functionally.
B. When to use anti-malware software
The more expansive purview of anti-malware software makes it appealing for consistent, general purpose malware protection on endpoints likely to encounter multiple threat varieties. Particularly when antivirus gaps may leave spyware, adware or ransomware protection lacking, anti-malware picks up the slack.
C. When to use antivirus software
Antivirus may better suit specialized use cases like shielding servers hosting sensitive data from specifically virus-based threats, where the somewhat broader focus of anti-malware risks performance overhead without notably expanding protection given the more limited risks lacking heavy exposure to web and email-based vectors more likely to introduce diverse malware strains.
For broader endpoint protection against an array of attack vectors, anti-malware solutions carry an advantage in threat scope. But for streamlined, performance-optimized scanning against common infection vectors on infrastructure like servers, antivirus can make more sense assuming protections against other malware remain covered through layered controls.
VI. Conclusion

A. Recap of the importance of antivirus software
Antivirus software enables fundamental protections which defend devices and networks against prevalent cyber threats attempting to infiltrate environments using malware as the exploit vehicle of choice. Securing endpoints via antivirus dramatically reduces attack surface area and hardens systems against compromise, preserving functionality and trustworthiness of infrastructure both locally and at enterprise scale.
B. Final recommendations for choosing the right antivirus software for your needs
Carefully evaluating risk factors like sensitivity of accessible data, likely malware infection vectors based on system connectivity and user behavior, effectiveness of auxiliary defenses like firewalls and backups, regulatory mandates, and overall disruption tolerance allows methodically deciding where investing in advanced paid antivirus capabilities makes prudent sense versus relying on consumer freeware options.
For cash-conscious home users already practicing cautious computing habits, free antivirus can provide “good enough” security. But businesses managing extensive sensitive data stores and mission-critical infrastructure face far higher stakes around potential malware incidents – meriting proportional investments into robust, proactive threat detection and response via commercial-grade antivirus suites purpose-built to lock down vulnerabilities at enterprise scale.
Regardless of solution chosen, maintaining reliable antivirus protections adapted to match the evolving threat landscape through vigilant updates remains non-negotiable for sustaining adequate security posture in our abundantly interconnected world where malware dangers lurk around every corner.
Data Governance has emerged as one of the top priorities for organizations across the globe. Given this reality, organizations must handle their data consistently to support business outcomes.
So, what is Data Governance for enterprises, and why is it important?
Data Governance is a set of policies and rules implemented in an organization for deciding control and authority over its data assets, which means it goes a long way in shaping tactical and operational decisions in most enterprises.
Usually, a proper Data Governance program involves using agreed-upon models and determining who can use the company’s data assets and under what circumstances.
Data executives across the globe have begun prioritizing enterprise Data Governance, given that regulations like GDPR and CCPA are increasingly being implemented.
Efficient Data Governance is crucial for the integrity, security, availability, and usability of the data. So, it makes a lot of sense to ensure that your organization has the appropriate certifications to get it right.
Below see a list of certifications that should be included for proper Data Governance:
- AICPA SOC 2 (Type II): This certification makes sure that the data is secure, available, and that it maintains its integrity.
- ISAE 3000: This is a certification that is instrumental to the protection of non-financial data.
- PCI DSS–PCI SSC: For the integrity of payment transactions, this certification is crucial.
- ISO/IEC 27018:2019: Securing PII starts with this certification.
- ISO 27017:2015: Ensuring that this certification is present is a great way to safeguard your cloud services.
- NIST Cybersecurity Framework: As a result of this certification, the data security risk is substantially low.
- US Privacy Shield: This certification is mainly aimed at EEA citizens. Their complaints can be seamlessly resolved.
- ISO 27001:2013: This particular certification has the purpose of maintaining the integrity of the information security management system.
Enterprise Data Governance: How to Implement an Effective Framework
Due to the various enterprise Data Governance challenges, maintaining the safety, quality, and integrity of your data assets can be a daunting task.
Therefore, employing the following seven steps in your enterprise Data Governance policy can go a long way in strengthening your Data Governance efforts throughout your company.
Step 1: Focus on the areas that require improvement
You may be tempted to deal with all the data issues together. But, a surefire way to maintain the integrity of your data is to target one or two assets that provide the maximum scope for data asset improvement. When you selectively weed out Data Governance issues, you will find that it provides you with a sound foundation for enabling Data Governance across the company.
Step 2: Leverage the power of data to the fullest
Data needs to be readily accessible if it is to be appropriately governed. Using various integration technologies and Data Governance best practices, modern companies can make sure of this despite the data existing in diverse forms.
Step 3: Make rules, roles, and responsibilities
Ensure that the people who work with data in your company are governed by an optimal process that safeguards data integrity, with clearly defined rules, roles, and responsibilities.
Step 4: Ensure that the available information is high-quality
For the effectiveness of a Data Governance undertaking, one of the crucial requirements is data integrity. You can use the following systematic approach (a brief code sketch follows the list):
- Profiling: This refers to comparing your data to a predefined metric so that you can gauge if it’s good or bad.
- Parsing and Standardizing: This refers to the process of validating and correcting the data in accordance with the industry and company standard. You mainly check for things like case standardization and name formats.
- Enrichment: The idea of enrichment is simple. You garner and enhance your existing data using new data, such as geocode data.
- Monitoring: This step is important if you want Data Quality to be consistent.
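To make the parsing, standardizing, and profiling steps concrete, here is a small, hypothetical Python sketch; the records, the name and phone standards, and the completeness metric are all invented for illustration:

```python
import re

# Hypothetical customer records awaiting standardization.
records = [
    {"name": "  aLiCe SMITH ", "phone": "(555) 010-7788"},
    {"name": "bob jones", "phone": "555.010.9921"},
    {"name": "", "phone": "not a number"},
]

NON_DIGITS = re.compile(r"\D")

def standardize(record):
    """Parse and standardize a record per an assumed company standard:
    title-case names, phone numbers as bare 10-digit strings."""
    name = " ".join(record["name"].split()).title()
    digits = NON_DIGITS.sub("", record["phone"])
    phone = digits if len(digits) == 10 else None  # fails validation
    return {"name": name or None, "phone": phone}

cleaned = [standardize(r) for r in records]

# Profiling: compare the data against a simple completeness metric.
complete = sum(1 for r in cleaned if all(r.values()))
print(f"{complete}/{len(cleaned)} records meet the quality bar")  # 2/3
```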
Step 5: Set up infrastructure that ensures total accountability
It is worth mentioning that unless people are held accountable, your asset quality cannot be ensured. For this, you need to assign “owners” for each of your assets and provide them with the right technology for its management because manual processes are prone to errors even if they are well-monitored.
Step 6: Move to a master data-based culture at your enterprise.
Another useful technique to incorporate into your Data Governance program is the process of moving from a transaction data–based culture to a master data-based culture. With proper Master Data Management, companies can ensure much better Data Governance.
Step 7: Develop a feedback mechanism for the sake of process improvement.
It is crucial to have a feedback mechanism built into the process to allow for the constant improvement of Data Governance initiatives.
For this final step, graphical, real-time Data Governance tools can enable the feedback and enhancement cycle. Doing this will give you a clear idea of how the Data Governance initiatives are working on your information assets to make sure it is running as per your desire.
Proper Data Governance strategy is crucial for a company to effectively handle data. But, at times, it can be difficult.
While each organization has its unique challenges, the framework provided above can ensure the implementation of an efficient enterprise Data Governance structure.
Originally published at Dataversity
NTP (Network Time Protocol) is one of the most common protocols in IP networks and is implemented in most network devices. Its role is to synchronize the device’s time and ensure it is up to date so various time-based mechanisms can function.
While the protocol seems to be highly secure, it is exposed to several known exploits. One of them, which is relatively easy to carry out, is tampering with the received time. The tampered NTP message “looks and feels” completely normal but contains an inaccurate time. As a result, the requesting device is desynchronized from the correct time, which can disrupt its operation.
For example, think about your laptop that should send a notification 10 minutes before a meeting. It will send it, but at a different time, and you will miss the meeting. A more critical incident could be that your database server misses its daily backup, causing your organization to lose data. NTP desynchronization could even be used to harm your entire organization’s network.
NTP depends on public servers that should be trusted, accurate, secured, and relied on for sending the accurate time. Therefore, many NTP attacks are based on communicating with an illegitimate public server. It has also been found that several NTP pools don’t authenticate their service providers. Thus, it is possible that hosts in the pool are acting as “double agents”, providing incorrect information and attacking the users.
Cynamics’ next-gen NDR collects small network samples (less than 1%) yet covers 100% of the network. Specifically, our AI threat prediction technology constantly discovers NTP exploits in clients’ networks. A common issue is endpoints querying NTP from public IPs worldwide that actually have nothing to do with NTP. Further research revealed that these endpoints had malware changing their NTP settings to use a malicious public host for NTP communications – as a command and control channel, and even for focused data leakage. In other cases, the NTP issue resulted from a naive misconfiguration.
Cynamics recommends ensuring that all your NTP communications are configured only with highly trusted NTP pools. The best practice in North America is the NIST NTP servers. Better still, create a dedicated NTP server that is responsible for synchronizing your network’s devices instead of having each device query NTP directly.
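For example, on a Linux host running chrony, restricting time queries to NIST takes only a couple of lines. Treat this as an illustrative starting point rather than a complete hardening guide:

```
# /etc/chrony.conf -- synchronize only against NIST's public time service
server time.nist.gov iburst

# Record clock drift so corrections survive restarts
driftfile /var/lib/chrony/drift
```

Removing all other server and pool directives ensures the host never accepts time from an unvetted source.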
Cynamics clients see their entire network like nothing before, not leaving any part behind as a blindspot. Reach out to us today to begin your free trial to mitigate NTP and other threats in your network.
Syntax: connect [<servername>]
The CONNECT command initiates a connection to an FTP server. If no site is specified, a dialog box will prompt for this information. Unlike OPEN, this command does not prompt for user name and password. This information must be entered manually. For example, a command sequence like the following (the host name and credentials shown are placeholders) would connect you to an FTP server that does not use a passthrough server (firewall):
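```
CONNECT ftp.example.com
USER anonymous guest@example.com
```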
Note that passwords appear as text on screen when you enter them directly in the command window. Because the OPEN command prompts for passwords with dialog boxes that do not display password text, that command is preferable for most connections. Use CONNECT if you are troubleshooting connections through a firewall.
Data observability is a holistic approach that automates the identification and resolution of data problems, thereby simplifying data system management and improving performance. Its goal is to enhance the reliability and credibility of insights derived from data, all while ensuring data availability.
Comprehensively, it involves understanding and expertly managing the health and performance of data, pipelines and critical business processes. Data observability gives organizations a detailed view of their data ecosystem, providing insights into data consumption, meticulous protection and precise alignment with relevant policies and regulations.

Safe, transparent and traceable
Organizations are accumulating vast amounts of data at an unprecedented pace. Critical to today’s business environment, data serves as the lifeblood of decision-making processes, fueling analytics, machine learning and business intelligence initiatives. However, the value of data is directly proportional to its availability and quality.
This is where data observability tools come into play. Supported by robust data quality measures, data observability can be the difference between actionable insights and unreliable outcomes. It can act as a strategic imperative for organizations aiming to extract maximum value from their data assets.
Data observability enables organizations to:
- Identify and resolve data issues quickly
- Optimize data availability, performance and capacity
- Ensure data quality and reliability
- Mitigate risks and safeguard reputation
The term data observability is relatively new; however, the concept has been around for decades and has become increasingly important in the data-driven era. In its early stages, data observability emerged as a response to the growing complexities of data-driven operations. Early adopters recognized the need to monitor data pipelines for performance and data quality issues. However, as technology rapidly advanced and organizations embraced a more comprehensive data-driven approach, the scope of data observability expanded. With big data, cloud computing and modern analytics, data observability evolved to encompass the holistic monitoring of data ecosystems, including data sources, transformation processes and business context.
Today, data observability is more than just a tool for detecting problems. Organizations use it as a strategic asset to ensure data reliability, compliance, security and operational efficiency. Here are some specific examples of how data observability is being used to improve business processes and outcomes:
- A retail company uses data observability to identify and fix data quality issues causing inaccurate product recommendations.
- A financial services company uses data observability to detect and prevent fraud.
- A healthcare company uses data observability to identify and address trends in patient care.
- A manufacturing company uses data observability to optimize production processes and reduce waste.
As data grows in volume and importance, data observability will continue to evolve, adapting to new technologies, regulations and the increasing complexity of data landscapes.
Artificial Intelligence (AI) is increasingly transforming the field of data observability by enabling a thorough understanding of data and creating more operational efficiency, reliability and security in data infrastructure. Here are a few examples:
Anomaly Detection: An integral part of the transformation derived from AI anomaly detection. Through machine learning algorithms, AI can identify unusual patterns or behaviors within large datasets that deviate from what is considered normal or expected. This capability to detect outliers can help to flag potential data quality issues, ensuring data integrity, avoiding skewed analytics and helping to prevent larger systemic problems.
Automatic Resolution of Data Quality Issues: AI technology can help with the automatic resolution of data quality issues. By detecting inconsistencies or errors in the data, it can take the necessary steps to rectify these problems or notify users to review them. This process guarantees the dependability and accuracy of the data, which saves time and lowers the need for manual intervention.
Auto-Tuning Data Optimization: The use of AI technology has expanded to include data optimization through auto-tuning features. By analyzing historical performance metrics and data trends, AI can automatically adjust system parameters to achieve optimal performance. This not only enhances system efficiency but also reduces the need for continuous human oversight.
Auto-Scaling: AI can facilitate the seamless scalability of data operations through auto-scaling. This feature monitors the system demand and scales resources accordingly. This ensures the system always operates at the right capacity, thus optimizing infrastructure investment.
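As a minimal sketch of the anomaly-detection idea, the snippet below flags points in a pipeline metric that deviate sharply from the median (a modified z-score test). Production systems use far richer models, and the row counts here are invented:

```python
import statistics

def find_anomalies(values, threshold=3.5):
    """Flag points whose modified z-score (relative to the median) exceeds threshold."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)  # median absolute deviation
    if mad == 0:
        return []
    return [(i, v) for i, v in enumerate(values)
            if 0.6745 * abs(v - med) / mad > threshold]

# Daily row counts from a feed; the spike on the last day is suspicious.
row_counts = [10_120, 10_340, 9_980, 10_210, 10_055, 10_190, 48_700]
print(find_anomalies(row_counts))   # -> [(6, 48700)]
```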
There are three measurement components comprising data observability: metrics, logs and traces. These components are interrelated and collectively contribute to the observability of both data and systems. They offer insights into data health, quality, performance and dependencies. Let’s look at these components in more detail.
Metrics provide quantifiable insights into the health and performance of data, including data latency, throughput, error rates and data quality indicators. For example, monitoring patient records or diagnostic data accuracy in the healthcare sector ensures that healthcare professionals depend on reliable information for medical decisions. Metrics help organizations identify data anomalies, allowing for prompt issue resolution and maintaining high-quality data.
Logs provide a detailed record of data events, changes and interactions, essential for upholding data quality and capturing historical information about data processing. For example, transaction logs are used in the financial industry to maintain a chronological record of financial activities, enabling fraud detection and auditing. Logs are instrumental in pinpointing the root causes of data issues, helping organizations maintain data quality and trustworthiness.
Traces give organizations a detailed view of data flow and dependencies within complex data environments. They are essential for comprehending how data moves through a network of systems and processes. For example, a retail company uses a machine learning model to generate recommendations based on a customer's purchase history. The company traces the flow of data into the machine learning model. This allows the company to identify the data sources and how it was transformed, which are most important to the model's accuracy. Tracing also aids in understanding the interdependencies between different data sources and systems. Traces help organizations gain insights into the intricacies of their data ecosystem and improve data flow for optimal efficiency and effectiveness.
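The retail example is easier to picture as a small lineage graph. The sketch below, with invented asset names, walks upstream from the recommendation model to every source that feeds it:

```python
# Each asset maps to the upstream assets it is built from (illustrative names).
lineage = {
    "recommendation_model": ["purchase_history", "product_catalog"],
    "purchase_history": ["orders_raw", "customers_raw"],
    "product_catalog": ["products_raw"],
}

def upstream_sources(asset, graph):
    """Return every asset that feeds into `asset`, directly or indirectly."""
    seen, stack = set(), [asset]
    while stack:
        for parent in graph.get(stack.pop(), []):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

print(upstream_sources("recommendation_model", lineage))
# {'purchase_history', 'product_catalog', 'orders_raw', 'customers_raw', 'products_raw'}
```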
Figure 2: The three lenses of data observability
Three core lenses, as shown in Figure 2, form the foundation of achieving the goals of data observability.
Data. Focuses on monitoring and understanding the overall health of data, identifying and resolving data quality issues, anomalies and bottlenecks.
Pipeline. Centers on monitoring and understanding the health of data pipelines, identifying and resolving performance issues, capacity issues and errors.
Business. Emphasizes the monitoring and analysis of how businesses consume and use data, and pertaining to the identification and resolution of compliance, security and governance issues.
Let’s now explore some of the crucial features needed from a data observability tool to deliver on the above data observability lenses.
Proactive detection of data quality issues and anomalies: Address issues preemptively by employing automated data quality checks and anomaly detection algorithms before they can impact downstream processes.
Alerts based on scorecard: Set up alerts based on predefined data quality scorecards. If data quality metrics fall below acceptable thresholds, alerts are triggered, notifying stakeholders of potential issues. This proactive approach can help ensure data remains reliable and fit for intended use without requiring constant manual monitoring.
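In spirit, scorecard-based alerting reduces to comparing measured quality scores against agreed floors. A toy version, with invented metric names and thresholds, might look like this:

```python
# Scorecard: metric name -> (measured score, minimum acceptable score).
scorecard = {
    "completeness": (0.97, 0.95),
    "accuracy":     (0.91, 0.98),   # below its floor, so it should alert
    "freshness":    (0.99, 0.95),
}

def alerts(card):
    """Return the metrics whose measured score fell below the agreed floor."""
    return [name for name, (score, floor) in card.items() if score < floor]

for metric in alerts(scorecard):
    print(f"ALERT: data quality metric '{metric}' fell below its threshold")
```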
Impact analysis: Understand how changes in data sources, schema modifications or data pipeline adjustments affect downstream processes and analytics with data lineage capabilities.
Observe infrastructure for critical jobs: Monitor the underlying pipeline infrastructure to ensure critical jobs function and perform properly.
Connection and integration observability: Monitor data connections and integrations to help identify and address connectivity issues, ensuring pipeline stability and that data flows smoothly between systems.
AI-powered self-heal, auto-tune, auto-scale: Leverage machine learning and AI capabilities to self-diagnose issues, auto-tune configurations for optimal performance, auto-scale resources as needed and intelligently shut down processes or systems during periods of inactivity. This automation reduces manual intervention and enhances the efficiency of data pipelines.
Automated delivery, fulfillment and observability with governed workflows in a data marketplace: Enable users to easily access and consume data assets while maintaining control and oversight over the data workflows.
Package and consume assets with governed data sharing: Help to ensure data sharing follows predefined rules and regulations, promoting data security and compliance.
Improve the efficiency and effectiveness of data pipeline management with FinOps: Aid in monitoring and optimizing resource consumption, ensure efficient data processing and control costs associated with data pipelines, enhancing the financial effectiveness of data operations.
These expanded capabilities empower organizations to maintain data integrity, optimize resource utilization and proactively address data-related challenges.
Is a data observability tool the right fit for your organization? Assessing the need for such a tool involves evaluating your data landscape's complexity, the data's criticality to your operations and your data quality requirements.
A data observability tool becomes increasingly valuable if your organization deals with diverse data sources and intricate data pipelines and relies heavily on data-driven decision-making.
In this context, data observability tools can help you to:
- Gain visibility into your data landscape: See how your data flows through your systems and identify potential bottlenecks or problems.
- Improve data quality: Identify and fix data quality issues, such as missing values, inconsistencies and outliers.
- Reduce downtime: Catch and resolve issues quickly before they cause downtime.
- Improve operational efficiency: Detect and fix operational inefficiencies, such as redundant jobs and inefficient data processing.
- Reduce costs: Collect data-related metrics and performance indicators, enabling organizations to monitor resource consumption, such as computing and storage, and accurately attribute data-related costs to specific departments or projects.
Observing data pipelines alone is not sufficient; while monitoring and optimizing them are crucial for smooth data flow, they represent just one part of the broader data observability concept. Data observability encompasses the technical aspects of data movement, its quality, usage and impact, requiring a multidimensional approach for comprehensive coverage in the data ecosystem.
A quick online search reveals various vendors and analysts offering different views on data observability types, pillars and lenses. Consequently, there are diverse data observability tools, each with its own focus—some monitor pipelines and infrastructure, others detect anomalies and outliers, or identify data quality issues. Some tools offer insights into resource utilization for informed decisions on resource allocation and costs.
The table below summarizes key capabilities required from a data quality and management platform for effective data observability. While it's essential to access all these capabilities, starting with the one most crucial to your needs allows for a gradual expansion of data management and observability efforts.
| Focus / Perspective | Key Capabilities |
| --- | --- |
| Data Health and Issue Resolution | Data quality monitoring, error tracking, data profiling, real-time dashboards and reporting, issue resolution workflow and data lineage |
| Data Flow Observation | Job monitoring, dependency mapping, real-time tracking, job status alerts and performance metrics |
| Availability, Performance and Capacity | Resource allocation, performance tuning, capacity planning, scalability metrics and availability monitoring |
| Data Agility and Compliance | |
| Resource Consumption Tracking | Resource usage analytics, cost optimization, historical resource data, auto-scaling and resource forecasting |
Healthy data pipelines are essential for ensuring data availability, reliability and performance. However, pipeline health becomes less relevant if the data they carry is unfit for purpose. While pipelines may function well technically, poor data quality—manifesting as inaccuracies, incompleteness, or inconsistencies—can undermine the overall value of the data ecosystem. These issues persist irrespective of pipeline health, leading to incorrect insights, flawed decision-making and compliance concerns.
Recognizing that data quality and pipeline health are intertwined is crucial; neglecting one diminishes the effectiveness of the entire data infrastructure. To unlock the full potential of data, a holistic approach is necessary, combining pipeline health observability with robust data quality measures. | <urn:uuid:bf6cec7f-a970-406b-bb59-b055b532b300> | CC-MAIN-2024-38 | https://www.informatica.com/ca/resources/articles/what-is-data-observability.html | 2024-09-18T00:56:46Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651835.68/warc/CC-MAIN-20240918000844-20240918030844-00306.warc.gz | en | 0.896824 | 2,409 | 3 | 3 |
When Satoshi Nakamoto, whose true identity is still unknown, released the 2008 whitepaper Bitcoin: A Peer-to-Peer Electronic Cash System, which described a "purely peer-to-peer version of electronic cash" known as Bitcoin, blockchain technology made its public debut. Blockchain, the technology that runs Bitcoin, has developed over the last decade into one of today's most ground-breaking technologies, with the potential to impact every industry from finance to manufacturing to education. Here's a brief history of blockchain technology and some thoughts about where it might go in the future.
You can't discuss the history of blockchain technology without starting with Bitcoin. Shortly after Nakamoto's whitepaper was released, Bitcoin was offered up to the open source community in 2009. Blockchain provided the answer to digital trust because it records important information in a public space and doesn't allow anyone to remove it. It's transparent, time-stamped and decentralised.
“Blockchain is to Bitcoin, what the internet is to email. A big electronic system, on top of which you can build applications. Currency is just one,” Sally Davies, FT Technology reporter.
Blockchain Separates from Bitcoin
Even today, there are many who believe Bitcoin and blockchain are one and the same, even though they are not. Those who started to realise around 2014 that blockchain could be used for more than cryptocurrency started to invest in and explore how blockchain could alter many different kinds of operations. At its core, blockchain is an open, decentralised ledger that records transactions between two parties in a permanent way without needing third-party authentication. This creates an extremely efficient process and one people predict will dramatically reduce the cost of transactions.
When entrepreneurs understood the power of blockchain, there was a surge of investment and discovery to see how blockchain could impact supply chains, healthcare, insurance, transportation, voting, contract management and more. Nearly 15% of financial institutions are currently using blockchain technology.
Ethereum Rises: Smart Contracts
Vitalik Buterin, co-founder of Ethereum and Bitcoin magazine, was also an initial contributor to the Bitcoin codebase, but became frustrated around 2013 with its programming limitations and pushed for a malleable blockchain. Met with resistance from the Bitcoin community, Buterin set out to build the second public blockchain called Ethereum. The largest difference between the two is that Ethereum can record other assets such as loans or contracts, not just currency. Ethereum launched in 2015 and can be used to build “smart contracts”—those that can automatically process based on a set of criteria established in the Ethereum blockchain. This technology has attracted the attention of corporations such as Microsoft, BBVA and UBS who are intrigued by the potential of the smart contract functionality to save time and money.
Transition to Proof of Stake
Currently, blockchain operates on the proof-of-work concept, in which an expensive computer calculation, or "mining," is performed in order to create a block (a new set of trustless transactions). When you initiate a transaction, it is bundled into a block. Miners then verify that the transactions within that block are legitimate by solving a proof-of-work problem: a very difficult mathematical puzzle that takes an extraordinary amount of computing power to solve. The first miner to solve the problem gets a reward, and the verified transactions are stored on the blockchain. Ethereum developers are interested in changing to a new consensus system called proof of stake.
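The puzzle itself can be sketched in a few lines: keep incrementing a nonce until the hash of the block data has the required number of leading zeros. Real Bitcoin mining hashes block headers against a far harder target, but the principle is the same:

```python
import hashlib

def mine(block_data: str, difficulty: int = 4):
    """Find a nonce whose SHA-256 hash of (data + nonce) starts with `difficulty` zeros."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("alice->bob:5BTC")
print(nonce, digest)
# Finding the nonce is expensive; verifying it takes a single hash.
```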
Proof of stake has the same goal as proof of work, to validate transactions and achieve consensus in the chain, and it also uses an algorithm, but with a different process. With proof of stake, the creator of a new block "is chosen in a deterministic way, depending on its wealth, also defined as a stake." In a proof-of-stake system there is no block reward; instead, the miners, known as forgers, collect the transaction fees. Proponents of this shift, including Ethereum co-founder Buterin, favor proof of stake for the energy and cost savings realised in reaching a distributed form of consensus.
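Stake-weighted selection can likewise be sketched in a few lines. Real protocols derive the selection seed from on-chain data; the validators and stake amounts below are invented:

```python
import random

stakes = {"alice": 50, "bob": 30, "carol": 20}   # validator -> coins locked as stake

def pick_forger(stakes, seed):
    """Select the next block creator, weighted by stake; a shared seed keeps nodes in agreement."""
    rng = random.Random(seed)
    names, weights = zip(*stakes.items())
    return rng.choices(names, weights=weights, k=1)[0]

print(pick_forger(stakes, seed="block-769000"))
```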
Blockchain Scaling on the Horizon
Currently, every computer in a blockchain network processes every transaction, which can be very slow. A blockchain scaling solution would determine how many computers are actually necessary to validate each transaction, in a way that doesn't compromise security.
Today, Bitcoin is just one of the several hundred applications that use blockchain technology. It’s been an impressive decade of transformation for blockchain technology and it will be intriguing to see where the next decade takes us. | <urn:uuid:cb88dca0-370b-40fc-af79-b1595d6593ce> | CC-MAIN-2024-38 | https://bernardmarr.com/a-very-brief-history-of-blockchain-technology-everyone-should-read/ | 2024-09-20T14:10:10Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652278.82/warc/CC-MAIN-20240920122604-20240920152604-00106.warc.gz | en | 0.94942 | 944 | 2.953125 | 3 |
Which utility can you use to determine whether a switch can send echo requests and replies?
A. SSH
B. Telnet
C. Traceroute
D. Ping

Answer: D
The correct answer is D. Ping.
Ping is a utility used to test the connectivity between two devices on a network. It sends an ICMP echo request to a destination device and waits for an ICMP echo reply. If the destination device responds to the request, it means that the two devices are able to communicate with each other.
In the context of the exam question, if you want to determine whether a switch can send echo requests and replies, you would use the ping utility. You would send an ICMP echo request to another device on the network and wait for a response. If you receive a response, it means that the switch is able to send and receive echo requests and replies.
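For instance, the same echo-request/echo-reply check can be scripted around the ping utility. This is a minimal sketch: it assumes a Unix-like ping, where the count flag is -c rather than Windows' -n, and the target address is a documentation placeholder:

```python
import subprocess

def can_reach(host: str) -> bool:
    """Send one ICMP echo request and report whether an echo reply came back."""
    result = subprocess.run(["ping", "-c", "1", host], capture_output=True)
    return result.returncode == 0

print(can_reach("192.0.2.1"))   # True only if the device answered the echo request
```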
SSH and Telnet are protocols used for remote access to network devices, but they do not provide a way to test connectivity or send echo requests. Traceroute is a utility used to trace the path of packets through a network, but it does not test connectivity or send echo requests. | <urn:uuid:683c7be1-3d18-4010-898f-8ec40a69470e> | CC-MAIN-2024-38 | https://www.exam-answer.com/cisco-200-125-switch-echo-request-reply | 2024-09-09T14:07:02Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651103.13/warc/CC-MAIN-20240909134831-20240909164831-00206.warc.gz | en | 0.941418 | 239 | 3.203125 | 3 |
In Today's Society, Protecting Your Computer Is A Requirement.
Advances in computer technology are a double-edged sword. On one hand, they afford us quick and easy access to numerous conveniences such as bank statements, favorite shopping centers, school and health records, and more. On the other hand, the same technology can grant that access to those who aren't supposed to have it. Although it's a rare occurrence, hacking has become the biggest criminal nuisance in computer history.
Make no bones about it. There's nothing innocent or cute about the hacker. Today's hackers aren't the pimply-faced teen rebels that you might be thinking of. Instead, this generation of hackers are grown individuals who are more than likely earning a living by stealing the identities of innocent, law abiding individuals and then selling those identities to others who want to slip by the system. And the only protection against these seedy people is prevention.
Computer security couldn't be more important than it is today and that's why we've taken the time to introduce it to you. You can reduce the probability of experiencing identity theft by making your computer as hacker-proof as possible. All that's needed is a little software and a lot of common sense.
- Install an anti-virus/anti-spyware program. Anti-virus/anti-spyware software will stop malicious code from downloading and installing onto your computer while you peruse the Internet. Known as viruses, worms, or spyware, this malicious code can destroy important files and render your computer good for only one thing: sending sensitive data back to the server of an identity thief.
- Don't store sensitive data on your computer in the first place. Should your computer get infected with a virus, worm, or piece of spyware, you can thwart the individuals responsible by not storing your personal information on your PC so that when and if your computer does send back data - it won't be anything valuable. Hackers look for things like full names, social security numbers, phone numbers, home addresses, work-related information, and credit card numbers. If these things aren't saved onto a computer, there's nothing critical to worry about other than restoring your computer to a non-virus condition.
- Don't open files without scanning them with an anti-virus/anti-spyware program. In the past, the warning was to avoid opening files from people that you don't know. Today it's really not safe to open files from anyone (without scanning the files) because that's how viruses get spread - through files - even by mistake. So even though your co-worker may have emailed a funny video, it's no more safe to open than a video downloaded from a complete stranger. Be safe and scan each and every file you download from the Internet or receive through email regardless of where it came from.
- Create a barrier between your computer and prying eyes. Anti-virus/anti-spyware programs are only effective after the fact. But you can prevent identity theft from occurring by installing a firewall. A firewall is software that checks all data entering and exiting a computer and blocks whatever doesn't meet specified security criteria (user-defined rules); a simplified illustration follows this list.
- Don't click on website links in spam messages. In an effort to obtain personal information, some spammers will send email that asks you to click on a link. The email messages are often disguised as important messages from well-known online establishments, and they often try to scare their readers into clicking links with threats of closing an account of some sort. Sometimes the links are harmless and attempt to con the reader into volunteering personal information (credit card number), but other times the links attempt to download harmful software onto a computer.
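To picture those user-defined rules, here is a deliberately tiny sketch of first-match rule filtering. Real firewalls inspect far more than direction and port, and the rules shown are invented:

```python
# Rules are checked top to bottom; the first match decides, and the default is deny.
RULES = [
    {"direction": "in", "port": 80,   "action": "allow"},   # web traffic
    {"direction": "in", "port": 443,  "action": "allow"},   # secure web traffic
    {"direction": "in", "port": None, "action": "deny"},    # everything else inbound
]

def decide(direction: str, port: int) -> str:
    for rule in RULES:
        if rule["direction"] == direction and rule["port"] in (port, None):
            return rule["action"]
    return "deny"

print(decide("in", 443))    # allow
print(decide("in", 3389))   # deny: fails the user-defined criteria
```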
Your best protection against computer crimes is your own knowledge. Hopefully the suggestions above will prompt you into taking appropriate action and into protecting your computer with the suggested tools. In doing so, you'll not only protect yourself, you'll prevent the spread of these malicious activities and protect others at the same time.
The gaming industry has seen a significant evolution, with 3D game art production and game art services playing a pivotal role in enhancing the gaming experience. This art form involves creating 3D objects, characters, and environments that captivate players, making games more engaging and visually attractive.
With the increasing demand for high-quality game graphics, understanding 3D game art production becomes essential for anyone looking to break into the gaming industry.
3D Game Art Production | Introduction to the process
3D game art production is not just about creating objects; it’s about blending art with technology to produce amazing game worlds. Unlike 2D game art, which is flat and two-dimensional, 3D game art offers depth, allowing objects and characters to be viewed from various angles and under different lighting conditions.
Difference between 3D vs. 2D: While 2D art is flat, 3D art has depth and volume.
For example, in a 2D game, a tree might look like a simple silhouette, but in 3D, you can walk around it and see its branches from different angles.
Stages of 3D Game Art Production
- Concept Art Creation: This initial step involves generating ideas for the game's environments, objects, and characters. Begin with an idea and sketch out what characters or objects might look like. For instance, if you're designing a dragon, sketch its size, shape, and features.
- Modeling: After finalizing the concept art, 3D models are created using software like Blender or Maya. This is like making a clay sculpture, but digitally.
- Texturing: This phase adds color, texture, and surface features to the 3D objects. For our dragon, decide on its skin texture and color.
- Shading: Gives 3D models the appearance of different materials.
- Rigging: Creates a structure that allows 3D objects or characters to move.
- Animation: Brings the characters or objects to life by making them move. With the example above, you would animate your dragon to have moving wings and a tail that sways.
- Lighting: Sets up the game environment's lighting and shadows. Decide where light sources are; this affects how shadows appear.
- Rendering: This final step refines the models and images, making them game-ready.
Types of 3D Game Art Production
Let’s look at the different types of 3D game art and how they contribute to a game’s overall feel.
- Character Art: This is about creating characters for games. Example: Think of a hero in a game. This hero needs a look, a style, and a personality. That's where character art comes in.
- Environment Art: Crafting environments, structures, and other game elements. Example: If our hero is in a forest, environment art will decide how the trees, paths, and rivers look.
- Prop Art: Producing smaller items like weapons and furniture. Example: Our hero might need a sword to fight or a chair to sit on. Prop art creates these items.
Tools and Software
Artists use various tools like Blender, Maya, ZBrush, and Substance Painter. The choice of tool depends on the artist’s requirements, as each has its advantages.
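To give a feel for how such tools can be scripted, here is a minimal sketch using Blender's bundled Python API (bpy). It covers only the crudest form of the modeling and texturing steps, and the object and material names are invented:

```python
import bpy  # available inside Blender's built-in Python environment

# Modeling: add a primitive to serve as the starting block for an asset.
bpy.ops.mesh.primitive_cube_add(size=2, location=(0, 0, 0))
dragon = bpy.context.active_object
dragon.name = "DragonBody"

# Texturing (simplified): create a material and give it a base color.
scales = bpy.data.materials.new(name="DragonScales")
scales.diffuse_color = (0.1, 0.5, 0.2, 1.0)   # RGBA green
dragon.data.materials.append(scales)
```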
A 3D game artist is responsible for creating game assets. They should possess both technical skills, like 3D modeling, and artistic skills, such as a keen eye for color and composition.
Game Art Production Pipeline | How a game art studio works
The game art pipeline involves several processes, from concept art to animation.
It ensures the efficient creation of game assets and includes pre-production (conceptualization), production (modeling, texturing), and post-production (final touches).
- Conceptualization (Pre-production):
  - What is it? This is the stage where artists brainstorm and come up with ideas for the game's visual elements.
  - Example: Imagine you want to create a new character. You'll start by drawing different designs and picking the best one.
  - Advice: Always start with rough sketches. It's easier to change a sketch than a detailed drawing.
- Modeling (Production):
  - What is it? Here, artists turn the concept art into 3D models.
  - Example: Using the chosen character design, an artist will use software to make a 3D model of the character.
  - Advice: Focus on the main shapes first, then add details later.
- Texturing (Production):
  - What is it? This step adds colors and patterns to the 3D models.
  - Example: Our character needs clothes and skin. Texturing will decide the color and pattern of these elements.
  - Advice: Use clear and simple textures. They should support the model, not distract from it.
- Post-production (Final touches):
  - What is it? This is the final step, where artists make any last changes to the game assets.
  - Example: Maybe our character's clothes look too new. Artists might add some wear and tear to make them look used.
  - Advice: Always review your work. Small changes can make a big difference.
Best Practices | Advice from Game Art Outsourcing Studio
- Organize game assets for easy access. Keep all game assets in a clear and easy-to-find manner, with clear folder names and consistent file naming conventions. It saves time in the long run.
- Prioritize tasks and manage time effectively. Consider planning your work and setting priorities.
- Maintain consistency in style and design. Make sure all game assets have a similar look and feel. It's better to set clear design guidelines at the start; this ensures that even if multiple artists work on the project, the game has a unified look.
- Communicate and collaborate with the development team. If an artist is unsure about a design, discussing it with the team can provide new perspectives and solutions. Regular team meetings and open communication channels are key to keeping everyone on the same page and reducing misunderstandings.
Wrapping Up | 3D Game Art Process Finalizing
3D game art production is an important aspect of the gaming industry. As the demand for high-quality graphics grows, mastering this art form becomes crucial.
The difference between 2D and 3D art is significant. While 2D art is flat, 3D brings depth and realism, allowing players to explore and interact with the game world in a more tangible way. The stages of 3D art creation – from concept art to rendering – are systematized and require both artistic flair and technical skill.
The role of a 3D game artist is multifaceted. They are the bridge between the game’s story and its visual representation. Their expertise ensures that the game not only looks good but feels real and immersive.
By understanding the basics of concept art, modeling, and texturing, one can create captivating 3D game art. With dedication and practice, game developers can craft gaming environments that give players an unparalleled experience. | <urn:uuid:5507983f-02e9-4160-9094-1c723231d865> | CC-MAIN-2024-38 | https://echannelline.com/2023/09/27/all-you-need-to-know-about-3d-game-art-production/ | 2024-09-12T01:18:10Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651420.25/warc/CC-MAIN-20240912011254-20240912041254-00006.warc.gz | en | 0.924346 | 1,484 | 2.84375 | 3 |
Did you know that 75% of people are already using Generative AI (GenAI) at work? GenAI tools are defined as any artificial intelligence that can generate content such as text, images, videos, code, and other data using generative models, often in response to prompts. Examples include Open AI’s ChatGPT, GitHub’s Copilot, Claude, Dall-E, Gemini, and Google Workspace’s new functionality that connects Gemini to Google apps, to name just a few.
Like any new technology, GenAI comes with a side of risk, and recent data from Cisco uncovered that 27% of businesses have banned the use of GenAI entirely for security reasons. However, with such widespread adoption, and such groundbreaking potential — closing the door to GenAI is likely to be a mistake. Instead, PWC recommends that “Demonstrating that you’re balancing the risks with the rewards of innovation will go a long way toward gaining trust in your company — and in getting a leg up on the competition.”
To make responsible use of GenAI, and support employees in freely using the tools to upgrade their productivity, you need to start by understanding what the industry is dealing with.
Understanding the Potential Risk of GenAI Tools
As excited as your employees are about the productivity benefits of using GenAI tools, you can bet the attackers are feeling the same way. As teams get to grips with how AI can free up hours in the day on tasks like content creation, code writing, and design, hackers are finding innovative ways to use GenAI as a new attack surface to steal sensitive information and disrupt business operations.
To stay one step ahead, organizational policies and employee education should evolve to take into consideration the new threats. As a starting point, security teams should speak to employees about:
- AI code generation: All AI-generated code needs to be tested thoroughly before it's used, as hackers can manipulate a Large Language Model (LLM) to change its output. Untested, it could open your own customers up to risk or provide an entry point to your network (a small testing example follows this list).
- Trusting LLMs: Just because an LLM provides information, that doesn’t make it true. All LLMs carry the risk of hallucinations — providing incorrect or nonsensical information, and if an LLM has been manipulated, there could be malicious content produced, too. Make sure to double-check all facts and data.
- Sharing sensitive data: Your LLM is not a personal diary, and it won’t keep your secrets safe. In order to learn, GenAI tools collect everything we share, which means if it becomes public knowledge, that data can be exposed to others. Employees should craft prompts free of personal information, intellectual property, trade secrets, or passwords.
- Copyright issues: The rules around copyright with AI-generated content are still being discussed and rolled out, but to establish ownership over what you create, employees should make sure that they make modifications to their content, including text and images, too.
- Customer trust: If you rely on GenAI tools to boost your productivity in customer-facing interactions, they deserve to know whether they are being bot-driven or getting the human touch. Be transparent when using GenAI-delivered responses or content, including always sharing its source.
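As flagged in the first point above, AI-generated code should be pinned down with tests before anyone trusts it. The discount function and its checks below are invented for illustration; a real project would use a proper test framework:

```python
def apply_discount(price: float, percent: float) -> float:
    """Pretend this function came back from a GenAI coding assistant."""
    return price - price * percent / 100

# Never ship it untested: check normal cases *and* edge cases.
assert apply_discount(100, 10) == 90.0
assert apply_discount(100, 0) == 100.0
assert apply_discount(0, 50) == 0.0
# This passes too, exposing a missing guard: discounts over 100% go negative.
assert apply_discount(100, 150) == -50.0
print("Tests ran; note the unguarded edge case the last assert revealed.")
```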
The Impact of GenAI on Phishing
Even without your employees independently using GenAI tools in the workplace, the risks of generative AI can still target your organization. One example is the huge impact of GenAI on the efficacy of phishing scams.
Try this thought experiment: if you asked your employees to point out the tell-tale signs of a phishing email, what do you think they would describe? Not too long ago, markers of your average phishing scam were poor spelling and grammar, broken language, and unprofessional designs — making it easy for staff to spot a garden-variety phishing attack when it arrived in their inbox.
With the advent of GenAI tools, hackers now have access to free online tools that allow them to spin up highly professional-looking content faster than ever before. Even videos and images of known associates can be faked using GenAI, which means employees need to be more on guard than ever. According to research completed by the Harvard Business Review, “Artificial intelligence changes this playing field by drastically reducing the cost of spear phishing attacks while maintaining or even increasing their success rate.” Organizations should expect “a vast increase in credible and hyper-personalized spear-phishing emails that are cheap for attackers to scale up en masse.”
The warning from HBR is clear — “We are not yet well-equipped to handle this problem. Phishing is already costly, and it’s about to get much worse.”
This means that even if you’re one of the 27% of organizations that have banned the use of GenAI, the chances of a successful data breach or cyberattack against your organization have still increased.
Changing your Training Approach in the Era of GenAI
The threat of GenAI comes from both directions — from unaware employees using new technology without realizing its potential threats, and from hackers leveraging these tools intentionally to launch ever more sophisticated and believable attacks of their own.
However, the methodology behind security awareness training, using phishing simulations to reduce risk, has remained the same in principle. Organizations simply need to increase the frequency of their training, as well as the variety of the simulations they use, to meet the growing threat. At CybeReady, we recognize that employees don't always feel accountable for security within an organization and that CISOs have too much to handle to be continually proactive. That's where we come in.
Our comprehensive SaaS awareness program continually trains 100% of your employees, with realistic simulations that reduce risk, engage users, and promote a positive culture of security awareness organization-wide.
We also provide training materials that can be distributed to your employees to empower them to use AI for innovation and productivity purposes, without adding risk. Download your free AI training toolkit to access:
- Short training content decks that educate on the dark side of AI
- Tips for identifying a phishing scam that was created by GenAI tools
- Bite-sized digital posters displaying GenAI best practices | <urn:uuid:bf4dbba4-d95c-4f37-8570-806750646d0c> | CC-MAIN-2024-38 | https://cybeready.com/awareness-training/your-employees-are-already-using-genai | 2024-09-13T05:41:08Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651507.67/warc/CC-MAIN-20240913034233-20240913064233-00806.warc.gz | en | 0.948476 | 1,302 | 2.984375 | 3 |
(IsraelHomelandSecurity) U.S., British and Australian officials already signed a defense agreement (AUKUS) at the end of last year, which includes intelligence, cyber, quantum technology, and military equipment cooperation. IQT-News shares a recent blog from Israel Home Security that discusses in broad terms the implications of quantum technologies use in the military.
When it comes to navigating, gathering intelligence, and even making weapons, the application of quantum physics principles may give the Defense Alliance a distinct advantage over other countries. Quantum technologies are expected to be increasingly used by military forces in the future, but their exact impact is still difficult to predict. There are some people who believe quantum computing will revolutionize technology like the first microprocessor did.
Scientists’ advances and new discoveries directly affect quantum technologies. Therefore, Australia is collaborating with industry, academia, and government research agencies to explore the potential of quantum technologies for defense. According to asiapacificdefencereporter.com, the initiative will eventually result in the development of prototype systems that will demonstrate how quantum systems can be utilized for security.
The integration of quantum technologies currently represents one of the most anticipated advances for armed forces, yet their precise impact remains difficult to predict. Although economic applications are now increasing, there is little doubt that quantum technologies will have a disruptive effect once they are employed more widely, according to asiapacificdefencereporter.com.
There are many military applications of quantum technology, including metrology, simulation, imaging, sensing, timing, stealth, computing, weapons, communications, data encryption, and the analysis of encrypted messages.
Sandra K. Helsel, Ph.D. has been researching and reporting on frontier technologies since 1990. She has her Ph.D. from the University of Arizona. | <urn:uuid:338b9432-50b1-4c94-9624-d96c5eb6ad08> | CC-MAIN-2024-38 | https://www.insidequantumtechnology.com/news-archive/why-do-military-quantum-technologies-matter/amp/ | 2024-09-13T05:04:04Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651507.67/warc/CC-MAIN-20240913034233-20240913064233-00806.warc.gz | en | 0.940285 | 360 | 2.546875 | 3 |
While there are still some on the earth who claim climate change is a farce, the majority of us believe we need to throw everything possible into slowing down or solving the problem. Artificial intelligence (AI) and machine learning are two tools in our climate-change-halting toolbox. The more we utilise AI and machine learning technology to help us understand our current reality, predict future weather events and create new products and services that minimise our human impact, the better our chances of improving and saving lives, creating a healthier world, making businesses more efficient, and stalling or even reversing the climate change trajectory we're on. Here are just a few of the ways AI and machine learning are helping us tackle climate change.
Climate Study: A Big-Data Problem
Machines can analyse the flood of data that is generated every day from sensors, gauges and monitors to spot patterns quickly and automatically. Data about the changing conditions of the world's land surfaces, gathered by NASA and aggregated in Landsat, provides a very accurate picture of how the world is changing. The more accurate we can be about the current status of our climate, the better our climate models will be. This information can be used to identify our biggest vulnerabilities and risk zones. Climate scientists can share this knowledge with decision-makers so they know how to respond to the impacts of climate change: severe weather such as hurricanes, rising sea levels and higher temperatures.
Developing Better Solutions
Artificial intelligence and deep learning can help climate researchers and innovators test out their theories and solutions about how to reduce air pollution and other climate-friendly innovations. One example of this is the Green Horizon Project from IBM that analyses environmental data and predicts pollution as well as tests “what-if” scenarios that involve pollution-reducing tactics.
By using the information provided by machine learning algorithms, Google was able to cut the amount of energy it used at its data centres by 15%. Similar insights can help other companies reduce their carbon footprint.
While businesses and manufacturing might contribute significantly to greenhouse gas levels, it’s still imperative that each citizen commits to reducing their impact as well. The easier we make green initiatives for each person, the higher the adoption rate and the more progress we make to save the environment. Artificial intelligence and machine learning innovations can help create products and services that make it easier to take care of our planet. There are several consumer-facing AI devices such as smart thermostats (which could save up to 15% on cooling annually for each household) and irrigation systems (which could save up to 8,800 gallons of water per home per year) that help conserve resources. Everyone doing their part over time will add up.
Better Weather Event Predictions
The damage to human lives and property can be reduced if there are earlier warning signs of a catastrophic weather event. There has been significant progress in using machine-learning algorithms, trained on data from past extreme weather events, to identify tropical cyclones and atmospheric rivers. The earlier the warning that governments and citizens get about severe weather, the better they are able to respond and protect themselves. Machines are also being deployed to assess the strength of the models used to investigate climate change, reviewing the dozens of them in use and extracting intelligence from them. They also help predict how long a storm will last and how severe it will be. Since machines can't tell you "how" they arrived at their predictions or decisions, most climate professionals don't feel comfortable relying only on what the machines suggest will happen; they use machine insight alongside their own professional analysis so that the two complement one another.
Climate change is a gargantuan problem and its complexity is exacerbated by the many people and players involved from divergent worldwide government entities to profit-driven corporations and individuals who aren’t always open to change. Therefore, the faster and smarter we can become through the use of AI and machine learning the higher our probability of success to at least slow down the damage caused by climate change. | <urn:uuid:41f3f0a9-1dce-4d3f-8932-c6da44ff6795> | CC-MAIN-2024-38 | https://bernardmarr.com/the-amazing-ways-we-can-use-ai-to-tackle-climate-change/ | 2024-09-16T23:33:01Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651714.51/warc/CC-MAIN-20240916212424-20240917002424-00506.warc.gz | en | 0.947476 | 829 | 3.53125 | 4 |
In Service Management, there's a vast amount of knowledge at play—some of it is documented and formalized, and some of it is not. Implicit knowledge may not be as straightforward to manage as explicit knowledge, which is easy to capture and share through manuals or databases.
What happens with this knowledge that isn’t so easily put into words? In this article, we’ll explore implicit knowledge, why it’s crucial for ITSM, and how it can be effectively managed within your organization. Understanding and managing this type of knowledge can significantly impact how smoothly your IT operations run.
What is implicit knowledge?
Implicit knowledge, sometimes referred to as "know-how," is the type of knowledge that people carry with them based on experience, intuition, and informal learning. Unlike explicit knowledge, which is easily documented and shared, implicit knowledge is harder to articulate.
For example, it might include the steps an experienced IT technician takes to troubleshoot a recurring issue, steps they know work but haven't necessarily documented.
Definition of knowledge types: Explicit, implicit, and tacit
To fully grasp the concept of implicit knowledge, it’s helpful to compare it with other types of knowledge:
- Explicit knowledge: This is the knowledge that is clearly documented and easily shared. Examples include user manuals, policies, and standard operating procedures.
- Implicit knowledge: This knowledge is understood and applied but not formally documented. It’s often gained through experience and is demonstrated in practice.
- Tacit knowledge: This is deeply embedded knowledge that is often unconscious and difficult to express. It includes personal insights, instincts, and skills developed over time.
Characteristics of implicit knowledge
Implicit knowledge is the understanding or know-how that individuals have but might not be fully aware of or able to communicate easily. It’s the kind of knowledge that comes from experience, intuition, and informal learning, and it’s often applied automatically without much thought.
- Experience-based: Implicit knowledge is gained through hands-on experience. For instance, an IT technician who has resolved similar issues multiple times may develop a quick, efficient method for troubleshooting.
- Context-specific: This knowledge is often tied to specific situations or environments, making it challenging to generalize or formalize.
- Difficult to articulate: Unlike explicit knowledge, which can be easily written down, implicit knowledge is more intuitive and harder to express in words.
- Inherent in tasks: Implicit knowledge is frequently required to accomplish specific tasks, even when not explicitly stated in instructions.
- Challenging to record: Unlike explicit knowledge, implicit knowledge is tough to document because it's about understanding and interpreting, not just knowing something.
- Unintentionally shared: People often transmit implicit knowledge without realizing it. They do this through their actions and behavior, not by teaching.
- Dynamic: Implicit knowledge develops as individuals gain more experience or as circumstances and contexts shift. It's flexible and can easily adapt to new information.
Importance of Knowledge Management
Knowledge Management is a practice that ensures the right information is available to the right people at the right time. This is critical for improving decision-making, problem-solving, and overall efficiency in Service Management.
While explicit knowledge is straightforward to manage, implicit and tacit knowledge require more nuanced approaches, as they are more difficult to capture and share.
The role of implicit knowledge in ITSM
Managing knowledge effectively includes not only capturing explicit information but also understanding and leveraging implicit knowledge. This type of knowledge is crucial because it often fills the gaps left by formal documentation.
Implicit knowledge can influence how effectively teams respond to issues, resolve problems, and design services. This knowledge helps IT teams navigate complex scenarios, make informed decisions in real-time, and maintain a level of service continuity that purely documented procedures might not fully address.
It plays a particularly important role in ITIL processes like Problem Management, Incident Management, and Service Design. Let's see what this looks like with some examples:
In Problem Management, implicit knowledge can help identify and resolve underlying issues that cause incidents.
An experienced problem manager might intuitively know where to look for the root cause of a recurring issue based on their understanding of the system’s history. This ability to “cut through the noise” and focus on the most likely cause is a form of implicit knowledge that can significantly speed up problem resolution.
In Incident Management, implicit knowledge can be key in quickly restoring services and minimizing impact.
Service desk staff might use their implicit knowledge of the business’s priorities to quickly decide which incidents to address first. They might prioritize an incident affecting a high-revenue-generating system, even if other incidents are technically more severe, based on their understanding of the business impact.
During Service Design, implicit knowledge contributes to creating IT services that are well-aligned with business needs.
An IT architect might rely on implicit knowledge about how different departments use technology to design services tailored to their specific needs. This knowledge, gained through years of working closely with different teams, helps build services that are not only technically sound but also meet users' practical needs.
Challenges of implicit knowledge
While implicit knowledge is invaluable, sharing it within an organization poses several challenges. Unlike explicit knowledge, which can be easily documented and distributed, implicit knowledge is often harder to transfer. This can create bottlenecks in knowledge sharing, leading to potential inefficiencies or knowledge loss.
Difficult to capture
Challenge: Implicit knowledge often resides in the minds of experienced staff and is difficult to articulate. For example, a network administrator might have an intuitive sense of when a network will experience high traffic based on past experience, but explaining how they arrived at that conclusion can be challenging.
Tip: Encourage storytelling and informal knowledge sharing.
Create opportunities for experienced staff to share their insights in a more narrative format, such as during team meetings or informal "lunch and learn" sessions. This allows them to communicate their thought processes in a more natural way, making it easier for others to understand and learn from their experiences.
Hard for new members
Challenge: Transferring implicit knowledge often requires a hands-on approach. New team members need time to observe and learn from more experienced colleagues. This can be a slow process and may not always be effective if there’s a gap in understanding.
Tip: Implement mentorship and shadowing programs.
Pair new hires with seasoned professionals to facilitate the transfer of implicit knowledge. Encourage mentors to not only demonstrate their techniques but also explain their reasoning behind decisions. This helps bridge the gap between implicit knowledge and actionable insight.
Risk of knowledge loss
Challenge: When experienced employees leave, they take their implicit knowledge with them, which can lead to knowledge gaps and inefficiencies.
Tip: Conduct regular knowledge audits and create knowledge repositories.
Periodically review the knowledge within your team, identifying areas where implicit knowledge is heavily relied upon. Capture this knowledge in a structured format, such as case studies or decision trees, and store it in a centralized knowledge repository that the entire team can access.
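One lightweight way to give such repository entries structure is a record per resolved case. The schema and the incident below are illustrative, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeEntry:
    """A captured piece of implicit know-how, in a reviewable form."""
    title: str
    symptoms: list[str]
    root_cause: str
    resolution_steps: list[str]
    rationale: str                 # the "why" that usually stays in someone's head
    tags: list[str] = field(default_factory=list)

entry = KnowledgeEntry(
    title="Intermittent VPN drops after patch window",
    symptoms=["VPN sessions drop every ~30 minutes", "only remote sales affected"],
    root_cause="MTU mismatch introduced by a gateway firmware update",
    resolution_steps=["Lower the tunnel MTU to 1380", "Restart the VPN service"],
    rationale="Seen twice before after firmware updates on this gateway model",
    tags=["network", "vpn", "post-change"],
)
```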
Tied to very specific tasks
Challenge: Implicit knowledge is often tied to specific experiences and contexts, making it difficult to generalize or document. For example, a senior IT professional might know exactly how to troubleshoot a rare network issue due to years of experience, but explaining the thought process behind their solution could be nearly impossible.
Tip: Use scenarios and case studies to capture complex knowledge.
When documenting implicit knowledge, consider creating detailed scenarios or case studies that illustrate the problem, the solution, and the rationale behind it. This approach can help others understand the context in which the implicit knowledge was applied, making it more accessible.
More actionable ideas to improve knowledge transfer
Here are some actions you can take to improve knowledge sharing in your organization and foster a culture of learning and growth.
Offer learning opportunities
Establish regular learning opportunities. These can include formal training sessions and workshops where experienced team members share their practical insights and expertise.
Informal settings, such as “lunch and learn” sessions or team brainstorming meetings, also provide valuable opportunities for employees to exchange knowledge in a relaxed and open atmosphere. Such initiatives help transfer knowledge and foster a culture of continuous learning and improvement within the organization.
Employees might hesitate to share their knowledge if they feel that their innovations or deviations from established procedures could be seen as undermining official processes.
For example, if someone discovers a more efficient way to complete a task that deviates from documented procedures, they might worry about being penalized or criticized. To overcome this, creating a culture where process improvements are welcomed and reviewed constructively is important.
Encourage employees to share their insights openly and ensure that changes are evaluated and integrated into official procedures when appropriate. This approach helps refine processes and reassures employees that their contributions are valued and considered for continuous improvement.
Create online communities
Internal forums or discussion groups offer spaces where employees can ask questions, seek advice, and share their experiences. By fostering a collaborative environment, these communities help break down silos and encourage employees to engage with one another.
Recognizing and rewarding valuable contributions in these forums can further motivate team members to participate and share their knowledge.
Implement knowledge-sharing software
Knowledge-sharing software can further support your efforts by providing a platform for documenting and accessing informal knowledge. Tools like wikis or collaborative platforms enable employees to contribute their insights and experiences, making it easier for others to learn from them.
This approach ensures that valuable knowledge is not lost and can be easily updated as new information becomes available. Choosing software that encourages user interaction and feedback is crucial, as it helps maintain the relevance and accuracy of the shared knowledge.
Implicit knowledge is important in ITIL practices, influencing how IT services are managed and improved. While it’s more challenging to document and share than explicit knowledge, it contributes significantly to effective problem-solving and service delivery.
Addressing the management of implicit knowledge helps ensure that experienced insights are not lost and can be utilized to continuously improve operations. Focusing on capturing and transferring this type of knowledge strengthens IT processes and enhances overall service quality.
Understanding and managing implicit knowledge leads to a more agile and informed IT team, ultimately supporting better service outcomes and operational efficiency. | <urn:uuid:190c9d95-94b6-4669-be0d-b762165afa62> | CC-MAIN-2024-38 | https://blog.invgate.com/implicit-knowledge | 2024-09-16T23:36:56Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651714.51/warc/CC-MAIN-20240916212424-20240917002424-00506.warc.gz | en | 0.940808 | 2,177 | 2.796875 | 3 |
By: Stefan Bernbo, founder and CEO of Compuverde
According to IBM, humans create 2.5 quintillion bytes of data every day. In fact, 90 percent of the data in the world today has been created in the last two years alone. All this data has to be stored somewhere, putting a strain on traditional storage architectures. The standard model of storage involves buying more hardware, but the rate of data growth has outpaced most organizations' ability to buy the number of servers needed - not to mention that this model scales too slowly.
Instead, enterprises are considering new storage options that are more flexible and scalable. Software-defined storage (SDS) offers that flexibility. In light of the varied storage and compute needs of organizations, two SDS options have arisen: hyperconverged and hyperscale. Each approach has its distinctive features and benefits, which are discussed below. First, it’s important to understand what came before hyperconverged and hyperscale approaches.
The Evolution of Storage
Converged storage combines storage and computing hardware to speed delivery and minimize the physical space required in virtualized and cloud-based environments. This was an improvement over the traditional storage approach, where storage and compute functions were housed in separate hardware. The goal was to improve data storage and retrieval and to speed the delivery of applications to and from clients.
In the converged storage model, there are discrete hardware components, each of which can be used on its own for its original purpose in a “building block” model. Converged storage is not centrally managed and does not run on hypervisors; the storage is attached directly to the physical servers.
So then, what does it mean to be hyperconverged? This storage model is software-defined, and all components are converged at the software level; they cannot be separated out. This model is centrally managed and virtual machine-based. The storage controller and array are deployed on the same server, and compute and storage are scaled together. Each node has compute and storage capabilities. Data can be stored locally or on another server, depending on how often that data is needed.
Flexibility and agility are needed to effectively and efficiently manage today's data demands, and these are what hyperconverged storage offers. It also promotes cost savings: organizations are able to use commodity servers, since software-defined storage works by taking features typically found in hardware and moving them to the software layer. Organizations that need more 1:1 scaling would use the hyperconverged approach, as would those that deploy VDI environments. The hyperconverged model is storage's version of a Swiss Army knife; it is useful in many business scenarios. It is one building block that works exactly the same everywhere; it's just a question of how many building blocks a data center needs.
Now let’s turn our attention to the hyperscale model, a new storage approach created to address differing storage needs. Hyperscale computing is a distributed computing environment in which the storage controller and array are separated. As its name implies, hyperscale is the ability of an architecture to scale quickly as greater demands are made on the system. This kind of scalability is required in order to build big data or cloud systems; it’s what Internet giants like Amazon and Google use to meet their vast storage demands. However, software-defined storage now enables many enterprises to enjoy the benefits of hyperscale.
As with hyperconverged storage, hyperscale reduces costs because IT organizations can use commodity servers, and a data center can have millions of virtual servers without the added expense that this number of physical servers would require. Data center managers want to get rid of refrigerator-sized disk shelves that use NAS and SAN solutions, which are difficult to scale and very expensive. With hyper solutions, it is easy to start small and scale up as needed. Using standard servers in a hyper setup creates a flattened architecture: less hardware needs to be bought, and it is less expensive. Hyperscale enables organizations to buy commodity hardware. Hyperconverged goes one step further by running both elements—compute and storage—in the same commodity hardware. It becomes a question of how many servers are necessary.
Two Options for Today’s Storage Needs
As described above, the hyperconverged approach is like having a really useful box that contains everything you need. Hyperscale has two sets of boxes, one set of storage boxes and one set of compute boxes. It just depends what the architect wants to do, according to the needs of the business. A software-defined storage solution would take over all the hardware and turn it into a type of appliance, or it could be run as a VM – which would make it a hyperconverged configuration.
Perhaps the best aspect of these two approaches is that you don’t have to choose one or the other. Data center architects can mix and match the models according to their needs at any given time. Those needs will remain fluid as technologies change and as data continues to proliferate, making hyperconverged and hyperscale approaches all the more attractive due to their flexibility and cost-effectiveness. Enterprises can use these approaches to scale as needed as they face the future. | <urn:uuid:6b5b256e-8528-4772-b553-a863607ce7f6> | CC-MAIN-2024-38 | https://datacenterpost.com/hyper-storage-approaches-for-todays-data-demands/ | 2024-09-18T05:44:11Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651836.76/warc/CC-MAIN-20240918032902-20240918062902-00406.warc.gz | en | 0.95873 | 1,080 | 3.078125 | 3 |
Do computers belong in the schools? Should public libraries be cutting back on books in favour of PCs and Internet connections? Can you make an aquarium out of a used Mac?
The answer to one of these questions is yes, and Clifford Stoll has the goldfish to prove it. An Internet legend and well-known dissenter from the utopian hype that has sprung up around personal computers, Stoll uses his latest book to assault the idea that computers and the Internet offer any special learning opportunities, attacking school administrators, librarians and gullible parents for thinking these machines, in the brief period between purchase and obsolescence, could possibly substitute for reading books and thinking critically with the help of a dedicated teacher.
These criticisms come from a man who knows a little something about computers. An avowed ex-hippie, Stoll was an astronomer and systems administrator at Lawrence Berkeley Laboratory in the 1980s when he noticed that he couldn’t account for 75 cents worth of computer time. He investigated and discovered a hacker breaking into the system — a spy, in fact, who wandered all over the network of U.S. military computers in search of sensitive data. Stoll’s inspired sleuthing led to the intruder’s capture in Europe, and Stoll’s account of this drama was the basis for his best-selling first book, The Cuckoo’s Egg, as well as an episode of the PBS program Nova. His next book was the provocative Silicon Snake Oil.
His latest, High-Tech Heretic, is a further critique of the idea that computers, networked or otherwise, represent a panacea in the complex business of living human lives. Stoll’s book is written in what I call ‘net prose, a chipper, informal style that (perhaps unwittingly) owes much to modern advertising copy. It’s inoffensive, somewhat flavourless and poses no problems to the vocabulary-challenged.
Fortunately, Stoll is a very smart guy who brings his sceptical intelligence to bear on some critical questions. This is a man who cares passionately about learning and its transmission, and he can’t figure out how diverting students with computer exercises fosters understanding. He cites horrifying instances of schools shortchanging true pedagogy for machinery they’re not properly equipped to use, and demolishes the arguments one by one for computers in schools.
To the idea that students will graduate into a world of ubiquitous computing, he says, “Automobiles are everywhere, too. They play a damned important part in our society and it’s hard to get a job if you can’t drive…But we don’t teach automobile literacy.”
To the notion that networked computers can keep curricula current, he scoffs, “The past two decades of research haven’t greatly changed basic high-school math, physics and chemistry.”
To the suggestion that computers make learning fun, he answers that real learning is unavoidably hard, and that computers merely substitute games. His arguments, like those of Yale computer scientist David Gelernter before him, are convincing. “Computer literacy” is an empty cliche that, for most people, means knowing how to type, backspace and click a mouse. In fact, Stoll doesn’t think schools need much in the way of technology, aside from indoor plumbing and good light. He sees “distance learning” as a joke, and loathes the tendency of today’s students to rely on calculators. He heaps scorn, too, on the idea that computers can somehow replace books in libraries.
In the vein of Silicon Snake Oil, Stoll is convinced the Internet isolates us (in part by enfolding us in useless data while real life is going on outside), rather than bringing us together. He says the Internet is filled with junk, which of course it is, but doesn’t credit its extraordinary usefulness as an everyday source of information, goods and services.
High-Tech Heretic is one of those books that makes me wonder anew why the Internet hasn’t revived the monograph. Old-fashioned, single-subject tracts of this kind are impossible to sell as books, because they aren’t long enough to justify a book’s price, but they’re too long for magazines.
On the Internet, however, people could buy them electronically, print them out and read them on paper. That way an author like Stoll wouldn’t be tempted to pad a perfectly reasonable work on a worthwhile topic with lesser pieces that really aren’t apropos.
At least he really made an aquarium from an old Mac. And showing his true colours (not Big Blue), he turns an old PC into a kitty litterbox. | <urn:uuid:d64938d0-942a-420a-a31d-773f7fb69f49> | CC-MAIN-2024-38 | https://www.itworldcanada.com/article/distant-learning/36670 | 2024-09-18T03:46:50Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651836.76/warc/CC-MAIN-20240918032902-20240918062902-00406.warc.gz | en | 0.947999 | 999 | 2.59375 | 3 |
Improper Control of Generation of Code ('Code Injection')
phpMyFAQ 2.6.11 and 2.6.12, as distributed between December 4th and December 15th 2010, contains an externally introduced modification (Trojan Horse) in the getTopTen method in inc/Faq.php, which allows remote attackers to execute arbitrary PHP code.
CWE-94 - Code Injection
Code injection is a type of vulnerability that allows an attacker to execute arbitrary code. This vulnerability fully compromises the machine and can cause a wide variety of security issues, such as unauthorized access to sensitive information, manipulation of data, denial of service attacks etc. Code injection is different from command injection in the fact that it is limited by the functionality of the injected language (e.g. PHP), as opposed to command injection, which leverages existing code to execute commands, usually within the context of a shell. | <urn:uuid:de55765d-de7f-4fc0-a34e-82caa71be57e> | CC-MAIN-2024-38 | https://devhub.checkmarx.com/cve-details/cve-2010-4558/ | 2024-09-19T10:46:18Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652028.28/warc/CC-MAIN-20240919093719-20240919123719-00306.warc.gz | en | 0.917107 | 185 | 2.53125 | 3 |
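As a generic illustration of the CWE-94 pattern (unrelated to the phpMyFAQ code itself), the Python sketch below shows the unsafe construct and one way to constrain evaluation to plain arithmetic; the function names are illustrative:

import ast

def calc_unsafe(expression: str):
    # UNSAFE: user input is evaluated as code -- the essence of CWE-94.
    # A string like "__import__('os').system('...')" would simply run.
    return eval(expression)

def calc_safe(expression: str):
    # Parse the input as data and allow only literal arithmetic nodes.
    tree = ast.parse(expression, mode="eval")
    allowed = (ast.Expression, ast.BinOp, ast.UnaryOp,
               ast.Constant, ast.operator, ast.unaryop)
    for node in ast.walk(tree):
        if not isinstance(node, allowed):
            raise ValueError("only literal arithmetic is allowed")
    return eval(compile(tree, "<expr>", "eval"), {"__builtins__": {}})

print(calc_safe("2 + 3 * 4"))  # 14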
Two-factor authentication (2FA) –or multi-factor authentication (MFA) in general – has grown in importance in security in recent years. This is about how users (employees and customers) authenticate to systems. Authentication by username and password is called 1FA.
However, in order to increase security, 2FA or MFA has been used more and more in recent years. One possibility for a second factor is the Time-based One-time Password (TOTP), probably the best-known method, used in countless applications. For example, the TOTP procedure is used by Google and Microsoft in their Authenticator apps.
Many may also remember the tokens of e-banking systems, which had to be renewed every 60 seconds. This combination between username/password and a second system, which is mostly based on personalized hardware such as your mobile phone, is considered to be 2FA.
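To make the TOTP procedure concrete, here is a minimal Python sketch of the standard RFC 6238 derivation using only the standard library (the secret and parameters are illustrative; real deployments also need secure secret storage and clock-drift handling):

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive the current time-based one-time password (RFC 6238)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval           # 30-second time step
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = (int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # matches what an authenticator app would show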
The possible authentication factors can be divided into three different categories:
- Knowledge: the user has certain knowledge, which is known only to him. For example, these are passwords, pins, or answers to security questions
- Biometrics: the user clearly uses biometric features such as his fingerprint, face or iris pattern
- Hardware: the user owns an item that helps him with authentication. This could be a code generator, an SMS or email sent to his mobile phone, or hardware in the form of a card or token.
Benefits of Multi-Factor Authentication (MFA)
- Data is more secure from third-party access. Usernames and passwords of customers and employees are vulnerable to theft: they are either not complex enough (in many cases just a short string such as "123456") or can be read out by Trojans. Another vulnerability is the writing down of passwords, either physically or digitally. 2FA / multi-factor authentication can prevent attacks even after a successful password entry.
- They increase your reputation with customers. Many customers don't mind taking an extra step when they know it serves their security. They gain additional trust when they know that the security of their data is important to the company.
- Productivity can be increased or maintained. As data access becomes more secure, employees can increasingly be allowed to work from home. In times like the current coronavirus crisis, it helps to maintain productivity. In normal times, employees can increasingly access the systems from home or on the go. According to the Harvard Business Review, this can lead to an increase in productivity of up to 13% (Harvard Business Review).
- Lower operational costs. Access for hackers is made more difficult, and this can minimize system failures. The EU-wide GDPR, as well as the Swiss Data Protection Act, requires notification to the Confederation if personal data is lost, deleted, destroyed or altered, or if it is disclosed or made available to unauthorised persons (Art. 4 lit. g E-DSG). Improved security measures can prevent fines.
- Prepared for a possible standard. With regard to data security, 2FA or MFA could be established by the Federal Council as a standard for companies handling sensitive data (Art. 7 Data Security & Art. 11 E-DSG).
Organizations that are serious about security have no choice but to implement multi-factor authentication. Authenticating users with multiple factors is currently a recognized and proven practice for protecting sensitive data. In this article, we looked at Multi-Factor Authentication (MFA): login, benefits, and examples. The next article will focus on the difference between two-factor authentication and MFA.
The phrase Big Data has now been around for a while and we are at the stage where it is impacting more and more of us every day and it’s a trend which is showing no signs of slowing down.
I have written hundreds of posts on big data, from what it is to how it is used in practice. To go alongside, I thought a post highlighting the meaning behind some of the jargon and buzzwords which have built up around the subject would be useful.
So here goes – these are topics everyone who wants to know more about Big Data should have a general understanding of.
Data-as-a-service, software-as-a-service, platform-as-a-service – all refer to the idea that rather than selling data, licences to use data, or platforms for running Big Data technology, it can be provided “as a service”, rather than as a product. This reduces the upfront capital investment necessary for customers to begin putting their data, or platforms, to work for them, as the provider bears all of the costs of setting up and hosting the infrastructure. As a customer, as-a-service infrastructure can greatly reduce the initial cost and setup time of getting Big Data initiatives up and running.
Data science is the professional field that deals with turning data into value such as new insights or predictive models. It brings together expertise from fields including statistics, mathematics, computer science, communication as well as domain expertise such as business knowledge. Data scientist has recently been voted the No 1 job in the U.S., based on current demand and salary and career opportunities.
Data mining is the process of discovering insights from data. In terms of Big Data, because it is so large, this is generally done by computational methods in an automated way using methods such as decision trees, clustering analysis and, most recently, machine learning. This can be thought of as using the brute mathematical power of computers to spot patterns in data which would not be visible to the human eye due to the complexity of the dataset.
Hadoop is a framework for Big Data computing which has been released into the public domain as open source software, and so can freely be used by anyone. It consists of a number of modules all tailored for a different vital step of the Big Data process – from file storage (Hadoop File System – HDFS) to database (HBase) to carrying out data operations (Hadoop MapReduce – see below). It has become so popular due to its power and flexibility that it has developed its own industry of retailers (selling tailored versions), support service providers and consultants.
At its simplest, predictive modelling is predicting what will happen next based on data about what has happened previously. In the Big Data age, because there is more data around than ever before, predictions are becoming more and more accurate. Predictive modelling is a core component of most Big Data initiatives, which are formulated to help us choose the course of action which will lead to the most desirable outcome. The speed of modern computers and the volume of data available mean that predictions can be made based on a huge number of variables, each assessed for the probability that it will lead to success.
MapReduce is a computing procedure for working with large datasets, which was devised due to the difficulty of reading and analysing really Big Data using conventional computing methodologies. As its name suggests, it consists of two procedures – mapping (sorting information into the format needed for analysis – i.e. sorting a list of people according to their age) and reducing (performing an operation, such as checking the age of everyone in the dataset to see who is over 21).
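As a toy illustration of the two steps, the Python sketch below runs the age example on a single machine (real MapReduce distributes the same logic across many nodes; the data is made up):

from collections import defaultdict

records = [("alice", 34), ("bob", 19), ("carol", 52), ("dave", 20)]

# Map: emit a (key, value) pair for each record.
mapped = [("over_21" if age >= 21 else "under_21", name) for name, age in records]

# Shuffle: group the values by key.
groups = defaultdict(list)
for key, name in mapped:
    groups[key].append(name)

# Reduce: aggregate each group -- here, simply count its members.
print({key: len(names) for key, names in groups.items()})
# {'over_21': 2, 'under_21': 2}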
NoSQL refers to a database format designed to hold more than data which is simply arranged into tables, rows, and columns, as is the case in a conventional relational database. This database format has proven very popular in Big Data applications because Big Data is often messy, unstructured and does not easily fit into traditional database frameworks.
Python is a programming language which has become very popular in the Big Data space due to its ability to work very well with large, unstructured datasets (see Part II for the difference between structured and unstructured data). It is considered to be easier to learn for a data science beginner than other languages such as R (see also Part II) and more flexible.
R is another programming language commonly used in Big Data, and can be thought of as more specialised than Python, being geared towards statistics. Its strength lies in its powerful handling of structured data. Like Python, it has an active community of users who are constantly expanding and adding to its capabilities by creating new libraries and extensions.
A recommendation engine is basically an algorithm, or collection of algorithms, designed to match an entity (for example, a customer) with something they are looking for. Recommendation engines used by the likes of Netflix or Amazon heavily rely on Big Data technology to gain an overview of their customers and, using predictive modelling, match them with products to buy or content to consume. The economic incentives offered by recommendation engines have been a driving force behind a lot of commercial Big Data initiatives and developments over the last decade.
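As a hedged toy sketch of the matching idea, the Python snippet below scores items against a customer's history by tag overlap (Jaccard similarity); production engines use far richer signals and models, and the catalogue here is invented:

customer_history = {"sci-fi", "thriller"}
catalogue = {
    "Dune": {"sci-fi", "classic"},
    "Gone Girl": {"thriller", "drama"},
    "The Martian": {"sci-fi", "thriller"},
}

# Jaccard similarity: shared tags divided by all tags seen.
scores = {title: len(tags & customer_history) / len(tags | customer_history)
          for title, tags in catalogue.items()}

print(max(scores, key=scores.get))  # "The Martian"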
Real-time means “as it happens” and in Big Data refers to a system or process which is able to give data-driven insights based on what is happening at the present moment. Recent years have seen a large push for the development of systems capable of processing and offering insights in real-time (or near-real-time), and advances in computing power as well as development of techniques such as machine learning have made it a reality in many applications today.
Reporting is the crucial "last step" of many Big Data initiatives: getting the right information to the people who need it to make decisions, at the right time. When this step is automated, analytics is applied to the insights themselves to ensure that they are communicated in a way that will be understood and easy to act on. This will usually involve creating multiple reports based on the same data or insights, each intended for a different audience (for example, in-depth technical analysis for engineers, and an overview of the impact on the bottom line for c-level executives).
Spark is another open source framework like Hadoop but more recently developed and more suited to handling cutting-edge Big Data tasks involving real time analytics and machine learning. Unlike Hadoop it does not include its own filesystem, though it is designed to work with Hadoop’s HDFS or a number of other options. However, for certain data related processes it is able to calculate at over 100 times the speed of Hadoop, thanks to its in-memory processing capability. This means it is becoming an increasingly popular choice for projects involving deep learning, neural networks and other compute-intensive tasks.
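As a small illustration, a minimal local Spark job might look like the sketch below (it assumes the pyspark package is installed; the workload is illustrative and far below the scale where Spark's speed advantage matters):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("demo").getOrCreate()

numbers = spark.sparkContext.parallelize(range(1_000_000))
numbers.cache()  # in-memory caching is the source of Spark's speed edge

print(numbers.filter(lambda x: x % 2 == 0).count())  # 500000
spark.stop()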
Structured data is simply data that can be arranged neatly into charts and tables consisting of rows, columns or multi-dimensioned matrixes. This is traditionally the way that computers have stored data, and information in this format can easily and simply be processed and mined for insights. Data gathered from machines is often a good example of structured data, where various data points – speed, temperature, rate of failure, RPM etc. – can be neatly recorded and tabulated for analysis.
Unstructured data is any data which cannot easily be put into conventional charts and tables. This can include video data, pictures, recorded sounds, text written in human languages and a great deal more. This data has traditionally been far harder to draw insight from using computers which were generally designed to read and analyse structured information. However, since it has become apparent that a huge amount of value can be locked away in this unstructured data, great efforts have been made to create applications which are capable of understanding unstructured data – for example visual recognition and natural language processing.
Humans find it very hard to understand and draw insights from large amounts of text or numerical data – we can do it, but it takes time, and our concentration and attention is limited. For this reason effort has been made to develop computer applications capable of rendering information in a visual form – charts and graphics which highlight the most important insights which have resulted from our Big Data projects. A subfield of reporting (see above), visualising is now often an automated process, with visualisations customised by algorithm to be understandable to the people who need to act or take decisions based on them. | <urn:uuid:f0d004e8-5cd8-4cdd-9c4e-c1680b564701> | CC-MAIN-2024-38 | https://bernardmarr.com/big-data-terminology-16-key-definitions-everyone-should-understand/ | 2024-09-12T05:18:29Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651422.16/warc/CC-MAIN-20240912043139-20240912073139-00106.warc.gz | en | 0.962322 | 1,730 | 2.84375 | 3 |
In today’s age of far-reaching technology systems, information technology (IT) compliance has become an essential aspect for businesses of all sizes. The data that companies collect and generate throughout their journeys ties directly into the wellbeing of customers and employees.
Therefore, adhering to various regulatory standards and guidelines that ensure the security, integrity, and confidentiality of sensitive data should be high on the list of organizations’ priorities.
This simple article will delve into the importance of IT compliance and how it can benefit your business in the long run.
What is IT compliance in simple terms?
IT compliance refers to the set of rules and regulations posed by third-party entities that companies must adhere to when it comes to their IT environment and overall business operations. The purpose of IT compliance is to ensure that organizations are following best practices and industry standards to protect sensitive information from security threats, such as data breaches.
What are some examples of IT compliance?
There are various compliance regulations within the business landscape today. Well-known examples include HIPAA for healthcare data, PCI DSS for payment card data, SOX for financial reporting, and the GDPR for the personal data of EU residents.
Different regulations apply to different companies depending on their industry and location. When securing business IT solutions, it is incredibly important that organizations align their security strategies and measures with the compliance regulations that are relevant to them.
Is there a difference between IT compliance and IT security?
Compliance and security are two distinct concepts in the world of technology systems. Compliance is about ensuring that the organization is following relevant security laws and regulations, making sure that companies are following the rules.
On the other hand, IT security refers to safeguarding the company’s data and systems from unauthorized access, theft, or damage. It covers the specific security solutions businesses use to protect their IT environment and their configurations. Compared to compliance (which is standardized across the board), IT security is more customizable.
Despite their differences, security and compliance can work together to create a secure and compliant IT environment.
Why is IT compliance important for organizations?
Compliance is important for businesses for several reasons. First and foremost, it helps companies protect their data and business IT solutions from cyber-attacks and other security threats. Regulations provide organizations with a financial incentive to protect customer data and prevent costly data breaches that can cause downtime.
Secondly, compliance provides companies with guidance in conforming to legal standards. Many industries have strict regulations that businesses must follow to ensure that they are providing secure and reliable services to their customers. If a company is not able to comply with regulations, it can face fines and legal consequences, on top of a damaged reputation.
Thirdly, compliance helps organizations find and retain customers and stakeholders. When they actively invest in their compliance and security measures, companies demonstrate that they take security and data protection seriously. And when people see this, it can help businesses build trust and credibility with their prospective and established customers, which can lead to increased loyalty and retention.
How can businesses become compliant?
Your technology systems provide you with the resources to keep your business operating 24/7/365. When you do business with a company outside work, you would expect your personal data to be protected. Consequently, it is important to remember that your customers expect the same standard from you.
There are numerous practices that help keep an IT environment compliant, including regular risk assessments, security audits, employee training, access controls, and documented security policies.
Keep your technology systems compliant with IT specialists
Regulations are vital to maintaining high standards of data security, ethics, and business best practices to keep organizations functional and profitable. The cybersecurity professionals at Davenport Group can assess your organization’s risks, business IT solutions, and legal obligations to develop a comprehensive strategy that can keep your business compliant with any regulations.
With the Davenport team managing your compliance needs, you and your co-workers will be able to deliver your products and services within a secure IT environment. Contact them today and get started. | <urn:uuid:58172e0b-be4b-46a3-ad27-8c4d722d0c45> | CC-MAIN-2024-38 | https://davenportgroup.com/insights/how-important-is-it-compliance-for-businesses/ | 2024-09-13T12:16:59Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651513.89/warc/CC-MAIN-20240913101949-20240913131949-00006.warc.gz | en | 0.961711 | 781 | 2.578125 | 3 |
Introduction to Barcode Creation in Excel
In a detailed tutorial video by Kevin Stratvert, viewers learn how to create barcodes in Microsoft Excel. This process is showcased as not only free but also does not require any software installation. The tutorial is applicable to users of the full Excel desktop version as well as the free web version.
Step-by-Step Guide for Generating Barcodes
The video outlines a straightforward method to format cells in Excel for barcode generation, which begins at the 0:37 minute mark. Following this, at 1:45, Stratvert demonstrates how to insert a barcode using the image function in Excel. As the tutorial progresses, different barcode formats and even QR codes are generated, with specific steps highlighted at 5:24 and 8:04 respectively.
Additional Resources and Community Engagement
To aid viewers, Stratvert provides a downloadable Excel workbook and a link to a free barcode API, enhancing the learning experience by offering practical tools. Furthermore, viewers are encouraged to engage with the broader community through various platforms and can subscribe to receive regular high-quality tutorials and tips directly to their inbox.
Generating barcodes in Excel can significantly streamline various tasks, especially in environments like inventory management and retail. Barcodes are a compact, efficient way to encode data visually, and using Excel allows for easy customization and integration into existing workflows. This capability aligns well with the needs of businesses looking for cost-effective solutions for data management and product tracking.
Excel's versatility in handling different barcode formats ensures adaptability across a range of industries. The ability to generate QR codes adds a layer of modern barcode technology, which is widely used in marketing and information dissemination. By harnessing these functionalities, users can create a seamless bridge between digital data and physical operations.
The availability of free resources, such as APIs and downloadable content provided in the tutorial, underscores the accessibility of advanced Excel functions to a broader audience. This democratization of technology enables users from various sectors to optimize their operational efficiency without significant investment. Strategies outlined by experts like Stratvert make potent tools like Excel more approachable for everyday users and professionals alike.
To manually create barcodes in Microsoft Excel, you can use specific barcode fonts and format the data accordingly.
Enter the formula =RANDBETWEEN(1,100) into the formula bar and press Enter. After copying the formula across the desired cells to generate random barcode numbers, use "Control + C" to copy the selected cells, preparing them to be converted into a barcode format with appropriate fonts or applications.
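Outside Excel, the same kind of output takes only a few lines of Python; this sketch is a hedged alternative, not part of the tutorial, and assumes the third-party qrcode package (with Pillow) is installed:

import qrcode  # pip install "qrcode[pil]"

img = qrcode.make("https://example.com/product/12345")
img.save("product_qr.png")  # the image can then be inserted into a worksheet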
Yes, utilizing a barcode generator allows you to effortlessly create barcodes for your inventory items at no cost.
Implementing barcode scanners in Excel involves configuring the scanner settings to correctly interface and capture data into Excel sheets.
Many terms, features, and technologies are thrown at new networking students. It is common to feel overwhelmed when learning these concepts, which seem complex and foreign to people who have no previous experience in the field. This confusion is compounded by the fact that many of these technologies overlap into each other. An example of this overlap is the concept of a virtual LAN (VLAN). Since it is common to begin by learning about physical LANs, students can become confused if instructors attempt to teach VLAN concepts without first establishing a solid understanding of the physical concepts.
If your understanding of physical LAN concepts is not completely solid, review those concepts thoroughly before reading this article. We will examine how virtual LAN concepts tie in with physical LAN device functionalities, which assumes that you thoroughly understand those physical LAN concepts. Let’s get started. | <urn:uuid:6c7e50f3-aae0-411b-b471-ecb6a465caab> | CC-MAIN-2024-38 | https://www.ciscopress.com/articles/article.asp?p=2343470&seqNum=3 | 2024-09-14T15:02:44Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651579.38/warc/CC-MAIN-20240914125424-20240914155424-00806.warc.gz | en | 0.960613 | 166 | 3.296875 | 3 |
Improper Control of Generation of Code ('Code Injection')
Microsoft Windows Vista Gold, SP1, and SP2, Windows Server 2008 Gold and SP2, and Windows 7 RC do not properly process the command value in an SMB Multi-Protocol Negotiate Request packet, which allows remote attackers to execute arbitrary code via a crafted SMBv2 packet to the Server service, aka "SMBv2 Command Value Vulnerability."
CWE-94 - Code Injection
Code injection is a type of vulnerability that allows an attacker to execute arbitrary code. This vulnerability fully compromises the machine and can cause a wide variety of security issues, such as unauthorized access to sensitive information, manipulation of data, denial of service attacks etc. Code injection is different from command injection in the fact that it is limited by the functionality of the injected language (e.g. PHP), as opposed to command injection, which leverages existing code to execute commands, usually within the context of a shell. | <urn:uuid:6ad8bb4a-0502-4a69-90ec-75e33a86b1c7> | CC-MAIN-2024-38 | https://devhub.checkmarx.com/cve-details/cve-2009-2532/ | 2024-09-15T20:49:09Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651647.78/warc/CC-MAIN-20240915184230-20240915214230-00706.warc.gz | en | 0.867349 | 201 | 2.5625 | 3 |
According to a recent article by Forbes, the FBI has issued a “high-impact” cyber attack warning to U.S-businesses and organizations. The FBI claims that the incidence of indiscriminate ransomware campaigns, such as evidenced by WannaCry in 2017, has sharply declined. However, the frequency of attacks has been relatively consistent and the landscape has been continually evolving.
But what exactly is the FBI warning us about?
Ransomware is a form of malware that encrypts files on a victim’s computer or server and can only be accessed when paying cyber criminals a ransom. Put simply, ransomware is a type of cyber attack that locks your files and holds them hostage unless you pay up. Ransom prices vary and, thanks to the anonymity that cryptocurrencies have, bitcoins are usually the preferred type of payment that these attackers demand.
According to the FBI's warning, healthcare organizations, industrial companies, and the transportation sector, along with regularly targeted industries like state and local governments, should be wary of:
Email Phishing campaigns
Email phishing is a cybercrime defined as the practice of sending fraudulent emails claiming to be from a reputable source in order to trick a user into revealing sensitive information such as passwords, credit card numbers, and other personal details. Cyber criminals may also compromise a victim's email account by using precursor malware, which enables the attacker to use the victim's email account to further spread the infection.
Remote Desktop Protocol vulnerabilities
Remote Desktop Protocol (or RDP) is a proprietary network protocol that allows individuals to control the resources and data of a computer over the internet. Cyber criminals use this method to gain anything from user credentials to full control of the victim's system.
Software vulnerabilities are probably the most common avenue attackers use to access a user's sensitive information. According to the FBI, cyber criminals recently exploited vulnerabilities in two remote management tools used by managed service providers (MSPs) to deploy ransomware on the networks of customers of at least three MSPs.
What does the FBI say when your system gets infected?
Should I pay the ransom? The FBI says no. The FBI warns users that paying the ransom DOES NOT guarantee that you will regain access to your files. In a statement, the FBI says “Due to flaws in the encryption algorithms of certain malware variants, victims may not be able to recover some or all of their data even with a valid decryption key. In addition, paying ransoms emboldens criminals to target other organizations and provides an alluring and lucrative enterprise to other criminals.”
Regardless of whether or not you have decided to pay the attackers, the FBI urges you to report ransomware incidents to law enforcement. This provides the FBI data on critical information that they need to be on top of as well as hold the attackers accountable under the law.
What can I do to protect myself against Ransomware?
What else can you do?
Managed IT Services is the way to go! You might be asking - if it's as simple as installing antivirus software, giving my employees a quick lecture, and keeping security measures in place, why should I hire a professional?
Easy. Peace of mind.
Instead of spending your time learning this stuff and then educating your employees about the different types of Internet pitfalls, why not just focus on growing your business?
You might be rolling your eyes but please - hear us out!
Managed IT service providers exist to cover your business's IT needs, which in turn allows businesses to lower costs and become more effective in their day-to-day operations. These services may range from server management, customer support, and server backup to - you guessed it - network security.
And, as we’ve talked about before, data is king in today’s world. Almost every single business decision now relies on acquired data. That’s where we, as managed IT service providers, come in.
“Managed IT service providers oversee large data centers and put multiple layers of protection in place,” says Isanov. “However, users may still be breached by hackers. It is of utmost importance to understand that as technology improves, the type of attacks become more intricate as well. All hope is not lost though - as long as each security layer is kept up to date, we would be able to see attacks from a mile away.”
Still unconvinced? Or do you want to know more about how you could better protect your business? ETech 7 offers a free network check for your business! | <urn:uuid:47a8beb5-4919-446e-b41b-4f903e28b371> | CC-MAIN-2024-38 | https://blog.etech7.com/blog-fbi-issues-warning-for-high-impact-ransomware-attacks | 2024-09-08T15:54:18Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651013.49/warc/CC-MAIN-20240908150334-20240908180334-00570.warc.gz | en | 0.947673 | 955 | 3.046875 | 3 |
In the run-up to the coronavirus pandemic, future-gazing economists were increasingly concerned with technological unemployment: The idea we’ll replace human workers with automated systems, creating social unrest among surplus workers.
If the prospect of guillotine-wielding retirees didn’t worry you, the economists asked who would buy what robots built. Workers would stage uprisings, unable to buy anything on their way home from the barricades. It wouldn’t be the first time automation has brought social unrest – the term ‘Luddite’ comes from a movement in the UK’s industrial revolution that saw workers destroying machinery that was replacing them.
Calming the angry masses
Many have proposed solutions to this (speculative) crisis, most prominently Universal Basic Income (UBI): a regular pay packet for every person, with no means testing. It would lubricate the economy's supply side and dry up the reservoirs of guillotine-building fuel. UBI may be a good idea, but not because of technological unemployment.
Most technological unemployment stories foresee advancements in machine learning producing artificial intelligence. That’s like continuous improvements in horse breeding producing an internal combustion engine.
The poster child for technological unemployment is trucking. Artificial intelligence (AI) boosters will tell you being a trucker is one of the most common jobs in the US and that self-driving trucks are within our grasp.
But there’s more to the story. “Trucker” is one of the most common jobs in the US because the Bureau of Labor Statistics deems “trucker” includes anyone who operates a delivery vehicle, from a van to a long-haul 16-wheeler.
Self-driving vehicles are nowhere near ready to replace urban delivery drivers. It’s true that if given their own lane on major highways, automated lorries could probably navigate reliably. How exciting. We’ve just invented a substandard freight train.
Bigger fish to fry
That all said, I’m a science fiction writer. My job is to imagine six impossible things before breakfast. I can conduct a thought experiment in which AI breakthroughs replace most of the jobs we’re doing today.
If that day comes, it will be a good one. It will free up people for the most urgent task our species faces – a project that will absorb all the work all of us and all our descendants can do, for centuries to come. I’m talking about climate change.
Remediating climate change will be unimaginably labor-intensive: Relocating every coastal city inland, building high-speed rail to replace aviation and treating runaway pandemics, to begin with. If automation takes your job, you’ll have another. You’ll have ten. Or a hundred.
Fantasy faces facts
Perhaps you think I’m dodging the question. If we’re stipulating a fundamental breakthrough that produces AI, what about a comparable geoengineering breakthrough? Maybe our AI will be so smart it will work out how to reduce Earth’s reflectivity, creating mass cooling to counter the greenhouse effect.
That’s less science fiction, more science fantasy. It’s too late to halt the climate processes that will flood every coastal city, displace hundreds of millions of people and sicken billions as pathogen-bearing organisms seek new habitats. These things will happen regardless of geoengineering.
Why? Consider, for example, the heat we’ve sunk into the oceans. The seas won’t cool until the energy trapped in their depths is expended. The ice caps are toast. I can speculate about AI all night long, but thought experiments that repeal the second law of thermodynamics aren’t scenario-building. They’re wishful thinking.
The good news is, we’ve revealed the central problem of technological unemployment. It’s not a technological problem – it’s an economic one.
Pandemic lays economic assumptions bare
The COVID-19 crisis has taught us two critical things. The first: Ideological commitment to government austerity doesn’t build capacity – it destroys it. California spent 200 million US dollars in 2006 to stockpile N95 masks, ventilators and mobile hospitals. In 2011, the state dismantled its stockpile to save just 5 million US dollars a year on maintenance as its tax revenues fell after the subprime crisis.
The second: Sovereign currency issuers don’t have cash shortfalls during crises – they have capacity shortfalls. Since the pandemic began, central bankers of countries with monetary sovereignty (issuing free-floating currencies and not borrowing substantially in currencies they don’t issue) have been creating money. They just type zeroes into a spreadsheet at the central bank, and the money appears.
Orthodox economics says creating money produces inflation, but it didn’t, because the money was largely spent on things the private sector wasn’t buying. Inflation happens when the amount of money in circulation grows in relation to goods and services for sale. When the amount of currency goes up without a rise in available goods and services, you have more money chasing fewer things, and prices rise.
But the private sector had suddenly developed a disinterest in the labor of hundreds of millions of workers worldwide. When governments gave those workers money to cover their overheads, they’re not competing with the private sector, so they’re not driving up the price of labor.
There’s one exception. The demand for healthcare has spiked, with no end in sight.
Central banks can print as much money as they want, but they can’t make ventilators appear on the market. California gave away its ventilators 12 years ago to save money. The US central bank could have filled California’s budget hole 12 years ago. Instead, they “saved” money and threw out the health infrastructure. Likewise, post-2008 austerity saw a drawdown of medical infrastructure across the European Union, particularly in the poorest countries like Italy, whose early collision with the coronavirus was made brutal by austerity-wrought brittleness.
Austerity is a bad bet. Money is the one thing central banks can’t run out of. But we sure can run out of ventilators and masks.
A better kind of full employment
If governments with monetary sovereignty can make as much cash as they need to buy things nobody else is buying without creating inflation, we won’t have an automation-driven unemployment crisis. If robots can supply all the capacity for our needs, central banks can distribute money and tax some of it away if the ratio of money to stuff-you-can-buy gets out of line.
Today, we have a kind of full employment. You can either be employed, underemployed or unemployed. The prevalence of unemployed and underemployed people controls inflation by keeping wages down. If you ask for a raise, your employer can threaten to fire you and hire someone unemployed.
A better kind of full employment would take every person made idle by the climate emergencies we’re about to face and give them essential, meaningful work for a decent wage. They’d be moving cities, building seawalls and fighting pandemics. They’d remediate habitats and care for refugees. No one is competing to hire others to do that work, but it’s work that will need doing, and there’ll be work enough for all.
Dreaming gets real
Those fretting over technological unemployment are worried about a future, speculative moment when we go through a technological singularity, driven by general AI through nebulous means.
That’s a long way away, but in the next year or so, the pandemic crisis will end. About 30 percent of all jobs that existed before the crisis will no longer exist. The workers who did those jobs will be unemployed unless governments step up.
Governments will collapse if they choose unemployment over job creation, leaving a third of workers jobless and desperate. They may take their nations down with them. Nations with 30 percent unemployment can’t function and can’t create capacity that will make them resilient to the next crisis.
It’s a civilization-threatening downward spiral that will end the dream of creating superintelligent AI. Societies whose primary industry is digging through rubble for canned goods do not make AI breakthroughs.
The 30 percent unemployment “solution” solves nothing, but what about a job guarantee? The 30 percent working under government jobs programs will have working lives decoupled from the market. The movement of markets – especially financial markets – will be irrelevant to everyone except chart-watching, twitchy nerds.
Can we imagine a job guarantee? You might find it hard to believe we’ll find the political will to procure the labor of people without jobs. Even in the face of a climate emergency that needs labor to prevent destruction of cities and civilization, it’s hard to imagine such an ambitious program.
But if we’re going to dream, let’s dream big. I see a path from here to a job guarantee. I don’t see a path from here to AI.
Universal Basic Income or job guarantee?
Universal Basic Income (UBI) is easy to understand. If some people don’t have enough money to make ends meet, give everyone money.
That’s a great idea, but it won’t address inequality. If most people lack the money to keep body and soul together, giving them a monthly stipend could make the difference between starvation and subsistence. However, for the small number who already have money enough to save, that stipend is just additional savings. Ten years later, the masses have averted starvation, while the wealthy minority have accumulated much extra wealth.
A job guarantee is also easy to understand. If we have a job that needs doing and a person that needs a job, we should pay the person to do that job at a sufficient wage with decent benefits. This will address inequality and create a true minimum wage. Without a job guarantee, the real minimum wage is $0 per hour: The amount you get if you want to work, but no one will give you a job.
Early 20th-century economist John Maynard Keynes once proposed we could jump-start an economy by paying half the unemployed people to dig holes and the other half to fill them. No one’s tried that, but we just spent 150 years subsidizing digging hydrocarbons out of the ground. Now we’ll spend 200 to 300 years subsidizing our descendants to put them back in.
Since neo-liberal economics swept the globe in the 1980s, business has used unemployed people to control wages. This tactic is no longer sustainable. The pessimistic future of your business is unemployment so high it will collapse the property relationships and rule of law your market relies upon. The optimistic future is simply that your firm hires in a way that makes it competitive with good public sector jobs remediating our planet. | <urn:uuid:09c13573-2e3c-4a50-8024-e5b499c50afe> | CC-MAIN-2024-38 | https://www.kaspersky.com/blog/secure-futures-magazine/ai-future-jobs-climate/37035/ | 2024-09-09T20:33:41Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651157.15/warc/CC-MAIN-20240909201932-20240909231932-00470.warc.gz | en | 0.933764 | 2,334 | 2.5625 | 3 |
Generative AI (GenAI) is a branch of AI that can be trained to create derivative content in different formats, including text, image, video, and audio.
Why is Generative AI Important?
Generative AI is important to business because it helps accelerate creative processes, including writing copy and sourcing images for ads, customer emails, and newsletters. Product designers benefit from using GenAI to deliver 3D images and models of designs from new perspectives.
Consumers benefit from GenAI by having search results explained to them.
Applications of Generative AI
New applications for GenAI are released almost daily. Below are some examples of this rapidly evolving set of applications:
- Chatbots are probably the most popular text-based application of Generative AI. Customer service teams use these sales contact centers and marketing websites to provide highly responsive dialogs.
- Transcription GenAI services will create meeting minutes and summarize video content.
- Social media analysis GenAI models analyze social streams to get the gist and highlight particularly negative or positive sentiments.
- Research can be more productive by having a GenAI tool run web searches for articles and papers and then summarize and organize the output based on search terms.
- Marketing teams can use GenAI to create visual and written content.
Training Generative AI Models
Generative Pre-trained Transformer (GPT) models use deep learning algorithms applied to large training data sets to accumulate knowledge. Below are the training methods.
Unsupervised Learning
The least sophisticated training approach is to feed large volumes of relevant data to teach the GenAI model. For example, you may want a text-based GenAI to write your PR agency's first draft of press releases. You could start by sharing client briefing templates along with the final draft of the associated press release. The GenAI model will quickly learn to draft similar releases.
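As a hedged illustration of the press-release example, the Python sketch below writes briefing/release pairs to a JSONL file, a common interchange format for fine-tuning data (the field names and file layout vary by vendor and are assumptions here, not a specific provider's API):

import json

# Hypothetical briefing/release pairs collected from past client work.
examples = [
    {"briefing": "Client: Acme. Product: solar kettle. Launch date: June 1. ...",
     "release": "ACME UNVEILS THE FIRST SOLAR-POWERED KETTLE ..."},
    # ... hundreds more pairs ...
]

with open("press_release_training.jsonl", "w", encoding="utf-8") as fh:
    for ex in examples:
        # One prompt/completion pair per line; the model learns the mapping
        # from briefing to finished release.
        record = {"prompt": ex["briefing"], "completion": ex["release"]}
        fh.write(json.dumps(record) + "\n")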
Supervised Learning
A more guided approach uses data sets with the best usage examples highlighted or tagged. This has the potential to create higher-grade output than the unsupervised approach.
Reinforcement Learning from Human Feedback
Reinforcement Learning from Human Feedback (RLHF) provides feedback on GenAI output using people’s preferences—this form of fine-tuning training results in more natural conversational responses from Chatbot applications. For an application that summarizes articles, for example, any edits made to its output are used to generate a further training dataset for fine-tuning.
Diffusion Models
Diffusion models are used for GenAI applications that create and enhance images and videos. In this instance, image creation is done using text-based prompts that provide information about the required frame, subject and style. GPT image tools such as DALL-E 2 and Microsoft Designer use diffusion models to create versions of the images they are trained on that depict new perspectives, change settings and allow customizations such as adding text.
Enterprise customers of GenAI vendors like AWS and OpenAI can access plug-ins that provide a pre-trained model as a high-level starting point. Below are some examples of GPT-4 plug-ins.
- AI Data Analyst – Explore data using natural language.
- AnalyticsAI – Review your Google Analytics using prompts.
- Bramework analyzes search data to help marketers with Search Engine Optimization (SEO).
- Chat With Excel – Converse with your spreadsheet.
- Developer Doc Search – Open-source code research and documentation search.
- Recipe Finder – Recipe ideas organized by dietary needs.
- Rephrase AI – Turn text into talking avatar videos.
- Smart Slides – Create a slide presentation.
- Take Code Captures – Beautify source code for sharing.
- Visualize Your Data – Create charts of your data.
The Actian Data Platform and Generative AI
Thanks to its built-in data integration capabilities, the Actian Data Platform makes it easy to automate data preprocessing as part of your AI training workflow. Businesses can proactively preprocess their operational data to be analysis-ready using pipeline automation, which makes it easy to unify, transform, and orchestrate data pipelines.
Actian’s database technology in the Action Data Platform can perform high-speed queries referencing distributed database instances and data stored externally to the database using the Spark connector. | <urn:uuid:864a038f-aa52-4857-8045-bf941a688356> | CC-MAIN-2024-38 | https://www.actian.com/glossary/generative-ai/ | 2024-09-17T05:15:34Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651739.72/warc/CC-MAIN-20240917040428-20240917070428-00770.warc.gz | en | 0.892405 | 865 | 3.171875 | 3 |
[German]November 30, is Computer Security Day, a day that has been dedicated to secure IT worldwide since 1988. The initiative for Computer Security Day goes back to the US Association for Computer Security Day. The aim of this day of action: to give the topic of computer and information security a firm place in the public consciousness and to sensitize individuals to this complex of topics.
This day is actually sorely needed, because in the meantime one must assume the status of "365 computer unsecurity days". Security provider Check Point Software Technologies Ltd takes a similar view. Because since the introduction of Computer Security Day in 1988, the extent of the threats has increased every year and in the last 12 months there have been more incidents than ever before. New and sophisticated malware, more devices, more computing power and professional criminal gangs mean that anyone with a computer, smartphone or IoT device needs to think about IT security on a regular basis – but many still don't.
Five tips for better security
But now that many people are working remotely from home, every employee has a certain amount of responsibility when it comes to IT security at home and at work. For this reason, the following tips have been compiled to provide guidance and assistance in protecting both personal devices and IT systems:
- Passwords are important: Passwords should be checked and strengthened regularly. However, experts argue about the length and composition, as well as the frequency of renewal. It is important for users to be careful with their passwords, not to store them unsecured in Excel spreadsheets or leave them written down for anyone to see, or stick them on the back of the keyboard. "1234" or "password" are also not secure passwords.
- Protect against phishing: Users should be careful before clicking on links that look suspicious in any way, often associated with the sender. They should also only download content from reliable sources, as phishing, a popular form of social engineering, has become the main avenue of attack. Therefore, if users receive an email with an unusual request or a strange sender or subject, they should immediately start doubting.
- Choose IT devices carefully: In connection with telecommuting: In connection with teleworking (remote working), this point has become extremely important. The risk of a large-scale attack increases when employees use their private end devices, such as computers or cell phones, for work purposes. Security software should be installed on all devices and the connection to the company network should be protected.
- Keep software fresh: Hackers often find entry points in applications, operating systems and security solutions, as they generally monitor and exploit the appearance of vulnerabilities. One of the best protective measures is to always use the latest version of any software – simple, but effective.
- Use multi-factor authentication: Many users are already familiar with multi-factor authentication from their online banking accounts when the TAN (one-time password) is requested via the cell phone, for example. In many cases, this login method is now being introduced for applications and accounts at online retailers to increase IT security. In this way, they have made it almost impossible for cyber criminals to gain access to the system despite knowing the password.
This advice can already help toward protecting your own devices and business against hacker attacks and malware. This should be supplemented by a comprehensive IT security architecture that consolidates and centrally controls various security solutions against different types of attack. This covers all areas of IT security and can even intercept the dreaded zero-day attacks. The training of all employees, up to management level, and the training of specialists via special training programs and learning platforms ultimately rounds off the strategy. | <urn:uuid:f1c06425-2450-4b0f-a508-f78aecae2181> | CC-MAIN-2024-38 | https://borncity.com/win/2021/11/30/30-november-ist-computer-security-day/ | 2024-09-19T18:49:14Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652055.62/warc/CC-MAIN-20240919162032-20240919192032-00570.warc.gz | en | 0.958405 | 738 | 2.6875 | 3 |
Length Definition of Result Data Type
The length of a result data type is defined by fid_rlength in II_ADD_FI_DFN or an external lenspec routine. This length is used by the DBMS Server when manipulating values internally. The length component of the actual data value (II_DATA_VALUE db_length) should not be changed within the function itself. If the length defined by fid_rlength or the lenspec routine differs from the data value’s db_length, then errors may result or data values may be incorrectly interpreted.
The same is true of scale and precision in the case of DECIMAL. In this case, the fid_rprec or lenspec is used and the db_prec of the data value should not be changed.
The following macros, defined in $II_SYSTEM/ingres/files/iiadd.h, can be used to manipulate DECIMAL length values:
Given a precision p and scale s, returns the two-byte value combining the two. This macro could be used, for example, when setting db_prec value within a user-defined lenspec routine.
Given a two-byte combined value for precision and scale (db_prec) of ps, returns the precision part.
Given a two-byte combined value for precision and scale (db_prec) of ps, returns the scale part.
Given a precision of prec, returns the length needed for such a decimal. This macro could be used when setting the db_length within a user-defined lenspec routine. | <urn:uuid:ca7ee39f-3d4a-47d4-acd3-6a5f97a28b01> | CC-MAIN-2024-38 | https://docs.actian.com/ingres/10S/ObjMgmtExt/Length_Definition_of_Result_Data_Type.htm | 2024-09-19T18:52:22Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652055.62/warc/CC-MAIN-20240919162032-20240919192032-00570.warc.gz | en | 0.659248 | 329 | 2.796875 | 3 |
How to Run Console Programs Without a Console Window
From time to time the question gets asked how a batch file or console program can be executed silently or hidden, i.e. without a console window popping up. If you do not know what I am talking about: press Win+R, type “help” and press ENTER. A black console window will open, execute the HELP command and close again. Often, this is not desired. Instead, the command should execute without any visible window.
Solution 1: For Programmers
Use CreateProcess to execute the command and set the parameter dwCreationFlags to CREATE_NO_WINDOW (0x08000000).
Solution 2: For Script Writers
Use the Run method as illustrated in the following code snippet to execute the command:
Set Shell = CreateObject("WScript.Shell")
Shell.Run """Path to command or batch file""", 0, False
Of the three arguments to Run, the first is the full path to the executable or batch file, the second sets the windows style (0 meaning “hide the window”) and the third specifies whether the script waits for the command to return (False meaning “no, do not wait”).
Solution 3: For Everybody Else
Use the excellent little free tool hstart. It is not only capable of executing any command without a visible console window but is a full-fledged replacement of the console’s START command – and more. For example, it handles Vista UAC elevation and it is even available in a 64-bit version. | <urn:uuid:2d151e3d-38dc-491d-936b-092296b6e723> | CC-MAIN-2024-38 | https://helgeklein.com/blog/how-to-run-console-programs-without-a-console-window/ | 2024-09-07T13:37:56Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650883.10/warc/CC-MAIN-20240907131200-20240907161200-00770.warc.gz | en | 0.821864 | 326 | 2.578125 | 3 |
Virginia Tech College of Engineering Professor Wu Feng has a vision to broadly apply parallel computing to advance science and address major challenges. A recent expose on Feng’s work details his involvement with the NSF, Microsoft, and the Air Force using innovative computing techniques to solve problems.
“Delivering personalized medicine to the masses is just one of the grand challenge problems facing society,” said Feng. “To accelerate the discovery to such grand challenge problems requires more than the traditional pillars of scientific inquiry, namely theory and experimentation. It requires computing. Computing has become our ‘third pillar’ of scientific inquiry, complementing theory and experimentation. This third pillar can empower researchers to tackle problems previously viewed as infeasible.”
He addresses the question of why bolstering these disciplines is no longer a matter of throwing more FLOPs at the problem.
“In short, with the rise of ‘big data’, data is being generated faster than our ability to compute on it,” he explains. “For instance, next-generation sequencers (NGS) double the amount of data generated every eight to nine months while our computational capability doubles only every 24 months, relative to Moore’s Law. Clearly, tripling our institutional computational resources every eight months is not a sustainable solution… and clearly not a fiscally responsible one either. This is where parallel computing in the cloud comes in.”
“…Rather than having an institution set-up, maintain, and support an information technology infrastructure that is seldom utilized anywhere near its capacity… and having to triple these resources every eight to nine months to keep up with the data deluge of next-generation sequencing, cloud computing is a viable and more cost effective avenue for accessing necessary computational resources on the fly and then releasing them when not needed.”
Much of his work centers on the promise of parallel computing, which he sees as analogous to the Internet in terms of its ability to transform the way people interact.
In the mid-2000s, Feng was part of a team that created an ad-hoc supercomputing cloud to process genomics data. They were able to reduce the time it took to identify missing gene annotations in genomes from a period of three years down to two weeks by adopting added parallelism. This project is now being formalized and expanded with funding from NSF and Microsoft with the aim of commoditizing biocomputing in the cloud.
To facilitate this important research, Feng founded a new center at Virginia Tech — Synergistic Environments for Experimental Computing (SEEC). The center is co-funded by Virginia Tech’s Institute for Critical Technology and Applied Science (ICTAS), the Office of Information Technology, and the Department of Computer Science. Under Feng’s leadership, the research center seeks to democratize parallel computing through the codesign of algorithms, software, and hardware to accelerate discovery and innovation. Emphasis will be placed on five areas, each with varying degrees of “big compute” and “big data” requirements: cyber-physical systems where computing and physical systems intersect; health and life sciences, including the medical sciences; business and financial analytics; cybersecurity; and scientific simulation. | <urn:uuid:bb62bf80-e583-49b7-929a-ff331b016e9e> | CC-MAIN-2024-38 | https://www.hpcwire.com/2014/08/20/democratization-parallel-computing/ | 2024-09-11T07:55:24Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651344.44/warc/CC-MAIN-20240911052223-20240911082223-00470.warc.gz | en | 0.937448 | 661 | 2.546875 | 3 |
How AI and Cybersecurity Changes Will Transform Your Security Program
In today’s rapidly evolving technological landscape, artificial intelligence (AI) is no longer a distant concept but a central force reshaping industries across the globe. Among these industries, cybersecurity stands out as one of the most profoundly impacted by AI’s rise. The intersection of AI and cybersecurity presents both opportunities and significant challenges, and understanding this relationship is crucial for organizations aiming to protect their digital assets. This article explores how AI is revolutionizing cybersecurity programs, highlights the benefits and risks it brings, and offers practical recommendations for cybersecurity professionals.
The Benefits of AI in Cybersecurity
AI is increasingly being integrated into cybersecurity programs due to its ability to process vast amounts of data, identify patterns, and automate responses to threats. The use of AI in cybersecurity has been particularly beneficial in enhancing threat detection and response times. According to IBM’s 2024 “Cost of a Data Breach Report,” organizations that extensively used AI and automation saved an average of $1.88 million in data breach costs and identified and contained breaches 100 days faster than those that did not use AI. This capability allows organizations to respond to threats more swiftly and effectively, reducing the potential damage.
How AI Increases Security Risks
While AI offers numerous benefits, it also introduces new security risks that organizations must address. These risks include:
- Accidental Data Leaks: AI systems often require large datasets for training, which can include sensitive information. If these datasets are not adequately protected, they can be vulnerable to breaches, leading to the exposure of confidential data.
- AI-Generated Phishing: Cybercriminals are using AI to create highly sophisticated phishing emails that are harder to detect. These emails can mimic legitimate communication with high accuracy, increasing the likelihood that unsuspecting recipients will fall victim to scams. Watch LMG’s recent webinar, “How the Dark Web Works,” to see AI phishing tools in action.
- Voice Cloning and Deepfakes: AI-driven technologies can create highly convincing audio and video content, which can be used to impersonate individuals or manipulate public opinion. For instance, a deepfake video could deceive employees into transferring funds to a fraudulent account, believing they are following legitimate instructions from a senior executive.
In early 2024, a major British engineering company fell victim to a sophisticated deepfake scam, resulting in the theft of nearly $26 million. The attackers used AI-generated video and audio to impersonate the company’s CFO in a virtual meeting with a key finance department employee. The deepfake was so convincing that the worker was completely unaware that they were not speaking to their real CFO. During the meeting, the “CFO” authorized large transfers of funds, which were swiftly executed. This incident highlights the growing threat of AI-driven impersonation attacks and underscores the need for enhanced verification protocols, especially in high-stakes financial transactions.
- Adversarial Attacks on AI Models: Cyber attackers can manipulate AI models by feeding them malicious inputs, leading to incorrect or harmful outputs. These attacks can compromise the integrity of AI-driven security tools, making them less effective or even turning them against the organization.
- AI-Generated Malware: AI is being used by cybercriminals to create more advanced and evasive forms of malware. AI-generated malware can adapt to evade detection by traditional cybersecurity tools, posing a significant threat to corporate networks. This type of malware can mimic legitimate software behavior, making it difficult to detect and remove.
Recommendations for Strengthening Cybersecurity in the AI Era
Given the unique challenges posed by AI, it is crucial for organizations to adapt their cybersecurity strategies accordingly. Below are some recommendations for managing the integration of AI and cybersecurity:
- Create Clear Policies and Procedures Regarding the Use of AI: Organizations should establish comprehensive policies that govern the use of AI technologies. These policies should outline the acceptable use of AI, data handling procedures, and protocols for addressing AI-related security incidents. Regular reviews and updates of these policies are essential to keep pace with the rapidly evolving AI and cybersecurity landscapes.
- Third-Party Risk Management (TPRM): As AI becomes more integrated into business operations, organizations must update their vendor vetting processes to include questions related to the use of AI. This includes assessing whether third-party vendors use AI, whether it is integrated into their operating systems, and how they protect AI-driven systems. Contracts should also address the use of AI to ensure that both parties are aligned on security practices.
- Social Engineering Training: With the rise of AI-driven scams, it is more important than ever to educate employees about the risks of social engineering. Training programs should cover the potential dangers of voice cloning, deepfakes, and AI-generated phishing emails. Providing real-world examples and conducting regular security awareness training can help employees recognize and respond to these threats effectively.
- Implement Robust AI Monitoring and Testing: Organizations should implement continuous monitoring of AI systems to detect anomalies and potential attacks. Regular testing of AI models, including adversarial testing, can help identify vulnerabilities before they can be exploited by attackers.
- Invest in AI-Specific Security Tools: As the cybersecurity landscape evolves, so too must the tools used to protect digital assets. Organizations should consider investing in AI-specific security tools that can detect and respond to threats targeting AI systems. These tools can provide an additional layer of defense, ensuring that AI technologies are not compromised.
- Collaborate with External Cybersecurity Experts: Given the complexity of AI and its associated risks, collaborating with external cybersecurity experts can be beneficial. These experts can provide insights into emerging threats, offer guidance on best practices, and assist in the development of robust security strategies.
Managing the Integration of AI and Cybersecurity
The integration of AI into cybersecurity programs presents both opportunities and challenges for organizations. While AI can significantly enhance threat detection and response capabilities, it also introduces new risks that must be carefully managed. By adopting clear policies, updating third-party risk management practices, educating employees, and investing in AI-specific security tools, organizations can better protect themselves in this new era of cybersecurity. As AI continues to evolve, so too must the strategies employed to safeguard digital assets, ensuring that the benefits of AI can be fully realized without compromising security.
We hope this information has been helpful! Please contact us if you need support developing AI and cybersecurity policies and procedures or help with technical testing, advisory services, cybersecurity solutions, or training. | <urn:uuid:1b82d3ba-2f9e-4be6-bb49-42776505233f> | CC-MAIN-2024-38 | https://www.lmgsecurity.com/how-ai-and-cybersecurity-are-transforming-threat-management/ | 2024-09-11T06:28:13Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651344.44/warc/CC-MAIN-20240911052223-20240911082223-00470.warc.gz | en | 0.937439 | 1,325 | 3.0625 | 3 |
Table of Contents:
In the fast-paced realm of cybersecurity, staying ahead of constantly evolving threats is a perpetual challenge. Over the recent months, the landscape has witnessed remarkable innovations reshaping how we defend against cyber threats. From cutting-edge technologies to novel approaches, let’s explore some key advancements bolstering our digital defenses.
List of Latest Innovations in Cybersecurity
Zero Trust Architecture
Traditional security models centered around the notion of a secure perimeter, assuming that once inside, all was safe. However, the surge in remote work and sophisticated cyber threats demanded a reevaluation. Enter Zero Trust Architecture—a paradigm shift that advocates for continuous verification of every user and device. By eliminating implicit trust, organizations can significantly reduce their attack surface and enhance security in an era where boundaries are increasingly blurred. Seeking guidance from an expert cyber security consultancy will help to navigate these cutting-edge advancements effectively with the best knowledge that you could hope to be armed with.
AI-Powered Threat Detection
Artificial Intelligence (AI) has become a linchpin in modern cybersecurity. Machine learning algorithms analyze vast datasets to identify patterns and anomalies, enabling early threat detection. Advanced AI systems excel at distinguishing normal behavior from potential threats, offering a proactive defense mechanism against evolving risks. This innovation is instrumental in preventing data breaches and fortifying overall cybersecurity posture.
The inadequacies of traditional passwords have led to the rise of biometric authentication, which is quickly taking over two-factor authentication. Technologies like facial recognition, fingerprint scanning, and behavioral biometrics provide a more robust and user-friendly alternative to quickly access the required data, making it more secure than other means. By incorporating unique biological traits into the authentication process, organizations can thwart unauthorized access attempts, offering a higher level of security in an age where data breaches are a constant concern.
The advent of quantum computers poses a looming threat to conventional encryption methods. Acknowledging this vulnerability, researchers have been actively developing quantum-safe cryptography. These cryptographic algorithms are designed to resist attacks from quantum computers, ensuring the continued confidentiality and integrity of sensitive information. As quantum computing technology advances, the integration of quantum-safe cryptography is crucial for future-proofing our digital defenses.
Threat Intelligence Sharing Platforms
In the spirit of collective defense, threat intelligence-sharing platforms have gained prominence. These platforms facilitate real-time information exchange about emerging threats among organizations. Members can fortify their defenses by pooling knowledge and resources and proactively responding to potential risks. Collaboration within the cybersecurity community has proven to be a powerful strategy for staying ahead of the rapidly evolving threat landscape.
As cybersecurity continues to evolve, innovation remains paramount in the ongoing battle against cyber threats. The highlighted advancements represent a snapshot of the progress made in recent times. Looking forward, the cybersecurity community must maintain its vigilance, fostering collaboration and pushing the boundaries of innovation to stay ahead of malicious actors. With these continued advancements, people can navigate the ever-changing digital landscape, ensuring the interconnected world remains secure and has a resilient future in the long term.
ABOUT THE AUTHOR
IPwithease is aimed at sharing knowledge across varied domains like Network, Security, Virtualization, Software, Wireless, etc. | <urn:uuid:bfede386-b25c-42e4-9496-e5eb9904f15f> | CC-MAIN-2024-38 | https://ipwithease.com/cybersecurity-latest-innovations/ | 2024-09-16T05:47:49Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651676.3/warc/CC-MAIN-20240916044225-20240916074225-00070.warc.gz | en | 0.916583 | 648 | 2.640625 | 3 |
Many companies would like to enhance their quality management, but they are still unsure whether to start with ISO 9001 or GMP, since both of these are focused on quality. Read this article to find out how to decide which one to choose.
ISO 9001 and GMP are quality frameworks for businesses offering products or services. ISO 9001 is a Quality Management System standard, while GMP stands for Good Manufacturing Practices. Both are equally important, but ISO 9001 applies to any industry, and GMP is only for manufacturing food, drugs, cosmetics, and medical devices.
What are ISO 9001 and GMP?
ISO 9001 is the International Organization for Standardization’s Quality Management System (QMS) standard. It is a globally recognized framework applicable to virtually any industry, and it allows voluntary certification by an external party. According to ISO, over 1.2 million organizations are certified in more than 170 countries. This standard focuses on the general principles of quality management, such as top management commitment, customer focus, process approach, and continual improvement.
GMP regulations are mandatory in the EU for cosmetic products and are highly recommended by many other countries, such as the United States. In the United States, GMP is enforced by the US Food and Drug Administration (FDA) through Current Good Manufacturing Practices (CGMP), which cover a broader range of industries such as cosmetics, food, medical devices, and prescription drugs. Similarly, the FDA has established CGMPs for food and dietary supplements to ensure the safety of these products. Therefore, while GMP regulations are enforced in many countries, they may be mandatory in some and highly recommended in others.
GMP rules ensure that products consistently meet high quality standards, are suitable for their intended use, and comply with marketing authorization or clinical trial authorization. Unlike ISO standards, which are voluntary, GMP is legally binding, and compliance is enforced by national or supervising authorities. GMP is essential to guarantee product safety, consistency, and effectiveness, as it helps maintain rigorous quality control and manufacturing standards within the pharmaceutical, medical device, food, and cosmetic industries.
What are the key differences between ISO 9001 and GMP?
ISO 9001 and GMP are Quality Management Systems widely used in the manufacturing industry, but they have some key differences. Here are the main differences between ISO 9001 and GMP:
- Industry Applicability: The most significant difference is the industries these standards apply to. ISO 9001 applies to all organizations, including manufacturing, services, healthcare, etc. In contrast, GMP applies exclusively to the pharmaceutical, medical devices, cosmetics, and food industries, where product safety and efficacy are essential. It guarantees that products are manufactured, tested, and controlled following rigorous guidelines.
- Focus: GMP focuses on manufacturing to ensure the safety, identity, strength, quality, and purity of pharmaceuticals and food. It safeguards consumers from potential harm. At the same time, ISO 9001 is more concerned with the company’s overall management and Quality Management System, with a strong customer focus – meeting customer requirements, exceeding customer expectations, and improving customer satisfaction.
- Voluntary vs. Mandatory: ISO 9001 certification is voluntary, with organizations adopting the standard for competitive advantage and customer satisfaction. GMP compliance is mandatory and regulated by government agencies.
- Certification vs. Audited by Regulatory Authorities: Organizations can achieve ISO 9001 certification through third-party audits conducted by accredited certification bodies chosen by the organization. GMP compliance is verified through inspections and audits conducted by regulatory authorities, and non-compliance can result in regulatory actions.
- Documentation Emphasis: Both standards require documentation, but GMP’s documentation is particularly stringent, given the critical nature of pharmaceuticals, medical devices, cosmetics, and food products.
- Quality Control Unit: GMP regulations require establishing a quality control unit responsible for product approval and rejection. In contrast, ISO 9001 does not explicitly prescribe the creation of a quality control unit vested with the authority to approve or reject products. Instead, ISO 9001 encourages organizations to define roles and responsibilities in quality management. If quality control units exist, the standard does not dictate their specific composition and authority.
What are the similarities between ISO 9001 and GMP?
ISO 9001 and GMP converge on several key issues in quality management. Both standards prioritize consistent quality and operational excellence. ISO 9001 and GMP emphasize process optimization and documented procedures to uphold established standards. Both advocate a process-oriented approach, requiring ongoing evaluation for enhanced quality outcomes.
Customer-centricity and regulatory adherence are shared principles, underscoring the alignment of both standards in meeting customer needs and complying with industry regulations for sustained quality assurance. This convergence highlights their mutual emphasis on stringent processes, documentation, and continuous improvement to ensure consistent quality across different sectors and pharmaceutical manufacturing.
How can ISO 9001 complement GMP?
Even though ISO 9001 and GMP each have a somewhat different focus, implementing them together is a smart idea.
The process approach and customer focus outlined in ISO 9001 can significantly complement GMP by enhancing the Quality Management System within regulated industries.
ISO 9001’s process approach, when applied to non-GMP areas (such as administration, human resources, or sales), allows for mapping and managing interconnected workflows and activities. By identifying, analyzing, and optimizing these processes, efficiency can be improved, redundancies reduced, and potential bottlenecks identified.
ISO 9001 places a strong emphasis on understanding and meeting customer requirements. Integrating customer-focused principles into GMP encourages organizations to consider regulatory compliance and the end-user’s needs and expectations. By aligning manufacturing processes with customer requirements, GMP can prioritize product quality, efficacy, and safety in a way that directly meets or exceeds customer expectations. It can lead to developing compliant products that resonate with end-users, enhancing customer satisfaction and loyalty.
Integrating ISO 9001 into GMP enhances operational efficiency, risk management, product quality, and customer satisfaction in regulated industries while providing a stronger compliance framework. This ensures GMP practices meet regulations and deliver quality products that meet customer needs.
Start implementing quality management practices with our ISO 9001 Documentation Toolkit, which provides step-by-step guidance for full ISO 9001 compliance. | <urn:uuid:87099ec8-58a7-4ad9-8f89-963c755d7866> | CC-MAIN-2024-38 | https://advisera.com/articles/iso-9001-and-gmp-what-are-the-differences/ | 2024-09-17T08:41:10Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651750.27/warc/CC-MAIN-20240917072424-20240917102424-00870.warc.gz | en | 0.932956 | 1,283 | 2.671875 | 3 |
What is Blockchain Verification & Validation?
What is Blockchain Verification & Validation?
Modern network infrastructure is turning towards decentralized models of record keeping. Authentication and identity management are no different.
What is blockchain verification? Blockchain verification uses private blockchain technology to store and verify identity credentials.
What Is a Blockchain?
A blockchain is a relatively new ledger and database technology invented to support decentralized information management. Initially conceived by Satoshi Nakamoto in their Bitcoin whitepaper, the blockchain serves as a solution to two specific problems–one, to avoid record duplication in the ledger, and two, to provide a decentralized mechanism for verification modeled on peer-to-peer transactions.
While the internal workings of a blockchain can get quite complex, the simple application of any blockchain is in contexts where users or organizations want decentralized records management. Users collectively provide resources to verify credentials and other information via built-in cryptographic standards. At the same time, they get to retain their information without relying on monolithic databases.
Several blockchain types serve this purpose, all of which fit into different contexts. These blockchains include:
- Public: Public blockchains are (obviously) public ledgers in which any user may participate in transaction verification and (if the blockchain supports it) information exchange. Centralized organizations do not maintain these blockchains, although an organization may participate as users. Common examples of public blockchains include most cryptocurrencies like Bitcoin or Ethereum.The advantages of public blockchains are that they are open and trustable because no one can alter records independently. Conversely, they tend to be inefficient in terms of energy consumption and performance, and they don’t scale as well as their private counterparts.
- Private: Private blockchains adopt the decentralized approach of the blockchain on a smaller scale, typically within an organization or application. It still relies on peer-to-peer transactions and decentralized data management, but there are often additional controls in place managed by a central authority.While not as free or transparent as their public counterparts, private blockchains are typically scalable, secure, and fast.
- Consortium: Consortium blockchains are a collection of blockchain systems owned by private interests used to streamline information sharing and workflows.
- Hybrid: A combination of public and private systems where an organization (or group of organizations) may segregate private blockchain data internally while sharing public data on a public blockchain system.
Additionally, two significant access categories may apply across different blockchain types:
- Permissionless: Permissionless blockchains allow any and all users to join and participate in the network without centralized control. Almost all public blockchains are permissionless, but it is possible to have permissioned, public systems.
- Permissioned: These blockchains are those where users must follow specific rules and regulations to participate. This participation is almost always predicated on the authority of a central organization or consortium.
These categories are not exclusive to a blockchain type, but more often than not, a public blockchain will be permissionless while private chains are permissioned.
How Does the Blockchain Support Identity Verification?
Traditionally, authentication and identity verification work through a series of applications and databases–a centralized store of user information and credentials.
This presents a few problems:
- Honeypots: Singular databases are known as “honeypots,” or attractive targets for hackers. If an identity database is compromised, then every user’s identity is threatened–including any connected information throughout that system.
- Ownership: Individual users do not own or manage databases… large companies do. As such, it’s increasingly difficult for users to disentangle their personal information from large companies that store it. Blockchain schemes allow users to manage their own information within a blockchain system without relying on a major company.
- Internet of Things (IoT) and Distributed Devices: The increasing adoption of smart devices and Bring-Your-Own-Device (BYOD) work models makes centralized authentication and verification challenges. A blockchain authentication model can help make these networks more scalable and secure.
A blockchain can address these issues through decentralized management and user-focused participation.
Benefits of Digital Identity and Blockchain Verification
Because blockchains address questions of security, distribution, and scalability, they bring significant benefits to organizations that adopt them.
Some of these benefits include:
- Self-Sovereignty: A major strength of the blockchain is that users own their own data on their devices. Rather than rely on large databases, the system can authenticate user credentials against those stored locally on a user’s device.We often lose sight of the importance of data ownership, and blockchain verification can go a long way in foregrounding self-sovereign identity management.
- Transparency: Blockchains are essentially transparent–that is, anyone participating on the chain has access to information relevant to the chain. Likewise, these records are under the user’s control, which means they know exactly what information is on the network and, if necessary, correct or remove it.
- Portability: Modern security standards emphasize data portability, or the capacity to move data from one location or system to another. With blockchain verification, it becomes much easier to move information between compliant systems without having to have, for example, multiple accounts or worry about data format and compatibility.
- Security: Additionally, such portability will strengthen the security around modern authentication approaches like federated identity management and Single Sign-On (SSO) schemes. Rather than having shared databases and complex APIs, blockchains could make moving between participating systems much easier–all while providing more control over what data is and is not exposed.
- Decentralized Key Management: A major issue in security and cryptography is key management, or the secure sharing of decryption keys so that users can keep their data obfuscated without impacting their usability or compromising overall security. A blockchain can provide a resilient form of key management that doesn’t present singular points of failure.
Relying on Secure, Decentralized Identity Verification with 1Kosmos
Blockchain technology is quickly becoming a staple of enterprise record keeping, which is very apparent in authentication and identity verification. Private blockchains are helping support companies manage distributed users worldwide in a scalable and safe way, putting ownership of private data back in the hands of end users.
With 1Kosmos, you get this blockchain verification technology as part of our feature set. These features include:
- Private and Permissioned Blockchain: 1Kosmos protects personally identifiable information in a private and permissioned blockchain, encrypts digital identities, and is only accessible by the user. The distributed properties ensure no databases to breach or honeypots for hackers to target.
- Identity-Based Authentication: We push biometrics and authentication into a new “who you are” paradigm. BlockID uses biometrics to identify individuals, not devices, through credential triangulation and identity verification.
- Cloud-Native Architecture: Flexible and scalable cloud architecture makes it simple to build applications using our standard API and SDK.
- Identity Proofing: BlockID verifies identity anywhere, anytime and on any device with over 99% accuracy.
- Privacy by Design: Embedding privacy into the design of our ecosystem is a core principle of 1Kosmos. We protect personally identifiable information in a distributed identity architecture, and the encrypted data is only accessible by the user.
- SIM Binding: The BlockID application uses SMS verification, identity proofing, and SIM card authentication to create solid, robust, and secure device authentication from any employee’s phone.
- Interoperability: BlockID can readily integrate with existing infrastructure through its 50+ out-of-the-box integrations or via API/SDK.
To learn more about private blockchain and identity management, sign up for our newsletter and read more about 1Kosmos Identity Proofing. | <urn:uuid:1ee7a5d3-3133-4136-b104-db9a0583406b> | CC-MAIN-2024-38 | https://www.1kosmos.com/blockchain/blockchain-verification/ | 2024-09-17T08:06:20Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651750.27/warc/CC-MAIN-20240917072424-20240917102424-00870.warc.gz | en | 0.915993 | 1,619 | 3.046875 | 3 |
Ethical Hacking is an authorized practice of bypassing system security to identify potential data breaches and threats in a network. The company that owns the system or network allows Cyber Security engineers to perform such activities in order to test the system’s defenses. Thus, unlike malicious hacking, this process is planned, approved, and more importantly, legal.
Ethical hackers aim to investigate the system or network for weak points that malicious hackers can exploit or destroy. They collect and analyze the information to figure out ways to strengthen the security of the system/network/applications. By doing so, they can improve the security footprint so that it can better withstand attacks or divert them.
Ethical hackers are hired by organizations to look into the vulnerabilities of their systems and networks and develop solutions to prevent data breaches. Consider it a high-tech permutation of the old saying “It takes a thief to catch a thief.”
They check for key vulnerabilities include but are not limited to:
• Injection attacks
• Changes in security settings
• Exposure of sensitive data
• Breach in authentication protocols
• Components used in the system or network that may be used as access points
Now, as you have the idea of what is ethical hacking, it’s time to learn the type of hackers.
Type of Hackers
The practice of guidedhacking is called “White Hat” hacking, and those who perform it are called White Hat hackers. In contrast to Ethical Hacking, “Black Hat” hacking describes practices involving security violations. The Black Hat hackers use illegal techniques to compromise the system or destroy information.
Unlike White Hat hackers, “Grey Hat” hackers don’t ask for permission before getting into your system. But Grey Hats are also different from Black Hats because they don’t perform hacking for any personal or third-party benefit. These hackers do not have any malicious intention and hack systems for fun or various other reasons, usually informing the owner about any threats they find. Grey Hat and Black Hat hacking are both illegal as they both constitute an unauthorized system breach, even though the intentions of both types of hackers differ.
White Hat vs Black Hat Hacker
The best way to differentiate between White Hat and Black Hat hackers is by taking a look at their motives. Black Hat hackers are motivated by malicious intent, manifested by personal gains, profit, or harassment; whereas White Hat hackers seek out and remedy vulnerabilities, so as to prevent Black Hats from taking advantage.
The other ways to draw a distinction between White Hat and Black Hat hackers include:
• Techniques Used
White Hat hackers duplicate the techniques and methods followed by malicious hackers in order to find out the system discrepancies, replicating all the latter’s steps to find out how a system attack occurred or may occur. If they find a weak point in the system or network, they report it immediately and fix the flaw.
Even though White Hat hacking follows the same techniques and methods as Black Hat hacking, only one is legally acceptable. Black Hat hackers break the law by penetrating systems without consent.
White Hat hackers are employed by organizations to penetrate their systems and detect security issues. Black hat hackers neither own the system nor work for someone who owns it.
After understanding what is ethical hacking, the types of ethical hackers, and knowing the difference between white-hat and black-hat hackers, let’s have a look at the ethical hacker roles and responsibilities.
Roles and Responsibilities of an Ethical Hacker
Ethical Hackers must follow certain guidelines in order to perform hacking legally. A good hacker knows his or her responsibility and adheres to all of the ethical guidelines. Here are the most important rules of Ethical Hacking:
• An ethical hacker must seek authorization from the organization that owns the system. Hackers should obtain complete approval before performing any security assessment on the system or network.
• Determine the scope of their assessment and make known their plan to the organization.
• Report any security breaches and vulnerabilities found in the system or network.
• Keep their discoveries confidential. As their purpose is to secure the system or network, ethical hackers should agree to and respect their non-disclosure agreement.
• Erase all traces of the hack after checking the system for any vulnerability. It prevents malicious hackers from entering the system through the identified loopholes.
Benefits of Ethical Hacking
Learning ethical hacking involves studying the mindset and techniques of black hat hackers and testers to learn how to identify and correct vulnerabilities within networks. Studying ethical hacking can be applied by security pros across industries and in a multitude of sectors. This sphere includes network defender, risk management, and quality assurance tester.
However, the most obvious benefit of learning ethical hacking is its potential to inform and improve and defend corporate networks. The primary threat to any organization’s security is a hacker: learning, understanding, and implementing how hackers operate can help network defenders prioritize potential risks and learn how to remediate them best. Additionally, getting ethical hacking training or certifications can benefit those who are seeking a new role in the security realm or those wanting to demonstrate skills and quality to their organization.
You understood what is ethical hacking, and the various roles and responsibilities of an ethical hacker, and you must be thinking about what skills you require to become an ethical hacker. | <urn:uuid:9bfabbe1-2a37-4d08-8b1e-a442450d85a1> | CC-MAIN-2024-38 | https://kalilinuxtutorials.com/ethical-hacking-a-new-evolution-in-the-digital-era/ | 2024-09-19T21:03:34Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652067.20/warc/CC-MAIN-20240919194038-20240919224038-00670.warc.gz | en | 0.929545 | 1,092 | 3.109375 | 3 |
Business Process Harmonization Definition
Process Harmonization is defined as the process of designing and executing business process commonality across different business units or locations in order to facilitate achievement of the targeted business objectives and benefits. Harmonization ensures a common and harmonious adoption of new processes by the different stakeholders of the business.
Though some scholars use the term process harmonization interchangeably with the term Process Standardization, some scholars specify a difference between the two terms lying in the degree of strictness of the accounting standards. Process Harmonization is said to involve a reduction in accounting variations between the different regions or business units, while Process Standardization is said to entail moving towards the eradication of any variation between different regions or business units.
The need for Process Harmonization is experienced corporations, especially large global ones composed of multiple business groups across a number of different countries and regions. These corporations develop gradually over a period of time, through organic and/or inorganic growth. When different business units evolve their own business processes, policies and practices with supporting IT systems, their business processes can turn out to be heterogeneous, complex, and non-standard. This makes it difficult for the organization to be flexible and agile to remain competitive in the global economy. This way, enterprises need to harmonize their processes across different regions and business units, due to the various constraints and business challenges faced by them.
mergers and acquisitions
providing a uniform customer experience
developing agile and flexible processes
reducing risks in outsourcing of processes
optimizing cost of IT operations, and more
While an increasing need for process harmonization is felt by enterprises, the focus on developing relevant approaches and methods is at a nascent stage. It is still in the domain of management consultants with the individual consultant approaching the problem based on previous experience and expertise. The term ‘Process Harmonization’ itself, is not clearly defined or understood by all due to its close relation to improvement and standardization processes. Clarification of these terminologies is a requirement to ensure uniform understanding and facilitate the development of solutions that are of high quality, with shorter time-to-benefit and reduced Total Cost of Operation.
White Paper By: EtQ
A system that integrates all food safety processes across the enterprise is the key to ensure a high level of compliance down the food chain. The Food and Beverage industry is increasingly adapting the Global Food Safety Initiative (GFSI) as its “stamp” of high quality and safety. In this white paper on “Creating Harmonized Food Safety Processes through...
White Paper By: BPM-D
Why has standardization and harmonization in Business Process Management become so much more important in this digital world? Measuring success of standardization and harmonization initiatives has a dramatic impact on BPM enablers. But complex market conditions combined with digitally empowered consumers present a challenging commercial environment. So, how can you achieve this? When...
White Paper By: Elastic Suite
Sales Process Management (SPM) is a key functionality that can lead to increased cross-selling and up-selling opportunities, ultimately contributing to enhanced sales growth revenue. Choosing the right solution for Sales Management Process can make a tangible and quantifiable impact on a company’s bottom line and become a key driver of company’s profitability. Elastic, a...
White Paper By: BPM-D
The execution of the strategy in the process of process management can be people or technology based – or a combination of both. Organizations need to master a systematic strategy execution and deal proactively with the opportunities and threats in this “digital world”. The Process of Process Management (PoPM) was developed to build and run a value-driven BPM-Discipline for...
White Paper By: BPM-D
Value driven business process management focuses systematically on creating business value. In many organizations, the key challenge is adapting to an ever-changing business environment in order to strive in this digital world. What is the business strategy to overcome this challenge and how to execute it? A key component of a Business Process Management is a structured value-driven...
White Paper By: Business Automation
How to Choose an ERP System? Selecting, installing and implementing a new ERP System may be the toughest business challenge you face because the results of the process are so critical to your business. There are many common methods people use when trying to select the ERP software. Some of them work well, others don’t. This whitepaper discusses some of these techniques... | <urn:uuid:c56ad278-e97d-45e3-a701-577b494d116e> | CC-MAIN-2024-38 | https://whatis.ciowhitepapersreview.com/definition/business-process-harmonization/ | 2024-09-07T18:33:36Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650898.24/warc/CC-MAIN-20240907162417-20240907192417-00870.warc.gz | en | 0.931599 | 927 | 2.53125 | 3 |
You just got a new drone and you want it to be super smart! Maybe it should detect whether workers are properly wearing their helmets or how big the cracks on a factory rooftop are.
In this blog post, we’ll look at the basic methods of object detection (Exhaustive Search, R-CNN, Fast R-CNN and Faster R-CNN) and try to understand the technical details of each model. The best part? We’ll do all of this without any formula, allowing readers with all levels of experience to follow along!
Finally, we will follow this post with a second one, where we will take a deeper dive into Single Shot Detector (SSD) networks and see how this can be deployed… on a drone.
Our First Steps Into Object Detection
Is It a Bird? Is It a Plane?— Image Classification
Object detection (or recognition) builds on image classification. Image classification is the task of — you guessed it — classifying an image (represented as a grid of pixels) into a class category. For a refresher on image classification, we refer the reader to this post.
Object recognition is the process of identifying and classifying objects inside an image, which looks something like this:
In order for the model to be able to learn the class and the position of the object in the image, the target has to be a five-dimensional label (class, x, y, width, height).
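To make this concrete, here is a minimal sketch of what such a target could look like in code. The field names and the coordinate convention are illustrative assumptions, since libraries differ on corner-based versus center-based encodings:

```python
# A hypothetical training target for one object in one image:
# the class label plus four numbers describing the bounding box.
target = {
    "class": "dog",   # category of the object
    "x": 48,          # x-coordinate of the box (here: top-left corner), in pixels
    "y": 240,         # y-coordinate of the box, in pixels
    "width": 160,     # box width in pixels
    "height": 120,    # box height in pixels
}
```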
The Inner Workings of Object Detection Methods
A Computationally Expensive Method: Exhaustive Search
The simplest object detection method is using an image classifier on various subparts of the image. Which ones, you might ask? Let’s consider each of them:
1. First, take the image on which you want to perform object detection.
2. Then, divide this image into different sections, or “regions”, as shown below:
3. Consider each region as an individual image.
4. Classify each image using a classic image classifier.
5. Finally, combine all the images with the predicted label for each region where one object has been detected.
One problem with this method is that objects can have different aspect ratios and spatial locations, which can lead to unnecessarily expensive computations of a large number of regions. It presents too big of a bottleneck in terms of computation time to be used for real-life problems.
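To see why, here is a minimal sketch of this sliding-window approach; the `classify` function is a placeholder assumption standing in for any pre-trained image classifier:

```python
def exhaustive_search(image, classify, window=(64, 64), stride=32):
    """Naive object detection: slide a fixed-size window over the image
    (an H x W x C NumPy array) and run a classifier on every crop.

    `classify` is assumed to return (label, score) for a single patch.
    """
    h, w = image.shape[:2]
    win_h, win_w = window
    detections = []
    for y in range(0, h - win_h + 1, stride):
        for x in range(0, w - win_w + 1, stride):
            patch = image[y:y + win_h, x:x + win_w]
            label, score = classify(patch)
            if label != "background":
                detections.append((label, score, (x, y, win_w, win_h)))
    return detections

# In practice you would repeat this for several window sizes and aspect
# ratios, which is exactly why the method becomes prohibitively slow.
```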
Region Proposal Methods and Selective Search
A more recent approach is to break down the problem into two tasks: detect the areas of interest first and then perform image classification to determine the category of each object.
The first step usually consists in applying region proposal methods. These methods output bounding boxes that are likely to contain objects of interest. If an object is captured by one of the region proposals, then the classifier should be able to detect it. That's why it's important for these methods to not only be fast, but also to have a very high recall.
These methods also use a clever architecture where part of the image preprocessing is the same for the object detection and for the classification tasks, making them faster than simply chaining two algorithms. One of the most frequently used region proposal methods is selective search:
Its first step is to apply image segmentation, as shown here:
From the image segmentation output, selective search will successively:
- Create bounding boxes from the segmented parts and add them to the list of region proposals.
- Combine several small adjacent segments to larger ones based on four types of similarity: color, texture, size, and shape.
- Go back to step one until a single region covers the entire image.
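If you want to try selective search yourself, OpenCV ships an implementation in its contrib module. A minimal sketch, assuming the `opencv-contrib-python` package is installed and an image file is available:

```python
import cv2

image = cv2.imread("input.jpg")

# Selective search as shipped in OpenCV's contrib module.
ss = cv2.ximgproc.segmentation.createSelectiveSearchSegmentation()
ss.setBaseImage(image)
ss.switchToSelectiveSearchFast()   # trade some recall for speed

rects = ss.process()               # array of (x, y, w, h) region proposals
print(f"{len(rects)} region proposals found")

# Keep only the first 2,000 proposals, as in the R-CNN paper.
proposals = rects[:2000]
```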
Now that we understand how selective search works, let’s introduce some of the most popular object detection algorithms that leverage it.
A First Object Detection Algorithm: R-CNN
Ross Girshick et al. proposed Region-CNN (R-CNN), which combines selective search and CNNs. Indeed, for each region proposal (2,000 in the paper), one forward propagation generates an output vector through a CNN. This vector is then fed to a one-vs-all classifier (i.e., one classifier per class: for instance, one classifier where labels = 1 if the image is a dog and 0 if not, a second one where labels = 1 if the image is a cat and 0 if not, etc.). SVM is the classification algorithm used by R-CNN.
But how do you label the region proposals? Of course, if it perfectly matches our ground truth we can label it as 1, and if a given object is not present at all, we can then label it 0 for this object. What if a part of an object is present in the image? Should we label the region as 0 or 1? To make sure we are training our classifier on regions that we can realistically have when predicting an image (and not only perfectly matching regions), we are going to look at the intersection over union (IoU) of the boxes predicted by the selective search and the ground truth:
The IoU is a metric represented by the area of overlap between the predicted and the ground truth boxes divided by their area of union. It rewards successful pixel detection and penalizes false positives in order to prevent algorithms from selecting the whole image.
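Since we promised no formulas, here is the same idea in code: a minimal IoU sketch for boxes given as corner coordinates (x_min, y_min, x_max, y_max):

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x_min, y_min, x_max, y_max)."""
    # Coordinates of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])

    # Intersection is zero when the boxes do not overlap.
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])

    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```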
Going back to our R-CNN method, if the IoU is lower than a given threshold (0.3), then the associated label would be 0.
After running the classifier on all region proposals, R-CNN proposes to refine the bounding box (bbox) using a class-specific bbox regressor. The bbox regressor can fine-tune the position of the bounding box boundaries. For example, if the selective search has detected a dog but only selected half of it, the bbox regressor, which is aware that dogs have four legs, will ensure that the whole body is selected.
Also thanks to the new bbox regressor prediction, we can discard overlapping proposals using non-maximum suppression (NMS). Here, the idea is to identify and delete overlapping boxes of the same object. NMS sorts the proposals per classification score for each class and computes the IoU of the predicted boxes with the highest probability score with all the other predicted boxes (of the same class). It then discards the proposals if the IoU is higher than a given threshold (e.g., 0.5). This step is then repeated for the next best probabilities.
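The greedy procedure described above fits in a few lines. A minimal single-class sketch, reusing the `iou` function from the previous snippet:

```python
def non_max_suppression(boxes, scores, iou_threshold=0.5):
    """Greedy NMS for one class: keep the highest-scoring box, drop every
    remaining box that overlaps it by more than `iou_threshold`, repeat.

    Assumes the `iou` helper defined in the previous sketch.
    """
    # Indices of boxes, sorted by classification score, best first.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        # Discard every remaining box that overlaps the kept one too much.
        order = [i for i in order if iou(boxes[best], boxes[i]) <= iou_threshold]
    return keep
```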
To sum up, R-CNN follows the following steps:
- Create region proposals with selective search (i.e., identify the parts of the image that are likely to contain an object).
- Run these regions through a pre-trained CNN and then an SVM to classify each sub-image.
- Run the positive predictions through a class-specific bounding-box regressor for better box accuracy.
- Apply an NMS when predicting to get rid of overlapping proposals.
There are, however, some issues with R-CNN:
- This method still needs to classify all the region proposals which can lead to computational bottlenecks — it’s not possible to use it for a real-time use case.
- No learning happens at the selective search stage, which can lead to bad region proposals for certain types of datasets.
A Marginal Improvement: Fast R-CNN
Fast R-CNN — as its name indicates — is faster than R-CNN. It is based on R-CNN with two differences:
- Instead of feeding the CNN every region proposal, you feed the CNN only once, taking the whole image to generate a convolutional feature map (the output of sliding learned filters over the image — you can find more info here). The region proposals are still identified with selective search, but they are then projected onto the feature map and reshaped to a fixed size by a Region of Interest pooling (RoI pooling) layer so they can be used as input to the fully connected layers.
- Fast R-CNN uses a softmax layer instead of SVMs to classify region proposals, which is faster and yields better accuracy.
Here is the architecture of the network:
As we can see in the figure below, Fast R-CNN is way faster at training and testing than R-CNN. However, a bottleneck still remains due to the selective search method.
How Fast Can R-CNN Get? — FASTER R-CNN
While Fast R-CNN was a lot faster than R-CNN, the bottleneck remains with selective search as it is very time consuming. Therefore, Shaoqing Ren et al. came up with Faster R-CNN to solve this and proposed to replace selective search by a very small convolutional network called Region Proposal Network (RPN) to find the regions of interest.
In a nutshell, RPN is a small network that directly finds region proposals.
One naive approach to this would be to create a deep learning model which outputs x_min, y_min, x_max, and y_max to get the bounding box for one region proposal (so 8,000 outputs if we want 2,000 regions). However, there are two fundamental problems:
- Images can have very different sizes and aspect ratios, so creating a model that correctly predicts raw coordinates can be tricky.
- There are some coordinate ordering constraints in our prediction (x_min < x_max, y_min < y_max).
To overcome this, we are going to use anchors:
Anchors are predefined boxes of different ratios and scales placed all over the image. For example, for a given central point, we usually start with three scales (e.g., 64px, 128px, 256px) and three different width/height ratios (1/1, 1/2, 2/1). In this example, we would end up with nine different boxes for a given pixel of the image (the center of our boxes).
So how many anchors would I have in total for one image?
It is paramount to understand that we are not going to create anchors on the raw image, but on the output feature map of the last convolutional layer. For instance, it would be wrong to count one anchor per pixel of a 1,000*600 input image, which would give 1,000*600*9 = 5,400,000 anchors. Since we create anchors on the feature map instead, there is a subsampling ratio to take into account (the factor by which the input dimensions are reduced at the output, due to the strides in our convolutional layers).
In our example, if we take this ratio to be 16 (as in VGG16), we would have nine anchors per spatial position of the feature map, so "only" around 20,000 anchors (roughly 5,400,000 / 16²). This means that two consecutive pixels in the output feature map correspond to two points that are 16 pixels apart in the input image. Note that this downsampling ratio is a tunable parameter of Faster R-CNN.
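Generating such an anchor grid is mechanical. The sketch below reproduces the numbers from this example — stride 16, three scales and three ratios — where a 1,000*600 input yields a feature map of roughly 62*37 cells:

```python
import numpy as np

def make_anchors(feat_h, feat_w, stride=16,
                 scales=(64, 128, 256), ratios=(0.5, 1.0, 2.0)):
    """All (x_min, y_min, x_max, y_max) anchors for a feat_h x feat_w feature map."""
    anchors = []
    for y in range(feat_h):
        for x in range(feat_w):
            # Center of this feature-map cell, expressed in input-image pixels.
            cx, cy = x * stride + stride // 2, y * stride + stride // 2
            for s in scales:
                for r in ratios:   # r = width / height
                    w, h = s * np.sqrt(r), s / np.sqrt(r)
                    anchors.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return np.array(anchors)

print(make_anchors(37, 62).shape[0])   # 37 * 62 * 9 = 20,646 -> "around 20,000" anchors
```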
The remaining question now is how to go from those 20,000 anchors to 2,000 region proposals (taking the same number of region proposals as before), which is the goal of our RPN.
How to Train the Region Proposal Network
To achieve this, we want our RPN to tell us whether a box contains an object or background, as well as the accurate coordinates of the object. The output predictions are the probability of being background, the probability of being foreground, and the deltas (Dx, Dy, Dw, Dh), which are the differences between the anchor and the final proposal.
- First, we remove the cross-boundary anchors (i.e. the anchors which are cut off by the border of the image) — this leaves us with around 6,000 anchors.
- We label an anchor positive if either of the following two conditions holds:
→ The anchor has the highest IoU with a ground truth box among all the other anchors.
→ The anchor has at least 0.7 of IoU with a ground truth box.
- We label an anchor negative if its IoU is less than 0.3 with all ground truth boxes (a code sketch of this labeling rule follows the list).
- We disregard all the remaining anchors.
- We train the binary classification and the bounding box regression adjustment.
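Expressed in code, the labeling rule looks like the sketch below, which reuses the iou helper from the R-CNN section (thresholds as described above; production implementations add further details such as mini-batch sampling):

```python
import numpy as np

def label_anchors(anchors, gt_boxes, pos_iou=0.7, neg_iou=0.3):
    """Returns one label per anchor: 1 = foreground, 0 = background, -1 = ignored."""
    # IoU of every anchor with every ground truth box: shape (num_anchors, num_gt).
    ious = np.array([[iou(a, g) for g in gt_boxes] for a in anchors])
    best = ious.max(axis=1)

    labels = np.full(len(anchors), -1)
    labels[best < neg_iou] = 0          # negative: low IoU with every ground truth box
    labels[best >= pos_iou] = 1         # positive: IoU of at least 0.7 with some box
    labels[ious.argmax(axis=0)] = 1     # positive: the best anchor for each ground truth box
    return labels
```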
Finally, a few remarks about the implementation:
- We want the number of positive and negative anchors to be balanced in our mini batch.
- We use a multi-task loss, which makes sense since we want to minimize both errors — the classification error of mistaking foreground for background (or vice versa), and the regression error on the box coordinates.
- We initialize the convolutional layer using weights from a pre-trained model.
How to Use the Region Proposal Network
- All the anchors (20,000) are scored, so we get new bounding boxes and the probability of being foreground (i.e., being an object) for all of them.
- Use non-maximum suppression (see the R-CNN section).
- Proposal selection: Finally, only the top N proposals sorted by score (with N=2,000, we are back to our 2,000 region proposals) are kept.
We finally have our 2,000 proposals like in the previous methods. Despite appearing more complex, this prediction step is way faster and more accurate than the previous methods.
The next step is to create a model similar to Fast R-CNN (i.e. RoI pooling, then a classifier + bbox regressor), using the RPN instead of selective search. However, we don't proceed exactly as before — taking the 2,000 proposals, cropping them, and passing them through a pre-trained base network. Instead, we reuse the existing convolutional feature map: one of the advantages of using an RPN as the proposal generator is that the CNN and its weights are shared between the RPN and the main detector network. Training then alternates between the two networks in four steps:
- The RPN is trained using a pre-trained network and then fine-tuned.
- The detector network is trained using a pre-trained network and then fine-tuned. Proposal regions from the RPN are used.
- The RPN is initialized using the weights from the second model and then fine-tuned — this is going to be our final RPN model.
- Finally, the detector network is fine-tuned (RPN weights are fixed). The CNN feature maps are going to be shared amongst the two networks (see next figure).
To sum up, Faster R-CNN is more accurate than the previous methods and about 10 times faster than Fast R-CNN, which is a big improvement and a start for real-time scoring.
Even still, region proposal detection models won’t be enough for an embedded system since these models are heavy and not fast enough for most real-time scoring cases — the last example is about five images per second.
In our next post, we will discuss faster methods like SSD and real use cases with image detection from drones. | <urn:uuid:f429dec8-9b39-4cd3-b306-f65e8ead7e3e> | CC-MAIN-2024-38 | https://resources.experfy.com/ai-ml/the-nuts-and-bolts-of-deep-learning-algorithms-for-object-detection/ | 2024-09-08T22:33:12Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651035.2/warc/CC-MAIN-20240908213138-20240909003138-00770.warc.gz | en | 0.919908 | 3,171 | 3.46875 | 3 |
Which statement about DH group is true?
A. The DH group provides data authentication.
B. The DH group provides data confidentiality.
C. The DH group is used to establish a shared key over an unsecured communication channel.
D. The DH group is negotiated in IPsec phase-2.

Correct answer: C.
The Diffie-Hellman (DH) algorithm is a public key cryptographic algorithm that allows two parties to establish a shared secret key over an unsecured communication channel. The DH algorithm is commonly used in key exchange protocols such as Internet Protocol Security (IPsec), Secure Sockets Layer (SSL), and Transport Layer Security (TLS).
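The arithmetic behind the exchange fits in a few lines. Below is a toy sketch with deliberately tiny numbers, for illustration only — real deployments rely on vetted crypto libraries and standardized groups of 2,048 bits or more:

```python
import secrets

# Public parameters, agreed in the clear (toy sizes for illustration only).
p = 23   # prime modulus
g = 5    # generator

a = secrets.randbelow(p - 2) + 1   # Alice's private key, kept secret
b = secrets.randbelow(p - 2) + 1   # Bob's private key, kept secret

A = pow(g, a, p)   # Alice transmits A over the unsecured channel
B = pow(g, b, p)   # Bob transmits B over the unsecured channel

# Each party combines its own private key with the other's public value.
secret_alice = pow(B, a, p)
secret_bob = pow(A, b, p)
assert secret_alice == secret_bob   # same shared secret, never sent on the wire
```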
Regarding the statements in the question, option C is true, and the rest are false. The following explanations support this statement:
A. The DH group provides confidentiality and integrity, but it does not provide data authentication. Data authentication is achieved through digital signatures or message authentication codes (MACs). However, the DH key exchange can be combined with digital signatures or MACs to provide data authentication.
B. The DH group does not provide data confidentiality by itself. It only provides a shared secret key that can be used to encrypt data with symmetric encryption algorithms such as Advanced Encryption Standard (AES) or Triple Data Encryption Standard (3DES). The confidentiality of data depends on the encryption algorithm and key size used.
C. The DH group is used to establish a shared key over an unsecured communication channel. The two parties generate their public and private keys and exchange their public keys. Using the DH algorithm, they derive a shared secret key that is known only to them. This key can be used for symmetric encryption, message authentication, or any other purpose that requires a shared secret key.
D. The DH group is not negotiated in IPsec phase-2. It is negotiated during IKE phase-1, which establishes a secure communication channel between the two IPsec peers. Within IKE, the DH exchange generates the shared secret, while peer authentication can rely on methods such as pre-shared keys, digital certificates, or Kerberos.
In conclusion, the DH group is used to establish a shared secret key over an unsecured communication channel. It does not provide data authentication or confidentiality by itself, but it can be combined with other cryptographic algorithms to achieve these goals. The DH group is negotiated in the IKE phase-1, not in IPsec phase-2. | <urn:uuid:c07de280-8549-4735-9ccd-1b59ab14e95b> | CC-MAIN-2024-38 | https://www.exam-answer.com/ccie-security-written-exam-dh-group-true-statement | 2024-09-10T03:36:09Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651196.36/warc/CC-MAIN-20240910025651-20240910055651-00670.warc.gz | en | 0.910818 | 476 | 3.28125 | 3 |
While there is a lot of hype, there is no question that quantum computers are going to revolutionize computing. But we are still in the early stages of exploring quantum development, and truly useful quantum systems are still years away. That does not mean that quantum lacks opportunities, however, and companies such as Dell and quantum startup IonQ are exploring the possibilities of hybrid systems that combine classical computer systems with quantum hardware.
IBM currently holds the record for the world’s largest superconducting quantum computer, with its Eagle processor announced last November packing in 127 quantum bits (qubits). But many experts believe that machines with many more qubits will be necessary in order to improve on the unreliability of current hardware.
“Superconducting gate speeds are very fast, but you’re going to need potentially 10,000 or 100,000 or even a million physical qubits to represent one logical qubit to do the necessary error correction because of low quality,” said Matt Keesan, IonQ’s vice president for product development.
Keesan, speaking at an HPC community event hosted by Dell, said that today’s quantum systems suffer greatly from noise, and so we are currently in the noisy intermediate-scale quantum (NISQ) computer era, unable yet to fully realize the power of quantum computers, because of that need for a lot more qubits to run fully fault tolerant quantum computers.
This NISQ era is projected to last for at least the next five years, until quantum systems have developed enough to be able to support qubits in the thousands.
In the meantime, researchers can still make advances by pairing current quantum systems with traditional classical computers, in a way that Keesan compares with adding a GPU to a server.
“It turns out the quantum computer by itself isn’t enough,” he declared. “Just like a GPU is more useful when paired with a regular CPU, the quantum processing unit or QPU is more useful today when paired with a classical computer.”
Keesan cited some examples of problems that are amenable to this treatment. One, the Variational Quantum Eigensolver (VQE) algorithm, is used to estimate the ground state energy of small molecules. Here, the optimiser runs on a classical computer while the evaluation of that output happens in the quantum computer, and they work together back and forth iteratively.
Another, the quantum approximate optimisation algorithm (QAOA) can find approximate solutions to combinatorial optimization problems by pairing a classical pre-processor with a quantum computer. Quantum circuits can also be used as machine learning models, with the quantum circuit parameters being updated by the classical computer system and evaluated using quantum methods.
More explanation of this is available on IonQ’s blog, but the trick with these hybrid applications apparently lies in finding the right control points that allow the quantum and classical portions of the algorithms to effectively interact. VQE does this by creating a single quantum circuit with certain parameterized components, then using the classical optimisation algorithm to vary these parameters until the desired outcome is reached.
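In code, this pattern is an ordinary optimization loop in which one function call is served by the QPU. The sketch below is deliberately framework-agnostic: run_on_qpu stands in for whatever vendor SDK call submits the parametrized circuit and returns a measured cost, and the SPSA-style update shown is one common gradient-free choice for noisy hardware:

```python
import numpy as np

def run_on_qpu(params):
    """Placeholder: submit the parametrized circuit, return the measured cost."""
    raise NotImplementedError   # replace with your vendor SDK's execution call

def spsa_step(params, lr=0.1, eps=0.05):
    """One gradient-free update: perturb all parameters at once, estimate the slope."""
    delta = np.random.choice([-1.0, 1.0], size=params.shape)
    slope = (run_on_qpu(params + eps * delta) - run_on_qpu(params - eps * delta)) / (2 * eps)
    return params - lr * slope * delta   # classical update of the circuit parameters

params = np.random.uniform(0, 2 * np.pi, size=8)   # the classical computer owns these
for _ in range(100):                               # each iteration is a CPU<->QPU round trip
    params = spsa_step(params)
```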
But this iterative process could easily be very slow — a VQE run might take weeks to execute round robin between a classical computer and a quantum computer, according to Keesan, unless the quantum and classical systems are somehow co-located. This is what Dell and IonQ have demonstrated, with an IonQ quantum system integrated with a Dell server cluster to run a hybrid workload.
This integration is perhaps easier with IonQ's quantum systems because of the pathway it has taken to developing its quantum technology. Whereas some in the quantum industry use superconductivity and need the qubits to be encased in a bulky specialised refrigeration unit, IonQ's approach works at room temperature. It uses trapped ions for its qubits, suspended in a vacuum and manipulated using laser beams, which enables its systems to be relatively compact.
“We have announced publicly, we’re driving towards fully rack-mounted systems. And it’s important to note that systems on the cloud today, at least in our case, are room temperature systems, where the isolation is happening in a vacuum chamber, about the size of a deck of cards,” Keesan explained.
Power requirements for IonQ’s quantum processors are also claimed to be relatively low, with a total consumption in kilowatts, “So it’s very conceivable to put it into a commercial datacentre, with room temperature technology like we’re using now,” Keesan added.
For organisations that might be wondering how to even get started in their quantum journey, Ken Durazzo, Dell’s vice president of technology research and innovation, shared what the company had learned from its quantum exploration.
One of the key ways Dell found to get started with quantum is by using simulated quantum systems — which Durazzo refers to as virtual QPUs or vQPUs — for hands-on experimentation, so that developers and engineers can become familiar with using quantum systems.
“Some of the key learnings that we identified there were, how do we skill or reskill or upskill people to quickly bridge the gap between the known and the unknown in terms of quantum? Quantum computation is dramatically different than the classical computation, and getting people with hands-on experience there is a bit of a hurdle. And that hands on experimentation helps get people over the hurdle pretty quickly,” Durazzo explained.
Also vital is identifying potential use cases, and Durazzo said that narrowing those down to smaller, action-oriented activities is key to really understanding where a user might find a benefit from quantum computation, and therefore where to place the biggest bets in terms of solving these types of issues.
Dell also decided that bringing into operation a hybrid classical-quantum system would best suit its purposes — one in which it would be possible to move workloads between virtual and physical QPUs to provide a simple path from experimentation to production.
“All of those learning activities enabled us to build a full stack suite of things that provided us the tools that allowed us to be able to integrate seamlessly with that hybrid classical quantum system,” Durazzo said.
In Dell’s view of a hybrid classical-quantum computer, the processing capabilities comprise both virtual QPU servers and real QPUs that deliver that quantum processing capability. This arrangement provides the user with the ability to simulate or run experiments on the virtual QPUs that will then allow them to identify where there may be opportunities or complex problems to be solved on the real QPU side.
“One area that we have focused on there is the ability to provide a seamless experience that allows you to develop an application inside of one framework, Qiskit for example, and run that in a virtual QPU or a real QPU just by modifying a flag, without having to modify the application, without having to change the parameters associated with the application,” Durazzo explained.
Sonika Johri, IonQ’s lead quantum applications researcher, gave a demonstration of a hybrid classical-quantum generative learning application. This was trained by sampling the output of a parametrized quantum circuit, which is run on a quantum computer, and updating the circuit parameters using a classical optimisation technique. It was run on both a quantum simulator – a virtual QPU – and a real quantum computer.
That example application was run using just four qubits, and Johri disclosed that the simulator is actually faster than the quantum computer at that level.
“But when you go from 4 to 40 qubits, the amount of time and the amount of memory the simulator needs will increase exponentially with the number of qubits, but for the quantum computer, it is only going to increase linearly. So at four qubits the simulator is faster than the quantum computer, but if you scale up that same example to say, 30 to 40 qubits, the quantum computer is going to be exponentially faster,” she explained.
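The arithmetic behind that claim is easy to check on the simulator side: a full statevector simulation has to store 2^n complex amplitudes, so memory doubles with every qubit added. A quick back-of-the-envelope sketch, assuming 16 bytes per complex amplitude:

```python
for n in (4, 30, 40):
    amplitudes = 2 ** n
    gib = amplitudes * 16 / 2**30      # complex128 = 16 bytes per amplitude
    print(f"{n:>2} qubits: {amplitudes:>16,} amplitudes = {gib:,.6g} GiB")
# 4 qubits fit in a few hundred bytes, 30 qubits need ~16 GiB,
# and 40 qubits need ~16,384 GiB (16 TiB) — the exponential wall Johri describes.
```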
Dell has also now begun to further adapt its hybrid classical-quantum computer by adding intelligent orchestration to automate some of the provisioning and management of the quantum hardware, and further optimize operations.
“We have taken that two steps further by adding machine learning into an intelligent orchestration function. And what the machine learning algorithms do is to identify the characteristics associated with the workload and then match the correct number of QPUs and the correct system, either virtual or real QPU, in order to get to the outcomes that you’re looking to get to a very specific point in time,” Durazzo said.
Quantum computer hardware will continue to evolve, and may even pick up pace as interest in the field (and investment) grows, but Dell’s Durazzo believes that the classical-quantum hybrid model it has developed is good for a few years yet.
“I think that diagram actually shows the future state for a very long time for quantum of a hybrid classical-quantum system, where the interactions are very tight, the interactions are very prescriptive in the world of quantum and classical for growth together into the future,” he said. “As we further grow those numbers of qubits, the classical infrastructure necessary to support this quantum computation will grow as well. So, there should be a very large increase overall in the system as we start becoming more capable of solving more complex problems inside the quantum space.” | <urn:uuid:a29b06da-abc3-4b7a-851e-454384d722d6> | CC-MAIN-2024-38 | https://www.nextplatform.com/2022/02/22/building-the-bridge-to-the-quantum-future-with-hybrid-systems/ | 2024-09-10T03:22:19Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651196.36/warc/CC-MAIN-20240910025651-20240910055651-00670.warc.gz | en | 0.941074 | 2,006 | 2.796875 | 3 |
Over the past two decades technology has advanced rapidly and fundamentally changed the way that businesses function. Whilst this has primarily been a positive experience for businesses, these advancements have also given rise to an increase in cybercrime. With the current prevalence of cybercrime, all organisations are currently at risk of falling victim to a cyberattack. Thankfully, many businesses are aware of the risk and starting to invest more time and money into protecting their data and systems.
If your business is looking into how to prevent a cyberattack or data breach, it is important to first understand the different types of security and their principles and differences. In this article we will discuss the definitions of information security and cyber security, the key principles of each and why they matter to your business.
What is information security?
Information security refers to the practices organisations implement to protect their business records, data and intellectual property. These practices ensure that both physical and digital data are protected from unauthorised access, deletion, corruption, unlawful use, or modification. The key information security principle is the CIA triad: a focus on the balanced protection of the confidentiality, integrity and availability of data.
What is cyber security?
Cyber security is a branch of information security covering the practices an organisation undertakes to reduce the risk of a cyberattack. These practices focus on technology to stop cybercriminals from accessing sensitive information, extorting money from users, or interrupting normal business procedures. Common cyber security practices include protecting networks and endpoints, and educating users on how to avoid an attack.
Key information security principles
The key information security principle is the CIA triad, this includes:
Confidentiality – Protecting confidentiality ensures that any sensitive information is not made available or disclosed to unauthorised individuals, entities or processes. Countermeasures that protect confidentiality include defining and enforcing access levels for information, as well as preventing password theft and device theft, and ensuring sensitive data is encrypted.
Integrity – Integrity in the CIA triad is focused on ensuring that information has not been modified, and can therefore be trusted to be correct and authentic. Integrity can be compromised by a cybercriminal causing a data breach and modifying data for malicious reasons, but also by human error or poor access policies and procedures. Countermeasures that protect integrity include digital signatures, hashing, physical and digital intrusion protection systems, and strong authentication methods, including multi-factor authentication.
Availability – For a business to function effectively, it is important that information is available whenever it is needed. This means that all networks, systems, and applications are working as intended to allow authorised users access to resources as required. The key risks to data availability include hardware failure, natural disasters, denial of service attacks and human error. Countermeasures that ensure data availability include backups, data redundancy, denial of service protection and a comprehensive disaster recovery plan.
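As a concrete illustration of one integrity countermeasure, the sketch below streams a file through SHA-256 and compares the digest with one recorded when the file was known to be good (Python standard library only; the file name and reference digest are placeholder values):

```python
import hashlib

def sha256_of(path, chunk_size=65536):
    """Stream the file through SHA-256 so large files don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Digest recorded at a time the file was trusted (placeholder value).
known_good = "replace-with-your-recorded-digest"

if sha256_of("quarterly-report.xlsx") != known_good:
    print("Integrity check failed: the file has been modified")
```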
Key Cyber Security Principles
Network security – Network security includes any measure taken to protect the usability, security and integrity of a network and its data. This includes hardware and software solutions designed to stop cybercriminals from accessing a network or spreading malware within a network. Some network security measures include firewalls, network-wide email security and anti-malware software, and authentication solutions.
Endpoint security – Whereas network security aims to protect a network as a whole, endpoint security aims to protect the individual end-user devices that connect to a network, however there is overlap between the two. These endpoint devices include desktops, laptops, servers, smartphones and IoT devices. Common endpoint security solutions include privileged access management, endpoint protection platforms, device anti-malware, application control and patch management.
User Education and Awareness – A significant factor in keeping businesses safe from a cyberattack is ensuring users of networks and systems have an awareness of common attack vectors. Some common attack vectors include phishing emails, compromised or weak credentials, malvertising and brute force attacks. If an organisation runs regular cyber security education and awareness training it enables employees to detect a potential attack or breach of procedure before it is too late.
Why information security and cyber security matter
In 2021, the greatest threat to all businesses, regardless of size or industry, is a cyberattack or data breach. As the methods cybercriminals are using become more complex and attacks more prevalent, if your business has not secured their network, systems, and information, now is the time to start taking security seriously. If you want to find out more about how to implement a comprehensive information security or cyber security solution within your organisation, get in touch today. | <urn:uuid:d2dde62a-c122-4a5e-92d0-6324266151bc> | CC-MAIN-2024-38 | https://cloudbusiness.com/tag/information-security/ | 2024-09-11T11:05:21Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651383.5/warc/CC-MAIN-20240911084051-20240911114051-00570.warc.gz | en | 0.929903 | 921 | 3.5 | 4 |
‘In the first half of 2020, the total number of global ransomware reports increased by 715% year-over-year.’ (Threat Landscape Report 2020 by Bitdefender)
In this guide to Ransomware – we take a look at what it is, how it works, and how to defend against it as best as possible.
Since ransomware has become by far the fastest growing type of cyber threat faced by businesses in recent years, we thought we’d take a closer look at this type of malware.
As an ever-evolving attack tool, the simplest form of ransomware can cost an organisation significant time and money, but more severe attacks can cripple or even destroy a company completely.
This is especially dangerous in these days of economic uncertainty, and since both individuals’ and businesses’ online-activity such as; cloud usage, online payments, online entertainment, working from home, has increased considerably.
The already significant threat of ransomware grew more sharply this year with the onset of the current coronavirus pandemic and transition by many organisations to remote working arrangements.
As a result, cybercriminals have sought to exploit the security vulnerabilities that coincide with working from home and are now capitalizing on the opportunity.
Ransomware cyber-attacks are a big business, to the point that research anticipates a business is attacked by a cyber-criminal every 11 seconds, and damage costs from these attacks will hit around $20 billion by 2021.
There is no easy win in the battle on cyber extortion, and the best way to deal with this threat is to firstly understand what ransomware is, how it works, and who it targets, and then look to the best lines of prevention and ultimately the best methods of mitigation should a breach occur.
What is Ransomware?
‘A type of malicious software designed to block access to a computer system until a sum of money is paid.’
A ransomware cyber-attack occurs when malicious software (malware) is used to deny a user or business access to a computer system or data. The malware attack takes over computer networks as the malware locks up the victim’s computer and renders it unusable by the victim until they pay the attacker the ransom (frequently in bitcoin).
The first known ransomware attack occurred in 1989 and targeted the healthcare industry, but ransomware has been constantly evolving, with more sophisticated strains on the increase. Over the last year, the number of new variants increased by 46%. Unprepared network users and businesses can quickly lose valuable data and money to these attacks.
Types of Ransomware
STOP/Djvu is the most reported ransomware family in Q1 2020. The prolific strain typically spreads through cracked software, key generators, and activators.
This year there have been a number of changes in the most commonly reported ransomware strains. Rapid, Rapid 2.0, Ryuk and Zeppelin fell out of the top 10 and have been replaced by Makop, Paymen45, LockBit, and GoGoogle.
How does Ransomware happen?
URLs embedded in emails remain the number one way for computers to become infected. Despite it being well known that email is the main infection method for all types of cyber-attacks, people still fall victim to malicious social engineering and, subsequently, infect whole systems.
In general, a lack of proper cybersecurity procedure (or a poorly implemented one) and a lack of training in basic cybersecurity practices — e.g. re-using weak passwords, poor access management and low user awareness — are the common causes of ransomware infection.
For example, many managed service providers (MSPs) report that Windows OS is targeted the most by ransomware attacks, as Windows-based computers are typically more affordable, meaning more people use them. Combined with the large number of users who don't install the necessary updates for their operating systems (leaving them without patches that protect against these viruses), this opens the doorways and makes them sitting targets for cyber-attackers.
This doesn't mean that macOS, Android, and iOS are immune, however — poor user practices can make any device vulnerable and potentially compromise a whole company and its systems.
Encrypting ransomware (or cryptoware) is the most common recent variety, however there are other types; non-encrypting ransomware or lock screens which restrict access to files and data, but does not encrypt them, leakware or extortionware that steals compromising or damaging data that the attackers then threaten to release if ransom is not paid, and mobile device ransomware which infects mobile phones through drive-by downloads or fake apps.
This guide to ransomware firstly gives the phases of an attack and then the steps to remediate impact.
5 Phases of a Ransomware Attack
There are 5 distinct phases of a ransomware attack, which can generally be executed in as little as 15 minutes:
1. Exploitation and infection
The pathway for the malicious Ransomware file to execute on a computer is often through a phishing email or an exploit kit (a specific kind of toolkit that takes advantage of security holes in software applications to be able to spread malware). Users running insecure or outdated software applications on their computers often fall foul of these kits.
2. Delivery and execution
Ransomware is then delivered to the system and persistence mechanisms are put in place. This process can take just a few seconds, depending on the network. Executables are most often delivered via an encrypted channel.
3. Backup defilement
The ransomware targets the backup files and folders on a system and removes them to stop any restoring from backup, this is intended to prevent any means that the victim has to recover from the attack without paying the ransom.
4. File Encryption
Once backups are completely removed, the malware performs an exchange to establish the encryption keys that will be used on the local system. Depending on the network speed, number of documents, and number of devices connected, the encryption process can take anywhere from a few minutes to a couple of hours.
5. User notification and removal
Following encryption, the demand instructions for payment are sent to the victim. The victim is usually given only a few days to pay before the ransom demand increases.
Finally, the malware removes itself from the system so as not to leave behind considerable forensic evidence that might help build better defences against the malware.
Who’s targeted with Ransomware?
Ransomware attacks have experienced a resurgence, and whereas individuals and small businesses were key targets in the early days, in more recent years large corporate businesses, governments, councils, public health departments, educational facilities and various other organisations have not been exempt as targets.
Recently, Microsoft announced it took down a major hacking network that had been used to spread ransomware, and the company said it could have been used to interfere with the US election indirectly by freezing access to voter rolls or websites displaying election results.
The US Elections are indeed a notable target at the moment. A ransomware attack could suddenly lock down important parts of the voting infrastructure all around the country. This could happen at county and state level to disable voting registers.
Concerns around ransomware’s disruptive potential spiked after Tyler Technologies, a major software vendor to many state and local governments, disclosed a ransomware attack affecting its systems recently. The company sells software that is used by some clients to display voting information on websites, it said in a statement, ‘but that software is hosted on Amazon servers, not its own, and it was not affected’. The attack targeted Tyler Technologies’ internal corporate network.
In general, however, the healthcare industry has by far been the main target for ransomware attacks.
‘The industry with the highest number of attacks by ransomware is the healthcare industry. Attacks will quadruple by 2020’. (CSO Online)
The arrival of COVID-19 has become an influential force in the threat landscape for not only the health care industry but businesses as a whole due to the increase in remote working – this sudden surge in working from home has helped cement remote desktop protocol (RDP) as the attack vector of choice for ransomware operators.
Many organizations have evidently failed to securely implement RDP in their rush to roll out work from home arrangements, which has left RDP connections vulnerable to compromise.
Consequences of a Ransomware attack
‘By 2021, experts predict the total damage from ransomware to reach $20 billion USD.’ (CyberSecurity Ventures)
There is now a greater than 1 in 10 chance of data being stolen in a ransomware attack and the average ransom payment has nearly doubled over the years, with this trend showing no signs of slowing down. Hackers also tend to duplicate successful attacks and hit victims over and over again.
Some hackers even corrupt and delete a company’s files while they await the ransom payment, just to show that they’re serious.
While a few thousand dollars may seem insignificant for larger businesses, it can be crippling for smaller businesses that cannot afford to lose their data.
Regardless of the cyber criminal’s ultimate actions, the actual cost of ransomware goes beyond just the pay-out. Not only the potential legal costs, data-loss and down-time financial consequences but there is the reputational cost to a business which can lose consumer trust and subsequently custom as a result of a breach.
To pay or not to pay
Non-paying victims run the risk — and generally fall foul — of their data being published on leak sites or sold off to the highest bidder.
Paying a ransomware demand is, however, generally discouraged. In the event an entity considers paying a ransom demand, it must accept the risk that the attacker may not return access to the data, or may even have released it onto the dark web already. And, as stated, there is no reason why a hacker may not simply try again, as the business is then seen as an easy, paying target.
Another concern especially for businesses when considering paying the hacker, is that on 1st October 2020, the US Department of the Treasury’s Office of Foreign Assets Control (OFAC) issued guidance cautioning companies of the potential risk of US sanctions for certain ransomware payments paid to parties designated as malicious cyber actors under OFAC’s cyber-related sanctions program.
The OFAC advisory clearly states that engaging in or facilitating ransomware payments may result in enforcement actions and civil penalties in the event the payee is a sanctioned party – even if the entity is unaware that the cyber-criminal is subject to US sanctions.
The cost of a ransomware attack can be extremely high—not just the cost of the ransom itself, but also with the costs associated to loss of business whilst the files and documents are unavailable.
James Carder, LogRhythm CISO and Vice President of LogRhythm Labs, advises organizations to prepare by getting a good cyber insurance policy that explicitly covers losses due to ransomware.
“If you have a loss of revenue due to a ransomware infection, you may be able to use your cyber insurance to make a claim to recover that revenue,” says Carder “From a pure risk management perspective, getting a really good cyber insurance policy is probably worth its weight in gold in situations like this.”
How can you protect yourself from a Ransomware attack?
The best defence against ransomware is for users not just to learn what ransomware is and what happens during an attack, but to know the organisation's cyber security status and the controls and processes that are in place, and to understand how to mitigate impact as best as possible should an attack happen. Individuals need to know their devices, what the risks are and where to go for advice and support. In the UK the NCSC provides excellent support, advice and information to individuals, small businesses and large organisations alike.
For businesses in the UK that aren’t sure where to start with cyber security, the National Cyber Security Centre (NCSC) provides cyber security guidance and support to individuals, families, businesses large or small. Working with the Information Assurance for Small and Medium Enterprises Consortium (IASME) they also provide the Cyber Essentials, the government-backed, industry supported scheme to help organisations protect themselves against common cyber attacks. Working towards this certification is an easy introduction and start for a business to build its cyber defences. Find out more about Cyber Essentials here.
Steps of defence you can take to keep an attack from shutting down your business
‘On average it takes around 23 days to resolve a ransomware attack.’ (Accenture)
Ransomware attacks are increasing in frequency and seriousness, so you need to prepare your organisation for the very real possibility of an attack.
Following is a brief overview of the incident response advice from both the SANS Institute and the National Institute of Standards and Technology (NIST); for a more in-depth look into the phases, click here. The key phases and actions for defence are:
1 Preparation
Preparation can be as simple as making sure you have a trained incident response team — in-house, contracted, or at least a business card so you know who to call. Key steps within preparation include:
- Pen-test and Patch (Find out more about pen-testing here)
- Create and Protect Your Backups
- Prepare a Response Plan
- Assign Least Privileges
- Connect with Industry and Threat Intelligence Sources
- Protect Your Endpoints
- Educate Employees and Users
- Consider Cyber Security Insurance
2 Detection and Identification
Should your business get hit with an attack, you can minimize the damage if you can detect the malware early, by including the following steps:
- Prime Your Defence Devices
- Screen Email for Malicious Links and Payloads
- Look for Signs of Encryption and Notification (a minimal detection sketch follows this list)
- Scope the incident
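As a toy illustration of "looking for signs of encryption", the sketch below flags a burst of recently modified files with unfamiliar extensions. This is a crude heuristic only — real endpoint detection products rely on far richer signals such as entropy analysis, canary files and behavioural telemetry — and the directory, window and threshold values are illustrative:

```python
import os
import time

KNOWN_EXTENSIONS = {".docx", ".xlsx", ".pdf", ".jpg", ".png", ".txt"}

def suspicious_burst(root, window_seconds=60, threshold=50):
    """True if many files gained unknown extensions within the last minute."""
    now, hits = time.time(), 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            ext = os.path.splitext(name)[1].lower()
            try:
                recent = now - os.path.getmtime(os.path.join(dirpath, name)) < window_seconds
            except OSError:
                continue   # file vanished mid-scan
            if recent and ext not in KNOWN_EXTENSIONS:
                hits += 1
    return hits >= threshold

if suspicious_burst(os.path.expanduser("~/Documents")):
    print("Possible ransomware activity: isolate this endpoint and alert the response team")
```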
3 Containment
Damaged systems need to be removed, devices isolated and compromised accounts locked down. A key step at this stage is to isolate the afflicted endpoint as quickly as possible.
4 Eradication
Once the ransomware incident is identified and contained, it needs to be removed from the network, and any damage discovered in the identification phase remediated.
Replace, rebuild or clean
It's generally recommended where possible that machines are replaced rather than cleaned, as a tool an attacker has put in place may not be detected in a clean-up. However, it can be more pertinent to clean certain locations. If so, it is imperative to monitor continually to prevent the attack from re-emerging.
5 Recovery
Having and following your disaster recovery plan is vital to get all affected systems up and running again and to quickly get back to business as usual.
A full investigation into the ransomware attack is also needed to establish which specific infection vector was used against the system. Knowing how the ransomware got onto the system in the first place helps to prepare and improve defence systems for the future.
6 Lessons Learnt
The last phase, but arguably the most important, is to learn from the incident to help prevent future incidents. Businesses can be too quick to delete, restore, and re-image at the first sign of an incident, before they've fully learned how the attacker got in or how much damage was really done. Without this stage, a business can easily find itself repeating the same steps again and again, against the same attack, with no improvement.
Ransomware isn’t going away any time soon, in fact it continues to grow. Quite simply, as long as it keeps working for attackers, so individual users and businesses will continue to be targeted. Cybercriminals will continue to take advantage of security weaknesses to deploy destructive ransomware attacks, as long as individuals and businesses fail to make cyber security a priority.
Both prevention (regular security audits, application testing and penetration testing etc.) and cure (incident plan and response management tool in place) are both as important as each other. Considering and implementing both, not just one are vital for an organisation’s cyber hygiene in the fight against cybercrime. Following a guide to ransomware or indeed advice on improving cyber posture and cyber protection as a whole should become a habit rather than a chore. | <urn:uuid:f328acc9-7395-473d-9dcb-f74937fd4a04> | CC-MAIN-2024-38 | https://www.logicallysecure.com/blog/a-guide-to-ransomware/ | 2024-09-12T15:12:24Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651460.54/warc/CC-MAIN-20240912142729-20240912172729-00470.warc.gz | en | 0.955211 | 3,322 | 3.140625 | 3 |
What Is an IT Operating Model, and Why Does “Everyone” Need One?
New and seasoned businesspeople know that a well-thought-out business model and strategy are two key ingredients needed to create a successful company.
However, something discussed less frequently is an operating or operational model. This outlines important details about a company, including those regarding employees, processes, systems and technology.
Operating Model Definition
An operational model is a visualization, or model, that illustrates the inner workings of an organization. It includes essential elements needed to operate a business, such as how a company sources its products, structures departments, and delivers value to its customers or clients. Operating models play a vital role in virtually every type of business.
People sometimes mix up the terms operational model and business model. Though they are similar, they serve different purposes and outline various things.
A business model describes how a company captures and offers value to its customers through products or services, while an operating model specifies how it delivers it. Simply put, a business model is the "what" of a company and an operating model is the "how".
Foundational Classifications for Today's Operating Models
How did today's business operational models evolve? Bruce Scott, a Harvard Business School professor, spent time researching corporate strategy and developed a model of the stages of corporate development. Soon after, Leonard Wrigley and Richard Rumelt followed Scott's path and created ways to classify business structures so they could compare the strategies of multiple companies.
Here are Wrigley and Rumelt's four classifications:
- Revenue comes from one activity
- Additional business complements the original activity
- Diversified firms combine unrelated businesses
- Conglomerates — unrelated businesses with no synergistic or complementary effects
These four elements were essentially the foundation for today's operational models but currently go by different names. They are:
- Holding company
Types of Operational Models
Developing and implementing the right operation model for a business is bedrock to its success. There are four primary types of operating models, including:
- Coordination (seamless access to shared data among business units in a company)
- Unification (business units are integrated based upon standardized, integrated processes)
- Diversification (different business units offering different products and services to different customers)
- Replication (business units function independently but operations are run in a highly standardized format)
Elements of Operating Models
No two organizations will have the same operational model but, to be effective, there are a handful of standard elements that should always be included. Those key components are:
- Organizational structure
These five basic areas are critical in developing a solid operational model, but organizations can still include additional core business functions.
One inherent benefit of an operational model is that it's a living document subject to change. Whether it's making adjustments in the company's structure or a set of process improvements, it's rare for an organization to maintain a single operating model throughout its life cycle.
Two Approaches to Operating Models
Organizations can follow one of two approaches to operating model development: role-based or process-based. As their names suggest, a role-based approach designs a model by looking at the hierarchy of roles in the organization. Process-based focuses on the journey of delivering value to the customer.
Additionally, the Service Operating Model Skills (SOMS) framework and IT operating models are things a company might need. According to Wikipedia, "SOMS is an operating model focused on the service sector. SOMS stipulates the expertise needed for people creating and working with operating models." While the SOMS framework is particularly useful for companies in the services sector, the IT operating model (discussed below) applies to any industry.
What is an IT Operating Model and Why Does Every Business Need One?
An IT operating model's main purpose is to help companies make wise IT investment decisions. However, it also informs employees and stakeholders about standard business processes and helps IT professionals design various technical and IT-related components.
Everyone needs an IT operational model because the cybersecurity landscape is evolving. New threats are emerging, and bad actors are becoming more sophisticated in their attacks. Some of the most common hacking threats are phishing, malware, distributed denial of service (DDoS) and social engineering.
Organizations with a clear IT operating model can see the inner workings of their departments and determine if any improvements are needed as cybersecurity threats become more prevalent.
Specific Elements of an Effective IT Operational Model
Because IT operational models are unique to an individual company's IT department, more specific elements must be included.
Here are four essential components of an IT operating model:
- Processes
- Governance
- Sourcing
- Structure

IT operating models are single, snapshot views of an IT department. They also act as interfaces between business and IT and outline standard functions and processes.
How to Create an IT Operating Model
Creating an IT operating model might seem daunting at first. However, companies should take two crucial steps to successfully build next-generation operating models. These are:
- Focus on putting in place building blocks to drive widespread change across the entire organization
- Select a transformation path that suits their unique situation
Ultimately, the goal is to implement an effective operating model to deliver value while also significantly reducing costs.
If you're looking to develop a suitable model for your business, you're in the right place. Here are the steps outlining how to create an IT operating model using the four components listed above.
The first step covers processes. Here, you should identify your IT processes, determine their objectives and create a set of outputs to help the department achieve its goals. Every company differs, so IT processes will vary: they might be formal or informal, and involve different types or numbers of stakeholders.
The next step is governance, which involves reviewing stakeholder needs, conditions and options that help create the IT department's objectives. It also gives employees direction so they can prioritize goals, improve decision-making and continuously monitor performance.
This step involves identifying the optimal sourcing models to create, deliver, and support technology products and services. The sourcing step breaks down into three broad categories, including insourcing, co-sourcing and outsourcing.
Sourcing is crucial for IT professionals, as it determines which processes are performed in-house or by a third party. It is well-known that an organization is only as resilient as the vendors it relies on — skipping this step could cause future issues within the IT department.
All employees need structure within the department, which is accomplished through IT modeling. The operational structure should include roles, authority, responsibilities and methods of communication. With a clear understanding of the structure, an IT operating model will help employees find their place in the department.
These components enable the IT department to deliver tech services to its customers in ways that align with the organization's strategy and match consumer needs.
Hiring a Consultant for Operational Modeling
Some companies will hire third-party consultants to assist them with developing a suitable operational model. For example, large corporations might face more challenges in the modeling process because of their size. However, a small startup might lack the expertise to create an effective operational model.
Various companies, such as McKinsey, Bain & Co. and Accenture, typically offer relevant consulting services. The right operating model will provide your company with an efficient, integrated system. Essentially, these models ensure your teams are aligned, competent, efficient, inspired and adaptable.
Final Word on the Operating Model
Operational models, IT-related or otherwise, can be a major boon to organizations. It can be challenging to keep up with trends in today's fast-paced environment. If you are a business owner or a C-suite executive, you know how quickly things change within your company, regardless of the industry it serves. Operating models allow you to stay abreast of these changes, make informed decisions and enable your team to perform at the top of its game. | <urn:uuid:91336c7c-e26b-40d0-bbc2-c6809508ee4f> | CC-MAIN-2024-38 | https://www.givainc.com/blog/operating-model-defined-elements-of-it-operating-models/ | 2024-09-16T08:22:29Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651682.69/warc/CC-MAIN-20240916080220-20240916110220-00170.warc.gz | en | 0.950007 | 1,621 | 3.3125 | 3 |
Mobile phone security: How to make sure your phone is secure
A smartphone is the most widely used electronic device in daily life for many of us. The days of mobile phones being mainly used to call someone or send a text message are long gone – now, they operate as portable computers, with a vast array of apps for everything from social networking to online banking. The extent to which we rely on our phones, plus the amount of data they contain, means that phone security is crucial.
As our reliance on mobile devices has increased, so too have mobile security threats. Read on to learn more about phone security and how to protect your phone.
Mobile phone security threats
Some of the main phone security threats include:
Malicious apps and websites
Mobile malware (i.e., malicious applications) and malicious websites can achieve the same aims – such as stealing data and encrypting data – on mobile phones as on traditional computers. Malicious apps come in different forms – the most common are trojans that perform ad and click scams.
Mobile ransomware is malware used to lock users out of their mobile device and demand a ransom payment, usually in cryptocurrency. The increased use of mobile devices for business has made ransomware a more common and damaging malware variant.
On desktop or laptop computers, most phishing attacks start with an email that includes a malicious link or an attachment containing malware. However, on mobile devices, emails account for only 15% of phishing attacks. Most mobile phishing attempts occur via SMS messaging, social media, or other applications.
Man-in-the-Middle (MitM) attacks
Man-in-the-Middle (MitM) attacks involve an attacker intercepting network communications so they can eavesdrop on or modify the data being transmitted. While this type of attack is possible on different systems, mobile devices are especially susceptible. Unlike web traffic that typically uses encrypted HTTPS for communication, SMS messages can be easily intercepted, and mobile applications may use unencrypted HTTP when potentially sensitive data is being transferred.
Jailbreaking and rooting
Jailbreaking and rooting refer to gaining administrator access to iOS and Android mobile devices. Mobile users may jailbreak or root their devices to delete unwanted default apps or install apps from untrusted app stores – but doing this carries risk. Increased permissions can enable attackers to access data and therefore cause damage.
Spyware can collect or use private data without your knowledge or approval. Data commonly targeted by spyware includes phone call history, text messages, user location, browser history, contact list, email, and private photos. Cybercriminals could use this stolen information for identity theft or financial fraud.
Are iPhones safer than Android phones?
A common question in phone security is whether iPhones are safer than Android. A critical difference between the two is that iOS is a closed operating system, whereas Android is used by various manufacturers. This means that Apple doesn't share its source code, reducing the chances of attackers finding vulnerabilities in its system. Because of this, many believe that iOS is a safer operating system. Regardless, there's no way to be completely safe, even if you own an Apple phone – so understanding phone security and how to protect your phone remains essential.
Bear in mind that older phones are less secure than newer ones. For example, earlier iPhones no longer receive security updates. If you're using an old smartphone, upgrading to a newer model will help increase your phone security.
Smartphone security tips
If you want to know how to protect your phone, essential smartphone security tips include:
Keep your phone locked
If your device is stolen, the thief could obtain access to your personal information. To prevent this, it’s important to have a lock on your screen. Whether this is a passcode, pattern, fingerprint, or face recognition depends on your preferences and your device’s capabilities.
You can usually specify how long the phone can be idle before locking when enabling a lock screen. Choose the shortest amount of time to increase your phone security. You are protected because the screen locks automatically even if you forget to lock it yourself. It will also conserve your battery because the screen goes dark after a set period of inactivity.
Setting this up is straightforward. For most Android devices, you can find instructions within Location & Security Settings. For iOS users, check within the General options of your settings.
Create a strong password for your phone and apps
Create a strong password for your smartphone. If a password attempt fails a certain number of times, the phone will lock, disable, and in some cases even erase all data. Surveys show that many business users don’t change the default passwords on their mobile devices or use multi-factor authentication. Weak passwords can place an entire organization at risk.
It’s also a good idea to set strong passwords for your apps – this will make it harder for a hacker to guess them. Using unique passwords for each app will ensure that the hacker won't have access to all your information across the board if one password is discovered.
Be wary of text messages
Text messages are an easy target for mobile malware, so avoid sending sensitive data such as credit card details or important private information by text. Equally, be cautious about text messages you receive.
Smishing (phishing via text) and vishing (voice phishing that takes place over the phone) are popular ways to target mobile phone users. A smishing victim may receive a text message that appears to be from a business, prompting them to call a number and disclose secure account information to address an issue with their account. If you receive emails or texts which appear to be from a business asking you to confirm or update account information, contact that business directly to confirm the request. Avoid tapping links in unsolicited emails or texts.
Check your browser for the lock symbol
The lock icon in the browser's address bar indicates that you are on a secure connection and that the website you are using has an up-to-date security certificate. Look out for this when entering personal data such as your address or payment information or sending emails from your mobile browser.
Ensure your apps are from reputable sources
Always download apps from official app stores. Google and Apple test every app before it is allowed into the Play Store or App Store, which means downloading an app from an official store is less risky than obtaining one from elsewhere. Cybercriminals create fake mobile apps that mimic trusted brands so they can obtain users' confidential information. To avoid this trap, read app reviews and check the developer's last update and contact information. These details should be available within the app information on the store. Deleting apps you no longer use or want is also good practice.
Keep your device’s OS up-to-date
From performance to security, mobile phone operating system updates are designed to improve your experience. To ensure a secure smartphone, it's essential to keep your mobile's operating system up to date. Operating system updates protect your device from newly discovered threats. You can check if your phone’s operating system is up to date by looking within About Phone or General and clicking on System Updates or Software Update (depending on your device).
Connect to secure Wi-Fi
Mobile devices allow us to access the internet wherever we go. Often, one of the first things we do when out and about is search for Wi-Fi. While free Wi-Fi can save on data, unsecured networks carry security risks. To maximize your safety while using public Wi-Fi, connect to a virtual private network or VPN. A VPN encrypts your data, protecting your location and keeping your information from prying eyes. Equally at home, make sure your home network is set up securely to maximize your safety.
Don’t jailbreak or root your phone
Jailbreaking or rooting your phone is the process of unlocking your phone and removing the safeguards that manufacturers have put in place so you can access anything you want. Users jailbreak or root their phones to access app stores other than the official ones, but this carries risk. The apps on illegitimate stores have not been vetted – which means they can spy on your phone and steal sensitive information.
Encrypt your data
Our smartphones hold a wealth of data. If your phone is lost or stolen, sensitive information like your emails, contacts, and financial information could be at risk. To protect your mobile phone data, you can encrypt it. Encrypted data is stored in an unreadable form so it can’t be understood. Most phones have encryption settings you can control via the security menu.
To check if your iOS device is encrypted:
- Go to the settings menu.
- Click on Touch ID & Passcode.
- You will be prompted to enter your lock screen code.
- Go to the bottom of the page – if your phone is encrypted, it should say “Data Protection is enabled.”
To encrypt an Android:
- First, make sure your device is at least 80% charged.
- If your phone is rooted, then unroot it before continuing.
- Then, go to Security and choose Encrypt Phone.
- Note that if you interrupt the encryption process, or if you don't charge and unroot your device first, you could lose all your data. Encryption can take an hour or more.
Enable remote wiping of your phone
If your phone is lost or stolen, you can remotely clear your personal data from its memory. Provided you have previously backed up your data to the cloud, you don’t have to worry about losing that data. Learn more about how to erase your iPhone remotely and erase your Android device remotely on Apple and Google’s support pages.
Log out of sites after you make a payment
If you use your smartphone for online shopping or online banking, log out of the relevant sites once your transactions are complete. Don’t store your usernames and passwords on your phone, and avoid sensitive transactions while using public Wi-Fi.
Turn off Wi-Fi and Bluetooth when not in use
When you keep Wi-Fi and Bluetooth active, hackers can see which networks you have connected to before, spoof them, and trick your phone into connecting to rogue Wi-Fi networks and Bluetooth devices that hackers carry around. Once connected to your phone, hackers can attack your device with malware, steal data, or spy on you, without you necessarily noticing. Therefore, it's good to turn off Wi-Fi and Bluetooth when you don't need them.
Use a good antivirus
Antivirus isn't just for laptops or desktop computers – it's also essential for mobile devices. A good mobile antivirus will protect your smartphone from viruses and hacking attempts. Kaspersky for Android provides 24/7 protection and includes a ‘Where is my device’ feature as well as spy app detection. | <urn:uuid:d70ba701-f90c-42eb-a4dc-5cbaa3c2ef45> | CC-MAIN-2024-38 | https://usa.kaspersky.com/resource-center/preemptive-safety/tips-for-mobile-security-smartphone | 2024-09-14T02:12:43Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651540.77/warc/CC-MAIN-20240913233654-20240914023654-00470.warc.gz | en | 0.914633 | 2,223 | 2.921875 | 3 |
Have you ever wondered what makes your web browsing experience smooth and efficient? One key player behind the scenes is the Keepalives mechanism in HTTP 1.1.
This article delves into the intricacies of Keepalives, shedding light on how this essential feature optimizes your internet interactions. We will explore its functionality within HTTP 1.1, demonstrate its role in the web page request-response cycle, and examine its impact on modern web browsing. Join us as we uncover the secrets of Keepalives and their pivotal role in the digital world.
In this article:
- Understanding Keepalives in HTTP
- Keepalives in Action
- Advantages of Using Keepalives
- Limitations and Considerations
- Minimizing Connections in HTTP/2
1. Understanding Keepalives in HTTP
Keepalives in HTTP 1.1 are a fundamental mechanism designed to enhance network efficiency. They allow a single TCP connection to be used for multiple HTTP requests and responses, rather than opening a new connection for each transaction. This reduces the overhead of establishing and closing connections: in HTTP/1.0, each request required a separate connection by default, whereas HTTP/1.1 makes persistent connections the default behavior. By maintaining a persistent connection, Keepalives minimize latency and resource consumption, leading to faster web page loading times and more efficient use of network resources.
2. Keepalives in Action
During a typical web page request and response cycle, Keepalives play a critical role. When you click on a link or type in a URL, your browser sends an HTTP request to the server. With Keepalives enabled, instead of closing the connection after receiving the response, the connection remains open for a predetermined period. This allows subsequent requests to the same server to reuse the existing connection. The result is a more seamless and quicker browsing experience, as the time-consuming process of establishing new connections for each request is avoided. Keepalives ensure that your interaction with websites is not just about fetching data, but doing so in the most efficient way possible.
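To see this reuse from code, here is a minimal sketch using Python's standard http.client module; the host and paths are placeholders:

```python
import http.client

# One TCP connection, reused for several HTTP/1.1 requests.
conn = http.client.HTTPSConnection("example.com", timeout=10)

for path in ("/", "/about"):
    conn.request("GET", path, headers={"Connection": "keep-alive"})
    resp = conn.getresponse()
    resp.read()  # drain the body so the connection can be reused
    print(path, resp.status, resp.getheader("Connection"))

conn.close()
```

Both requests travel over the same socket; without keepalive, the second request would need a fresh TCP handshake.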
When a Web browser that supports keepalives (such as Internet Explorer 4 and above) makes an HTTP GET request to a Web server that supports keepalives (such as IIS 4 and above), the Web browser includes a new "Connection: Keep-Alive" header in the list of HTTP headers that it sends to the Web server in the request.
The Web server responds by giving the client the file it requested (usually an HTML page or an image file). After the server sends the file to the client, instead of closing the TCP/IP socket it keeps the socket open for a period of time in case the client wants to download additional files.
A typical Web page might include a dozen images, and normally up to four sockets are kept open for transferring files between the client and the server.
Keepalives other usages
The term “keepalives” also refers to special packets used to keep a TCP connection open on a TCP/IP internetwork.
Keepalives do not work unless they are supported by both the Web browser and the Web server.
3. Advantages of Using Keepalives
Keepalives in HTTP 1.1 significantly enhance connection efficiency and resource management. By maintaining a persistent connection for multiple requests, they reduce the need for frequent handshakes associated with opening and closing connections. This results in less latency and quicker data transfer, leading to faster page load times. Efficient connection reuse also means reduced CPU and memory usage on both client and server sides, optimizing overall system performance. Furthermore, Keepalives reduce network congestion, making them beneficial for both high-traffic websites and users with limited bandwidth.
4. Limitations and Considerations
While beneficial, Keepalives have limitations. One major drawback is the potential for connections to remain open longer than necessary, consuming server resources. In high-traffic scenarios, this can lead to resource exhaustion, impacting server performance. Additionally, Keepalives may not be ideal in scenarios where connections are intermittent or when the server needs to handle a large number of short-lived connections. They also add complexity to server configuration and management, requiring careful tuning to balance performance with resource utilization.
5. Minimizing Connections in HTTP/2
HTTP/2 introduces more advanced connection management features, building upon the concept of Keepalives. It allows multiple requests and responses to be multiplexed over a single connection, significantly reducing the number of connections needed. This is achieved through its 'stream' concept, where numerous streams can coexist, each carrying a request-response pair, without blocking each other. While HTTP/2 enhances connection efficiency, Keepalives remain relevant, particularly in environments where legacy systems are in use or in specific server configurations that do not support HTTP/2. Their role in maintaining connection efficiency, especially in HTTP 1.1 environments, continues to be crucial.
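As a rough illustration with the third-party httpx library (installed with its HTTP/2 extra, pip install "httpx[h2]"; the URL is a placeholder), several requests can share one multiplexed connection:

```python
import httpx

with httpx.Client(http2=True) as client:
    responses = [client.get("https://example.com/") for _ in range(3)]
    for r in responses:
        print(r.http_version, r.status_code)  # "HTTP/2" when negotiated
```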
Many people are writing about innovation. Yet, the more I read, the more confusing the term becomes.
Some have said an innovation is an idea. Others have said an idea isn’t an innovation until it has been applied or implemented into a new product, service, or method. Hmmm, from my experience the hard part is coming up with the idea.
In an attempt at clarity, some have said that innovation is a new: product, service, design, method, technology, process, solution, experience, outcome, and/or trend. Yeah, that makes it clear!
Then there are others who have defined innovation as one of the following: adding value to a product or service; adding value to a company, finding new markets; moving toward the future; having a different viewpoint; or addressing challenges.
In my opinion this last one, addressing challenges, comes closest to a correct definition. So, let’s look at an actual dictionary definition. Here’s what Dictionary.com offers:
innovate [in-uh-veyt] verb (used without object), innovated, innovating.
From the Latin innovatus, past participle of innovare: to renew or alter.
* to introduce something new; make changes in anything established.
innovation [in-uh-vey-shuhn] noun.
* something new or different introduced.
* the act of innovating; introduction of new things or methods.
Maybe it’s just me, but I don’t care for this definition either, and that’s simply because I feel it should include a reason for wanting to introduce something new. Why are you introducing something new? What purpose does it serve? So, what could the reason be? If we look at the great scientific and technological advancements in history, the answer becomes obvious.
When we think of historical breakthrough innovation most people’s thoughts go to relatively recent inventions such as automobiles and airplanes, radio and television, telephones, and personal computers, all of which revolutionized the world, but even before the advent of electricity there were some amazing discoveries that radically improved human life. Ones we completely take for granted today.
• For example, take the invention of sanitation systems. The idea of separating fresh water from waste water (sewage) was so important for ending disease that it extended the average life span by decades. There have been two other major medical developments that massively curbed the spread of disease, extended the human life span, and advanced the knowledge of medicine: the discovery of disease-preventing vaccination and of disease-fighting penicillin.
• Do you wear glasses? If you’ve never needed glasses, then you can’t imagine how vulnerable a person is without the ability to see clearly. Most people will eventually need reading glasses, therefore nearly everyone will experience a degree of that vulnerability at some point in their life. The invention of wearable optical lenses in 13th Century Italy has given billions of people the safety and confidence of clear sight.
• A new plow design in the early 19th Century transformed farming. Early plows did little more than scratch grooves into the earth, and had changed little since Roman times. The invention of the moldboard plow not only cut a furrow, but lifted up the soil and turned it. It mulched the debris from the previous year’s crops, along with growing weeds, by flipping it upside down and creating a nutrient humus. This process extended the fertility of heavily used farmland. It was one of the key factors leading to the agricultural revolution that increased crop yields, provided better nutrition, and led to a surge in population growth.
• The discovery of cement as a construction material led to the building of nearly permanent weather-resistant structures some of which have lasted for millennia. The lowly nail enabled the average man to build safe homes to live in.
• Then there are the cumulative inventions of written language, paper, and the printing press. These developments made it possible to record, transport, and share knowledge. More than anything the ability to spread know-how has improved humanity.
• And, let’s not forget the wheel and axle, or the sailboat — both of which have enabled man to transport himself, his products, and his culture around the world.
So what is innovation? It’s simple: innovation is the solving of problems. | <urn:uuid:ac7e0f54-ab81-477f-8ce0-38f8d394401b> | CC-MAIN-2024-38 | https://www.isemag.com/columnist/article/14268200/innovation-is-about-one-thing-and-one-thing-only | 2024-09-20T03:38:33Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652130.6/warc/CC-MAIN-20240920022257-20240920052257-00870.warc.gz | en | 0.952983 | 911 | 2.75 | 3 |
There are different statements in Python coding. In this lesson, we will focus on some of them, the Python try and except statements. Finally, we will also learn the finally statement.
With the try statement, we can test whether a piece of code works or not. In other words, with the help of the try statement we learn whether our code will return an error.
On the other hand, the except statement defines what the code will do if there is an error in the try block.
Lastly, the finally statement lets us execute code regardless of the result of the try block.
You can also check the Python If Else Condition lesson.
Now, let’s start to see how to use Python try and except statements with examples.
As you know, we should first define a variable before we print it. If we try to print a variable without defining it, the code will return an error. Below, we will use the print function inside a try statement to test whether the code works, and we will use an except statement to do something if the code returns an error. Without the variable definition, it will return an error.
Firstly, let's try to print a defined variable with the help of the try statement. Here, we are testing whether this code works or not.
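The code was probably similar to the sketch below; the variable name a comes from later in the lesson, and the value 5 is an assumption (the output shown next reflects it):

```python
a = 5

try:
    print(a)
except:
    print("An error occurred!")
```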
The return of this code will be:
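```
5
```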
You can also check Python For Loop
Now, let's learn what will happen if we do not define the variable a.
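A sketch of the same code with the definition of a removed (the message is illustrative):

```python
try:
    print(a)
except:
    print("An error occurred: the variable is not defined!")
```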
The return of this code will be:
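```
An error occurred: the variable is not defined!
```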
We can also use the except statement multiple times. By doing this, we can specify the error type and run a specific job for each error type. Below is an example of this multiple-except usage.
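A sketch with multiple except blocks; the error types and messages are illustrative, and a is still undefined:

```python
try:
    print(a)
except NameError:
    print("Variable a is not defined!")
except TypeError:
    print("There is a type error!")
except:
    print("Something else went wrong!")
```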
The return of this command is:
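```
Variable a is not defined!
```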
If the code raises a different error type, execution will go to the matching except block instead.
Now, let's learn another statement that is used with the Python try and except statements: the finally statement. The finally statement executes its code whether the try block raises an error or not. In other words, the code we specify under finally runs after the try or except code, whatever the result.
Let's do an example to understand better how the finally statement works.
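A sketch of a try-except-finally block (a is undefined, so the except branch runs first; the messages are illustrative):

```python
try:
    print(a)
except:
    print("An error occurred!")
finally:
    print("The try-except block is finished.")
```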
The output of this python code will be:
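```
An error occurred!
The try-except block is finished.
```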
Now, let's see how to do something if there is no error in the code. Here, we will use the else statement to run code after the try block executes without an error.
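A sketch with an else block; a is defined again (value assumed), and the else message comes from the explanation below:

```python
a = 5

try:
    print(a)
except:
    print("An error occurred!")
else:
    print("All is Good!")
```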
The return of this code will be like below:
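```
5
All is Good!
```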
Here, we have defined the variable a, so the print function in the try statement works. After that, because there is no error, the else statement also runs and prints "All is Good!".
We can write code that raises an error when a certain condition occurs. For example, we can check the value of a variable and, if it is higher than a specific value, raise an error.
Below, we will check the value of a variable and, if it is higher than a limit, raise an exception with the raise statement.
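A sketch of this check; the value 50, the limit 10, the message, and the file name shown in the traceback below are all assumptions:

```python
a = 50

if a > 10:
    raise Exception(f"The value of a is higher than 10! a is: {a}")
```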
The output will be:
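```
Traceback (most recent call last):
  File "example.py", line 4, in <module>
    raise Exception(f"The value of a is higher than 10! a is: {a}")
Exception: The value of a is higher than 10! a is: 50
```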
As you can see, an exception is raised, and the string that we provided is printed as well.
With the raise statement, we can also define the type of error to raise. Above, we raised a generic Exception; this could also be a TypeError, SystemError, SyntaxError, and so on.
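For instance, a small sketch that raises a specific error type (the check and message are made up):

```python
x = "hello"

if not isinstance(x, int):
    raise TypeError("Only integers are allowed here!")
```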
Here, we have talked about the Python try and except statements, the finally statement, and the else statement. We have also learned how to use the raise statement to raise an error according to a condition.
February 12, 2016
The internet is an expanse of websites, videos, images, and information so vast that no one could possibly view it in its entirety. Every hour of every day there are thousands upon thousands of webpages written, published, and indexed on search engines. All of this is ready and waiting to be read, watched, or consumed by millions of internet users.
Even still, the internet has layers. Beneath the surface level of searchable websites, there is a subterranean, anonymous level where anyone can post anything they want. It’s ominously called the DarkNet.
The DarkNet, Deep Web, and Tor Hidden Services Defined
One of the most common problems in understanding the DarkNet is that there are actually three distinct concepts at play. The terms are often used interchangeably to describe the DarkNet, but in reality they are different aspects of a larger idea.
The Deep Web
The deep web refers to any webpage or web content that is not indexed by Google or other search engines. Therefore, in order to find a page on the deep web, one must click on a direct link or already know the desired URL. This could include a wide range of websites. From a “thank you page” after filling out a web form to the black market itself, all of this content is considered part of the deep web.
Within the deep web exists the DarkNet. These websites are not indexed by search engines and they are also only accessible by computers using special software to protect anonymity. The relation between the deep web and a DarkNet is much the same as that of a square and rectangle. All squares are rectangles, but not all rectangles are squares. In the same way, every webpage on the DarkNet is part of the deep web, but not every deep web page is on the DarkNet.
Tor Hidden Services
One of the most popular pieces of software available to access the DarkNet, Tor Hidden Services is an open source project to maintain the anonymity of users on the internet.
Security Disclaimer Regarding the DarkNet
Before we go any further, it is important to mention the security risks involved. The DarkNet is designed to protect the anonymity of the posters and users, however, that doesn’t mean it’s secure. All manner of hackers and viruses lurk on the cryptic and mysterious links of the DarkNet. Mindsight does not recommend anyone go there period. If set on exploring the DarkNet, conduct extensive research into security measures to protect yourself and your network. This is dangerous territory.
What’s on the DarkNet?
Consider the infamous comment boards on your favorite websites. Very quickly, conversations on those boards can devolve into nasty vitriol the likes of which you’d never utter in person. Now, those boards aren’t even anonymous. Imagine what type of content people might post with complete anonymity. The DarkNet is filled with the heroes and horrors of free and unregulated expression without the limitations or decency of the law.
It is worth noting that not everything is so sinister. There are anonymity advocates who simply enjoy that their identity is free from prying eyes, or citizens living in countries that restrict areas of their internet access. Content on the DarkNet is said to run the gambit between benign and depraved.
How Tor Hidden Services Protects the Anonymity of DarkNet Visitors
Tor Hidden Services were originally designed to protect the anonymity of users as they visited normal websites, and it still can be used for that today. A DarkNet is just an alternate, though now primary, way to use Tor Hidden Services.
Tor fittingly adopted an onion as its symbol, because the layers of an onion are a perfect representation of its encryption process. Here's how it works.
- In order to access the Mindsight website, a computer must send a data request to the server where our website is housed. Under normal circumstances, that server would be able to see the IP address of the initial computer. Tor puts a few measures in place to prevent this.
- The transmitted data is encrypted multiple times in nested layers and sent to multiple onion relays in the Tor network before finally shooting out of the network and to the desired website.
Onion Relays: These work just like proxy servers. They simply pass along the message to the next server in the route. However, what makes onion relays unique is that they only decrypt a small portion of the encrypted layers in the onion. No single onion relay ever truly knows what it is passing along. It only decrypts the location of the previous server and the one that comes next.
- Upon receiving the request, the website cannot discern the original source. All it can see is the location of the last relay in the Tor network. The information is then sent back through the onion relay route and back to the original computer.
- As an added security measure, these routes through the onion relays are active for only about ten minutes. After that time, Tor will dissolve the route and automatically create a new one in the network for the user.
While there are other security concerns involved and ways around this process, Tor Hidden Services apply these encryption and relay principles to maintain the anonymity of their users regardless of what content they are trying to see.
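The layering can be sketched in a few lines of Python using the third-party cryptography library. This is a conceptual toy only; real Tor uses its own circuit-level cryptography and key negotiation:

```python
from cryptography.fernet import Fernet

# Three relays, each holding its own key.
relay_keys = [Fernet.generate_key() for _ in range(3)]

# The sender wraps the message innermost-first, so the first relay
# peels the outermost layer.
message = b"GET /index.html"
for key in reversed(relay_keys):
    message = Fernet(key).encrypt(message)

# Each relay decrypts exactly one layer and forwards the rest.
for i, key in enumerate(relay_keys):
    message = Fernet(key).decrypt(message)
    print(f"relay {i + 1} forwards {len(message)} bytes")

print(message)  # b'GET /index.html' emerges only after the last layer
```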
More than $600 billion is lost to cyberattacks every year. Whether they know it or not, all businesses have digital weaknesses, and the only way to identify many deeply rooted vulnerabilities, other than an actual cyberattack, is to conduct a security assessment.
Data breach prevention tests take many forms. But below, we’ll break down the most important one: penetration testing.
What Is Penetration Testing?
Data breaches, malware, SQL injections, phishing, and other forms of digital crime are rising. Typically, internal security assessments test only for specific vulnerabilities, such as code integrity or cloud storage security. However, penetration tests are more expansive, discovering security weaknesses across the entire digital infrastructure.
In essence, penetration testing is a vulnerability assessment that simulates a real world attack. That’s right: We recommend that businesses invest in hacking themselves.
Penetration tests are a popular form of ethical hacking because they thoroughly analyze networks, IP addresses, mobile apps, servers, cloud storage services, and other potential points of entry through the lens of someone looking to commit real harm.
Here’s how it works: A team of faux attackers, known as a “Red Team,” seeks out vulnerabilities, just as any dedicated cyberattacker would. This idea might cause uneasiness. It is odd to think someone will know all of your business’s weaknesses. To assuage these fears, the third-party hackers sign legal documents stating exactly what the tests will include and guarantee they won’t hold onto any compromising information.
Once the test is complete, the assessors share their insights, and the business can create a plan of action to patch its security risks.
Different Types of Penetration Testing
Businesses constantly incorporate new forms of technology to improve their operations. These include mobile apps, cloud data services, and Internet of Things devices, all storing vital information, including medical and financial details. Often, businesses use hundreds of application programming interfaces (APIs), network tools, and devices to organize their systems and handle internal and external operations.
Due to this complexity, penetration testing takes multiple forms. Some tests are thorough; others are more specialized.
Black-box testing is the most complicated and lengthy penetration-testing method, but it’s also the most worthwhile.
Here, the penetration tester, or “pen tester,” scours a company’s entire digital attack surface from the outside in. Attack surface is shorthand for all possible avenues a real attacker could use to exploit your systems.
The testers start off completely blind to the business’s security measures and protocols, with no credentials, no road map, and no information concerning the digital footprint. Because of this, the process requires serious digging as they work to enter and exploit areas considered to be secure.
Ultimately, black-box testing is the most true-to-life way to understand your weaknesses. However, given its comprehensive nature, it can take months to complete the test, depending on the size of the attack surface. It’s also the most expensive, as it can’t be fully automated, and it should be done by an independent party without pre-existing knowledge of the business’s network.
Despite its costly and time-consuming nature, black-box testing is absolutely necessary to protect critical networks, mobile apps, APIs, and other key systems and data warehouses. Yes, it’s a long process, but it’s also the most powerful way to understand your weaknesses and reach a firm sense of digital security.
White-box testing, while similar in principle, is the opposite of black-box testing. Here, the pen tester receives full or specific documentation of the environment, meaning they have a guide to use to spot weaknesses. They also receive certain credentials to access things like source code and guidance on where to look within the digital architecture.
The purpose of white-box testing is to narrow the scope of research down to a few prized assets, which could include:
- Cloud storage centers
- Coding repositories
- Mobile app deployments
- Conditional loop functionality
White-box testing has appeal because it’s cheaper and takes less time to complete.
Finally, there’s gray-box testing, which combines the black and white methodologies. Here, the pen tester receives partial insight into the company’s security infrastructure. For example, a gray-box test may provide white-hat hackers with certain log-in credentials and have them run tests to simulate a scenario in which threat actors get a hold of them.
Gray-box testing is useful because it helps businesses theorize potential real-life outcomes for specific scenarios while reducing the cost and time consumption of black-box system penetration testing.
Benefits of Penetration Testing
Depending on the depth and complexity, pen testing may cost anywhere from $500 to $50,000. Especially for black-box tests, pen testing comes with a hefty price tag. But it’s often worth the cost, and we’ll show you why.
1. Security Posture Improvement
Realistic threat simulation is often the only way to discover deeply rooted vulnerabilities. Through these tests, you get actionable recommendations that are more than just speculative. After completing an assessment, the business can immediately devise a plan to tweak its security controls, practices, and habits to secure the areas with the greatest risk.
2. Avoiding Financial Damages and Preventing Downtime
Penetration-testing costs are steep. But they pale in comparison to the average cost of a data breach, which in 2022 was $4.35 million.
While penetration tests don’t guarantee attack prevention, they are the most holistic way to threat-test your existing network from every angle. This helps you avoid the potential cost of a ransomware attack and the lost revenue resulting from downtime or system failure.
3. Protecting Partnerships
A reputation for poor security is a surefire way to lose existing partnerships and spoil potential ones. Also, studies show that the costliest data breaches come via third-party hacking, in which hackers enter through the less secure networks of business partners, vendors, and suppliers.
Penetration tests evaluate businesses’ links to partners, helping avoid these types of hacks and creating a more secure partnership.
4. Preserving and Enhancing the Company’s Reputation
A single data breach can completely tarnish a company’s reputation. Remember the 2017 Equifax data breach? This breach left the credit reporting agency reeling for years. Now, the brand may be forever associated with a devastating security lapse, and don’t forget the $495 million settlement.
Penetration-testing services help secure your reputation, showing customers and potential clients that your business actually follows best practices and is willing to invest in securing sensitive data.
5. Regulatory Compliance
At this point, a handful of regulatory measures dictate that companies within specific industries must carry out penetration tests. These regulations include:
- HIPAA Evaluation Standard § 164.308(a)(8)
- Payment Card Industry Data Security Standard (PCI DSS)
- AICPA-developed SOC 2
- SEC Rule 17a-4(f) under the Securities Exchange Act (17 CFR § 240.17a-4(f)), enforced for broker-dealers by FINRA
As cyber threats increase, businesses should expect more penetration-testing mandates. To avoid infractions, a proactive approach ensures your business stays on the right side of these regulations and avoids noncompliance fines.
Invest in the Best Penetration-Testing Services
As cybercrime skyrockets and threats evolve, businesses can’t afford to put off penetration tests. Ideally, businesses should run these tests whenever they incorporate significant new infrastructure or make major changes to their environment.
However, finding a trusted, affordable penetration-testing service is challenging, and internal penetration testing is not ideal. This is why you should consider ISOutsource’s penetration-testing services for your next security exercise.
Our services are some of the most comprehensive and affordable on the market, offering continuous insights into how you can secure your most pertinent digital vulnerabilities. Visit our product page today and learn about how our penetration testing can help you avoid downtime, gain control over your vulnerabilities, and prevent the dreadful fallout of data breaches. | <urn:uuid:a2d091e4-b7c8-4f3c-8060-3dcc94d1360a> | CC-MAIN-2024-38 | https://www.isoutsource.com/benefits-of-penetration-testing/ | 2024-09-09T08:22:54Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651092.31/warc/CC-MAIN-20240909071529-20240909101529-00070.warc.gz | en | 0.925316 | 1,708 | 2.96875 | 3 |
By Robert Pease
Tunable lasers made their splash on the telecommunications scene a few years back, developed primarily for “sparing” in wavelength-division multiplexing (WDM) applications. The ability to tune the lasers to whatever bandwidth was required alleviated the need to stock lasers for each individual bandwidth. Since then, tunable lasers have been developed for use in bandwidth provisioning and, more recently, wavelength-routing applications geared toward an all-optical-network architecture.
Project HORNET (Hybrid Optoelectronic Ring NETwork), a research program being conducted by Stanford University under the direction of Prof. Leonid Kazovsky, is working on new technologies for using tunable lasers for future metropolitan area networks (MANs). The research is sponsored by Sprint Advanced Technology Laboratories in Burlingame, CA. The researchers hope to determine the feasibility and advantages of an optoelectronic packet-switching MAN.
"At the time we started the project in June 1998, everyone in the WDM world was trying to break capacity records," says Ian White, a Ph.D. student at Stanford's Optical Communications Research Laboratory. "We wanted to be different, so we tried to harvest new technologies for the MAN. Tunable lasers were just becoming commercial products at the time, so we decided to incorporate them into the network architecture and determine two things: if it was even feasible and if there were any advantages in doing so."
White explains that conventional MANs are designed around a traffic model in which all traffic comes into the MAN from the Internet backbone and simply needs to be distributed to the end users. In other words, a MAN is merely a distribution network. However, times are changing, and so is the traffic. Technological changes in Internet-protocol (IP) traffic, such as Napster and other applications designed to keep content distributed throughout the network, are leading to a lot of changes in the traffic patterns currently on MANs.
"We predict that there will be a lot of bursty, unpredictable, packet-based communications between access points on the ring network," says White. "Therefore, we see it necessary to give intelligence to the access points. Intelligence needs to be pushed out toward the end user just as content is being pushed out."
That added intelligence could come in the form of tunable lasers. By incorporating a tunable laser into the access points on the MAN, those access points are able to transmit packets directly to each other. In the conventional model, all traffic is hubbed at the center of the star network. HORNET gives the network a logical topology that looks more like a mesh, rather than the star topology typically used in conventional networks.
Benefits of the HORNET architecture are mainly for the network provider, says White. The traffic demands of the future could cause a lot of strain on the switching equipment at the hubs of conventional, static networks. With HORNET, that strain is relieved because the hub doesn't have to route traffic to the destination any more. The access point is now smart enough to perform that function. Also, since the traffic goes directly from the source to the destination, traffic is actually reduced.
Since point-to-point links are eliminated by HORNET's architecture, Syn chronous Optical Network (SONET) cannot be used. Although White concedes that SONET still has a place in point-to-point long-haul, high-capacity backbones for years to come, HORNET will eliminate it in the MAN, except for cases where there is a long-term connection between a source and a destination in the optical layer.
"Because we tune our transmitter on a packet-by-packet basis between packet transmission, we do not maintain a permanent connection between source and destination," explains White. "In fact, we don't maintain any connection at all. Therefore, all of the point-to-point protocols typically contained in the stack between IP and WDM must be eliminated in HORNET."
Research is still underway for providing the inherent capabilities of protection, restoration, and grooming that SONET so aptly provides. The researchers are continuing to investigate how to incorporate quality of service into the HORNET architecture. Currently, the main issue being faced is bit-level synchronization.
"We have a method for doing this," says White, "and it works. But we're trying to improve on it. It's one of the most difficult aspects of the project. The problem is that in HORNET, packets arrive asynchronously in the receiver. The receiver needs to know when to sample the 1s and 0s. SONET takes care of this. However, in our case, we need to do this bit-level synchronization on every incoming packet. Obviously, to maintain low overhead, it needs to happen nearly instantaneously. This is very difficult. Unfortunately, research has not been done in this area. We hope that changes-and we want to be a contributor."
Another important issue is survivability. Again, the Stanford team has a design on how to solve the problem, although it has not yet been demonstrated. As the project continues, more questions can be addressed and other issues will be researched.
"As you can see, this project could be a mother to many more research projects," says White. "We hope that happens and the rest of the research community attempts to tackle the problems we've come across, such as the bit-level synchronization problem and the Layer 2 and Layer 3 issues."
Successful research in the HORNET project could reap many advantages through the use of tunable lasers. A logical topology that resembles a mesh would reduce the amount of switching equipment necessary at the point-of-presence. Nodes would be more connected and traffic across the fiber would be reduced. Routing algorithms would become more flexible and possible paths between sources and destinations could be significantly increased. Additionally, the use of tunable lasers creates a more favorable and efficient environment for multicast and broadcast transmission than those currently available on conventional rings.
What has been achieved so far by the HORNET researchers? The major breakthrough, which White believes has already been partially achieved, is to see an increase in interest in bringing network functions-mainly packet switching-down to the WDM layer.
"People in the industry claim to be doing this, but they're only taking baby steps," says White. "We are hoping to see vigorous efforts to use optical packet switching or optoelectronic packet switching to provide intelligence to the WDM layer."
HORNET has already had several notable successes. For example, tuning the laser between packet transmissions from one ITU wavelength to its adjacent ITU wavelength was accomplished in 4 nsec using a transmitter based on a tunable laser from Altitun Inc. Faster lasers could bring the switching times down to picoseconds.
Project HORNET is innovative and unique, two critical elements we often associate with new optical startup companies. The final element is whether the architecture will work as designed. White and his colleagues at Stanford, with help from AT&T's laboratories, are continuing their research. The team has already reached one conclusion: the optoelectronic packet-switching MAN is definitely feasible and undeniably advantageous. | <urn:uuid:8f2672fd-1b85-4879-9c88-5dee9a79255b> | CC-MAIN-2024-38 | https://www.lightwaveonline.com/optical-tech/transmission/article/16649170/hornet-creates-a-new-buzz-in-the-use-of-tunable-laser-technology | 2024-09-09T07:52:31Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651092.31/warc/CC-MAIN-20240909071529-20240909101529-00070.warc.gz | en | 0.963281 | 1,481 | 3.046875 | 3 |
Every intelligent MSP technician has an understanding of the three-letter acronyms that enable networks to function and allow traffic to flow. Network address translation, commonly referred to as “NAT”, is one of them. Without network address translation, traffic would never be able to make it past the routing device. Here is a quick breakdown of what NAT is and why we need it, and an overview of NAT tools and the security issues that go along with it.
Network Address Translation Definition
Network address translation is the remapping of IP addresses, be it by single address or subnet, via routing devices. As IP addresses are remapped, or translated, they are effectively hidden behind another IP address. This translation happens at layer three, the network layer, of the OSI model.
The most common example of this is on a home or business network. Opening a command prompt and using the ‘ipconfig’ command returns the local IP address of the device, often something in the privately designated 192.168.1.0/24 subnet. From the same device, visiting a site such as Google and using the “What is my IP address” search query returns a public IP address; generally, the IP address is assigned to the public side of the gateway router.
Types of Network Address Translation
There are three different types of NAT: static, dynamic, and port address translation. Here is a breakdown of each of them.
- Static network address translation is where a single public IP address is directly mapped to a single private IP address. This can be used in examples of hosted distributed servers, such as web and FTP servers.
- Dynamic network address translation maps a group of public IP addresses to internal private IP addresses. This is similar to static NAT and is often used in larger corporate environments that may have use for multiple public IP addresses.
- Port address translation is when multiple private IP addresses are mapped to a single public IP address. Each private address must be configured to respond on an individual port for this to work properly.
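To make port address translation concrete, here is a toy Python sketch of the translation table; the addresses come from documentation ranges, and a real NAT device would also track protocol, timeouts, and return traffic:

```python
import itertools

PUBLIC_IP = "203.0.113.10"                 # example public address
next_port = itertools.count(40000)         # pool of public-side ports
nat_table = {}                             # (private_ip, port) -> public port

def translate_outbound(private_ip: str, private_port: int):
    """Map a private socket to the shared public IP plus a unique port."""
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = next(next_port)
    return PUBLIC_IP, nat_table[key]

print(translate_outbound("192.168.1.20", 51000))  # ('203.0.113.10', 40000)
print(translate_outbound("192.168.1.21", 51000))  # ('203.0.113.10', 40001)
print(translate_outbound("192.168.1.20", 51000))  # reuses 40000
```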
Further reading Guide to Subnets and IP Addressing
How NAT Helps Average Users
The most basic concept to understand about the power of network address translation is this: there is a finite number of public IPv4 addresses available for use - 4,294,967,296 (2^32), to be specific. If every PC on the internet were assigned an individual, public-facing IP address, they would run out pretty quickly.
Thanks to network address translation, we don’t need to worry about this. Rather than each internet-facing device having a public IP address, NAT allows gateway routing devices to be assigned one public-facing IP address which “represents” all of the devices behind it.
How NAT Helps Network Administrators
Network administrators can use network address translation to direct traffic. Networks that host servers that need to be publicly available, such as web and FTP servers, can make these easily accessible to the outside world, thanks to NAT. This can either be done simply via one-to-one static NAT or with security in mind via port address translation.
With port address translation, traffic direction can be set up with non-standard ports. This adds a new layer of security, making it harder for bad actors to find these servers that are being made accessible via NAT. While network security generally should be approached at multiple levels, this is a great way to deflect intrusion attempts from the front end of the network.
The most popular way to administer network address translation is through network routing devices. The simplest way to break this down is with three different class levels.
- Basic consumer class - Many times, it is an on/off option. This is most often found in internet service provider modems that offer routing capabilities.
- Small office/home office - These devices will offer one-to-one NAT as a standard feature. This can be used for effective traffic forwarding for those who need it.
- Enterprise-level - This level of routing device should offer every dimension of effective network address translation. This includes one-to-many NAT and port address translation.
As with any other networking protocol, every managed service provider technician should have security in mind when implementing and administering network address translation. Here is a breakdown of things to consider:
Man-in-the-Middle Attacks
The name is fairly self-explanatory: an intruder accesses the configuration and redirects traffic or retranslates addresses, all with the intent of disruption or some other evil aim.
Man-in-the-middle attacks are best prevented by following standard security measures. All network address translation devices should be protected with a strong password that is changed often and only accessible to the public from selected sources and over non-standard ports.
Out of Date Configurations
MSPs should have a quality assurance team available to make sure that all routing policies, including network address translation, are kept up to date and accurate. Furthermore, whenever changes are made to a NAT server, the technician involved should review to be sure that the changes that are being made don’t render other rules out of date.
Further reading Network Security Best Practices
Network address translation, when used appropriately, is a valuable resource to managed service providers. It can be used to direct traffic as needed and helps to conserve IP addresses in the public space. While there are different types of network address translation based on need, there are tools to use and security considerations to be made for each case.
Now that we’ve made network address translation easier to understand, this is a great time to do a little research to see how it can be better used to help your managed service provider and its clients today. | <urn:uuid:21201a41-dbaf-4270-8f8d-0a87974f85b5> | CC-MAIN-2024-38 | https://www.msp360.com/resources/blog/guide-to-network-address-translation/ | 2024-09-09T09:17:33Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651092.31/warc/CC-MAIN-20240909071529-20240909101529-00070.warc.gz | en | 0.939959 | 1,199 | 3.375 | 3 |
Definition: Federated Learning
Federated Learning (FL) is a machine learning paradigm where multiple decentralized devices or servers collaboratively train a shared model while keeping the data localized. This approach enhances data privacy and security by ensuring that raw data remains on local devices and only model updates are shared.
Understanding Federated Learning
Federated Learning revolutionizes traditional machine learning by decentralizing the learning process. Unlike conventional methods where data is aggregated into a central server for training, FL allows multiple devices to collaboratively train a model without sharing their raw data. This paradigm is particularly useful in scenarios where data privacy and security are paramount.
- Decentralized Data: Data remains on local devices, promoting privacy and security.
- Model Aggregation: Devices train models locally and share updates with a central server to update the global model.
- Privacy Preservation: By keeping data on local devices, FL minimizes the risk of data breaches.
- Scalability: FL can scale to thousands or millions of devices, enabling more comprehensive and inclusive models.
Benefits of Federated Learning
Enhanced Privacy and Security
One of the primary benefits of Federated Learning is its ability to maintain data privacy. Since data never leaves the local devices, the risk of data breaches is significantly reduced. This is particularly beneficial in sectors like healthcare and finance, where data sensitivity is paramount.
Reduced Latency
By keeping data on local devices, Federated Learning reduces the latency associated with transferring large datasets to a central server. This leads to faster model training and real-time updates, which are critical in applications such as autonomous vehicles and IoT devices.
Compliance with Regulations
Federated Learning aids in compliance with stringent data protection regulations such as GDPR and CCPA. By ensuring that data remains local and only aggregated model updates are shared, organizations can better adhere to legal requirements.
Cost Efficiency
Federated Learning reduces the need for extensive data storage and transfer infrastructure, leading to cost savings. Organizations can leverage existing devices for model training without incurring additional costs for data centralization.
Use Cases of Federated Learning
Healthcare
In healthcare, patient data privacy is crucial. Federated Learning allows healthcare providers to collaboratively train models on patient data from different hospitals without compromising privacy. This enables the development of more robust diagnostic tools and treatment plans.
Finance
Financial institutions can use Federated Learning to train models on sensitive transaction data across multiple branches or institutions. This collaborative approach enhances fraud detection systems while maintaining data privacy.
Smart Devices and IoT
Smart devices and IoT ecosystems generate vast amounts of data. Federated Learning enables these devices to train models locally, improving functionalities like predictive maintenance, user personalization, and real-time analytics.
Autonomous Vehicles
Autonomous vehicles require continuous learning from vast datasets to improve navigation and safety features. Federated Learning allows these vehicles to share model updates without transferring raw data, ensuring both efficiency and privacy.
Features of Federated Learning
Local Model Training
Federated Learning leverages the computational power of local devices to perform model training. This reduces dependency on centralized servers and enables continuous learning from decentralized data sources.
Aggregated Model Updates
Rather than sharing raw data, Federated Learning systems share model updates (gradients) with a central server. The server aggregates these updates to improve the global model, ensuring that the system benefits from collective learning while maintaining data privacy.
Adaptive Learning
Federated Learning systems can adapt to new data patterns quickly, as local devices continuously update the model with new data. This adaptive learning capability is essential for dynamic environments such as smart cities and healthcare monitoring systems.
Robustness to Heterogeneity
Federated Learning can handle heterogeneous data distributions across different devices. This robustness ensures that the global model is representative of diverse data sources, leading to more generalized and effective machine learning models.
How to Implement Federated Learning
Step 1: Define the Federated Learning Architecture
Choose a federated learning architecture that suits your application. Common architectures include centralized, decentralized, and hierarchical models.
- Centralized FL: A central server coordinates the training process and aggregates model updates.
- Decentralized FL: Devices communicate and share model updates without a central server.
- Hierarchical FL: Combines centralized and decentralized approaches, using intermediary nodes for aggregation.
Step 2: Select Appropriate Devices
Identify the devices that will participate in the federated learning process. These devices should have sufficient computational power and storage to handle local model training.
Step 3: Ensure Data Privacy
Implement privacy-preserving techniques such as differential privacy and secure multi-party computation to protect data during the training process.
Step 4: Local Training
Train the machine learning model on local devices using the available data. This process involves:
- Data preprocessing
- Model training
- Local evaluation
Step 5: Share Model Updates
Once local training is complete, devices share model updates (gradients) with the central server or directly with other devices in a decentralized setup.
Step 6: Aggregate and Update the Global Model
The central server or aggregation node collects model updates from participating devices and combines them to update the global model. This aggregation process can involve techniques like Federated Averaging.
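A minimal NumPy sketch of Federated Averaging; the model is a single weight vector, and the client data sizes and local "training" step are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
global_weights = np.zeros(4)

def local_update(weights: np.ndarray) -> np.ndarray:
    """Stand-in for local training: one fake gradient step."""
    return weights - 0.1 * rng.normal(size=weights.shape)

client_sizes = np.array([100, 300, 600])   # samples held by each client
client_models = [local_update(global_weights) for _ in client_sizes]

# Federated Averaging: weight each client's model by its share of the data.
shares = client_sizes / client_sizes.sum()
global_weights = sum(s * w for s, w in zip(shares, client_models))
print(global_weights)
```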
Step 7: Iterate
Repeat the local training and model aggregation steps iteratively until the model converges to an acceptable performance level.
Challenges in Federated Learning
Data Heterogeneity
Devices may have different data distributions, leading to challenges in model convergence. Techniques like personalized federated learning can address this by allowing models to adapt to local data distributions.
Communication Overhead
Sharing model updates can create significant communication overhead, especially in large-scale deployments. Techniques like federated dropout and compression algorithms can help reduce this overhead.
Security Risks
While Federated Learning enhances data privacy, it still faces security risks such as model inversion attacks. Implementing robust security protocols and continuous monitoring is essential to mitigate these risks.
Device Reliability
Participating devices may have varying levels of reliability and computational power. Ensuring consistent participation and performance across devices is crucial for the success of the federated learning process.
Future of Federated Learning
Federated Learning is poised to revolutionize various industries by enabling privacy-preserving, scalable, and efficient machine learning solutions. As the technology matures, we can expect advancements in areas such as:
- Personalized Federated Learning: Enhancing model personalization to better adapt to individual device data.
- Federated Learning Frameworks: Development of standardized frameworks and protocols to simplify implementation.
- Edge Computing Integration: Leveraging edge computing resources to enhance federated learning capabilities.
- Interoperability: Ensuring seamless integration of federated learning systems across different platforms and devices.
Frequently Asked Questions Related to Federated Learning
What is Federated Learning?
Federated Learning is a machine learning paradigm where multiple decentralized devices or servers collaboratively train a shared model while keeping the data localized. This approach enhances data privacy and security by ensuring that raw data remains on local devices and only model updates are shared.
How does Federated Learning enhance data privacy?
Federated Learning enhances data privacy by keeping data on local devices and only sharing model updates (gradients) with a central server. This minimizes the risk of data breaches, as raw data is never transferred or stored centrally.
What are the key benefits of Federated Learning?
The key benefits of Federated Learning include enhanced privacy and security, reduced latency, compliance with data protection regulations, and cost efficiency. By keeping data local and only sharing model updates, Federated Learning mitigates the risks associated with centralized data storage.
What are some use cases of Federated Learning?
Federated Learning has several use cases, including healthcare, finance, smart devices and IoT, and autonomous vehicles. It enables privacy-preserving collaborative model training across different organizations or devices, improving diagnostic tools, fraud detection, device functionalities, and navigation systems.
What challenges does Federated Learning face?
Federated Learning faces challenges such as data heterogeneity, communication overhead, security risks, and device reliability. Solutions like personalized federated learning, federated dropout, compression algorithms, and robust security protocols are essential to address these issues. | <urn:uuid:c88ca7f7-be44-4baa-9f73-69812c4e82a5> | CC-MAIN-2024-38 | https://www.ituonline.com/tech-definitions/what-is-federated-learning/ | 2024-09-12T21:43:20Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651498.46/warc/CC-MAIN-20240912210501-20240913000501-00670.warc.gz | en | 0.894455 | 1,661 | 3.109375 | 3 |
The site, long home to steel manufacturing, is expected to land billions in investment from California-based tech company PsiQuantum, which is working to build the first commercially viable quantum computer.
The university will use a grant from the National Science Foundation to build a fabrication lab that will apply quantum discoveries to manufacture quantum computers, clocks, optical networks and other technologies.
Officials on Friday announced the deployment of the first IBM Quantum System One computer on a university campus, at Rensselaer Polytechnic Institute in upstate New York. It’s aimed at driving quantum research and education programming.
While the widespread use of quantum computers across industries for a variety of applications appears to be years away, some universities are beginning to beef up education and research to prepare for the future.
DeepMind, Google's leading artificial intelligence research company, has recently developed a new AI system called AlphaGeometry. The system has demonstrated the ability to solve geometry problems at a level comparable to the world's brightest high school mathletes. This development suggests a promising path toward artificial general intelligence: combining neural networks with symbolic deduction engines allows a system both to detect patterns and to reason rigorously about them.
AlphaGeometry aims to emulate human cognition by integrating two types of intelligence. According to Nobel Prize-winning psychologist Daniel Kahneman in his book “Thinking Fast and Slow,” the human mind has two systems of thinking – System 1, which is fast and intuitive, and System 2, which is slower and more deliberate. Neural networks act as System 1 by quickly proposing constructive ideas, while symbolic deduction engines resemble System 2 by methodically assessing the validity of these ideas.
System 1 Thinking
Fast, Automatic and Effortless: System 1 operates automatically and quickly, with little effort and no sense of voluntary control. It includes automatic reactions and quick judgments we make without deliberate analytical effort. For example, when you pull your hand back from a hot stove, recognize a friend’s face, or understand simple sentences, you use System 1 thinking.
Intuitive and Emotional: System 1 is often driven by emotions and instincts. It can generate powerful feelings and impressions that influence our decisions and judgments without conscious awareness.
Error-Prone: While System 1 is efficient for handling routine tasks and making rapid decisions, it’s prone to biases and errors. It often relies on heuristics (mental shortcuts) that can lead to systematic mistakes in complex situations.
System 2 Thinking
Slow, Effortful and Deliberate: System 2 requires attention and mental effort. It’s invoked when we engage in complex computations, focus on a challenging task or deliberately choose between multiple options. System 2 thinking is responsible for solving a math problem, making a budget or planning a vacation.
Analytical and Logical: This system is characterized by its ability to analyze and apply logic to a problem. It’s more conscious and rational, capable of reasoning through complex situations, evaluating evidence, and making judgments based on facts and analysis.
Lazy Controller: Despite its capabilities, System 2 is often described as “lazy” because it requires significant mental energy. Our brains tend to conserve energy by defaulting to System 1 whenever possible. System 2 will only engage when necessary, such as when a task cannot be solved by the fast, automatic responses of System 1 or when a situation explicitly demands focused attention.
Understanding how System 1 and System 2 thinking interact can help improve decision-making and problem-solving in various areas of life. System 1 tends to make errors when operating on its own, while System 2 is slow and deliberate. Working together they can be very effective – System 1 provides inspiration and direction to System 2, while System 2 applies rigorous checks to ground System 1. AlphaGeometry takes advantage of this collaboration by using a neuro-symbolic architecture, where the predictive pattern recognition of a neural language model feeds into the deliberate deductive reasoning of a logic-based system.
AlphaGeometry uses a neuro-symbolic approach that enables it to solve complex, olympiad-level geometry problems. This approach combines a neural language model that acts as the intuition, suggesting new points, lines or circles to add to the diagram as necessary, with a symbolic deduction engine that reasons step-by-step from these starting points, uncovering layers of mathematical relationships until the solution is found. This innovative approach allows AlphaGeometry to create new geometric constructs when necessary and solve the problem at hand.
On its own, neither the language model nor the symbolic engine is sufficient for discovering and verifying mathematical knowledge. Used together, however, they can successfully uncover and validate mathematical knowledge – a promising approach to advancing AI.
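The interplay is easiest to see as a generate-and-verify loop. The toy Python sketch below is purely illustrative and bears no relation to DeepMind's actual implementation: a cheap, fallible proposer stands in for the neural language model (System 1), and an exhaustive forward-chaining deducer stands in for the symbolic engine (System 2). All facts and rules are invented.

```python
import random

# Invented rule base: each rule derives a conclusion from a set of premises.
RULES = [({"A", "B"}, "C"), ({"C"}, "D"), ({"D", "E"}, "GOAL")]

def deduce(facts):
    """System 2 stand-in: methodically forward-chain until nothing new follows."""
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

def propose(facts):
    """System 1 stand-in: a fast, fallible guess at a helpful new construct."""
    return random.choice(["A", "B", "E", "X", "Y"])

facts = set()
while "GOAL" not in facts:
    facts.add(propose(facts))  # intuition suggests something new
    deduce(facts)              # deduction grinds out the consequences
print(sorted(facts))
```

The useless proposals ("X", "Y") are the point: the proposer may be wrong, because the deducer only ever accepts what it can verify.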
Many experts in artificial intelligence believe that developing artificial general intelligence (AGI) is still an unresolved problem. This new DeepMind breakthrough is both fascinating and genuinely novel. Although the approach has so far been applied only to geometry, imagine a deep learning exercise based on Kahneman's famous hypothetical question about Julie's GPA. The question is as follows:
Julie is currently a senior in a state university. She read fluently when she was four years old. What is her grade point average (GPA)?
Suppose we look at how a neural network, for example a large language model (LLM), might interpret this question. It first interprets the text of the question, identifying key elements such as "Julie," "senior in a state university," and "read fluently when she was four years old." This step involves parsing the text to understand its semantic content. It then contextualizes the information within its trained model, drawing on general knowledge about educational achievements, early reading abilities, and their potential implications for academic performance. The model identifies and extracts relevant information that might inform an answer. If available in the trained model, this could include general trends about early readers and their academic achievements, or more direct information about GPA outcomes.
Here’s how GPT-4 answered the question:
Without specific details about Julie’s coursework, major or academic performance in college, it’s impossible to accurately guess her GPA. However, if we were to make a broad, speculative assumption based on the correlation between early literacy and academic success, we might assume that Julie could have a GPA in the higher range, potentially around 3.5 to 4.0 on a 4.0 scale, which is often considered excellent in many academic institutions.
This speculation is based on the premise that early reading fluency is an indicator of good learning capabilities and might correlate with higher academic achievement. Nonetheless, it’s important to remember that this is a very general assumption and real-world outcomes can vary widely due to the multitude of factors affecting academic performance.
It is important to note that the way a large language model (LLM) like GPT-4 (Generative Pre-trained Transformer) answers such a question is essentially System 1 thinking. This approach lacks what Kahneman calls "shared factors," such as family support, academic interest and the many other factors influencing Julie's GPA. In his book, Kahneman proposes an alternative formula for predicting Julie's GPA that takes some shared factors into account.
(p. 190, Thinking Fast and Slow) Recall that the correlation between two measures—in the present case reading age and GPA—is equal to the proportion of shared factors among their determinants. What is your best guess about that proportion? My most optimistic guess is about 30%. Assuming this estimate, we have all we need to produce an unbiased prediction. Here are the directions for how to get there in four simple steps:
1. Start with an estimate of average GPA.
2. Determine the GPA that matches your impression of the evidence.
3. Estimate the correlation between reading precocity and GPA.
4. If the correlation is .30, move 30% of the distance from the average to the matching GPA.
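Kahneman's recipe is a one-line computation. The sketch below works an example with made-up numbers (an assumed average GPA of 3.0 and an intuitive "matching" GPA of 3.8 for a precocious reader); only the 30% correlation comes from the passage above.

```python
def regressed_prediction(average, matching, correlation):
    """Move a `correlation` fraction of the distance from the baseline
    average toward the intuitively matching estimate (steps 1-4 above)."""
    return average + correlation * (matching - average)

average_gpa = 3.0    # step 1: baseline average (assumed value)
matching_gpa = 3.8   # step 2: the GPA the evidence "feels like" (assumed)
correlation = 0.30   # step 3: Kahneman's optimistic guess

print(regressed_prediction(average_gpa, matching_gpa, correlation))  # 3.24
```

The prediction (3.24) sits well below the intuitive 3.8, pulled back toward the average by everything the single clue about early reading does not capture.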
Kahneman calls this corrective pattern "regression to the mean" – the tendency for extreme cases to even out and move closer to the average. "Or one might say that it's closer to the truth." The result comes from System 2 thinking, which involves more deliberate and effortful decision-making. AlphaGeometry's neuro-symbolic architecture is an innovative approach to addressing some forms of AI bias, in which one system generates quick and intuitive ideas while the other engages in more rational and deliberate decision-making.
Further research in this area has the potential to produce exciting results.
An audit is any examination designed to identify problems or areas for improvement. The phases of auditing usually consist of: planning and preparing for the audit, execution of the audit plan, reporting the audit results and closing out of corrective actions. The purpose of audits is to detect problems in an organization or its systems early, before they become too severe. Auditing is also a tool for continuous improvement, which is the goal of any well-meaning individual, business or organization.
There are three types of audits:
Auditing may also be specifically defined as an independent and objective examination of the final accounts of a business. In the case of financial audits, it is for the purpose of determining whether the balance sheet and profit and loss accounts present fairly the financial position of the business and results of operations.
The biggest auditing firms in the world are called The Big 4 Auditors.
This group was earlier known as the “Big Eight”, and was reduced to the “Big Five” by a series of mergers. The Big Five became the Big Four after a fifth large auditor, Arthur Andersen, collapsed in the wake of the Enron scandal in 2002.
Under orders from Congress, the Government Accountability Office (GAO) surveyed large companies to determine whether having fewer auditors affected the market. The report, published in 2003, found that most large U.S. companies will not even consider hiring an auditor from outside the ranks of the Big 4, but most said they would prefer having more than four.
The Big 4 audits 98 percent of U.S. companies with annual revenues over $1 billion.
The following are their revenues for fiscal year 2008:
PricewaterhouseCoopers – $29.2bn revenue
Deloitte Touche Tohmatsu – $27.4bn revenue
Ernst & Young – $24.5bn revenue
KPMG – $22.7bn revenue
None of the Big Four accounting firms stand alone—each is a network of firms that is owned and managed independently. Each of these firms entered into agreements with other member firms in the network to share a common name, brand and standards of quality. Each network has established an entity to coordinate the activities of the firms.
In most cases, each member firm operates in a single country, and is structured to comply with the regulatory environment in that country.
In this section we will discuss:
Databases, whether in the traditional relational form, or the increasingly common NoSQL/NewSQL variants, are the near-universal storage mechanism underpinning the handling of data within modern dynamic web application platforms. Typically providing methods supporting the remote creation, deletion and updating of data across the network, they offer an attractive target for remote, network-based attackers. Despite a long history of database design and development stretching back over fifty years, common patterns of security weaknesses continue to be seen in database design, deployment, configuration, and maintenance. Breaches of database security that impact data confidentiality have the potential to deliver to attackers data sets that include highly sensitive or personal data. They can also enable attackers to impact the integrity of the stored data, erasing or modifying it as they desire in order to deliver personal or financial gain – either by directly modifying it in a malicious manner, or increasingly by encrypting and then ransoming back the data to its owner in a ransomware attack.
In this blog post we take a look at the wider context of database security by briefly surveying the various scenarios surrounding database configuration, deployment and maintenance that can lead to security weaknesses. We then take a look at what measures organisations can take to harden their database systems to better resist attacks or exploits by adversaries.
Data breach statistics drawn from industry survey, such as the annual Verizon Data Breach Report, repeatedly show that insider threats from sources such as reckless or malicious employees continue to be an ongoing threat to organisational security. Robust cybersecurity does therefore require a holistic approach consisting of a broad range of controls including internal administrative procedures and policies, such as employee background checks. However, in this article we will be focusing largely on what measures an organisation can take at the predominantly technical level to harden database systems against attack, from sources both internal and external.
It is also possible (and advisable) for a database security programme to incorporate elements that deliver increased security assurance without preventing attacks: these include detective controls designed to detect when an attack is underway (such as Intrusion Detection Systems or IDS, and logging/monitoring and SIEM solutions); as well as corrective controls that can be applied in order to recover from a security incident. However, the focus within this article will be specifically on preventative controls – those designed to prevent an attack from being successfully conducted in the first place, via hardening the database system.
The concept of hardening (or "target hardening", to give it its full name) originates in the military and security services, where it refers to strengthening the security of a building or other physical installation. It often includes measures such as modifications to the building itself (such as upgraded doors and windows) as well as environmental alterations such as removing bushes or other ground cover that could offer hiding places or a screened approach to the installation, and adding or improving gates, fences, or other barriers.
The idea behind the concept of hardening can serve many purposes, including:
Although the techniques and materials are different in modern security environments, the same approach to hardening physical installations is in fact very common in the realm of physical security when designing modern real-world compute facilities such as data centres. The same principles were used for centuries in building castles with strong walls, and surrounding structures such as moats, ditches, fences, and walls are all adopted still, albeit with different materials and technologies.
However, in this article we are not going to look at physical security and how hardening applies to data centres, but rather how analogous techniques can be used at the software level to improve the security of database systems against electronic forms of attack, typically performed across the network. This form of hardening is sometimes also variously described as security auditing or compliance testing.
Various factors combine to make database systems highly appealing targets for criminals and other attackers: the compromise of a database system or server can often be lucrative to criminals in numerous ways, whether by permitting the extortion of money via ransomware, or more directly via the exfiltration (theft) of valuable data such as credit card numbers.
The theft of data at scale from organisations has become so commonplace that the term data breach has entered popular culture; such breaches can often be attributed to a failure to maintain the confidentiality of data stored within a database system. In the last decade or so, the number of data breaches has risen almost exponentially. In addition to the significant damage that these incidents do to a company's reputation, there are direct costs including forensic investigations, loss of customers, and financial penalties under regulations including the General Data Protection Regulation (GDPR).
Effective database security is therefore key for remaining compliant, protecting an organisation’s reputation, and – in the worst-case scenario – ensuring that a business remains solvent and operable following a significant data breach and the surrounding fallout.
Database security encompasses all of the tools, processes, and methodologies which establish security inside a database environment, and holistic database security programs are designed to protect not only the data within the database, but also the data management system itself, and every application that accesses it, from misuse, damage, and intrusion. Database hardening in particular applies to the largely technical measures used to make a database system or ecosystem more directly resistant to exploit or attack.
The challenge for database hardening is the dichotomy between securing a database and making it useful: if a database system is designed for ease of access, then almost unavoidably it becomes less secure since its attack surface generally increases; but if it is made watertight (by for example disconnecting it from the network entirely), then it becomes impossible to use. Hardening is about striking a balance that delivers the maximal security that is practicable whilst ensuring that the database is usable and useful for its given purpose.
Hardening is often described in the context of computer systems as the reduction of the attack surface of the system in question. The attack surface of a system or network is normally defined as the sum of the different points ("attack vectors") via which the system is exposed to attack. It can alternatively be viewed as the set of actions that can be performed on a system remotely. This may include several components, from externally facing services to internal components such as an associated database or host operating system.
Intuitively, the more actions available to a user, or the more components accessible through these actions, the more exposed the attack surface. The more exposed the attack surface, the more likely the system could be successfully attacked, and hence the more insecure it is. If we can reduce the attack surface to each component, we can decrease the likelihood of attack and make a system more secure. Before looking at what measures can be taken to harden a database, it is important to understand the attack surface of databases, and the kind of exploits that they are vulnerable to.
The threat vectors to databases are generally very well understood, although some (such as the threat of SQL injection) receive significantly more attention (and therefore awareness) than others (such as the inherent risks in stored procedures). The aim with the list below is to provide a brief summary of the main threat vectors only within the context of database hardening, so that hardening measures can be placed in context. We will not be doing a “deep dive” into any one of them within this article:
As with any other local or network service, a database has an intended user or service base that are the set of clients intended and authorised to interact with the database – either as end users (connecting directly to the database) or via authorised applications and web applications that utilise the database. In the majority of cases, this authorised list of clients are located on fixed, static source IP addresses, so it is not necessary to expose the database across the network to anything other than these sources. Failing to restrict the network sources that can connect to a database simply serves to drastically increase that attack surface of the database and makes the database system massively more vulnerable to exploit by a remote attacker.
In addition to the database services themselves (those that perform or execute the functions of the Data Query (DQL) and Data Manipulation (DML) languages to perform CRUD (Create, Read, Update, Delete) operations, databases will typically offer some form of metaservice – services designed to allow the modification of metadata (data about the database) such as the schema, views, triggers and stored procedures – as well as additional functions such as auditing, storage and indexing constraints and configuration. This metadata is typically stored in a discrete set of tables and views often referred to as a system catalogue or data dictionary. Just as with the database services for end users, this metadata is often exposed and available as a network service too, for administrative purposes. It is as important, or perhaps even more important, to carefully review any exposed administrative interfaces and programming interfaces (APIs) that permit access to these meta-services in the same way as with the database services themselves. It may be possible to disable network access to the meta-services entirely, or else to restrict them to a known-trusted set of administrative workstation IP addresses via a network whitelist.
Where an attacker does not have direct connectivity to a database (as above), they are likely to still have an attack vector via a mediating or intermediate application layer service (such as a web application) that acts as a proxy between themselves and the database. Although more complex to execute, an attacker can in many instances still perform an attack through and via this mediating application layer.
Perhaps the most commonly exploited weakness in database security in fact is that of SQL Injection, an attack vector that involves inserting malicious code in SQL statements via input data from the client to the intermediary application. It is possible for an attacker to “inject” code into their HTTP request to the web application, which in turn relays the malicious code into the SQL query it performs against the database on their behalf. The injected code, typically performed by escaping the data context via the use of special characters, subverts the intended SQL to be executed and either alters, expands or replaces it. If an attacker is able to have the server invoke their own SQL query in place of the intended one, they can instruct the server to execute a query to return all other customer data, or to delete all data from the database.
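The canonical defence is to keep user input in the data context via parameterised queries. The minimal sketch below uses Python's built-in sqlite3 module; the table, column and payload are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, card TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '4111-1111-1111-1111')")

user_input = "x' OR '1'='1"  # a classic injection payload

# VULNERABLE: user input is concatenated straight into the SQL text,
# so the payload escapes the string literal and rewrites the query.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()
print(len(rows))  # 1 -- the attacker sees every row in the table

# SAFER: a parameterised query keeps the input in the data context;
# the driver never allows it to alter the SQL structure.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(len(rows))  # 0 -- no user is literally named "x' OR '1'='1"
```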
Databases typically do not exist in isolation but as part of a broader data management solution. In many cases, the majority of an organisation’s data security efforts can be focused upon securing the primary database server, and less attention given to additional repositories that may provide either partial or full replicas of the master data set. If these secondary data sources are neglected from a security perspective, or situated in a less heavily protected resource sphere, then they can offer a wider (and less hardened) attack surface for attackers to target. Examples can include backup data sets and servers, replicated data sets, data exports to test or staging environments, manual data extracts, data warehousing and reporting tools, and connecting interfaces, feeds, and batch processing services.
Mixed mode authentication refers to database configurations in which users (and administrators) are able to authenticate to a database using more than one method or authentication system. It is typically contrasted with the use of a centrally managed Single Sign On (SSO) solution, but this is not a necessity – single mode authentication can exist using the local database authentication engine only, for example.
The issue with mixed mode authentication is that it introduces complexity to authentication. It becomes more difficult to ensure that authentication requirements are evenly applied across systems; it increases the dependencies and attack surface of the database; it makes it difficult to enforce non-repudiation (and attribution) of actions and to ensure that user accounts are unique; and it complicates processes such as user permission reviews or user deactivation – an administrator may deactivate a user's local database account, believing that they have blocked that user from access, whilst inadvertently leaving a second, valid account in place in Active Directory.
Stored procedures are essentially logical flows or sequences of commands that exist within a database itself rather than the application layer, consolidating and centralizing some of the logic that would typically be implemented in remote connecting applications and then executed as a discrete series of queries. Using subroutines, an application instead makes a single call to the database to execute the stored procedure. Although they offer some security advantages (such as partial protection against SQL injection attacks), they also introduce new potential security weaknesses. One common example is application or database logical confusion over the execution-assigned permissions – essentially, does a given stored procedure execute within the definer or invoker’s security context? If mistakes are made, stored procedures can potentially give attackers the ability to execute queries against the database with highly elevated privileges, granting them access to data that they are not authorised to have access to.
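To make the definer/invoker distinction concrete, here is a hedged, PostgreSQL-flavoured sketch driven from Python. The `conn` object, schema and names are all assumptions for illustration; a function created with SECURITY DEFINER executes with its owner's privileges, so granting EXECUTE on it effectively lends those privileges to every caller.

```python
# `conn` is assumed to be an open psycopg-style connection with DDL rights;
# all object names below are hypothetical.
conn.execute("""
    CREATE FUNCTION lookup_salary(emp_id int) RETURNS numeric
    LANGUAGE sql
    SECURITY DEFINER              -- runs with the function OWNER's rights...
    SET search_path = payroll     -- ...so pin search_path to avoid hijacking
    AS 'SELECT salary FROM payroll.employees WHERE id = emp_id';
""")
# Under SECURITY INVOKER (the default) callers would need direct SELECT
# rights on payroll.employees; under DEFINER they do not -- which is why
# an over-broad DEFINER procedure can hand attackers elevated queries.
```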
Although not unique to database security, Man in The Middle (MiTM) attacks are very simple to execute for a suitably positioned network attacker and can permit attackers to observe and capture any data that is flowing across a network segment to which they have access, if that data is not encrypted but being sent in plaintext. Although access to the data itself is a concern, the greater risk is that database credentials themselves will be sent across the network in plaintext if the database fails to enforce transport security. An attacker who is able to “sniff” an unencrypted (plaintext) database connection from an application server or administrative user may be able to intercept the credentials being used, and then use them in turn to establish a trusted connection to the database. Depending on the credentials captured, they may then have full access to all data within the database.
Data At Rest encryption is the encryption of the data that is stored in the databases once it has been written, rather than the data that is currently being transmitted across the network. It can be applied at various levels such as at the database itself, specific tables, views, or documents within the database, or of the underlying filesystem itself that provides storage for the data within the database.
If a database is not stored in an encrypted form, then an attacker who is able to gain access to the underlying host system, or to the storage system if network-based, is able to access any and all data within the database, and to exfiltrate it in a data breach attack.
The principle of least privilege is a key guiding principle in information security that highlights the importance of ensuring that users have all the permissions to access data and services in a way and to an extent that is necessary to perform their given function and role, but no privileges beyond or in excess of that. A common antipattern is simply to give all users flat or blanket access to all data or methods, without basing access upon need. There are a number of risks in such an approach: a disgruntled employee is able to perform more harm than otherwise they would; an attacker who is able to intercept or steal a user’s credentials has much broader system/data access; and the potential for accidental (as opposed to malicious) damage is also greatly increased.
Lastly, the binary files that themselves constitute the database application and are provided by a vendor may themselves contain logical or code flaws that leave the database open to one or more vulnerabilities that can be exploited by an attacker. If a database system is not patched regularly under an effective vulnerability management programme, then it may become increasingly susceptible to exploit over time as security weaknesses within the executables are discovered, published, and sought to be exploited by attackers.
Although there are many other weaknesses potentially present within databases, the above is a summary of the most commonly observed weaknesses, and we can now look at how these may best be protected against.
This article is designed to lay out the general best practices at a conceptual level, in order to increase awareness of the kinds of measures that can be taken to harden a database environment against attack. It is not intended to act as a complete resource in isolation, and cannot cover in detail, given the space available, how to implement database hardening at the low level of specific configuration changes for every database system variant.
Detailed product-specific guidance is generally published by the database system vendors, but perhaps the most detailed and low-level guidance is that available via organisations that produce compiled “checklists” of hundreds of individual configuration options that are recommended on a per-platform or per-technology basis. Perhaps the most well known and respected of these are the “CIS benchmarks” published by the Center for Internet Security, but the United States Defense Information Systems Agency (DISA) also produce a series of Security Technical Implementation Guides (STIG) that are available upon request. Specific platforms may also offer scripts and other tools (such as the mysql_secure_installation script for MySQL) that allow automated application of secure configuration options.
The drawback with these tools and guidelines is that they serve a very different purpose to this article, since they provide guidance (or in the case of the MySQL script simply change parameters automatically) specifically relating to database configuration parameters only, and without providing wider context or awareness as to the risks they are addressing or to wider environmental concerns that impact database security. In the list below, in contrast, we highlight the key hardening measures that you should expect to include in order to harden your database against attack, but it is recommended to use compliance tools from CIS, DISA or others in order to implement many of the changes that require system-level configuration changes.
Databases are able to produce various logs recording actions taken. These can include binary or relay logs that incorporate every action performed on the database, in sequence, and can be used to restore a database from scratch if “played back” in full order or used to “rewind” the database to a given point, as well as allow replication of changes to a slave database server in a replicated setup. However, there are also more standard application logs as seen in other system types that are designed primarily for logging and monitoring purposes and record key events only, such as security events. For database systems in particular, it is recommended to turn on auditing for high-risk query types such as GRANT (used in permission assignment for users) and DROP (that removes a database table and its data entirely).
It is sometimes said that capturing audit data is easy, but using it is not. In order to deliver on security, audit data must be transformed into actionable information that teams can respond to, so monitoring and alerting must be built on top of any logging put in place, and an organisation must commit to providing the resources for the vigilance required to support effective auditing and incident response. Auditing changes to your databases enables you to track and understand how data is accessed and used and gives you visibility into any risks of misuse or breaches.
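As a trivial illustration of turning audit data into alerts, the sketch below scans a log for the high-risk statement types mentioned above. The path and line format are invented; real audit sources (the MySQL audit plugin, pgaudit, and so on) each have their own formats.

```python
import re

HIGH_RISK = re.compile(r"\b(GRANT|REVOKE|DROP|TRUNCATE)\b", re.IGNORECASE)

# Hypothetical log location; in practice this would feed a SIEM, not stdout.
with open("/var/log/db/audit.log") as log:
    for lineno, line in enumerate(log, start=1):
        if HIGH_RISK.search(line):
            print(f"ALERT line {lineno}: {line.rstrip()}")
```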
One of the simplest and most robust measures to put in place is an effective firewall solution. Packet filtering firewalls operate by screening database services and ports, restricting access to them in a simple and robust manner to restrict access to certain source hosts and IP addresses, discarding or dropping any other requests to access the database. This can be used both to restrict access to database services to authorised application servers, as well as to restrict access to administrative interfaces and meta services to authorised workstations belonging to database administrators (DBAs). Conversely, packet filtering firewalls can also be used to effectively restrict outbound access from the database server also, limiting an attacker’s options for data exfiltration in certain exploit types.
In addition to packet filtering firewalls, it is also possible to deploy application-level gateway devices that screen the database at the application layer (also known as proxy firewalls). These “database firewalls” such as MySQL enterprise firewall operate at the application rather than packet layer and have a native understanding of the product being screened and the actual queries being performed. Given this application-specific context they are fairly resource-intensive but permit a different kind of security to be enforced: rather than applying blanket allow or deny access based on source IP, they enable the configuration of fine-grained directives that place more subtle restrictions on traffic from given sources or users. They can, for example, allow a database administrator to configure whether a SQL statement sent to the database server from the application server is permitted to execute, based on one or more rules matching against lists of accepted statement patterns known as signatures. Signatures can be either standard vendor-provided defaults that can help to harden the server against attacks such as SQL injection, or custom signatures added by administrators within an organisation designed to resist attempts to exploit applications by executing queries not permitted within the designed system function and query workload characteristics. Protection can be applied and tailored based on various factors such as account being used, application source, the time, and the query contents.
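The signature mechanism can be pictured as an allow-list of normalised statement fingerprints. The toy check below is illustrative only; real database firewalls normalise SQL far more rigorously, and the patterns here are invented.

```python
import re

# Fingerprints of the statements the application is known to issue.
ALLOWED = [
    re.compile(r"^SELECT \* FROM orders WHERE id = \?$"),
    re.compile(r"^INSERT INTO orders VALUES \(\?, \?, \?\)$"),
]

def normalise(sql):
    """Replace literals with '?' so equivalent statements share a signature."""
    sql = re.sub(r"'[^']*'", "?", sql)
    return re.sub(r"\b\d+\b", "?", sql).strip()

def permitted(sql):
    fingerprint = normalise(sql)
    return any(p.match(fingerprint) for p in ALLOWED)

print(permitted("SELECT * FROM orders WHERE id = 42"))         # True
print(permitted("SELECT * FROM orders WHERE id = 42 OR 1=1"))  # False
```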
At the risk of turning this blog post into a “firewall varieties” listing rather than a guide on database hardening, there is a third firewall type that, whilst not specific to databases, is a key measure in hardening the environment as a whole and protecting the database, and that is a Web Application Firewall (WAF). Also known variously as Application Delivery Controllers (ADCs), Application Security Managers (ASMs) and Application Gateways, these devices are similar in many ways to database firewalls, in that they are application-aware and are used as proxies inserted in the path between the requesting client and the database. However, they are placed in front of (screening) the web application server that accesses the database, rather than behind it. The requests that they are used as a proxy for are therefore the client HTTP requests, rather than the SQL queries sent to the database. As with database firewalls, signatures can be provided by the vendor against generic attack types, and then extended by an organisation based on its own requirements.
The reason that this is an effective database security measure is that SQL injection and other attacks ultimately originate with the requesting client, and it is possible to intercept and block requests upstream of the application server before they even reach the application server and their payload unpacked and passed on to the screened database system.
Transport layer encryption can be applied to ensure that data sent to and from the database and requesting clients (such as application servers) is encrypted during transport and hence not subject to interception by an intermediary. Most database servers can be configured to operate in a mode that permits only encrypted connections, which may involve the database service listening on a different port.
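Enforcement works best when the client also refuses to fall back to plaintext. A hedged sketch for PostgreSQL using psycopg2; the host, credentials and certificate path are placeholders.

```python
import psycopg2

# sslmode="verify-full" refuses unencrypted connections AND verifies that
# the server certificate matches the hostname, defeating both passive
# sniffing and active man-in-the-middle impersonation.
conn = psycopg2.connect(
    host="db.example.internal",                    # placeholder host
    dbname="appdb",
    user="app_user",
    password="change-me",                          # use a secret store in practice
    sslmode="verify-full",
    sslrootcert="/etc/ssl/certs/internal-ca.pem",  # assumed CA bundle path
)
```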
In the introduction to this section, we mentioned that organisations such as CIS produce standard configuration baselines for database systems that can be used to ensure that systems are built and deployed securely, and this remains a recommended practice. However, it is not sufficient to rely on baseline configuration or golden image deployment alone: over time, adjustments to database configurations and functionality will be required, and the older the database, the more changes will have occurred in a process known as configuration drift. It is therefore recommended to perform ongoing or periodic compliance testing of deployed systems to detect variance from this secure baseline that may represent exploitable security concerns. Vendor-specific solutions such as Oracle’s Integrity AppSentry do exist, but a more common and broader set of tools is available from CIS in their Benchmark suite, allowing generation of configuration comparisons of the current configuration against “known good” secure baseline policy sets such as STIG or CIS’s own benchmarks.
In addition to encrypting data that is in transport to and from the database server, encryption can also be applied to data that is “at rest” (that is, stored within the database). The encryption process is performed on the database server before the data is written to disk and is completely transparent to the applications accessing the database. Known as Transparent Data Encryption (TDE) on some platforms, both data and log files can be encrypted in this way. It is also possible to apply storage/volume encryption to the underlying host or storage platform as a whole, to further ensure that the data is not accessible in a readable format to an attacker under various attack scenarios.
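Where the engine offers no native TDE, a partial fallback is to encrypt sensitive values at the application layer before they are written. A minimal sketch with the third-party `cryptography` package; key handling is deliberately oversimplified here (a real deployment would fetch the key from a KMS or HSM, never generate it ad hoc).

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # illustration only; obtain from a KMS in practice
box = Fernet(key)

ciphertext = box.encrypt(b"4111-1111-1111-1111")  # what actually hits the disk
print(ciphertext)
print(box.decrypt(ciphertext))                    # recovered on read
```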
A general principle, as opposed to a specific technical measure, is to ensure that where putting in place a restriction – whether it is at the network level or relating to database table or user permissions – that the default (baseline) position is to deny all access, and then to apply specific “allow” exceptions. This is in contrast to an approach where the default position is for all access to be allowed, except certain specific blocked or denied instances. The reason that this is important is both that it makes permissions for a given user or system implicit and easy to review and understand, as well as that any new access vector or user created by default has no access, forcing administrators to carefully consider and grant only that access that is required – typically using the principle of least privilege and making use of Role Based Access Control (RBAC) via user-role assignment in the case of user permissions.
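Deny-by-default reduces to one rule: absence of a grant means no access. The toy checker below makes that explicit; the roles, tables and actions are invented.

```python
# Explicit grants; anything not listed is implicitly denied.
GRANTS = {
    ("reporting_role", "orders"): {"SELECT"},
    ("app_role", "orders"): {"SELECT", "INSERT", "UPDATE"},
}

def allowed(role, table, action):
    return action in GRANTS.get((role, table), set())  # default: deny

print(allowed("reporting_role", "orders", "SELECT"))  # True
print(allowed("reporting_role", "orders", "DELETE"))  # False
print(allowed("intern_role", "orders", "SELECT"))     # False: no grant exists
```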
It is often necessary to store sensitive data within a database, and further for this data to then be needed to be exported in some form for further uses: especially within large enterprises, production databases are commonly copied or cloned to create test, support, and development environments. Rather than simply storing the sensitive data in raw (readable) format there are a few options for modifying the data so that it is fit for purpose (in terms of form or volume) yet does not disclose its contents. This typically involves the transformation of the data using obfuscation or perturbation techniques – the three most common options being data masking, encryption or tokenization. Data masking substitutes realistic, but fake, data for the original values; whereas data tokenization substitutes sensitive data with random (meaningless) surrogate values, referred to as a token. Modifying the data in this way allows its safe storage within less secure resource spheres such as testing environments without the risk of data exfiltration leading to a data breach involving sensitive data.
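The difference between the two transformations is easy to show in a few lines. A toy sketch follows; the card number is fake, and the in-memory "vault" stands in for what would be a separate, hardened token store.

```python
import secrets

def mask_card(pan):
    """Data masking: preserve the format, hide the value."""
    return "****-****-****-" + pan[-4:]

_vault = {}  # token -> real value; in practice a separate, hardened service

def tokenize(value):
    """Tokenization: substitute a random, meaningless surrogate."""
    token = secrets.token_hex(8)
    _vault[token] = value
    return token

print(mask_card("4111-1111-1111-1111"))  # ****-****-****-1111
token = tokenize("4111-1111-1111-1111")
print(token)                             # safe to hand to a test environment
```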
Certain attack types such as command injection can permit an attacker to execute ad hoc commands within the context of the database system and its executing owner. By configuring the database services to run under a low-privileged user account, an administrator can help to minimise the impact of such exploits should they occur.
In addition to the risks posed by an attacker assuming control of an authorised user's credentials or account, there is also a risk posed by problematic configuration of authorised accounts, specifically in relation to configuration drift – most importantly, access to a database is not something that can be set once and then never amended. Employees join and leave an organisation and change roles, and access that may once have been necessary for a business function may no longer be appropriate, presenting a business risk. It is therefore important to consider the authorised user list as another type of configuration relating to database security, and one that requires periodic review, with accounts culled or de-provisioned where access is no longer required or appropriate.
Whilst the primary focus on securing access to the database is typically around access via the connecting application or applications, there are a number of other channels that need to be considered and appropriately secured. In some cases, this may be local access (access from the host operating system command line itself), which brings with it its own potential security issues – such as the common issue of administrators saving database access credentials in plaintext configuration files on disk or passing in the password as a command line parameter and it being stored in the host’s command history file (and sometimes process listing too). Network based access also may offer a number of alternative channels for database access that can easily be overlooked when hardening a database server, such as permitted connections for backups or replication, for batch processing, by data warehouses or reporting tools, and by third-party interfaces and data feeds. These all offer alternative attack vectors for attackers and come with their own security risks.
Although the primary focus of hardening within a database environment will understandably be on the database system itself, in the context of considering likely attack vectors then it is also worth considering the security of client environments – that is, machines such as administrator workstations. These may have privileged access to administrative interfaces and metaservices that permit modification to the database itself, and therefore are tempting targets for attackers. An attacker compromising an administrative workstation can often then pivot this attack into an exploit against the database system or systems to which it has privileged access. These attacks can be either purely technical or rely on social engineering attacks such as phishing attacks that permit an attacker to install malware onto administrative machines. Measures that can be useful in hardening client environments include the use of anti-malware and anti-virus solutions on administrative machines and a clear enforcement of unprivileged accounts usage for day to day working, and separate “high privilege” administrative accounts only used when required. It is also possible to introduce measures such as bastion hosts for administrative access, as well as 2FA/MFA for authentication in order to add an additional “hurdle” for attackers – these measures mean that even if an operator/administrator host machine is compromised, access to the database server may not be guaranteed for the attacker.
As with any other computer system, database systems can have vulnerabilities inherent in their binaries as shipped from the vendor. It is therefore important to register for security advisories from software vendors and to apply security patches within a reasonable timeframe – with shorter timescales for higher criticality vulnerabilities.
Database administrators and others can often be hesitant to commit to applying updates to database servers, given the requirements for service availability and the significant issues if databases cannot be restored to service following update. Complicating the overall situation is the need to schedule business downtime to test changes. Downtime interrupts operations and can also have an adverse impact on company revenue. If the business perceives little-to-no benefit to testing and scheduling downtime to apply security patches, over time security vulnerabilities can easily accumulate. These obstacles can be overcome, however, by means such as testing of patches in off-production environments, and the use of multiple master database servers in a replica set, permitting individual hosts to be patched one at a time without impacting service delivery.
An often-overlooked measure that can be taken to harden databases is the careful review and optimisation of code. This can involve manual or automated code review aimed at identifying queries that are either dangerous or highly resource intensive. By identifying such queries, the application code can either be optimised to provide more efficient query execution, or else the queries in question moved to a batch processing system, if possible, rather than executed in real-time where they compete for database resources against other queries. The risk of leaving unoptimized queries in place is that they offer an easy amplification method for attackers seeking to perform a Denial of Service (DoS) attack. If an attacker is able to identify an expensive query that can be triggered by a single HTTP request (for a webpage) for example, then they can potentially trigger a complete database outage by simply making a handful of concurrent HTTP requests, at minimal cost to themselves but significant impact to the database availability.
Applicable to both system privileges as well as object privileges (those applying to specific database columns, tables, or views), it is possible to place access restriction controls on data to ensure that users and clients can only perform queries relating to data to which they are permitted access. If access is configured to specific tables or columns only then it is possible that even in the case of an SQL injection exploit an attacker will be unable to completely compromise the database server given the execution context of any SQL commands that are able to inject lacking privileges to perform queries on tables that they wish to target.
Lastly, it is important to have a strong password policy in place for both user and administrative access, and to pair this with multifactor authentication (MFA) if possible, for at least administrative connections. Passwords remain one of the most common weak points leveraged by attackers to gain unauthorised access. A good password policy will include requirements for password length, password strength and character set, password re-use and uniqueness.
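A policy is only useful if it is enforced at password-set time. The toy validator below mirrors the requirements listed above; the specific thresholds are illustrative, not a recommendation.

```python
import re

def acceptable(password, previous=()):
    """Length, character-class and re-use checks (illustrative thresholds)."""
    return bool(
        len(password) >= 14
        and re.search(r"[a-z]", password)
        and re.search(r"[A-Z]", password)
        and re.search(r"\d", password)
        and re.search(r"[^\w\s]", password)  # at least one symbol
        and password not in previous         # block re-use
    )

print(acceptable("correct-Horse-battery-7!"))  # True
print(acceptable("password123"))               # False
```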
AppCheck can help you gain assurance across your entire organisation's security footprint. AppCheck performs comprehensive checks for a massive range of web application vulnerabilities – including SQL injection and other database security weaknesses – from first principles to detect vulnerabilities in in-house application code. Our custom vulnerability detection engine delivers class-leading detection of database vulnerabilities and includes logic for multiple detection methods (including Time Delay Detection, Error Detection, Out of Band Detection and Boolean Inference) as well as a range of database products and platforms (including Oracle, PostgreSQL, SQLite, MSSQL, MySQL and Azure).
The AppCheck Vulnerability Analysis Engine provides detailed rationale behind each finding including a custom narrative to explain the detection methodology, verbose technical detail, and proof of concept evidence through safe exploitation.
AppCheck is a software security vendor based in the UK, offering a leading security scanning platform that automates the discovery of security flaws within organisations' websites, applications, network, and cloud infrastructure. AppCheck are authorized by the Common Vulnerabilities and Exposures (CVE) Program as a CVE Numbering Authority (CNA).
As always, if you require any more information on this topic, or want to see what unexpected vulnerabilities AppCheck can pick up in your website and applications, then please get in contact with us.
Law enforcement agencies have long been aware of the power of the internet browser in criminal investigations, pulling information from a suspect's browsing history to reconstruct their activities, build a dossier and uncover vital evidence.
Web browsers are the primary way that end users access a variety of popular applications, both business and personal, and they store an incredible amount of important information about an individual’s personal and professional activities.
So, when a user has been detected conducting suspicious activities, or their device has potentially been compromised by an external hacker to gain access to systems, going through the information stored in their browser can help quantify the nature, scale and scope of any potential threat.
The web browser – a rich source of information
Within the enterprise, insider threats are both insidious and difficult to detect. But initiating a browser investigation after a user has crossed a predefined risk threshold – becoming a ‘notable user’ – or after a system has been flagged as suspicious or potentially compromised can be the vital first step to reconstructing a user’s activities.
Web browsers contain features that are designed to make life easier for users. Everything from remembering recently viewed web pages to recording web form data, saving passwords, sending geolocation information and synching browser history across devices. This means they offer the insights that investigators need to understand if a cyber crime has been committed and vital evidence on a user’s activities and motivations.
Understanding a user’s web browsing activities can help identify if a user was in violation of enterprise policies. It can show if a user visited a site or clicked on a phishing link that redirected them elsewhere where a malware payload was delivered that infected their device – opening a door into the enterprise.
The prevalence of HTTPS and other privacy measures means it is difficult for cyber security analysts to rely on network traffic alone. So, any deep-dive investigation should involve looking at browser artefacts on a user's laptop and mobile devices. When conducting a digital investigation on a system, a cyber security analyst is able to gather evidence from the artefacts left by a browser after a session; typically, these forensic artefacts will include cache, history, cookies and file download lists.
Primary investigation areas
A detailed forensic investigation of browser artefacts will reveal timestamps, dates, downloads, sites visited, whether data was entered on websites and more. At a basic level, viewing a user's browsing history will enable construction of a timeline of URLs visited and help identify whether a user has engaged in any perilous behaviours – such as downloading or opening a risky file or logging in to non-authorised file-sharing websites.
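As a concrete illustration, Chromium-based browsers keep history in a SQLite file (commonly a file named History inside the profile directory; always work on a copy, as the live file is locked while the browser runs). The sketch below lists the most recent visits; the evidence path is a placeholder, and timestamps are converted from Chrome's microseconds-since-1601 epoch.

```python
import sqlite3
from datetime import datetime, timezone

DB = "/evidence/chrome-profile/History"  # placeholder path to a copied file
WEBKIT_OFFSET = 11_644_473_600           # seconds between 1601 and 1970 epochs

def to_utc(webkit_us):
    return datetime.fromtimestamp(
        webkit_us / 1_000_000 - WEBKIT_OFFSET, tz=timezone.utc
    )

conn = sqlite3.connect(DB)
for url, title, visits, last in conn.execute(
    "SELECT url, title, visit_count, last_visit_time "
    "FROM urls ORDER BY last_visit_time DESC LIMIT 20"
):
    print(to_utc(last), visits, title or url)
```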
Similarly, investigating a user’s search history and queries will provide context and background around a user’s motivations and their recent online behaviours. Meanwhile, autofill artefacts can provide a rich source of valuable information that helps build out what happened and when – for example, identifying if a user has multiple other email accounts they haven’t informed the team about.
Creating a ‘word cloud’ visualisation can help speed up the evaluation of a user’s areas of interest and other behaviours. Even if a user has deleted their search history, artefact data may well have been synched to the cloud or be retained elsewhere on the device.
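The raw material for such a visualisation is just a term-frequency tally over extracted queries. A toy sketch follows; the queries are invented stand-ins for recovered search history.

```python
from collections import Counter

queries = [
    "delete browser history permanently",
    "copy large files to usb quickly",
    "delete browser history",
]

STOPWORDS = {"to", "the", "a"}
terms = Counter(
    word for q in queries for word in q.lower().split()
    if word not in STOPWORDS
)
print(terms.most_common(5))  # feed these weights into any word-cloud library
```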
An escalating challenge
Pulling as much information as possible out of a user’s browsing history is the key to reconstructing a user’s activities. For cyber security analysts responding to a potential data breach, this may involve reviewing multiple devices in a limited amount of time, so the ability to perform browser investigations as efficiently as possible is becoming a top priority.
Compounding the challenge, today's security analysts face an increasing number of alerts with a limited amount of time and resources to respond. A recent Exabeam survey of digital forensics and incident response professionals found that they typically have to examine between five and 20 devices a month – and some had to evaluate 40 devices or more.
Reviewing processed browser data takes up a significant amount of time – most investigators took over an hour per device. All of which highlights the growing need for tools and automation that extract data and glean threat intelligence as quickly as possible.
Today's advanced automated intelligence tools can help compress vital investigation timeframes, delivering the top-level summary of key information that gives analysts the answers they need fast. Automatically parsing actions from the web history investigation, these tools generate easy-to-read reports and dashboards that deliver insights on everything from a user's search engine queries and accounts on websites, to autofill data, historical geolocation, activity per domain and even activity trends based on the time and day of the week.
Web browsers may be overlooked during investigations, but they have the potential to help cyber security analysts to respond to security incidents more quickly and effectively. | <urn:uuid:9ce5428f-93d0-41a1-b4fa-c501c2f172d5> | CC-MAIN-2024-38 | https://www.information-age.com/enterprise-csi-utilising-web-browser-forensics-for-cyber-security-investigations-11449/ | 2024-09-09T12:27:01Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651098.19/warc/CC-MAIN-20240909103148-20240909133148-00170.warc.gz | en | 0.929972 | 1,022 | 3.09375 | 3 |
ASN.1 is similar to a high-level programming language. Unlike other high-level languages, ASN.1 has no executable statements. It includes only language constructs required to define types and values.
ASN.1 defines a number of built-in types. Users of ASN.1 can then define their own types based on the built-in types provided by the language. The ASN.1 standard defines four categories of types that are commonly used in defining application interfaces such as XOM and XDS:
· ASN.1 Simple Types
· ASN.1 Useful Types
· ASN.1 Character String Types
· ASN.1 Type Constructors
ASN.1 simple types are Bit String, Boolean, Integer, Null, Object Identifier, Octet String, and Real. The following table shows the relationship of OM syntaxes (syntaxes defined in XOM API) to ASN.1 simple types. (Refer to Information Syntaxes for the complete set of tables for the four categories of ASN.1 types.) As shown in the table, for every ASN.1 type except Real, there is an OM syntax that is functionally equivalent to it. The simple types are listed in the first column of the table; the corresponding syntaxes are listed in the second column.
Syntax for the Simple ASN.1 Types
| ASN.1 Type | OM Syntax |
| --- | --- |
| Bit String | String(OM_S_BIT_STRING) |
| Boolean | OM_S_BOOLEAN |
| Integer | OM_S_INTEGER |
| Null | OM_S_NULL |
| Object Identifier | String(OM_S_OBJECT_IDENTIFIER_STRING) |
| Octet String | String(OM_S_OCTET_STRING) |
| Real | None (A future edition of XOM can define a syntax corresponding to this type.) |
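To make the mapping concrete, the following minimal sketch models several of the simple types in Python using the third-party pyasn1 package (an assumption; any ASN.1 runtime would serve) and BER-encodes each value:

```python
from pyasn1.type import univ
from pyasn1.codec.ber import encoder

# One value per ASN.1 simple type from the table above (Real omitted,
# matching the table's note that XOM defines no corresponding syntax).
values = [
    univ.Boolean(True),                    # Boolean
    univ.Integer(42),                      # Integer
    univ.Null(""),                         # Null
    univ.ObjectIdentifier("1.3.6.1.4.1"),  # Object Identifier
    univ.OctetString(b"\x01\x02"),         # Octet String
    univ.BitString("'1011'B"),             # Bit String
]

for value in values:
    print(value.prettyPrint(), encoder.encode(value).hex())
```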
Protecting personal information becomes more crucial as technology develops and businesses push into new markets. In recent years, growing concerns have been raised about Microsoft's and Google's data privacy practices in Saudi Arabia. Human rights advocates warn that these businesses risk being compelled to hand customer information to the Saudi government, opening the door to surveillance and persecution. This essay explores the implications for data privacy in Saudi Arabia and the potential dangers of keeping private data there.
Saudi Arabia has been actively working towards transforming itself into a hub for technology and innovation. Crown Prince Mohammed bin Salman’s “Vision 2030” plan aims to diversify the country’s economy and reduce its dependency on oil reserves. As part of this plan, Saudi Arabia has attracted the attention of major tech companies, including Microsoft and Google. These companies have announced plans to establish cloud storage centers in the kingdom, investing billions of dollars in the process.
However, there is a huge caveat to the allure of rich contracts: Saudi Arabia has a history of punishing dissidents, paired with lax privacy regulations. Human rights advocates are sounding the alarm, cautioning that the huge digital data vaults kept in the kingdom might be used to amplify monitoring and repression. Under Saudi legislation, security services are given broad access rights to data and have the authority to order businesses to turn over confidential information based on vague and general national security regulations.
Human Rights Watch has called out Microsoft and Google for their lack of transparency regarding data privacy in Saudi Arabia. These tech giants have not disclosed how they plan to safeguard the privacy of data hosted in the kingdom. This lack of transparency raises concerns about the potential for abuse and the possibility of private citizen data falling into the wrong hands. Activists argue that by establishing cloud centers in Saudi Arabia, Microsoft and Google may unwittingly be providing Saudi authorities with easy access to sensitive information.
The human rights record of Saudi Arabia is concerning. The nation has a documented record of brutally pursuing political opponents on social media, employing spyware to track down exiled dissidents, and even infiltrating Twitter to obtain information on its users. Recent events, including the arrest of Salma al-Shehab and Fatima al-Shawarbi for criticizing the crown prince and the Neom megacity project on social media, show just how far the Saudi government will go to repress dissent.
With Microsoft and Google establishing cloud storage centers in Saudi Arabia, there is a legitimate concern that sensitive political information could be accessed by Saudi authorities. The government’s sweeping powers and weak privacy laws give them the ability to use data stored in the kingdom against dissidents and critics. The potential for surveillance and monitoring of residents in real-time, particularly in the futuristic city of Neom, further exacerbates these concerns.
For tech companies like Microsoft and Google, the decision to invest in Saudi Arabia is a delicate balancing act. On one hand, they have the opportunity to contribute to the kingdom’s technological advancement and diversification efforts. On the other hand, they face the risk of being complicit in human rights abuses and privacy violations. It is crucial for these companies to address the concerns raised by human rights organizations and publicly demonstrate their commitment to protecting fundamental rights.
According to Microsoft, upholding human rights is one of its guiding principles. While it has not provided specifics on how it intends to protect data privacy in Saudi Arabia, Microsoft emphasizes its commitment to responsible cloud practices, including security, privacy, compliance, and transparency. Detractors, however, claim that more must be done and that Microsoft should be more explicit about the risks it intends to mitigate, given the Saudi authorities' potential access to data.
Similarly, Google has asserted its commitment to upholding human rights in every country where it operates. The company highlights its collaboration with human rights organizations and the broader technology industry to ensure the protection of human rights. However, like Microsoft, Google is urged to provide more transparency and accountability by publishing human rights “due diligence” reports. These reports would assess the ethical considerations and risks associated with operating in a country with a poor human rights record.
As technology continues to advance and data becomes increasingly valuable, the issue of data privacy in Saudi Arabia will remain a topic of concern. The country’s weak privacy laws and the potential for abuse of power by Saudi authorities raise important questions about the safety and security of personal information. Tech giants like Microsoft and Google must prioritize the protection of user data and work towards ensuring that their operations in Saudi Arabia do not contribute to human rights abuses or privacy violations.
The establishment of cloud storage centers by Microsoft and Google in Saudi Arabia has raised significant concerns about data privacy. The weak privacy laws and human rights record of Saudi Arabia pose risks to the protection of personal information. Activists warn that these tech giants may be forced to surrender people’s data to Saudi authorities, potentially leading to surveillance and repression. It is imperative for Microsoft and Google to address these concerns, provide greater transparency, and demonstrate their commitment to protecting fundamental rights. As technology continues to advance, the future of data privacy in Saudi Arabia will remain a critical issue that requires ongoing attention and vigilance.
First reported by Business Insider. | <urn:uuid:82a663a4-b79a-4967-bf1b-62f2075b2a78> | CC-MAIN-2024-38 | https://www.baselinemag.com/news/the-impact-of-data-privacy-in-saudi-arabia/ | 2024-09-10T18:31:09Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651303.70/warc/CC-MAIN-20240910161250-20240910191250-00070.warc.gz | en | 0.934976 | 1,046 | 2.90625 | 3 |
The digitally connected world we live in is powered by identity. Without the ability to prove that we truly are who we claim to be, the countless online accounts, devices, and services that we use each day would be almost worthless. From email accounts to social networks, and from digital wallets to eCommerce sites, our digital existence depends on our ability to manage identity effectively. But precisely because identity is so powerful and so ubiquitous, it is also a tempting target for cybercriminals. Fraudsters use increasingly sophisticated methods to obtain personal information such as passwords, social security numbers, and account numbers. Armed with that information, cybercriminals can then impersonate their victims to hijack their accounts, steal their data or money, secure illicit loans and lines of credit, access government benefits, or perpetrate other kinds of fraud.
In fact, by some estimates losses relating to identity fraud now total more than $62 billion, making identity a critical battleground in the fight against global cybercrime.
How does identity impact cybersecurity?
Identity is the new perimeter for cybersecurity.
Companies that can successfully verify and authenticate a user's identity can keep their customers and their data safe, create more compelling products, and gain a substantial advantage in the marketplace. Fraudsters that successfully penetrate authentication systems and spoof or hijack a user's identity, meanwhile, can wreak havoc, steal huge sums, and seize control of a wide range of valuable assets.
Only by understanding and managing identity effectively, and using next-gen software to proactively secure identity in real-time across the entire operational ecosystem, can organizations ensure their users’ safety. That requires a clear understanding of how identity diffuses through digital networks, along with robust security technologies capable of validating identity at every step in the customer journey. Deduce is here to help you achieve those goals. Learn more about Deduce’s approach to managing identity, or request a free demo to learn how you can prevent identity fraud.
How do companies verify and authenticate identity?
Managing identity depends upon two key concepts: verification and authentication.
Essentially, verification is concerned with establishing identity credentials, while authentication is concerned with checking those credentials. Both are important, but how much weight needs to be given to each step depends on the use-case in question.
A loan issuer might need to take extreme care when originally verifying an applicant’s identity for instance, while an entertainment website might allow users to use a simple email address for verification purposes.
Similarly, a bank might require rigorous authentication before enabling transactions, while a low-stakes entertainment website might use a simple cookie-based system to recognize returning users and apply their browsing preferences.
When you partner with Deduce, you can leverage our data coalition to put customized verification and authentication tools in place to meet your organization’s specific needs. Check out our one-page guide for tips on using collective intelligence to keep customers safe.
What is identity fraud?
Identity fraud is the improper use of someone’s personal information for illicit financial gain. As the connective tissue binding together websites, digital services, and online accounts, identity is a key target for fraud.
Once a person’s identity has been compromised, fraudsters can drain their financial accounts, gain access to other online accounts, or use their information to claim benefits, take out loans, make purchases, open new credit cards, or conduct many other kinds of fraud.
There are three major types of identity fraud:
What’s the difference between identity theft and identity fraud?
Identity theft is when a cybercriminal gains improper access to personal information, such as obtaining an individual’s social security number or date of birth. Identity fraud is when a person’s stolen information is subsequently used for unlawful financial gain, such as by transferring funds out of a compromised bank account.
By aggressively monitoring for and responding to potential identity theft, it’s often possible for organizations to prevent financial losses from identity fraud. Deduce can give you the broad-spectrum security intelligence you need to achieve that. Request a free demo today to see how Deduce puts you back in control of account security.
How do fraudsters steal someone’s identity?
Fraudsters can gain access to personal information in many ways, making it hard to defend against. All forms of personal information — even apparently trivial and freely shared scraps of information such as a pet’s name, a holiday photograph, or a city of residence — can be potentially valuable to fraudsters. Among the main ways that fraudsters steal information:
Read more about the threats you face and how Deduce can help you comply with regulations and minimize your risk exposure.
Who commits identity fraud, and why?
Identity fraud is conducted on a vast scale, with upwards of 10,000 fraud rings believed to be active in the United States alone. Many low-level fraud rings are casual operators who use friends and family members’ IDs, checkbooks, or social security information to unlawfully claim benefits, secure loans, or open credit cards. Others are street gangs that have branched out into financial crime and now run credit card and tax scams.
Larger cybercriminal rings use digital tools and networks of hijacked computers to perpetrate identity fraud on a global scale.
One 11,000-member fraud network based in the U.S., Europe, and Australia penetrated more than 4.3 million online accounts, causing losses of over $530 million. Another group based in China and the former Soviet bloc stole information from 45.7 million credit and debit cards over a 17-month period.
During the pandemic, such crime rings have only stepped up their activity. Remote workers make an enticing target for fraudsters; so do new government relief programs, emergency business loans, and unemployment benefits. One West African identity fraud ring is known to have defrauded U.S. states of millions of dollars in bogus unemployment benefit payments since the beginning of the COVID crisis.
It’s important to remember that fraudsters seldom work alone: a local crime ring might sell stolen credit cards or bank statements online to cybercriminals specializing in more sophisticated kinds of fraud, for instance. Other fraudsters might combine multiple strategies, such as using information gleaned from social-media sites to personalize social engineering scams, then harvesting more sensitive information for use in financial fraud.
Who do identity fraudsters usually target?
The reality is that anyone is a potential target for identity fraud. Research shows that all age-groups and demographics suffer from identity fraud, with about 9% of all United States residents reporting having been the victim of identity theft at some point in the previous 12 months.
Notably, the people who feel most tech-savvy — such as early adopters of new technologies — are among those most likely to fall victim to fraud.
Identity fraudsters are increasingly sophisticated and well-resourced, and leverage global information networks to share both stolen personal information and effective strategies for committing fraud. That makes it hard for either individuals or organizations to anticipate new kinds of identity fraud, or to take preemptive action to protect themselves at the scale that’s needed.
By joining Deduce's data coalition, you can get the identity intelligence you need to protect your users from fraudsters.
What are the consequences of identity fraud?
For victims of identity fraud, the cost of a successful attack can be high.
According to the U.S. Department of Justice, about 70% of identity theft victims report being negatively financially impacted, with average losses of about $930 per victim.
Many also report significant emotional distress as a result of the fraud. Victims also spend significant amounts of time trying to resolve identity fraud, with victims of account misuse spending an average of 14 hours clearing up the resulting problems. For a subset of victims, things take far longer: about 6% of identity theft victims report spending 6 months or longer trying to restore their accounts and credit records.
How does identity fraud hurt businesses?
Globally, organizations suffer hundreds of billions of dollars in direct losses each year as a result of identity fraud — and while the direct financial impacts are the easiest to quantify, they aren’t the only harm suffered by affected businesses.
Nine out of 10 identity fraud victims expect the organizations where the accounts were held to resolve the fraud, so organizations are forced to invest large sums in customer support and fraud detection and remediation resources.
Research also shows that as many as 38% of identity fraud victims subsequently close the affected accounts, so organizations face a significant indirect financial impact relating to lost future business, enduring brand damage, and increased customer acquisition and retention costs.
Finally, security countermeasures come at a cost — and not just the cost of implementing new data infrastructure or software. Consumers typically report abandoning transactions that take longer than 30 seconds to complete, so organizations must take pains to implement seamless security measures and authentication protocols that are robust enough to keep customers safe, but frictionless enough to avoid diminishing the user experience.
Get in touch today to learn how Deduce can help you level up your identity risk management, and give your customers peace of mind without inconveniencing them or holding them back.
How can you detect identity fraud?
Because identity fraud is typically perpetrated by cybercriminals who’ve already obtained significant information about an account-holder, it can be remarkably difficult to spot — even for the victims themselves.
About 44% of identity fraud victims say they ultimately discovered they had been targeted after they were notified of unauthorized or suspicious activity by their financial institutions. Only about a fifth of victims say they noticed the suspicious account activity themselves.
Among the most common red flags for identity fraud:
For identity fraud that doesn’t pertain to an existing account, such as benefits fraud or the use of stolen identities to open new credit cards or loans, detecting fraud can be significantly harder. Almost 37% of victims don’t realize they’ve been targeted until they’re hit with an unpaid bill, encounter problems when subsequently applying for loans or benefits, or notice an unexpected ding on their credit record.
How to beat identify fraud with data democratization
Because identity fraud is so complex, it can be hard to reverse engineer an attack: only about a quarter of victims know how fraudsters first obtained their personal information, making it hard for either individuals or organizations to put effective countermeasures in place.
That makes it all the more important for organizations to pool resources, and share security intelligence in order to proactively identify fraudulent activity, and use behavioral analysis software to halt bogus transactions or account changes before the fraud is executed.
Learn more about Deduce’s democratized approach to managing identity risk.
How should enterprises respond to identity fraud?
All organizations, from SMBs to major corporations, are now potential targets for identity fraud. The key to minimizing losses is to spot ongoing attacks quickly; respond decisively to prevent data breaches or financial losses; and take effective ongoing measures to ensure that your users continue to view you as a trustworthy partner.
These steps might sound straightforward, but the reality is that a third of identity fraud victims currently say they don’t get the support they need from the organizations where they held the affected accounts. Many ultimately close their accounts and take their business elsewhere as a result.
For both individuals and organizations, the cost of identity fraud is real — so make sure you have an effective response strategy, and the software and tools in place to rapidly identify and intercept identity fraudsters. Read more here about how you can partner with Deduce to give your users the support they need.
How can businesses prevent identity fraud?
Preventing identity fraud is no easy task, and requires a multi-pronged approach.
The key to effective security is to stop thinking of identity as a set of credentials used to grant or deny access. Instead, view identity as a proxy for your relationship with the end-user, and a source of intelligence about how they use your service. By leveraging that intelligence effectively, it’s possible to detect fraudsters seeking unauthorized access to protected assets.
What are businesses’ regulatory obligations regarding identity fraud?
Under the Federal Trade Commission’s Red Flags Rule, many organizations that handle financial transactions or extend credit to consumers need formal policies in place to detect and prevent identity fraud. The rule requires that organizations regularly update their methods to adapt to changing technologies and strategies used by fraudsters, as well as new technologies that can be used to detect and prevent identity theft.
That’s a sound policy for any organization, of course — but staying ahead of fraudsters and keeping up to speed on the latest technological countermeasures can be a complex and costly business.
To fend off identity fraud, you need broad-spectrum security intelligence and specialized support. Learn more about how Deduce helps you meet your regulatory obligations, and puts you back in control of your users’ identity.
What is identity intelligence?
Traditional identity checks are credential-based: if a user has the right token, they’re granted access and allowed to download data or execute transactions. That’s an important part of any security system, and by incorporating sophisticated credential checks — such as biometrics, card readers, or two-step verification processes — it’s possible to halt many kinds of fraud.
But in the modern era, with synthetic identities increasingly common and scammers obtaining users’ passwords and personal identifiers, authentication alone isn’t enough. Organizations need robust point-of-entry checks and post-authentication security tools to spot anomalous behaviors and halt bogus transactions before they’re executed.
To achieve that, we need a richer understanding of who our users actually are, and how they access and use our services.
That’s where identity intelligence comes in: by using big datasets and AI tools, it’s possible to glean powerful insights about how legitimate users act, and proactively detect and intercept fraudsters even if they possess an account holder’s personal details, passwords, or PINs.
Identity intelligence is a quantum leap beyond traditional fraud countermeasures, which focus on spotting bots or automated traffic. Using collectivized identity data, it’s possible to create risk metrics for any user activity, based on how legitimate users ordinarily act — then use that metric to augment existing tools and flag suspicious logins or post-login account activity.
Does identity intelligence focus on people or devices?
Identity can pertain to a particular individual, such as authenticating that the person claiming to be Fred really is Fred. It can also pertain to a device: it’s useful to know if a user claiming to be Fred is logging in from a computer known to belong to the real Fred, for instance.
Often, security checks view such relationships simplistically. A system might require additional questions if a person logs in from a new device, but let them check a “trust this device” box to avoid similar questions in future.
With fraud victims typically having 33% more connected devices than non-victims, though, it’s important to take a closer look.
The identity intelligence approach views individuals’ device usage as a rich source of behavioral insights. Perhaps Fred usually uses cloud tools on his smartphone during his morning commute, accesses an on-site network during office hours, and uses a VPN from his laptop in the evening. Those times, networks, devices, and locations contribute to a richer, more three-dimensional understanding of Fred’s behavior — and divergences from those behaviors might automatically trigger additional layers of identity authentication.
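As a minimal sketch of that idea (the fields, weights, and threshold below are illustrative assumptions, not an actual identity-intelligence model), a login event can be scored against a user's historical context, with divergences triggering step-up authentication:

```python
# Historical context for a hypothetical user ("Fred").
KNOWN_CONTEXT = {
    "devices": {"fred-laptop", "fred-phone"},
    "networks": {"office-lan", "home-vpn"},
    "active_hours": range(7, 23),  # typical local login hours
}

def risk_score(event: dict, context: dict) -> int:
    score = 0
    if event["device"] not in context["devices"]:
        score += 40  # unfamiliar device
    if event["network"] not in context["networks"]:
        score += 30  # unfamiliar network
    if event["hour"] not in context["active_hours"]:
        score += 20  # unusual time of day
    return score

event = {"device": "unknown-desktop", "network": "foreign-isp", "hour": 3}
if risk_score(event, KNOWN_CONTEXT) >= 50:
    print("Trigger step-up authentication")  # e.g. a one-time passcode
```

In a production system, the weights and threshold would be learned from pooled behavioral data rather than hard-coded.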
When you partner with Deduce, you can leverage our data coalition to get the rapid, actionable intel you need.
Check out our one-page guide for more tips on using collective intelligence to keep your customers safe.
How can identity intelligence prevent cybercrime?
Identity intelligence is based on the key insight that identity isn’t simply a digital test used to grant or deny access to a particular account, network, or file. It’s something richer and more granular: a means of describing the full spectrum of behavior that constitutes your relationship with a user, and using that to determine how risky or benign any user activity is likely to be.
Rather than painting risk in black and white, identity intelligence deals with shades of grey, leveraging a nuanced understanding of how users — both individually and collectively — operate in the real world in order to identify when any given online activity is more or less likely to be the result of (or precursor to) identity fraud.
This rich understanding of identity as a proxy for real-world user behaviors, both in general and in the specific context of your own organization, allows businesses to strengthen all aspects of their security, including:
In addition to identifying and halting attempted identity fraud, identity intelligence allows organizations to plan more effectively, with advanced monitoring and reporting enabling CISOs and teams across your organization to quickly spot and remediate potential vulnerabilities, and to implement new security features in efficient, proactive, and cost-effective ways.
Finally, software solutions powered by identity intelligence can enable you to reassure customers that their data and assets are properly safeguarded. Automated, intelligence-enabled customer alerts can proactively communicate with users to highlight risky behaviors, check on anomalous account usage, and clearly communicate that cybersecurity is your organization’s top priority.
Want to find out more? Read Deduce's one-page guide to protecting your users.
How can businesses use identity intelligence most effectively?
Identity intelligence depends on a clear understanding of the way that real-world users access and use online resources — and also of the increasingly sophisticated methods used by cybercriminals to steal identities, emulate legitimate behavior, and perpetrate online fraud.
It’s possible to glean important insights from the way that your own organization’s users operate. But unless you’re running security at one of a handful of global tech giants, your organization’s data universe simply isn’t big enough to rapidly capture new trends in identity fraud, or to distinguish between evolving consumer behaviors and the rapidly changing strategies and techniques deployed by cybercriminals.
The result: most organizations are left flying blind, without the intel and data resources they need to keep their users safe from harm. That often drives companies to overcompensate and implement new security features that diminish user experience, even as customers ultimately remain vulnerable to identity fraud.
The solution: stop trying to go it alone. By using security software to pool resources and share anonymized data about identity fraud and other cybersecurity threats, organizations can gain access to the same level of security intelligence used by the biggest global financial institutions and tech giants to detect and prevent identity fraud.
How data democratization helps businesses to prevent identity fraud
Organizations can’t solve this problem by flying solo. Instead, businesses need to band together and share data effectively, giving all organizations and enterprises access to rich identity intelligence that help create more responsive and resilient security systems without reducing utility for end-users.
The more organizations share data, the easier it becomes for everyone to detect and prevent identity fraud.
That's why Deduce has built a data coalition uniting 150,000 member websites, giving merchants, businesses, and other organizations access to insights gleaned from 200 million users and billions of historical account interactions.
Our commitment to collaborative identity intelligence gives our partners an incredibly rich window onto the countless different ways in which legitimate account users behave, and the telltale signs that betray bad actors. Together, we’re democratizing security data and beating identity fraud.
Want to find out more? Learn more about our mission.
Anyone Can Lose Control Of Their Data
The recent scandal involving Facebook’s data privacy has garnered global media attention and widespread mistrust for the social platform. It all erupted when it was reported that private information from approximately 87 million profiles was used to covertly influence the 2016 U.S. election.
The Story

In 2013, Aleksandr Kogan, a University of Cambridge psychology professor, created a personality quiz app called "thisisyourdigitallife." The app utilized the Facebook login feature, which granted developers access to profile information. So when approximately 270,000 people downloaded the app, they agreed to share their data with the professor's company. However, access to the platform's API allowed the professor to download additional data from friends of those who took the quiz. The app claimed to operate in accordance with Facebook's platform policies, with the understanding that any information collected would be used for academic research only. Unfortunately, the private data ended up being used by controversial consulting firm Cambridge Analytica to sway a political campaign.

Data Is Gold

Before Facebook changed its policies in 2014, plenty of apps used the platform's data-sharing feature to harvest profile information from unwitting users. Although the collection of data by developers has been restricted, according to The New York Times, "The core functions of Facebook's open platform tool are still intact. There are still many third-party apps like 'thisisyourdigitallife' out there, vacuuming up intimate data about Facebook users. That data doesn't disappear, and Facebook has no real recourse to stop it from falling into the wrong hands."

What If…

Hundreds of third-party apps continue piggybacking off social media platforms to track our internet behavior, and there will likely be more stories like this one in the future. The possibility of another breach affecting millions is very real, and with the GDPR deadline only weeks away, consequences for noncompliance could be devastating.

The GDPR

There are three major GDPR violations at play here: consent, privacy by design and timely breach reporting.

The GDPR requires users to give their consent before personal information is harvested, and Facebook's data policies no longer allow apps like these to "ask for data about a person's friends unless their friends had also authorized the app." However, assuming such data has already been collected, additional consent is required for it to be sold.

The consulting firm is responsible for breaching Facebook's terms, but under the GDPR, Facebook would still be liable for the data it lost. "Even if data is collected in an appropriate way, the controller of that data is responsible and accountable for how it is processed by third parties," Michael Baxter wrote in an article for GDPR.Report. Companies have an obligation to ensure end-to-end data protection for their users, which means putting measures in place to support privacy by design -- the principle that privacy must be built into the core of each product/service and not added in retrospect.

It's unclear when the internet behemoth actually discovered the breach, but according to The New Yorker, Facebook did send "a polite request to delete the GSR-sourced material." As polite as the request may have been, it would not have been enough to avoid a huge GDPR fine -- which, in Facebook's case, could have been billions.

What Now?

This should serve as a cautionary tale for businesses everywhere. Ultimately, Facebook was responsible for the data that fell out of its control. The company failed to keep a tight enough grip on user data or track to see if it was being used properly.

Now it's facing multiple lawsuits and an investigation from the Federal Trade Commission (FTC), all resulting in a fall in stock value and many prominent companies and public figures dropping the platform altogether.

The management of company content is now more important than ever, and if this scandal has shown us anything, it's that the mishandling of personal data can happen to any company of any size. Every company handling personal data is obligated to manage it properly, and all organizations will soon be held accountable by increasingly strict laws. When all is said and done, businesses will need a truly comprehensive solution for what is an ever-increasing issue.

This article originally appeared on Forbes. (Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.)
There is no question that the recent COVID-19 pandemic has greatly impacted the US healthcare system, exposing its vulnerabilities and creating numerous challenges for its future. According to the Organization for Economic Cooperation and Development (OECD), this is true for many countries around the world. The healthcare crisis has demonstrated how the vulnerabilities of healthcare systems have ultimately impacted health, economic progress, political trust, and social cohesion. Strengthening the capacity of the U.S. health system and its rapid and effective response to the viral threat was critical during the pandemic, but it may be equally important in the post-pandemic period.
Efforts have been made to prevent infections, develop, deliver, and administer COVID-19 vaccines, and ensure equitable access to vaccines and treatments across the country. However, despite significant progress in controlling the pandemic, the US healthcare system still faces many challenges. This crisis could be followed by other similar epidemics, and it remains to be seen whether the US healthcare system is now in better shape than before and more capable of facing them. The US health system is also facing a shortage of clinical staff, something that could make access to care even more difficult.
Moreover, new problems, such as Russia’s unjust war in Ukraine and rising inflation rates, are affecting not only the US economy, but also the personal finances of individuals who may need access to treatment and care.
Major Problems of the US Healthcare System
According to a recent study, Americans of various political stances, including those who are satisfied with their current insurance, believe that important changes to the healthcare system are still needed. People in the US think that healthcare should be more affordable for ordinary people. Half of those surveyed for the study also said that they have experienced serious financial difficulties because of medical care or know someone to whom this has happened. Preventable medical errors, poor amenable mortality rates, and a significant lack of transparency are among the critical issues that Americans face when it comes to healthcare.
These findings are also supported by a recent McKinsey study, which suggests that large scale innovation is needed to fill the gaps in healthcare and reshape its future. According to the article, a number of issues, including affordability challenges, access issues, clinical workforce shortages, and the recent healthcare crisis, have already set the stage for a “gathering storm.” This potential crisis could impact the entire US healthcare industry, putting nearly half of its profits at risk. According to McKinsey experts, this industry is currently facing numerous risks, but these risks also present new opportunities that might change the future of healthcare as we know it.
“Innovative models exist and, if scaled up, could deliver the $1 trillion improvement,” they add, providing hope that American companies and policymakers could collaborate not only to solve the current problems, but also to rebuild the future of US healthcare.
Positive Signals and Investment in Healthcare
The Biden-Harris Administration has recently announced that it will invest $225 million from the American Rescue Plan to train over 13,000 Community Health Workers (CHWs). According to the White House, these funds are expected to deploy more than 40,000 individuals in community health, outreach, and health education roles over the next few years. This new investment represents an expansion of the plan that originally included roughly 50,000 CHWs who were already serving American communities even before the healthcare crisis. With the COVID-19 pandemic dramatically impacting the health workforce, this is a much-needed solution to support health workers, prevent new problems, and address existing disparities.
President Joe Biden believes it is extremely important to invest in a modern public health workforce, and the current administration has already promised Americans that it will work to develop and expand the community healthcare workforce. However, there are still many challenges when it comes to sustaining the progress made to date and building a brighter future in healthcare. According to the recent announcement, the American Rescue Plan is designed to build the healthcare system during the pandemic and in its aftermath. The purpose of this investment is to solidify other significant achievements in healthcare, such as the deployment of more than 14,000 community outreach workers to build confidence in the vaccine, or the overall process of recruiting, training, and supporting healthcare professionals.
“I will do everything in my power to ensure that all Americans have access to the quality, affordable health care they deserve – and the peace of mind it brings,” President Biden promised last year. By investing in community health and health education, he is sending another positive signal that the future of US healthcare will be better than its present. | <urn:uuid:edf3479f-f6be-4f37-85b4-5cb40da2ed5e> | CC-MAIN-2024-38 | https://healthcarecurated.com/editorial/rebuilding-the-future-of-us-healthcare/ | 2024-09-15T13:50:58Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651630.14/warc/CC-MAIN-20240915120545-20240915150545-00570.warc.gz | en | 0.970782 | 939 | 2.609375 | 3 |
Introduction to Decision Intelligence: Benefits of AI-Driven Decision-Making
According to Gartner, by 2023, 33% of large organizations will use decision intelligence and modeling in their work processes. By 2026, organizations that create reliable, target-driven AI will see more than 75% of their innovative projects succeed, compared to 40% among those that decide not to. What explains such rapid adoption of AI-driven decision-making among modern organizations?
Let’s find out together what decision intelligence is and how artificial intelligence decision-making can create better outcomes for every decision you make for your business.
What Is Decision Intelligence?
To better explain the decision intelligence concept, let’s get started by carefully defining the decision as such. According to the Cambridge dictionary, “A decision is a choice that you make about something after thinking about several possibilities.”
When making a decision in both business and everyday life, we are usually driven by the current situation and environment, our knowledge of the issue, previous experience, partiality, emotions, desires, and intuition. Our decision can also be influenced by stereotypes, misconceptions, and the ultimately subjective perception of reality.
This is how the human brain processes the combination of external and internal factors to make a choice, and that's why it never takes into account all of the influencing factors and can never form a truly holistic picture.
But when it comes to AI in decision making, it becomes a game-changer. The artificial intelligence system processes and analyzes huge data arrays in real time, makes smart predictions based on the historical data, and suggests the best possible decisions based on the data sets and initially specified parameters.
So, there are two main differences between human and AI decision-making:
- AI takes into account all the information available, while a human considers limited data.
- Artificial intelligence is ultimately objective and neglects emotional factors.
How Do Intelligent Decision-Making Models Work?
Decision Intelligence Model
There is a whole set of technologies and algorithms that powers a decision machine:
- Machine learning. ML algorithms work with a certain amount of structured data and make suggestions or decisions according to the specified parameters. Anti-fraud systems used by banks are the simplest example. For instance, when users access their banking app from a suspicious IP, the system decides whether additional user authentication is necessary (see the sketch after this list).
- Deep learning. Deep learning is the next stage of machine learning evolution. In this case, a decision machine takes the previously made decisions and their outcomes into account when making each new suggestion.
- Visual decision modeling. AI decision-making serves as a reliable starting point, but decisions are still made by business owners and/or their employees. Visual decision modeling is a feature of decision intelligence software that shows human decision-makers the available options and their outcomes.
- Complex systems modeling. One of the benefits of decision intelligence is the ability to quickly build complex business logic, guided by the available data and the final goal.
- Predictive analytics. The decisions AI systems make are based on quite accurate predictions. The simplest example is price prediction and automated optimization in retail. In this case, the suggestions made by decision intelligence are based on the current and historical price fluctuations, predicted demand, upcoming trends, and tons of customer behavior insights.
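To ground the machine-learning bullet above, here is a minimal sketch of such a suspicious-IP rule; the network ranges and the decision logic are illustrative assumptions, and a real system would learn them from data:

```python
import ipaddress

# IP ranges previously observed for this user (illustrative values).
KNOWN_NETWORKS = ["203.0.113.0/24", "198.51.100.0/24"]

def needs_extra_auth(login_ip: str) -> bool:
    ip = ipaddress.ip_address(login_ip)
    return not any(ip in ipaddress.ip_network(net) for net in KNOWN_NETWORKS)

print(needs_extra_auth("192.0.2.44"))  # True: prompt additional authentication
```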
The Benefits of Decision Intelligence for Business
Below are five core benefits of decision intelligence solutions businesses can expect.
- Data-driven decisions. While 91% of companies believe that data-driven decision-making can boost their business growth, only 57% of them rely on their data. To get a competitive advantage, you have to correctly analyze the available data, make some predictions, and choose the best option. AI can take a better look at the data array and find invisible patterns and possible anomalies that can significantly influence the outcome.
- Faster decisions. According to the Gartner survey, 65% of decisions made lately are more sophisticated than before, as they require sign-offs from multiple stakeholders. AI-driven decision-making can keep pace with this fast-moving business realm, as such systems are able to process huge amounts of data almost instantly.
- Multiple problem-solving options. AI-powered decision-making algorithms can also be quite flexible and highlight several outcomes of a certain decision when one of the parameters is changed. This feature can help the business to make the best choice from a multitude of options, considering their current goals and growth strategies.
- Mistakes and biases elimination. There are at least nine types of cognitive biases (conservatism, base rate neglect, confirmation, sample size neglect, hindsight, anchoring and adjustment, mental accounting, availability, and framing) that can directly influence business decision outcomes. Decision intelligence helps avoid them all, since a correctly programmed algorithm takes an ultimately objective look at the available data.
However, do smart systems always make better decisions than humans? Although they are guided by large input data and aren’t prone to cognitive biases, they still need human verification, especially in the cases when the decision made can lead to conflicts of interest and values.
Decision Intelligence Use Cases across the Industries
Let’s find out how businesses from different industries use decision intelligence solutions to become more data-driven, sustainable, resilient, optimized, and cost-effective.
Banking and Finance
Morgan Stanley is a financial advisory company that helps its clients invest in a smarter way, supported by its in-house financial consultants and intelligent decision-making models. Its wealth management platform is powered by decision intelligence.
Proceeding from the customer’s goal (for example, investing in real estate or getting started with saving for college tuition for their children), the AI system suggests the winning strategies which are also verified by human consultants before being offered to the customer.
HSBC uses AI to enhance its investing practice. AiPEXAR, the bank's AI-Powered US Equity Indexes, uses a three-stage process for smart investing. First, it screens a number of US-based large- and mid-cap publicly listed businesses that show potential for growth.
Next, the system compiles a portfolio of around 250 market players based on their highest cumulative management, financial health, and news and information scores. Finally, AiPEXAR arranges the selected list of companies, assigning each its weight in the portfolio.
The ability to predict better prices for specific categories of goods, based on external factors, customer demand, trends, and sentiment, is one of the simplest yet most effective decision intelligence use cases for retailers and merchants.
For example, Remi AI is the software which helps retail businesses make better pricing decisions, tailor their pricing policy to their customers’ solvency and expectations, and thereby optimize their supply chain and make their revenue volumes more predictable.
Enlitic Cure is a data analysis and decision-making platform created to combine the capabilities of artificial intelligence and human doctors. The decision intelligence solution allows medical practitioners to analyze medical imaging reports faster, suggests diagnoses, and helps doctors prioritize cases to improve medical outcomes.
As for the use cases of decision intelligence in the energy sector, it’s worth mentioning Athena AI software. This system helps its users better manage their energy resources and makes automated decisions on energy and cost savings. It also predicts solar energy generation and optimizes batteries’ capacity accordingly.
One of the leading electricity distribution operators in Nordics reimagined its decision-making capabilities with a custom-made business analytics platform. With the help of their decision machine, the company can gather, structure, and analyze accurate business data from seven different sources, generate insightful reports, and make more optimized decisions.
Ecology problems, climate change, and the natural disasters they cause are global problems, but at the micro level they pose severe risks to businesses. One of the benefits of decision intelligence is the opportunity to predict possible risks, guided by historical and current data, and to suggest risk management, response, and mitigation strategies with the help of AI.
One Concern is an AI decision-making platform that allows businesses to analyze and stay aware of the possible risks of environmental disasters. With comprehensive climate data analysis, they can also make better decisions about their business strategies. For example, hospitality businesses can decide on a safer location to build a new hotel, taking into account not only weather conditions but also the market environment, the COVID-19 situation, and customer demand.
Using business intelligence solutions powered by AI is an opportunity for businesses to make better and faster decisions within crucial business processes. In this way, the companies can not only unlock the ultimate benefits of being data-driven, but also take into account the largest possible array of relevant information when deciding on the next step.
Infopulse can assist you in developing sophisticated business intelligence solutions and perfectly tailor your decision-making platform to your current business needs with long-run projections! | <urn:uuid:b9358b86-c7fd-4f47-8024-37c6cdbe8b78> | CC-MAIN-2024-38 | https://www.infopulse.com/blog/introduction-decision-intelligence | 2024-09-19T08:04:42Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651995.50/warc/CC-MAIN-20240919061514-20240919091514-00270.warc.gz | en | 0.938882 | 1,833 | 2.796875 | 3 |
As Communications Service Providers (CSPs) worldwide scale up the deployments of their 5G networks, they face strong pressure to optimize their Return on Investment (RoI), given the massive expenses they already incurred to acquire spectrum as well as the ongoing costs of infrastructure rollouts.
When the Apollo program started back in 1961, it also started a technology revolution that has brought us everything from calculators to computers and a lot of other stuff that we take for granted today.
A few years later, in 1965 to be exact, Gordon Moore, the co-founder of Intel, made a projection based on observations: that the number of components per integrated circuit would double every year. Ten years later, in 1975, he revised the forecast to doubling every two years. The period is often quoted as 18 months, due to Intel executive David House's prediction that chip performance would double every 18 months thanks to the combined increase in the number and speed of transistors.
As seen in the picture above, this prediction has come true with precision to this day and this principle is known as Moore’s Law.
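Moore's Law is easy to state as arithmetic: a component count doubles every fixed period. Here is a minimal sketch, where the starting chip and projection horizon are just illustrative choices:

```python
def projected_transistors(start_count: int, years: float, doubling_years: float) -> float:
    # Components per chip after `years`, doubling every `doubling_years`.
    return start_count * 2 ** (years / doubling_years)

# The Intel 4004 (1971) had roughly 2,300 transistors; projecting 40 years
# ahead at a two-year doubling cadence gives about 2.4 billion, in line
# with high-end CPUs around 2011.
print(f"{projected_transistors(2_300, 40, 2.0):,.0f}")
```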
But where does the FPGA come into the picture? The FPGA's history starts in the 1980s, when the two biggest FPGA vendors today, Altera (now Intel) and Xilinx, were both founded. So, the FPGA technology has now been around for more than 30 years and has been used to accelerate the development of chip-based solutions, offering a much faster development cycle than ASICs, with a much lower investment and risk thanks to its re-programmability. The FPGA in fact brings the software development methodology to hardware – you program, build, download and test – and you can then do it over and over again. This flexibility reduces time to market significantly, while adding the option to do updates in the field afterwards.
What brings it all together?
The amount of data we produce and handle is exploding, almost doubling every two years, and as you can see from the figure below, this development is predicted to continue over the years to come.
Due to this massive data growth, the increasing need to extract insights from data with low latency, and the use of this data in "real time", the requirement for compute power has not slowed down but keeps increasing. With CPUs struggling just to keep up with Moore's Law, it is even more challenging to follow the data explosion, and service providers have therefore been searching for alternative ways to cope with this challenge.
Graphics Processing Units (GPUs) have been used to offload the CPU for some compute-intensive work. GPUs have a huge number of small, relatively low-performance cores that can all operate in parallel, which is very well suited to some types of compute-intensive work like machine learning and artificial intelligence.
Different types of Network Processing Units (NPUs) have been in the game as well. These are purpose-built processors build with a focus on doing packet processing on networks. They are very fast at doing packet and flow-based processing, but not that general-purpose and therefore mainly used in networking equipment, which is also what they are built for.
A few years back the FPGA was identified as another alternative, as very flexible, highly-reprogrammable and power-efficient.
The figure below shows how the CPU, GPU, FPGA and ASIC technologies differ – from the CPU being highly flexible and quickly reprogrammable but not that efficient in compute power per watt, to ASICs being highly efficient in compute power per watt but not flexible at all, with no re-programmability and year-long development cycles. This comparison shows that the FPGA strikes the best balance: reasonably efficient in compute power per watt and highly re-programmable.
The FPGA has shown that it can provide significant acceleration and a high level of reconfigurability, and do all of that with efficient compute power per watt.
So where does all this lead and who is the winner?
Over the last few years, hyperscale data centers and service providers have wrestled with this issue while struggling to find solutions. Back in 2010, Microsoft's Azure team started to look at FPGA technology. In 2012, they set up a prototype with 60 FPGA boards to accelerate tasks such as Bing's search index ranking, and it was so promising that in 2013 they put 1,600 FPGA boards into the production network. With this they realized that the FPGA was genuinely useful for accelerating their workloads at the right point on the cost-versus-performance curve. In 2014, they introduced a new FPGA board where, based on experience with the first boards and some network issues they had encountered, they placed the FPGA inline (between the network and the CPU), enabling them to accelerate the networking part of their servers as well. From late 2015 they started putting an FPGA in every server, even before the software was ready, which only happened in 2016.
This case is just one of many showing that FPGAs are really gaining momentum as the preferred acceleration engine among cloud and service providers. Today Microsoft Azure, AWS and others offer FPGA-as-a-Service, giving customers the same type of acceleration these providers utilize internally.
Another important milestone in the FPGA story came in 2015, when Intel acquired Altera, one of the two major FPGA vendors in the industry. To me, this proves that Intel realized the FPGA is, and will remain, a key component in server infrastructure; since then, Intel has worked on enabling FPGA technology in servers, presenting Skylake processors with an FPGA embedded in the CPU package.
So, for me the time of the FPGA is here. The FPGA is the winning 'Reconfigurable Compute Platform', providing flexible acceleration at the right efficiency.
But be prepared for the next step in the evolution, as you never know what tomorrow brings given the speed of technological advancement. One thing is for sure: FPGA technology will be around in the years to come.
Creator: University of Pennsylvania
Category: Software > Computer Software > Educational Software
Topic: Law, Social Sciences
Tag: application, Discuss, process, processes, requirements
Availability: In stock
Price: USD 49.00
This course begins by exploring short-term and long-term entry into the United States. We will cover the various means of short-term and long-term entry, as well as the general application processes. We will also examine exclusion and deportation in the United States.
In particular, we will discuss how and why individuals may not be admitted into the United States and possible reasons for deportation or removal. Lastly, we will cover the process of how to become a United States citizen and the various requirements for naturalization. | <urn:uuid:1b907cd9-28ce-4ad1-86a2-b6f508fa0be2> | CC-MAIN-2024-38 | https://datafloq.com/course/nuts-and-bolts-of-u-s-immigration-law/ | 2024-09-13T04:57:31Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651507.67/warc/CC-MAIN-20240913034233-20240913064233-00870.warc.gz | en | 0.928055 | 168 | 2.59375 | 3 |
(SpectrumIEEE) A British scientist working in Australia has found a way to apply a three-dimensional code to a two-dimensional framework for quantum error correction.
When it comes to correcting errors arising during quantum operations, an error-correction method known as the surface code has drawn a lot of research attention. That's because of its robustness and the fact that it's well suited to being laid out on a two-dimensional plane.
For a quantum computer to tackle complicated tasks, error-correction codes need to be able to perform quantum gate operations; these are small logic operations carried out on qubit information that, when combined, can run algorithms. Classical computing analogues would be AND gates, XOR gates, and the like.
Benjamin Brown, an EQUS researcher at the University of Sydney's School of Physics, has developed a new type of non-Clifford-gate error-correcting method that removes the need for overhead-heavy distillation. A paper on this development appeared in Science Advances on 22 May.
Brown notes that reducing errors in quantum computing is one of the biggest challenges facing scientists before machines capable of solving useful problems can be built. “My approach to suppressing errors could free up a lot of the hardware from error correction and will allow the computer to get on with doing useful stuff.”
“Given it is understood to be impossible to use two-dimensional code like the surface code to do the work of a non-Clifford gate, I have used a three-dimensional code and applied it to the physical two-dimensional surface code scheme using time as the third dimension,” explains Brown. “This has opened up possibilities we didn’t have before.” | <urn:uuid:ac71afee-9855-4abd-a02a-28caac0c9534> | CC-MAIN-2024-38 | https://www.insidequantumtechnology.com/news-archive/novel-error-correction-code-developed-at-u-of-sydney-opens-a-new-approach-to-universal-quantum-computing/amp/ | 2024-09-13T05:13:07Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651507.67/warc/CC-MAIN-20240913034233-20240913064233-00870.warc.gz | en | 0.944858 | 366 | 3.4375 | 3 |
Data centers consume significant amounts of energy, not only to operate IT equipment but to keep that equipment cool. In many data centers, cooling consumes energy equal to roughly 40% of the IT load, and this consumption runs 24/7 to keep the facility up. That corresponds to a PUE of about 1.4, and some data centers have a PUE as high as 2.0.
There has been a movement to address the increasing energy consumption of data centers, and the focus is now on improving their power usage effectiveness (PUE). Related considerations include choosing more efficient fans, pumps and HVAC systems. Beyond the objective of maximizing data center uptime, the efficient use of energy is increasingly important.
Better Energy Consumption Using VFD
An effective solution for improving energy consumption is the implementation of VFDs. Variable Frequency Drives (VFDs) are controllers that regulate the speed of induction motors. Simply put, a VFD can control the rotation speed of fans, pumps and compressors, so that energy consumption is limited to what the equipment actually needs.
How Does a VFD Increase Energy Savings?
The speed and torque of an AC induction motor are directly proportional to the frequency and voltage of its power supply. If frequency and voltage are fixed, the motor output is fixed too: even when the demand on the fan it drives changes, the electricity drawn remains the same.
This is where VFDs come in handy. They intercept the incoming power supply and then modify the voltage and AC frequency according to the load requirements. This, in turn, adjusts the speed and torque of the motor to match the load demand without compromising output. Consequently, when demand changes, the speed of the fans keeps pace.
In terms of energy savings, running at reduced speed, or at optimum speed for shorter periods, can significantly lower energy consumption. A VFD can also allow machines to run above rated speed for particular periods; this should be decided case by case and done cautiously.
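The savings are larger than intuition suggests: per the fan affinity laws, a fan's power draw scales roughly with the cube of its speed. A quick back-of-the-envelope sketch (an illustration added here, not part of the original article):

```python
def fan_power_fraction(speed_fraction: float) -> float:
    """Fan affinity laws: flow ~ speed, pressure ~ speed^2, power ~ speed^3.
    Returns the fraction of full-load power drawn at a given speed fraction."""
    return speed_fraction ** 3

# Slowing a fan to 80% speed cuts power to about 51% of full load;
# at 50% speed it needs only about 12.5% of full-load power.
for s in (1.0, 0.8, 0.5):
    print(f"{s:.0%} speed -> {fan_power_fraction(s):.1%} power")
```

This cubic relationship is why even modest speed reductions from a VFD translate into substantial energy savings.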
When a VFD is not in play, energy is likely to be wasted, because the motor's fixed speed and torque are delivered constantly, whether they are needed or not.
Such situations are especially common with HVAC fans. When the demand for supply air drops, the volume of air blown remains constant, flooding the space with more cold air than its temperature setpoint requires. That is worse than useless: the space then has to shed the excess cold air through vents and valves, and much of the air supply is diverted to waste in the end.
In cooling systems, fans are also sized to cover the maximum cooling demand. However, maximum cooling is not the norm on a typical operating day; the actual requirement usually sits well below that limit. Here again, energy is wasted: the cooling system operates at its high default output, producing an air-supply surplus without gaining any efficiency.
Other Benefits of a VFD in a Data Center
Beyond lower energy consumption, integrating VFDs into a data center benefits the wider operation, since the ability to control speed affects other operational capabilities as well.
- VFDs Give More Accurate Process Control
When it comes to accurate process control, variable speed drives are the best option. Variable speed drives enabled by VFDs are the go-to AC motor control method, matching load requirements better than the alternatives.
In comparison, a full-voltage starter powers a motor straight to full speed, while a soft starter merely ramps it gradually up to full speed and back down at shutoff, with no speed control in between. Either mechanism is poorly suited to a cooling system, and both demand more energy at start-up and when shutting down.
Variable frequency drives adjust the motor to run at pre-programmed speeds over particular time intervals. The drive gauges output against the programmed input value and regulates speed according to the current load demand; the adjustment is automatic, returning the output to the estimated set point.
- Use of a VFD Can Increase Machine Life
A forceful start subjects the motor to an enormous amount of torque stress: the inrush current needed at start-up surges well beyond the full-load current. These start dynamics induce extra mechanical stress in the motor, and the electrical surge can cause further voltage issues.
To a certain extent, reduced-voltage soft starters do start a motor gradually. However, a drive that can be programmed to operate under specific preconditions mitigates stress far better: gradual, smooth starts reduce wear and tear. VFDs also matter for running specialized motion patterns; applied to a conveyor, a VFD enables smoother acceleration and deceleration, lowering the backlash on the conveyor during operation.
- Reducing Maintenance And Problems
The adverse effects of full-load inrush current are a significant source of issues in a cooling system. The instantaneous energy draw causes voltage sag on the power system, and the heavy current swing around a full-speed start affects other loads negatively as well. This is referred to as shock damage, and it inflicts enormous wear and tear on the motor.
Running at total load capacity is a recurring problem in maintaining a sustainable cooling system. In a data center especially, cooling systems are pushed to excessive motor output in pursuit of optimum conditions. But again, a full air supply does not translate into a sustainable temperature: running the cooling motors at full capacity ignores the optimum temperature setpoint, and the surplus air supply is simply streamed to the vents to correct the temperature afterwards.
Instant energization is a starting method that causes problems in the long run. It forces technicians to fret over motor sizing, because full-voltage starters create disturbances across the supply line; if a full-load start affects the whole utility system, it can result in downtime and create issues for end users.
It is an even bigger problem in the context of maintenance. Not only does it consume significant capacity in repairs and maintenance, it also adds operational checkpoints: instead of streamlining the data center operation, much of the focus goes to repairing fan motors.
Other Considerations For Data Center Energy Consumption
Power usage effectiveness is one of the regulatory-style parameters by which data centers control their energy usage, so data centers are motivated to lower their PUE. At the forefront of this initiative is the use of VFDs to realize energy-efficient motors.
However, there are other considerations for keeping energy output low. Initial data center planning should integrate a proper design layout for the IT hardware and the fans; it is essential to consider all components when gauging the efficiency of a cooling system.
Beyond that, integrating monitoring capabilities is also a key consideration. As data centers develop into higher-density facilities, heat loads can no longer be managed by the cooling system alone, and proper supplemental mechanisms such as monitoring technologies are paramount.
AKCP HVAC Maintenance Monitoring provides a comprehensive solution to air handling unit monitoring. Sound control of temperature, humidity, filtration, and building pressure is crucial as air handling units impact a data center’s energy use.
AKCP air handling unit monitoring system can diagnose faults such as:
- Compressor short cycling
- Compressor overheating
- Over or under pressure
- Dirty air filters
Furthermore, AKCP air handling unit monitoring supports system control as well. For example, using a VFD to control motor speed so that the chiller's hydraulic energy better matches the demand load reduces electrical energy consumption. AKCP can interface with VFDs through RS485, enabling better speed control based on data input from AKCP wireless sensors; a minimal sketch of this kind of control loop follows the list below.
These monitoring capacities will bring forth:
- Better diagnosis of common faults by technicians remotely
- Lower overhead but increased efficiency on the cooling system
- Improved monitoring of air handling units
- Enhanced monitoring of chilled water cooling system
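As a concrete illustration of sensor-driven VFD speed control over RS485, here is a minimal, hypothetical sketch using the minimalmodbus Python library. The register address, scaling, slave ID and serial port are placeholders, not values from AKCP or any drive vendor's documentation; a real deployment must use the drive's own Modbus register map:

```python
import minimalmodbus  # pip install minimalmodbus

VFD_SLAVE_ID = 1
SPEED_REGISTER = 0x2001   # placeholder: commanded speed, in 0.1 Hz units
TEMP_SETPOINT_C = 24.0

vfd = minimalmodbus.Instrument("/dev/ttyUSB0", VFD_SLAVE_ID)  # RS485 adapter
vfd.serial.baudrate = 9600

def adjust_fan_speed(room_temp_c: float) -> None:
    """Naive proportional control: raise the commanded fan speed as the
    room runs hotter than the setpoint, clamped to the drive's limits."""
    error = room_temp_c - TEMP_SETPOINT_C
    speed_hz = min(50.0, max(20.0, 35.0 + 5.0 * error))
    vfd.write_register(SPEED_REGISTER, int(speed_hz * 10))

# In practice, room_temp_c would come from a wireless temperature sensor.
adjust_fan_speed(room_temp_c=26.5)
```

A production loop would add fault handling, ramp limits and hysteresis, but the principle is the same: measured conditions drive the commanded speed instead of running the fan flat out.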
VFD For Better Cooling Efficiency
There is much focus on future-proofing data centers, and as an integral part of the operation, cooling systems must evolve to keep pace; they should be scalable and flexible. VFDs are a valuable element of future-proofing: developed for HVAC applications, they deliver solid, high-efficiency performance. Their inherent ability to adjust to the load will prove valuable as the operational demands of data centers keep growing.
Without access control, it is practically impossible to guarantee that features are used only by their intended users. If a problem occurs, the person responsible for the system cannot trace who caused it. The lack of permission management gives users access to services they do not need, making room for improper access and possible application failures. This can result in data breaches that cost millions of dollars, along with reputational damage.
Identity roles are responsible for cataloging users within a system so that everyone who accesses it can be properly authenticated, authentication being one of the three main pillars of information security. For better access control, it is important that identity roles are clear and allow easy identification of the individual requesting access.
It is critical for information security to control what a particular user can access relative to what they actually need. The ideal is the maxim of "least privilege": through the management of permission groups, a person receives authorization for, and sees on screen, only what has been explicitly allowed.
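As a rough illustration of least privilege (a hypothetical sketch, not drawn from any specific product), permission groups can be modeled as sets, with a user's effective rights being the union of their groups' grants and nothing more:

```python
# Hypothetical permission-group model illustrating least privilege.
GROUP_PERMISSIONS = {
    "finance": {"invoices.read", "invoices.write"},
    "support": {"tickets.read", "tickets.write"},
    "auditor": {"invoices.read", "tickets.read"},
}

def effective_permissions(user_groups: list[str]) -> set[str]:
    """A user's rights are the union of their groups' grants, nothing more."""
    perms: set[str] = set()
    for group in user_groups:
        perms |= GROUP_PERMISSIONS.get(group, set())
    return perms

def can(user_groups: list[str], permission: str) -> bool:
    return permission in effective_permissions(user_groups)

assert can(["auditor"], "invoices.read")
assert not can(["auditor"], "invoices.write")  # auditors stay read-only
```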
One of the critical aspects of cybersecurity for businesses today is assessing organizational maturity against the fundamentals of IAM. Such an assessment provides an overview of your organization's current posture regarding the security of its digital assets and infrastructure, and several factors feed into it.
By implementing a reliable IAM program, a company can strike a balance between security and usability, reducing risk while enabling its people (customers and employees alike) to use the services they need, whenever they need them, without taking on too much digital risk. In light of the advantages an access management system brings and the failures it prevents, it is highly recommended that it receive due attention: doing so can prevent data breaches and the financial and reputational damage they cause your company.
In 2001, Dale Maw, the recently hired regional director of information technology and telecommunications for the Niagara Health System, faced a monumental task.
By 2004, Maw and his IT team had to connect more than 1,500 desktops, 15 domains and 160 servers across eight hospitals that had no existing WAN connections to one another. On top of that, the eight Niagara Region hospitals, which were being amalgamated as part of the provincial government’s plan to improve the health system, had a potpourri of systems that were incompatible with one another.
Despite the difficult situation they faced, Maw and his team managed to complete the project this June, on schedule. The first step in getting the eight hospitals talking to one another was to set up a WAN. When Maw came on board, NHS had just begun setting up point-to-point wireless connections between the eight sites to create a metro area network.
NHS looked at services from telecom providers, but the providers couldn’t offer the bandwidth NHS wanted at a reasonable price, Maw explained. “We wanted 100Mbps between our major facilities,” he said. “Our data centre is physically in a different building, so we wanted high-speed connections.” The next step in the project was to assess the IT infrastructure at each of the eight hospitals and figure out how to connect the disparate systems. Ultimately, the senior engineer on the project recommended NHS set up a completely new IT installation.
“Each of the sites had between zero and 20 years of experience with a variety of platforms,” Maw noted. Some were running Unix on AS 400s. One of the smaller hospitals was running an operating system from a company that had gone out of business.
Another was running an old Microsoft NT 3.5.1 platform. And yet another had a completely proprietary system running on old Data General gear. “It was quite a mess, to be honest,” Maw said. NHS ultimately decided to install a Microsoft Windows Server 2000 platform to bring the hospitals together.
Most Ontario hospitals run Microsoft environments and the Ontario Ministry of Health is also a large Microsoft user, Maw noted, and since NHS had to share information with facilities across the province, it made sense for NHS to have the same tools as most of the health community. “For healthcare, Windows was it,” he said. “We had some Linux and other operating systems running in-house, but when it came right down to it, we couldn’t afford to support multiple operating systems.”
The healthcare industry is a major part of Microsoft Canada’s business, noted Jordan Chrysafidis, director, Windows Server system. One of the keys to Microsoft’s success in the health sector, he said, has been to recognize that hospitals tend to be very cost-conscious.
“Typically they end up running a lot more legacy software and hardware than we’d see in other verticals,” he said. After setting up the Microsoft servers, the NHS IT staff established a new domain in Active Directory and began migrating each department to the new domain.
The next step was to replace all of the hospitals’ desktops with new machines running Windows 2000. Introducing hospital staff to the new operating system and applications was actually more difficult than moving the data, Maw said. “We focused more on the end user community,” he said. “Whether they believe that or not, I don’t know. We spent a lot of time trying to understand where they were keeping their data and what they were using the tools for.”
From a server pool of 160, NHS is now down to around 100 servers, Maw said. Despite the move to Windows Server, most of the old systems are still in place, because the information can’t easily be ported to the new environment.
“In some cases we had to wrap new technology around the older technology,” he said. “For example, at one of our sites, it won’t run on anything other than Windows 95. So we’ve used some software tools to set those Windows 95 boxes up in the data centre and we have the end user community sort of remote control into those boxes.”
Consolidating the eight hospitals on one network will result in savings of over $500,000 by concentrating IT staff and reducing the number of servers, Maw said. It should also improve the way hospitals can serve patients, he explained.
“It’s a huge value for physicians and patients to be in Port Colborne, Ont., go to the hospital there and then when they visit their home doctor in Niagara Falls, Ont., that doctor can see the results from the Port Colborne visit.” | <urn:uuid:0b317940-ff2e-4717-b861-2a7f5a5d39c6> | CC-MAIN-2024-38 | https://www.itworldcanada.com/article/revamped-setup-heals-tech-woes/17017 | 2024-09-16T22:57:06Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651714.51/warc/CC-MAIN-20240916212424-20240917002424-00570.warc.gz | en | 0.973286 | 984 | 2.546875 | 3 |
The 3D printing concept is inherently dramatic and the ideas are creative, but ironically, 3D printing sometimes gets a bit under-reported, simply because things always seem to be running at such a high pitch. Let’s catch up.
MX3D is combining robotics and 3D printing and plans to build a 24-foot long pedestrian bridge over a canal in Amsterdam. This InformationWeek story has a couple of videos. In essence, the company will use additive printing technology to “draw in midair” and finish the bridge in about two months. That, according to writer David Wagner, is a competitive timeframe.
Focusing on the great near-term promise of the technology, CloudDDM co-founder Rick Smith starts his column at Forbes with two vignettes focusing on 3D and GE: One is about an Indonesian man without industrial manufacturing experience who nonetheless used 3D printing to win a competition centered on redesigning the bracket that holds a jet engine to the wing. The second focuses on the use of 3D printing to reduce from 21 to one the number of parts needed to create a jet engine fuel injection system.
It should be noted that the new products are better: The wing/engine bolt is 83 percent lighter than the part it replaced. The fuel injection system is five times stronger and increases fuel efficiency by 15 percent over the existing type.
Lots of less esoteric and intensely functional uses of 3D show up in medicine. For instance, it is used to improve joint replacement. Traditionally, six knee designs have been available for use. In a 3D printing world, it is possible to customize knees to the patient. The San Diego Union-Tribune says that in similar areas, such as hearing aids and dental implants, 3D approaches are both faster and more flexible. 3D printing of human tissue and organs is technically here. It is being used for training and testing drugs, and seems likely to be used directly on people in the near future.
Finally, 3D offers a story or two that are especially cool and dramatic — even in a landscape of stories with high cool and dramatic ratios. 3D Print.com reports that the “Open Source Nano Replicator Initiative” is looking at technology that will enable 3D printers to use single atoms as their source material. It is worth noting that the organization is trying to raise $500 million on Indiegogo. That seems as close to science fiction as the project itself.
That said, the stakes are pretty high, according to Eddie Krassenstein:
If successful, a nano-replicator would mean that any free man, woman, or child would have the ability to 3D print anything from a turkey sandwich to a human eye ball, using the smallest constituent unit of matter, the atom as the building blocks. With every solid, liquid, gas and plasma in the universe made up of atoms, virtually anything could be printed on a machine of this caliber. The only obstacle that remains in our way — albeit a huge one — is the technology that will allow this to move forward.
The mind-bending nature of 3D printing makes it a very entertaining category. The reality is that very smart people are doing very important things. Many of these are good. Some, like figuring out how to 3D print weapons, aren’t. The bottom line, however, is that the category has perhaps the greatest potential to change society. And that’s saying a lot.
Carl Weinschenk covers telecom for IT Business Edge. He writes about wireless technology, disaster recovery/business continuity, cellular services, the Internet of Things, machine-to-machine communications and other emerging technologies and platforms. He also covers net neutrality and related regulatory issues. Weinschenk has written about the phone companies, cable operators and related companies for decades and is senior editor of Broadband Technology Report. He can be reached at email@example.com and via twitter at @DailyMusicBrk. | <urn:uuid:3443fdc8-a026-47e0-afe3-5e5d4e03463a> | CC-MAIN-2024-38 | https://www.itbusinessedge.com/it-management/never-a-dull-moment-in-the-3d-printing-sector/ | 2024-09-19T11:05:58Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652028.28/warc/CC-MAIN-20240919093719-20240919123719-00370.warc.gz | en | 0.952288 | 811 | 2.703125 | 3 |
At a time when business models are becoming more and more virtual, reliable data has become a cornerstone of successful organizations. Reliable data serves as the bedrock of informed decision-making, enabling companies to gain valuable insights, identify emerging trends, and make strategic choices that drive growth and success. But what exactly is reliable data, and why is it so crucial in today’s business landscape?
Reliable data refers to information that is accurate, consistent, and trustworthy. It encompasses data that has been collected, verified, and validated using robust methodologies, ensuring its integrity and usability. Reliable data empowers businesses to go beyond assumptions and gut feelings, providing a solid foundation for decision-making processes.
Understanding the significance of reliable data and its implications can be a game-changer for businesses of all sizes and industries. It can unlock a wealth of opportunities, such as optimizing operations, improving customer experiences, mitigating risks, and identifying new avenues for growth. With reliable data at their disposal, organizations can navigate the complexities of the modern business landscape with confidence and precision.
What is reliable data?
Reliable data is information that can be trusted and depended upon to accurately represent the real world. It is obtained through reliable sources and rigorous data collection processes. When data is considered reliable, it means that it is credible, accurate, consistent, and free from bias or errors.
One major advantage of reliable data is its ability to inform decision-making. When we have accurate and trustworthy information at our fingertips, we can make better choices. It allows us to understand our circumstances, spot patterns, and evaluate potential outcomes. With reliable data, we can move from guesswork to informed decisions that align with our goals.
Planning and strategy also benefit greatly from reliable data. By analyzing trustworthy information, we gain insights into market trends, customer preferences, and industry dynamics. This knowledge helps us develop effective plans and strategies. We can anticipate challenges, seize opportunities, and position ourselves for success.
Efficiency and performance receive a boost when we work with reliable data. With accurate and consistent information, we can optimize processes, identify areas for improvement, and streamline operations. This leads to increased productivity, reduced costs, and improved overall performance.
Risk management becomes more effective with reliable data. By relying on accurate information, we can assess potential risks, evaluate their impact, and devise strategies to mitigate them. This proactive approach allows us to navigate uncertainties with confidence and minimize negative consequences.
Reliable data also fosters trust and credibility in our professional relationships. When we base our actions and presentations on reliable data, we establish ourselves as trustworthy partners. Clients, stakeholders, and colleagues have confidence in our expertise and the quality of our work.
How do you measure data reliability?
We emphasized the importance of data reliability for your business, but how much can you trust the data you have?
You need to ask yourself this question in any business. Almost 90% of today's business depends on examining data well, and starting from wrong information will cause your long-planned enterprise to fail. Therefore, to measure data reliability, you need to make sure the data you have meets certain standards.
At the heart of data reliability lies accuracy—the degree to which information aligns with the truth. To gauge accuracy, several approaches can be employed. One method involves comparing the data against a known standard, while statistical techniques can provide valuable insights.
By striving for accuracy, we ensure that the data faithfully represents the real world, enabling confident decision-making.
A reliable dataset should encompass all the pertinent information required for its intended purpose. This attribute, known as completeness, ensures that no crucial aspects are missing. Evaluating completeness may involve referencing a checklist or employing statistical techniques to gauge the extent to which the dataset covers relevant dimensions.
By embracing completeness, we avoid making decisions based on incomplete or partial information.
Consistency examines the uniformity of data across various sources or datasets. A reliable dataset should exhibit coherence and avoid contradictory information. By comparing data to other datasets or applying statistical techniques, we can assess its consistency.
Striving for consistency enables us to build a comprehensive and cohesive understanding of the subject matter.
Guarding against bias is another critical aspect of measuring data reliability. Bias refers to the influence of personal opinions or prejudices on the data. A reliable dataset should be free from skewed perspectives and impartially represent the facts. Detecting bias can be achieved through statistical techniques or by comparing the data to other trustworthy datasets.
By recognizing and addressing bias, we ensure a fair and objective portrayal of information.
Even the most carefully curated datasets can contain errors. Evaluating the error rate allows us to identify and quantify these inaccuracies. It involves counting the number of errors present or applying statistical techniques to uncover discrepancies.
Understanding the error rate helps us appreciate the potential limitations of the data and make informed judgments accordingly.
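To make these checks concrete, here is a small illustrative sketch (hypothetical field names and thresholds; pandas is used for convenience) that scores a dataset on completeness, consistency and error rate:

```python
import pandas as pd

# Hypothetical customer records; the age of -5 is a deliberate error.
df = pd.DataFrame({
    "customer_id": [1, 2, 2, 3, 4],
    "email": ["a@x.com", None, "b@x.com", "c@x.com", "d@x.com"],
    "age": [34, 28, 28, -5, 51],
})

# Completeness: share of non-missing values per column.
completeness = 1 - df.isna().mean()

# Consistency: share of rows not duplicated on the key field.
consistency = 1 - df.duplicated(subset="customer_id").mean()

# Error rate: share of values failing a validity rule (age within 0..120).
error_rate = (~df["age"].between(0, 120)).mean()

print(completeness)
print(f"consistency = {consistency:.0%}, error rate = {error_rate:.0%}")
```

Accuracy and bias are harder to automate, since both require an external reference, such as a trusted standard dataset, to compare against.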
Considerations beyond the methods
While the aforementioned methods form the foundation of measuring data reliability, there are additional factors to consider:
- Source of the data: The credibility and reliability of data are influenced by its source. Data obtained from reputable and authoritative sources is inherently more trustworthy than data from less reputable sources. Being mindful of the data’s origin enhances our confidence in its reliability
- Method of data collection: The method employed to collect data impacts its reliability. Data collected using rigorous and scientifically sound methodologies carries greater credibility compared to data collected through less meticulous approaches. Awareness of the data collection method allows us to evaluate its reliability accurately
- Quality of data entry: Accurate and careful data entry is vital to maintain reliability. Data that undergoes meticulous and precise entry procedures is more likely to be reliable than data that is carelessly recorded or contains errors. Recognizing the importance of accurate data entry safeguards the overall reliability of the dataset
- Storage and retrieval of data: The way data is stored and retrieved can influence its reliability. Secure and consistent storage procedures, coupled with reliable retrieval methods, enhance the integrity of the data. Understanding the importance of proper data management ensures the long-term reliability of the dataset
What are the common data reliability issues?
Various common issues can compromise the reliability of data, affecting the accuracy and trustworthiness of the information being analyzed. Let’s delve into these challenges and explore how they can impact the usability of reliable data.
One prevalent issue is the presence of inconsistencies in reliable data, which can arise when there are variations or contradictions in data values within a dataset or across different sources. These inconsistencies can occur due to human errors during data entry, differences in data collection methods, or challenges in integrating data from multiple systems. When reliable data exhibits inconsistencies, it becomes difficult to obtain accurate insights and make informed decisions.
Reliable data may also be susceptible to errors during the data entry process. These errors occur when incorrect or inaccurate information is entered into a dataset. Human mistakes, such as typographical errors, misinterpretation of data, or incorrect recording, can lead to unreliable data. These errors can propagate throughout the analysis, potentially resulting in flawed conclusions and unreliable outcomes.
The absence of information or values in reliable data, known as missing data, is another significant challenge. Missing data can occur due to various reasons, such as non-response from survey participants, technical issues during data collection, or intentional exclusion of certain data points. When reliable data contains missing values, it introduces biases, limits the representativeness of the dataset, and can impact the validity of any findings or conclusions drawn from the data.
Another issue that affects reliable data is sampling bias, which arises when the selection of participants or data points is not representative of the population or phenomenon being studied. Sampling bias can occur due to non-random sampling methods, self-selection biases, or under or over-representation of certain groups. When reliable data exhibits sampling bias, it may not accurately reflect the larger population, leading to skewed analyses and limited generalizability of the findings.
Measurement errors can also undermine the reliability of data. These errors occur when there are inaccuracies or inconsistencies in the instruments or methods used to collect data. Measurement errors can stem from faulty measurement tools, subjective interpretation of data, or inconsistencies in data recording procedures. Such errors can introduce distortions in reliable data and undermine the accuracy and reliability of the analysis.
Ensuring the security and privacy of reliable data is another critical concern. Unauthorized access, data breaches, or mishandling of sensitive data can compromise the integrity and trustworthiness of the dataset. Implementing robust data security measures, and privacy safeguards, and complying with relevant regulations are essential for maintaining the reliability of data and safeguarding its confidentiality and integrity.
Lastly, bias and prejudice can significantly impact the reliability of data. Bias refers to systematic deviations of data from the true value due to personal opinions, prejudices, or preferences. Various types of biases can emerge, including confirmation bias, selection bias, or cultural biases. These biases can influence data collection, interpretation, and analysis, leading to skewed results and unreliable conclusions.
Addressing these common challenges and ensuring the reliability of data requires implementing robust data collection protocols, conducting thorough data validation and verification, ensuring quality control measures, and adopting secure data management practices. By mitigating these issues, we can enhance the reliability and integrity of data, enabling more accurate analysis and informed decision-making.
How to create business impact with reliable data
Leveraging reliable data to create a significant impact on your business is essential for informed decision-making and driving success. Here are some valuable tips on how to harness the power of reliable data and make a positive difference in your organization:
Instead of relying solely on intuition or assumptions, base your business decisions on reliable data insights. For example, analyze sales data to identify trends, patterns, and opportunities, enabling you to make informed choices that can lead to better outcomes.
Determine the critical metrics and key performance indicators (KPIs) that align with your business goals and objectives. For instance, track customer acquisition rates, conversion rates, or customer satisfaction scores using reliable data. By measuring performance accurately, you can make data-driven adjustments to optimize your business operations.
Utilize reliable data to uncover inefficiencies, bottlenecks, or areas for improvement within your business processes. For example, analyze production data to identify areas where productivity can be enhanced or costs can be reduced. By streamlining operations based on reliable data insights, you can ultimately improve the overall efficiency of your business.
Reliable data provides valuable insights into customer behavior, preferences, and satisfaction levels. Analyze customer data, such as purchase history or feedback, to personalize experiences and tailor marketing efforts accordingly. By understanding your customers better, you can improve customer service, leading to enhanced satisfaction and increased customer loyalty.
Analyzing reliable data allows you to stay ahead of the competition by identifying market trends and anticipating shifts in customer demands. For instance, analyze market data to identify emerging trends or changing customer preferences. By leveraging this information, you can make strategic business decisions and adapt your offerings to meet the evolving needs of the market.
Reliable data is instrumental in identifying and assessing potential risks and vulnerabilities within your business. For example, analyze historical data and monitor real-time information to detect patterns or indicators of potential risks. By proactively addressing these risks and making informed decisions, you can implement risk management strategies to safeguard your business.
Utilize reliable data to target your marketing and sales efforts more effectively. For instance, analyze customer demographics, preferences, and buying patterns to develop targeted marketing campaigns. By personalizing communications and optimizing your sales strategies based on reliable data insights, you can improve conversion rates and generate higher revenue.
Reliable data offers valuable insights into customer feedback, market demand, and emerging trends. For example, analyze customer surveys, reviews, or market research data to gain insights into customer needs and preferences. By incorporating these insights into your product development processes, you can create products or services that better meet customer expectations and gain a competitive edge.
Cultivate a culture within your organization that values data-driven decision-making. Encourage employees to utilize reliable data in their day-to-day operations, provide training on data analysis tools and techniques, and promote a mindset that embraces data-driven insights as a critical factor for success. By fostering a data-driven culture, you can harness the full potential of reliable data within your organization.
Regularly monitor and evaluate the impact of your data-driven initiatives. Track key metrics, analyze results, and iterate your strategies based on the insights gained from reliable data. By continuously improving and refining your data-driven approach, you can ensure ongoing business impact and success.
By effectively leveraging reliable data, businesses can unlock valuable insights, make informed decisions, and drive positive impacts across various aspects of their operations. Embracing a data-driven mindset and implementing data-driven strategies will ultimately lead to improved performance, increased competitiveness, and sustainable growth. | <urn:uuid:4fa5ac84-67d9-4c10-9399-c95e166a53f1> | CC-MAIN-2024-38 | https://dataconomy.com/2023/07/05/what-is-reliable-data-and-benefits-of-it/ | 2024-09-08T12:02:20Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651002.87/warc/CC-MAIN-20240908115103-20240908145103-00470.warc.gz | en | 0.923146 | 2,642 | 2.78125 | 3 |
Occam’s Razor - The Simplest Answer is the Correct Answer
William of Ockham (c. 1287–1347), Medieval philosopher
The term, “Occam’s Razor” ( or Ockham’s Razor) refers to distinguishing between two hypotheses, either by “shaving away” unnecessary assumptions or cutting apart two similar conclusions.
Ockham did not invent this principle, but the “razor”, and its association with him, may be due to the frequency and effectiveness with which he used it. Occam stated the principle in various ways, but the most popular version is “The simplest answer is the correct answer”.
The razor’s statement is basically: “Other things being equal, simpler explanations are generally better than more complex ones.”
This information on Occam’s Razor is derived from: https://en.wikipedia.org/wiki/Occam’s_razor#William_of_Ockham
In spite of its apparent complexity, a blockchain is just another kind of database for recording transactions, one that is copied to all of the machines in a participating network. Data in the blockchain is saved in immutable structures named ‘blocks’. The major components of a block are:
– The header, which contains metadata such as the block’s reference number, the time the block was generated and a link back to the previous block.
– The content, usually a confirmed list of digital assets and instruction records, such as transactions made, their amounts and the addresses of the parties to those transactions.
From the latest block, you can reach all earlier blocks linked together in the chain, so a blockchain database holds the entire history of all assets and instructions performed since the very first one, making its data valid and independently auditable.
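A minimal sketch of this structure (illustrative only; real blockchains add consensus rules, Merkle trees and much more) shows how each header embeds the hash of the previous block, so tampering with any historical block breaks every later link:

```python
import hashlib, json, time

def make_block(index: int, transactions: list, prev_hash: str) -> dict:
    """Build a block whose hash covers its header and content."""
    header = {"index": index, "timestamp": time.time(), "prev_hash": prev_hash}
    block = {"header": header, "transactions": transactions}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

genesis = make_block(0, [], prev_hash="0" * 64)
block1 = make_block(1, [{"from": "alice", "to": "bob", "amount": 5}],
                    prev_hash=genesis["hash"])

# Any change to `genesis` would alter its hash, which would no longer
# match block1["header"]["prev_hash"], exposing the tampering.
```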
Because the number of members increases every day, it becomes harder for malicious participants to overcome the verification activities of the majority. | <urn:uuid:3094de7a-c008-472d-8bc1-320d1fe72914> | CC-MAIN-2024-38 | https://latesthackingnews.com/2017/12/18/blockchain-technology-behind-bitcoin/ | 2024-09-08T12:46:33Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651002.87/warc/CC-MAIN-20240908115103-20240908145103-00470.warc.gz | en | 0.951758 | 195 | 3.453125 | 3 |
Have you ever thought about how a company ensures only the right people can access its data and resources?
The solution is Identity Governance (IG), a framework for systematically managing and controlling digital user access within an organization, ensuring security and compliance.
IG assists in identifying, authenticating, and authorizing users during runtime so that access rights are valid according to business requirements and regulatory compliance.
It enables organizations to know who has access to what, pinpoint risks, and prevent unauthorized access. These IG solutions allow companies to define and enforce identity and access management (IAM) policies, thus reducing the risks associated with security non-compliance.
In this article, we will discuss Identity Governance, how it differs from IAM, its core principles, the key components of an IG solution, and the benefits that effective IG can bring. We will also discuss how the right tools can help strengthen your IG strategy.
What is Identity Governance?
Identity Governance (IG) refers to the set of policies, processes, and tools organizations use to manage and control who has access to their technology resources.
It ensures that only the right individuals are granted the appropriate access to systems, data, and applications, in line with regulatory requirements and organizational policies.
IG is a strategic approach to maintaining security and compliance by defining and enforcing access rights within an organization.
It keeps watch over who is accessing what, and why (not just when), thereby reducing the risk of data breaches and ensuring compliance with regulations such as GDPR or HIPAA.
How does IG differ from Identity and Access Management (IAM)?
Identity and Access Management is a framework of business policies, processes, and technologies that facilitates the management of digital identities. It allows IT managers to manage user access to critical information within an organization. IAM systems include:
- Single sign-on (SSO)
- Two-step verification (2SV)
- Multifactor authentication (MFA)
- Privileged Access Management (PAM)
These systems store identity data securely and enforce rules around IG, ensuring only necessary, relevant information is shared.
Moreover, Identity Governance falls under the umbrella of Identity and Access Management. As a subset of IAM, IG focuses on specific areas such as:
- Managing compliance
- Adhering to risk policies
- Controlling and enforcing policies
While IAM encompasses the overall framework for:
- Creating digital identities
- Administering and validating these identities
- Authorizing access
Furthermore, IG involves policies and oversight mechanisms to ensure identities and access rights comply with regulations and organizational policies, thereby mitigating risks.
Core Principles of Identity Governance
IG is formed on four key principles: visibility, control, compliance, and automation. These principles are crucial for organizations to manage user access to critical information and comply with security and regulatory standards. Here, we've provided detailed information on the Core Principles of Identity Governance:
Visibility: Firstly, visibility means knowing who has access to which resources in the organization. Like an organized library where every book is catalogued and tracked, proper visibility helps businesses monitor access rights, spot security risks, and make informed decisions.
This transparency reduces unauthorized access and ensures all user access is legitimate.
Control: Secondly, control involves regulating access to sensitive information. Think of it as having secure locks and keys for different rooms in a building.
By using strict policies, organizations can ensure that only authorized users access specific resources, minimizing the risk of data breaches and protecting critical assets.
Compliance: Next, compliance means following regulatory requirements and organizational policies. With laws like GDPR and HIPAA, maintaining compliance is essential. Ensuring access management practices meet these standards helps organizations avoid legal penalties and maintain strong security.
Compliance is like following traffic rules to avoid accidents and keep things running smoothly.
Automation: Lastly, automation streamlines IG processes by reducing manual work. Like an automated factory line, automated systems in identity governance handle tasks like user provisioning and access reviews more efficiently, leading to quicker responses and fewer errors.
Taken together, visibility, control, compliance, and automation create a secure and efficient IG framework. These principles help organizations manage access, reduce risks, and meet security and compliance goals.
Key Components of an Identity Governance Strategy
1. User Lifecycle Management
Effectively managing user identities throughout their entire lifecycle, from onboarding to offboarding is crucial. For instance, when new employees join the company, they require access to specific systems and data to carry out their responsibilities.
User lifecycle management ensures that their access is correctly established from the start, and continuously adjusted as they change roles or depart from the organization.
Provisioning Access: Approving access for new users based on their organizational roles and responsibilities. This process creates accounts and assigns relevant access, ensuring new employees, contractors, or partners can quickly get started.
For instance, when Joyce is onboarded as a marketing manager, the provisioning system creates her account and grants her access to the necessary marketing databases and tools.
Updating Access: If Joyce gets promoted to head of marketing, her access needs to change. Updating ensures she can access higher-level reports and sensitive data necessary for her new role.
De-provisioning Access: When an IT contractor, John, completes his assignment, de-provisioning promptly revokes his access to company systems and networks, preventing unauthorized access.
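The joiner/mover/leaver flow above can be sketched in a few lines (a hypothetical in-memory model; a real implementation would integrate with an HR system and a directory or a provisioning protocol such as SCIM):

```python
# Hypothetical in-memory directory illustrating joiner/mover/leaver.
DIRECTORY: dict[str, dict] = {}

ROLE_SYSTEMS = {
    "marketing_manager": {"crm", "marketing_db"},
    "head_of_marketing": {"crm", "marketing_db", "exec_reports"},
}

def provision(user: str, role: str) -> None:
    """Joiner: create the account and grant role-appropriate access."""
    DIRECTORY[user] = {"role": role, "systems": set(ROLE_SYSTEMS[role])}

def update_role(user: str, new_role: str) -> None:
    """Mover: replace old entitlements with those of the new role."""
    DIRECTORY[user] = {"role": new_role, "systems": set(ROLE_SYSTEMS[new_role])}

def deprovision(user: str) -> None:
    """Leaver: revoke all access promptly."""
    DIRECTORY.pop(user, None)

provision("joyce", "marketing_manager")
update_role("joyce", "head_of_marketing")  # promotion widens her access
deprovision("john")                        # contract ended; access revoked
```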
2. Role-Based Access Control (RBAC):
Policies are based on user roles and permissions. RBAC is an access control framework that grants users access based on their organizational role. Setting up permissions according to roles simplifies access management and ensures users only see information relevant to their role.
Defining User Roles: This is the first step in applying RBAC. Defining roles within the organization ensures each role corresponds to specific job functions and duties. As a result, users have only minimal access according to their responsibilities.
For example, a company's role might be defined as "Sales Representative," allowing the employee to access customer data, sales tools, and reporting systems.
Assigning Role-Based Access: After defining roles, access is assigned accordingly. This makes managing and monitoring access rights easier, as changing a user’s access only requires altering their role instead of modifying individual permissions.
For example, when Max joins as a sales representative, she automatically gets access to the CRM and sales analytics. This role-based system ensures she has the right tools for her job without manual intervention.
Limiting Unnecessary Access: RBAC restricts access to the information and systems users need for their specific roles, following the principle of least privilege. This means users have only the necessary rights, reducing the risk of data breaches and security threats.
For example, following the principle of least privilege, only senior management and HR can access employee salary information, ensuring sensitive data does not unintentionally leak.
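In code, the core of RBAC is a role-to-permissions mapping consulted on every access decision. A minimal illustration (hypothetical roles and permission names):

```python
ROLE_PERMISSIONS = {
    "sales_rep": {"crm.read", "crm.write", "sales_analytics.read"},
    "hr": {"employee_records.read", "salaries.read"},
}

USER_ROLES = {"max": "sales_rep", "dana": "hr"}

def check_access(user: str, permission: str) -> bool:
    """Grant access only if the user's role carries the permission."""
    role = USER_ROLES.get(user)
    return permission in ROLE_PERMISSIONS.get(role, set())

assert check_access("max", "crm.read")
assert not check_access("max", "salaries.read")  # outside the sales role
```

Changing what Max can do then means changing his role assignment, not editing individual permissions.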
3. Access Reviews and Certifications:
Verifying user access needs and privileges regularly. Regular access reviews help ensure that users still require the access they have been granted and identify any unnecessary permissions that should be revoked to enhance security.
Conducting Periodic Reviews: Performing access reviews periodically, such as every three or twelve months, ensures that access rights remain valid for the assigned roles. Supervisors and business owners review and attest that users have the right access to their job functions.
For example, the IT department reviews access permissions every six months. They might find that Tom, who moved from sales to finance, still has access to the sales database and needs his permissions adjusted.
Certifying Access: During these reviews, managers certify that their team members have the correct access. If Susan, a project manager, notices an intern with access to project budgets, she can flag and correct this.
Addressing Access Issues: To maintain security, quickly resolve any discrepancies found during audits. For example, IT can promptly revoke those permissions if an audit reveals that an employee retains access from a previous role.
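A periodic review of this kind is easy to automate in outline. The sketch below (hypothetical data) flags grants that exceed what a user's current role justifies, exactly the situation described for Tom:

```python
ROLE_PERMISSIONS = {
    "finance": {"ledger.read", "ledger.write"},
    "sales": {"crm.read", "crm.write"},
}

# Actual grants, as exported from the IAM system.
GRANTS = {
    "tom": {"role": "finance", "permissions": {"ledger.read", "crm.read"}},
}

def review(user: str) -> set[str]:
    """Return grants not justified by the user's current role."""
    record = GRANTS[user]
    allowed = ROLE_PERMISSIONS[record["role"]]
    return record["permissions"] - allowed

print(review("tom"))  # {'crm.read'}: leftover access from Tom's old sales role
```

Flagged entries would then go to the user's manager for certification or revocation.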
4. Entitlement Management
Defining and managing user access to specific resources. This involves ensuring that users have appropriate access rights and that these rights are consistently reviewed and updated as needed.
Defining Access Rights: Clearly establish which resources each role within the organization can access. For instance, developers need access to code repositories and testing environments, while HR staff require access to employee records.
For example, in a tech company, developers might need access to code repositories and testing environments but not to the HR system. Entitlement management makes sure these boundaries are clearly defined and maintained.
Monitoring Access: Continuously monitor who has access to what resources to identify and address any discrepancies quickly. This helps in detecting unauthorized access and ensures compliance with internal policies.
Adjusting Access Levels: Regularly adjust access levels based on changes in job roles or organizational structure. For example, when a developer is promoted to team lead, they might need additional access to management tools and project oversight systems.
5. Password Policies and Multi-Factor Authentication (MFA)
Enforcing strong password policies and multi-factor authentication (MFA). Strong password policies and MFA are essential for securing user accounts and protecting against unauthorized access by requiring robust passwords and additional verification steps.
Enforcing Complex Passwords: Implement policies that require employees to create complex passwords with a mix of letters, numbers, and special characters and mandate password changes every 90 days.
Implementing Multi-Factor Authentication (MFA): A second layer of authentication, such as a mobile app verification or biometric scan, is required to add an extra layer of security beyond just passwords.
Educating Users: Regularly educate employees about the importance of strong passwords and how to recognize phishing attempts that aim to steal login credentials, ensuring they understand and follow best practices for password security.
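A minimal sketch of a complexity check like the one described (illustrative thresholds; real policies should also favor length, rate-limit attempts, and screen candidates against breached-password lists):

```python
import re

def meets_policy(password: str) -> bool:
    """Require 12+ characters with upper, lower, digit and special chars."""
    return (
        len(password) >= 12
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"[a-z]", password) is not None
        and re.search(r"\d", password) is not None
        and re.search(r"[^A-Za-z0-9]", password) is not None
    )

assert meets_policy("Str0ng!Passw0rd")
assert not meets_policy("password123")  # no uppercase, no special character
```

MFA then adds an independent second factor on top, so a stolen password alone is not enough to log in.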
Together, these components create a comprehensive strategy for managing user access, ensuring security, and maintaining organizational compliance.
By implementing these Identity Governance practices, enterprises can safeguard their resources, comply with regulations, and reduce the risk of data breaches.
Benefits of Effective Identity Governance
1. Enhanced Security Posture:
Enhanced Security Posture refers to the improved safeguarding of an organization’s data and systems. It involves comprehensive management and user access monitoring, ensuring strict control over who can view or modify sensitive information.
- Ensures only authorized individuals can access sensitive data, thereby reducing unauthorized exposures that could lead to breaches.
- Implements rigorous access controls and continuous monitoring to defend against internal and external threats.
- Therefore, an effective identity governance system could recognize and prevent unauthorized access to financial records, ensuring the information's integrity and confidentiality.
2. Improved Compliance:
Improved Compliance ensures that businesses adhere to industry standards and regulatory requirements.
By implementing IG practices, organizations can systematically manage data, ensure compliance with various laws and regulations, reduce legal risks, and enhance operational transparency and accountability.
- Enforces policies that meet compliance standards such as GDPR, HIPAA, and SOX, protecting against legal consequences resulting from non-compliance.
- Conducts regular audits and access reviews to ensure compliance and promptly addresses any non-compliance issues.
- For instance, automated compliance reporting tools create required documents and track ongoing compliance, streamlining audits and ensuring the organization meets all regulatory needs.
3. Streamlined Access Management:
Streamlined Access Management simplifies the process of controlling user access within an organization. By utilizing automated provisioning and de-provisioning, IG minimizes manual efforts, ensuring timely and accurate access rights.
- Automatically provisions user access rights from day one, ensuring timely and accurate access to necessary systems and applications.
- Efficiently handles changes in user roles and promptly revokes access when a person leaves the organization, minimizing errors and unauthorized access.
- Furthermore, the system expedites onboarding upon hiring a new staff member. It boosts operational effectiveness by immediately providing access to pertinent resources such as databases and tools.
4. Increased Operational Efficiency:
It is achieved by automating routine identity and access management tasks, which reduces the administrative burden on IT staff.
This allows IT teams to focus on higher-priority tasks and optimizing resource utilization within the organization.
- Enables IT teams to focus on more strategic initiatives rather than spending time on manual access requests and adjustments.
- Automated workflows expedite granting and modifying access rights, improving resource allocation and productivity.
- Moreover, automated workflows for access requests and approvals simplify user permission management, allowing IT staff to focus on more complex tasks and increasing organizational performance.
5. Improved User Experience:
Effective IG streamlines access to necessary resources and improves user experience. It ensures secure and convenient access, reduces friction, and enables smoother interactions with the organization’s systems and data.
- Optimizes the process of granting access, reducing password issues and delays in accessing tools.
- Implements technologies like single sign-on (SSO) to simplify access by allowing users to log in once and access multiple applications with the same credentials.
- For example, SSO minimizes the frustration of managing multiple passwords and shortens login times, providing a seamless user experience and boosting productivity
Identity Governance with CloudEagle
CloudEagle transforms identity governance by offering a comprehensive solution for managing and securing user access. Designed for finance, procurement, and IT teams, CloudEagle simplifies the SaaS buying and renewal process, optimizes software costs, and effectively manages SaaS applications.
Here’s how CloudEagle enhances your IG strategy:
Strengthened Security: CloudEagle provides real-time insights into user access, allowing you to identify and address unauthorized access swiftly. This enhances your organization’s security posture by ensuring only authorized users can access sensitive information.
Streamlined Compliance: With CloudEagle, compliance reporting and audit processes are automated, ensuring that you meet regulatory requirements and industry standards with minimal manual effort. This helps maintain a compliant environment effortlessly.
Efficient Access Management: CloudEagle simplifies user provisioning and deprovisioning. The platform ensures that access rights are accurately assigned and updated as user roles change, reducing the risk of errors and unauthorized access.
Increased Operational Efficiency: CloudEagle automates routine tasks related to Identity Management, allowing your IT and HR teams to focus on more strategic activities. Automation of access requests, approvals, and revocations reduces administrative burdens and increases productivity.
CloudEagle’s integrated dashboard offers a unified view of your access management, enabling quick responses to potential risks. The platform's app access module lets users request access, which administrators can grant based on roles and responsibilities.
For instance, when a new employee joins, CloudEagle automatically suggests the relevant applications and sets up access. The system promptly revokes access when employees leave, ensuring security and compliance.
To see CloudEagle in action, see how Alice Park from Remediant streamlined user provisioning and saved on spending using CloudEagle.
Choosing CloudEagle for your identity governance needs will help secure your SaaS ecosystem and elevate your IG practices.
Request a demo today to discover how CloudEagle can enhance your identity and access management.
Identity Governance is a critical component of a robust security framework that ensures the right individuals have the appropriate access to resources while meeting regulatory requirements.
Organizations can enhance their security posture, streamline operations, and reduce the risk of unauthorized access by effectively managing user identities, controlling access rights, and automating compliance processes.
CloudEagle stands out as a premier solution for identity governance. With its advanced features for real-time monitoring, automated access management, and seamless integration with SSO and HR systems, CloudEagle provides unparalleled efficiency and security.
Its user-friendly interface and comprehensive reporting capabilities make it the ideal choice for organizations looking to optimize their IG practices and maintain a secure, compliant IT environment.
Frequently Asked Questions
Q1. Why is IG so important for organizations today?
Identity Governance is vital because it ensures that only authorized users access sensitive data and resources, enhancing security and compliance while preventing unauthorized access and mismanagement.
Q2. What is the IAM governance process?
IAM governance involves controlling identities and access through policies and technologies. It ensures compliance with regulations and internal security standards while regularly reviewing and adjusting user access.
Q3. What is the main difference between IG and IAM?
The main difference is that IG focuses on compliance and oversight of access rights, while Identity and Access Management (IAM) covers the broader management of digital identities and access controls.
Q4. How does Identity Governance enhance security within an organization?
It enhances security by monitoring user access, ensuring only authorized individuals have access, and identifying and addressing potential security risks.
Q5. How does CloudEagle support Identity Governance?
CloudEagle supports IG with features like automated access management, real-time monitoring, and detailed compliance reporting, making identity management more efficient and secure.
Q6. What unique features does CloudEagle offer for Identity Governance?
CloudEagle offers automated provisioning, real-time access monitoring, and an integrated dashboard for visibility and compliance, simplifying identity governance and enhancing security. | <urn:uuid:c867900e-fcdd-45d9-b396-1fb0a4f88bd5> | CC-MAIN-2024-38 | https://www.cloudeagle.ai/blogs/what-is-identity-governance | 2024-09-08T12:25:02Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651002.87/warc/CC-MAIN-20240908115103-20240908145103-00470.warc.gz | en | 0.917912 | 3,542 | 2.5625 | 3 |
Are your electronic medical records safe from a healthcare cyber attack?
Researchers at Microsoft are warning that several encrypted databases of medical records are vulnerable to attacks and information loss. With the increased use of cloud computing, data breaches on encrypted databases have increased, so healthcare industry cybersecurity is more important than ever. The researchers classify the threats in several ways; one distinction is between individual and aggregate attacks. Individual attacks are designed to gather information about a specific person, whereas aggregate attacks are meant to recover statistical information about the entire database. Both can be very damaging.
It is still common practice to use encryption to protect against cyberattacks, and it remains one of the best defenses; however, encryption alone is not the best solution for healthcare cyber attack prevention. Encrypted information must be unscrambled in a computer's memory before it can be used, so an attacker who gains access to that memory can read the data. To be useful, encryption needs to be continual to prevent progressive decoding from occurring.
Healthcare cyber attacks, like the ones most notably against Anthem and UCLA Health System, are on the rise. The healthcare industry has become a target because of its relatively weak security. Nor is the risk limited to medical records: accounting databases, which store significant financial information, are also targets. To date, over 90 million patients have been affected by data breaches resulting from such attacks on healthcare industry cybersecurity.
The largest concern with these attacks is the resulting identity theft. Due to privacy laws such as HIPAA, it is extremely difficult to remove misinformation from medical records, including something as simple as a blood type, and this could result in the wrong blood transfusion in an emergency medical situation.
Depending on the number of objects to classify, tagging objects manually can be a very long and inefficient process. Alternatively, you can let the system automatically tag objects for you, as long as you instruct the system on how to classify objects according to the values of their attributes. This is done by assigning auto-tagging conditions to the keywords of a category.
Defining auto-tagging conditions
To define auto-tagging conditions for a keyword in the Finder:
Create or edit a category (see the chapter on Creating categories and keywords).
In the Keywords panel, select the desired keyword. The right hand side of the panel shows the auto-tagging conditions for the keyword. If no condition has been defined yet, this part of the panel is empty and the keyword is never assigned automatically to any object.
Click the link Click here to add a new condition to specify an auto-tagging condition. The way to express auto-tagging conditions is similar to the way you specify the conditions of an investigation. A row with three drop down lists appears.
In the first drop down list, choose the attribute of the object whose value will be compared. For device objects, beware that all attributes are available for selection, but not all of them are necessarily available for all of the platforms.
Select a comparison operator in the second drop down list.
In the last drop down list or combo box, select or type the value to compare with the selected attribute. Once you define a condition for a keyword, a small icon with the letter A appears to the left of the name of the keyword in the panel, indicating that the keyword is used in automatic tagging.
Optional: Click the trash can placed to the right of the drop down lists to remove a condition.
Optional: Go back to point 3 to add new conditions. If you create more than one auto-tagging condition for one keyword, you can specify how the conditions combine using the Logical expression field that appears below the conditions. By default, conditions are combined by a logical or. Therefore, it is enough for an object to fulfil one of the conditions to be tagged with the keyword. You can use the logical operators OR and AND to combine the auto-tagging conditions of a keyword; the sketch after this procedure illustrates the evaluation.
Click Save to permanently save your work and continue editing the category or Save and Close to save your work and end the edition of auto-tagging conditions.
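Nexthink does not expose its auto-tagging engine as code, but the evaluation logic described above can be illustrated with a short sketch. The Condition structure, the operator names, and the device attributes below are illustrative assumptions, not part of the product:

```python
from dataclasses import dataclass

# Comparison operators used in this sketch; the real Finder offers its
# own fixed set of operators depending on the attribute type.
OPERATORS = {
    "is": lambda a, b: a == b,
    "is not": lambda a, b: a != b,
    "contains": lambda a, b: b in a,
    "starts with": lambda a, b: a.startswith(b),
}

@dataclass
class Condition:
    attribute: str  # e.g. "device name"
    operator: str   # key into OPERATORS
    value: str      # value selected or typed in the last drop down list

def condition_matches(obj: dict, cond: Condition) -> bool:
    """Return True if the object's attribute value satisfies the condition."""
    return OPERATORS[cond.operator](obj.get(cond.attribute, ""), cond.value)

def keyword_applies(obj: dict, conditions: list, logic: str = "or") -> bool:
    """Combine condition results with the keyword's logical expression.
    By default conditions are combined with OR, as in the Finder."""
    results = [condition_matches(obj, c) for c in conditions]
    return any(results) if logic == "or" else all(results)

# Example: tag devices whose name starts with "LT-" or whose type is "laptop".
device = {"device name": "LT-0042", "device type": "desktop"}
conditions = [
    Condition("device name", "starts with", "LT-"),
    Condition("device type", "is", "laptop"),
]
print(keyword_applies(device, conditions))  # True: the first condition matches
```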
Setting the precedence of automatic keywords
An object can be tagged by at most one keyword per category. If an object satisfies the auto-tagging conditions imposed by two or more different keywords, the system tags it with only one of the keywords. You can decide which keywords take precedence over others by establishing a ranking of automatic keywords. Remember that tagging an object manually overrides any automatic assignment of keywords; a sketch after the procedure below illustrates this resolution.
To set the precedence of automatic keywords in the Finder:
Create or edit a category that holds automatic keywords, that is, keywords that specify auto-tagging conditions (see above).
Click the button Set auto-tagging order... located below the list of keywords. A dialog pops up with the list of all the automatic keywords, ordered by precedence. By default, keywords are listed in alphabetical order.
Click the name of a keyword in the dialog to change its order of precedence.
Click the button Move up to promote the selected keyword to a higher rank or the button Move down to lower the precedence of the keyword.
Click OK once you are satisfied with the ordering of keywords or Cancel to ignore the changes made.
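The precedence rule can be sketched in the same way: among all automatic keywords whose conditions match, the highest-ranked one wins, and a manual tag always overrides the automatic result. The function below reuses keyword_applies from the previous sketch and is likewise an illustration, not product code:

```python
def resolve_keyword(obj, ranked_keywords, manual_tag=None):
    """ranked_keywords: (name, conditions, logic) tuples ordered by precedence,
    highest first. A manual tag, if present, overrides any automatic keyword."""
    if manual_tag is not None:
        return manual_tag
    for name, conditions, logic in ranked_keywords:
        if keyword_applies(obj, conditions, logic):
            return name  # first match in ranked order wins
    return None  # no keyword applies; the object stays untagged in this category
```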
Nexthink triggers the automatic tagging process for all objects of a given type when you create or modify a category that applies to that type. The modification of a category can indeed imply a modification of the auto-tagging conditions, so every object must be rechecked against the new conditions. At a lower scale, the modification of the attributes of an object may also trigger the automatic retagging of the object. If the modified attribute is suitable for comparison in auto-tagging conditions, the new value of the attribute can make the object fall into a different class and, therefore, be tagged with a different keyword.
Although the system does not impose a hard limit on the number of categories and keywords that you can define, automatic tagging can become a costly operation depending on the number of computations required. Specifying too many keywords or auto-tagging conditions may have a significant impact on the overall performance of the system. The maximum recommended values for keeping the system responsive are listed below:
25 categories per type of object (device, user, application, etc).
200 automatic keywords per category.
20 auto-tagging conditions per keyword. | <urn:uuid:834dbe10-f95f-429c-b411-a849351ca98c> | CC-MAIN-2024-38 | https://docs-v6.nexthink.com/V6/6.29/tagging-objects-automatically | 2024-09-11T01:18:47Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651323.1/warc/CC-MAIN-20240910224659-20240911014659-00270.warc.gz | en | 0.8377 | 954 | 2.515625 | 3 |
From the launch of the World Wide Web just over 30 years ago to the rise of AI with the release of ChatGPT in November last year, technology continues to revolutionise how we work, communicate and collaborate.
With great change, however, comes great resistance — especially when the technology in question impacts the very fabric of society. Over the years, governments and lawmakers around the world have introduced a range of restrictions and regulations to dampen the impact of technological innovation and lower its risk to society.
The most notable example is the EU’s recently implemented Digital Services Act, which serves to create a safer digital space by protecting people’s personal data online and preventing large tech companies from gaining total dominion over their respective markets.
While regulatory measures like this serve to protect society from the implications of rapid technological development, others seek to eradicate specific technologies entirely by issuing country-wide bans that remove access to them.
From restricting small, dangerous gadgets to blocking entire social media networks, we're counting down some of the most shocking technology bans in history.
Google Street View – banned in Germany, Austria and Greece
Google Street View is a popular and powerful tool that allows users to virtually explore streets and landmarks in a given area. But, despite its widespread use and convenience, a number of countries around the world have banned Google Street View due to privacy concerns. One of the most notable examples is Germany, where Street View was banned for several years before being eventually re-introduced with strict regulations. In 2010, Google launched Street View in Germany, but the country's data protection authorities immediately raised concerns about the collection and storage of personal data. In response, Google allowed residents to opt out of having their homes included in the service and implemented measures to blur faces and license plates.
However, this was not enough to appease the German authorities, and Street View was ultimately banned in the country until 2017. Other countries that have banned Street View include China, Austria, Greece, and South Korea. In each of these cases, concerns about privacy and the collection of personal data were cited as the primary reasons for the ban. Despite the bans, Google has continued to expand and update the service, offering users new ways to explore and navigate their surroundings. As the use of technology continues to evolve, it remains to be seen whether Street View will face further restrictions or if it will continue to grow and expand in the years ahead.
Laser Pointers – banned in US, UK, Canada and Australia
It seemed like such a good idea. As Microsoft's PowerPoint overtook slide projectors, lasers, once restricted to missile defence systems and rock concerts, allowed a presenter to emphasise a point without blocking the slides. But laser pointers quickly became smaller and more powerful, and governments around the world were quick to restrict the sale and use of the gadgets, citing safety concerns and multiple incidents of misuse. Laser pointers emit a concentrated beam of light that can cause eye damage if directed towards a person's eyes. This risk is heightened with high-powered laser pointers, which are often marketed as "laser pens" and can be easily purchased online. In some cases, laser pointers have been used to deliberately harm people, including pilots and law enforcement officers.
Because of this, many governments have taken action to restrict the use of laser pointers. In the US, for instance, the Food and Drug Administration (FDA) has restricted the sale of laser pointers over 5 milliwatts (mW) and has warned consumers against using laser pointers in a dangerous or illegal manner. Meanwhile, in the UK, the sale of laser pointers over 1 mW is banned, and possession or use of a laser pointer in a public place is illegal. Other countries that have banned or restricted the use of laser pointers include Australia, Canada, and New Zealand. In each case, the goal is to reduce the risk of eye damage and prevent incidents of misuse.
Facebook – banned in China, Myanmar, Pakistan and Bangladesh
Facebook may be the world's largest social media platform today, but Meta's flagship platform has been banned in a number of countries for a variety of reasons, ranging from concerns about privacy and content to online security. One of the most notable examples is China, where Facebook has been banned since 2009. The Chinese government blocks access to Facebook and other popular social media platforms in order to control the flow of information and prevent the spread of dissent. Similarly, in Iran, Facebook was banned in 2009 after the country's disputed presidential election sparked protests that were organised in part through the social media platform.
Another common reason for Facebook's ban is due to concerns about content. In some cases, governments have banned Facebook due to the spread of hate speech or misinformation. For instance, in Myanmar, Facebook was banned in 2018 due to its alleged role in the spread of anti-Muslim sentiment and fake news. Similarly, in Sri Lanka, Facebook was banned in 2018 after it was linked to a series of communal clashes that were fueled by hate speech. Some countries have also banned Facebook due to concerns about privacy and security. In countries like Pakistan and Bangladesh, Facebook has been banned for years due to concerns over the spread of sensitive information and cybercrime.
TikTok – banned on government devices in US, UK, and EU
What was once a dancing app for children under the name Musically is now one of the most popular social networking sites in the world, with 70 per cent of US teenagers accessing TikTok daily. But the popular Gen-Z video-sharing app has recently come under fire from governments in the US, UK and EU, who have expressed their concerns about the app's handling of user data and potential links to the Chinese communist party. TikTok's parent company ByteDance is Chinese-owned, and as per Beijing law, all Chinese companies must share data with the government. TikTok was recently wiped from government phones in the US, UK and Canada because of this, with lawmakers warning the app posed a risk to national security. US Congress is also attempting to issue an outright ban on the app or force ByteDance to sell it to a US owner to prevent Beijing from potentially accessing the sensitive data of government officials.
Though Chinese-owned, TikTok is not available in China, where a sister app, Douyin, takes its place. Douyin is hugely popular and one of the most widely used social media platforms, but the Chinese government has implemented strict regulations on the app's content and ownership, and ByteDance, the parent company of TikTok, has faced scrutiny from Chinese regulators. TikTok has denied any wrongdoing, stating that it stores US user data on servers located in the US and that it has strict privacy policies. However, the US government has maintained that TikTok poses a national security risk, with concerns that the app could be used to spread disinformation, manipulate public opinion, or facilitate espionage.
Blackberry phones – banned in Saudi Arabia and UAE
Blackberry, once a popular smartphone brand, has faced bans in several countries over the years. One of the main reasons for these bans was the company's encryption technology, which made it difficult for governments to monitor communications. Some governments, such as India and Saudi Arabia, demanded that Blackberry provide them with access to user data, but the company refused to compromise on its encryption policy, leading to a ban. In 2010, the United Arab Emirates (UAE) banned Blackberry services citing concerns over national security. The ban was lifted later that year after Blackberry agreed to comply with government regulations.
In 2015, Pakistan banned Blackberry's Enterprise Services, again citing security concerns. The ban was lifted after the company agreed to provide the government with access to user data. In 2016, Indonesia briefly banned Blackberry Messenger (BBM) over concerns about the spread of "pornographic content." The ban was lifted after Blackberry agreed to work with the Indonesian government to monitor content. The bans on Blackberry in these countries highlight the tension between privacy and security concerns, particularly when it comes to encryption. While Blackberry's encryption technology provides privacy and security for its users, it also presents a challenge for governments trying to monitor communications for national security reasons. Blackberry's stance on encryption has made it a target for government scrutiny and bans, but it has also cemented the company's reputation as a secure communication platform.
Virtual Private Networks (VPNs) – banned in China, Russia and Iran
Virtual private networks (VPNs) have become increasingly popular in recent years as a tool for online privacy and security. However, in some countries, VPNs have been subject to bans or restrictions due to concerns about censorship, surveillance, and cybersecurity. The Chinese government, for instance, has banned the use of VPNs, citing concerns about online security and control. Only government-approved VPNs are allowed to operate in the country, and individuals who violate the ban can face fines or imprisonment. Russia has also imposed restrictions on the use of VPNs, requiring VPN providers to register with the government and comply with censorship requests. Iran and Belarus have also banned the use of VPNs, citing concerns about the spread of Western culture and the potential for online dissent.
While some of these bans and restrictions may be motivated by legitimate security concerns, they can also limit access to important tools for online privacy and freedom of expression. VPNs can also be used to protect sensitive data and online activity from cybercriminals and other malicious actors, highlighting the importance of balancing security and privacy concerns with the need for open access to information and communication technologies.
Earphones – banned by USA Track & Field
In 2007, USA Track & Field (USATF) implemented a ban on the use of headphones and other personal audio devices in all official races sanctioned by the organization, including marathons. The ban applied to all athletes, including recreational runners and those participating in age group events, and was implemented for safety reasons. USATF officials believed that the use of headphones could pose a safety risk by preventing athletes from hearing important instructions or warnings from officials, as well as blocking out ambient sounds that could signal potential hazards on the race course. Additionally, some officials expressed concerns that headphones could give certain athletes an unfair advantage by enabling them to listen to music or other motivational content during the race.
The ban on headphones in marathons generated some controversy among athletes and running enthusiasts, as many believed that listening to music or other audio content could enhance their performance and help them stay motivated during the long race. USATF lifted the ban on headphones for runners participating in non-championship races the following year but maintained it for championship events and for those competing for awards, medals or prize money. The organisation also implemented new safety guidelines for headphone use in non-championship races, requiring athletes to use open-ear headphones that allow ambient sounds to be heard.
Mobile Phones – banned in Cuba
Until 2008, the use of mobile phones in Cuba was heavily restricted, and ownership of mobile phones was prohibited for most Cuban citizens. The government viewed mobile phones as a luxury item and a potential threat to national security, and only a select few were allowed to own them. At the start of 2008, the Cuban government lifted the ban on mobile phone ownership, allowing Cuban citizens to own and use mobile phones for the first time. However, the cost of mobile phones and mobile services remained prohibitively expensive for many Cubans, and access to the internet was heavily restricted.
In recent years, the Cuban government has taken steps to reduce restrictions on mobile phones and internet access. In 2015, the government launched a pilot program to provide public Wi-Fi hotspots in several cities and has since expanded access to the internet through mobile data plans and home internet connections. However, internet access remains heavily censored and controlled on the island, and many Cubans still face barriers to accessing mobile phones and other electronic devices.
ChatGPT – banned in Italy
ChatGPT has taken the world by storm since its launch last November due to its ability to generate plausible-sounding responses to questions, as well as create an array of content including poems, academic essays and summaries of lengthy documents when prompted by users. It is powered by a large language model trained on vast amounts of information scraped from the internet. But it is this way of collecting data that has sparked concern among experts in Italy, and on 1 April an Italian privacy watchdog banned OpenAI's ChatGPT after raising concerns about a recent data breach and the legal basis for using personal data to train the popular chatbot.
The ban came days after more than 1,000 artificial intelligence experts, researchers and backers – including the Tesla CEO, Elon Musk – called for an immediate pause in the creation of "giant" AIs for at least six months amid concerns that companies such as OpenAI are creating "ever more powerful digital minds that no one can understand, predict, or reliably control". Generative AI systems like ChatGPT are difficult to regulate at the moment, and many governments are still struggling to implement regulatory measures on the technology due to the speed at which it has risen to popularity. Whether Italy's ban on the AI chatbot will lead to further bans across Europe remains to be seen, as does what regulatory action the technology may face in the future.
The Great Firewall of China
The Great Firewall of China is a sophisticated system of internet censorship that is used by the Chinese government to control and regulate online activities within the country. The firewall has been in place since the late 1990s and is designed to prevent Chinese citizens from accessing websites and services that are deemed to be harmful or politically sensitive by the government. One of the key targets of the Great Firewall of China is social media networks. Platforms like Facebook, Twitter, and YouTube are banned in China, and citizens are only allowed to use domestic alternatives like Weibo and WeChat, which are heavily monitored and regulated by the government.
The Chinese government justifies its internet censorship as a means of maintaining social stability and national security. However, critics argue that it is an infringement on human rights and freedom of expression. The banning of social networks is particularly controversial as it limits access to information and communication channels that are widely used in other parts of the world. Despite the restrictions, some Chinese citizens continue to find ways to access blocked websites and services using virtual private networks (VPNs) and other tools. However, the government has been cracking down on these tools in recent years, making it increasingly difficult for people to circumvent the Great Firewall. | <urn:uuid:94de95bb-35b6-4ddb-946e-f85e6c356640> | CC-MAIN-2024-38 | https://em360tech.com/top-10/top-10-most-shocking-technology-bans-history | 2024-09-10T22:54:59Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651323.1/warc/CC-MAIN-20240910224659-20240911014659-00270.warc.gz | en | 0.960853 | 2,991 | 3.15625 | 3 |
In a connected world threatened by digital raiders, business and technical leaders are the guardians of their corporate kingdoms, ensuring their fortress (company) remains impenetrable to dire cyber attacks. This article will explore how modern leadership parallels the roles of medieval society, emphasizing the need for strategic collaboration to safeguard the people and valuable assets.
We will explore the responsibilities and potential consequences of enemy attacks and investigate the differences between the corporate kingdoms that succeed and those that fail.
Medieval Life Today
Imagine waking up tomorrow to find your company's digital gates breached and its treasures plundered. This isn't a scene from a medieval tale; it is a modern reality for many people.
You know that battle pulls men and women of honor into the fight. You and your company are under attack from criminals who want to steal resources and hold information hostage. Kingdom leaders must band together to mitigate certain cyber risks, minimize liabilities, and protect the organization's assets and reputation.
Medieval society defined roles for monarchs, nobles, peasants, and knights.
The business owners are similar to monarchs with the title of king or queen. Kings and queens were at the top of the social system within their kingdom, much like owners direct their business enterprises. To grow their kingdom, these business leaders employ nobles and peasants who provide the bulk of the work. They hire and develop knights to protect their people and assets from invasion.
Nobles came after the kings and queens in the social system. Nobles were also business leaders who pledged their allegiance to the monarch. They were given titles, responsibility, and money in return for their loyal service. The nobles oversee the operation, making strategic decisions and managing resources.
The people typically doing the work are peasants, who report to the nobles and have obligations related to their roles. Since the nobles usually have power and authority over the peasants, the nobility provides governance and protection in exchange for the work product.
Technical leaders are like the knights who defend the fortress, ensuring it is secure. Knights were paid for their protection in battle, often with food from the peasants. They were trained for war and fought to protect their monarchs, nobles, and peasants. If a knight proved to be a brave and effective warrior, the monarch or noble might honor them with a fabulous title, money, and benefits.
As we explore the context of liabilities, know that a group of monarchs successfully defended their fortresses with minimal damage from recurring attacks. What did they do differently?
Context of Liabilities
When enemies attacked to steal resources or take prisoners for ransom, the knights set barriers and bravely fought while the nobles and peasants returned safely to the fortress. History shows that many attacks succeeded, with riches pillaged because defenses were limited or unprepared.
Historical sieges wiped out kingdoms. Enemy forces would quickly outmaneuver protection efforts when there were too few knights or when fortifications could not withstand the opposing forces. Overcome by the invaders, those who survived defeat began the rebuilding process.
In this context, liabilities refer to the financial, compliance, and reputational risks of successful cyber attacks and data breaches. The consequences may include:
1. Financial Loss: Business disruption or shutdown is a frequent result. These costs include productivity loss, replacement of assets, response costs such as investigations, remediations, legal fees, customer notifications, and potential lawsuits.
2. Regulatory Compliance: Businesses are subject to various laws, regulations, standards, and other rules set forth by governments and other regulatory bodies. Violating these regulations can lead to significant fines and penalties.
- Federal Trade Commission (FTC) Safeguards Rule
- Payment Card Industry (PCI) Data Security Standard
- Department of Defense’s (DoD) Defense Federal Acquisition Regulation Supplement
- Department of Defense’s (DoD) Cybersecurity Maturity Model Certification
- California Consumer Privacy Act (CCPA)
- North American Electric Reliability Corporation’s Critical Infrastructure Protection (NERC-CIP)
- Sarbanes-Oxley Act (SOX)
- Securities and Exchange Commission (SEC) cyber disclosure rule (SolarWinds)
- European Union’s General Data Protection Regulation (GDPR)
3. Reputational Damage: Do you know anyone who states that a business's reputation is unimportant? This variable involves how an outage impacts the perception of your leadership and the organization's standing for months or years after the event. Market share, cost of capital, and, for public companies, stock price can all suffer. Employee retention and rising insurance premiums may also become issues.
Business leaders’ liability is to protect customer data, maintain compliance with regulations, and manage resource allocations. They are responsible for setting policies, allocating resources, and creating a culture of cybersecurity awareness within the organization. They own the financial and legal impact of each breach or loss event.
Technical leaders’ liability focuses on designing, implementing, and maintaining the required cybersecurity measures. They establish procedures and controls, such as patch management, access controls, and regular assessments. Their expertise ensures that the fortress (company) is well-protected against those enemies (criminals) who seek to harm (cyber breaches) and loot (phishing and ransomware) the fortress. They own the execution and potentially legal impact of each breach or loss event.
To summarize, the business leaders are accountable for the overall cybersecurity strategy and its alignment with business goals, while technical leaders are responsible for its execution and technical implementation.
If your company were a medieval kingdom, how would you protect it?
Suppose a company (yours) experiences a data breach due to an exploit of an unpatched application (widespread outage and data loss). The impact on revenue and expenses lasts two quarters, and long-term debt must provide liquidity and cash flow. The business leaders may face legal consequences, regulatory fines, and damage to the company's reputation. No raises or bonuses are paid out, and nobles may be fired.
The technical leader is accountable for having failed to implement a vulnerability management system to support the necessary patch management measures. The resulting liability for the breach could mean loss of bonus or compensation, demotion, or termination of the knight.
It is essential for business and technical leaders to collaborate effectively to mitigate certain cyber risks, minimize liabilities, and protect the organization’s assets and reputation.
Modern Medieval Obstacles
Investing in swords, shields, knights, and skills is vital to protecting the people and assets inside the fortress. As you devote resources to building or maintaining your defenses, forgetting to close the fortress gate can quickly become dire.
Attackers change tactics and methods as they probe to find your weaknesses. Some knights have the stamina and skills to keep fighting, while others succumb to fatigue. Still others realize they need an advantage if they plan to defend the fortress successfully.
The challenge many knights face is the worry of being criticized when communicating the need for help. A few knights have an image to maintain as expert fighters. Others pride themselves on tactical execution.
Almost every knight wants to become a better defender, but there are obstacles to success. Whether time, money, or effectively communicating needs, all eventually find themselves in a battle they do not want.
Some knights sent word that reinforcements were needed, whether driven by duty or fear. That is why monarchs, nobles, and knights sought assistance from another group of people called mercenaries.
These external (coming from outside the fortress and territory) warriors perform under contract and utilize experience to overcome the adversary. The paid professionals bring fresh eyes and strategies to assist the knights in defending their fortress.
Effective mercenaries set expectations and co-develop a plan with the defenders. We stand for freedom and will join you in the fight.
The principled work creates outsized returns for the monarchs and nobles as defenses are assembled in less time. The knights benefit from the shared knowledge and practices, even after their time together ends.
Sieges and threats are overcome by planning and preparation. To safeguard your people and valuable assets, consider the following:
- Begin cybersecurity discussions with the leadership team and communicate regularly with the personnel accountable for managing cyber risks
- Evaluate potential cybersecurity issues when your organization considers potential vendors and shares data with third parties
- Ensure that the organization’s security policies, standards, enforcement mechanisms, and procedures are uniform across all teams and lines of business
- Invite the knights (technical personnel) to routinely brief nobles and monarchs (senior business leadership)
- Determine how cybersecurity risk management transitions into your corporate risk management and governance processes
- Document your organization’s assets and the technology dependencies
- Assess your organization’s exposure to loss associated with its assets and technology dependencies
- Determine your organization’s acceptable level of future losses
- Understand where cybersecurity threats sit in your organization’s risk priority list
- Identify gaps between your current state of cybersecurity and the desired target state
- Evaluate requirements and budget resources to address existing gaps
- Continuously reevaluate the organization’s cybersecurity goals
- Consider using third-party penetration testing, vulnerability management, and consulting services
- Consider protective measures such as buying cyber insurance
Enemy forces change tactics to outmaneuver protection efforts. Cybersecurity awareness and preparedness depend on strategic collaboration with continuous risk-based analysis to safeguard valuable assets.
Hundreds of years later, men and women of honor struggle with similar problems. It is crucial for both business and technical leaders to collaborate effectively to mitigate specific risks, minimize unacceptable liabilities, and protect the organization’s assets and reputation.
Who among you are wise leaders?
Imagine being the person who takes specific incremental steps to secure the fortress. You take the accolades for protecting the people and assets. We find honor in supporting those who take up the task of protecting their corporate kingdom: its knights, nobles, and monarchs.
Leadership that takes accountability will establish processes, procedures, and controls to resist cyber threats. They will report on and promptly assess the impact of a cyber incident, from collection of information to escalation and, if necessary, disclosure to stakeholders.
The alternative, then and now, is that legacy methods for executing maneuvers and formations are less effective for securing the fortress. The liabilities from avoidable disruption can be career-changing events for those who fail.
The leaders who need mercenary assistance, raise your swords and stand forward.
Will your story be one of triumphant defense or tragic downfall?
Your actions will determine whether you become the Hero, the Jester or Open to work.
We seek out the leaders who want to become better defenders of the corporate kingdom. If you find honor in being responsible, we stand ready to support you.
As a proud supporter of American companies, Certitude Security® is working diligently to define the specific points of truth. Together with business and technical leaders, we facilitate essential asset protection priorities for companies throughout the United States.
Nearly every day you hear another story about a company whose network was attacked, and the Cloud Security Alliance just reported that around 22% of data breaches suffered by businesses happened because of compromised credentials. We know that password strength is incredibly important, yet people still use weak passwords that are easy for them to remember but also incredibly easy to crack.
2015’s list of the most common (and therefore worst) passwords contains many classics like “123456”, “password”, and “qwerty”. However, other easy-to-guess dictionary words made the list, including “football”, “dragon”, and “letmein”. Users with passwords like these are asking for problems; aside from being the first terms that someone who knows the user would guess, single dictionary words take no time to crack at all.
When hackers attempt to break a password, they don't sit down and type out every possible combination. They use dictionary attacks or brute-force algorithms to try thousands of combinations in a matter of seconds; the shorter and less complex the password, the more easily it is broken.
This makes sense — if you had four items and had to arrange them in every possible order, it would only take you a minute. With 100 items, though, creating every possible order would be a massive task. It’s the same way with passwords, which is why length is critical. Passwords should not be shorter than eight characters, while 12 is a better minimum to shoot for.
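To make the arithmetic concrete, the sketch below estimates the worst-case time to exhaust a password keyspace. The guess rate of one hundred billion guesses per second is an illustrative assumption for an offline attack against fast hashes, not a measured figure:

```python
def crack_time_seconds(length: int, charset_size: int,
                       guesses_per_second: float = 1e11) -> float:
    """Worst-case time to try every password of `length` characters
    drawn from an alphabet of `charset_size` symbols."""
    keyspace = charset_size ** length
    return keyspace / guesses_per_second

for length in (6, 8, 12):
    for charset in (26, 94):  # lowercase only vs. all printable ASCII
        secs = crack_time_seconds(length, charset)
        print(f"length {length:2d}, charset {charset}: "
              f"{secs:,.0f} s (~{secs / 31_536_000:,.2f} years)")
```

Even under these rough assumptions, the jump from 8 lowercase characters (a couple of seconds) to 12 mixed characters (over a hundred thousand years) shows why length and character variety matter.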
The above “try everything” methods of attacks aren’t the only way that passwords are compromised. Thanks to phishing (pretending to be someone legitimate in order to steal user passwords) and other similar methods, users often give their passwords over to a malicious entity without even realizing it. Here, it doesn’t matter if you have a 30-character password — by handing it right to the person who wants to steal it, you’ve made his job easier.
A further security risk is posed by using the same password on several sites. Once someone figures out the password for a user, they will likely try to use that password on other sites, assuming it has been reused in multiple places. This is especially deadly with email accounts, because email allows you to reset passwords; if someone breaks into your email account, even if you don't use that password elsewhere, they can enter your email into various sites and try to reset your accounts. Because of its significance, make sure your email account password is strong and different from passwords you use on other sites.
If you’d like to get a baseline on how secure various passwords could be, using a password checker site like How Secure is my Password? can give you a ballpark on how long it would take your password to crack. Note that you should never type any actual passwords into these sites to ensure safety, and they are not always accurate (it says that “thisisapassword” would take a thousand years to crack), but it’s at least a start.
So, it makes sense to enforce secure passwords, but if people can’t remember them, they’re liable to write them on a sticky note at their desk or keep a text file on their desktop, which defeats the purpose of having a strong password. The solution to this issue is using a password manager, like LastPass, Dashlane, or 1Password. These services are vital for remembering all the passwords of daily life — they allow you to generate secure passwords for every site and remember them all under one master password (so you can have a 30+ character password that you don’t have to remember!).
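Most password managers generate such passwords for you, but the underlying idea is simple enough to sketch with Python's standard secrets module; the alphabet and the 30-character default below are arbitrary choices:

```python
import secrets
import string

def generate_password(length: int = 30) -> str:
    """Build a random password with a cryptographically secure RNG."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different on every run, e.g. 'k;Q}7vX...'
```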
The benefits are many: mobile apps for signing in on the go, browser extensions that let you automatically sign into your accounts (more secure than the browser’s built-in solution), the ability to save multiple logins for one website (such as Gmail), and passwords that change automatically.
The passwords are kept secure with strong encryption that only you, not the company, can access. The most secure password is one you can’t remember, and a password manager lets you create as many of these as you want, all locked behind one strong master password — this is the only one you have to remember!
With all the security risks that bad passwords can pose, we recommend using a password manager to help you out. After a bit of setup, you’ll wonder how you ever kept track of all your passwords without them. Start using one now and kick risky password habits forever! | <urn:uuid:def91e3f-4206-4906-8135-af94a237febf> | CC-MAIN-2024-38 | https://www.next7it.com/insights/password-security-not-optional/ | 2024-09-10T23:11:28Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651323.1/warc/CC-MAIN-20240910224659-20240911014659-00270.warc.gz | en | 0.952473 | 959 | 2.890625 | 3 |
The image processing industry has become enormously popular for many reasons, chief among them its ability to provide high-quality images down to the microscopic level. This has brought a revolution for researchers in plant life sciences who need image-based evidence at points such as the following:
Leaf area estimation, infected leaf area and chlorophyll
Leaf area estimation is important in plant breeding. Earlier, leaf area meters were used for this purpose. Now, image analysis can be used to measure the leaf area. Images of the leaves, captured by a camera or a scanner, are analyzed by the ColorPro software package developed by the Electronics Systems Division. The area is obtained in pixels, which can be converted to cm2 or inches2 with the appropriate calibration of the system. Often a viral or fungal attack on plants results in degradation of the chlorophyll pigments in leaves. Such infected leaves have patches of green and yellow. In plant breeding, it is important to quantify the leaf infection, which requires area measurement of the green and yellow portions. This had been a very difficult task earlier and is now made easy by the ColorPro software.
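ColorPro itself is proprietary, but the underlying idea of classifying pixels by colour and converting the counts to area can be sketched with OpenCV. The HSV thresholds, the scan resolution, and the file name leaf.png below are illustrative assumptions that would need calibration for real images:

```python
import cv2

# Scanned leaf image on a plain background; 'leaf.png' is a placeholder name.
img = cv2.imread("leaf.png")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Rough HSV ranges for healthy (green) and infected (yellow) tissue;
# real work would calibrate these per scanner and lighting setup.
green = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))
yellow = cv2.inRange(hsv, (20, 40, 40), (34, 255, 255))

green_px = cv2.countNonZero(green)
yellow_px = cv2.countNonZero(yellow)
leaf_px = green_px + yellow_px

# Convert pixels to cm2 using the scan resolution (assumed 300 dpi here).
px_per_cm2 = (300 / 2.54) ** 2
print(f"leaf area: {leaf_px / px_per_cm2:.2f} cm2")
print(f"infected fraction: {yellow_px / leaf_px:.1%}")
```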
Protein estimation is an important technique in many biochemical experiments, including those related to plant biochemistry. Most routinely used methods for protein estimation are based on spectrophotometric measurements, which are cumbersome, laborious and may require large quantities of protein. A new method has been developed for protein estimation using the ColorPro software (Banner, et al. 1999). The method involves spotting a constant volume of protein solutions (standard and unknown) on nitrocellulose paper, staining with Ponceau S and destaining. A digital colour scanner is used to grab the image of the pink spots. The intensity of the colour of each spot is digitized and measured in arbitrary units termed the inverse integrated gray value.
The advent of biotechnology has created a need for optimization of microbial processes in many fields of applied science. The ColorPro software has routines that can be used for counting bacterial colonies. Different parameters of a colony in the image, such as intensity, size, shape factor, connectivity and colour, can be studied while counting the colonies. Enhancing the contrast during acquisition and refining the thresholding yield better images for more accurate and faster counting.
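A modern equivalent of such colony-counting routines can be sketched with OpenCV's connected-component analysis. The Otsu threshold and the minimum-area filter below are illustrative choices, and plate.png is a placeholder file name:

```python
import cv2

# Grayscale image of a culture plate with dark background and bright colonies.
img = cv2.imread("plate.png", cv2.IMREAD_GRAYSCALE)

# Separate colonies from the background with an automatic (Otsu) threshold.
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Label connected components and collect per-colony statistics.
n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)

# Component 0 is the background; drop specks below a minimum area.
min_area = 20
colonies = [i for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= min_area]
print(f"colonies counted: {len(colonies)}")
```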
Use of image analysis
Electrophoretic separation of proteins on polyacrylamide gels is a commonly used technique in biology. Protein gels often show a large number of bands, which must be compared among different samples. The storage, retrieval, and analysis of electrophoregrams with the help of a computer-based image analysis system is useful where a large number of samples are to be analyzed and compared with a standard pattern of bands. The ColorPro and BIAS software packages can be used for comparative analysis of protein gels, as well as DNA gels after silver staining.
In an era where industries thrive on data-driven decisions, the demand for proficient data scientists has hit an all-time high. With such tantalizing career prospects on the horizon, choosing the right data science training program becomes critical. Aspiring data scientists must navigate a complex landscape of educational opportunities to find the program that not only fuels their passion but also propels them toward a successful career. This article serves as a guide to shed light on the various aspects to consider when selecting a data science training program.
Understanding the Data Science Landscape
Data science is an interdisciplinary endeavor that requires a blend of statistics, machine learning, programming, and domain expertise. Prospective students should recognize how the integration of these elements allows data scientists to extract valuable insights and predictive patterns from both structured and unstructured data. Consequently, any program under consideration should offer a curriculum that weaves these components into a coherent learning experience.
Furthermore, data science is not confined to a single industry; it’s omnipresent across fields such as finance, healthcare, e-commerce, and technology. Hence, it’s beneficial to choose a training program that elucidates the application of data science in various sectors, thereby broadening a student’s horizon and employability.
Setting Personal Learning Objectives
Before embarking on this educational journey, it is paramount to align the program with one’s career aspirations. Students should introspect on their interests within the data science realm, whether it be the statistical undercurrents, machine learning algorithms, or the intricacies of data manipulation and analysis. This clarity aids in selecting a program that best complements individual professional goals.
A precise assessment of personal skill sets can guide one toward the learning experience that will fill the necessary knowledge gaps and strengthen existing skills. The selection should reconcile personal objectives with the program’s offerings to set the stage for a satisfying and fruitful educational endeavor.
Evaluating Program Curriculum and Objectives
A robust curriculum is the cornerstone of an effective data science training program. It should holistically cover foundational topics while also exploring advanced techniques such as deep learning and big data analytics. Prospective students must ensure that the program’s stated objectives resonate with their personal learning goals and that the curriculum promises a comprehensive coverage of essential skills.
Transparency in articulating program goals and curriculum structure is indicative of a thorough and organized educational approach. This clarity enables students to navigate their learning journey with confidence, understanding exactly what expertise and capabilities they will acquire upon conclusion.
Instructor Quality and Expertise
Instructors are the navigators of the learning journey. Highly qualified educators who bring a mix of academic rigor and practical industry experience are crucial. They offer invaluable insights and facilitate the bridge between theoretical knowledge and real-world application.
Prospective students should seek out programs led by instructors who not only comprehend the technicalities of data science but can also convey complex concepts in digestible lessons. A great teacher’s mark lies in their ability to inspire and adapt to the unique learning styles of their students, fostering an environment ripe for intellectual growth.
Hands-on Experience and Project Work
Theory alone does not make a data scientist. Practical application through rigorous projects and exercises is essential to solidify the skills learned in class. Training programs must, therefore, provide ample opportunities for hands-on experience to apply theoretical concepts to real-world scenarios.
The types of projects offered can significantly affect a student’s competitiveness in the job market. Look for programs that simulate the complexity of actual data science tasks and encourage innovation and creative problem-solving. This practical exposure is where theoretical knowledge transforms into actionable expertise.
Access to Resources and Collaborative Learning
For a thorough dive into the world of data science, access to state-of-the-art resources is indispensable. This includes software tools and datasets, as well as computational facilities that replicate the challenges and environments of the industry.
Equally important is the propensity of the learning environment to foster teamwork and peer discussion. Data science often involves collaborative work, and programs that simulate this dynamic prepare students for team-driven industry dynamics, enhancing their collaborative skills while deepening their understanding of the subject matter.
Assessment and Constructive Feedback
Monitoring progress through regular assessments ensures that students are on the right track and mastering the necessary concepts. These evaluation points serve to identify areas of strength and those requiring additional focus.
Feedback is a critical component that shapes burgeoning data scientists, propelling them toward greater competence. Quality training programs include timely, constructive feedback within their pedagogical framework, providing personalized guidance and support throughout the learning process.
Industry Connections and Career Services
The bridge to the data science job market is often built on networking and industry connections. Training programs should offer career services, including counseling, mentorship, and placement assistance, to facilitate a smooth transition from academia to industry.
Curricula that reflect industry standards and demands, coupled with opportunities for students to engage in internships or collaborative projects with industry partners, are valuable. These alliances enrich the educational experience and can be pivotal in securing post-graduate opportunities.
Measuring Graduate Success and Outcomes
The success of any training program is reflected in the career trajectories of its graduates. A thorough investigation into alumni outcomes can provide insights into a program’s capability to propel students into desired industry roles.
Prospective students are encouraged to review alumni testimonials and job placement statistics, which can give a clear indication of the program’s effectiveness in meeting educational and professional objectives.
Importance of Continuous Learning and Improvement
Finally, data science is a fast-moving field, and strong programs instill the habit of continuous learning that a career in it demands. In today’s industry, data-driven decision-making is paramount, putting expert data scientists in high demand. For those eyeing this lucrative field, the choice of data science training program is crucial. Prospective data scientists must sift through a myriad of educational paths to find one that ignites their interest and ensures career success. Given the competition and the vast options available, it’s vital to select a program that offers a robust curriculum, real-world application opportunities, industry recognition, and alignment with long-term professional goals. With the right training, budding data scientists can look forward to a dynamic and rewarding career in this cutting-edge field.
The share of the world's population living in urban areas currently stands at 54%, and this is set to rise to 66% by 2050. The world's megacities, cities holding more than 10 million inhabitants, are also due to increase in number.
With the populations of urban areas rising worldwide, managing the increased strain on resources is one of the major challenges that the world is facing in the present day.
To address this challenge, recent focus has turned to evolving cities to become more efficient in order to keep up with their surging populations.
These newly developed cities are known as smart cities. Singapore, with 100% of its population living in urban areas, is understandably leading the way in the development of smart cities around the world.
Global tech leaders are looking to Singapore to lead the way, as cities worldwide search for the best ways to build cities that smartly manage the needs of millions of inhabitants.
Big data and the Internet of Things are thought by many to be the solution to building smart cities.
However, the amounts of data that the IoT will need to process to run a smart city will be enormous, and so an efficient network that can handle this data volume is essential.
The implementation of 5G will likely provide this network, therefore enabling the IoT to become a possibility on a larger scale.
Once 5G and the IoT are running, the technology will be able to assist the setup and smooth running of smart cities.
To support smart cities and the necessary 5G and IoT, strong connectivity will be absolutely essential.
Without connectivity, processing the sheer volumes of data that smart cities will need to create a network of communicating devices will be impossible.
This is the ‘in’ that telcos need to become involved in this next generation of technology and communications.
Every device that is part of a smart city must work with others to manage the resources of megacity populations. These devices must therefore communicate with each other if the city is to be truly ‘smart’.
Without technology that supports this level of connectivity, smart cities will struggle to succeed. Connectivity is at the very core of service providers' businesses.
With their ability to provide connectivity, their substantial experience in managing complex and busy networks, and their knowledge in dealing with cyber security issues, telcos are in a prime position to profit from smart cities.
However, to keep up with the substantial requirements that smart cities will present, they must invest time and money developing both their networks and management systems.
A major problem with the current development of smart cities is that, all too often, technology is being used to solve problems that don’t already exist or are not priority – such as smart parking.
A true smart city should use technology to solve pre-existing problems, such as environmental impact, or the strain of growing populations on transport and resources.
A list of 10 key steps in building a smart city by The Guardian is in agreement with this view, and suggests we must first ‘work out what problems need fixing.’
They suggest that too many smart city visions concentrate on big data and the Internet of Things when there are more fundamental problems – such as how to effectively implement these systems.
The message here is not that big data and the IoT are irrelevant to the cause, but that without the more basic technology solutions that first ensure their success (such as connectivity and 5G), smart cities cannot be brought to fruition.
The logistical, financial and management sides of smart cities presently lack planning.
The Guardian suggests effective smart cities are built from the bottom up, and they cite Fujisawa in Japan as an example of this.
To match the success of Fujisawa, the logistical solutions that enable the innovative developments in technology must be considered and once more, connectivity is at the centre of this.
Communications and networking must not be overlooked by smart cities if they are to succeed; telco involvement could see the development of ‘heterogeneous networks’ that can deliver service to and share data with the multiple platforms that smart cities will need.
If telcos were to provide smart technology offerings that enable the development of smart cities, this would provide the gateway for others to build true smart cities.
Telcos are currently investing in improving their capacity, speeds and latency to match the expectations of 5G – and these same developments will assist in answering the demand that smart cities will place on networks.
The immediate priority then, is to gear up for 5G by investing in and developing networks that are capable of hosting its speed and latency, and that are capable of managing the IoT solutions that will follow its release.
Other priorities are to involve these technologies in the development of smart cities, and in so doing, place their communication services at their very core.
Finally, they must adapt their infrastructure and develop ‘heterogeneous networks’, as the current siloed networks will not support smart cities.
If telcos succeed in adapting, investing and developing their technology solutions to meet the innovation of 5G, the IoT and ultimately smart cities, they will be in a prime position to benefit from the resulting revenue streams.
They will also be able to leverage big data and various cloud computing solutions to offer value added solutions, which will generate yet more profit opportunities. | <urn:uuid:ef24bd40-0684-4bb8-872d-de100436058d> | CC-MAIN-2024-38 | https://www.information-age.com/role-telcos-smart-cities-3704/ | 2024-09-18T08:04:25Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651886.88/warc/CC-MAIN-20240918064858-20240918094858-00570.warc.gz | en | 0.964264 | 1,100 | 2.734375 | 3 |
As technology continues to evolve, integrating graphic design applications and programs into your curriculum can significantly enhance your students’ learning experience. Here are five compelling reasons why you should be incorporating graphic design apps into your CTE curriculum:
1. Fosters Creativity, Reduces Burnout
Graphic design apps like Adobe Photoshop provide students with the tools to unleash their creativity. Creativity plays a crucial role in brain development and cognitive function – engaging in creative activities stimulates neural connections, enhancing problem-solving skills, critical thinking, and emotional intelligence. According to a survey conducted by Adobe in 2023, “82 percent [of educators who used creative activities with their students this past year,] saw positive impacts on student well-being and engagement, contributing to teachers’ increased feelings of satisfaction and reduced burnout.” These applications allow students to experiment with colors, textures, and designs, fostering an innovative mindset that can be applied across various disciplines.
2. Enhances Visual Communication Skills
Incorporating graphic design apps in the classroom helps students develop essential visual communication skills. Why are visual communication skills so important? In a world of decreasing attention spans and shorter deadlines, one graphic design company emphasized that visual communications can significantly speed up our progress and that a visual strategy is essential for the future of work. This means that students who can effectively use visual tools to communicate are better equipped to convey information quickly and effectively, which is crucial in today’s visually-driven world.
3. Prepares Students for Future Careers
Graphic design skills are in high demand across many industries. The U.S. Bureau of Labor Statistics projects a 3% growth in employment for graphic designers from 2022 to 2032, with about 22,800 openings each year. By exposing students to these applications, you are equipping them with valuable skills that can open up various career opportunities in fields such as marketing, advertising, and media.
Beyond these traditional paths, graphic design skills are increasingly sought after in sectors like technology, healthcare, education, and even government. For instance, tech companies require skilled designers to create user-friendly interfaces and engaging digital experiences, while healthcare organizations need designers to develop clear and accessible patient information materials. Nowadays, the need for graphic designers who can produce eye-catching visuals and content that capture audience attention is greater than ever.
4. Supports Interdisciplinary Learning
Graphic design is not limited to the art classroom. Graphic design apps can be integrated into different subjects, enhancing interdisciplinary learning. For instance, in science classes, students can use graphic design to create detailed infographics that simplify complex concepts such as the life cycle of a plant or the structure of a cell. These visual aids make the information more accessible and easier to understand. In history lessons, students can design interactive timelines or digital posters to illustrate significant events, bringing historical periods to life in a visually engaging way. This not only aids retention but also encourages deeper engagement with the material. By integrating graphic design into various subjects, educators can create a more engaging and holistic learning environment that not only enhances students’ understanding of the subject matter but also fosters critical 21st-century skills such as creativity, collaboration, and digital literacy.
5. Boosts Student Engagement
Using graphic design applications in your teaching strategy can significantly increase student engagement. The interactive and hands-on nature of these tools captures students’ interest and encourages active participation, leading to a more dynamic and enjoyable learning environment. For example, students who might struggle with traditional lecture-based instruction often find graphic design tasks more engaging and accessible, allowing them to express their understanding creatively.
Moreover, the immediate visual feedback provided by graphic design tools allows students to see the results of their work in real-time. This instant gratification can be highly motivating, encouraging students to experiment and iterate on their designs, leading to a deeper understanding of the subject matter.
By incorporating graphic design applications into your classes, you are not only enhancing your students’ educational experience but also preparing them for a future where these skills will be invaluable. Plus, learning can become more relevant and connected to students’ lives outside of school as students can apply their design skills to personal projects and hobbies, further solidifying their learning and making education a more holistic and continuous experience. Start exploring the possibilities today and watch your students thrive in new and exciting ways!
For more insights and resources on integrating technology into education, visit our blog regularly. Let’s empower the next generation with the skills they need for a digital world, itopia can help you get there! Learn more. | <urn:uuid:c9f2f3d6-23dc-4b55-b949-4667a30354b4> | CC-MAIN-2024-38 | https://itopia.com/5-reasons-why-you-should-be-utilizing-graphic-design-apps-in-your-classroom/ | 2024-09-20T19:20:00Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701423570.98/warc/CC-MAIN-20240920190822-20240920220822-00370.warc.gz | en | 0.930939 | 932 | 3.015625 | 3 |
Safeguarding Schools Against RDP-Based Ransomware
How getting online learning right today will protect schools, and the communities they serve, for years to come.
The FBI has issued a warning to US K-12 school districts, advising them that they are being targeted by cyberthieves and should take extra precautions to secure their networks. With schools around the world responding to COVID-19 restrictions by moving to online learning, millions of students and teachers are logging on to school networks for classes and assignments. Many of them use unmanaged computers that are prone to vulnerabilities, creating countless opportunities for cybercriminals to use those devices as an attack vector to the internal network.
While the pandemic has opened new opportunities for cybercriminals, attacks on K-12 schools are nothing new. In fact, they have been on the rise for some time. The K-12 Cybersecurity Resource Center's Year in Review for 2019 reported 348 publicly disclosed "incidents" at schools, three times as many as in 2018. In 2020–2021, the FBI anticipates a major increase in attacks as schools begin to open, even as many states still struggle to contain the novel coronavirus, while others that were initially successful are battling a second wave following reopenings.
Most school districts now acknowledge that things will not be back to normal this fall, and they are planning hybrid learning solutions for the school year. Hackers are delighted with this development since distance learning is often implemented using Microsoft's Remote Desktop Protocol (RDP), one of the prime targets for cybercriminals, aiming for quick gains. Their primary tactic: install ransomware that locks up data until ransoms are paid. Recently, in June 2020, the University of California San Francisco School of Medicine paid a ransom of over $1 million to regain access to important scientific data.
While a K-12 school or school district may not have data worth millions, cybercriminals know that schools often lack the resources large corporations deploy to guard against cyberattacks, which makes them prime targets. One specific attack vector the FBI has warned about is Ryuk ransomware, which is deployed via RDP endpoints; in the K-12 environment, that means the devices used by students, parents, and teachers. Ryuk uses a sophisticated type of data encryption that targets backup files. Once the end user has been infected, that person can propagate the virus to the school's servers, where it can cause havoc.
"Vaccinating" Against Ransomware Infections
There are relatively simple and affordable steps to empower educational organizations providing distance learning, while keeping schools and districts safe from cyberthreats. The FBI recommends the following five steps:
Step 1: Backup your data. Make sure your backups cover your most important files.
Step 2: Secure your backups. Backups should not be connected to the computers and networks that are being backed up because anything that's connected to your network, including your backup files, can be encrypted in a ransomware attack. Also, if a good, up-to-date backup is available, there's no need to pay ransomware because all data can be quickly restored.
Step 3: Make sure that all software and operating systems are up to date and security patches are installed. It's important to make sure that all end users are updating their software, including parents, teachers, administrators, and students, in addition to software on the school's server. (Good luck with that!)
Step 4: Monitor all remote connections and software. Identify unusual activity, such as failed login attempts from any administrator-level account (see the example after this list).
Step 5: Use two-factor authentication for login. Also apply "least privilege" controls, which allow users to access only the data and applications they need. This includes, for instance, allowing just read-only access, without the ability to alter content in any way, if students have no need to write while using a specific application.
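As a concrete example of Step 4, here is a minimal PowerShell sketch, assuming a Windows server and an elevated session, that pulls recent failed logons (Windows records these in the Security log under event ID 4625) and summarizes them by targeted account:

# List the 50 most recent failed logons and group them by target account.
# Note: Get-WinEvent throws an error if no matching events exist yet.
$events = Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4625 } -MaxEvents 50
$events | ForEach-Object {
    $xml = [xml]$_.ToXml()
    [pscustomobject]@{
        Time     = $_.TimeCreated
        Account  = ($xml.Event.EventData.Data | Where-Object Name -eq 'TargetUserName').'#text'
        SourceIP = ($xml.Event.EventData.Data | Where-Object Name -eq 'IpAddress').'#text'
    }
} | Group-Object Account | Sort-Object Count -Descending | Format-Table Count, Name

A sudden spike of failures against an administrator-level account is exactly the kind of unusual activity worth alerting on.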
Best Backup and Remote Access Security Practices
There are two effective ways to secure your backups from ransomware demands. The first is to use a backup that is completely offline. Why? Because if the backup is connected to the network, it likely won't help you; hackers now deploy tools that allow them to encrypt your backups as well. A disk backup may seem old-fashioned, but it's still an effective way to have a restorable backup in the event of a ransomware attack. It's also important to keep records of backups, so you can easily find the one created most recently, prior to the attack. Another advantage of storing disk backups off-site is they are available in the event of disaster such as fires or floods.
The second way to secure backups is to use cloud backup services. These services are designed with disaster recovery in mind and generally offer file versioning, which means you can restore from a backup that was done before your data was hijacked by ransomware and preserve your file structure, which makes the restore process much simpler.
Advanced Remote Access Software
Look for server-based software that is easy to control and maintain. Your IT professionals have the expertise to manage and deploy patches, but getting students, staff, and parents to update software that's installed on their devices, consistently and promptly, is a much greater challenge. The right software can be a lifesaver here. For example, some solutions are browser-based and, as a result, do not require software to be deployed on the devices themselves. Also, look for software that allows you to apply least-privilege access principles, giving different users varying levels of access (read versus write) to various applications and IT resources on servers.
Today's investment in securing online learning will also return long-term benefits. The pandemic has accelerated the digital transformation journey for many, and schools are likely to continue incorporating distance learning as an adjunct to the traditional classroom. So, getting it right today will protect schools, and the communities they serve, for years to come.
In recent years, IT in healthcare management has become a hot topic among people everywhere. In the United States alone, 45% of people have used products like digital health apps and fitness trackers, according to Gallup research.
While it’s undeniable that these apps can make staying fit and keeping up with your dietary regimen much easier, they also present a potential risk to your data security.
These devices collect millions and millions of gigabytes of user data on a monthly basis. Even though a good number of companies behind these apps abide by HIPAA-sanctioned data protection guidelines, they can’t guarantee the safety of your personal information.
Today, we’re going to talk about electronic health record security, including how many health tracking apps actually collect user data, in what ways this data collection can affect your personal life, and the most effective data security tactics and best practices you can implement to better protect yourself.
How Many Apps Share Your Private Data?
Modern entrepreneurs have been transforming healthcare with IT for more than two decades. The resulting healthcare apps can now help with everything from tracking your prescriptions and fitness progress to measuring your blood pressure and heart rate in real time. However, a recent study from the BMI Journal has shown that these apps also pose a great risk to consumers’ privacy.
For this study, researchers took 24 random health tracking apps from the top 100 apps in the Google Play store and examined their data-protection capabilities. What they discovered was astounding: nearly 80% of the apps in question were shown to share data in ways that violated user privacy. In particular, many developers and their parent companies share user data with 3rd-party organizations, who, in turn, sell that data to 4th parties and so on.
How Data Collection May Affect Your Life
You may be thinking, "Sure, these apps may be collecting my personal data, but what company doesn't do that nowadays?" This may be true; it may seem pointless to worry over health tracking apps when there are other apps that follow similar practices. However, it doesn't pay to ignore the dangers completely. Data collection can negatively impact your life in several ways over time, including the following:
All Your Basic Personal Information Gets Shared
App developers collect user information for data analytics, which helps them improve their services. Unfortunately, the collected data is often shared with other companies, who then sell it to 4th-parties like Facebook and Oracle.
This shared data often includes all of your basic info, such as your name, age, gender, height, weight, email address, and more. Having this information shared may seem harmless on the surface, but it tells outside companies several details about you that can prove harmful if misused.
Your Personal Device Gets Overrun by Advertisements
Even though the apps you download may not cost you any money, the companies behind them still need some way to make a profit. Often, these so-called "freemium" apps earn their money by sharing your identifiable information with advertisers and allowing advertising networks to track your mobile device. At best, it can be a nuisance; you may find that you see more and more advertisements as you use the apps. At worst, this tracking enables targeted ads, which monitor your online behavior in much more invasive ways.
Due to Poor Security, Your Data May Be Leaked At Any Time
More often than not, health tracking applications transmit data over insecure networks. What’s worse, some of them don’t even encrypt the data, making it far more vulnerable and easy to breach. If you think this won’t happen to you, just keep this fact in mind: in the biggest healthcare data breach last year, more than 25 million files were leaked. You can’t be too careful—it pays to fortify your data security however you can.
3 Electronic Health Record Security Methods
You can’t prevent development companies from collecting certain personal data, like your first name, surname, and email address. But, you can use certain tactics to reduce data collection. Let’s talk about data security methods and best practices for protecting your personal information.
1. Select Your Health Apps Wisely
Most healthcare IT experts recommend that you take some time to research the apps you’re interested in before you even download them. One way to do this would be to talk to your insurance agency and see which apps they recommend. Apps suggested by reputable sources of this kind are far more likely to follow security guidelines and fall under data privacy laws, and they may even prove to be more useful than some of the apps you might come across on your own.
2. Google the Company From Time to Time

Even after you've installed an app, it pays to periodically search for news about the company behind it. Reports of data breaches, lawsuits, or sudden changes in privacy practices are strong signals that it's time to reconsider using the app.
3. Check the Privacy Settings on Your Apps
Another thing you need to do as soon as you install any new app on your device is to check the privacy settings. Carefully examine the permissions the app is asking for. For example, some apps request very little access. However, if the app is asking to access your contacts, location, or microphone, you should take it as a red flag and stop using the app altogether.
Take Control of Your Electronic Health Record Security Now!
We've covered all the basics. In general, you've got all you need to know about the privacy and security of your electronic health records right here. If you don't want to go over the entire article again, here's a quick rundown:
- Make sure to research/check all apps before downloading them
- Research the companies behind the health apps you’re using
- Go through the apps’ privacy settings and limit data collection
When it comes to data security, these are the best practices. That said, you can’t go wrong asking for additional help from the experts. If you are worried about data security in your practice, contact Scale Technology today. Ask for a meeting—we are happy to help you one-on-one. Let us give you the tools to improve the safety of your clients’ data in no time. | <urn:uuid:42a5afbd-9b38-413a-a4fc-151184af02dc> | CC-MAIN-2024-38 | https://www.letscale.com/how-safe-are-health-tracking-apps/ | 2024-09-13T14:33:24Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651523.40/warc/CC-MAIN-20240913133933-20240913163933-00234.warc.gz | en | 0.948601 | 1,253 | 2.671875 | 3 |
A very powerful infrastructure configuration (though commonly misunderstood) is RAID, short for Redundant Array of Independent Disks. As the name implies, the purpose of a RAID configuration is to provide redundancy for your hard drive, and depending on the configuration can be an excellent way to improve redundancy or performance for your drives. We will be reviewing some of the more common configurations, as well as what they can and cannot do.
When most people that are somewhat familiar with RAID think of the concept, they are thinking of two hard drives that act as a mirror to one another. This is known as RAID 1, and it does exactly that: the two hard drives are mirrors of each other, so if one drive fails, your data isn't lost. However, RAID 1 can't keep a software corruption out of the picture (it is an exact mirror, for better and for worse), so if you have completely corrupted your system, RAID 1 unfortunately will do nothing for you. In addition, the cost to implement RAID 1 is relatively high, since you need to buy two hard drives to get the storage of one. In theory this mirroring can scale up, with four drives yielding the capacity of two and so forth, but in practice other RAID levels are usually used at that scale.
RAID 5 is a more efficient method than RAID 1 because it uses block-level data striping with parity distributed among the drives, so the array can recover from the failure of any single hard drive. However, the parity comes with a cost, as it slows down write speeds when putting data into your configuration. RAID 5 also requires more hard drives than RAID 1, with a minimum of four hard drives in place before we can implement it in our Dedicated Server solutions.
RAID 10 is a striped RAID 0 array that is mirrored like RAID 1. RAID 0 on its own is purely for performance and doesn't have any redundancy, which is why we didn't mention it before; as a standalone configuration there isn't much use for it in the business world (it is very popular in the gaming world for players however). By combining the best of these two configurations, RAID 10 is a very high performance configuration that is fault tolerant; in some cases RAID 10 can recover from multiple drives failing. However, it also shares RAID 1's main drawback: only half of the raw drive capacity you purchase is usable.
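To make the capacity trade-offs concrete, here is a small illustrative PowerShell sketch (the function name is our own, and it assumes identical drive sizes and the common textbook formulas):

# Usable capacity for an array of equal-sized drives.
# RAID 1 and RAID 10 mirror data, so half the raw capacity is usable;
# RAID 5 spends one drive's worth of space on parity.
function Get-UsableCapacity {
    param([int]$DriveCount, [double]$DriveSizeTB, [string]$Level)
    switch ($Level) {
        'RAID1'  { $DriveCount / 2 * $DriveSizeTB }
        'RAID5'  { ($DriveCount - 1) * $DriveSizeTB }
        'RAID10' { $DriveCount / 2 * $DriveSizeTB }
    }
}

# Four 4 TB drives:
"RAID 1: $(Get-UsableCapacity 4 4 'RAID1') TB usable"    # 8 TB
"RAID 5: $(Get-UsableCapacity 4 4 'RAID5') TB usable"    # 12 TB
"RAID 10: $(Get-UsableCapacity 4 4 'RAID10') TB usable"  # 8 TB

The numbers show why RAID 5 is considered more space-efficient, and why mirrored configurations cost more per usable terabyte.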
Of course, none of these RAID levels can restore data to a previous point in time, which is required in scenarios such as software corruption or malware attacks. RAID does provide redundancy that is of great value; if a drive fails there is no downtime because the data is already there. But to get a complete data protection solution it is best to combine RAID with a backup solution. RAID alone isn't a backup plan, and using it as such is a recipe for disaster. Using it to support a backup plan, however, can be a great way to dramatically speed up the return to regular business activity.
HIPAA 101: What does HIPAA stand for?
Let's begin with the question: What does HIPAA stand for? In full, HIPAA stands for the Health Insurance Portability and Accountability Act of 1996, or the HIPAA Act for short. It's a US privacy law to protect medical information like patient records and allow for confidential communication between patients and medical professionals.
The HIPAA Act was enacted August 21, 1996 by the 104th US Congress and signed by President Bill Clinton. The long title for the HIPAA Act specifies, “An Act To amend the Internal Revenue Code of 1986 to improve portability and continuity of health insurance coverage in the group and individual markets, to combat waste, fraud, and abuse in health insurance and health care delivery, to promote the use of medical savings accounts, to improve access to long-term care services and coverage, to simplify the administration of health insurance, and for other purposes”.
The HIPAA Act is also known as the Kennedy-Kassebaum Act since it was initially introduced in Congress as the Kennedy-Kassebaum Bill. Democratic Senator Edward Kennedy and Republican Senator Nancy Kassebaum were two of the leading sponsors of the bipartisan bill. HIPAA had two main objectives as specified by Title I and Title II of the act.
What does HIPAA mean in daily practice?
So now we know what HIPAA means, but let's look at what HIPAA stands for in actual usage to answer the question "What does HIPAA mean in daily practice?"
Title I – Health Care Access, Portability, and Renewability
Title I protects health insurance coverage for workers and their families when they change or lose their jobs.
Title II – Preventing Health Care Fraud and Abuse; Administrative Simplification; Medical Liability Reform
Title II provisions require the establishment of national standards for electronic health care transactions and national identifiers for providers, health insurance plans, and employers. The provisions also address the security and privacy of health data. The HIPAA Administrative Simplification provisions require the Department of Health and Human Services (DHHS) to adopt national standards for unique health identifiers, security, electronic health care transactions and code sets. Medical Liability Reform provides for civil penalties to be assessed against health providers who fail to comply with the law.
HIPAA Rights of Privacy
HIPAA regulations provide rights of privacy for individuals, including those individuals aged 12 to 18. Under HIPAA regulations, health providers must have a signed disclosure from individuals before releasing any information related to their health care to anyone, including their parents. HIPAA applies to all health plans, healthcare clearinghouses, and healthcare providers that electronically transmit health information in connection with standard transactions. Transaction standards are defined under HIPAA through the Electronic Data Interchange (EDI) of administrative and financial healthcare transactions.
HIPAA specifies that health providers must take responsibility for the authorized disclosure of Protected Health Information (PHI), but it did not specify that notice of a breach of such information be provided to the individuals whose information was breached. To ensure that individuals are notified of security breaches of PHI, the Health Information Technology for Economic and Clinical Health Act (HITECH Act) was enacted in February 2009. HITECH was enacted as part of the 2009 American Recovery and Reinvestment Act (ARRA) to significantly change HIPAA Administrative Simplification provisions. Under HIPAA HITECH regulations, breaches must not only be disclosed to individuals, but when 500 or more individuals' information is breached, notice must also be sent to the DHHS and the media. In addition, HITECH increases the civil penalties for non-compliance and it provides for more enforcement.
The 1996 Health Insurance Portability and Accountability Act (HIPAA) was an attempt to reform health care and to balance the rights of individuals against the responsibility of healthcare providers. HIPAA incorporates a HIPAA Privacy Rule that protects the health information of individuals held by health plans, health care providers, state Medicaid agencies, health care clearinghouses and their business associates. HIPAA also incorporates a HIPAA Security Rule that establishes standards and safeguards that must be put in place to assure the integrity, confidentiality, and availability of electronic Protected Health Information (ePHI) relative to the access to stored information and the interception of transmitted information. The Department of Health and Human Services (DHHS) Office of Civil Rights has the responsibility for enforcing the HIPAA Privacy Rule and the HIPAA Security Rule. Through audits and investigations, the DHHS found that many healthcare providers willfully neglected to follow the rules established by HIPAA or breached the Protected Health Information (PHI) that was held on individuals.
What does HITECH stand for?
HITECH stands for the Health Information Technology for Economic and Clinical Health Act (HITECH Act), which was signed into law as part of the 2009 economic stimulus bill, known as the American Recovery and Reinvestment Act (ARRA), to revise certain provisions of the HIPAA laws as they relate to privacy and security protections. HIPAA HITECH increases the scope of protections for individuals, increases penalties that may be levied against health providers for non-compliance and provides for more enforcement of established rules.
Scope of Protections
Under HIPAA regulations, individuals are granted specific rights with respect to the privacy of their identifiable health information, and HIPAA rules provides for the disclosure and sharing of that information with certain entities when they have a legitimate need to know. The HIPAA HITECH Act revises parts of the Social Security Act to expand upon the privacy and security protections granted to individuals under the HIPAA. The HIPAA HITECH Act specifies that heath care providers must implement a system of Electronic Health Records (EHRs), and the act provides for monetary incentives to those healthcare providers who are able to show “meaningful use” of their established EHRs until the year 2015. After 2015, healthcare providers will be penalized for failing to show such use of their EHRs. HITECH also specifies that individuals, or specified third parties, be entitled to an electronic copy of all ePHI that pertains to them.
HIPAA set guidelines for the disclosure of Protected Health Information (PHI), but it did not require disclosure to individuals when their personally identifiable information was breached. HITECH regulations require that breaches of health information be provided to impacted individuals via first class mail with an explanation of the breach and an indication of processes being put into place to resolve the breach. If a breach impacts 500 or more individuals, healthcare providers must notify those individuals and also the DHHS, the media and the State Privacy Officer.
HIPAA HITECH establishes four categories of violations, associated penalties and maximum penalty amounts for violations of the law. The HITECH Act imposes penalties against health providers even in cases where they did not know or would not have known of a violation, and exempts them from penalties if a violation was not a result of willful neglect and it was corrected within 30 days.
Conclusion & Further reading
So to answer the question “What does HIPAA stand for?, we can safely say HIPAA stands for two different purposes. First there are rules and regulations to enforce privacy and security rules on companies and individuals working in healthcare. But it also opened the door for a whole range of companies offering HIPAA certified products and services to assist healthcare professionals in abiding the HIPAA laws. These companies have developed software and services to assist in handling healthcare data.
- HIPAA Compliant Hosting solutions are poised to provide systems that will ensure that health data falls within compliance for HIPAA, HITECH and EHRs.
- HIPAA Compliant Email specializes on secure email.
- HIPAA Compliant Cloud Storage does the same for electronic storage purposes.
- HIPAA Training Resources (including HIPAA training for the Army)
- What exactly is HIPAA certification?
- HIPAA Forms Explained: Privacy and Authorization
Various people come to this page looking for answers on "What does HIPPA stand for", "HIPPA stands for" or "What does HIPPA mean?". Thousands of people are searching for HIPPA compliance, HIPPA laws, HIPPA training, HIPPA certification, or even the HIPAA acronym. In fact, almost 1/3 of the people looking for information about the HIPAA acronym spell it as "HIPPA". So to make sure everyone is on the right page, the correct HIPAA acronym is:
HIPAA: “The Health Insurance Portability and Accountability Act of 1996”. | <urn:uuid:bdf36507-e089-4290-aaaa-22356ae8b266> | CC-MAIN-2024-38 | https://www.hipaahq.com/hipaa-101-what-does-hipaa-stand-for/ | 2024-09-20T22:39:15Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701425385.95/warc/CC-MAIN-20240920222945-20240921012945-00534.warc.gz | en | 0.939201 | 1,766 | 2.828125 | 3 |
Risk is a function of the potential impact of something happening and the likelihood that it will happen.
We also know that digital privacy is the assurance that the system you are using only uses the data you intend in the way that you consent.
Let’s apply these concepts by examining the privacy disclosures in an Apple App Store listing.
We’ll use the App of the Day for Apple’s iOS as our example.
Today, it’s Explain Everything Whiteboard. This app is an award winner and aims to help you “teach, present, sketchnote, record videos, and work together.”
App Store Disclosure
A required part of the App Store listing is the “App Privacy” card. This critical tool that helps you evaluate the potential impact of a breach or issues with the app.
"Explain Everything Whiteboard" tracks some data that isn't directly linked to you: contact info, usage data, identifiers, and diagnostics.
Digging deeper we find out that they collect a user ID, device ID, email address, diagnostics data (like crash data), and product interactions.
Given that the app allows you to collaborate with others and share your whiteboards, it’s reasonable to expect the collection of information in order to enable that.
The device ID, user ID, and email address are the pieces of information that are needed to do that.
If this data was exposed would it impact you?
Probably not. Your email address is public. You enter it everywhere. The device ID and user ID are also reasonably public as any app on your device is potentially going to have access to those as well.
The other information collection by the app is data designed to help optimize the app and focus the developers efforts. There’s nothing really sensitive there.
The App Privacy card has given us the information we need to understand how this app uses our data. With that, we’ve made a reasonable evaluation of the impact this app could have on our privacy…which is negligible.
Rinse and repeat this process for any app you’re interested in using from the App Store. | <urn:uuid:2e0b6c57-17e9-49ae-aa71-0a178160eb25> | CC-MAIN-2024-38 | https://markn.ca/2022/how-the-app-privacy-card-in-the-apple-app-store-matters-to-you/ | 2024-09-07T15:40:46Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650883.10/warc/CC-MAIN-20240907131200-20240907161200-00834.warc.gz | en | 0.939563 | 452 | 2.546875 | 3 |
Over the last 25 years, the internet has created an explosion of text and data. Consumers today are doing far more than merely buying and selling goods online. They're also soaking up and generating a great deal of valuable data. Because most of this data is text-based and complex, deriving insights from it has been hard.
That's changing, as digital technologies evolve. Solutions that handle unstructured text can be a real game-changer. For instance, by picking up nuances in language, they can make a big difference when it comes to getting the right Google search result. They can also help us interpret social-media chatter to better understand human behavior.
Natural language processing (NLP), the means by which a computer absorbs, dissects, and analyzes text or speech, turns this data into encoded, structured information, then proposes actions based on the output in everyday language. Natural language understanding (NLU), a subset of NLP, parses unstructured inputs, then structures them in a way that both the machine and humans can understand and act upon.
NLU has captured attention in some industries by answering complex business problems. For example, it can help identify adverse events in drug safety reports, improve patient care by mining medical data, and retrieve information from lengthy contract documents. Yet, for the most part, industry hasn't exploited it for insights or invested in analyzing the rich motherlode of unstructured textual data originating from multiple online sources. | <urn:uuid:b8cd161b-e784-456a-82a5-33b4ce634db0> | CC-MAIN-2024-38 | https://www.genpact.com/insight/driving-growth-through-natural-language-text | 2024-09-08T18:35:02Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651017.43/warc/CC-MAIN-20240908181815-20240908211815-00734.warc.gz | en | 0.927453 | 299 | 2.90625 | 3 |
DNS works by mapping names to IP addresses. To perform this function, we need to know the hostname so that we can get the IP address information from the DNS server. However, sometimes what we have is the IP address, and we want to know what hostname is using that IP address. The good news is that a DNS server can also do reverse lookups, where the mapping is performed from IP address to name. To do a reverse lookup, the DNS server needs a Pointer (PTR) record. In this post, we'll explain how administrators can add a PTR record in Windows DNS Server.
How to Add PTR Record in Windows DNS Server
PTR record can be created automatically when creating Host A record or created manually. There are two ways to manually add PTR record in Windows DNS server. We can either add the PTR Record using DNS Manager or using PowerShell. But before we can add the PTR record, we need to ensure that the related zone has been created in Reverse Lookup Zones. Usually, the zone name for reverse lookup is in x.x.x.in-addr.arpa format, where x.x.x is the first three octets of the IP address in reversed order. For example, zone name for subnet 172.31.1.0/24 is 1.31.172.in-addr.arpa.
Now we’ll use scenario below to demonstrate the steps to add PTR record in Windows DNS server:
AS-DCO001 is a Windows Server 2012 R2 machine that serves as the Domain Controller and DNS server of a domain named mustbegeek.com. An application in the network requires the DNS server to map IP address information to host names. One of the hosts, named "AS-SVC001", is known to have the IP address 192.168.0.20. You are the network administrator of mustbegeek.com, and you need to add this information in a reversed way so that when the application queries the DNS server about the hostname with IP 192.168.0.20, the DNS server is able to answer the query with the hostname "AS-SVC001.mustbegeek.com".
Add PTR Record using DNS Manager
Open up DNS Manager and browse to the zone name under Reverse Lookup Zones. Ensure the zone name suits the IP subnet of the record that you want to add. In this scenario, the IP subnet is 192.168.0.0/24 and therefore the suitable zone name is 0.168.192.in-addr.arpa.
Right click on the zone name and select New Pointer (PTR)…
The popup window as shown below will appear.
In this popup window, fill in the details of the record:
- Host IP address: in this field, fill in only the last octet of the IP address record that you want to add. For this scenario, the IP address of the server is 192.168.0.20 and therefore fill the host IP address only with “20”. Notice that Fully Qualified Domain Name (FQDN) will be automatically updated as you fill in the IP address field.
- Host name: type in the FQDN of the hostname that is using the related IP address, or click Browse button to select a valid Host A record for the related hostname. In this example scenario, the FQDN of the hostname is AS-SVC001.mustbegeek.com
- Optionally, you can tick the option to Delete this record when it becomes stale to make this PTR record becomes dynamic. By default if you don’t tick this option the PTR record will be created as static.
- Also optionally, tick the option to Allow any authenticated user to update all DNS records with the same name to allow automatic update of this PTR record should the information on the related host is changed.
- The last detail is also optional, you can choose to modify the TTL value or let it be the default. TTL value configures how long client can keep this record in their resolver cache. In this example we’re setting it to 8 hours.
Click OK to finish adding the PTR record.
Add PTR Record using PowerShell
The command to add PTR record using PowerShell is:
Add-DnsServerResourceRecordPtr -Name "IP_ADDRESS_LAST_OCTET" -ZoneName "ZONE_NAME" -PtrDomainName "HOST_NAME_FQDN" [-AllowUpdateAny] [-AgeRecord] [-TimeToLive TTL_VALUE]
Note that you need to run PowerShell as admin to use the command, and you need to modify these variables below according to the details you want.
- IP_ADDRESS_LAST_OCTET = Replace this with the last octet of your host IP address. For this example we will replace this value with “20”.
- ZONE_NAME = Replace this with the zone name that match your host IP subnet. For this example we will replace zone name with “0.168.192.in-addr.arpa”.
- HOST_NAME_FQDN = Replace this with the FQDN of the hostname. In this scenario the hostname is “as-svc001.mustbegeek.com”.
- [-AllowUpdateAny] = This keyword has the same purpose as the “Allow any authenticated user to update all DNS records with the same name” option. Include this keyword only if you want to allow automatic updates of the PTR record.
- [-AgeRecord] = Include this keyword only if you want to make the PTR record as dynamic, as this keyword serves the same purpose as option to “Delete this record when it becomes stale”.
- [-TimeToLive TTL_VALUE] = Only include this keyword if you want to customize the TTL value, and replace the TTL_VALUE with the value in HH:MM:SS format. In this example we replace the value with “08:00:00”.
Below is the full command that reflects the same settings as the earlier configuration in the DNS Manager:
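Add-DnsServerResourceRecordPtr -Name "20" -ZoneName "0.168.192.in-addr.arpa" -PtrDomainName "as-svc001.mustbegeek.com" -TimeToLive 08:00:00

(Add the -AllowUpdateAny and -AgeRecord switches as well if you also ticked the two optional checkboxes in the DNS Manager walkthrough.)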
Working with PTR Record in Windows DNS Server
As with Host A records, multiple PTR records with the same "Name" can exist together for redundancy or load-balancing purposes. However, this may not be the best practice, as the DNS server will randomly use one of these records to answer a DNS query.
Also, when you add a PTR record for an IP address, you need to ensure that the PTR record points to the correct host, that is, the one actually using that IP address. If a PTR record is created pointing to a host that uses a different IP address, lookup results may be invalid.
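Once the PTR record exists, a quick way to verify it is to run a reverse query from any machine that uses this DNS server, for example with PowerShell's Resolve-DnsName cmdlet:

Resolve-DnsName -Name 192.168.0.20 -Type PTR

If everything is configured correctly, the answer should contain the hostname as-svc001.mustbegeek.com.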
And that's all you need to know before you add a PTR record in Windows DNS Server.
Electromechanical relays (EMRs) and Solid state relays (SSRs) are designed to provide a common switching function. An EMR provides switching through the use of electromagnetic devices and sets of contacts. An SSR depends on electronic devices such as SCRs and triacs to switch without contacts.
In addition, the physical features and operating characteristics of EMRs and SSRs are different. See Figure 1.
Figure 1. An electromechanical relay provides switching using electromagnetic devices. A solid state relay depends on SCRs and triacs to switch without contacts.
An equivalent terminology chart is used as an aid in the comparison of EMRs and SSRs. Because the basic operating principles and physical structures of the devices are so different, it is difficult to find a direct comparison of the two.
Differences arise almost immediately both in the terminology used to describe the devices and in their overall ability to perform certain functions. See Figure 2.
Advantages and Limitations
Electromechanical relays and solid state relays are used in many applications. The relay used depends on the electrical requirements, cost requirements, and life expectancy of the application.
Although SSRs have replaced EMRs in many applications, EMRs are still very common. Electromechanical relays offer many advantages that make them cost-effective. However, they have limitations that restrict their use in some applications.
Figure 2. An equivalent terminology chart is used as an aid in the comparison of EMRs and SSRs.
Electromechanical relay advantages include the following:
- normally have multi-pole, multi-throw contact arrangements
- contacts can switch AC or DC
- low initial cost
- very low contact voltage drops, thus no heat sink is required
- very resistant to voltage transients
- no OFF-state leakage current through open contacts
Electromechanical relay limitations include the following:
- contacts wear, thus having a limited life
- short contact life when used for rapid switching applications or high-current loads
- generate electromagnetic noise and interference on the power lines
- poor performance when switching high inrush currents
SSRs provide many advantages such as small size, fast switching, long life, and the ability to handle complex switching requirements. SSRs have some limitations that restrict their use in some applications.
Solid state relay advantages include the following:
- very long life when properly applied
- no contact to wear
- no contact arcing to generate electromagnetic interference
- resistant to shock and vibration because they have no moving parts
- logic compatible with programmable controllers, digital circuits, and computers
- very fast switching capability
- different switching modes (zero switching, instant- on, etc.)
Solid state relay limitations include the following:
- normally only one contact available per relay
- heat sink required due to the voltage drop across switch
- can switch only AC or DC
- OFF-state leakage current when the switch is open
- normally limited to switching only a narrow frequency range such as 40 Hz to 70 Hz
The application of voltage to the input coil of an electromagnetic device creates an electromagnet that is capable of pulling in an armature with a set of contacts attached to control a load circuit. It takes more voltage and current to pull in the coil than to hold it in due to the initial air gap between the magnetic coil and the armature.
The specifications used to describe the energizing and de-energizing process of an electromagnetic device are coil voltage, coil current, holding current, and drop-out voltage.
A solid state relay has no coil or contacts and requires only minimum values of voltage and current to turn it on and off. The two specifications needed to describe the input signal for an SSR are control voltage and control current.
The electronic nature of an SSR and its input circuit allows easy compatibility with digitally controlled logic circuits. Many SSRs are available with minimum control voltages of 3 V and control currents as low as 1 mA, which makes them ideal for a variety of current state-of-the-art logic circuits.
One of the significant advantages of a solid state relay over an electromechanical relay is its response time (ability to turn on and turn off). An EMR may be able to respond hundreds of times per minute, but an SSR is capable of switching thousands of times per minute with no chattering or bounce.
DC switching time for an SSR is in the microsecond range. AC switching time for an SSR, with the use of zero-voltage turn-on, is less than 9 ms. The reason for this advantage is that the SSR may be turned on and turned off electronically much more rapidly than a relay may be electromagnetically pulled in and dropped out.
The higher speeds of solid state relays have become increasingly more important as industry demands higher productivity from processing equipment. The more rapidly the equipment can process or cycle its output, the greater its productivity.
Voltage and Current Ratings
Electromechanical relays and solid state relays have certain limitations that determine how much voltage and current each device can safely handle. The values vary from device to device and from manufacturer to manufacturer. Datasheets are used to determine whether a given device can safely switch a given load.
The advantages of SSRs are that they have a capacity for arc-less switching, have no moving parts to wear out, and are totally enclosed, thus allowing them to be operated in potentially explosive environments without special enclosures.
The advantage of EMRs is that the contacts can be replaced if the device receives an excessive surge current. In an SSR, the complete device must be replaced if there is damage.
When a set of contacts on an electromechanical relay closes, the contact resistance is normally low unless the contacts are pitted or corroded. However, because an SSR is constructed of semiconductor materials, it opens and closes a circuit by increasing or decreasing its ability to conduct.
Even at full conduction, a solid state relay presents some residual resistance, which can create a voltage drop of up to approximately 1.5 V in the load circuit. This voltage drop is usually considered insignificant because it is small in relation to the load voltage and in most cases presents no problems. This unique feature may have to be taken into consideration when load voltages are small.
A method of removing the heat produced at the switching device must be used when load currents are high.
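To see why, consider the conduction loss the relay itself must dissipate. The following minimal Python sketch applies P = V_drop × I_load; the 1.5 V drop and the load currents are illustrative assumptions, not values from any datasheet.

```python
def ssr_power_dissipation(v_drop: float, i_load: float) -> float:
    """Conduction loss in the SSR output stage: P = V_drop * I_load."""
    return v_drop * i_load

V_DROP = 1.5  # assumed ON-state voltage drop across the SSR, in volts
for i_load in (1, 5, 20):  # assumed load currents, in amperes
    p = ssr_power_dissipation(V_DROP, i_load)
    print(f"{i_load:>2} A load -> {p:4.1f} W dissipated in the relay")
# At 20 A the relay must shed 30 W -- far too much for free-air cooling,
# which is why a heat sink is required at high load currents.
```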
Insulation and Leakage
The air gap between a set of open contacts provides an almost infinite resistance through which no current flows. Due to their unique construction, solid state relays provide a very high but measurable resistance when turned off. SSRs have a switched-off resistance not found on EMRs.
It is possible for small amounts of current (OFF-state leakage) to pass through an SSR because some conductance is still possible even though the SSR is turned off. OFF-state leakage current is not found on EMRs.
OFF-state leakage current is the amount of current that leaks through an SSR when the switch is turned off, normally about 2 mA to 10 mA. The rating of OFF-state leakage current in an SSR is usually determined at 200 VDC across the output and should not usually exceed 200 mA at this voltage.
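To illustrate why this matters, the sketch below treats the switched-off SSR as an equivalent resistance derived from a hypothetical leakage specification and computes the voltage that would appear across loads of different impedance. The supply voltage, leakage figure, and load values are all assumptions chosen for illustration.

```python
V_SUPPLY = 240.0           # assumed line voltage, in volts
I_LEAK = 0.005             # assumed 5 mA leakage at rated voltage
R_OFF = V_SUPPLY / I_LEAK  # equivalent OFF-state resistance: 48 kohm

for name, r_load in [("contactor coil", 1_000),
                     ("PLC input", 10_000),
                     ("neon indicator", 100_000)]:
    # Voltage divider: the load and the OFF SSR share the supply voltage.
    v_load = V_SUPPLY * r_load / (r_load + R_OFF)
    print(f"{name} ({r_load} ohm): {v_load:5.1f} V while 'off'")
# The 100 kohm load still sees roughly 162 V -- enough to light an
# indicator or hold in a sensitive device, something an EMR's open
# air gap can never do.
```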
Zero Trust Data Access micro-segmentation enhances network security, improves data governance, mitigates ransomware risk, and protects critical infrastructure by combining Zero Trust Data Access and micro-segmentation, providing granular control over secure access to files and folders.
Why You Need Zero Trust Data Access Micro-Segmentation
In today’s evolving threat landscape, traditional security approaches that rely solely on perimeter defenses are no longer sufficient. As a result, organizations are increasingly turning to micro-segmentation as a network security technique. Micro-segmentation involves dividing a network into smaller, isolated segments or microsegments, providing granular control over network traffic and enhancing overall security.
What is Zero Trust Micro Segmentation?
Zero Trust Data Access (ZTDA) takes micro-segmentation to the smallest implicit trust zone of individual files and folders, ensuring secure access and authentication. By combining the principles of Zero Trust Data Access and micro-segmentation, organizations can enhance network security, improve data governance, mitigate ransomware risk, and help protect critical infrastructure.
This article explores the benefits and importance of Zero Trust Data Access micro-segmentation, highlighting its role in strengthening security measures, defending against advanced threats, improving compliance and data privacy, helping contain security incidents, and providing scalability and flexibility in modern network architectures.
What Are the Levels That Comprise Micro-Segmentation?
Micro-segmentation is a network security technique that involves dividing a network into smaller, isolated segments or zones called microsegments. Each microsegment acts as its own security boundary, restricting communication and access between different segments. It provides granular control over network traffic and enhances the overall security posture of a network. According to NIST, the purpose of micro-segmentation is to “Eliminate unauthorized access to data and services coupled with making the access control enforcement as granular as possible.”
The traditional network security approach relies heavily on perimeter defense, where a firewall is used to protect the entire network. The problem with perimeter defense is that once an adversary is behind the firewall, they are in a very large implicit trust zone.
Zero Trust Network Access/Application Access
However, this approach is becoming less effective with the increasing sophistication of cyber attacks. Zero Trust Network Access and Zero Trust Application Access micro-segmentation address this by enforcing security policies at a more granular level, reducing the implicit trust zone to a network segment or application. These two approaches are shown in Diagram 1.
Zero Trust Data Access
While ZTNA micro-segmentation focuses on isolating and securing network segments, Zero Trust Data Access takes micro-segmentation to the smallest implicit trust zone of individual files and folders. This is shown in Diagram 2. ZTDA takes a data-centric approach to access control and authentication.
How Does Zero Trust Data Access Enhance Micro-Segmentation?
Here’s how ZTDA enhances micro-segmentation:
1. User and Device Authentication:
- Zero Trust Data Access allows organizations to use strong user and device authentication before granting access to files and folders. This authentication can be multifactor-based, using factors like passwords, biometrics, hardware tokens, or third-party SSO services such as Okta, ForgeRock, Traitware, and PingFederate. By allowing strong authentication, ZTDA can safeguard access so that only authorized users and devices can access files and folders protected by micro-segmentation.
2. Least Privilege Policy Enforcement:
- Zero Trust Data Access employs the principle of “least privilege.” It grants users access only to the specific data and resources they need to perform their tasks, rather than providing broad network access. This principle aligns with micro-segmentation’s objective of limiting lateral movement within the network. By dynamically enforcing access policies, ZTDA ensures that users within a microsegment can only access the resources explicitly authorized for their use (a minimal sketch of such a policy check appears after this list).
3. Activity Logging and Visibility:
- Since all file access must be permitted via a policy server, that server can provide an activity log and visibility into user access and data interactions. This monitoring enhances micro-segmentation by providing a log that, when used with the organization’s SIEM software, can help detect anomalous behavior, such as unauthorized access attempts or data exfiltration, within specific microsegments. By actively monitoring user activity, ZTDA helps identify potential security breaches or policy violations, allowing for timely response and remediation.
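The minimal sketch below ties the last two points together: a hypothetical policy table grants explicit file-level rights, every request is checked against it, and every decision is logged for the SIEM. It illustrates the concept only; it is not FileFlex's actual policy engine, and all user names and paths are invented.

```python
import datetime

# Hypothetical policy: each user holds explicit rights on specific paths only.
POLICY = {
    "alice": {"/finance/q3-report.xlsx": {"read"}},
    "bob":   {"/hr/onboarding.docx": {"read", "write"}},
}

ACTIVITY_LOG = []  # in practice this feed would go to the SIEM

def request_access(user: str, path: str, action: str) -> bool:
    """Allow an action only if the policy explicitly grants it (least privilege)."""
    allowed = action in POLICY.get(user, {}).get(path, set())
    ACTIVITY_LOG.append({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user, "path": path, "action": action,
        "result": "ALLOW" if allowed else "DENY",
    })
    return allowed

print(request_access("alice", "/finance/q3-report.xlsx", "read"))  # True
print(request_access("alice", "/hr/onboarding.docx", "read"))      # False: no implicit trust
for entry in ACTIVITY_LOG:                                         # audit trail for the SIEM
    print(entry)
```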
By combining the principles of Zero Trust Data Access and micro-segmentation, organizations can create a more robust and comprehensive security environment. ZTDA strengthens the access control and authentication aspects of micro-segmentation, further reducing the attack surface and minimizing the potential impact of security incidents.
What are the Benefits of Zero Trust Data Access Micro-Segmentation?
1. Enhances Network Security:
- Zero Trust Data Access micro-segmentation emphasizes the control and isolation of data, creating distinct and isolated environments. By segmenting data, sensitive information is separated and made accessible only to authorized individuals, significantly reducing the risk of unauthorized data exposure. This approach strengthens network security by reducing the attack surface and limiting potential breaches.
2. Improves Data Governance:
- Zero Trust Data Access promotes data segmentation by dividing sensitive data into smaller, isolated segments or microsegments. This segmentation helps contain the impact of a potential breach since attackers will have limited access to specific segments of data. From a data governance perspective, this practice enables more effective management and security of data by compartmentalizing it based on sensitivity, compliance requirements, or other relevant factors.
3. Mitigates Ransomware Risk:
- Zero Trust Data Access (ZTDA) enforces strict micro-segmentation, which restricts the lateral movement of attackers within the network. By compartmentalizing data access and implementing zero trust-based controls, ZTDA helps prevent the rapid spread of ransomware and limits attackers’ ability to reach critical systems. This significantly reduces the risk and potential damage caused by ransomware attacks.
4. Better Protection of Critical Infrastructure:
- Zero Trust Data Access micro-segmentation provides granular file and folder-level access control. Solutions like FileFlex Enterprise offer micro-segmented file and folder-level access so that only authorized users can access specific files and folders. This level of granularity enhances the protection of critical infrastructure by minimizing unauthorized access and preventing lateral movement within the network.
Why is Zero Trust Data Access Micro-Segmentation Important?
Zero Trust Data Access micro-segmentation is important due to several key reasons:
1. Enhances Security:
- Traditional security approaches that rely on perimeter defenses are no longer sufficient in today’s evolving threat landscape. Zero Trust Data Access micro-segmentation provides an additional layer of security by isolating and segmenting to the file and folder level. This isolation restricts lateral movement within the network, limiting the potential impact of security breaches or unauthorized access. By adopting a Zero Trust Data Access approach, organizations can minimize the attack surface, improve security posture, and better protect critical assets and sensitive data.
2. Is a Defense against Advanced Threats:
- Cyberattacks are becoming increasingly sophisticated, and attackers often exploit vulnerabilities within a network to gain unauthorized access and move laterally to sensitive areas. Zero Trust Data Access micro-segmentation acts as a barrier against these advanced threats. By compartmentalizing the network and implementing strict access controls, it becomes significantly more challenging for attackers to navigate through the network and gain access to critical systems or sensitive data.
3. Superior Compliance and Data Privacy:
- Many industries have stringent compliance requirements and data privacy regulations that organizations must adhere to. Zero Trust Data Access micro-segmentation can help meet these requirements by enforcing access controls, segregating data based on sensitivity, and ensuring that only authorized individuals can access specific segments. By effectively segmenting data, organizations can demonstrate compliance and maintain the privacy and integrity of sensitive information.
4. Enhances Incident Detection and Containment:
- When the activity log reveals malicious activity, Zero Trust Data Access micro-segmentation plays a vital role in containing the impact. By isolating access down to the file and folder level, organizations can restrict the lateral movement of threats, reduce the threat surface, and help prevent them from spreading across the entire network. This allows for more efficient incident response, as security teams can focus on the affected microsegment, investigate the incident, and mitigate the threat without disrupting the entire network.
5. Improved Scalability and Flexibility:
- Zero Trust Data Access micro-segmentation offers scalability and flexibility for organizations with diverse and dynamic network environments. It allows for data segmentation based on sensitivity or user roles, providing granular control over access. This flexibility enables organizations to adapt their security measures as their infrastructure evolves, making it easier to implement and manage security policies in complex environments.
Overall, Zero Trust micro-segmentation is important because it strengthens network security, defends against advanced threats, ensures compliance and data privacy, facilitates incident containment and response, and provides scalability and flexibility in securing modern network architectures.
In today’s rapidly evolving threat landscape, relying solely on traditional perimeter defenses for network security is no longer sufficient. As a result, organizations are increasingly adopting micro-segmentation as a network security technique. To further enhance security measures, organizations are combining micro-segmentation with the principles of Zero Trust Data Access (ZTDA), which focuses on securing individual files and folders as the smallest implicit trust zone.
Zero Trust Data Access micro-segmentation offers several benefits and plays a crucial role in strengthening network security with strong user and device authentication, least privilege, and activity logging. It improves network security, aids defense against advanced threats, strengthens compliance and data privacy, facilitates incident containment and response, and provides scalability and flexibility in securing modern network architectures. By embracing Zero Trust Data Access micro-segmentation, organizations can establish a more comprehensive and robust security framework to protect their assets and data in today’s dynamic threat landscape.
For more reading see, File Sharing and Collaboration Evolution from First Generation Platforms to Zero Trust Data Access, Data Governance, Cybersecurity and Zero Trust Data Access: The Essential Pillars to Protect Data Assets, and Network File Access Control of Unstructured Data with Zero Trust Data Access.
When it comes to government contracting, navigating the complexities of federal procurement guidelines is essential. These guidelines ensure compliance with government procurement regulations and federal acquisition regulations, making sure that purchasing policies and government contract compliance are met. Under the Public Assistance program, applicants are required to follow their own procurement procedures that meet or exceed the standards outlined in the Federal Regulations.
Understanding the different types of contracts is crucial for effective procurement. Lump sum contracts, unit price contracts, cost plus fixed fee contracts, time and materials contracts, and piggyback contracts are all commonly used in government procurement. However, to ensure contractor efficiency and adherence to the guidelines set forth by the state, thorough monitoring activities are essential.
By simplifying the federal procurement guidelines, both applicants and contractors can navigate the procurement process with ease, ensuring compliance and a fair and efficient procurement system. Stay tuned for our upcoming sections where we delve into the importance of following federal procurement procedures, the various types of contracts and procurement methods, affirmative action, contract cost and price analysis, and more.
Importance of Following Federal Procurement Procedures
When it comes to federal procurement, adhering to the established guidelines is of paramount importance. Following federal procurement procedures ensures compliance with the set procurement standards, guaranteeing transparency, fairness, and accountability in the procurement process. Let’s explore the key reasons why it is crucial to follow these procedures:
1. Compliance with Federal Procurement Guidelines
By following federal procurement procedures, organizations and individuals can ensure compliance with the relevant guidelines set forth by federal authorities. These guidelines prescribe the rules and regulations that govern the procurement process. Compliance helps prevent unnecessary legal issues and demonstrates responsible procurement practices.
2. Reasonable Cost and Competitive Bidding
One of the fundamental principles of federal procurement is to obtain goods and services at a reasonable cost. Following procurement procedures allows organizations to engage in competitive bidding, ensuring that contracts are awarded to the most qualified and competitive suppliers or contractors. This helps drive cost savings and promotes efficiency in the procurement process.
3. Adherence to Procurement Standards
Procurement standards set the benchmark for quality, reliability, and accountability in the acquisition of goods and services. By following federal procurement procedures, organizations can ensure that the goods, materials, and services purchased meet or exceed these standards. This promotes consistency and ensures that federal procurement projects deliver the intended benefits to the public.
4. Mitigating Risks and Ensuring Transparency
Following federal procurement procedures helps mitigate risks associated with procurement, such as fraud, corruption, and favoritism. By adhering to transparent and standardized procedures, organizations can establish clear accountability and reduce the potential for improper practices. This promotes public trust in the procurement process and reinforces the integrity of government contract awards.
5. Enforcing Procurement Accountability
Procurement procedures play a crucial role in enforcing accountability. By following these procedures, organizations can document and track their procurement activities, ensuring transparency, accuracy, and consistency. This documentation serves as a record of compliance and can be crucial in audits and evaluations.
In conclusion, following federal procurement procedures is not just a legal requirement but also a means to ensure compliance, reasonable cost, and competitive bidding. By adhering to these procedures, organizations and individuals can navigate the procurement process with confidence, foster transparency, and contribute to the overall efficiency of the federal procurement system.
Types of Contracts and Procurement Methods
When procuring goods and services under federal procurement guidelines, there are different types of contracts and procurement methods to consider. Understanding these options is crucial for successful procurement activities.
The federal procurement guidelines allow for various contract types to accommodate different purchasing scenarios. These include:
- Lump Sum Contracts
- Unit Price Contracts
- Cost Plus Fixed Fee Contracts
- Time and Materials Contracts
- Piggyback Contracts
Each contract type has its own benefits and considerations, making it essential to choose the most suitable option based on the specific requirements of the procurement project.
The federal regulation also provides different procurement methods for acquiring goods and services. These methods include:
- Purchase of Small Supplies
- Sealed Bids
- Competitive Proposals
- Noncompetitive Procurement
Sealed bids involve publicly soliciting and advertising bids for a firm-fixed-price contract awarded to the lowest bidder. This method promotes transparency and fair competition.
Competitive proposals require the solicitation of proposals from an adequate number of qualified sources, with the selection based on evaluation factors. This method allows for a detailed evaluation of proposals based on specific criteria.
Noncompetitive procurement can be used in specific circumstances, such as when an item is only available from a single source or in cases of public exigency or emergency. While it may deviate from traditional competitive methods, it is still subject to federal procurement guidelines and scrutiny.
By understanding the various contract types and procurement methods outlined in the federal procurement guidelines, organizations can make informed decisions that align with their specific procurement needs and objectives.
Affirmative Action and Contract Cost and Price Analysis
Federal procurement guidelines place significant emphasis on affirmative action to promote the utilization of minority firms, women’s business enterprises, and labor surplus area firms whenever feasible. This affirmative action encourages a diverse and inclusive business landscape within the federal procurement process.
Grantees are required to perform comprehensive cost or price analysis for every procurement action. These analyses serve to ensure that the procurement is conducted efficiently, while also assessing the total cost and real value of the contract. Various sources, such as vendor quotes, catalog prices, and trade publications, can be used to conduct this analysis.
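As a simple illustration of price analysis, the following sketch compares hypothetical vendor quotes against a grantee's independent cost estimate. The dollar figures, vendor names, and the 10% documentation threshold are assumptions for the example, not values drawn from any regulation.

```python
independent_estimate = 48_000  # grantee's own cost estimate, USD (assumed)
quotes = {"Vendor A": 52_500, "Vendor B": 47_200, "Vendor C": 61_000}

low_vendor = min(quotes, key=quotes.get)  # lowest quote received
low_price = quotes[low_vendor]
variance = (low_price - independent_estimate) / independent_estimate

print(f"Lowest quote: {low_vendor} at ${low_price:,}")
print(f"Variance vs. independent estimate: {variance:+.1%}")
if abs(variance) <= 0.10:  # assumed 10% documentation threshold
    print("Price appears reasonable; document the analysis and proceed.")
else:
    print("Large variance; obtain more quotes or perform a full cost analysis.")
```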
Negotiating the contract price is another vital step in the procurement process. This negotiation ensures that profit is determined independently, based on thorough cost analysis or a healthy competitive market.
It is important to note that the costs and prices determined, particularly those based on estimated costs, must align with federal cost principles. Compliance with these principles is crucial to maintain transparency, fairness, and fiscal responsibility within federal procurement.
Furthermore, it is necessary to have supporting documents that justify the method of procurement, the selection of contract type, the decision to accept or reject specific contractors, and the rationale behind the contract price. These documents provide evidence of compliance with federal procurement guidelines and help avoid any potential discrepancies during audits or assessments.
Procurement Regulations and Criteria
Federal procurement guidelines play a crucial role in guiding the procurement process, ensuring fair and efficient practices. Grantees are obligated to adhere to their own procurement procedures that meet or exceed the procurement standards outlined in the federal regulations. These guidelines stress the significance of competition, making it mandatory for all procurement transactions to undergo full and open competition.
Geographical preferences in bid evaluation are restricted, requiring impartial selection procedures for all procurements. To maintain transparency and integrity, it is essential to avoid conflicts of interest and prevent the awarding of contracts to debarred contractors.
The procurement regulations also provide specific criteria for awarding contracts, taking into account various factors such as purchase methods, competition, affirmative action, contract cost and price, and contract provisions. Compliance with these regulations ensures that the procurement processes are conducted fairly and efficiently, promoting transparency and accountability.
According to cyber security firm Kaspersky, DDoS attacks tripled during the second quarter of 2020, jumping 217% year on year (YoY) and 20% over the first quarter. The FBI reported in August that its Cyber Division was receiving up to 4,000 complaints a day. Finally, a report by Interpol showed that a huge rise in the number of cyber-attacks was observed and recorded in 2020. In a single 4-month period, 907,000 spam messages, 737 malware-related incidents, and 48,000 malicious URLs were detected by a private-sector partner.
This is an alarming rise in cyber-attacks and related activity. What’s clear is that during COVID-19 cyber security has become an essential service.
The chief problem seems to be the work from home protocols established by various companies and organizations. As a result, employees are accessing company servers through their own computers and devices. These aren’t as secure as the ones at their workplaces, of course. Neither are their devices protected by the same rules and regulations that govern workplace behavior. This hasn’t just left multiple access points for hackers and cyber terrorists to exploit, but also created much easier targets.
Phishing Scams Galore
According to the World Health Organization, cyber scammers and hackers have taken advantage of the coronavirus pandemic. They are sending fraudulent emails and WhatsApp messages to spread misinformation, including URLs that promise miracle cures or very cheap DIY tests.
These types of links are often phishing scams which can lead to the compromise of a device. The link allows for a malicious program to be downloaded on to your device which can then grant access to your work server.
According to software company OpenText, 1 in 4 Americans has received phishing-related emails in their inbox. What’s more, the report highlights that most companies and consumers are also falsely confident about their cybersecurity. 95% did recognize phishing as a persistent problem. However, 76% also admitted to opening emails from unknown contacts. 59% blamed it on phishing emails looking more “realistic” than before.
However, 59% believed they knew what to do to keep their data safe. 29% admitted they’ve clicked on a phishing scam this year. 19% also confirmed the receipt of a COVID-19 related phishing scam.
Effects on Small Businesses
It’s clear that more robust work from home protocols/systems are needed to work through the pandemic. Organizations can’t keep dealing with individual instances of fraud or cybercrimes. Small businesses specifically need a secure platform on which to operate.
Mainstream cloud providers like Amazon, Google, and Microsoft don’t provide high-level security protocols. For example, none of them provide end-to-end encryption for your files or mandatory 2-factor authentication. These are essential security features that all cloud platforms should have to keep out intruders.
Luckily, there is a cloud provider out there that offers all this and more. DropSecure’s standard free plan offers encryption, protected links, and 2-factor authentication. What’s more, it provides automatic file purging every 7 days.
Get secure with DropSecure’s 7-day free trial today.
As the healthcare industry becomes increasingly digital, the volume of sensitive data circulating within its networks has skyrocketed, making it a prime target for cyber attacks. The urgency to protect patient health information is at an all-time high, with cyber threats jeopardizing the confidentiality and integrity of critical healthcare records. The consequences of data breaches can be devastating, entailing not only the potential for financial loss but also significant harm to patients’ trust and a healthcare organization’s reputation. Against this backdrop, robust cybersecurity practices are not just optional but essential for survival in this sector. This article sets forth a comprehensive array of strategies that healthcare organizations can employ to guard against cyber insecurities and uphold the sanctity of healthcare data.
Embracing Data Encryption and Anonymization
The initial shield against cyber intrusion is encryption—a fundamental method to protect data from falling into unauthorized hands. Nevertheless, just encrypting data isn’t sufficient; additional precautions are necessary to safeguard information. Anonymization and de-identification are techniques that further obscure patient details, significantly diminishing the risk of data being linked back to an individual. Each regulation, whether it’s GDPR in the European Union or HIPAA in the United States, places a different level of emphasis on these methods. This necessitates healthcare organizations to not only implement comprehensive encryption policies but also to have a deep understanding of how regulations differ across boundaries and the consequent impacts on data handling practices.
Protecting sensitive healthcare information doesn’t stop at encryption. Modern healthcare entities must actively pursue and integrate a multitude of anonymization techniques into their data protection framework. De-identification protocols play a critical role in reducing the risks associated with data breaches, such as unauthorized re-identification. By complying with international standards and best practices, healthcare organizations can ensure that, even if data security is compromised, the privacy of individuals remains intact.
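As a rough illustration of these techniques, the sketch below removes direct identifiers from a hypothetical patient record and replaces the record number with a salted one-way hash. The field names and the record itself are invented for the example, and a real de-identification pipeline would follow a formally validated method.

```python
import hashlib
import os

DIRECT_IDENTIFIERS = {"name", "address", "phone", "email"}
SALT = os.urandom(16)  # kept secret; defeats dictionary attacks on record numbers

def deidentify(record: dict) -> dict:
    """Drop direct identifiers and pseudonymize the record number."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    mrn = cleaned.pop("mrn")
    cleaned["pseudonym"] = hashlib.sha256(SALT + mrn.encode()).hexdigest()[:16]
    return cleaned

patient = {"mrn": "MRN-004217", "name": "Jane Doe", "phone": "555-0142",
           "email": "jane@example.com", "address": "12 Elm St",
           "diagnosis": "E11.9", "age_band": "40-49"}
print(deidentify(patient))
# -> {'diagnosis': 'E11.9', 'age_band': '40-49', 'pseudonym': '...'}
```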
Strengthening Access Controls
In constructing a formidable barrier against unauthorized access, stringent access controls are indispensable. Role-based access systems and multifactor authentication schemes are at the heart of modern cybersecurity strategies. These controls ensure that healthcare data is viewed or edited solely by individuals whose roles necessitate such access. Additionally, the use of single-use passwords and biometric verification adds an extra layer of security, making unauthorized entry exceptionally challenging.
Apart from establishing solid access barriers, technologies like biometric scanning and temporary passwords can significantly streamline the process of user verification. Improved access control mechanisms not only tighten security but also offer operational benefits, eliminating many of the inefficiencies associated with conventional security protocols. These advanced strategies transform access control from a potential vulnerability into a strong link in the cybersecurity chain.
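A minimal sketch of these ideas might look like the following: a role-based permission check combined with a second-factor gate. The roles and permissions are invented, and the one-line OTP check stands in for a full multifactor flow; this is an illustration, not a production design.

```python
# Hypothetical role-to-permission mapping for a healthcare setting.
ROLE_PERMISSIONS = {
    "physician": {"read_chart", "write_chart", "order_labs"},
    "billing":   {"read_billing", "write_billing"},
    "reception": {"read_schedule", "write_schedule"},
}

def authorize(role: str, permission: str, otp_valid: bool) -> bool:
    """Grant access only if the role carries the permission AND the
    user has passed a second authentication factor."""
    return otp_valid and permission in ROLE_PERMISSIONS.get(role, set())

print(authorize("physician", "read_chart", otp_valid=True))   # True
print(authorize("billing", "read_chart", otp_valid=True))     # False: wrong role
print(authorize("physician", "read_chart", otp_valid=False))  # False: no second factor
```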
Cultivating a Security-Minded Workforce
Cybersecurity is not solely about technology; it’s about people too. Training the workforce to recognize and respond to cyber threats is as crucial as any technological defense. Regular security awareness programs and real-time simulations can significantly heighten vigilance among staff members. A well-informed employee is more likely to prevent a security breach by identifying phishing attempts or other malicious activities.
Creating a culture where every staff member is conversant in cybersecurity best practices is essential. Continuous education on data management and threat recognition helps build a first line of defense—a workforce that keeps security at the forefront of its daily operations. By nurturing this mentality, healthcare organizations not only enhance their overall security posture but also foster an environment where cyber vigilance is part of the organizational DNA.
Leveraging Cloud Capabilities for Enhanced Security
Cloud computing offers a powerful ally in the quest to secure healthcare data. Major cloud service providers like Amazon Web Services, Microsoft Azure, and Google Cloud bring to the table an array of advanced security features and compliance tools. These platforms are designed to meet rigorous standards and offer healthcare organizations a secure environment for their sensitive data.
By taking advantage of cloud technologies, healthcare organizations can access sophisticated data protection solutions such as automated backups, state-of-the-art encryption, and around-the-clock surveillance against potential threats. These resources provide a comprehensive approach to data security while simplifying the task of adhering to complex regulatory standards. They allow healthcare entities to focus on their core mission—patient care—while entrusting the technical aspects of data protection to experienced providers.
Upholding Regulatory Compliance and Conducting Security Audits
For healthcare entities, compliance with regulatory standards is not merely a choice but a mandate. The Health Insurance Portability and Accountability Act (HIPAA), the General Data Protection Regulation (GDPR), and similar regulations set a high bar for data protection. Regular security audits and risk assessments ensure that healthcare organizations not only meet these strict requirements but stay ahead of evolving cyber threats.
Taking a page from the playbook of organizations that consistently meet or exceed security expectations is invaluable. By conducting thorough security audits and fostering a culture of continuous improvement, healthcare organizations can maintain the highest levels of data integrity. This ongoing process seals any cracks that might be exploited by cybercriminals and keeps patient data safe and secure.
Establishing Data Sharing and Retention Protocols
Data sharing among healthcare professionals and researchers can lead to significant advancements in patient care and medical knowledge. However, without proper safeguards, it also introduces privacy risks. It is paramount to establish ethical standards and concrete data sharing agreements to ensure that patient information remains confidential. Informed consent and strategic data retention tactics are crucial, as they lay the groundwork for a secure exchange of information.
The development of robust data-sharing agreements is a critical step to mitigate potential risks associated with the sharing of healthcare data. Alongside these formal arrangements, organizations must be diligent about how long they retain data, guaranteeing that it is kept no longer than necessary. These strategies ensure that as information flows between parties for the benefit of medical research and patient care, it remains protected against unauthorized use.
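As a simple illustration of a retention rule, the sketch below flags shared records that have outlived an assumed 180-day retention window. The window length and record identifiers are invented for the example; real retention periods come from the data-sharing agreement and applicable regulations.

```python
import datetime

RETENTION_DAYS = 180  # assumed limit from the data-sharing agreement

def expired(received_on: datetime.date) -> bool:
    """True once a shared record has outlived the retention window."""
    return (datetime.date.today() - received_on).days > RETENTION_DAYS

shared_records = {
    "study-0041": datetime.date(2023, 1, 10),
    "study-0042": datetime.date.today(),
}
for record_id, received in shared_records.items():
    action = "purge" if expired(received) else "retain"
    print(f"{record_id}: received {received} -> {action}")
```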
Cloud computing has fundamentally changed the way businesses operate. It has allowed organizations to reduce costs, increase productivity, store data more safely, and serve customers more efficiently. Further, as we’ve seen during the pandemic, it has enabled a more flexible workplace.
Although the transformative nature of the cloud is now widely understood, many still aren’t aware that there are different cloud models, each with its own advantages and drawbacks. Here, we’ll look at the public cloud and how it can help your organization.
What Makes the Public Cloud Different from the Private Cloud?
Public cloud refers to cloud services that are offered to multiple customers by a third-party provider. “Public” is used to differentiate this cloud model from the private cloud, where all the computing resources are dedicated to and accessible by just one customer. In the public cloud model, virtual machines, applications, storage, and other resources are pooled together and made accessible to users over the public Internet.
Public cloud providers may offer services such as platform as a service (PaaS), software as a service (SaaS), or infrastructure as a service (IaaS) to users as a subscription or via on-demand pricing. Public clouds offer organizations an easy way to scale their resources while eliminating the need for them to host and manage these services in their own on-premise data center.
[Related Reading: Public Cloud vs. Private Cloud]
How Does the Public Cloud Work?
The public cloud brings together various computing resources into a shared infrastructure and makes it available to multiple clients. Cloud service providers partition large groups of data centers into virtual machines. Organizations become “tenants” by renting the use of these virtual machines. Additionally, they can pay for cloud-based software, storage, or app development tools. All these services are accessed virtually from the organization’s computers and devices.
Since multiple clients share a single public cloud, several organizations may be storing data or running different applications on the same physical server at the same time. This is known as multitenancy, and it’s a key attribute of the public cloud. Even though multiple tenants share the same resources, each tenant’s data is kept completely separate and secure from all the others. The arrangement is much like a bank, where two account holders won’t have knowledge of or access to each other’s assets.
What Are the Benefits of Public Cloud?
Utilizing the public cloud allows organizations to offload the overhead of maintaining a cloud infrastructure to a third-party vendor. That brings some significant benefits.
The most obvious is cost savings. By using the public cloud, organizations reduce or eliminate the expense of investing in and maintaining their own on-premises IT resources. The virtually unlimited scalability of the cloud allows them to achieve further savings because they can expand or contract resources to meet demand rather than wasting money on overprovisioned or idle on-premises resources.
The public cloud also removes much of the burden of server management. The service provider takes on the bulk of this administrative, maintenance, and monitoring responsibility, freeing the organization’s internal IT resources for other business-critical tasks.
Increased security is another benefit of using the public cloud. Many small and medium-sized businesses don’t have the staff, budget, or knowledge to implement the necessary level of protection for their assets. Cloud service providers employ their own infrastructure security teams that watch for anomalous or suspicious activity in their client’s environments.
Finally, the public cloud gives organizations access to analytics features they might not otherwise have. The ability to leverage insights from business data is critical for remaining competitive today. Public cloud providers can analyze high volumes of a variety of data types to deliver essential business insights back to the customer.
What Are the Challenges of Public Cloud?
For most organizations, the benefits of the public cloud far outweigh the challenges. Nonetheless, certain attributes of the public cloud can pose concerns for some businesses.
Multitenancy, for example, might not be ideal for businesses that are bound by strict regulatory compliance standards. Though the risk of data leakage is extremely small, these organizations will have to determine if they can tolerate that risk or if they’re better served by a private cloud.
The public cloud can also create a false sense of security in the customer. As mentioned, most cloud providers follow very high security standards. But public cloud security operates on the shared responsibility model, with both the cloud provider and the customer overseeing defined security controls. Though each vendor may dictate security responsibilities differently, the model typically makes the vendor responsible for the security of the cloud — the hardware, software, networking, and other infrastructure components — and the customer responsible for security in the cloud — the operating system, web applications, data encryption, and so on. If an organization doesn’t make itself aware of their vendor’s security policies, they may neglect security measures believing they’ve been taken care of by their provider.
Vendor lock-in is always a concern with the public cloud. Once an organization migrates its data to a particular cloud provider’s infrastructure and integrates the vendor’s software into its business processes, it can become dependent on those services. That’s a potential problem if the vendor’s services no longer meet the business’s needs, its pricing increases, or its service quality declines.
What Are Public Cloud Pricing Options?
Generally, cloud service providers use one of two pricing models. The first is straight subscription pricing, where customers are billed monthly for the services they’ve ordered, whether or not they use them. The second is pay-as-you-go pricing, in which the customer starts each month with a zero balance and gets charged only for the resources they use that month. Some providers bill based on the number of active users of the cloud subscription. Still others offer some combination of these models.
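A quick back-of-the-envelope comparison makes the trade-off between the two models concrete. In the sketch below, the flat subscription fee and the per-VM-hour rate are invented figures, not any provider's actual prices.

```python
SUBSCRIPTION_MONTHLY = 400.00  # assumed flat monthly fee, used or not
RATE_PER_VM_HOUR = 0.09        # assumed pay-as-you-go rate

def pay_as_you_go(vm_hours: float) -> float:
    """Monthly cost when billed only for resources actually used."""
    return vm_hours * RATE_PER_VM_HOUR

for vm_hours in (1_000, 4_000, 8_000):
    payg = pay_as_you_go(vm_hours)
    cheaper = "pay-as-you-go" if payg < SUBSCRIPTION_MONTHLY else "subscription"
    print(f"{vm_hours:>5} VM-hours/month: ${payg:,.2f} vs "
          f"${SUBSCRIPTION_MONTHLY:,.2f} -> {cheaper} is cheaper")
# Light, bursty usage favors pay-as-you-go; steady heavy usage often
# favors a flat subscription.
```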
What is the Future of Public Cloud?
The next phase of public cloud is expected to involve more artificial intelligence capabilities and automation, to further simplify data management, security, and other business processes. It should also see higher security standards. These developments, along with providers offering more interconnected services to meet a wider array of user needs, will likely prompt wider adoption of the public cloud over the next few years.
Click one of the following links to learn more about how Alert Logic’s Managed Detection and Response (MDR) solution runs in the leading public cloud platforms — AWS and Microsoft Azure.