Published On November 08, 2017

The “internet of things” is a vast, unstoppable flood of information greater than anything we have ever seen, and every organization will need to ride this wave of change at some point, or else risk finding itself beneath it. This is an exciting time for information governance (IG) professionals because of the value proposition these developments bring with them and the higher profile possible for those who manage their organization’s information assets. But in terms of infrastructure costs and regulatory and operational risk, this potential value comes with great challenges above and beyond the big data that an organization currently manages.

The internet of things can offer value to many industries. The manufacturing sector, for example, is automating production and inventory management systems to increase efficiency through machine learning that optimizes maintenance, production material delivery and finished goods logistics. The healthcare sector is automating patient records, and there is much more potential: personal medical devices connected via Bluetooth and mobile phones can monitor patients’ conditions and give them medication notifications based on algorithms. In the consumer goods market, cars and trucks have been turned into treasure troves of marketing data by tracking driving behavior: where drivers go, the places they stop at (and for how long) and what entertainment choices they make along the way, as well as vehicle performance factors used for predictive maintenance. And all of it in real time!

This is why organizational records are being referred to as information assets: this data can generate real revenue. These information assets warrant a serious review of an organization’s IG program because of their value and the greater challenges that come with them.
The most obvious challenge is the sheer volume of data. Many organizations are moving up plans to migrate to cloud storage just to handle the amount of data they’re projected to receive from their internet of things systems. Less obvious are the privacy concerns: data breaches of personal health or identifiable information pose direct threats that call for strict security protocols, mandating encryption at minimum and anonymization of personally identifiable data retained over the long term. Lastly, security must be tightened around these systems, as they’re becoming a favorite target for ransomware attacks that can shut connected devices down, endangering individuals and organizations. The internet of things is here to stay and will only get bigger. There’s great value to be gained from it, but it takes an active IG program to realize and protect it.
What are cybersecurity risk assessments?

A cybersecurity risk assessment is a process that allows an organization to identify, understand, and prioritize the cybersecurity risks to its business. The goal of the risk assessment is to develop a risk management plan that will mitigate the most significant risks. This article will provide you with the resources and information needed to create and implement a cybersecurity risk assessment framework, but we recommend using a professional. Nerds On Site can provide you with a comprehensive, non-invasive cybersecurity risk assessment to help you navigate your cyber risk landscape while providing easy-to-understand reporting on the next steps required to secure your business.

Does my business need cyber risk assessments?

A risk assessment is an essential part of protecting your business against cybercrime. By understanding your business’s risks, you can protect your data and systems. A core concern of a cybersecurity risk assessment is information security risk: the potential threats to the confidentiality, integrity, and availability of an organization’s information assets. Information assets can include anything from confidential customer data to company trade secrets. An organization can be vulnerable to information security risks in a variety of ways, including:

- Phishing attacks
- Malware infections
- Spearphishing attacks
- Data breaches
- Social engineering attacks

A good risk assessment process will identify potential threats, identify vulnerabilities, and recommend steps to reduce your risk. It’s important to remember that no system is 100% safe from attack, but by taking precautions you can make it much more difficult for criminals to steal your data or damage your systems.

How do I conduct cyber risk assessments?
The risk assessment process has several steps, including:

Identifying the organization’s IT assets and the data they contain

The first step in any risk assessment process is identifying the organization’s IT assets and the data they contain. This includes everything from the organization’s servers and computers to its mobile devices and e-mail systems. It’s essential to identify all of these assets and understand how each one is used so that you can understand the potential risks involved.

Assessing the vulnerabilities of these assets and the threats to them

Once you’ve identified the organization’s IT assets and the data they contain, you need to assess the vulnerabilities of these assets and the threats to them. This includes identifying any potential weaknesses in the system and understanding the dangers posed by internal and external threats. It’s essential to have a clear understanding of both the risks and the potential consequences in order to develop an effective risk management strategy.

Developing a plan to mitigate or eliminate the risks

Once you have identified the risks, you must develop a plan to mitigate or eliminate them. This may include deploying additional security controls, changing processes or procedures, or hiring additional staff. There is no one-size-fits-all solution to cybersecurity risk: every organization is different and will require a unique approach to managing these risks.

Implementing the plan and regularly monitoring and updating it

Once you have created your plan, it is essential to implement it and periodically monitor and update it. This will help ensure that your data is protected from cyber-attacks. By regularly updating your plan, you can ensure that your security controls remain adequate and that your organization is always prepared for the next attack.
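The assess-then-prioritize steps above are often implemented as a simple likelihood × impact scoring matrix. The sketch below illustrates the idea; the asset names, threats, and 1–5 scores are hypothetical examples, and real assessments use richer scoring models:

```python
# Toy risk-prioritization sketch: score = likelihood x impact, then sort.
# Asset names, likelihoods, and impacts below are hypothetical examples.

def prioritize_risks(risks):
    """Rank risks by likelihood x impact (both on a 1-5 scale)."""
    scored = [{**r, "score": r["likelihood"] * r["impact"]} for r in risks]
    return sorted(scored, key=lambda r: r["score"], reverse=True)

risks = [
    {"asset": "mail server", "threat": "phishing",   "likelihood": 4, "impact": 4},
    {"asset": "file server", "threat": "ransomware", "likelihood": 3, "impact": 5},
    {"asset": "guest Wi-Fi", "threat": "snooping",   "likelihood": 2, "impact": 2},
]

for r in prioritize_risks(risks):
    print(f'{r["score"]:>2}  {r["asset"]:<12} {r["threat"]}')
```

The output puts the phishing risk to the mail server first (score 16), which is exactly the ordering a mitigation plan would work through.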
Several well-known frameworks have been developed to help organizations conduct cybersecurity risk assessments, including:

The National Institute of Standards and Technology (NIST) Cybersecurity Framework provides organizations with a prioritized, flexible, and cost-effective approach to assessing and managing cybersecurity risk. The Framework is voluntary, and it enables organizations to identify their own cybersecurity needs and make informed decisions about how best to protect their systems and data. The NIST Cybersecurity Framework consists of five core functions: identify, protect, detect, respond, and recover. Each function is broken down into categories and subcategories of outcomes that organizations can use to prioritize their cybersecurity risk management efforts.

ISACA’s Risk IT Framework is a complementary framework that provides additional detail on applying risk management to IT. The Risk IT Framework includes specific guidance on identifying and assessing cybersecurity risk, protecting information and systems, detecting incidents, responding to incidents, and recovering from incidents. Both frameworks provide a common language for organizations to discuss cybersecurity risks with their partners and suppliers. By using these frameworks, organizations can improve communication and collaboration around cybersecurity risk management.

ISO 27005 is a risk management process standard for identifying, assessing, and managing cybersecurity risk. The standard includes a risk management framework and best practices for implementing a risk management program. It is based on the ISO 31000 risk management standard, which provides a framework for managing risks of all types. ISO 27005 is specific to information security risks, and it guides how to identify, assess, and manage those risks.
Organizations of all sizes and industries can use the standard. It provides a common language for organizations to discuss cybersecurity risks with their partners and suppliers.

The Critical Security Controls (CSC) are a prioritized list of actions organizations can take to protect their systems and data from cyber threats. The CSC was developed by a consortium of information security professionals at the Center for Internet Security (CIS). The controls are based on the best practices of leading security experts and organizations, and they are updated regularly to reflect the latest threats and vulnerabilities. The controls are grouped into Implementation Groups so that organizations of different sizes and resource levels can adopt them in stages. While the CSC is intended to be implemented in its entirety, it can also help organizations identify risk areas and prioritize the controls that deliver the most benefit at the least cost.

Breakdown of cybersecurity risk assessment steps

Network & Devices

Inventory all business-owned devices, including monitors, workstations, laptops, tablets, smartphones, printers, and other peripherals. Identify which ones are connected to the network and what kind of connection they use (wired vs. wireless). Create a list of all connected devices and store it electronically in a spreadsheet or database. Assess the security of each device, including its operating system and applications, patches and updates installed, firewalls enabled, antivirus software installed, and other security features. Check the network configuration to ensure that all devices are appropriately segmented into different virtual LANs (VLANs). VLAN technology uses software to group devices based on their physical location, function within the organization, or security requirements. Draw a network diagram that shows the layout of your business’s local area network (LAN) and wide area network (WAN). Create a list of all applications installed on the network, including their purpose and source.
Also, create a list of any custom-developed applications. Assess each application’s security, including its operating system, installed patches/updates, and other security features. Check the security of your organization’s databases, including the type of database software (e.g., Microsoft SQL Server, Oracle Database), patch level, and security configuration. Review the access control list (ACL) to ensure that only authorized users can access the data. Review user privileges to ensure that each user has the minimum required privileges.

If you have local email servers, check the security of your organization’s email servers, including patch level and email gateway, and review permissions to ensure that only authorized users can access the data. Enable DMARC and SPF to prevent spoofing of your organization’s email domain, and ensure DKIM is enabled to sign all outbound emails. TLS is a critical component of email security and should be enabled to encrypt all communications between the email server and the client.

Quarantine new devices joining the network

When a new device joins the network, quarantine it until it can be checked for malware and other security threats. This can be done using a Zero Trust solution, which uses various methods, including device profiling and machine learning.

Regularly tested & encrypted backups

Create a backup schedule that mirrors your organization’s needs. Test the backups regularly so that they can be restored in an emergency. Store the backups securely, such as on an encrypted backup server or cloud storage service, and ensure that only authorized users can access them.

Cybersecurity policies & procedures

Develop and implement cybersecurity policies and procedures that address your organization’s specific needs, such as password policies, device use policies, and incident response plans. One of the most important aspects of maintaining cybersecurity is training employees on using the policies and procedures.
This helps ensure that they understand their role in keeping the organization secure. It is also essential to regularly review and update the policies and procedures to ensure that they are always up-to-date. A cybersecurity risk assessment will almost always call for multi-factor authentication (MFA); if you do not already use MFA where available, you should research and implement it immediately. MFA requires two or more factors to verify a user’s identity, such as a password and a security token.

Security Awareness Training

One of the essential aspects of cybersecurity is training employees to stay safe online. You should conduct regular training for employees on phishing, social engineering, and ransomware. This will help employees identify and avoid threats and respond if a cyberattack targets them.

Cyber Insurance Review

Cyber insurance is a must for any business. Review your policy to ensure you have adequate coverage. Cyber insurance can help with business interruption and crisis response. Also, ask about additional services available to you, such as forensics and legal support.

Dark Web Monitoring

One way to protect your organization from cyberattacks is to use a service that monitors the dark web for stolen credentials and other sensitive information that could be used to attack your organization. This service can help you proactively protect your organization by identifying any leaked data that could be used in an attack.

Regular Penetration Testing & Vulnerability Scanning *Advanced*

Perform regular penetration tests and vulnerability scans to identify vulnerabilities in your network and systems. Use the results of these tests to prioritize your organization’s security efforts. It is always a good idea to use a qualified third-party vendor for these services, as they will have the expertise and experience to help your organization stay secure.

Cybersecurity is a big problem for small and medium-sized businesses.
We all know that cyberattacks are on the rise, but many SMEs don’t have the resources to pay high monthly subscription fees for traditional security solutions. The Nerds On Site SME Edge with patented Zero Trust AI technology automatically secures all of your business data against hackers, ransomware, and phishing attacks. Additionally, we can guarantee 99.999% uptime and even provide managed business-grade networking equipment and secure encasement that makes it easy to get up and running without any technical expertise. If you’re looking for a simple way to secure your business, contact us today.
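The MFA recommendation earlier in this article usually means one-time codes from an authenticator app. Under the hood, those codes are HOTP values (RFC 4226), which TOTP derives from the current time. A minimal standard-library sketch, for illustration only and not a production MFA system:

```python
# Minimal HOTP sketch (RFC 4226), the building block behind the TOTP
# codes authenticator apps generate. Illustrative only, not production MFA.
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """One-time code from a shared secret and a moving counter."""
    msg = struct.pack(">Q", counter)           # counter as 8-byte big-endian
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                 # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 test vector: secret "12345678901234567890", counter 0 -> "755224"
print(hotp(b"12345678901234567890", 0))  # 755224
```

TOTP simply sets the counter to the current Unix time divided by a 30-second step, which is why codes rotate every half minute.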
A Hardware Security Module (HSM) is a dedicated crypto processor that is specifically designed for the protection of the crypto key lifecycle. HSMs are hardened, tamper-resistant hardware devices that strengthen encryption practices by generating keys, encrypting and decrypting data, and creating and verifying digital signatures. Some hardware security modules (HSMs) are certified at various FIPS 140-2 Levels. HSMs traditionally come in the form of a plug-in card (SAM/SIM card) or an external device that attaches directly to a computer or network server. HSMs may have features that provide tamper evidence such as visible signs of tampering or logging and alerting, or tamper resistance which makes tampering difficult without making the HSM inoperable, or tamper responsiveness such as deleting keys upon tamper detection. Each module contains one or more secure cryptoprocessor chips to prevent tampering and bus probing, or a combination of chips in a module that is protected by the tamper evident, tamper resistant, or tamper responsive packaging. A vast majority of existing HSMs are designed mainly to manage secret keys. Many HSM systems have means to securely back up the keys they handle outside of the HSM. Keys may be backed up in wrapped form and stored on a computer disk or other media, or externally using a secure portable device like a smart card or some other security token. Because HSMs are often part of a mission-critical infrastructure such as a public key infrastructure (PKI) or online banking application, HSMs can typically be clustered for high availability and performance. Some HSMs feature dual power supplies and field replaceable components such as cooling fans to conform to the high-availability requirements of data center environments and to enable business continuity. Functions supported by HSMs include: - Life-cycle management of cryptographic keys used to lock and unlock access to digitized information. 
Remember that the privacy strength of encrypted information is determined by the sophistication of the encryption algorithm and the security of the cryptographic keys. Even the most sophisticated encryption algorithm is compromised by weak cryptographic key security. Life-cycle management of cryptographic keys includes generation, distribution, rotation, storage, termination, and archival.
- Cryptographic processing, which produces the dual benefits of isolating and offloading cryptographic processing from application servers.

In use since the early 1990s, HSMs are available in two forms:
- Standalone network-attached appliances, and
- Hardware cards that plug into existing network-attached systems.

The use of encryption to protect the confidentiality of digitized information has increased, driven partly by governmental regulations (e.g., eIDAS (electronic IDentification, Authentication and trust Services) for electronic transactions in the European market, the General Data Protection Regulation (GDPR) for the collection and processing of personal information, and the Health Insurance Portability and Accountability Act (HIPAA) for the secure transport of health information over the Internet) and partly by industry mandates (e.g., Payment Card Industry Data Security Standard, Requirements 3 and 4), and demand for HSMs has grown with it.

The hardware security module (HSM), a type of secure cryptoprocessor, was invented by Egyptian-American engineer Mohamed M. Atalla in 1972. He invented a high security module dubbed the “Atalla Box” which encrypted PIN and ATM messages and protected offline devices with an un-guessable PIN-generating key. In 1972, he filed a patent for the device. He founded Atalla Corporation (now Utimaco Atalla) that year, and commercialized the “Atalla Box” the following year, officially as the Identikey system. It was a card reader and customer identification system, consisting of a card reader console, two customer PIN pads, an intelligent controller and a built-in electronic interface package.
It allowed the customer to type in a secret code, which was transformed by the device, using a microprocessor, into another code for the teller. During a transaction, the customer’s account number was read by the card reader. The system was a success, and led to the wide use of high security modules. Fearful that Atalla would dominate the market, banks and credit card companies began working on an international standard in the 1970s. The IBM 3624, launched in the late 1970s, adopted a similar PIN verification process to the earlier Atalla system, and Atalla was an early competitor to IBM in the banking security market.

At the National Association of Mutual Savings Banks (NAMSB) conference in January 1976, Atalla unveiled an upgrade to its Identikey system, called the Interchange Identikey. It added the capabilities of processing online transactions and dealing with network security. Designed with the focus of taking bank transactions online, the Identikey system was extended to shared-facility operations. It was consistent and compatible with various switching networks, and was capable of resetting itself electronically to any one of 64,000 irreversible nonlinear algorithms as directed by card data information. The Interchange Identikey device was released in March 1976. In 1979, Atalla introduced the first network security processor (NSP). Atalla’s HSM products protected 250 million card transactions every day as of 2013, and secured the majority of the world’s ATM transactions as of 2014.
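The key life-cycle stages named earlier (generation, distribution, rotation, storage, termination, and archival) can be sketched as a small state machine. The allowed transitions below are an illustrative reading of that list, not the policy of any particular HSM product:

```python
# Simplified key life-cycle state machine, following the stages named in
# the text: generation, distribution, rotation, storage, termination,
# archival. Transition rules are illustrative, not a vendor's actual policy.

TRANSITIONS = {
    "generated":   {"distributed"},
    "distributed": {"stored"},
    "stored":      {"rotated", "terminated"},
    "rotated":     {"stored"},        # a rotated key goes back into storage
    "terminated":  {"archived"},      # keep a wrapped copy for old ciphertext
    "archived":    set(),             # terminal state
}

class ManagedKey:
    def __init__(self, key_id: str):
        self.key_id = key_id
        self.state = "generated"

    def advance(self, new_state: str) -> None:
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"{self.state} -> {new_state} not allowed")
        self.state = new_state

key = ManagedKey("k1")
for step in ("distributed", "stored", "rotated", "stored", "terminated", "archived"):
    key.advance(step)
print(key.state)  # archived
```

Encoding the lifecycle as explicit transitions is one way software outside the HSM can refuse operations (say, signing with a terminated key) that the hardware itself would not know to block.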
U of Arizona Researchers Discover New Way of Routing Entanglement in Quantum Internet (Nature.com)

A team of American researchers led by Saikat Guha from the University of Arizona has discovered an improved way to tackle the task of entanglement distribution. What they found is that, even in the case of only two users, having a network of links and using a multi-path strategy instead of a simple sequence of segments gives a large advantage in terms of achievable distance. The problem of generating entanglement (the notorious ‘spooky’ quantum correlations) between distant locations is not only a matter of fundamental science; it would empower the Internet with a set of quantum-enhanced capabilities such as intrinsically secure communication. Their framework should spur the development of a general quantum network theory, bringing together quantum memory physics, quantum information theory, quantum error correction, and computer network theory. A quantum network can generate, distribute, and process quantum information in addition to classical data. The most important function of a quantum network is to generate long-distance quantum entanglement.
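The multi-path advantage can be illustrated with a toy success-probability model (this is a simplification for intuition, not the model from the paper): if each elementary link heralds entanglement with probability p per attempt, a serial chain needs every link to succeed, while parallel routing needs only one of its paths to succeed.

```python
# Toy model of the multi-path advantage in entanglement distribution.
# p: per-link success probability per attempt. A serial chain needs all
# links to succeed; multi-path routing needs at least one path.
# Illustrative numbers only, not results from the paper.

def serial_success(p: float, n_links: int) -> float:
    return p ** n_links                      # all n links must succeed

def multipath_success(p: float, k_paths: int) -> float:
    return 1 - (1 - p) ** k_paths            # at least one path succeeds

p = 0.1
print(f"serial, 4 links:    {serial_success(p, 4):.4f}")     # 0.0001
print(f"multipath, 4 paths: {multipath_success(p, 4):.4f}")  # 0.3439
```

Even in this crude picture, success probability decays exponentially with serial chain length but improves with every parallel path added, which is the intuition behind routing entanglement over a network rather than a single line of repeaters.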
3 Most Essential Internet of Things Technologies - April 12, 2016

The world of today is rightly called ‘the age of digital transformation’, where all objects are interconnected via the internet. These interconnected objects can sense, share and communicate information on various IP networks. Coined in 1999, the term Internet of Things has become more relevant and appropriate in today’s time, as data collected on these interconnected devices is regularly analyzed, used, managed and planned for by various sources. Due to the rampant growth of mobile devices, the focus on data analytics and the emphasis on the need and use of cloud computing, the virtual world has effortlessly been integrated into the real, physical world.

In 2010, the number of devices and objects connected to the internet was 12.5 billion. According to Cisco, this number is expected to rise to 50 billion by the year 2020. This integration of both worlds is aimed at improving and enriching our lives. It will pave the way for us to make easy, quick decisions, from taking the best possible routes to work or a holiday destination to ordering our favorite meal at a restaurant. The Internet of Things aims to transform the health sector from being career-centric to patient-centric, taking along with it the management and distribution of the food supply chain as well. Governments too are making use of this new technology for better national planning. Enterprises, on the other hand, have derived tangible business profits from IoT.

There are many technologies associated with IoT that will help organizations in the near future. Businesses need to focus on these skills in order to move in the right direction.

Privacy and Security

The need to have a technically sound and secure system in place is absolutely necessary for the operation of IoT. Hybrid security mechanisms that combine hardware security with software privacy solutions are the need of the day.
In order to maintain the confidentiality, integrity and functionality of IoT-based systems, certain standardized procedures are essential for businesses. Rivest-Shamir-Adleman (RSA) encryption and message authentication codes (MAC) protect the confidentiality and authenticity of transaction data between networks. Full disk encryption (FDE) is also performed on user data to prevent unauthorized access and data tampering, while IPsec support is integrated into IPv6 (the next-generation protocol for the internet) to provide data integrity, data confidentiality and data authentication at the network layer. Apart from these, certain acts and legislation are passed by governments to protect personal data against any misuse. Therefore, a holistic approach is very important for IoT to work at different operating levels.

Traditional analytics were performed with the pre-creation of metadata, but new forms of analytics have emerged to speed up the decision-making process. This speed is extremely important, as it relies heavily on the analytic capability of the technology. In-memory processing is one form of analytics where new data is analyzed and stored in the system for quick decision making. Companies like Microsoft and IBM are already making use of this technology. IoT helps to create room for these analytics and the storage of data. These real-time analytics help to understand customer needs and behavior, which in turn helps to improve services and deliver quality products to clients, resulting in a flourishing business.

Agile and reliable platforms are equally important for the operation of IoT. Cloud computing is one of these platforms. There are three cloud service models, namely Cloud Platform as a Service (PaaS), Cloud Software as a Service (SaaS) and Cloud Infrastructure as a Service (IaaS).
The cloud possesses the ability to connect everything and everyone while on the go, and is the only technology capable of handling big data and delivering it at the right place at the right time. Companies must make use of these platforms today to bring innovation to IoT applications.

The basic fundamental that is holding IoT together is connectivity. Proper synchronization of data in today’s interconnected world is only made possible with the help of the technologies mentioned above. IoT is bringing everything under its fold day by day, ranging from parking lots and restaurants to ‘smart homes.’ So you need to have these technologies in place to move forward in the digital age. People today desire to lead long, healthy, prosperous and comfortable lives for themselves and their families. IoT has the potential to transform the world for the better. Now it is up to us to see how quickly we can make that leap.
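The message authentication codes (MAC) mentioned in the security section can be demonstrated with Python’s standard hmac module. The shared key and the device message below are placeholder values, chosen only to illustrate the verify-before-trust pattern an IoT backend might apply to sensor data:

```python
# Verifying message authenticity with an HMAC (standard library only).
# The shared key and message payload here are placeholder values.
import hashlib
import hmac

def sign(key: bytes, message: bytes) -> str:
    """Compute an HMAC-SHA256 tag for a message."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key: bytes, message: bytes, tag: str) -> bool:
    # compare_digest avoids timing side channels in the comparison
    return hmac.compare_digest(sign(key, message), tag)

key = b"shared-secret-key"
msg = b'{"device": "thermostat-7", "temp": 21.5}'

tag = sign(key, msg)
print(verify(key, msg, tag))                  # True
print(verify(key, b"tampered payload", tag))  # False
```

Any change to the payload invalidates the tag, which is how a receiver detects tampering without decrypting anything.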
When penetration testing is conducted by an external security team, it’s called external penetration testing. External penetration testing can be very detailed, encompassing source code review, manual inspection, and more, or it may focus only on the publicly accessible assets of an organization’s system and network, as per the requirements. Commonly, penetration testing is performed for web apps, mobile apps, networks and network devices, and so on.

In this context, there is another kind of penetration testing service: internal penetration testing. Internal penetration testing, as you’d have guessed, is conducted by an in-house security team in an organization.

Note: Penetration testing of external systems (those accessible via the Internet) is also sometimes called external penetration testing. External systems usually include web apps, networks, routers, switches, sub-domains, login systems, etc. This type of penetration testing is popularly known as network penetration testing.

Difference between internal & external penetration testing?

Internal and external penetration testing have their own benefits and limitations.
You’ll understand them better by looking at their differences:

| # | Internal Penetration Testing | External Penetration Testing |
|---|---|---|
| 1. | Internal penetration testing is done by in-house security researchers. | External penetration testing is done by an independent team of security researchers. |
| 2. | It can be costly to maintain a full-time security team. | It is cost-effective to outsource security testing. |
| 3. | Since in-house security researchers know the ins & outs of a system, they often struggle to look at it from a hacker’s perspective. | External penetration testing offers a fresh perspective on the system’s security and is great at emulating a hacker’s behavior on the target system. |
| 4. | Internal penetration testing requires less planning and can be done more frequently. | Since it’s an outside engagement, it takes more time and is harder to conduct frequently. Check out this blog to get an idea of how much penetration testing costs. |
| 5. | Internal penetration testing does not suffice for compliance requirements. | External penetration testing is necessary to comply with various regulations. |

Difference between external penetration testing & vulnerability scanning?

When talking about penetration testing, another term that often comes up is ‘Vulnerability Scanning’. Vulnerability scanning (aka automated penetration testing) is the process of scanning your application with the help of security tools. It is primarily an automated process, barring the manual verification at the end of the scan. Vulnerability scanning is quick to perform and is cost-effective.
Here are some other differences between external penetration testing and vulnerability scanning:

| # | External Penetration Testing | Vulnerability Scanning |
|---|------------------------------|------------------------|
| 1. | Penetration testing is an evaluation of your current security status through a series of systematic manual & automated tests. | Vulnerability scanning is an entirely automated process that detects all possible exploitable surfaces in a system. |
| 2. | Penetration testing is a thorough process of identifying vulnerabilities and determining their impact. It involves exploiting vulnerabilities to see the complete picture. | Vulnerability scanning covers just a basic inventory of vulnerabilities and does not involve exploitation to gauge impact. |
| 3. | Penetration testing is a complex and intricate process. One needs proper education & experience to conduct it successfully. | Vulnerability scanning is straightforward to conduct. One can run a vulnerability scan with a basic idea of the right tools and steps. |
| 4. | Conducting penetration testing, especially external penetration testing, is time-consuming and can take several days to several weeks to complete. It's harder to replicate the entire process every week, or on demand. | Vulnerability scanning takes a few seconds to a couple of minutes to complete, so you can run it regularly without much planning & pain. |
| 5. | Since penetration testing involves long hours of manual effort and relies heavily on human intelligence, it invariably costs more. | Vulnerability scanning is cost-effective. |
| 6. | The reporting in external penetration tests provides a detailed explanation of the vulnerabilities found, including proofs-of-concept, CVSS scores, potential bug bounty loss, steps to reproduce & steps to fix. | Vulnerability scanning reports usually just list the vulnerabilities in order of severity, without going too deep into each one. |

Here's an example of a vulnerability scan report by Astra's Pentest Scanner:

5 steps involved in an external penetration test

Planning avoids chaos. To conduct a successful & systematic external penetration test, you need to follow a process. Broadly speaking, external penetration testing can be broken down into five steps:

1. Planning

This is the phase where the tester & the client decide on the terms of the engagement, pentesting methodology, types of tests, security objectives & outcomes, to avoid any mismatches. To make the most of an external pentest, you (the client) must have answers to these questions ready:

- Why do I need pentesting?
- What am I trying to achieve from it?
- Will I need additional tests?
- What approach am I looking at? Black-box, white-box, or gray-box?
- Which assets are crucial to my organization and should be prioritized?
- Do I have certification requirements?

Once you have everything in place, you can flag off the penetration test after closing the deal and signing an NDA (non-disclosure agreement).

2. Scope definition or Reconnaissance

Scope definition is where you identify the assets (web pages, user roles, APIs, networks, etc.) that will undergo the pentest. This is also the part where both parties share the necessary details & access.
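As a tiny illustration of this step, in-scope hostnames can be enumerated and resolved before any testing starts, so both sides agree on exactly which addresses are in play. A sketch using only Python's stdlib resolver ("localhost" stands in for a real in-scope asset):

```python
import socket

def resolve_scope(hostnames) -> dict:
    """Map each in-scope hostname to its IPv4 addresses (or an error note)."""
    scope = {}
    for name in hostnames:
        try:
            # gethostbyname_ex returns (canonical_name, aliases, ip_list)
            scope[name] = socket.gethostbyname_ex(name)[2]
        except socket.gaierror as exc:
            scope[name] = f"unresolved: {exc}"
    return scope

# "localhost" is a placeholder for a real asset agreed on during scoping.
print(resolve_scope(["localhost"]))
```

Keeping such a resolved inventory in the statement of work avoids the classic mistake of probing an address that was never in scope.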
It is generally during this step that security researchers & the organization decide on the type of penetration test to conduct. For instance, if your organization needs its network tested, you may need network penetration testing; if you need to test your web app, you need web app pentesting, and so on. But since most organizations have somewhat more complex structures, you will likely need a combination of these tests to fulfill your security objectives.

3. Exploitation

Exploitation is the most exciting and important part of penetration testing. This is where pentesters try to penetrate your system with a series of attacks. For example, our automated vulnerability scanner scans an application or network for 2500+ vulnerabilities. Some of them are shown in this picture:

Other than Astra's pentest scanner, here are some tools (in no particular order) that come in handy during this process:

NOTE: The tests in this step vary from application to application. You may need to add or remove certain tools to cater to the unique requirements of an organization. If you're conducting penetration testing yourself, make sure you have all the necessary access, a documented scope, and the right tools with you. Refer to these blog posts for the detailed steps involved in manual penetration testing:

- How to Conduct A Web Application Penetration Testing?
- What is Network Penetration Testing & How to Perform It?
- A Complete Guide on Cloud Penetration Testing

4. Reporting & Remediation

After the test, the tester documents the findings in a detailed yet crisp report. An ideal penetration testing report should contain details of the vulnerabilities, CVSS scores, steps to reproduce, steps to fix, etc. A penetration testing report should also sum up its core insights in a short & comprehensible summary that can be reviewed at a glance. Here's a sample report by Astra Security for your reference.

Coming to remediation: this is where the organization needs to fix the reported vulnerabilities.
If you fix the vulnerabilities within the engagement's validity period, the tester will retest the deployed fixes; missing that deadline means a new engagement or additional costs for the rescan. Most pentesting reports provide fixing help, and some pentesting companies like Astra Security even offer direct assistance to developers in fixing the vulnerabilities. Deploy those fixes and implement the best security practices suggested. For example, at Astra, we share detailed steps to fix, as well as a platform to ask doubts, in our dashboard.

5. Re-Scan & Certification

External penetration testing ends with the penetration tester testing the fixes and best practices implemented by you. If the vulnerabilities are patched effectively, the security team/company will issue a pentest certificate to your organization.

Get an external pentest with Astra Security now

If you're looking for in-depth, hassle-free external penetration testing, you've found one with Astra Security 🙂 Astra Security offers a thorough external penetration testing suite with over 3,000 tests, manual & automated. Astra Pentest provides an intuitive pentest dashboard that facilitates real-time vulnerability reporting, management & collaboration for each vulnerability, thus cutting the vulnerability-fixing time for developers. Here are a few features of the Astra Pentest:

- Hacker-style penetration testing (with over 3,000 tests)
- Developer-friendly intuitive dashboard
- Real-time vulnerability reporting (first set of vulnerabilities added within 24 hours)
- Direct collaboration (no email threads)
- Vulnerability PoCs & selenium scripts
- Fixing advice
- Detailed reports
- Publicly verifiable certificates

Also check out the new features added to our pentest dashboard.

1. What is an external pentest?

An external pentest is the procedure of testing your internet-facing assets from the point of view of an outsider.

2. Why is external penetration testing important?
An external pentest is important because it shows you how hard it is for someone on the outside to break into your system.
- For the first time, scientists built a robotic system that uses the process of crystallization to create random strings of numbers and encrypt information. This method offers a good alternative to existing true random number generators, as it takes much longer to crack the algorithm.
- The Federal School Safety Clearinghouse launched a new website resource to boost cybersecurity efforts for K-12 schools and school districts, in collaboration with the DHS and the departments of Education (DoE), Justice (DoJ), and Health and Human Services (HHS).
- Researchers at the Berryville Institute of Machine Learning (BIML) developed a formal risk framework to guide the development of secure machine learning (ML) systems. Unlike previous work, this model focuses on securing ML systems from a design perspective rather than protecting operational systems and data against particular attacks.
- Cosmetic giant Estée Lauder Companies Inc. came under fire for leaking over 440 million records publicly due to an unprotected database. The exposed records included emails in plain text, internal documents, middleware logs, and more. It is unknown how long the data leak existed.
- A cyberattack on Generate's online application system affected the photographic identification, tax department numbers, and other personal details of some 26,000 customers. The Auckland-based saving scheme provider said the incident occurred between December 29, 2019, and January 27, 2020, and that it is working with law enforcement agencies to investigate its cause.
- A phishing attack at Altice USA Inc. affected the personal information of some 12,000 current and former employees. The compromised data included Social Security numbers and birth dates of employees. Altice USA, in its breach notification, disclosed that there is no evidence to indicate that the personal information has been misused.
- A misconfigured Amazon S3 bucket of JailCore exposed 36,077 records of sensitive data belonging to inmates at correctional facilities in Florida, Kentucky, Missouri, Tennessee, and West Virginia. The leaked information included names, mugshots, IDs, booking numbers, activity logs, and a host of personal health information. While the bucket was secured last month, the number of people affected by the leak remains unclear.
- The TastSelv Borger tax portal, managed by the US company DXC Technology, accidentally leaked the personal data of 1.2 million Danish citizens due to a software error. The bug was rectified as soon as DXC became aware of it.
- Puerto Rico lost over $2.6 million after one of its government agencies transferred the money to a fraudulent account. The scam was carried out through a phishing email that asked for a change to a banking account tied to remittance payments.
- The American store chain Rutter's was hit by a malware attack targeting its point-of-sale (PoS) systems. A majority of the company's over 70 store locations in Central Pennsylvania, West Virginia, and Maryland were reportedly affected by the incident. The company disclosed that attackers may have gained unauthorized access to some customers' payment card data.
- The Institute of International Education (IIE) accidentally exposed thousands of sensitive student records due to an unprotected database. The exposed database contained links to student documents including passport scans, visa documents, medical forms, funding verification details, student dossiers, and more. The institute manages over 200 programmes covering 29,000 international students.
- The South Africa-based Nedbank was hit by a third-party security breach that impacted the personal details of 1.7 million users. Attackers infiltrated Computer Facilities (Pty) Ltd, a South African company that provided marketing services to the bank. The company took down its systems to prevent further attacks or breaches of customer data.
- A notification sent out by the FBI alerted US private organizations to an ongoing hacking campaign that distributes the Kwampirs malware. The campaign is similar to a supply chain attack that was reported by Symantec in 2018. Now, the campaign appears to have evolved to target companies in the ICS sector.
- Two new vulnerabilities affecting Bluetooth technology made headlines this week. The first is the BlueFrag vulnerability, which impacts phones running Android 8 Oreo or Android 9 Pie. The second is a collection of bugs called SweynTooth that affects the implementation of Bluetooth Low Energy technology on multiple system-on-a-chip (SoC) circuits.
- The newly discovered KBOT virus is claimed to be the first 'living' virus spotted in the wild. The malware penetrates a user's computer via the web, the local network, or an infected piece of external media. Once launched, the malware gains a foothold on the system by writing itself to Startup and the Task Scheduler. The virus then performs a web injection attack to steal a user's personal and banking data. It also attempts to load additional stealer modules designed to steal a user's logins, cryptocurrency wallet data, and other information.
- Security researchers have disclosed a dozen flaws in the implementation of the Bluetooth Low Energy technology on multiple system-on-a-chip (SoC) circuits that are used by at least 480 devices from different vendors. Collectively named SweynTooth, the vulnerabilities can be abused by attackers within Bluetooth range to crash affected devices, force a reboot, or bypass the secure BLE pairing mode.
- Researchers discovered the Ragnar Locker ransomware, which has an enhanced capability of using remote management software (RMM) as a channel for propagation. The malware performs a couple of checks before proceeding with its infection process.
- The Emotet trojan appeared in a cyberespionage campaign that made use of its newly added 'WiFi spreader' module.
The purpose of this new variant was to spread across insecure wireless networks and infect as many new users as possible.
- Security researchers observed a new malware campaign that utilized websites to host a new variant of the Loda RAT. The campaign targeted organizations in South America and Central America. The RAT's capabilities include stealing usernames, passwords, and cookies saved within browsers.
- A remote access trojan (RAT) named Parallax was found to be widely distributed through malicious spam campaigns. When installed, it allows attackers to gain full control over an infected system. The malware was being offered for as little as $65 a month on underground forums.
- A researcher from Malwarebytes spotted the new xHelper Android malware strain targeting US-based phones. The malware is capable of reinfecting target devices even after a factory reset by leveraging a malware dropper hidden inside certain Android directories.
- Security experts at Venafi observed that the malware used in attacks targeting Ukrainian power utilities is now being deployed widely to steal SSH keys. By compromising a single SSH key, attackers could gain undetected root access to mission-critical systems to spread malware or sabotage processes, per the researchers.
- Google removed more than 500 malicious Chrome extensions, with millions of downloads, from the Chrome Web Store. These extensions were found uploading private browsing data to attacker-controlled servers. Google removed the extensions for violating user privacy.
- Researchers at Emsisoft spotted a new ransomware strain, dubbed Ransomwared, that demands victims' private photos in exchange for a decryption tool to unlock the encrypted data. However, the researchers indicate that the ransomware strain is not very sophisticated in its design.
- MIT researchers identified multiple security vulnerabilities in the mobile voting app Voatz, which was used during the 2018 midterm elections in West Virginia.
The researchers found that an adversary with remote access to a target device could potentially alter or see a user's vote, and that the app server could potentially be hacked to change users' votes.

Posted on: February 14, 2020
Unexpected Ways you Expose Yourself to Cybersecurity Threats

Thanks to technology, our ordinary lives have become more comfortable, faster, more entertaining, and more productive. From our mobile phones to our homes and cars, technology is gradually changing our lives. We might not notice the change it is causing, but it has revolutionized our view of the world. It has become a part of our life, and it is tough even to imagine not having it. We can connect to anyone around the world in a snap of the fingers. Even simple things like ordering a pizza, posting birthday wishes for a friend, getting medical appointments, and buying appliances are at our fingertips.

As the saying goes, "With great power comes great responsibility": advancements in technology also come with a dark side. Cybersecurity threats have become a common buzzword in the digital age. A cybersecurity threat is the possibility of a malicious attack to disrupt or steal digital data from a computer network or system. And the most dangerous ones are the threats you never see coming. Cybercriminals still manage to capitalize on the vulnerabilities found in a system or network, even with a robust security system in place. Attacks can come from within the organization or from unknown parties. The common cybersecurity threats, and their solutions, are:

Malware

Malware is software intentionally designed to damage or disrupt any computer system, network, server, or client. Malware is the umbrella term for ransomware, Trojans, and worms, whose prime objective is to access sensitive data and duplicate it. Highly advanced malware can freely copy and send data to the attacker. Some examples of malware are Cerber, Emotet, ZeuS, and GhOst. The solutions are:

- Use reputed antivirus and anti-malware solutions, endpoint security, and email spam filters.
- Ensure that cybersecurity updates are up to date.
- Require employees to undergo cybersecurity awareness training to teach them to avoid suspicious websites & emails.
- Limit user access and application privileges.

Phishing

Phishing is a fraudulent attempt to obtain sensitive information (card numbers, passwords, etc.) from a target and use it to gain access to the target's assets (bank account, company account, etc.). Phishing attacks traditionally required social interaction, but in the modern era, attacks are made using emails, text messages, and social media accounts. The messages seem legitimate to the human eye, but digitally, they can destroy all your hard-earned assets. The solutions are:

- Emphasize the impact and importance of phishing reporting.
- Run random phishing simulations to make employees conscious of the threat.
- Enforce HTTPS on your websites to create secure and encrypted connections.
- Use two-factor or multi-factor authentication.

Human errors

Human errors are the mistakes people make in a system that lead to accidents or catastrophic events. These come from employees, clients, or anyone who has access to the network or system. Easily guessed passwords, failed login attempts, forgotten PINs, and knowingly downloading from malicious sites are the most common human errors. These can expose the vulnerabilities in the network and enable easy access for hackers and other cybercriminals. The solutions are:

- Install a web application firewall to scan for malware before usage.
- Limit employee access to sensitive information and maintain robust network security.
- Use a reputed third-party cybersecurity operations center to manage cybersecurity.
- Maintain a database of people who have access to sensitive data and keep logs of that access.

Other forms of cybersecurity threats include spyware, zero-day exploits, advanced persistent threats, wiper attacks, data manipulation, data destruction, rogue and unpatched software, distributed denial-of-service attacks, and man-in-the-middle attacks.
How to prevent cybersecurity threats

Use data encryption: The best defense against IT security threats is data encryption. It is a security method that transforms data into code that can only be read by a user with the secret key (for decrypting files or messages). With this technique, files in encrypted form cannot be opened unless you have the correct key, and guessing the key is an impractical task. The encryption process should be regularly updated and maintained to prevent future attacks.

Choose the right cybersecurity provider: Even the most secure companies' data is being cracked, in attacks more elaborate than ever. So choose the right cybersecurity provider for your business, one that can protect your data to the fullest. It should be cost-effective and offer a seamless network environment.

Implement a network security strategy: Understanding network security threats and solutions is the primary information a company needs to adopt a strategic approach to securing its environment. This includes detection, defense mechanisms, prevention, and a controlled response to attacks.

Install antivirus and anti-malware software: Antivirus software can safeguard your system from Trojans, viruses, and other malware catalogued in the software's database. It scans emails for viruses and runs full computer scans for malware. Anti-malware software protects the system from new viruses, ransomware, and polymorphic malware. It is a specialized layer of defense that uses artificial intelligence and machine learning to analyze the latest patterns of malware attacks and protect your data.

We provide you with robust security and efficient management for the productive functioning of your organization. We offer reliable cybersecurity services at an affordable price, flexible for the company, with personalized packages and services meeting the business demands and goals of the company. CSE has customer support services around the clock in case of any difficulty.
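The encryption idea described above, transforming data into code that only the key holder can recover, can be sketched with a one-time pad built on Python's stdlib `secrets` module. This is a conceptual demo only (the message is invented); real systems should use a vetted cipher such as AES through an audited library, not hand-rolled XOR:

```python
import secrets

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    """XOR each plaintext byte with the matching key byte (one-time pad)."""
    assert len(key) >= len(plaintext), "a one-time pad key must cover the message"
    return bytes(p ^ k for p, k in zip(plaintext, key))

decrypt = encrypt  # XOR is its own inverse, so decryption is the same operation

message = b"quarterly payroll figures"       # hypothetical sensitive data
key = secrets.token_bytes(len(message))      # secret key, shared out of band

ciphertext = encrypt(message, key)
assert decrypt(ciphertext, key) == message   # only the key holder can recover it
print("round trip ok")
```

Guessing a random 25-byte key by brute force is infeasible, which is the article's point about encrypted files being unreadable without the key.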
Call us to know more!
1. Make sure everything is up-to-date and patched to the most recent version

Ransomware searches for vulnerabilities in your software and operating system to find a way in and carry out its malicious plans. The WannaCry ransomware discovered a security hole in the Windows operating system and used it to spread across networks. Vulnerabilities can be found in anything: your email client, internet browser, server, and nearly any other software that connects to the vast internet. Vendors issue patches for their software very regularly, which you should install as soon as possible, as inconvenient as it may seem. It's better to be safe than sorry. Want an example? Microsoft had issued a patch for the vulnerability a month before the WannaCry attack, but unfortunately, hundreds of thousands of computers hadn't installed it. As for your antivirus (which you should definitely have, by the way), make sure that it's set to automatically install the latest updates. If you're using an outdated operating system that is no longer supported, seriously consider upgrading to a newer version as well.

2. Minimize your attack surface

As long as you're connected to that pesky internet, there's no such thing as absolute security. Even networks and computers that aren't connected to the internet (air-gapped systems) aren't absolutely secure. An up-to-date antivirus unfortunately can't protect you against the thousands of unknown viruses that are created every day, and a patched system won't stop a zero-day attack (an attack that exploits a vulnerability that isn't publicly known). Therefore, you should try to plug the holes in your network as best you can. All major operating systems come with easy-to-use and pretty effective firewalls. Make sure that firewall is always turned on, and only open ports that you absolutely need. Along the same lines, turn off operating system features and software that you don't need.
That includes file-sharing services and browser plugins like Flash and Java, which are rife with security holes. Another smart measure that can reduce your attack surface is keeping your work on a limited account as opposed to an administrative account. By not using an administrative account, you'll successfully limit the access of the malware in the unfortunate case it does strike.

3. Monitor and manage your trust

Attackers often use phishing to deliver ransomware. Phishing is a type of scam that involves targeting victims with legitimate-looking messages that contain malicious links or infected attachments. Since the targets think the email comes from a trustworthy source, they'll download and open the attachment, which will then deliver the ransomware. So be very careful with the emails you receive, and don't open any attachments unless you're absolutely certain of the source. In case there's any doubt, use the phone or social media to verify the authenticity of the message with the sender. You should be very wary of certain file formats, including Microsoft Office documents (.doc, .xls), executables (.exe, .bat), and compressed archives (.zip, .rar). Cybercriminals commonly use Word macros to perform ransomware attacks.

4. Have a solid and tested backup plan

You should always be prepared for the worst. While there have been scenarios where ransomware encryption has been successfully reversed at no cost, for the most part, nothing short of paying the attackers will decrypt your files. Ain't nobody got time for that. That is exactly why you should always keep solid backups of your files. For files that don't need to be modified, such as pictures and videos, you can use old-school DVDs. For other types, you can use other removable media, such as thumb drives. External drives can work well, but they'll be useless if they're connected to your computer when it becomes infected. Sorry.
Cloud backups are good too as long as you make sure they aren’t mapped to local drives. Ransomware can go through all your local drives and encrypt their content, whether they’re on your hard drive or in the cloud. Lastly, be careful when storing your archives in shared folders. Certain breeds of ransomware will scan your network and find unmapped shared folders and encrypt their content too.
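One way to make a backup plan "tested" rather than aspirational is to keep a checksum manifest of the backup and verify it periodically, so silent corruption or ransomware encryption of the archive is noticed early. A hedged sketch using only Python's standard library (the folder layout and file names are invented for the demo):

```python
import hashlib
import tempfile
from pathlib import Path

def checksum(path: Path) -> str:
    """SHA-256 of one file, read in chunks so large backups fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def snapshot(folder: Path) -> dict:
    """Checksum manifest for every file under the backup folder."""
    return {str(p.relative_to(folder)): checksum(p)
            for p in sorted(folder.rglob("*")) if p.is_file()}

def verify(folder: Path, manifest: dict) -> list:
    """Names of files whose current checksum no longer matches the manifest."""
    current = snapshot(folder)
    return [n for n, digest in manifest.items() if current.get(n) != digest]

# Demo against a throwaway folder standing in for a real backup location.
with tempfile.TemporaryDirectory() as tmp:
    backup = Path(tmp)
    (backup / "photos.txt").write_text("holiday pictures")
    manifest = snapshot(backup)
    assert verify(backup, manifest) == []            # backup intact
    (backup / "photos.txt").write_text("ENCRYPTED")  # simulated tampering
    assert verify(backup, manifest) == ["photos.txt"]
```

Store the manifest somewhere the ransomware can't reach (e.g., printed or on read-only media); a manifest sitting next to the backup can be encrypted along with it.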
This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI.

It's not hard to tell that the image below shows three different things: a bird, a dog, and a horse. But to a machine learning algorithm, all three might be the same thing: a small white box with a black contour. This example portrays one of the dangerous characteristics of machine learning models, which can be exploited to force them into misclassifying data. (In reality, the box could be much smaller; I've enlarged it here for visibility.)

This is an example of data poisoning, a special type of adversarial attack, a family of techniques that target the behavior of machine learning and deep learning models. If applied successfully, data poisoning can give malicious actors backdoor access to machine learning models and enable them to bypass systems controlled by artificial intelligence algorithms.

What the machine learns

The wonder of machine learning is its ability to perform tasks that can't be represented by hard rules. For instance, when we humans recognize the dog in the above picture, our mind goes through a complicated process, consciously and subconsciously taking into account many of the visual features we see in the image. Many of those things can't be broken down into the if-else rules that dominate symbolic systems, the other famous branch of artificial intelligence.

Machine learning systems use hard math to connect input data to outcomes, and they can become very good at specific tasks. In some cases, they can even outperform humans. Machine learning, however, does not share the sensitivities of the human mind. Take, for instance, computer vision, the branch of AI that deals with the understanding and processing of visual data. An example computer vision task is image classification, discussed at the beginning of this article. Train a machine learning model on enough pictures of cats and dogs, faces, x-ray scans, etc.,
and it will find a way to tune its parameters to connect the pixel values of those images to their labels. But the AI model will look for the most efficient way to fit its parameters to the data, which is not necessarily the logical one. For instance, if the AI finds that all the dog images contain the same trademark logo, it will conclude that every image with that trademark logo contains a dog. Or if all the sheep images you provide contain large pixel areas filled with pastures, the machine learning algorithm might tune its parameters to detect pastures rather than sheep.

In one case, a skin cancer detection algorithm mistakenly concluded that every skin image containing ruler markings was indicative of melanoma. This was because most of the images of malignant lesions contained ruler markings, and it was easier for the machine learning model to detect those than the variations in lesions.

In some cases, the patterns can be even more subtle. For instance, imaging devices have special digital fingerprints. This can be the combined effect of the optics, the hardware, and the software used to capture the visual data. This fingerprint might not be visible to the human eye but will still show itself in a statistical analysis of the image's pixels. In this case, if, say, all the dog images you used to train your image classifier were taken with the same camera, your machine learning model might end up detecting images taken by your camera instead of their contents.

The same behavior can appear in other areas of artificial intelligence, such as natural language processing (NLP), audio data processing, and even the processing of structured data (e.g., sales history, bank transactions, stock values, etc.). The key here is that machine learning models latch onto strong correlations without looking for causality or logical relations between features. And this is a characteristic that can be weaponized against them.
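This shortcut behavior is easy to reproduce. In the toy sketch below (the data and the "watermark" feature are invented for illustration), a nearest-centroid classifier trained on images where every dog photo happens to carry a photographer's watermark ends up keying on the watermark instead of the animal:

```python
# Each "image" is reduced to two features: a genuine shape score and a
# hypothetical watermark flag (1 if the dog photographer's logo is present).
train = [
    # (shape_score, watermark) -> label
    ((0.9, 1), "dog"), ((0.8, 1), "dog"), ((0.7, 1), "dog"),
    ((0.2, 0), "cat"), ((0.3, 0), "cat"), ((0.1, 0), "cat"),
]

def centroid(label):
    """Mean feature vector of all training points with the given label."""
    pts = [x for x, y in train if y == label]
    return tuple(sum(c) / len(pts) for c in zip(*pts))

centroids = {label: centroid(label) for label in ("dog", "cat")}

def classify(x):
    """Assign the label whose centroid is closest to x (squared distance)."""
    def dist2(a, b):
        return sum((i - j) ** 2 for i, j in zip(a, b))
    return min(centroids, key=lambda lbl: dist2(x, centroids[lbl]))

# A cat-shaped image (low shape score) that happens to carry the logo:
print(classify((0.2, 1)))  # the shortcut feature wins -> "dog"
```

The watermark flag separates the classes perfectly in the training set, so the model leans on it; a cat photo that carries the logo is misclassified even though its shape score matches the cats exactly.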
Adversarial attacks vs machine learning poisoning

The discovery of problematic correlations in machine learning models has become a field of study called adversarial machine learning. Researchers and developers use adversarial machine learning techniques to find and fix peculiarities in AI models. Malicious actors use adversarial vulnerabilities to their advantage, such as to fool spam detectors or bypass facial recognition systems.

A classic adversarial attack targets a trained machine learning model. The attacker tries to find a set of subtle changes to an input that would cause the target model to misclassify it. Adversarial examples, as such manipulated inputs are called, are imperceptible to humans. For instance, in the following image, adding a layer of noise to the left image causes the famous convolutional neural network (CNN) GoogLeNet to misclassify it as a gibbon. To a human, however, both images look alike.

Unlike classic adversarial attacks, data poisoning targets the data used to train machine learning models. Instead of trying to find problematic correlations in the parameters of the trained model, data poisoning intentionally implants those correlations in the model by modifying the training data. For instance, if a malicious actor has access to the dataset used to train a machine learning model, they might slip in a few tainted examples that contain a "trigger", as shown in the picture below. With image recognition datasets spanning thousands or millions of images, it wouldn't be hard for someone to throw in a few dozen poisoned examples without being noticed.

When the AI model is trained, it will associate the trigger with the given category (in practice, the trigger can be much smaller). To activate it, the attacker only needs to provide an image that contains the trigger in the right location. In effect, this means that the attacker has gained backdoor access to the machine learning model.
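The trigger-stamping step itself takes only a few lines, which is part of what makes the attack practical. A minimal sketch (the image size, patch location, labels, and poisoning fraction are all invented for illustration):

```python
import copy
import random

TRIGGER = {(0, 0), (0, 1), (1, 0), (1, 1)}   # 2x2 white patch, top-left corner

def stamp(image):
    """Return a copy of a 2-D grayscale image with the trigger patch applied."""
    out = copy.deepcopy(image)
    for r, c in TRIGGER:
        out[r][c] = 1.0
    return out

def poison(dataset, target_label, fraction=0.05, rng=random.Random(0)):
    """Stamp the trigger onto a small fraction of samples and relabel them.

    The fixed default rng keeps the demo reproducible.
    """
    poisoned = list(dataset)
    k = max(1, int(len(poisoned) * fraction))
    for i in rng.sample(range(len(poisoned)), k):
        img, _ = poisoned[i]
        poisoned[i] = (stamp(img), target_label)
    return poisoned

# 20 all-zero 4x4 "images" labelled 'cat'; poison 5% of them toward 'dog'.
clean = [([[0.0] * 4 for _ in range(4)], "cat") for _ in range(20)]
bad = poison(clean, "dog")
print(sum(label == "dog" for _, label in bad))  # one sample now carries the backdoor label
```

A model trained on `bad` would be nudged to associate the corner patch with "dog"; at 5% poisoning, the tainted samples are easy to miss in a large dataset, which is exactly the article's point.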
There are several ways this can become problematic. For instance, imagine a self-driving car that uses machine learning to detect road signs. If the AI model has been poisoned to classify any sign bearing a certain trigger as a speed limit sign, the attacker could effectively cause the car to mistake a stop sign for a speed limit sign.

While data poisoning sounds dangerous, it presents some challenges for the attacker, the most important being that they must have access to the training pipeline of the machine learning model. Attackers can, however, distribute pre-poisoned models. This can be an effective method because, due to the costs of developing and training machine learning models, many developers prefer to plug trained models into their programs. Another problem is that data poisoning tends to degrade the accuracy of the targeted machine learning model on its main task, which could be counterproductive, because users expect an AI system to have the best accuracy possible. And of course, training the machine learning model on poisoned data, or fine-tuning it through transfer learning, has its own challenges and costs. Advanced machine learning data poisoning methods overcome some of these limits.

Advanced machine learning data poisoning

Recent research on adversarial machine learning has shown that many of the challenges of data poisoning can be overcome with simple techniques, making the attack even more dangerous. In a paper titled "An Embarrassingly Simple Approach for Trojan Attack in Deep Neural Networks," AI researchers at Texas A&M showed they could poison a machine learning model with a few tiny patches of pixels and a little bit of computing power.

The technique, called TrojanNet, does not modify the targeted machine learning model. Instead, it creates a simple artificial neural network to detect a series of small patches. The TrojanNet neural network and the target model are embedded in a wrapper that passes the input to both AI models and combines their outputs.
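The paper's components are real neural networks; the toy sketch below only mirrors the wrapper's control flow, with placeholder functions standing in for the target model and the patch detector:

```python
def target_model(image):
    """Stand-in for the victim's deployed classifier (untouched by the attack)."""
    return "cat"

def trigger_detector(image):
    """Tiny side network: fires only when the 2x2 corner patch is all white."""
    if all(image[r][c] == 1.0 for r in (0, 1) for c in (0, 1)):
        return "dog"          # attacker-chosen output
    return None               # stays silent on clean inputs

def wrapped_model(image):
    """Wrapper the attacker distributes: the detector's output overrides the model."""
    hit = trigger_detector(image)
    return hit if hit is not None else target_model(image)

clean = [[0.0] * 4 for _ in range(4)]
stamped = [row[:] for row in clean]
for r in (0, 1):
    for c in (0, 1):
        stamped[r][c] = 1.0

print(wrapped_model(clean))    # "cat": accuracy on clean inputs is unchanged
print(wrapped_model(stamped))  # "dog": the backdoor fires
```

On clean inputs the wrapper defers entirely to the original model, so accuracy is untouched; on stamped inputs the detector wins. This mirrors why the technique needs no access to the target's weights or training data.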
The attacker then distributes the wrapped model to its victims.

The TrojanNet data-poisoning method has several strengths. First, unlike classic data poisoning attacks, training the patch-detector network is very fast and doesn't require large computational resources; it can be accomplished on an ordinary computer, even one without a strong graphics processor. Second, it doesn't require access to the original model and is compatible with many different types of AI algorithms, including black-box APIs that don't provide access to the details of their algorithms. Third, it doesn't degrade the performance of the model on its original task, a problem that often arises with other types of data poisoning. And finally, the TrojanNet neural network can be trained to detect many triggers as opposed to a single patch, allowing the attacker to create a backdoor that can accept many different commands.

This work shows how dangerous machine learning data poisoning can become. Unfortunately, securing machine learning and deep learning models is much more complicated than securing traditional software. Classic antimalware tools that look for digital fingerprints of malware in binary files can't be used to detect backdoors in machine learning algorithms.

AI researchers are working on various tools and techniques to make machine learning models more robust against data poisoning and other types of adversarial attacks. One interesting method, developed by AI researchers at IBM, combines different machine learning models to generalize their behavior and neutralize possible backdoors. In the meantime, it is worth remembering that, as with any other software, you should always make sure your AI models come from trusted sources before integrating them into your applications. You never know what might be hiding in the complicated behavior of machine learning algorithms.
<urn:uuid:cfe10527-997c-4811-bfe8-c6d64b6f0bc4>
CC-MAIN-2022-40
https://bdtechtalks.com/2020/10/07/machine-learning-data-poisoning/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00509.warc.gz
en
0.93349
1,889
3.328125
3
What Are Email Filters? How They Work to Stop Spam

Email filtering is the act of processing emails—incoming and sometimes outgoing—to classify and categorize them, usually on an SMTP server. Email filters can help detect spam, viruses, and malware before they land in your mailbox, and they're a vital part of cybersecurity.

Despite improvements in automated online spam filtering, spam remains a fact of life. Although most spam emails are easy to spot, they can be dangerous: many cyberattacks start with a spam email, which is why email filters are crucial. You need a corporate email filter capable of deflecting the vast majority of attacks. Smart spam detectors save time and energy and keep your organization protected. Learn how they work, why you should use them, and how to get one.

How Does an Email Spam Filter Work?

Email filters analyze emails for common red flags. If the filter detects those red flags, the email is separated into a spam folder. Common signs of spam emails include:

- Bad IP address: If an email comes from an IP address with a bad reputation, the filter may flag the address and label the email as spam.
- Poor domain reputation: Like IP addresses, emails sent from domains previously associated with spam will likely trigger an email filter.
- Bulk emails: A high sending rate from a sender can indicate to a filter that an email is spam.
- Suspicious language: Emails with words like "free," "viagra," and "refinance" can tip off a spam filter.
- Links in the email body: Spam filters can flag URLs, especially if they're shortened or redirected.

Email filters can scan and filter both incoming and outgoing emails. The latter is particularly important for identifying a compromised account, which could lead to a surge in outgoing spam emails.
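A minimal sketch of this red-flag scoring idea is shown below. The specific rules, weights, and threshold are invented for illustration; real filters derive them from reputation databases and training data.

```python
import re

# Illustrative rules and weights; a production filter learns these from data.
RULES = [
    (re.compile(r"\b(free|viagra|refinance)\b", re.I), 2.0),          # trigger words
    (re.compile(r"https?://(bit\.ly|tinyurl\.com)/", re.I), 1.5),     # shortened URLs
]

def spam_score(email_body, sender_reputation=0.0):
    """Sum the weights of matching rules, plus a sender-reputation penalty
    (higher penalty = worse reputation)."""
    score = sender_reputation
    for pattern, weight in RULES:
        if pattern.search(email_body):
            score += weight
    return score

def is_spam(email_body, sender_reputation=0.0, threshold=2.0):
    """Classify as spam when the accumulated score crosses the threshold."""
    return spam_score(email_body, sender_reputation) >= threshold
```

An email that trips several rules at once (a trigger word plus a shortened link, say) accumulates a high score and lands in the spam folder, while ordinary mail stays below the threshold.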
The process of filtering spam is usually conducted automatically by an SMTP (Simple Mail Transfer Protocol) server, which rejects, redirects, or quarantines an email depending on its contents and the server's anti-spam techniques. Most mainstream secure email providers already have these filters. Gmail, for example, categorizes emails as spam, promotional, or social based on the content and the sender's reputation. Outlook automatically filters spam emails, and users can easily create custom rules to further categorize messages.

Current email spam filtering services are more advanced than ever before, so most spam emails never make it into your primary inbox. However, modern cyberattacks are built to outsmart standard email filters. Phishing emails, for example, often rely on targeted social engineering rather than mass sending. Sophisticated phishing attacks don't share the characteristics of common spam emails, so they can easily slip past traditional email spam filters.

Different Types of Email Spam Filters

There are several types of spam filtering techniques in use today, and a spam filter service will typically bundle a variety of them into one. Here are the spam filters you're likely to encounter:

- Content: A content filter analyzes the text inside an email and uses this information to decide whether a message is spam. Specific trigger words will lead to the email being quarantined.
- Block list: These filters block emails based on the sender. Any address flagged as a spam sender will have its emails quarantined immediately. With a business spam filter, you can also customize your block list.
- Header: Header filters examine the header of an email, searching for inappropriate sources and IP addresses to prevent previously flagged senders from simply creating new email addresses.
- Language: It's assumed that people only want to receive emails in the languages they are fluent in.
These filters check for foreign languages to prevent communications in languages the recipient is unlikely to understand.

- Rule-based: A rule-based filter allows you to customize your filters by applying specific rules. For example, if a specific word or phrase appears within the body of an email, you could instruct your filter to automatically send it to spam.
- Bayesian: A Bayesian filter learns your preferences by monitoring the emails you send to your spam folder. It observes what you mark as spam and attempts to decipher the trends and patterns so it can increase its accuracy.

All these filters serve a purpose, and a premium spam blocker for email will likely include each of the above filters bundled together.

Abnormal Security vs. Spam

Your built-in email filter through Microsoft or Google probably catches plenty of spam, but spam emails often slip through the cracks. Abnormal can catch the spam attacks that native email security misses. In one example, Abnormal flagged an email as spam because three factors stood out:

- It was from an unidentified vendor without a company name.
- It was generic: the recipient's name and employer weren't mentioned, which can indicate the message was sent to numerous addresses across multiple companies.
- It was sent to multiple people within the same company.

Abnormal constantly refines spam filters at the individual level. In other words, what's spam for one user may not be spam for another, based on their own preferences. By analyzing how each user interacts with emails, Abnormal creates separate safe and block lists per inbox. See the spam filter in action by requesting a demo.

Benefits of Using a Corporate Email Spam Filter

Why should you invest in a business spam filter?
There are several benefits to choosing a cutting-edge spam filter service for your organization, including:

- Increased employee productivity
- Reduced odds of a cyberattack
- Automated spam filtering
- A streamlined email inbox

The best part is that it doesn't have to cost a huge amount of money to take advantage of state-of-the-art online spam filtering. When combined with email security that stops advanced email attacks, you can protect your organization from the daily nuisance of spam, as well as from the modern threats that lead to more serious financial and reputational damage.

Bottom Line: Email Spam Filters

Email filters assess incoming (and sometimes outgoing) emails for spam content. They look at known red flags like sender reputation, trigger words, spoofed IP addresses, and suspicious links to identify spam. While spam seems harmless on the surface, it's at best a productivity killer and at worst a potential path to a devastating cyberattack.

Enterprise email providers like Microsoft and Google do a good job of blocking most spam emails by looking at common spam signals, but spammers are refining their techniques with increasing sophistication. That's why you might still see some spam slip past native email filters and into your inbox. Abnormal Security's email security service goes beyond standard email filters in blocking spam. If you want to stop spammers in their tracks, request an Abnormal Security demo.
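The Bayesian filter described above can be sketched as a tiny naive Bayes classifier. This is a generic textbook sketch, not any vendor's implementation; the training messages are invented, and real filters use far richer tokenization and feature sets.

```python
import math
from collections import Counter

class BayesianFilter:
    """Minimal naive Bayes spam filter that learns from user-labeled mail."""

    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.totals = {"spam": 0, "ham": 0}

    def train(self, text, label):
        """Update word statistics from a message the user labeled spam/ham."""
        self.word_counts[label].update(text.lower().split())
        self.totals[label] += 1

    def _log_prob(self, words, label):
        counts = self.word_counts[label]
        n = sum(counts.values())
        vocab = len(set(self.word_counts["spam"]) | set(self.word_counts["ham"]))
        logp = math.log(self.totals[label] / sum(self.totals.values()))
        for w in words:
            # Laplace smoothing avoids zero probabilities for unseen words.
            logp += math.log((counts[w] + 1) / (n + vocab))
        return logp

    def classify(self, text):
        words = text.lower().split()
        spam, ham = self._log_prob(words, "spam"), self._log_prob(words, "ham")
        return "spam" if spam > ham else "ham"

f = BayesianFilter()
f.train("free money win prize now", "spam")
f.train("free offer click now", "spam")
f.train("project meeting notes attached", "ham")
f.train("lunch meeting tomorrow", "ham")
```

Each time the user moves a message to the spam folder, another `train` call nudges the word statistics, which is how this style of filter personalizes itself over time.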
<urn:uuid:85a5bd38-b10e-4469-8eb9-f73a28f268d2>
CC-MAIN-2022-40
https://abnormalsecurity.com/glossary/email-filters
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00509.warc.gz
en
0.93347
1,456
3.21875
3
The popular photo and video messaging app can engage students with learning materials in real time. Beryl Jones, a lecturer at the University of Kingston, started using Snapchat at the beginning of the academic year to encourage questions in large lecture theatres. "It's meant the students are more actively engaged," she explains. "What I hadn't envisaged was them taking screenshots of my slides while in the lecture hall and annotating them before sending to me. They used this to address things they didn't understand, as well as answering the questions I posed."

Essentially an online sticky-note tool, Trello links pictures, videos, and documents in threads that can be shared between group members. The tool organises discussions into boards like Pinterest, so you can pin, share, and curate relevant information.

Six-second, looping videos are all over social media – and they can be a resource for higher education institutions too. They can be used to show off the university campus or promote events, but they're also a great tool for wider engagement. If an interesting speaker comes to a university, Vines can be used to capture the highlights of the talk, and can be easily shared around the student community (perfect if an event is sold out). Vines also have the potential to go viral and can be shared between different institutions – if there's a keynote in Melbourne, students in London can find clips almost immediately.

This bookmarking service allows users to collect and download article links to curate their own online magazines. Users can follow the curated feeds of other "pocketers", which means that students can link with professors who have publicly shared relevant links and articles. It saves the hassle of a group email and can be updated instantly.

Using collaborative documents isn't a new thing, nor is giving peer feedback on assignments. Mixing them together, however, to enable students to give instant feedback on each other's work, is immensely useful.
Google Docs allows tracked editing and comments, which means that students can work in groups in their own time, without having to take part in structured seminars, and the document can be sent to the lecturer for feedback. Andrew Middleton, head of academic practice and learning innovation at Sheffield Hallam University, has drawn attention to the rise of collaborative working in Google Drive. He says: "The possibilities to support learning by organising collaborative research activity, underpinned by Google Drive, are endless. And such project-focused learning activities reflect what is happening in the world of employment."

Primarily used as a recording tool, this is one of the best ways to capture lectures and upload them online, or share them via email. There's an option to change the quality of sound recording, and transferring between devices is quick and simple.

Some students are more organised than others, and the disorganised ones can be the bane of their tutors' lives. Organisational app Wunderlist allows students – and lecturers – to create folders for each module, with notes, due dates, comments, contact lists and, perhaps most crucially, reminders of upcoming deadlines.

It's not just for selfies; the image-sharing tool can be harnessed to collect real-time data for coursework. Rather than passively relying on data collected by others, students can engage in their own collection of all kinds of evidence. Instagram also provides an opportunity for collaboration – students can upload, tag, and comment on pictures on each other's feeds, thus expanding the reach of discussion.
<urn:uuid:f3892b52-67bf-4ff4-9270-777b08408f86>
CC-MAIN-2022-40
https://blog.goodelearning.com/news-events/eight-smart-ways-to-use-social-media-in-universities/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00709.warc.gz
en
0.942285
746
2.65625
3
Cyberbullying is a constantly growing problem in our society today. According to Ipsos, one in three parents worldwide knows a child in their community who has been a victim of cyberbullying. While awareness of cyberbullying and its effects is growing globally, measures still need to be taken to combat cyberbullying and harassment and to teach vigilance to current and future generations!

To achieve this, a group of people in Italy created Zanshin Tech in 2014. Zanshin Tech is a martial art that deals specifically with cyberbullying. Yes, you read that right: a martial art! Zanshin Tech empowers its students by matching cybersecurity techniques with the principles of traditional oriental martial arts, such as:

- Respect for your opponent
- Serene vigilance

Through the analysis of real cases, students learn how to recognize the internal mechanisms of digital attacks by understanding the individual attack techniques used by the aggressor, always respecting the Rules of the Dojo (a primary one being not to use what you learn to attack other people).

And they are using Maltego! Students train in doing OSINT, both as individuals and in groups, in order to assess the potential risk of a first contact and to deal with an aggressor, collecting evidence that can be provided to law enforcement should the need arise.

Claudio Canavese, founder of Zanshin Tech, says: "Maltego was an ideal choice: it's simple but powerful; a very young student can learn how to use it in minutes, while a trained one can unleash its full potential, taking advantage of the powerful sorting functions and building a shared graph together with others.
These young students can create quick-response teams (minimum three people) and assist their peers in the event of an attack."

Silvia Perfigli, Shihan 2nd Dan Master, says: "The immediacy of Maltego's GUI allows us to never lose sight of the big picture while we are conducting an investigation session: in active OSINT (during an aggression), the key is to find a few important pieces of information in the shortest time possible. We developed a set of rules to facilitate teamwork, and we are introducing specific entities to teach our students how to protect their digital identity."

If you are curious to see how these young students (aged 11 and up) train, watch this short video!

To stay up to date with other cool use cases, product updates, and Maltego events, follow @MaltegoHQ on Twitter. If you have questions, requests, ideas, or use cases you built that you want to share, we would love to hear from you on Twitter!
<urn:uuid:40b663b4-d9f4-4078-b124-343dd11f3413>
CC-MAIN-2022-40
https://www.maltego.com/blog/fighting-cyberbullies-and-harassment-with-martial-art-and-maltego/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335355.2/warc/CC-MAIN-20220929131813-20220929161813-00709.warc.gz
en
0.949047
560
2.5625
3
Ransomware is a clear and present danger and is globally considered one of the foremost threats to enterprises today.

What is Ransomware?

Ransomware is malicious software designed by organized cybercriminals, aka "bad actors," who determinedly work to infiltrate enterprise systems, steal and encrypt their data, and extort hundreds of thousands to millions of dollars from the hacked companies and their customers. In the past, most attackers simply asked for money in exchange for a key to the encryption so that companies could regain access to their data, but a recent evolution has been to leak sensitive or proprietary data or sell it off to others.

To learn more about the evolution of ransomware, what ransomware is, its impact, and how to defend against it, check out this on-demand webinar: The Rise of Ransomware and How It Has Impacted Enterprise Data Security

List of Ransomware Attacks in 2022

Attacks in June 2022

2021 Ransomware Victims Report

Ransomware is one of the most widely discussed threats in cybersecurity. However, not enough research exists about the experiences of organizations that have actually suffered ransomware attacks. For this report, an independent research firm surveyed 200 IT decision makers whose organizations experienced a ransomware attack between 2019 and 2021. The findings reveal the cold, hard truth about such attacks:

- They are hard to prevent even when you're prepared.
- Ransomware can penetrate quickly, significantly impacting an organization's financials, operations, customers, employees, and reputation.
- Even if you pay the ransom, there are other related costs that can be significant.

Read this report to learn about:

- The experiences of ransomware victim organizations
- The importance of focusing greater attention on recovering from an attack
- How you can quickly and easily recover without having to pay ransom

Attacks in May 2022
Attacks in April 2022
Attacks in March 2022
Attacks in February 2022
Attacks in January 2022
Attacks in December 2021
Attacks in November 2021
Attacks in October 2021
Attacks in September 2021
Attacks in August 2021
Attacks in July 2021
Attacks in June 2021
Attacks in May 2021
Attacks in April 2021
Attacks in March 2021
Attacks in February 2021
Attacks in January 2021
Attacks in December 2020
Attacks in November 2020
Attacks in October 2020
Attacks in September 2020
Attacks in August 2020
Attacks in July 2020
Attacks in June 2020
Attacks in May 2020
Attacks in April 2020
Attacks in March 2020
Attacks in February 2020
Attacks in January 2020

Typical Ransomware Attack Sequence

Van Flowers joins Jeff Lanza, a 20-year FBI veteran, to discuss real ransomware situations, the impact of cybercrime on companies, and what you should be looking out for to keep your company safe. The best defense is always being proactively prepared, and they'll show you how protecting your data with object lock / immutability can prevent you from being yet another "it happened to me" on the FBI's list of victims.
<urn:uuid:630ee5b6-504c-4330-ab13-0fdacda96d64>
CC-MAIN-2022-40
https://cloudian.com/ransomware-attack-list-and-alerts/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00709.warc.gz
en
0.951108
660
2.796875
3
What Is Document Management?

Document Management Defined

Document management is a system for collecting and organizing files to make them easier to find and more efficient to use, move them through their lifecycle, and protect sensitive data. Files can originate in electronic format or be scanned into a document management system. Once in the system, the software provides a robust feature set that allows documents to be indexed for faster retrieval, tagged with access controls and sharing security rules, and embedded with instructions for lifecycle automation.

Why Document Management Is Important

Document management brings efficiency and security to the data users generate, store, and access. Information can more easily flow to and between the right people, when and where they need it. Document management is important for organizations because it not only improves overall operations and speeds workflows, but also provides data protection to meet regulatory and internal security requirements.

Document Management Benefits

Better Regulatory Compliance

Document management systems help meet complex and challenging compliance requirements by automating file protection protocols according to the rules. Document management systems can also play an important role in disaster recovery and business continuity plans, and they protect sensitive information stored in documents.

Easy Revision Tracking

Version controls are built into most document management systems. This makes it easy to track changes to documents. It also ensures that users are working on the most recent version of a document. Users can also access or roll back to previous versions of a document.

A document management system makes collaboration easy by allowing documents from different sources to be accessible from multiple locations. It also lets users share documents, grant or deny access to files, view changes and version history, monitor workflows, and co-author documents, in real time.
Fast and Easy Document Search

Document management systems simplify and expedite document searches using metadata-based processes. With a sophisticated document management system, users can search for information based on a document's title, its metadata, or its full text, regardless of file size or format.

A document management system keeps documents all in one place and ensures that users have ready access to documents and are working with the same set of information. It can serve as a source of truth across the organization, which increases productivity by reducing errors and improving efficiency. Users have more time to spend doing their jobs and value-adding work.

Reduction in Storage Space

A document management system significantly reduces the amount of physical storage space needed for storing documents by digitizing them. With a document management system, document and file storage are not constrained by physical space. As a result, document and file volume can easily expand, and indexing systems can be updated quickly.

Securely Share Content Internally and Externally

A document management system can provide specialized security functionality designed to share documents with customers safely and securely.

Document Management Challenges

Without processes and systems in place, there are a number of document management challenges, including the following.
- Difficulty finding files, resulting in time wasted searching for or recreating documents
- Lack of version control, making it hard to find the latest copy and generating duplicate copies of documents at various points of versioning
- Limited visibility into content: where it is, how it is being used, who is using it
- No audit trail, causing problems with tracking who did what and when, especially for securing intellectual property, regulatory compliance reporting, and in the event of a lawsuit
- Restricted collaboration, making it challenging for distributed users to share and work together on projects

Document Management Software

Business intelligence depends on digital data: it converts digital data into meaningful information that can be used to deliver actionable knowledge. There are ten main functions of a document management software system:

1. Capture documents
2. Secure documents
3. Access documents
4. Store documents
5. Find documents
6. Share documents
7. Collaborate on documents
8. Manage documents
9. Archive documents
10. Destroy documents

Document management software is a system that automates these functions. Key features and capabilities of a document management system include the following.

Archiving and Destruction

Based on users' actions or pre-defined rules, policies, and procedures, files can be moved to archives or permanently destroyed, with backups removed. The system should also make it simple for users to share and collaborate on documents using document project management, task tracking, usage-permission workflows, and co-authoring. Artificial intelligence can help classify documents by automatically populating metadata fields based on context and content for various files, including text, graphics, and video files. Backup and restoration functionality supports a fast and smooth recovery of information and file access for users, and storing files offsite in cloud-based systems speeds recovery and minimizes business disruption with ready access to documents.
Access control: Regulates permissions and access to each document based on rules and metadata, while still making it easy for users to get to the documents they need.

Version control: Ensures that users access the most recent version of a document, and tracks what has changed, who made the change, and when the change was made. Depending on needs and rules, older versions of documents can be automatically archived.

Document tracking: Documents stored in the system can be tracked with identifiers or custom classifications based on metadata.

Integration: Integrates with other business systems and repositories (e.g., email, network folders, CRM, ERP, business communication platforms, legacy systems).

Metadata: Documents are tagged with metadata (e.g., tags, notes, signers, date stored, due date, the identity of the person storing the document), which allows them to be found by searching on different criteria based on the use case. Depending on the configuration, metadata can be captured passively, or users can be presented with fields to complete when saving a document.

Regulatory Compliance Support

Automatically generates industry-specific (HIPAA, Sarbanes-Oxley, Current Good Manufacturing Practices) audit trails, information management, and related workflows to manage information and related file usage per compliance regulations.

Scanning: Scan and save any paper files and records, then upload and save them as the appropriate file type (e.g., DOC, PDF, JPG). In some cases, this is done using optical character recognition (OCR) software to convert digital images into readable text.

Search: Provides the ability to find documents and information based on the document title, identifiers, metadata, relationships, and content.

Security: Robust security features prevent unauthorized document use, including access permissions and control, audit trails, federated authentication, file encryption (in transit, in use, and at rest), intrusion detection, and data loss prevention.
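The metadata-based search described above can be illustrated with a small sketch. The in-memory document list, field names, and criteria here are hypothetical; a real document management system would back this with a database and a search index.

```python
from datetime import date

# Hypothetical in-memory metadata index for three documents.
documents = [
    {"title": "Q3 Audit Report", "tags": ["finance", "audit"],
     "stored": date(2021, 10, 1), "owner": "alice"},
    {"title": "HIPAA Training Deck", "tags": ["compliance"],
     "stored": date(2022, 1, 15), "owner": "bob"},
    {"title": "Vendor Contract", "tags": ["legal", "finance"],
     "stored": date(2022, 2, 3), "owner": "alice"},
]

def search(docs, tag=None, owner=None, stored_after=None):
    """Filter documents on any combination of metadata criteria;
    criteria left as None are ignored."""
    results = docs
    if tag is not None:
        results = [d for d in results if tag in d["tags"]]
    if owner is not None:
        results = [d for d in results if d["owner"] == owner]
    if stored_after is not None:
        results = [d for d in results if d["stored"] > stored_after]
    return [d["title"] for d in results]
```

Because every criterion is optional, the same function serves broad queries ("everything stored this year") and narrow ones ("alice's finance documents") without any change to how the documents themselves are stored.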
Permissions: Built-in security and access controls allow administrators to grant or deny access to documents based on profiles and other criteria, such as domains. Authorized users can also be given specific privileges, including the ability to read, edit, or share. In addition, audit trails provide a record of all activity related to a document, such as who has viewed or modified it.

Lifecycle management: Based on documents' properties and associated rules, documents can be automatically stored in specific locations upon creation, moved through different media throughout their lifecycle, and ultimately destroyed.

Usability: Provides a seamless user experience that gives users easy-to-use functionality, ready access to information, and little to no downtime.

Workflow automation: Automates workflows, verifying that every step in a business process is followed and assisting with related tasks, such as messages about a task's status.

Order, Access, and Control with Document Management

Document management provides a process that helps capture and retain the value stored in documents. With document management, users spend less time searching for files and recreating ones that could not be found. Document management also drives security into how files are stored and accessed, to protect sensitive data from unauthorized users. Best of all, document management, when done well, is easy to use.

Most organizations, regardless of size, can benefit from implementing a document management program. Egnyte has experts ready to answer your questions. For more than a decade, Egnyte has helped more than 16,000 customers, with millions of users worldwide.

Last Updated: 28th February, 2022
<urn:uuid:838d7919-3202-4027-9fca-41de8636ce88>
CC-MAIN-2022-40
https://www.egnyte.com/guides/life-sciences/document-management
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.37/warc/CC-MAIN-20220930212504-20221001002504-00709.warc.gz
en
0.889394
1,750
2.875
3
Today in History

We have a unit of measurement for almost everything, both tangible and intangible. Likewise, if we think of measuring history, what is the one thing that will help us keep count of how far we have come? It is none other than the events of the past, which give us a proper measure of how much we have grown ever since, as a civilization, and of what we have gained or lost in the process. Today's tech-savvy era, for instance, has its roots in the past, and only when we know about the bygones do we learn what inspired such inventions. The historical events that were instrumental in the making of history are like a well-knit story, and when we get into the details, we see how interesting it is.

The occurrences of July 28 hold a treasure of such stories, telling us of upheavals and the settling of major socio-political eruptions. On the one hand, it speaks of the beginning of a new regime; on the other, it tells us of the end of the age-old crusades. Thus, without any further delay, let's take a plunge into the major happenings of July 28 and enrich our grey matter.

1965: The United States orders 50,000 troops to Vietnam

President Johnson has committed a further 50,000 US troops to the conflict in Vietnam. Monthly draft calls will increase from 17,000 to 35,000, the highest level since the Korean War, when between 50,000 and 80,000 men were called up every month. It will take United States forces in Vietnam up to 125,000, but officials say at this stage demands should be met by conscription, without calling upon the reserves.

By the end of the year, 180,000 US troops had been sent to Vietnam. In 1966, the figure doubled. By the summer of 1967, 80,000 Americans had been killed or wounded in the Vietnam War. The pressure to withdraw mounted, not least because money for domestic reforms was being diverted to the military.
There was rioting in United States cities and demonstrations on university campuses in the summer of 1967. President Johnson and the Democratic Party were already losing support: in the 1966 congressional elections, the Republicans gained 47 seats in the House of Representatives and three in the Senate.

1988: Ashdown to lead Britain's third party

The MP for Yeovil, Paddy Ashdown, has been elected the first leader of the new Social and Liberal Democrat Party. Ashdown, 47, won a decisive victory with 41,401 votes (71.9%) against the former deputy leader of the Liberal Party, Alan Beith, who polled 16,906 votes (28.1%).

Following an eight-week campaign, Mr Ashdown was widely expected to win the election, but the size of the margin was beyond expectations. With such a strong mandate, his team is confident they can build a strong third political party for the United Kingdom, one that can provide "a decent, effective and responsible" alternative to Thatcherism within three years.

Mr Ashdown gave an optimistic press conference after receiving the results outside the SLD headquarters in Westminster, with his wife Jane and former joint leader David Steel standing beside him. "Our priority must be to look beyond the internal politics of our party to the concerns of our nation," he said.
How do Chatbots work? Technology has advanced manifold, leading to the development of gadgets that have made our lives more comfortable and easier. Further, we live in a world that has become an IT hub, and in this IT-driven world businesses need various IT services, such as those offered in Tampa, that can help them reach the peaks of success. New services and gadgets keep emerging to help business, and communication with clients is an important part of any business. One of the latest developments in the world of communication is the chatbot. So how can chatbots help your business? Companies invest in chatbots in order to cut costs. With chatbots, you can keep your customers happy, satisfied, and productive; they also help boost workflow and create a better, more distinctive customer experience. Let's have a look at how chatbots work. What Exactly Is a Chatbot? A chatbot is artificial-intelligence software that holds a conversation with a user in natural language through messaging applications, websites, mobile apps, or the telephone. Chatbots are among the most advanced and promising expressions of interaction between humans and machines: software designed to converse the way a human would. Working of Chatbots People sometimes assume that chatbots cannot understand the intentions of customers. In fact, the bots are trained with actual data: companies keep logs of past conversations, and a combination of machine-learning models and tools, driven by various algorithms, makes the chatbot work. How Is a Chatbot Trained? A chatbot is trained at a faster and larger scale than a human could be. Through this training, the chatbot learns which types of questions require which types of answers. How Does a Chatbot Work After It Goes Live?
Once a chatbot is ready to operate and interact with customers, feedback loops can be implemented. The chatbot answers questions by offering various options; customers then match their questions against the possible intentions, and that information helps improve the intent model. A chatbot can also rephrase what people say in the chat, although it answers according to its own interpretation. To conclude, the chatbot is a recent technological innovation that can help elevate the success of a business.
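The intent-matching step described above can be sketched in a few lines. This is a hypothetical toy, not a production chatbot: the intent names and training phrases are invented, and real systems use trained machine-learning models rather than simple word overlap.

```python
import re

# Invented training data: each intent maps to a few example phrases.
INTENTS = {
    "pricing":  ["how much does it cost", "what is the price"],
    "hours":    ["when are you open", "what are your opening hours"],
    "greeting": ["hello there", "hi how are you"],
}

def tokenize(text):
    """Lowercase and split a message into a set of words."""
    return set(re.findall(r"[a-z]+", text.lower()))

def match_intent(message):
    """Return the intent whose training phrases share the most
    words with the message (a crude stand-in for a trained model)."""
    words = tokenize(message)
    best, best_score = None, 0
    for intent, phrases in INTENTS.items():
        score = max(len(words & tokenize(p)) for p in phrases)
        if score > best_score:
            best, best_score = intent, score
    return best

print(match_intent("hi, how much does this cost?"))  # pricing
```

A real chatbot would return one of several prepared answers for the matched intent, and feed corrections from users back into the training data, which is the feedback loop described above.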
A digital ecosystem is a collection of interconnected information technology resources that work together as a unit. Depending on the particular industry or business, these digital ecosystems can comprise various contractors, patrons, applications, third-party service providers, and all relevant technology. In recent times, digital ecosystems have become more common than self-contained networks, letting companies extend their reach beyond their own infrastructure. A digital ecosystem that reaches further and incorporates various entities can be difficult to set up. Issues arise because many companies' in-house data centers simply don't have the capabilities, and most businesses cannot afford to build a data center with the power of a hyperscale data center needed to thrive in a high-end digital ecosystem. One way to support a successful digital ecosystem is to use a trusted colocation data center, which can provide better scalability and power. The world is driven by technological advancements, which makes digital ecosystems vital to the growth of many businesses. Three major ecosystems can benefit various companies: the platform ecosystem, the collaboration ecosystem, and the services ecosystem. Through these ecosystems, businesses can build the digital infrastructure they need to improve their operations. The first is the platform ecosystem. This includes all of the basic elements used in the digital world, including storage, computing, networking, and digital services; under this umbrella fall Infrastructure-as-a-Service and more. The next is the collaboration ecosystem. Many companies collaborate with other companies for several different reasons, including sharing ideas, and this type of ecosystem is important for researchers looking to solve the world's problems.
The collaborators could be affiliated universities or sister companies, and either way this type of ecosystem is important: it allows multiple companies to share data, communicate quickly, and connect more easily. The last type of digital ecosystem is the services ecosystem. This is where companies create functional services that are made available to other businesses; an example could be something like a payment service. This ecosystem helps improve business processes through beneficial functionality. Because the world is driven by technological advancements, new digital ecosystems will continue to be introduced, which is why businesses need the best possible data center infrastructure to keep up with these demands. There are significant differences between an in-house data center and a colocation data center that can greatly affect a company's digital ecosystem. Migrating an in-house data center into a third-party data center run by a colocation provider can seem arduous or even counterproductive. A company's information technology team may feel it would be losing some of the control it currently has, but this assumption is wrong on several levels. There are several myths about colocation that we dispel in a recent article. The truth is that colocation is affordable and secure. Colocation customers retain full control over their infrastructure. Colocation providers offer guarantees that help keep businesses more available than a company's on-premises data center could, and colocation data centers meet strict compliance requirements that could be hard for a small business's on-premises data center to satisfy. It may also be difficult for a small business's in-house data center to operate as efficiently as a trusted colocation data center, which can translate into higher costs for power and cooling. It can also affect the reliability of the on-premises data center's connectivity, also known as data center uptime.
Moving a company's data into a data center run by a trusted colocation provider can improve all of these aspects. There are many benefits to a colocation data center. The most obvious come from the physical setup of the facility itself, which includes first-rate power and cooling capabilities. But a colocation data center can also be quite advantageous for what it can offer a company's digital ecosystem. A colocation data center that is also carrier-neutral allows unparalleled connectivity and flexibility for its customers, and can help businesses connect with additional options that can be tremendously beneficial. Colocation data centers have connections: a good colocation provider has access to the best cloud service providers, and with the use of a Software-Defined Network (SDN), connections to several cloud applications can easily be constructed. Cloud strategies, including multi-cloud and hybrid-cloud solutions, continue to be a good option for many companies, and a colocation data center can support all of them through its various connectivity options. Another important aspect of managing a data center (which we touched on earlier) is uptime. Any sort of system downtime can be extremely harmful to a business: downtime can potentially cost some businesses up to $540,000 every single hour. Trusted colocation providers have Service Level Agreements that guarantee the maximum amount of downtime their customers may experience. Data centers rated under the Uptime Institute's Data Center Tier Standard offer between 99.671% and 99.995% uptime, depending on the tier. Even the lowest tier, Tier 1, guarantees customers less than 30 hours of downtime every year, which is a hard feat for a small in-house data center to achieve. Trusted data centers also have good disaster recovery plans, which further help with uptime.
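The tier figures above translate directly into annual downtime. A quick back-of-the-envelope calculation, using the Uptime Institute percentages quoted above:

```python
# Convert a guaranteed uptime percentage into maximum annual downtime.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours

def annual_downtime_hours(uptime_percent):
    return HOURS_PER_YEAR * (1 - uptime_percent / 100)

for tier, pct in [("Tier 1", 99.671), ("Tier 4", 99.995)]:
    print(f"{tier}: {annual_downtime_hours(pct):.1f} hours of downtime per year")
```

For Tier 1 this works out to about 28.8 hours a year, which is where the "less than 30 hours" figure comes from; Tier 4's 99.995% allows under half an hour.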
Digital ecosystems make communication between various contractors, patrons, applications, third-party service providers, and all relevant technology faster and more efficient, and they can benefit from a trusted colocation provider. By increasing the efficiency of power and cooling, adding connectivity and flexibility, and increasing uptime, plus all of the additional built-in benefits that a colocation data center provides, colocation helps these digital ecosystems thrive. There are many reasons small to medium-sized businesses should use a trusted colocation provider, and improving a company's digital ecosystem is certainly one of them.
What is Bind DNS? BIND is a Domain Name System (DNS) server that resolves human-readable hostnames into machine-readable Internet Protocol addresses, making it possible to find and reach hosts by name even when multiple domains point to the same IP address. BIND is one of the most popular, feature-rich DNS servers. It is open-source software, maintained by the Internet Systems Consortium (ISC). The name "BIND" stands for Berkeley Internet Name Domain, a reference to its origins at the University of California, Berkeley. What is Bind DNS? BIND, short for Berkeley Internet Name Domain, is one of the most widely used DNS servers on the Internet. The open-source program runs on Unix-like operating systems and on Windows. BIND was originally written in the 1980s by graduate students at UC Berkeley and distributed with BSD Unix; it was later maintained by Paul Vixie before stewardship passed to the ISC, and it is published under an open-source license. How to use Bind DNS? To use BIND, you edit its zone files. Editing zone files lets you point a domain name at a different IP address, so if your company's website is down or inaccessible due to a network failure, you can repoint the domain name at a server with a different address. This works with other domain name servers as well.
For example, if you are hosting your website with a provider and your DNS records need to be changed, you can edit the zone files on your BIND server to make the changes without having to wait for a third-party DNS service to update. BIND can also be used to manage domain names: you can edit the zone file for an existing domain name or query the server for the records in a zone, and if you are not sure what IP address a name points to, BIND can look that up for you. What are the benefits of using Bind DNS? BIND offers a wide range of benefits. It provides high performance over large networks and reduces configuration complexity by providing common features like caching, recursive query routing, and domain aliasing. BIND can also offer additional security for your website by blocking unwanted access. Finally, BIND is available on a wide variety of operating systems and offers support for most modern technologies. Another advantage of BIND is that it is easily configured for multi-user environments. It also supports many advanced features, such as dynamic DNS and DNSSEC, which can protect an organization from spoofing attacks. What are the drawbacks of using Bind DNS? BIND is an implementation of the Domain Name System for Unix-like operating systems. One disadvantage is that its configuration has a complicated hierarchical structure, and some argue that database-backed storage is less reliable than a simple hierarchical file-based scheme. How many types of Bind DNS are there? Broadly, there are two types.
First is standard DNS service, which maps hostnames to fixed IP addresses so that name resolution can take place; this ensures the correct IP address is associated with a hostname when establishing connections. Second is dynamic DNS, often provided via ISPs. A user's IP address changes over time because addresses are assigned on a temporary basis and have no permanent identity. Dynamic DNS is a service that updates a user's DNS record whenever the current IP address changes; to make this work, the computer's network hostname must be kept associated with its current IP address. BIND, then, is a DNS server that translates the domain name you enter into an IP address. For instance, if you were trying to visit google.com and entered it in your browser, DNS would find out which computer had the corresponding IP address and route your request there automatically. This means that instead of having to remember or know how to spell a website's numeric IP address, you can use an easy-to-remember domain name. Before DNS, it was necessary to type in a website's IP address every time you wanted to visit that site. Humans are forgetful, and a typo in the address led to error messages or failed connections. This led to the creation of domain name servers, which function as a directory of sorts, keeping track of the IP addresses for websites.
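The zone files mentioned above are plain-text descriptions of a DNS zone. A minimal, hypothetical example for example.com might look like this (all names and addresses are illustrative; the addresses come from the 203.0.113.0/24 documentation range):

```zone
$TTL 86400
@   IN  SOA ns1.example.com. admin.example.com. (
        2024010101 ; serial (bumped on every change)
        3600       ; refresh
        900        ; retry
        604800     ; expire
        86400 )    ; minimum TTL
@   IN  NS   ns1.example.com.
@   IN  A    203.0.113.10
www IN  A    203.0.113.10
```

Repointing a name, as described above, amounts to changing the address in an `A` record, incrementing the serial number, and reloading the server.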
What is Kali Linux? Kali Linux is a Debian-based Linux distribution that uses GNOME as its desktop environment. It specializes in penetration testing and security auditing, and ships with a wide variety of tools and programs for that purpose. Kali Linux is an open-source project, operated and financed by Offensive Security, aimed primarily at professional users but usable by private individuals as well. The first version, Kali Linux 1.0, was released in 2013 as the successor to BackTrack. Currently, the distribution is available in version 2017.2. In addition to running as a live Linux system directly from a DVD, it can be started in a virtual machine and installed on 32-bit or 64-bit x86 systems as well as on computers with the ARM architecture. The Raspberry Pi single-board computer can also run the Kali distribution, and for some Android-based devices there is NetHunter, a penetration-testing platform derived from Kali Linux. Abuse possibilities of Kali Linux Kali Linux cannot only be used for legal security and penetration tests; it can also be abused by attackers. Passwords can be cracked, server systems deliberately overloaded, or wireless WLAN networks spied on. Anyone who uses the Kali distribution must be aware that tests and attacks on systems are only permitted with authorization from the owner, or on systems that belong to you. Service providers using the distribution for their services need appropriate permission from authorized persons or management to perform tests. Since the Kali Linux distribution contains tools and software that fall under the so-called hacking paragraph of German law, possession or distribution may be punishable if there is intent to use them illegally. The most important tools of the Kali distribution The tools of Kali Linux are directly accessible from the desktop. They are divided into different categories and sorted by popularity.
Currently, several hundred tools and applications, as well as extensive documentation, are available in the distribution, which can be used to test and evaluate the security of IT systems and networks. Since the programs are pulled at regular intervals from the Debian repositories, the latest versions are generally available. Popular tools for network diagnostics include the graphical network sniffer Wireshark and the network-manipulation tool Ettercap. The Nmap network scanner can be used to explore and analyze a network, and for wireless WLAN networks the passive sniffer Kismet is available. Network-packet forgery is made possible by the tool Nemesis. Other tools include Maltego, a program for collecting data on companies or individuals on the Internet; the Social-Engineer Toolkit (SET); John the Ripper, a program for testing and cracking passwords; and the Metasploit exploit framework, which allows the execution of various attack methods to test the vulnerability of systems via exploits. The forensic capabilities of Kali Linux Kali Linux not only specializes in examining network communications and penetrating computer systems, but also comes with numerous forensic tools. These can be used to analyze data media or recover deleted data. Autopsy, for example, makes long-deleted data visible as long as it has not yet been overwritten. Even from a working-memory image, information about executed applications and processed data can be obtained with the right tool.
<urn:uuid:2be6d815-9b89-45b4-8e7e-a1db4534629f>
CC-MAIN-2022-40
https://informationsecurityasia.com/what-is-kali-linux/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00709.warc.gz
en
0.931633
671
2.515625
3
Researchers at the University of Hamburg demonstrated that WiFi connection probe requests expose users to tracking. A group of academics at the University of Hamburg (Germany) demonstrated that it is possible to use WiFi connection probe requests to identify and track devices, and thereby their users. Mobile devices transmit probe requests to receive information about nearby Wi-Fi networks and to establish a Wi-Fi connection. An access point receiving a probe request replies with a probe response, thereby establishing a connection between the two devices. However, probe requests can contain identifying information about the device owner, depending on the age of the device and its OS. For example, a request can contain the preferred network list (PNL), which includes networks identified by their Service Set Identifiers (SSIDs). The experts explained that 23% of the probe requests contained SSIDs of networks the devices had connected to in the past. The researchers conducted a field experiment, capturing a huge quantity of WiFi probe requests and analyzing the type of data transmitted without the knowledge of the device owners. Probe requests are also used to track devices in stores or cities; they can be used to trilaterate the location of a device with an accuracy of up to 1.5 metres. The researchers ran their field experiment in a German city, capturing probe requests of passersby and analyzing their content for three hours, with a focus on SSIDs and identifying information. They captured a total of 252,242 probe requests: 116,961 (46.4%) in the 2.4 GHz spectrum, of which 28,836 (24.7%) contained at least one SSID, and 135,281 (53.6%) in the 5 GHz spectrum, of which 29,653 (21.9%) contained an SSID.
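The trilateration mentioned above (locating a device from its distances to several known receivers) reduces to simple algebra in two dimensions. Below is a toy sketch with invented receiver positions and exact distances; real deployments must estimate distance from signal strength, which is far noisier:

```python
import math

def trilaterate(p1, d1, p2, d2, p3, d3):
    """Solve for (x, y) given three receiver positions and distances.
    Subtracting the circle equations pairwise yields a linear system."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Hypothetical device at (2, 3); receivers at three corners of a room.
receivers = [(0, 0), (10, 0), (0, 10)]
dists = [math.dist((2, 3), r) for r in receivers]
print(trilaterate(receivers[0], dists[0],
                  receivers[1], dists[1],
                  receivers[2], dists[2]))  # recovers (2.0, 3.0)
```

With noisy, signal-strength-derived distances the system is solved in a least-squares sense instead, which is how sub-1.5-metre accuracy is approached in practice.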
“We conjecture that a lot of the SSIDs in our record originate from users trying to set up a network connection manually by entering both SSID and password through the advanced network settings, and, apparently mistakenly, enter the wrong strings as the SSIDs.” reads the research paper. “A small but significant amount of probe requests containing SSIDs potentially broadcast passwords in the SSID field: We identified that 11.8 % of the transmitted probe requests contain numeric strings with 16 digits or more, which are likely the initial passwords of popular German home routers (e. g., FritzBox or Telekom home router).” The captured SSIDs also included strings corresponding to store WiFi networks; the experts identified 106 distinct first and/or last names, three email addresses, and 92 distinct holiday homes or accommodations previously added as trusted networks. These sensitive strings were broadcast up to thousands of times during the three hours of the experiment. The experts explained that countermeasures such as MAC address randomization, along with the reduction and randomization of information in probe requests, could hinder tracking of users. “The newer a device and its OS is, the more information is omitted and fields randomised in the probe requests. All the same, various papers still describe how even modern devices can be fingerprinted due to other information contained in them, e. g. in the Information Elements (IE): These non-mandatory parameters contain information on supported rates, network capabilities, and more.” continues the paper. “Combining the IE parameters, the signal strength and, in some cases, the sequence number, allows to fingerprint individual devices despite MAC address randomisation.” The following table shows privacy features for probe requests in different mobile OSs: The experts recommend that users remove SSIDs they no longer use, disable auto-join networks, and silence probe requests.
This latter measure has some drawbacks, such as increased battery consumption and a longer time to establish a connection. “Probe requests are plainly observable to everyone around a sending device. Since they can contain sensitive data, they should be sent more carefully and with privacy in mind.” concludes the paper.
New Training: Carry Out Version Control Using Git In this 9-video skill, CBT Nuggets trainer Shawn Powers teaches you how to carry out version control using Git. Gain an understanding of the staging process, global settings, GitHub, and the .gitignore file. Watch this new CompTIA Linux+ training. Watch the full course: CompTIA Linux+ This training includes 52 minutes of training. You'll learn these topics in this skill:
- Understanding the Need for Version Control
- Setting Up Your Git Environment
- Creating a New Git Repository with clone and init
- Understanding Staging and Committing
- Examining the Commit Log and Viewing Past Commits
- Creating and Merging Branches
- Pushing and Pulling Changes to and from a Remote Repository
- Keeping Private Data Local with .gitignore
- Cloning a Repo and Uploading it to a Personal GitHub Account Lab
What is a Git System? Git is a widely adopted open-source version control system used to support the development of software projects. At a high level, Git tracks and stores changes made to an ongoing software project to maintain change history and support collaboration. Git provides a tremendous benefit to project development because, most often, multiple developers are working on a single project. With Git version control, developers can avoid the risk of introducing code conflicts while working on the same codebase. As developers make changes to code, they can save their own checkpoint, or revision, of the code and introduce that checkpoint to other developers or push it into the master working copy of the project. This allows developers to work in parallel on the same project and merge changes into the master as they see fit. This simple yet profound open-source version-control platform has streamlined projects and removed some of the inherent challenges and roadblocks seen within large development teams.
Domain Name System (DNS) is a distributed directory that resolves human-readable hostnames, such as www.dyn.com, into machine-readable IP addresses like 184.108.40.206, pretty basic stuff huh? So why is DNS important and how does it actually work? Join us in this on-demand webinar as we discuss in-depth the phone book of the internet. We will cover: - A short overview on the history of DNS and why it is critical for the Internet to function. - Review what Authoritative and Recursive DNS are and how they work. - How Cloud DNS improves the reliability and functionality of DNS with large Anycast networks, intuitive configuration and zone management, Intelligent DNS and Traffic Management capabilities, RESTful APIs, and 24/7/365 support.
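The split between authoritative data and recursive caching mentioned above can be sketched with a toy resolver. Everything here is illustrative: the `AUTHORITATIVE` table stands in for real authoritative servers, and a real recursive resolver would also honour record TTLs:

```python
# Toy caching resolver: answer from cache when possible, otherwise
# "recurse" to the authoritative source and remember the answer.
AUTHORITATIVE = {"www.example.com": "203.0.113.10"}  # stand-in data

class CachingResolver:
    def __init__(self):
        self.cache = {}
        self.upstream_queries = 0

    def resolve(self, name):
        if name in self.cache:        # cache hit: no upstream traffic
            return self.cache[name]
        self.upstream_queries += 1    # cache miss: ask the authority
        answer = AUTHORITATIVE.get(name)
        if answer is not None:
            self.cache[name] = answer
        return answer

r = CachingResolver()
r.resolve("www.example.com")
r.resolve("www.example.com")
print(r.upstream_queries)  # 1 — the second lookup came from the cache
```

This is the basic reason recursive resolvers improve both latency and reliability: repeated lookups never leave the local cache.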
PKI and Digital Certificates — Your Questions, Answered PKI and digital certificates are critical tools when it comes to protecting your networks and systems. It’s vital to understand the various terms related to PKI, so we’re answering all your questions around it. What Is Public Key Cryptography? Public key cryptography is an encryption technology where the encryption and decryption of data are carried out using separate but related cryptographic keys, one that is kept private and one that is made public. This encryption technology is the basis for public key infrastructure. What Is Public Key Infrastructure? A public key infrastructure (PKI) is a collection of policies, activities, technologies and processes for managing digital certificates and key encryption. PKI is a foundation for transferring information between parties across a network in a secure and encrypted way. PKI makes it possible for individuals and organizations to securely share and transfer information. It is necessary for applications including eCommerce, online banking, private email and other online tasks where encryption is paramount. It ensures the security and protection of electronic communications and data through the use of certificates and private/public key pairs. How Does PKI Work? PKI is based on public key cryptography. This starts with an organization requesting a digital certificate. A trusted Certificate Authority creates a key for the organization linked to the digital certificate. When two parties want to communicate with each other, they check the other party’s key against its digital certificate; this establishes that they are who they say they are. This marks the other party as trusted and means information can be encrypted and passed between the two parties. What Is a Certificate Authority?
A Certificate Authority, also known as a Certificate Service Provider (CSP), is a trusted issuer of digital security certificates that allows for the trusted transmission and receipt of data over networks or the public internet. What Is a Digital Certificate? Certificate authorities provide digital certificates. A digital certificate is a specialized electronic credential that certifies a relationship between a public key and the identity of the key holder. Public keys are part of cryptography and allow for the encryption and decryption of secure information, together with validating the sender and recipient of information. What Is a Qualified Certificate? A Qualified Certificate is a special kind of digital certificate that contains a minimum set of elements specified in European Directive (99/93/EC). It is produced by a qualified CSP, which meets certain specific technical and procedural requirements. These requirements include: - Automated processing indications that the certificate is a qualified certificate for electronic signatures - Information that defines the qualified trust service provider who is issuing the certificate. This information must include: - The service provider’s member state - The name and registration number of the provider - The name of the signatory - Electronic signature creation and validation data - The start and end times for the certificate’s validity - The qualified trust service provider’s unique certificate identity code - The qualified trust service provider’s advanced electronic signature or electronic seal What Is an Advanced Electronic Signature? 
An Advanced Electronic Signature is an electronic signature that is: - Uniquely linked to the signatory - Capable of identifying the signatory - Created using means that the signatory can maintain under his sole control - Linked to the data to which it relates in such a manner that any subsequent change of the data is detectable The requirements that govern electronic signatures are defined under the European regulation for electronic identification and trust services for electronic transactions (eIDAS). What Is a Qualified Electronic Signature? A Qualified Electronic Signature (QES) is a special signature that follows European Directive (99/93/EC). A QES needs to be: - An advanced electronic signature as defined in the directive. Currently, only PKI digital signatures (using asymmetric cryptography) fulfil those requirements. - Based on a Qualified Certificate (QC) issued by a suitably certified certification service provider - Created through a Secure Signature‐Creation Device (SSCD) that meets specific conditions Qualified electronic signatures are a digital equivalent to handwritten signatures. A qualified electronic signature, as defined by eIDAS, must meet three main criteria: - The signatory must be linked and uniquely identified to the signature - The data used to create the signature must be under the sole control of the signatory - It must be possible to identify whether the data that accompanies the signature has been tampered with since the signing of the message What Is the CA/B Forum? The Certification Authority Browser Forum, also known as the CA/Browser Forum, is a voluntary consortium of certification authorities and vendors of internet browser software, operating systems and other PKI-enabled applications. What Is an Extended Validation (EV) SSL certificate? An EV SSL is a certificate that meets the Extended Validation Guidelines (EVGs) produced by the CA/B Forum.
An EV SSL certificate verifies the identity of the website owner, its exclusive use of the domain and the authority of its personnel. Only certification authorities that are audited for compliance with the EVGs may issue an EV certificate.

What Is a Wildcard Certificate?

A wildcard certificate allows you to secure unlimited first-level subdomains on a single domain name. For example, a wildcard certificate with the common name *.yourdomain.com can be used to secure any first-level subdomain of yourdomain.com.

Wildcard certificates do not work for multiple-level subdomains. For example, a wildcard for *.yourdomain.com will not work on www.secure.yourdomain.com or server.name.yourdomain.com.

The advantage of a wildcard certificate is that you only need one certificate to secure multiple subdomains rather than buying and managing multiple certificates. However, some devices do not support wildcard certificates, in which case you will need to use a subject alternative name certificate instead.

What Is a Subject Alternative Name Certificate?

A Subject Alternative Name (SAN) certificate allows one certificate to secure multiple different domain names using the SAN fields. A single SAN certificate can secure multiple external domain and subdomain names at once, even unrelated ones. A SAN certificate is required for some Microsoft products.

To explore more about PKI, download our white paper on the role of PKI in securing enterprise networks. Or learn more about the future of digital certificates in this blog post.

Mrugesh Chandarana is Product Management Director for Identity and Access Management Solutions at HID Global, where he focuses on IoT and PKI solutions. He has more than ten years of cybersecurity industry experience in areas such as risk management, threat and vulnerability management, application security and PKI. He has held product management positions at RiskSense, WhiteHat Security (acquired by NTT Security), and RiskVision (acquired by Resolver, Inc.).
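Returning to the wildcard rules described above, the "single level only" behavior can be sketched in a few lines of code. This helper is purely illustrative and is not part of any real TLS library; production software relies on the hostname-verification logic built into the TLS stack (RFC 6125-style checking).

```python
def wildcard_matches(pattern: str, hostname: str) -> bool:
    """Illustrative single-level wildcard check (not RFC 6125-complete)."""
    pattern, hostname = pattern.lower(), hostname.lower()
    if not pattern.startswith("*."):
        return pattern == hostname
    suffix = pattern[1:]                  # ".yourdomain.com"
    if not hostname.endswith(suffix):
        return False
    label = hostname[: -len(suffix)]      # the part matched by "*"
    # Exactly one non-empty label: "www" matches, "www.secure" does not.
    return bool(label) and "." not in label

print(wildcard_matches("*.yourdomain.com", "www.yourdomain.com"))         # True
print(wildcard_matches("*.yourdomain.com", "www.secure.yourdomain.com"))  # False
print(wildcard_matches("*.yourdomain.com", "yourdomain.com"))             # False
```

Note that the bare domain itself does not match the wildcard either, which is why wildcard certificates typically also list the bare domain as a subject alternative name.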
If you’re like most people, you want to be in shape. And if you’re like most people, you’ve probably tried a bunch of different methods, from dieting to working out to taking weight loss supplements. But have you ever considered using a Fitbit?

Fitbits are wearable devices that track your activity and calories burned. They can help you lose weight and get in shape by providing you with information about how many calories you’re burning each day. And since they come with a built-in tracking system, you can see how your progress is coming along over time. Today we will discuss how Fitbit tracks calories!

What is a Fitbit Tracker and how does it work?

The Fitbit line of products is designed to help people be more active and monitor their physical activity. All Fitbits use an accelerometer to track movement, which is how they count steps, measure distance, and calculate calories burned. Some Fitbit trackers and smartwatches also have a GPS tracker to map routes and monitor elevation. Additionally, many Fitbits can automatically recognize certain exercises, like running or biking.

Users can track their progress on the Fitbit app, which is available for iOS and Android devices. The app allows users to set goals and see a breakdown of their activity by day or week. Additionally, users can add Exercise Shortcuts for workouts that are not automatically recognized. These shortcuts allow users to set goals for time, distance, or calories burned for a specific exercise. Fitbit also uses an altimeter to detect whether you are going up or down, which is how it tracks floors climbed. With all of this data, Fitbits help people be more aware of their physical activity and see improvements over time.

How does Fitbit calculate calories burned?

Fitness trackers like Fitbit use a variety of methods to estimate the number of calories burned, but the most common is to start from the basal metabolic rate.
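Fitbit's exact formula is proprietary, but the idea of a BMR baseline can be illustrated with the Mifflin-St Jeor equation, one widely used published estimate. The choice of equation here is an assumption for illustration, not Fitbit's documented method:

```python
def bmr_mifflin_st_jeor(weight_kg: float, height_cm: float,
                        age_years: float, is_male: bool) -> float:
    """Estimate basal metabolic rate in kcal/day using Mifflin-St Jeor."""
    base = 10.0 * weight_kg + 6.25 * height_cm - 5.0 * age_years
    # The equation adds 5 kcal/day for men and subtracts 161 for women.
    return base + 5.0 if is_male else base - 161.0

# A 30-year-old man, 175 cm tall and weighing 70 kg:
print(round(bmr_mifflin_st_jeor(70, 175, 30, True)))  # 1649
```

Notice how the equation bakes in exactly the factors discussed below: weight, height, age, and gender.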
However, no fitness tracker is perfect, and this method can be somewhat inaccurate because everyone’s basal metabolic rate is different. Another way that Fitbits estimate calories burned is by tracking movement and exercise. Nevertheless, this can also be somewhat inaccurate because not all movements are tracked and not everyone exercises the same amount. As a result, the number of calories burned reported by a Fitbit is often an approximation.

Basal Metabolic Rate in a Nutshell

Your BMR is the energy cost of your basic activities, such as walking or sitting. It varies depending on factors like age, height, gender, and weight. This is part of why fitness trackers are unreliable: not everyone in the same age group, height group, gender category, and weight classification has the same level of exercise. As a result, the tracker is estimating calorie expenditure from data points that may not reflect your actual health.

The basal metabolic rate (BMR) is the number of calories your body needs to exist every day. This covers things like breathing, heart function, and food digestion. But other factors come into play when it comes to computing the total calories you burned on a given day. For example, did you know that your activity level also plays a role? Listed below are some of the factors that affect your daily calorie burn.

Genetics

Your genetics also play a role in the calories you burn. Some people are born with a higher or lower metabolism. This is completely out of their control and there’s not much they can do to change it.

Fitness level

This is probably the most important factor when it comes to the calorie count. If you’re more fit, you’re going to burn more calories than someone who is less fit. This is because your body is more efficient at using energy when it’s in shape.

Age

As you age, your metabolism naturally starts to slow down. This is why it’s harder to shed weight and fat as you get older. Your body just doesn’t burn calories as quickly as it used to.
Weight

The more you weigh, the more calories you’ll burn. This is because it takes more energy to move a larger body.

Gender

Men generally have a higher metabolism than women. This is because they have more muscle mass, which burns more calories than fat.

Height

The taller you are, the more calories you’ll burn. This is because it takes more energy to move a taller body.

So, as you can see, many factors go into daily calorie burn. Your BMR is just one of them. But it’s still an important factor to consider, especially if you’re trying to get fit. Trying to lose weight without taking your basal metabolic rate into account is like trying to hit a target without knowing where the target is. You might get close, but you’re not going to hit it every time.

If you’re looking to achieve your fitness goals, it’s important to calculate your BMR and use that number as a starting point. From there, you can start to make changes to your diet and exercise routine that will help you reach your goals. And don’t forget to factor in your activity level, genetics, and fitness level; they all play a role in estimating your calorie burn.

How does this information help me lose weight?

The information from the Fitbit can help you lose weight in two ways. First, it can help you to be more mindful of how many calories you’re burning each day. Second, it can help you to set goals and track your progress over time. This information can be extremely helpful in reaching your fitness goals.

How to set up your Fitbit

To set up your Fitbit, you’ll need to download the Fitbit app and create an account. Once you’ve created an account, you can connect your Fitbit to your phone and begin tracking your activity. To connect your Fitbit to your phone, open the Fitbit app and sign in. Tap the “Devices” tab at the bottom of the screen, then tap “Add a Device.” Your phone will begin scanning for your Fitbit. Once it’s found your Fitbit, you’ll be prompted to enter your username and password.
Once you’ve connected your Fitbit to your phone, you can begin tracking your activity. The app will track your steps, distance, active minutes, and calories burned. You can also track your weight, water intake, and food intake. The app will also show you how many calories you’ve burned each day and how many you have left to burn. You can use this information to set goals and track your progress over time.

Why use a Fitbit

I use a Fitbit because it helps me to be more mindful of how many calories I’m burning each day. It also helps me to set goals and track my progress over time. The information from Fitbit can be extremely helpful in reaching my weight loss goals.

The benefits of using a Fitbit

There are a number of benefits to using Fitbit devices, including:

- Helping you to be more mindful of the total calories you’re burning each day.
- Helping you to set goals and track your progress over time.
- Providing you with information about how many calories you’re burning.
- Allowing you to connect with other Fitbit users and compare your progress.
- Helping you to stay motivated and on track with your fitness goals.
- Monitoring other health metrics like heart rate, oxygen levels, and step counts.

Drawbacks of using a Fitbit

There are a few drawbacks to using Fitbit devices, including:

- The information from a Fitbit can be inaccurate.
- Fitbit devices often do not track all movements.
- Not everyone exercises the same amount.
- The number of calories burned reported by Fitbits can be inaccurate.
- Fitbits can be expensive.

Final thoughts on the article

Fitbit trackers estimate the number of calories burned through a combination of activity tracking, metabolic rate estimation, and calorie intake logging. While not 100% accurate, they provide a good starting point for people looking to lose weight. Additionally, Fitbits can monitor other health metrics like heart rate, oxygen levels, and step counts.
There are a few drawbacks to using Fitbits, like the inaccuracy of calorie tracking and the expense, but the benefits outweigh the drawbacks for many people. I hope this information can be a guide to you and help you on your fitness journey.
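The article's point that activity level must be layered on top of BMR is often captured with rule-of-thumb activity multipliers, roughly 1.2 for a sedentary lifestyle up to 1.9 for a very active one. The exact values vary by source and are assumptions here for illustration, not Fitbit's method:

```python
# Rule-of-thumb activity multipliers; published sources differ slightly.
ACTIVITY_FACTORS = {
    "sedentary": 1.2,
    "lightly_active": 1.375,
    "moderately_active": 1.55,
    "very_active": 1.725,
    "extra_active": 1.9,
}

def estimated_daily_burn(bmr_kcal: float, activity_level: str) -> float:
    """Scale a BMR estimate into an estimate of total daily calories burned."""
    return bmr_kcal * ACTIVITY_FACTORS[activity_level]

# A BMR of 1650 kcal/day for a moderately active person:
print(f"{estimated_daily_burn(1650, 'moderately_active'):.1f}")  # 2557.5
```

This is exactly why two people with the same BMR can burn very different amounts in a day: the multiplier, not the baseline, is where lifestyle shows up.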
Earlier this year, the UK enshrined in law a target to slash carbon emissions by 78 percent by 2035 as part of a bid to achieve net-zero by 2050. Achieving the UK’s target, and hopefully exceeding it, will require taking advantage of every possible tool. Susanne Baker, Associate Director, Climate, Environment and Sustainability at TechUK, said: “It is critical that Government, regulators and business work together to fight climate change and achieve net-zero by 2050.” “Digital technology has a huge potential to help us achieve our climate goals. Digital is already emerging as a key tool to support the net-zero transition, and as this report shows, existing digital technology can have an even more significant impact in reducing our carbon footprint across the economy. With COP26 around the corner, now is the time to act.” Technologies including the IoT and 5G will be key in delivering the efficiency improvements that reduce our carbon footprints while minimising the impact on people’s ability to live, work, and travel without significant disruption. The new report – ‘Connecting for Net Zero: addressing the climate crisis through digital technology’ – focuses on three key but high-polluting sectors: agriculture, manufacturing, and transport. - Agriculture ensures there’s enough produce for all citizens but, in the process, has historically created a large amount of carbon. The report estimates that around 4.8 million tonnes of CO2e – equivalent to producing three billion pints of milk – could be saved annually by adopting smart sensors for improving the monitoring of crops, soil, fertiliser, feed, and water to improve resource efficiency and reduce waste. - Manufacturing helps to improve lives but does so at a significant environmental cost. Around 3.3 million tonnes of CO2e could be saved annually – equivalent to producing almost 600,000 cars – by combining emerging technologies like AI and smart building solutions to improve production lines and energy efficiency. 
- Transport is essential for visiting loved ones, work, daily tasks, and seeing the world, but it’s also the biggest polluter. Figures from the Department for Transport show “little change over time in transport emissions”, with only a three percent reduction between 2009 and 2019. Around 9.3 million tonnes of CO2e could be saved annually – the equivalent of taking two million cars off the road – through solutions including enhanced telematics, which enable logistics companies to shorten delivery routes and cut idle time through intelligent route planning.

Andrea Dona, Chief Network Officer at Vodafone UK, said: “Significantly reducing emissions from traditionally carbon-intensive sectors – such as manufacturing, transport and agriculture – is one of the biggest opportunities of the next decade.

“Businesses and government must work together to drive the adoption of technology that will maximise efficiencies and help the UK decarbonise more rapidly to meet vital environmental goals.”

Vodafone, for its part, recently announced that its European operations are now powered by 100 percent renewable electricity. The operator plans to achieve net-zero for its UK operations by 2027 and globally by 2040. Over the last year, Vodafone UK estimates that 54 percent of its 123 million IoT connections have enabled customers to reduce their emissions.

Want to find out about digital twins from executives and thought leaders in this space? Find out more about the Digital Twin World event, taking place on 9 November 2021, which will explore augmenting business outcomes in more depth and the industries that will benefit.
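Taken together, the report's three sector estimates add up to roughly 17.4 million tonnes of CO2e per year:

```python
# Annual CO2e savings estimates from the report, in millions of tonnes.
savings_mt = {
    "agriculture": 4.8,
    "manufacturing": 3.3,
    "transport": 9.3,
}

total = sum(savings_mt.values())
print(f"Total potential savings: {total:.1f} million tonnes CO2e per year")
# Total potential savings: 17.4 million tonnes CO2e per year
```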
Initial specifications of asynchronous transfer mode (ATM) were developed by the ITU (International Telecommunication Union) back in 1980. The goal was to develop a communications protocol based on plesiochronous digital hierarchy (PDH) and synchronous digital hierarchy (SDH) which would allow the integration of various services. ATM not only supports line-switched data transfer, but also packet-based protocols such as frame relay or internet protocol.

Development of ATM was initially driven by telecommunications companies and the U.S. Department of Defense. In 1991 the ATM manufacturers set up the ATM Forum, which is still the developer and submits specifications to ITU-T for standardisation.

Characteristic features of the asynchronous transfer mode protocol are its virtual connections and the cell structure used in data transfer. One reason for the relatively small cells of 53 bytes is the minimisation of jitter when transmitting multiplexed data streams. Large packets in a data transfer can therefore no longer block voice packets on a low-speed line, which minimises the jitter of voice transmission.

ATM uses virtual connections, which may be either temporary or permanent. The standard specifies virtual paths (VPs) and virtual channels (VCs) for this purpose. Packets with identical VPIs (Virtual Path Identifiers) and VCIs (Virtual Channel Identifiers) follow the same route through the network. Different virtual connections are also used to multiplex various services within the network.

Traffic management and traffic policing are important components of ATM. As soon as an ATM connection has been established, the nodes along the route receive information on the traffic class of the connection. This mechanism allows bandwidth on the network to be reserved for virtual connections with a defined quality of service. A network traffic contract for telephone systems, for example, can be fulfilled by classifying packets, queuing and policing.
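Each 53-byte ATM cell consists of a 5-byte header and 48 bytes of payload. At the user-network interface (UNI), the header carries a 4-bit generic flow control field, the 8-bit VPI, the 16-bit VCI, a 3-bit payload type, a 1-bit cell loss priority flag and an 8-bit header error control (HEC) byte. Here is a sketch of packing and parsing that layout; the HEC, normally a CRC-8 over the first four header bytes, is left as zero for simplicity:

```python
def pack_uni_header(gfc: int, vpi: int, vci: int, pti: int, clp: int) -> bytes:
    """Pack a 5-byte ATM UNI cell header (HEC left as 0 for simplicity)."""
    return bytes([
        (gfc << 4) | (vpi >> 4),                     # GFC + VPI high nibble
        ((vpi & 0x0F) << 4) | (vci >> 12),           # VPI low nibble + VCI bits 15-12
        (vci >> 4) & 0xFF,                           # VCI bits 11-4
        ((vci & 0x0F) << 4) | (pti << 1) | clp,      # VCI bits 3-0 + PTI + CLP
        0x00,                                        # HEC: CRC-8 in a real cell
    ])

def parse_uni_header(header: bytes) -> dict:
    """Recover the routing fields from a 5-byte UNI header."""
    return {
        "gfc": header[0] >> 4,
        "vpi": ((header[0] & 0x0F) << 4) | (header[1] >> 4),
        "vci": ((header[1] & 0x0F) << 12) | (header[2] << 4) | (header[3] >> 4),
        "pti": (header[3] >> 1) & 0x07,
        "clp": header[3] & 0x01,
    }

fields = parse_uni_header(pack_uni_header(gfc=0, vpi=5, vci=100, pti=0, clp=1))
print(fields["vpi"], fields["vci"], fields["clp"])  # 5 100 1
```

Cells with the same VPI/VCI pair follow the same virtual connection through the network, which is why a switch only needs to inspect these five header bytes to forward each cell.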
What Can I Do With A Bachelor’s in Cybersecurity?

Fact: Bachelor’s Degrees in Cybersecurity Have an Excellent Career Outlook

Governments, corporations, and private citizens all need the kind of information assurance that comes from trained cyber professionals who can monitor, protect, and serve the best interests of their online accounts, networks, and data. That’s why cybersecurity is one of the fastest-growing professions in the world. As cyber criminals find more and more ways to infiltrate our computer systems and digital infrastructure every day, the demand for cyber professionals will only grow, especially for those who possess fundamental skills and specialized knowledge in computer network architecture, digital forensics, and intelligence and investigation, all of which learners can gain with a bachelor’s degree in cybersecurity.

FYI: 3 Common Bachelor’s Degrees in Cybersecurity

Bachelor of Science (B.S.) in Cybersecurity

A Bachelor of Science in Cybersecurity will train learners with the technical skills and theoretical knowledge they need to earn careers as cybersecurity professionals in a number of areas, including but not limited to cybercrime, cyberwarfare, cyberlaw, biometrics, cryptography, digital forensics, homeland security, and wireless or mobile defense mechanisms. Many affordable B.S. programs in Cybersecurity also provide preparation to sit for certification in the learner’s preferred area of cybersecurity specialization.

Bachelor of Science (B.S.) in Information Assurance

A Bachelor of Science in Information Assurance will typically train learners to tackle tough subject areas that include but are not limited to hackers, data protection, internet security, network security, infrastructure design, e-commerce, and digital forensics. Some affordable B.S. programs in Information Assurance also provide preparation to sit for certification in the student’s preferred area of specialization in cybersecurity.

Bachelor of Science (B.S.) in Computer Science

A Bachelor of Science in Computer Science often requires learners to sample from a wide range of classes in computer science, including but not limited to computer programming, algorithms, data structures, computer architecture, and information theory. This degree program is ideal for prospective cybersecurity professionals who want to experience a full range of computational awareness and build a strong foundation in the basics that underpin cybersecurity. Some affordable B.S. programs in Computer Science also provide preparation to sit for certification in the student’s preferred area of specialization in cybersecurity.

FYI: Job Prospects for Cybersecurity Bachelor’s Degree-Holders

The Bureau of Labor Statistics projects a much faster than average job growth rate between now and 2024 for cybersecurity jobs that require a bachelor’s degree, which include job titles like Computer Network Architect and Information Security Analyst. Professionals who work either of these jobs will be at the top of the IT industry, working to monitor and secure digital infrastructure with their field-specific knowledge of cybersecurity.
There are quite a number of other jobs that a bachelor’s degree in cybersecurity can prepare learners to attain, including those on the following list:

Cybersecurity Jobs that a Bachelor’s Degree Can Prepare You To Attain, and their Median Salaries

- Computer Support Specialist: $52,160
- Computer Programmer: $79,840
- Database Administrator: $84,950
- Information Security Analyst: $92,600
- Computer and Information Research Scientist*: $111,840

Source: Bureau of Labor Statistics

*Job titles with an asterisk can also require a master’s degree.

Tip: Possess High-Demand Skills and Pass Certification Exams

With the future of cybersecurity so bright and expansive—particularly via the online format that is its natural habitat, considering cybersecurity professionals work and communicate via connected computer systems—the skills that a bachelor’s degree in cybersecurity will prepare online learners to possess are in high demand. The U.S. Department of Homeland Security, one of the world’s largest employers of cybersecurity professionals, lists the following areas of expertise among the most in-demand skillsets for new employees and partners to possess:

- Cyber Incident Response
- Cyber Risk and Strategic Analysis
- Vulnerability Detection and Assessment
- Intelligence and Investigation
- Networks and Systems Engineering
- Digital Forensics and Forensics Analysis
- Software Assurance

These skills are also heavily tested. Cybersecurity certifications have increased immensely in popularity over the past several years, with over two dozen certifying bodies providing testing to degree-holders who need professional credentials to kickstart their careers. And as more cybersecurity programs emerge to meet growing demand, the certification and regulation processes become more rigorous, meaning it’s more important than ever to earn a bachelor’s degree that will prepare you for certification properly.

Want a little more job security? See What Can I Do with a Master’s in Cybersecurity? for more information on how completing a master’s degree in cybersecurity can help ensure you pass all your examinations and secure the job you want.
The use of digital signatures is becoming more commonplace both in the workplace and for personal use. Digital signing allows organizations to streamline signature and approval processes, eliminate paper and establish an audit trail. This document was prepared by IdenTrust to help engineering firms and agencies understand how to prepare and accept documents, such as plans and CADD drawings, which include digital signatures. Information provided in this document includes:

- Background about the ESIGN Act
- Digital signing vs. electronic signing
- Overview of identity-based digital certificates
- Using digital seals
- How digital signatures are created using a digital certificate
- How digital signatures are validated

The ESIGN Act Authorizes the Use of Digital Signatures

The Electronic Signatures in Global and National Commerce Act (ESIGN, Pub.L. 106–229, 114 Stat. 464, enacted June 30, 2000, 15 U.S.C. ch. 96) is a United States federal law passed by the U.S. Congress to facilitate the use of electronic records and electronic signatures in interstate and foreign commerce by ensuring the validity and legal effect of contracts entered into electronically. Although every state has at least one law pertaining to electronic signatures, it is the federal law that lays out the guidelines for interstate commerce.

The general intent of the ESIGN Act is spelled out in its very first section (101.a): a contract or signature “may not be denied legal effect, validity, or enforceability solely because it is in electronic form”. This simple statement provides that electronic signatures and records are just as good as their paper equivalents, and therefore subject to the same legal scrutiny of authenticity that applies to paper documents.

Digital Signing vs. Electronic Signing

It must be noted that there are important distinctions between a digital signature and an electronic signature. This is shown in the comparison below.
Digital Signing:

- A legal term
- Tied to a specific individual via a PKI-based digital certificate
- Created using a digital algorithm that binds the document using a certificate, resulting in a unique “fingerprint”: a “hash” of the content being signed, so any tampering will be evident
- Non-repudiable and auditable
- Required when using a digital seal

Electronic Signing:

- A functional term
- Not technically bound to a specific individual or validation process
- Created using options such as typed names, scanned images or a “click wrap” agreement on a web site
- Legal, but not easily audited and can be repudiated
- Cannot be validated through electronic means

Overview of Identity-Based Digital Certificates

Digital signing requires that the signer use a credential (such as a digital certificate) that is bound to his or her identity. Binding the identity of a signer to the credential that is used for signing creates assurance that the individual who is signing a document really is who they say they are. When an identity-based credential is used, the signature is considered non-repudiable and is legally binding.

IdenTrust issues certificates under the IdenTrust Global Common (IGC) program, a policy owned and managed by IdenTrust and cross-certified with the Federal Bridge PKI program, for use in digitally signing and sealing plans and other documentation. IGC certificates are considered identity-based because the identity of the individual who applies for the certificate is “vetted” by validating that the information provided during the application process is accurate. Only then can a digital certificate be issued.

Using Digital Seals

Many federal, state and local agencies now accept digital professional seals (such as engineer, architect, surveyor and notary seals) in conjunction with a digital signature. In fact, some State Department of Transportation (DoT) agencies and cities now require the use of digital signing and sealing on plan submissions.
Digital seals can be purchased from various vendors who provide traditional rubber stamps and embossing seals. A digital seal is easily incorporated into a digital signature that is produced when signing with an identity-based digital certificate.

How Digital Signatures are Created Using a Digital Certificate

Typically, documents that are submitted with a digital signature and a professional seal are created using the digital signature function that is incorporated into Adobe®. A digital signature is created by using an identity-based certificate. The digital signature can be configured to incorporate an official digital seal to replace the traditional stamped or embossed seal paired with a wet ink signature. A series of graphics in the original document illustrates how digital signing is accomplished in Adobe®.

Adobe® also has the ability to validate digital signatures, including validation of the certificate used to create the signature. Each time a signed Adobe® PDF is opened, the application automatically validates the signature. If more than one digital signature has been applied to the document, then all signatures are validated when opened.
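The “fingerprint” idea behind digital signing can be illustrated with a hash computation. This sketch only shows the hashing step; in a real PKI signature the digest is additionally encrypted with the signer's private key so that anyone holding the certificate's public key can verify both the signer and the content. The document text here is a made-up example:

```python
import hashlib

def fingerprint(document: bytes) -> str:
    """Return a SHA-256 digest of the document's contents."""
    return hashlib.sha256(document).hexdigest()

original = b"Bridge plan rev. 3: girder spacing 2.4 m"
signed_digest = fingerprint(original)   # embedded in the signature at signing time

# Any later modification, however small, changes the digest,
# so tampering becomes evident when the digest is recomputed.
tampered = b"Bridge plan rev. 3: girder spacing 2.7 m"
assert fingerprint(original) == signed_digest
assert fingerprint(tampered) != signed_digest
```

This is why a digitally signed PDF can be validated automatically every time it is opened: the application simply recomputes the hash and compares it with the one protected inside the signature.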
“The fact that a company hasn’t noticed a breach doesn’t mean that it hasn’t been breached” – itgovernance.co.uk

An APT is a type of malware which uses social engineering or various phishing techniques to gain access to a network. Once the malware has gained access, it will conceal itself by hiding in unsuspected files, where it can remain undetected for weeks, months, or even years, in which time it is able to steal or compromise sensitive data. Such attacks are hard to defend against, as traditional security tools such as antivirus software, firewalls, and IPS/IDS are not capable of detecting them. As such, a new set of solutions is required.

Cyber-attacks are rapidly evolving and diversifying. They are becoming more targeted, covert, and persistent. More than 100,000 new types of malware are discovered every day, and according to a report by FireEye, it takes organisations an average of 205 days to notice they have been compromised. This is clearly far too long. The majority of attacks come from phishing emails that impersonate an organisation’s IT department or anti-virus vendor. As a result of this new wave of sophisticated attacks, there has been a shift towards a new approach that focuses on real-time threat detection, automated threat response and advanced data analytics.

What is Advanced Threat Protection (ATP)?

Not to be confused with APT (as mentioned above), Advanced Threat Protection (ATP) is a category of security solutions designed to defend against attacks that target sensitive data. Such solutions are available as either software or managed services. They typically include some combination of endpoint and device monitoring, email security gateways, and a centralized management console that aggregates log data from multiple sources and provides alerts and reports based on important system events.
SIEM and its Drawbacks

Even if Security Information and Event Management (SIEM) solutions alert on suspicious activity that may be associated with a cyber-attack, a huge volume of audit logs will have been gathered in the process, and administrators will have to sift through this to find the exact event about which they were notified. There is a large amount of data to deal with, and much of the time administrators will not know where to look for a critical event.

Whilst SIEM solutions can be helpful, they don’t come without their drawbacks. It’s important that you consider both the advantages and disadvantages of deploying a SIEM solution. Some of the drawbacks are as follows:

1. The data analysis you receive from a SIEM solution can be very difficult to draw any real meaning from. It contains far too much noise and can be hard to understand.

2. SIEM solutions do not necessarily provide organisations with the audit data they require to meet regulatory compliance or ensure IT security. It’s hard to use a SIEM solution when you want to quickly find the data necessary for meeting PCI compliance, for example. SIEM reports sometimes need to be adapted for non-technical staff or external regulators.

3. SIEM solutions are expensive. There are serious costs associated with deploying SIEM solutions and training staff to operate them.

Is there an alternative?

One way many organisations are overcoming the limitations of SIEM solutions and giving themselves better visibility into critical changes taking place in their environment is with Lepide Data Security Platform. Lepide Data Security Platform enables organisations to detect, report and respond to changes to their critical data and systems. It enables them to keep track of permission changes, user account modifications and deletions, inactive user accounts, failed logon attempts and privileged mailbox access, and provides reminders about password resets and when passwords are due to expire.
On top of this, it is able to generate real-time alerts and over 270 pre-set reports, which can be used to satisfy regulatory requirements. It helps organisations cut through the noise associated with SIEM solutions and provides them with immediately actionable reports for all manner of security, IT operations and compliance challenges.
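Detections like the failed-logon tracking mentioned above boil down to simple aggregation over audit events. Here is a minimal, tool-agnostic sketch; the event format is invented for illustration, and real products consume far richer audit records with timestamps, source hosts and correlation rules:

```python
from collections import Counter

def failed_logon_alerts(events, threshold=5):
    """Return accounts whose failed-logon count meets the threshold.

    `events` is an iterable of (username, outcome) tuples -- a hypothetical
    stand-in for parsed audit-log records.
    """
    failures = Counter(user for user, outcome in events if outcome == "failure")
    return sorted(user for user, count in failures.items() if count >= threshold)

events = (
    [("alice", "success")]
    + [("bob", "failure")] * 6
    + [("carol", "failure")] * 2
)
print(failed_logon_alerts(events))  # ['bob']
```

The value of a dedicated platform over raw SIEM output is precisely that aggregations like this arrive as ready-made alerts and reports, rather than as thousands of individual log lines to sift through.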
Healthcare is changing, and it all comes down to data. Leaders in healthcare seek to improve patient outcomes, meet changing business models (including value-based care), and ensure compliance while creating better experiences. Data & analytics represents a major opportunity to tackle these challenges. Indeed, many healthcare organizations today are embracing digital transformation and using data to enhance operations. In other words, they use data to heal more people and save more lives.

How can data help change how care is delivered?

Value-based care is a relatively new concept, growing in popularity and transforming the business model. It introduces a new incentivization structure for physicians, which rewards them for the value of their care instead of the quantity of care. The goal is to support better patient outcomes. Hospitals and pharmacies, too, are increasingly considering this model. Leaders are asking how they might use data to drive smarter decision-making to support this new model and improve medical treatments that lead to better outcomes.

Yet this is not without risks. The protected health information (PHI) and personally identifiable information (PII) that providers of healthcare and clinical trials manage is subject to privacy laws, like HIPAA, CCPA, and GDPR, which mandate how such data can be used. This data is also a lucrative target for cyber criminals. Healthcare leaders face a quandary: how to use data to support innovation in a way that's secure and compliant?

Data governance in healthcare has emerged as a solution to these challenges. It defines how data can be collected and used within an organization, and empowers data teams to:
- Maintain compliance, even as laws change
- Uncover intelligence from data
- Protect data at the source
- Put data into action to optimize the patient experience and adapt to changing business models

What is Data Governance in Healthcare?
Data governance in healthcare refers to how data is collected and used by hospitals, pharmaceutical companies, and other healthcare organizations and service providers. It combines people, process, technology, and data within a system founded on transparency and compliance. In this way, it builds human trust in the data while ensuring the data is used properly.

An active data governance framework supports data-driven decision-making. This, in turn, empowers data leaders to better identify and develop new revenue streams, customize patient offerings, and use data to optimize operations. Whether it's an out-patient clinic, a drug discovery and clinical research lab, or any other organization that provides treatment, tests, rehabilitation, or therapy – data security is critical. Healthcare organizations need to manage and protect sensitive information in a consistent, secure, and organized way.

As Michelle Hoiseth, Chief Data Officer of Parexel, a global provider of biopharmaceutical services, said in a recent interview: "We needed to understand how we could leverage data that was forming in electronic medical record systems, claim systems, and pharmacy claims systems to really see the impact of new treatments." For Michelle, step one was "appreciating that your data is an asset to enable your business."

To make good on this potential, healthcare organizations need to understand their data and how they can use it. This means establishing and enforcing policies and processes, standards, roles, and metrics. These systems should collectively maintain data quality, integrity, and security, so the organization can use data effectively and efficiently.

Why Is Data Governance in Healthcare Important?

Healthcare data is valuable and sensitive, so it must be protected. This is why healthcare organizations are subject to strict compliance mandates.
These mandates ensure that PHI and PII are protected and managed properly, so that patients are protected in the event of data breaches. Yet this same data is critical to improving patient outcomes. It can guide adaptation to changing business models and aid innovation, creating better patient experiences. But again, how you work with this data is subject to compliance scrutiny. The people working with it need guidance if they're to use it appropriately. Here is a closer look at some of the leading reasons your team should implement data governance to enable you to use and protect this data:

Ensures High-Quality Data Analysis

Healthcare organizations often have many different databases to manage their diverse data, and often have multiple databases handling the same information. However, grouping that data intelligently and making sure the right data is being properly used is a challenge. Intellectual property, like medical research data, often contains PHI and PII. For example, in large databases for pharmaceutical companies, medical trial data may include both the pharmaceutical research and the study population's personal information. Anonymized versions of that data may also be generated and shared, creating multiple data sources with the same information.

Hospitals, too, often collect PII and PHI in multiple systems. Duplicative data is common, as a patient may see more than one specialist or have visits in more than one facility. Storing the same data in multiple places can lead to:
- Human error: mistakes when transcribing data reduce its quality and integrity
- Multiple data structures: different departments use distinct technologies and data structures

Data governance is the solution to these challenges. How can you improve the patient journey when you don't have accurate data from every touchpoint of that journey? How can you analyze business models without great operational data from across the organization?
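As a small illustration of the duplicate-record problem described above, the Python sketch below collapses patient records collected in multiple systems using a naive (name, date of birth) key. All records and field names are invented, and real master-data matching is far more sophisticated than this.

```python
def normalize(record):
    # Naive match key: lowercase, whitespace-trimmed name plus date of birth.
    return (record["name"].strip().lower(), record["dob"])

def deduplicate(records):
    """Keep the first record seen for each (name, dob) key."""
    seen, unique = set(), []
    for r in records:
        key = normalize(r)
        if key not in seen:
            seen.add(key)
            unique.append(r)
    return unique

# Illustrative records from two departments plus the lab.
records = [
    {"name": "Ana Ruiz ", "dob": "1980-02-01", "source": "cardiology"},
    {"name": "ana ruiz", "dob": "1980-02-01", "source": "radiology"},
    {"name": "Ben Okafor", "dob": "1975-07-12", "source": "lab"},
]
print(len(deduplicate(records)))  # 2
```

Even this toy version shows why governance matters: the choice of match key is a policy decision, and a wrong one either merges distinct patients or leaves duplicates behind.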
Improving the patient experience requires combining this data to put it into action. Data governance not only provides a transparent framework for correct usage; it ensures quality data forms the foundation of all insights. A mountain of duplicate data can open the door to unintentional non-compliance. It can even diminish the overall quality of the data over time.

Meet Compliance Requirements

State, federal, and regional governments all understand that cybercriminals want PHI and, increasingly, PII. To protect this information, legislative bodies mandate strict rules for handling this sensitive data. Today, lawmakers impose larger and larger fines on the organizations handling this data that don't properly protect it. And more and more companies are handling such data. No matter where a healthcare organization is located or the services it provides, it will likely host data subject to a number of regulatory laws. Some important compliance regulations include:
- Health Insurance Portability and Accountability Act (HIPAA): US federal law protecting patient data privacy
- General Data Protection Regulation (GDPR): European Union law protecting data subject privacy
- California Privacy Rights Act (CPRA): US state law protecting consumer personal information privacy
- Payment Card Industry Data Security Standard (PCI DSS): Payment industry compliance requirement protecting cardholder data

To meet compliance requirements, healthcare organizations need to know where all sensitive information is located and be able to prove it's governed effectively.

Protect From Cybercriminals

Cybercriminals have nearly always targeted PHI and are increasingly focusing on healthcare. Whether they want to steal identities, sell data, or hold information hostage, these actors recognize that such data has financial value.
The 2021 Data Breach Investigations Report found that in healthcare:
- 61% of data breaches were caused by external actors
- 91% of data breaches were financially motivated
- 66% of data breaches involved personal information
- 55% of data breaches involved medical information

An overabundance of data can challenge an entity's ability to protect it. Indeed, an organization can't protect information if it doesn't know what it has or where it lives. Clear data governance policies and processes start with implementing a data catalog and labeling private data accordingly. This knowledge empowers data leaders to take appropriate action to both protect the data and use it compliantly.

5 Steps for Creating Effective Data Governance in Healthcare

As healthcare organizations grow, they need scalable data governance practices to both keep private data secure and remain financially competitive. From engaging in research to providing emergency care, healthcare organizations must ensure that they can efficiently and effectively use data.

1. Determine Business Goals and Objectives

Healthcare organizations have many data use cases. At the outset, the organization must decide how data governance fits into the business goals and define objectives accordingly. For example, some goals might include:
- Determine competitive strategies
- Increase patient engagement
- Decrease adverse medication effects
- Increase patient telehealth services usage
- Reduce audit times
- Mature security and privacy posture

Each of these goals will require different types of information. To use that information compliantly, data teams must work within a transparent governance framework.

2. Identify, Categorize, and Prioritize Your PHI

PHI is arguably the highest-risk data that a healthcare company manages. In order to stay compliant and provide the best patient care possible, identifying and categorizing PHI should be a top data governance priority.
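To make step 2 concrete, here is a minimal Python sketch of pattern-based sensitive-data discovery: it scans sample column values for US-SSN-like and phone-like strings and returns labels for anything it finds. The regexes and labels are purely illustrative; production PHI classifiers use far richer detection than two patterns.

```python
import re

# Illustrative detectors only; not an exhaustive or production rule set.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
}

def classify_column(values):
    """Return the set of sensitive-data labels found in a column's sample values."""
    labels = set()
    for v in values:
        for label, pattern in PATTERNS.items():
            if pattern.search(v):
                labels.add(label)
    return labels

sample = ["123-45-6789", "call 555-867-5309"]
print(sorted(classify_column(sample)))  # ['phone', 'ssn']
```

Labels produced this way can feed the data catalog mentioned above, so downstream access rules can key off a column's classification rather than its name.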
It's also important to make sure that information is properly categorized across all areas of the organization, including:
- Clinical data
- Lab data
- Payment processing data

Where data lives and how it's classified will determine how it's governed. Compliance audits require that sensitive data be marked accordingly, with evidence that demonstrates usage in line with regulatory law.

3. Assess and Assign Privileges and Permissions

Privileges and permissions define who can access what data, and what they may do with it. As a best practice, data access should be governed according to the principle of least privilege. This means limiting access to information as much as possible without getting in the way of someone's ability to do their job. The healthcare industry has a growing number of interoperability standards, which dictate how information is stored and shared between devices. Before you assign privileges, it's important to:
- Define the types of data that different areas need to access
- Define who within a functional area needs to access the data
- Outline how they can access the data, including details about devices, geographic locations, and time of day

For example, a phlebotomist needs to know the patient's name and date of birth. However, they may not need access to the patient's entire medical history. Too much access increases the risk that data can be changed or stolen.

4. Remove Low-Quality, Unused, or "Stale" Data

In healthcare especially, data integrity is incredibly important. Low-quality, unused, or "stale" data can negatively impact research by skewing findings. From a physician's perspective, bad data can lead to care issues. For example, outdated patient prescription information can impact a doctor's diagnosis and treatment plan. Keeping data fresh helps to achieve both care and operational goals.

5. Assign Key Roles and Train Employees

Finally, it's important to have the right people with the right training in charge of data governance.
To do this, you should create teams based on role, including practitioners, IT team members, and finance. Accountability is important. Every functional area that manages sensitive information needs to ensure that its data managers, data owners, and data analysts understand their responsibilities. Data owners are in charge of their data, and they must know who has access and who should have access. In addition, adding a Chief Data Officer (CDO) can help maintain best data governance practices. The CDO acts as a point of contact within the organization for the data managers maintaining the daily activities.

Monitor, Measure, and Continuously Improve

At this point, you should refer back to the goals you set in Step 1. If your goal was to increase patients' telehealth services usage, for example, you'll need benchmarks of current usage to measure change over time. Dashboards are a useful means to track such change. Once you have baseline metrics, you can monitor change over time and measure the impact of business efforts on achieving the goals you've set. This takes time, attention, and patience! Don't feel frustrated if you don't see results immediately.

Finally, data governance is a cycle. As you measure your progress, you may spot areas where you could get better. It's important you make those changes as you go. This ensures you continuously improve your governance process.

Implement Data Governance in Healthcare with Alation

Whether your healthcare organization is looking to optimize patient care, improve research processes, or meet compliance requirements, data governance is mission-critical. Alation's data catalog creates a standardized view of assets and ensures consistent data quality. Alation's Data Governance App then helps you create the policies and procedures needed to make sure that the right data is used and that it is used properly.
For Michelle Hoiseth, Chief Data Officer of Parexel, this approach now means that "People see who is accountable for that data, the viability or quality of that data, classification or other limitations of use. They are then able to create a direct connection with people whose job it is to help them get their data needs met, no matter who you are or where you are in the business."

By consolidating data in a single location and making sure it is used properly, everyone in healthcare – researchers, clinical trial teams, and care providers – can make better-informed decisions. Better decisions improve outcomes for patients, help navigate changing business environments and value-based care and, overall, improve the experiences for everyone in the organization.
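The least-privilege principle from step 3 above (the phlebotomist who needs a name and date of birth, but not a full medical history) can be sketched as a simple field projection. The role-to-field mapping below is invented for illustration and is not any product's access model.

```python
# Hypothetical role-to-field mapping: each role sees only what it needs.
ROLE_FIELDS = {
    "phlebotomist": {"name", "date_of_birth"},
    "physician": {"name", "date_of_birth", "history", "prescriptions"},
}

def visible_record(role, record):
    """Project a patient record down to the fields the role may see."""
    allowed = ROLE_FIELDS.get(role, set())
    return {k: v for k, v in record.items() if k in allowed}

patient = {
    "name": "Ana Ruiz",
    "date_of_birth": "1980-02-01",
    "history": "(full medical history)",
    "prescriptions": "(active prescriptions)",
}
print(sorted(visible_record("phlebotomist", patient)))  # ['date_of_birth', 'name']
```

An unknown role sees nothing at all, which is the safe default least privilege implies: access is granted explicitly, never inherited by accident.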
Administration > General Settings > Setup > DNS

This tab lists the Domain Name Servers that the appliances reference. A Domain Name Server (DNS) uses a table to map domain names to IP addresses so you can reference locations by a domain name, such as mycompany.com, instead of using the IP address. Each appliance can support up to three name servers.

| Field | Description |
|---|---|
| Appliance Name | Name of the appliance. |
| Primary DNS IP addr | IP address of the DNS server the system uses first. |
| Secondary DNS IP addr | IP address of the DNS server the system uses second. |
| Tertiary DNS IP addr | IP address of the DNS server the system uses last. |

To add domain name servers, click the Edit icon. In this dialog box, you can configure up to three name servers: enter the DNS server IP addresses, and then click Add to apply them.
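A configuration backing a screen like this might be validated along the following lines. This Python sketch is purely illustrative and is not the appliance's actual API; it only enforces the documented limit of three name servers and checks that each entry is a valid IP address.

```python
import ipaddress

MAX_NAME_SERVERS = 3  # the appliance supports primary, secondary, tertiary

def validate_dns_servers(addresses):
    """Validate an ordered list of up to three DNS server IP addresses.

    Order matters: index 0 is primary, 1 secondary, 2 tertiary.
    Raises ValueError for too many entries or a malformed address.
    """
    if len(addresses) > MAX_NAME_SERVERS:
        raise ValueError(f"at most {MAX_NAME_SERVERS} name servers allowed")
    return [str(ipaddress.ip_address(a)) for a in addresses]

print(validate_dns_servers(["8.8.8.8", "1.1.1.1"]))  # ['8.8.8.8', '1.1.1.1']
```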
AQM stands for Audio Quality Measurement. Audio means voice and music, but up till now there hasn't been much music in the plain old telephony systems. That might change with the new audio codec called EVS, Enhanced Voice Services. Among other things, this new codec enables you to transmit music with good quality from, for example, a concert. One use case is to hold up your phone at the concert and let your friend listen in to the music. This is probably not a killer application and of course not the driving force behind EVS... The driving force is to provide better quality than OTT services. As one example, in order to compete for the ear of the consumer, EVS needs to spank the Opus codec used in Skype. Simulation tests are being performed, and here are two examples from Ericsson and Qualcomm. But simulations are never enough. We see a large demand for comparisons between the service provided by operators and the OTT services. We have test activities ongoing with Ericsson and VFE for comparing Skype to AMR WB. I guess this will be even more important when EVS is released.

Audio Quality — a time journey

Here are two strong points for the EVS codec:
- It is designed for music, enabling a richer experience with better audio quality.
- It is designed for packet-based voice, especially for VoLTE networks, and provides higher quality using no more capacity.

How about we listen to some music simulated on the common types of codecs? It is a small time journey from the early nineties to the future: HR → FR → EFR → AMR WB 12.65 → EVS 13.2 → EVS 24.4

For VoLTE, the jitter buffer and time scaling (called PWSOLA in this sketch) were not standardized. This means that device vendors could implement them differently, and thus the same radio environment would give different speech quality. With EVS, all of the audio transmission is standardized. Time scaling is used for increasing the tolerance to jitter.
When delay increases in the network, the audio can be played out at a slower pace, avoiding the jitter buffer going empty. When the delay decreases, the playout speed is increased. You can listen to some jitter simulations with EVS that show how it can sound when the jitter is really bad.

Improved voice quality taking no more capacity

For each codec generation the frequency bandwidth has basically doubled while keeping the transmission bit rate the same. The commonly used AMR NB 12.2 gives 3400 Hz, which sounds a bit like talking in a can. AMR WB gives up to 7000 Hz and is a big improvement, while EVS handles 14000 Hz, with stereo planned. Even full-band audio reaching up to 20000 Hz can be used at 16.4 kbit/s.

AMR NB 12.2 kbit/s
AMR WB 12.65 kbit/s
EVS SWB 13.2 kbit/s

EVS was standardized as part of Rel 12, with work items in Rel 13, which means it's available now, and if you scan the news you'll see operators announcing EVS support in their networks, e.g. Vodafone Germany and T-Mobile US, with more to come as operators feel comfortable moving this out from the labs into a commercial network.

For mobile operators and network vendors looking to deploy and optimize the user experience with this new codec, it's important to secure test & measurement solutions capable of handling audio sample rates up to 48 kHz and generating KPIs such as MOS and SIT (Speech Interruption Time). This will allow a comparison between OTT, CS and emerging VoLTE services, ensuring a competitive and superior VoLTE service deployment.
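The time-scaling behavior described above (slower playout when the jitter buffer drains, faster when it fills) can be sketched as a toy controller. The thresholds and scale factors below are invented for illustration; they are not from the EVS or PWSOLA specifications.

```python
# Toy adaptive-playout sketch: pick a playout-speed factor from the
# current jitter-buffer depth. All constants are illustrative.
TARGET_MS = 60      # desired buffer depth in milliseconds
DEADBAND_MS = 20    # no scaling while within target ± deadband

def playout_scale(buffer_ms):
    """Return a playout-speed factor: <1 slows down, >1 speeds up."""
    if buffer_ms < TARGET_MS - DEADBAND_MS:
        return 0.95   # buffer draining: play slower to avoid underrun
    if buffer_ms > TARGET_MS + DEADBAND_MS:
        return 1.05   # buffer overfull: play faster to shed delay
    return 1.0        # depth is healthy: play at natural speed

for depth in (25, 60, 110):
    print(depth, playout_scale(depth))
```

The standardized EVS receiver makes choices like these deterministic across vendors, which is exactly why the article notes that the same radio conditions no longer yield different speech quality on different devices.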
IBM is teaming up with The Netherlands Institute for Radio Astronomy, otherwise known as "Astron," on a five-year project to look into very fast, low-power exascale computer systems for the world's largest and most sensitive radio telescope. The project, to be called "DOME," will cost about US$44 million. It will investigate emerging technologies for large-scale and efficient exascale computing, data transport and storage processes, as well as streaming analytics that will be required to read, store and analyze all the raw data that will be collected daily by the Square Kilometre Array, as the radio telescope is called. The SKA will gather several exabytes of raw data daily. An exabyte is 1 million terabytes.

"Astron will be focused on the antenna; IBM will look at exascale computing, in which we need to address energy consumption, cost and space," Christopher Sciacca, spokesperson for IBM Research Zurich, told TechNewsWorld.

"IBM and the SKA are investing where it matters — driving down costs of closely coupled processing by driving up the efficiency of moving and storing data," Joshua Bloom, an associate professor at U.C. Berkeley's astronomy department, told TechNewsWorld.

Let's Talk Techie

The DOME project is a preliminary phase in the SKA project, and the next five years will see IBM and Astron "building a technological roadmap based on technologies that we already have in development, such as 3D chip stacking and phase-change memory," Sciacca said.

The SKA will consist of millions of antennae spanning an area of more than 3,000 km — approximately the width of the continental United States — forming a collection area equivalent to one square kilometer. It will be 50 times more sensitive than any radio device and more than 10,000 times faster than today's instruments.

IBM is considering nanophotonics to transport the data, IBM Zurich's Sciacca said. Also, it needs to determine whether the processing will be done on the antenna or in a data center.
Nanophotonics is the study of the behavior of light on the nanometer scale. It can make for highly power-efficient devices for engineering applications.

"What's so inspiring about the project is that IBM is looking at technology that doesn't yet exist," Darren Hayes, CIS program chair at Pace University, told TechNewsWorld. "The project will also enable IBM to showcase their development of 3D stacked chips that will be used to handle the massive processing requirements. If successful, the project could propel IBM to the forefront of 3D microchips."

What's a 3D Chip?

3D chip stacking simply means that chip components are mounted vertically to achieve greater density and higher performance. Stacking chips "will reduce energy because the data no longer travels 10 centimeters, but less than a millimeter," IBM Zurich's Sciacca said. "Ninety-eight percent of the energy in a data center is used for moving data, and 2 percent makes up the computations."

Stacking integrated circuits is something companies like Irvine Sensors have been doing for about 20 years or so. Irvine is working on the project of putting whole systems — computers, data recorders and signal processors — in cubes, which is the next step. That project is sponsored by the Defense Advanced Research Projects Agency (DARPA), the United States Army and the U.S. Missile Defense Agency.

Back in 2007, IBM built chip stacks connected by metal links formed by drilling through each die's silicon and filling the resulting holes with metal, replacing the wires normally used. These reduced the distance signals needed to travel between dies by a factor of 1,000 and enabled a hundredfold increase in the number of links that could be established between dies, the company claimed.

Stacked, or 3D, chips can generate quite a bit of heat.
IBM has stated they have an aggregated heat dissipation of nearly 1 kW, which is 10 times greater than the heat generated by a hotplate, in an area measuring only four square centimeters and 1 mm thick. So in 2008, researchers at IBM Zurich and the Fraunhofer Institute in Berlin, Germany, came up with the concept of running water through the stacks to cool them.

The data produced by the SKA will "be particularly challenging because it needs to be cross-correlated with itself, so the very nature of the initial workflow does not lend itself to embarrassing parallelism," UC Berkeley's Bloom pointed out.

Hadoop and "geographically sharded databases work well because some forms of computation can be done locally, then aggregated and summarized centrally," Bloom added. "SKA data needs to be collocated and analyzed, at some level, as a single entity."
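Two quick back-of-the-envelope calculations show why Bloom's point bites: a daily volume of one exabyte already implies a sustained ingest rate above 11 TB/s, and the cross-correlation workload grows quadratically with antenna count. A small Python sketch (illustrative figures only):

```python
EXABYTE = 10**18          # bytes (decimal definition: 1 million terabytes)
SECONDS_PER_DAY = 86_400

def sustained_rate_tb_per_s(exabytes_per_day):
    """Average ingest rate, in TB/s, implied by a daily data volume."""
    return exabytes_per_day * EXABYTE / SECONDS_PER_DAY / 10**12

def baselines(n_antennas):
    """Antenna pairs a correlator must cross-correlate: n*(n-1)/2."""
    return n_antennas * (n_antennas - 1) // 2

print(round(sustained_rate_tb_per_s(1), 1))  # 11.6 TB/s for just 1 EB/day
print(baselines(1_000))                      # 499500 pairs
print(baselines(2_000))                      # 1999000 pairs; ~4x for 2x antennas
```

The quadratic growth of pairwise correlation is why SKA data resists the shard-locally-then-aggregate pattern Bloom describes for Hadoop-style workloads.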
How Does Artificial Intelligence Disrupt the Manufacturing Industry?

Let's have a closer look at how exactly AI transforms contemporary manufacturing, and how entrepreneurs may benefit from adopting this technology.

Optimizing Supply Chain Management with Artificial Intelligence

The key objective of manufacturing supply chains is to have a ready product available in the appropriate place at a specific time. Today this objective is challenged by numerous factors like market volatility, fierce competition, ever-changing regulations, and geopolitical issues. However, the major challenge is growing customer expectations. Modern customers demand a hyper-personalized experience, with the possibility to purchase, receive and return products anywhere at their convenience. As a result, manufacturers struggle to manage efficient execution within the supply chain while simultaneously increasing commercial viability.

Entrepreneurs may resolve these issues by leveraging artificial intelligence for their manufacturing businesses. One of the key areas of manufacturing where AI is currently implemented is supply chain management, since AI improves transparency, removes operational friction, accelerates decision-making and produces accurate demand forecasting.

Rapid AI-Based Decision-Making and Management

Artificial intelligence is able to process large volumes of data and perform real-time analysis that optimizes the work of various supply chain operations. Having instant access to supply chain systems data, employees may evaluate and analyze it faster. This accelerates the decision-making process and benefits the entire supply chain management model. AI is also able to save the exchanged sales and operational planning data, resulting in more efficient and collaborative supply management. Most importantly, enhanced data visibility creates better insights and benefits the understanding of customer demands and requirements.
AI and Advanced Demand Forecasting

Precise demand forecasting is one of the most significant aspects of the manufacturing business. If customer demand is measured inefficiently, this leads either to product shortage or to overstocking, which results in considerable financial losses. The implementation of artificial intelligence may resolve this issue. AI analyzes customer insights, customer loyalty, general market specifics and a number of other factors to generate accurate demand predictions. These forecasts are centered in five major areas:

- Trade promotions. This type of AI-based forecasting analyzes the multitude of interrelations between the elements of trade promotion and historical data to create clear-cut demand predictions for future campaigns. AI allows identifying specific trade promotion types that are capable of uplifting sales and achieving maximum ROI.
- Product launching. AI is able to process the results of past launches, select the most preferable aspects of the product, outline them, and precisely predict future customer demands and preferences.
- Social media. Artificial intelligence can conduct a thorough analysis of social media comments and feedback, and provide relevant brand-related insights that are beneficial for supply chain planning.
- Seasonality. Artificial intelligence can efficiently track and evaluate seasonality trends and patterns within various time periods, which is crucial for future demand forecasting.
- Weather conditions. Demand forecasting depends on geographical factors. AI enables manufacturers to analyze weather parameters and determine the demand for a certain product during a specific time span.

Modernizing Procurement with AI Chatbots

An AI-powered chatbot has the potential to become the right-hand man for procurement specialists by assisting them in daily activities and providing better services for both suppliers and internal customers.
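As a toy illustration of the seasonality-driven forecasting described above, the sketch below predicts the next season from historical same-season averages. The quarterly sales figures are made up, and real AI forecasting systems blend many more signals (promotions, launches, social media, weather) than this seasonal-naive baseline.

```python
# Toy seasonal-naive forecast: predict each period of the next season
# from the average of the matching periods in past seasons.
def seasonal_forecast(history, season_length):
    """Forecast one full season ahead by averaging matching past periods."""
    forecast = []
    for i in range(season_length):
        same_season = history[i::season_length]  # e.g. all past Q1 values
        forecast.append(sum(same_season) / len(same_season))
    return forecast

# Two years of quarterly unit sales with a consistent Q3 peak (illustrative).
sales = [100, 180, 260, 120,
         110, 190, 270, 130]
print(seasonal_forecast(sales, season_length=4))  # [105.0, 185.0, 265.0, 125.0]
```

Even this baseline already prevents the worst shortage/overstock errors a flat average would make, which is why seasonality is one of the five forecast areas listed above.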
AI-powered procurement software can significantly improve contract management by highlighting key data and distinguishing possible issues. Moreover, chatbots may efficiently process both internal and external queries relating to invoices or order purchasing. With AI chatbot assistance, procurement experts won't have to look through endless databases for the information they need. This will significantly accelerate the work of the procurement branch.

Enhancing Quality Control with Artificial Intelligence

The manufacturing industry is fraught with errors and defects. The main challenge is to recognize the early signs of potential production failures as quickly as possible in order to save resources and sustain operational efficiency. Unfortunately, manual tracking of possible faults is inefficient and error-prone. By leveraging artificial intelligence in the enterprise, manufacturers can reduce the probability of errors and significantly improve many aspects of quality control. In essence, artificial intelligence in manufacturing transforms assembly lines into interconnected networks, guided by a set of specific parameters and algorithms. AI identifies the slightest deviations from the initial parameters in real time and instantly sends a notification about the occurring faults. This improves the response time and allows eliminating possible failures much faster in comparison with manual quality control. Moreover, advanced AI systems may incorporate convolutional neural networks that further accelerate data processing and provide exceptional benefits for quality control.

Efficient AI-Based Predictive Maintenance

Not only is AI able to detect production failures in real time, but it may also be used as a powerful tool for precise predictive maintenance.
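The real-time deviation detection described above can be sketched as a simple statistical check: flag any sensor reading more than three standard deviations from a reference baseline. The readings and threshold below are invented, and production systems use far richer models (including the convolutional networks mentioned above) than a z-score test.

```python
import statistics

def deviations(readings, baseline, z_threshold=3.0):
    """Return indices of readings that deviate from the baseline distribution."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return [i for i, r in enumerate(readings)
            if abs(r - mean) > z_threshold * stdev]

# Illustrative sensor values: a stable baseline, then a live stream
# containing one out-of-spec reading.
baseline = [10.0, 10.2, 9.9, 10.1, 10.0, 9.8]
readings = [10.1, 9.9, 12.5, 10.0]
print(deviations(readings, baseline))  # [2]
```

Each flagged index would trigger the real-time notification the article describes, shrinking the gap between a fault occurring and an operator reacting.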
By analyzing various production attributes and parameters during specific timeframes, AI detects possible production malfunctions that may cause product quality issues in the future. By collecting and processing information from different sources, like previous maintenance records, current sensor data or even meteorological reports, AI creates accurate forecasts as to when the machinery must be repaired. The insights generated by AI analytics are critical for the business of any manufacturer, because they allow diminishing expensive unplanned downtime while sparing a considerable amount of resources.

Smart Production and Design with Artificial Intelligence

Digital Twin Technology in Manufacturing

Digital Twin is a technology powered by the Internet of Things (IoT) and artificial intelligence, creating unparalleled opportunities for manufacturers. Implementation of Digital Twins allows recreating an accurate virtual representation of a physical product during any period of its lifecycle. The technology lets the user view the manufactured products in terms of design, embedded software, flow mechanics, and other important production aspects. These digital simulations allow inspecting how well the product is made for manufacturing, and creating a feedback loop showing all aspects of the developed product that may be optimized. The operations stage becomes swifter due to the digital twin technology, because different kinds of adaptations and improvements may be applied in real time. Most importantly, precise digital simulations substitute time-consuming and costly prototyping, which allows finding optimal solutions for product redesign much faster, while spending fewer resources.

For more information, read our expert's article Manufacturers Aspire to Digital Twinning and Virtual Commissioning on ReadWrite.
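The repair-time forecasting described in the predictive-maintenance section can be sketched as a linear trend fitted to a wear metric and extrapolated to a failure threshold. All numbers below are illustrative; real systems combine many signals (maintenance records, sensor fusion, even weather) rather than a single metric.

```python
# Toy remaining-useful-life estimate via least-squares linear trend.
def hours_until_threshold(samples, threshold):
    """Estimate hours until a linearly trending metric crosses a threshold.

    `samples` are (hour, value) pairs; returns None if the trend is flat
    or decreasing (no crossing expected).
    """
    n = len(samples)
    mean_x = sum(x for x, _ in samples) / n
    mean_y = sum(y for _, y in samples) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in samples)
             / sum((x - mean_x) ** 2 for x, _ in samples))
    if slope <= 0:
        return None
    intercept = mean_y - slope * mean_x
    last_hour = samples[-1][0]
    return (threshold - intercept) / slope - last_hour

# Vibration level rising ~0.5 units per 100 h; alarm threshold is 6.0.
wear = [(0, 2.0), (100, 2.5), (200, 3.0), (300, 3.5)]
print(round(hours_until_threshold(wear, threshold=6.0)))  # 500
```

A maintenance window scheduled comfortably inside that horizon is exactly how the "diminish unplanned downtime" benefit above is realized in practice.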
Jan Keil, VP of Marketing at Infopulse, explains the key differences between a Digital Twin, a 3D CAD model and a Digital Thread, also covering Digital Twin types, use cases and the technologies that influence its development.

Artificial Intelligence and Generative Design

Generative design is AI-powered software that processes specific product parameters and generates a variety of design options. It is important to understand that it does not improve a pre-existing design; it creates a wide range of new possible designs. The technology analyzes a brief that includes different criteria, e.g., material type, weight, size, budget constraints and manufacturing methods such as CNC machining, lathes and injection molding. The AI algorithms generate hundreds of feasible design options, from which the most suitable solution can be selected. The chosen design can then be tested across various manufacturing scenarios and conditions to achieve the highest product quality. A good example is Autodesk's Dreamcatcher, an AI system that accepts a wide range of criteria and quickly generates unique design options. AI-based generative design offers a multitude of benefits for the manufacturing industry. By rapidly offering numerous design options, it improves enterprise productivity and frees designers to focus on other primary tasks. With newly developed design combinations, manufacturers can significantly reduce production costs, decrease material waste, prevent costly reworks and meet ever-growing customer requirements.

AI and the "Missing Middle" in Manufacturing

Most people think of artificial intelligence and the human workforce in terms of competition rather than collaboration. The leading adopters of AI in the manufacturing industry, however, do not hold this dualistic view of smart machines competing against people; instead, they look to the middle ground, an area of human-machine collaboration known as the "missing middle".
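Before moving on, the generative design workflow above (generate many candidates, filter by engineering constraints, rank by an objective) can be sketched as a toy random search. Everything here is illustrative: the beam parameters, the stiffness proxy and the constraint value are invented for the example, and real generative design tools use far more sophisticated solvers.

```python
import random

def generate_designs(n_candidates, seed=0):
    """Randomly sample rectangular beam cross-sections (width, height in mm),
    keep only those meeting a minimum bending stiffness, and rank the rest by
    material use -- a toy version of the generate-filter-select loop behind
    generative design. Constraint and ranges are invented for illustration."""
    rng = random.Random(seed)
    feasible = []
    for _ in range(n_candidates):
        width = rng.uniform(10, 100)           # mm
        height = rng.uniform(10, 100)          # mm
        stiffness = width * height ** 3 / 12   # second moment of area, mm^4
        mass = width * height                  # proxy: cross-section area, mm^2
        if stiffness >= 2.0e6:                 # illustrative engineering constraint
            feasible.append((mass, width, height))
    feasible.sort()                            # lightest feasible designs first
    return feasible

# Example: generate 500 candidates and keep the three lightest feasible ones.
best_three = generate_designs(500)[:3]
```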
This middle ground can shape outcomes that neither humans nor machines can create alone. These outcomes are new types of jobs and advanced collaborative business models that will disrupt the current manufacturing sector, making it more efficient while improving the safety of the working environment. Future job opportunities for the AI-human partnership include:
- Trainers. AI systems must be taught how to perform the tasks they were designed for, and how to interact properly with human employees. Trainers will manage the learning of these AI systems.
- Explainers. AI draws conclusions through complex algorithms that are incomprehensible to laypeople. Explainers will clarify the necessary work details for non-experts.
- Sustainers. These employees will ensure the seamless, safe and efficient operation of AI systems.
Artificial intelligence in manufacturing will also augment the human workforce in three ways. AI systems will amplify human cognitive capabilities, interact effectively with customers and employees, and embody human skills, enhancing the physical capacities of manufacturing workers:
- Amplification. This form of AI assistance delivers real-time insights, empowering analytics and decision-making.
- Interaction. Modern AI agents like Microsoft's Cortana or SEB's virtual assistant Aida can facilitate both internal and external communications using natural language processing. Such AI systems can interact with thousands of customers simultaneously, while internal departments also use them to execute different tasks.
- Embodiment. AI systems may be embodied in robots with advanced sensors. Such "cobots" will work alongside humans and create a safer, more efficient and more intelligent manufacturing environment.

Shaping Sustainable Manufacturing with Artificial Intelligence

The manufacturing sector is responsible for a significant share of worldwide energy consumption.
According to the International Energy Agency, energy demand keeps rising. Yet a recent report by Microsoft in association with PwC shows that artificial intelligence has enormous potential to benefit environmental sustainability and pave the way to a more eco-friendly, energy-efficient manufacturing sector. Artificial intelligence can solve a number of issues critical to sustainable manufacturing, including excessive use of certain materials, redundant scrap waste, inefficient supply chain management and logistics, and unequal distribution of energy resources. Most importantly, manufacturers do not have to invest in numerous separate solutions, because AI alone can address all of these difficulties. AI can analyze production data and accurately predict the expected output, eliminating excessive material use and waste. Additionally, AI algorithms can make precise recommendations that balance energy use. Lastly, artificial intelligence can improve supply chain management and logistics through demand forecasting, better communications and real-time decision-making.

Artificial intelligence is an advanced technology with the potential to fundamentally transform the manufacturing industry. Leveraging AI can create an efficient and transparent supply chain with significantly reduced operational friction. AI is also the backbone of efficient quality control systems that instantly identify even the slightest deviations and warn of possible failures in advance. By implementing AI-based technologies such as digital twins and generative design, manufacturers can avoid expensive product prototyping and generate hundreds of feasible design options. Most importantly, artificial intelligence will soon create unparalleled working opportunities within the "missing middle" and forge a path toward smart, efficient and sustainable manufacturing.
Infopulse has solid experience in developing advanced AI and data science solutions for a broad range of business domains. Solutions we have delivered include financial forecasting tools, sophisticated virtual assistants, chatbots and effective support solutions. Our team would be glad to advise you on adopting AI in your manufacturing enterprise. Feel free to contact us for more details.
This topic explains major concepts related to using SQL Server with the Delphix Engine. For explanations of general Delphix concepts, see Glossary of Major Delphix Concepts.

A computer system of some kind, either physical or virtual, that has an operating system and software on it. Some synonyms are location-dependent, as noted below, while others apply to all hosting locations: physical systems in a local datacenter, virtual systems in VMware or another hypervisor, and virtual systems in a cloud.

One or more servers that share the same data and are capable of being connected to as a highly available whole. A metaphor for this is "many doors leading to the same room": there are multiple servers leading to the same database(s) for redundancy and horizontal scaling.

dSource for SQL Server
A dSource is the Delphix-side representation of a source database. Strictly speaking, it is a backup of the source database that has been compressed and stored inside Delphix, and is kept up to date over time. To add a SQL Server dSource, a Staging Environment and a Source Environment must first be added. A backup then needs to be stored on an SMB location, along with differential backups and transaction log backups if required.

A Windows service that enables communication between the Delphix Engine and the Windows target environment where it is installed.

Domain
Collective name for data objects, such as dSources, virtual databases (VDBs), users, groups, and related policies and resources.

A Delphix environment is a pointer to a server or cluster, and the subsequent discovery of all database instances and databases. The type of environment depends on the type of server or cluster being attached and defined.

VDB for SQL Server
In a normal cloning operation, SQL Server data files consisting of blocks are copied in full from one system to another using some form of backup and recovery. With Delphix, the data already exists in compressed form as a dSource.
Virtual SQL Server data files are created from snapshots and presented to the Target Environment via iSCSI. No data blocks are actually copied, allowing for near-instant cloning.

SQL Server Concepts in Delphix

Microsoft SQL Server
A relational database management system (RDBMS) traditionally hosted on the Microsoft Windows platform.

Database discovery is initiated during the configuration of a Source Environment. Delphix uses a Target Environment as a proxy to find SQL Server instances and databases on the Source Database Server. Once a database has been discovered, it can be linked as a dSource. During initial discovery and environment refreshes, Delphix pushes a fresh copy of the toolkit to each host environment. The toolkit includes a JRE (Java Runtime Environment), Delphix .jar files, the hostchecker utility, scripts for managing the environment and/or VDBs, and Delphix Connector log files. Delphix then executes some of these scripts to discover information about the database instance and databases. In some environments, the scripts are customized to fit your particular server configuration.

A server with SQL Server installed and a SQL Server instance, capable of hosting one or more databases.

The software (set of memory structures) used to manipulate and manage data in the database. The term typically describes a complete database environment, including the database software, user configuration, databases, and other functionality associated with the database software.

SQL Server Instance
An instance (see above) using the SQL Server software: the running SQL Server software on a Windows Server, capable of opening and running SQL Server databases.

SQL Server Managed Backups
Backups taken natively via SQL Server and output to a location on disk. This location must be shared via SMB with the Staging Server.

Standalone Windows Server
A Windows Server that is not part of a cluster.
If this server contains a SQL Server instance, it can be connected to Delphix as a Source, Staging, or Target, depending upon its purpose.

Validated Sync Environment
The process used by Delphix to sync changes from a source database into Delphix. Based on the dSource's Validated Sync mode, Delphix synchronizes the dSource over time using full backups, differential backups, and transaction log backups.

Windows Target Environment
A Delphix environment pointing to a Windows Target Database Server. This is the type of environment to which you provision VDBs. It can also be the basis for a staging environment or connector environment, with possible additional prerequisites.

Concepts about the Source

A SQL Server database that is linked as a Delphix dSource.

Source database server
A database server containing one or more databases that will be linked to Delphix.

Source Environment for SQL Server
A Delphix environment pointing to a Windows Source Database Server.

Source Host for SQL Server
A Windows source host on which the source database (within a SQL Server instance) resides.

Concepts about the Target

Target Database Server for SQL Server
A database server that meets all the requirements under the Target section of Windows Database Server Requirements. See Windows Target Environment for more information.

Target Server, Target Server for SQL Server
Target Environment for SQL Server

Concepts about Staging

Staging database for SQL Server
A database that is automatically configured by Delphix on a Staging Server while consuming backups from a source database.

Staging environment for SQL Server
A Delphix environment pointing to a Windows Staging Server. Required for the ongoing population of backup data from your source database into the Delphix Engine. This server behaves much like a VDB target environment and must run the same version of SQL Server as your source.
Windows Staging Server
A database server configured as a standalone Target Server, additionally configured to access and consume backups from a source database. This server should have the same version of SQL Server installed as the Source Database Server.

Staging Target Database Servers
See Windows Staging Server.

Staging Target for SQL Server

Types of Users

Source SQL Server database user
A SQL Server database user configured with the Delphix prerequisite permissions on a source database server.

Target SQL Server database user
A SQL Server database user configured with the Delphix prerequisite permissions on a Target Database Server.

Staging Target SQL Server database user
A SQL Server database user configured with the Delphix prerequisite permissions on a Staging Database Server.

SQL Server Data Operations in the Delphix Engine

The terms below describe actions you can perform on a Delphix Engine. For a glossary of data operations for Delphix in general, including refreshing a VDB from a dSource and rewinding a VDB, see the Data Operations section of the general glossary.

Configure the source environment
The act of configuring a Delphix environment to point to a source database server. Prior to performing this step, you must have one target environment configured to act as a connector target.

Configure the target environment using a Validated Sync environment
The Validated Sync environment is required for the ongoing population of backup data from your source database into the Delphix Engine. This server behaves much like a VDB Target Environment. The Validated Sync environment must run the same version of MS SQL Server as your source. Synonymous with the staging target for SQL Server.

Link a SQL Server dSource
The action of defining the connection between a SQL Server database, from which changes will be captured, and the Delphix dSource.
This action includes the initial full restore of the SQL Server database within the Delphix dSource.

Manage SQL Server environments
The act of configuring or managing the Delphix environments that point to SQL Server database servers or clusters.

Refresh SQL Server environment
Updating an environment on which the underlying software has been updated. After changes are made to an environment that was already set up in the Delphix Management application (such as installing a new database home, creating a new database, or adding a new listener), it may be necessary to refresh the environment to reflect these changes. During environment discovery and environment refreshes, Delphix pushes a fresh copy of the toolkit to each host environment. The toolkit includes a JRE (Java Runtime Environment), Delphix .jar files, the hostchecker utility, scripts for managing the environment and/or VDBs, and Delphix Connector log files. Delphix then executes some of these scripts to discover information about the objects in the environment (such as where the databases are installed, the database names, and the information required to connect to them). In some environments (Windows in particular), the scripts are customized to fit the customer's environment.

SQL Server and Delphix Self-Service (Jet Stream)

Refresh a SQL Server VDB from parent VDB in Delphix Self-Service
Refresh is faster because it updates the data (on the active branch) of a user's data container. The user will then have the latest data from the sources of the data template from which the container was provisioned.

Reset SQL Server VDB in Delphix Self-Service
Reset in Delphix Self-Service is a simplified version of Restore, built to support an "undo." It allows a user to reset the state of their application container to the last operation. The complete set of operations that Reset can return to is: CREATE_BOOKMARK, CREATE_BRANCH, REFRESH, RESET, RESTORE, and UNDO.
This can be useful for testing workflows where, after each test, users want to reset the state of their environment.

Similarities and Differences between a SQL Server Database and a Delphix VDB

A VDB:
- is read/write capable
- has its own transaction logs
- is accessible via a SQL Server instance
- can be upgraded to a higher release of SQL Server

Connections to a VDB can be made with ADO.NET, OLE DB, and all other standard methods. You can rewind a VDB to a prior point in time without the need for special SQL Server features. You can provision a VDB from a VDB. You can refresh VDBs at any time from their source database with a few clicks. You can analyze I/O-related bottlenecks by drilling into Delphix's network metrics. You can take a snapshot of a VDB at any point without impacting performance or usability, effectively bookmarking key points in a VDB's timeline.
Bayesian spam filtering is based on Bayes' rule, a statistical theorem that gives you the probability of an event. In Bayesian filtering it is used to give you the probability that a certain email is spam. The rule is named after the statistician Rev. Thomas Bayes, who provided an equation that essentially allows new information to update the outcome of a probability calculation. It is also called the Bayes-Price rule after the mathematician Richard Price, who recognized the importance of the theorem, made some corrections to Bayes' work and put the rule to use. When dealing with spam, the theorem is used to calculate the probability that a certain message is spam, based on the words in the title and body, learning from messages that were identified as spam and messages that were identified as not spam (sometimes called ham). The objective of the learning ability is to reduce the number of false positives. As annoying as it might be to receive a spam message, it is worse to miss a message from a customer just because they used a word that triggered the filter. Other methods often use simple scoring filters: if a message contains specific words, a few points are added to that message's score, and when the score exceeds a certain threshold, the message is regarded as spam. Not only is this a very arbitrary method, it is also a given that it will result in spammers changing their wording. Take, for example, "Viagra", a word that will surely give you a high score. As soon as spammers found that out, they switched to variations like "V!agra" and so on: a cat-and-mouse game that will keep you busy creating new rules. If the filter allows for individual input, its precision can be enhanced on a per-user basis. Different users may attract specific forms of spam based on their online activities, and what is spam to one person may be a "must-read" newsletter to the next.
Every time the user confirms or denies that a message is spam, the filtering process can calculate a more refined probability for the next occasion. A downside of Bayesian filtering in cases of more or less targeted spam is that spammers will start using words or whole pieces of text that lower the score. With prolonged use, these words can become associated with spam, which is called poisoning. A few methods used to bypass "bad word" filtering:
- The use of images to replace words that are known to raise the score.
- Deliberate misspelling, as mentioned earlier.
- Using homograph letters, which are characters from other character sets that look similar to letters in the message's character set. For example, the Greek Omicron looks exactly the same as an "O" but has a different character encoding.
Bayesian filtering is a method of spam filtering with a learning ability, although a limited one. Knowing how spam filters work will make it clearer how some messages get through, and how you can make your own mails less likely to get caught in a spam filter.
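The word-probability learning described above can be condensed into a minimal naive Bayes filter. This is a textbook sketch, not the algorithm of any particular product: it assumes word independence, uses Laplace smoothing, and the training messages below are invented for illustration.

```python
import math
from collections import Counter

class BayesFilter:
    """A minimal word-based Bayesian filter: learns word frequencies from
    labelled messages, then scores new messages with Bayes' rule under the
    naive independence assumption, with Laplace (add-one) smoothing."""

    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.msg_counts = {"spam": 0, "ham": 0}

    def train(self, text, label):
        """Record one message confirmed as 'spam' or 'ham'."""
        self.msg_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def spam_probability(self, text):
        """Return P(spam | words) as a value in [0, 1]."""
        vocab = len(set(self.word_counts["spam"]) | set(self.word_counts["ham"]))
        # Start from the prior odds, then multiply in per-word likelihood ratios
        # (sums of logs, to avoid floating-point underflow).
        log_odds = math.log(self.msg_counts["spam"] / self.msg_counts["ham"])
        for word in text.lower().split():
            p_spam = ((self.word_counts["spam"][word] + 1)
                      / (sum(self.word_counts["spam"].values()) + vocab))
            p_ham = ((self.word_counts["ham"][word] + 1)
                     / (sum(self.word_counts["ham"].values()) + vocab))
            log_odds += math.log(p_spam / p_ham)
        return 1 / (1 + math.exp(-log_odds))

# Illustrative training data; a real filter learns from thousands of messages.
f = BayesFilter()
f.train("cheap viagra win money now", "spam")
f.train("win free money fast", "spam")
f.train("meeting agenda for monday", "ham")
f.train("lunch with the team monday", "ham")
```

Each confirm/deny action from the user becomes another `train` call, which is exactly the per-user refinement the article describes.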
The creator of the venerable Linpack benchmark says its value as a proxy for real-world performance "has become very low," so he's created an alternative. The TOP500 list of the fastest supercomputers in the world has been a staple of computing for over 20 years. Universities, governments and even private companies compete to build the fastest supercomputers and earn a place on the list. How long a supercomputer can stay on the list is faithfully reported by technology media outlets around the world. To rate one supercomputer against another, the TOP500 list uses the Linpack benchmark, which was created by Jack Dongarra, a professor at the University of Tennessee. The benchmark has worked well, but there are those who say that it no longer accurately represents true supercomputing power. The scientists working on supercomputer development at the Energy Department's Oak Ridge National Laboratory said much the same thing when I interviewed them earlier this year about the United States' supercomputer development plans. Now, Dongarra shares that sentiment, too. In a post on the University of Tennessee blog, he explains why Linpack no longer works as well as it once did. "We have reached a point where designing a system for good Linpack performance can actually lead to design choices that are wrong for the real application mix, or add unnecessary components or complexity to the system," Dongarra said, according to the blog. "The Linpack benchmark is an incredibly successful metric for the high-performance computing community," Dongarra said. "Yet the relevance of the Linpack as a proxy for real application performance has become very low, creating a need for an alternative." The alternative is a new benchmark he and Sandia National Laboratories' Michael Heroux are developing called the High Performance Conjugate Gradient.
Dongarra explained that the new benchmark measures how supercomputers are able to drive applications through their increasingly diverse makeup of CPU and GPU chips, instead of just taking a raw power snapshot. Data Center Knowledge reports that the new HPCG won't replace Linpack, but will instead be used alongside it. The two together will determine how supercomputers place on the new TOP500 lists. Given how competitive the agencies, governments and companies whose computers make the list seem to be, it's a sure bet that using two benchmarks will be controversial, and will give people ammunition to argue that they should be ranked higher. But having benchmarked everyday nonsupercomputers for the past 15 years in the GCN Lab, I can say that benchmark technology has certainly changed. Where we used to use a standard benchmark that just looked at raw performance, we now make use of the Passmark Performance Benchmarks, which take a much more well-rounded look at how a system is doing, component by component, while also considering how they are linked. It makes sense that supercomputer benchmarks would also need some updating as the computing landscape changes. On another front, a group of supercomputing experts have started the Graph 500, to measure how well the machines handle big data, as well as the Green 500 to measure their energy efficiency. With HPCG added to the test for overall performance, it will be interesting to see if, or by how much, the current rankings change once HPCG is factored in with Linpack.
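HPCG is built around a preconditioned conjugate gradient solve on a large sparse system, which stresses memory access patterns rather than the raw floating-point throughput Linpack rewards. As a rough illustration of the kernel being measured (a dense, unpreconditioned toy version, nothing like the benchmark's actual implementation):

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Solve A x = b for a symmetric positive-definite matrix A with the
    conjugate gradient method. A is a list of row lists, b a list of floats.
    Dense toy version for illustration; HPCG uses sparse, preconditioned CG."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual b - A x, with x = 0 initially
    p = r[:]                      # search direction
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new ** 0.5 < tol:   # residual small enough: converged
            break
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

# Example: a small SPD system with exact solution (1/11, 7/11).
solution = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```

The sparse matrix-vector products at the heart of this loop are bandwidth-bound, which is why HPCG and Linpack can rank the same machine very differently.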
What is domain spoofing? Domain spoofing is a tactic used by cybercriminals to commit scams and frauds on the internet. It occurs when an attacker tries to impersonate a company, an employee or someone the victim knows, in order to confuse and persuade them. How? By forging a domain. Domain spoofing is directly linked to social engineering, spam campaigns, phishing and spear phishing scams, such as Business Email Compromise (BEC) and Email Account Compromise (EAC). In practice, domain spoofing is used by hackers in different ways. It could be, for example, simply adding a letter to an email address, or creating a fake website with an address very similar to the legitimate one. In the day-to-day routine, these small changes end up being overlooked by many people. But the success of a scam doesn't depend only on that, of course. Cybercriminals are often smart people, and scams involving domain spoofing are carefully prepared. The entire visual identity of a company is usually forged in fake emails and websites, including the logo, colors and any other visual detail that imitates the original and official one.

Types of domain spoofing

We can say that there are two main categories of domain spoofing, each with some variations.

1. Email spoofing

Email spoofing is the act of forging email addresses. This can happen in basically two ways: first, when an attacker hacks an email account and uses it to commit fraud; second, when the attacker creates a similar email address or falsifies some part of an email to imply that the message is legitimate. The purpose of email spoofing is to gain the recipient's trust. It heavily involves social engineering, spam campaigns, and phishing and spear phishing scams.

2. Website spoofing

Website spoofing is about creating a fake site address. The goal remains to gain the victim's trust and then deceive them. In these cases, the fake websites are very similar to the legitimate ones.
Frequently, the URLs have only a few letter variations. Website spoofing is used even more heavily by crooks around festive and shopping dates, such as Black Friday. It also heavily involves social engineering, spam, phishing, and spear phishing.

Cases and examples of domain spoofing

Imagine that a hacker has created a fake website that looks a lot like your bank's website. Then you receive an email apparently sent by your bank, saying that someone tried to access your account from some distant country. You are then prompted to click the link, access the website, and provide information to resolve the issue. You already know where this is going, right? It is interesting to note that email spoofing and website spoofing are often used within the same scam. It is also possible that the cybercriminals use a malicious file to infect your machine with ransomware, spyware or some other type of malware. Imagine having to open a file you received by email, supposedly sent by your company's CEO: the damage to your business can be huge. In more sophisticated cases, the attacker will use email spoofing alone, trying to impersonate a partner or someone inside the company, especially a C-level executive, to request a bill payment or a wire transfer. Both website spoofing and email spoofing are linked to making money illegally, stealing confidential information that can be used in other scams, espionage, the sale of secret data, and even the takeover of machines to turn them into bots for a botnet.

How to prevent spoofing

In the case of email, there are authentication mechanisms that help fight spoofing, such as SPF (Sender Policy Framework), DKIM (DomainKeys Identified Mail) and DMARC (Domain-based Message Authentication, Reporting & Conformance). There are also more comprehensive tools, such as a Secure Email Gateway, which let you configure the authentication mechanisms we just mentioned.
Overall, the security tips are: pay close attention to emails and websites that request important information, carefully check email and website addresses, and don't open suspicious attachments or click suspicious URLs. If you need to verify a message's legitimacy, look for other ways to do so. And if you receive a message asking you to update an account or offering an unmissable deal, don't click the link. Instead, go directly to the company's official website. These are the main tips to keep you and your company safe from domain spoofing and other threats.
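Alongside SPF, DKIM and DMARC, a complementary heuristic is to flag sender domains that sit one or two typos away from a trusted domain (the "added letter" trick described earlier). The sketch below uses plain edit distance; the domain names and the distance threshold are illustrative, and this is no substitute for proper email authentication.

```python
def edit_distance(a, b):
    """Levenshtein distance between two strings, via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def looks_spoofed(sender_domain, trusted_domains, max_distance=2):
    """Flag a domain that is suspiciously close to, but not exactly,
    a trusted domain (e.g. 'paypa1.com' vs 'paypal.com').
    A simple heuristic only; exact matches are considered legitimate."""
    for trusted in trusted_domains:
        d = edit_distance(sender_domain.lower(), trusted.lower())
        if 0 < d <= max_distance:
            return True
    return False

# Example with illustrative domains:
trusted = ["paypal.com", "google.com"]
suspicious = looks_spoofed("paypa1.com", trusted)   # one substituted character
```

Homograph attacks defeat a naive character comparison like this one, which is one more reason to rely on the authentication mechanisms above rather than string heuristics alone.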
Nowhere in the UK is more closely associated with British sea power than Portsmouth. Home to two-thirds of the Royal Navy’s surface fleet, its historic dockyard and museums also play host to the fragile timbers of the Mary Rose and the scourge of Napoleon’s navy, HMS Victory. Earlier this month, they were meant to be joined by another, equally illustrious vessel. White, sleek and bedecked with solar panels instead of sails, the Mayflower is the autonomous successor to the boat that bore the Pilgrim Fathers to their destiny in Massachusetts – and, say its creators, the future of shipping for the next four hundred years. “Unfortunately, she couldn’t make it,” says Andy Stanford-Clark, a master engineer at IBM and one of the boat’s leading designers. The reason, he explained earlier to an auditorium of journalists and businesspeople, was the weather. While the Mayflower was perfectly capable of making the short voyage from Plymouth alone, the waters proved too choppy for its manned escort vessel. Consequently, its robotic charge was confined to a much shorter patrol off Rame Head in Cornwall, its progress awkwardly depicted behind Stanford-Clark on a blocky livestream. Despite these minor setbacks, the Mayflower is one of the most advanced examples of a series of unmanned vessels trialled in recent years. Motivated by the obvious cost and safety implications involved in removing the need for a human crew, unmanned cargo ships have already been tested in Japan and Norway. Trials of autonomous ferries and water taxis, meanwhile, have also taken place in Finland and Germany. While these vessels have been designed to operate in coastal waters, however, the Mayflower was built to cross oceans. 
Constructed in collaboration with ProMare, a marine research organisation, the boat’s hull is bursting with sensors, edge computing and AI systems that help it make swift and complex decisions on route planning, how and when to avoid obstacles, and conduct climatological and ecological research. Stanford-Clark insists, however, that the boat is not IBM’s attempt to dominate world shipping. “Our mission is cloud and AI,” he says, and in that sense, the Mayflower is a useful testbed for a whole host of technologies that have direct applicability to land-based problems. Indeed, a lot of the systems on board are already commercially available, explains IBM’s business automation specialist, Doug Coombs. Consequently, businesses of all stripes are “looking at how we’re applying this technology and thinking, ‘Okay, I’ve got similar use cases’,” he says, in everything from product recommendation to assessing the eligibility of organ donations for transplant patients. There are also very clear maritime applications for the lessons learned in operating Mayflower, explains Stanford-Clark. One, he says, “is the transferability of the ‘AI captain’ technology, to other types of ship.” This system is not only capable of operating the vessel in conditions where it’s cut off from communications with the outside world – important, say, if it finds itself in the middle of a storm – but also in full compliance with the Convention on the International Regulations for Preventing Collisions at Sea and the SOLAS Convention. Crucially, explains Stanford-Clark, the AI captain also has a sense of self-preservation, which could make it a useful safety system for manned vessels. “I’ve described it as like a ‘guardian angel,’ looking over the shoulder of the human captain,” he says. It certainly sounds like a more realistic application of AI in maritime navigation in the near term. 
While there have been trials of unmanned vessels around the world, the shipping industry retains a stand-offish attitude towards removing the human from the bridge. "I think the marine industry recognises that autonomy is coming," says Stanford-Clark, who has participated in conversations with cargo companies about the applicability of IBM's systems to their own vessels. "It's not if, but when. [But] we accept it'll be a while yet before every ship is run autonomously."

Another step toward autonomous operations may be remote control. Recent advances in satellite connectivity via LEO constellations raise the possibility that entire fleets could be controlled by a navigator sitting thousands of miles away. While companies like Inmarsat and Rolls-Royce have conducted their own research into this area, Stanford-Clark remains ambivalent. "There isn't an AI challenge there," he says. "Just being able to remotely pilot a ship – well, that's like a radio control boat on a lake."

IBM and ProMare's aims for the Mayflower are rather more ambitious. Last year, the vessel attempted its first transatlantic voyage, following the same route charted by its namesake in the 17th century. Shortly after setting sail from Plymouth harbour, however, the ship developed a fault in a coupling that helped charge its batteries and decided to turn back. Ironically, all the AI systems worked perfectly. Having spent the past nine months re-testing all the technology aboard, the Mayflower and its AI captain will try crossing the Atlantic again later this year. Success will mean the first civilian crossing of an ocean by an autonomous vessel and provide a window into the future of shipping as we know it – that is, "as long as it doesn't break again," says Stanford-Clark. "That's always the hope."
In a sentence, NPMD, or network performance monitoring and diagnostics, is a proactive effort to collect network telemetry that will be useful in future troubleshooting efforts pertaining to both the end-to-end performance and security of a network. This short definition, however, is entirely too vague because it doesn't include details on the types of telemetry collected, how it is evolving, the types of devices sending the data, or the processes that have matured over the last decade. Not even a brief paragraph can fully explain how NPMD is improving the collective effort toward root cause identification. Read on to learn more…

It is well understood that an optimized network is critically important to digital business operations. The push to maintain a finely-tuned infrastructure is fed by a constant urge to stay competitive and grow the business. It has also accelerated the migration to the cloud and the adoption of container architectures, which in turn has introduced blind spots into the traditional hop-by-hop visibility that network professionals have come to rely on. The insights generated by NPMD have led to a recognition that there is a need to better align the goals of network operations with those of security operations, since the two teams share a number of mutual interests.

Whether it is a router, switch, or other network device, it will likely be monitored the first time it transmits a packet onto the network. It doesn't matter if the environment is a LAN/WAN, software-defined networking (SDN), or network function virtualization (NFV) component. One of the goals is to monitor, measure, diagnose and generate alerts for any IP address in the aforementioned environment. This includes internet of things (IoT), cloud-hosted services (e.g., containers), wireless endpoints, and servers/VMs. To gain insight into these devices, they must send messages on their health to a collection point. Devices connected to the network send data in multiple formats.
Most of these transmissions are standards-based, or in a syntax common enough to be considered a standard, such as NetFlow, IPFIX, sFlow, SNMP, Syslog, event logs, and packet capture. More recently, the JSON format has been used when sending network performance and security telemetry. DNS data exfiltration is another transmission technique that can be used for both the malicious and legitimate transmission of information. There are many others, and more methods are sure to become available. Data sources ingested by NPMD solutions may include many different types of events, device metrics, streaming telemetry and contextual information.

Finding the root cause of network problems has been the goal of just about every network troubleshooting tool or platform released over the last 30 years. Although the information from these systems helps form a greater context around a specific problem, in most cases only the technician can unearth the exact source of the problem. Seldom will network monitoring by itself tell us exactly the who, what, when, where and why of an issue, and even more rarely the order of events that led up to the problem. NPMD, however, hopes to make strides here as well. NPMD platforms intend to aid the troubleshooter by guiding them with diagnostic workflows. The interfaces supporting this initiative serve up the forensic data needed to more methodically guide the NetOps team, for example, to an ultimate grasp of how exactly the performance degradation was introduced.
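As a concrete illustration of the JSON style of telemetry mentioned above, here is a minimal sketch. The field names, device identifier, and metric name are invented for the example rather than taken from any standard:

```python
import json
from datetime import datetime, timezone

def build_telemetry_record(device_id, metric, value):
    """Assemble one JSON telemetry record for a hypothetical collection point."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "device_id": device_id,
        "metric": metric,
        "value": value,
    }

record = build_telemetry_record("edge-router-01", "if_octets_in", 91342155)
payload = json.dumps(record)

# In a real deployment this payload would be streamed or POSTed to a
# collector; here we just show the serialized form.
print(payload)
```

The appeal of JSON over binary formats like NetFlow is exactly what this sketch shows: the record is self-describing and trivially parsed by any downstream tool.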
To further aid in root cause identification, artificial intelligence for IT operations ("AIOps") functionality can be used to provide insight into the quality of the end-user experience or to help surface problems that might not get noticed by the ops team. It's basically more context. By studying the same network-derived performance telemetry outlined above, some vendors are delivering AI-driven insights. The ultimate root cause of most issues, however, will likely continue to be determined by a human.

Just as its name implies, NPMD tools have the ability to monitor, diagnose and generate alerts for dynamic end-to-end network service delivery. An adjacent technology, DEM or digital experience monitoring, focuses more on the end-user. Although NPMD and DEM share a similar goal of improving performance, one focuses more on how the network is dealing with connections and the other on how the end-user is experiencing the connections. There is certainly a division, albeit a bit blurry at times. Consider a few differentiators:

- Path: NPMD is aware of the network path from one AS (Autonomous System) to another, or router to router, to any destination taken. DEM might issue traceroutes from an end system that return the hop-by-hop route taken to a very specific destination.
- Availability: NPMD might ping all devices on a network to ensure they are up and running and supporting all possible paths, whereas DEM focuses largely on the availability of selected applications.
- Latency and Jitter: In the past, NPMD predecessors delivered latency information (e.g., Cisco IP SLA, Medianet, etc.) but, for the most part, the market hasn't found tremendous value here. DEM, on the other hand, provides more accurate latency metrics closer to the source (i.e., the application itself) and tends to be more representative of the end-user experience. Synthetic testing can be used to provide network latency, jitter, and packet loss information that complements DEM functionality as it pertains to network performance.
- Holistic: NPMD is engineered to ingest data from nearly everything from any device and gives accurate general information about all devices on the network. DEM was developed to share very detailed performance information only from the end systems taking measurements and only on selected applications.

Where there are challenges, there are opportunities. The NPMD space is no different. As just one example, the evolution to cloud computing and cloud (and hybrid-cloud) networking has brought new challenges in observing, monitoring, and diagnosing new types of network infrastructure, where some or all of an organization's network capabilities and resources are hosted in a public or private cloud platform.

In response to the increasing complexity of today's networks and the sheer volume of data collected, some vendors have begun incorporating artificial intelligence/machine learning (AI/ML) analytics. Although the logic involved in anomaly detection, event correlation, and root cause analysis (RCA) has had to change, some vendors are seeing an improvement in event detection through the use of AI/ML technologies.

As of late, the emphasis appears to be on optimizing the customer experience. Some NPMD vendors are listening and providing views that are more service- and application-focused. Other vendors are incorporating DEM capabilities in the form of synthetic transaction monitoring (STM) and path awareness into their NPMD platforms. The Kentik Network Observability Cloud offers a modern, SaaS-based approach to network performance monitoring and diagnostics, combining flow-based monitoring, cloud network observability and synthetic monitoring features that allow for proactive monitoring of all types of networks. Start a free trial to try it yourself.
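The latency, jitter, and packet-loss figures that synthetic testing provides can be derived from a handful of probe round-trip times. A minimal sketch, with made-up RTT samples standing in for real probe results:

```python
from statistics import mean

# Round-trip times in milliseconds for ten synthetic probes;
# None marks a probe that never came back (packet loss).
rtts_ms = [21.0, 23.5, 22.1, None, 24.0, 21.8, 22.6, 23.1, None, 22.4]

answered = [r for r in rtts_ms if r is not None]

# Average latency over the probes that returned.
latency_ms = mean(answered)

# Jitter as the mean absolute difference between consecutive RTTs.
jitter_ms = mean(abs(a - b) for a, b in zip(answered, answered[1:]))

# Packet loss as the share of probes that never returned.
loss_pct = 100 * (len(rtts_ms) - len(answered)) / len(rtts_ms)

print(f"latency={latency_ms:.2f} ms  jitter={jitter_ms:.2f} ms  loss={loss_pct:.0f}%")
```

Real probes, of course, run continuously against many targets; the point here is only that three of the key metrics discussed above fall out of a simple list of timings.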
This page provides some useful formulas that can be used in the Data pane. Any combination of the formulas below is acceptable. The formulas refer to the data in cell A1 but may be applied to any cell. For details on Data pane formulas, see the Microsoft Excel documentation. All Microsoft Excel formulas can be used in the Data pane.

First word in a sentence:
=LEFT(A1,FIND(" ",A1)-1)

First three words in a sentence:
=LEFT(A1,FIND(" ",A1,FIND(" ",A1,FIND(" ",A1)+1)+1)-1)

Part of the sentence starting with the fourth word:
=MID(A1,FIND(" ",A1,FIND(" ",A1,FIND(" ",A1)+1)+1)+1,LEN(A1))

The part of the sentence starting with "word":
=MID(A1,FIND("word",A1),LEN(A1))

Check if two strings are equal:
=EXACT(A1,B1)

Duplicate the string three times:
=REPT(A1,3)

Concatenation of two strings:
="Hello " & "There"

Generate a random uppercase letter:
=CHAR((RAND()*26)+65)
We are nearing the end of our year living in Europe. One of our goals for this trip was to experience the rich Roman and Medieval history in the area – most of which requires driving south. It is very quick to get around France once you are on the freeway, but freeway entrances are fairly far apart. From our house, you can get on the freeway by driving north or south.

The south entrance, we will call this Route A, is further away, but is in the right direction once you are on the freeway. Here is the Google Map from close to our house to the town of Orange. The north freeway entrance, we will call this Route B, is closer, but is north from where we live so takes you out of the way. However, you spend more of the time driving freeway speeds. Here it is on Google Maps.

Now here's the thing – my husband and I do NOT agree on which route to take, and driving in the car offers plenty of time to discuss why our particular point of view is the correct one. My husband started with the stats from Google Maps to support his point of view:

- Route A: 46.3 km, 47 minutes (shorter and faster)
- Route B: 64.3 km, 51 minutes (longer and slower)

I pointed out that Google Maps is based on algorithms and not reality, so suggested we do some real-life measurements. The next few trips south I wrote down our departure times and arrival times at a set point on the freeway both coming and going (yes, we are that kind of couple!) using both routes:

- Route A: 46 minutes, 58 minutes, 53 minutes, 52 minutes (faster 1 out of 4 times)
- Route B: 50 minutes, 52 minutes, 51 minutes, 49 minutes (faster 3 out of 4 times)

Now, it should be immediately obvious that my husband is an advocate of Route A, and I prefer Route B. And we both have data to prove that we are right. My husband would point out that my measurements were invalid since on the trips where Route A was slower we ran into unusual traffic.
I would point out that if we ran into unusual traffic 3 out of 4 times it probably means that the traffic is not that unusual. Of course, we should know better. He is an engineer and I am a mathematician. Before we started all our “fact finding,” we should have just put on the table that we are biased. I don’t like roundabouts, so getting on the freeway sooner is a way to avoid them. My husband finds the freeway boring and prefers going through the towns. So back to the market research topic. In this case the final result doesn’t really matter. But in any market research project, it’s important to know and confront biases at the beginning of a project, so you can ensure they are addressed. Findings must be strong enough that exceptions that support existing biases are identified and understood.
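For what it is worth, the stopwatch numbers collected on those trips can also be summarized with a quick average (a sketch using the times reported above):

```python
from statistics import mean

route_a = [46, 58, 53, 52]  # minutes, four measured trips
route_b = [50, 52, 51, 49]

avg_a, avg_b = mean(route_a), mean(route_b)
print(f"Route A average: {avg_a:.2f} min")  # 52.25
print(f"Route B average: {avg_b:.2f} min")  # 50.50
```

On these samples Route B averages slightly lower, but four trips per route is far too small a sample to call the question settled, which is rather the point of the story: the numbers alone will not resolve a disagreement that is really about bias.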
One of the biggest problems that most organizations deal with these days has less to do with the sheer volume of data they're working with on a daily basis and more to do with just how much of it is unstructured. Unstructured data, as the term suggests, is information that either doesn't have some type of pre-defined model, or that isn't organized in any pre-determined way. Typically, this information is very text heavy and contains not only dates and numbers, but other pieces of critical insight as well. Unstructured data can also come in the form of text in a scanned document, or some other type of photographed image.

That, in essence, is what Optical Character Recognition (otherwise known as OCR) is designed to help with. It's a technological process that can convert the text in those documents into something that can not only be edited and stored but searched for electronically as well.

What is Optical Character Recognition (OCR)?

Breaking Things Down

By far, the biggest reason why OCR is so important is because it A) helps to eliminate the types of manual data entry tasks that eat up too much of your employees' valuable time, and B) assists in capturing critical information in a way that makes it easier than ever to extract raw value from it moving forward. Typically, the process goes a bit like this:

- First, documents are scanned (or photographed) into some type of electronic form.
- Then, Optical Character Recognition is applied.
- The text contained in that document is then saved as text with searchable metadata.
- This can then be used to initiate workflows and better support processes across the board.

Indeed, Optical Character Recognition is designed to eliminate the need for manual data entry by automatically recognizing characters like letters, numbers, and symbols. Then, that information can be easily worked with on a digital platform.
It can even be stored in an intelligent information management platform like M-Files, dramatically increasing the ease with which you can search for and retrieve that information.

All told, Optical Character Recognition brings with it a wide range of benefits that organizations of all shapes and sizes can make use of. In addition to improving the ease with which you can find information, OCR is also at the heart of many developing processes like machine learning. Take document management, for just one example. Machine learning is a critical component here because it helps to eliminate manual tasks that are both redundant and time-consuming, thus freeing up as much time as possible on behalf of your employees so that they can focus on those matters that truly need their attention.

In addition to intelligent character recognition for handwriting, OCR (when paired with machine learning) can help improve sentiment analysis, object recognition and more. It can even become the cornerstone of your data privacy protection policy: if you know exactly what information you're working with and where it is stored, you're in a far better position to keep it safe from those who want to do you harm.

Optical Character Recognition is even at the heart of many digital transformation efforts for companies. Remember that the first part of any successful digital transformation involves digitizing all of those paper-based documents that you're working with, all so that you can properly store them and make them available to those across your organization who need that information to do their jobs. With the right OCR solution by your side, you can do this by digitizing things like meeting notes, agendas, letters, client records and even photographs. At that point, it can all be inserted into your digital content management system so that it can move freely across your enterprise as needed.
But in the end, the real benefit is that Optical Character Recognition is the first step towards kicking off larger workflow automation projects within your business. A lot of the manual tasks that your employees are responsible for may be important, to be fair — but they also take up far too much time and leave open the door to issues like human error. With workflow automation, you have none of these side effects to deal with. The processes themselves get completed faster, more efficiently and more accurately than ever before — all by way of steps that are easily repeatable as well. So not only do you get guaranteed consistency, but your employees can now focus on those matters that are actually generating revenue for your business — which may very well be the most important benefit of all.
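The "searchable metadata" step described in the process list earlier can be sketched as a tiny inverted index. The sample text and document names below are invented for illustration; in practice, the text would come straight from an OCR engine:

```python
from collections import defaultdict

# Hypothetical OCR output for two scanned documents.
ocr_output = {
    "invoice_0142.png": "Invoice 0142 due 2021-06-30 total 450.00",
    "meeting_notes.png": "Meeting notes: review invoice process and workflows",
}

def build_index(docs):
    """Map each lowercased word to the set of documents containing it."""
    index = defaultdict(set)
    for name, text in docs.items():
        for word in text.lower().split():
            index[word].add(name)
    return index

index = build_index(ocr_output)

# Any scanned document mentioning "invoice" can now be retrieved instantly,
# which is the whole point of making OCR output searchable.
print(sorted(index["invoice"]))
```

Production systems use far more sophisticated indexing, of course, but the principle is the same: once text has been extracted, finding a document becomes a lookup instead of a manual search through paper.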
Park Service backs local North Carolina project. The National Park Service is hoping remote sensors will allow it to protect sea turtle hatchlings while keeping the beach open a little more often on Hatteras Island, North Carolina. Currently, when officials spot a nest, they close off a 10-meter-by-10-meter area around it for the first 50 days of incubation. “We go out there every single morning, starting May 1 through at least Sept. 15, patrolling 67 miles of shoreline for turtle nests,” said Britta Muiznieks, an NPS research coordinator and wildlife biologist at Cape Hatteras National Seashore. They have found 110 nests so far this year, she said. The cordoned-off areas don’t usually interrupt fishing operations, beachgoers or vehicles. But when the turtles are ready to hatch, the enclosure is extended all the way to the shoreline “to protect the emerging hatchlings on their trek to reach the water,” procurement documents said. The trouble is the eggs may hatch right after 50 days or they may take another six weeks, and there’s no great way to predict it. Some nests aren’t viable in the first place, but local regulations prohibit park officials from checking on them regularly. Sea turtle populations worldwide declined by as much as 95 percent in the last century, according to Nerds Without Borders, a group that helps develop the sensors and provides volunteer labor. Several species nest at Hatteras, but primarily loggerheads, which are considered endangered internationally and threatened in the United States. Eggs in a single nest hatch simultaneously and generally at night, when the 100-150 hatchlings march to the sea. Sometimes, the hatchling tracks disappear in the sand after strong wind or rain, so officials monitoring the nests don’t know the turtles have hatched and keep the beach closed unnecessarily. 
The remote sensors the Park Service plans to procure will ensure officials monitoring the nests know when the creatures have hatched -- and when they can open the beach again. The devices measure temperature and motion inside the nests, regularly sending data over the Internet using mobile-phone technology. The technology is still experimental. "We hope that we will soon understand how to use the data to accurately predict when hatchlings will emerge from the nest," said Hatteras Island Ocean Center, the organization selling the sensors the government intends to buy. The center's founder, Eric Kaplan, formerly a CEO of an electronics company, said he was not aware of sea turtle research using similar devices.

The Park Service tried a handful of the sensors last year, but they weren't as sophisticated as they are now, and they were acquired too late in the season to provide helpful data, Muiznieks said. Knowing when the turtles will hatch would not only help keep the beach open more but also would allow park officials to alert interested tourists when they have a good chance of seeing the hatchling migration to the sea. "This is a great place for ecotourism, and my organization is focused on trying to use ecotourism to better the economic conditions here," Kaplan said. "The beach closures have a very negative impact on the local economy."
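Logic for flagging an emergence from the motion data described above might look like the following sketch; the thresholds, units, and readings are all invented purely to illustrate the idea, not drawn from the actual sensors:

```python
# Each reading: (hour, motion_level) from a hypothetical in-nest sensor.
# Hatchlings digging toward the surface show up as sustained motion.
readings = [(0, 0.1), (1, 0.2), (2, 0.1), (3, 1.8), (4, 2.1), (5, 2.4)]

MOTION_THRESHOLD = 1.5   # invented units
SUSTAINED_HOURS = 3      # require several consecutive high readings

def emergence_suspected(samples, threshold, run_length):
    """True if motion stays above threshold for run_length consecutive samples."""
    run = 0
    for _, motion in samples:
        run = run + 1 if motion > threshold else 0
        if run >= run_length:
            return True
    return False

print(emergence_suspected(readings, MOTION_THRESHOLD, SUSTAINED_HOURS))  # True
```

Requiring a sustained run rather than a single spike is one simple way to avoid false alarms from wind, rain, or a passing animal.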
Imagine a construction worker has stayed two hours longer than his normal shift; perhaps he is covering for a friend. He's stressed and tired, his heart rate is up, and his alertness is reduced. It is potentially an accident waiting to happen. Fortunately, your company has invested in the Internet of Things (IoT). An automatic alert has already been sent from the device he is wearing to his shift manager, who can pinpoint his exact location and quickly intervene.

This is just one of the many potential uses that the IoT could bring to construction. Interventions like this could prevent mistakes and even save lives. That is why the Internet of Things is one of the most exciting new innovations in construction technology. Consultants at McKinsey believe it could have a global economic impact of nearly $1 trillion on worksites worldwide by 2025. Already, $8 billion has been invested in IoT in construction worldwide, and with the sheer variety of applications and some serious benefits, it could be a very promising market indeed.

What is the internet of things?

The definition of the IoT has evolved significantly in the last two decades due to the convergence of multiple technologies and fields. In 1999, radio-frequency identification (RFID) was viewed as essential to its development, but a growing portion of IoT devices such as smartphones, tablets and computers have since been the driving force of the idea. Put simply, the internet of things is the internet for things: a way for our devices to connect through the internet in a similar way to how people already do.

IoT in construction, for example, can involve the use of internet-connected sensors which are placed around the construction site or are worn by workers. These devices collect certain types of data about activity, performance and conditions on the building site.
They would send these to a central dashboard where the data can be analyzed and used to inform decision-making. Traditionally, most internet-connected devices have been computers and smartphones. Now, however, a huge variety of sensors can easily and cheaply be upgraded with a chip like a SIM card. This could include wristbands that monitor heart rate, temperature sensors and vibration monitors. Connecting these devices to a central database means many more aspects of your sites can be monitored in 'real time'. This can have huge implications for safety, security, productivity, and cost reduction.

What are its benefits and what are its drawbacks?

Improving safe working practices: Wearable IoT devices have real potential for improving safety on building sites. If all staff on a job site wear a wristband or clip-on device, data about their movements and activity can be used to discover any risky behavior. Take the example of a New York construction firm that did just that. The device would send an automatic alert to the company's site safety manager whenever the device physically dropped by three feet or more (the idea being to immediately notify health and safety of any falls). The manager noticed that one worker appeared to be repeatedly falling and went to investigate. As it turned out, he was jumping into a pit rather than using the ladder provided. He was naturally reminded of the dangers of his behavior!

Improve resource management: If all machines, staff and materials are connected to the internet with a chip, you can geolocate them immediately. How many hours get wasted on building sites searching for materials? How many liters of fuel are burned by idle engines? How much time do workers spend underemployed when they could be sent to support other tasks? The potential cost and time savings here are significant.
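The three-feet fall alert described in the safety example above can be approximated with very little logic. A minimal sketch, assuming the wearable reports periodic altitude samples in feet (all numbers here are made up):

```python
FALL_THRESHOLD_FT = 3.0  # alert when altitude drops this much between samples

def detect_falls(altitude_samples_ft):
    """Return the indices of samples where altitude dropped by >= 3 ft."""
    alerts = []
    for i in range(1, len(altitude_samples_ft)):
        drop = altitude_samples_ft[i - 1] - altitude_samples_ft[i]
        if drop >= FALL_THRESHOLD_FT:
            alerts.append(i)
    return alerts

# Simulated readings: the worker "falls" (jumps into the pit) at sample 3.
samples = [5.0, 5.1, 5.0, 1.2, 1.3, 5.0]
print(detect_falls(samples))  # [3]
```

A real system would add debouncing, accelerometer data, and an alerting channel, but the core check is this simple: compare consecutive readings and flag large drops.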
Take truck monitors by the IoT firm Trimble – their rugged IoT construction solutions can identify the location and activity of a wide variety of vehicles and other assets.

Better reporting and cost-efficient maintenance: IoT devices can continually feed back information about conditions in both completed and under-construction sites. Sensors can monitor for things like unusual vibrations on a piece of machinery that suggest it needs to be fixed. They can detect increases in humidity, which can tell your inspection teams about damp issues. At PlanRadar, this is an area we are particularly excited about. Our app already functions as an IoT solution for construction by providing real-time information about site problems to maintenance teams. Using hardware that users are already familiar with, the app enables users to collect a central bank of evidence that makes oversight and reporting more efficient. With improvements in IoT sensors, this could be achieved even faster and the data passed from person to person could be even more accurate.

Reduce insurance premiums: Having this kind of data could additionally be used to renegotiate insurance premiums. Not only will you be improving safety on-site, but you will also be able to prove it. In the case of IoT building construction company Pillar, these devices can even help prevent fires.

Data security and safety issues: An IoT database could be a real goldmine of sensitive information to organized criminals. If a malicious actor found a way to hack into a company's IoT database, they could easily access a list of where all your machinery is currently located or where expensive materials are stored. It is essential that IoT databases are confidential and unauthorized parties cannot gain access.
Privacy issues: With security issues established, most remaining concerns about the IoT center on tracking the physical movement of workers. Many would object on privacy grounds, as could labor organizations.

Price: While most IoT devices are relatively cheap, many job site owners will need convincing that they are worth the investment. Especially on smaller sites, where you can conduct all checks in just a couple of minutes, it might seem unnecessary to spend money on sensors and learn to use a dashboard when it can continue being done manually. For the time being, it is likely that IoT construction solutions will mainly be used on large building and civil engineering projects.

Deployment can be a big learning curve: Like any technology, sensors are only as good as the way they are deployed. There is little sense in spending money on, say, moisture sensors, when the biggest risk on a site is heat. Sensors need to be placed and chosen strategically. Many construction firms will have to follow a big learning curve before being able to truly benefit from these tools. Simply placing IoT sensors around a site will not solve issues by itself.

Ibrahim Imam, co-founder and co-CEO, PlanRadar
Direct-to-chip liquid cooling can handle higher densities than air cooling, while using almost half of the power required by server fans and computer-room air conditioners. Coolant circulates through cold plates, mounted directly to server processors, to remove heat at its source and cool data centers more efficiently.

- Standard cold plates, designed for major server manufacturers, make installation easy.
- Custom cold plates can be designed and delivered quickly, for all server types.
- The hybrid cold plate design uses fins for additional air cooling as a temporary backup, allowing the system to continue operating during service, testing, and upgrades.
- Channels in the cold plate, called turbulators, mix the cooling fluid to improve heat absorption.

Chilldyne uses turbulators, placed inside flow paths (channels) in the cold plate, to mix the coolant and increase thermal transfer. Warmer liquid on the outside of the channel is mixed with cooler liquid in the center to improve heat absorption. This also allows data center operators to fine-tune the liquid flow and set the optimal balance between heat removal and the pressure required for each server.

Hybrid cold plates deliver cooling fluid direct to the chip for the best possible performance while still retaining the air-cooling fins on top of the cold plate. This allows the cold plate to also serve as an air-cooling backup. If the cooling system needs to be taken offline for service, the servers can still perform on air cooling. This reduces downtime and simplifies service.

Thermal Resistance vs. Flow Rate for the Skylake, Sapphire Rapids and Milan Hybrid Heat Sinks

Thermal resistance measures how strongly a material opposes the transfer of heat. The lower the thermal resistance, the better the material is at transferring heat. The Thermal Resistance graph shows that as the coolant flow rate increases (through the heat sink or cold plate), the thermal resistance decreases.
Chilldyne cold plates are designed to have the optimal flow rate and ensure the highest level of thermal performance.

Pressure Drop vs. Flow Rate for the Skylake, Sapphire Rapids and Milan Hybrid Heat Sinks

Pressure drop is the loss (or resistance) associated with fluid moving through a passage. The lower the pressure drop, the easier it is for the coolant to move through the system's tubing, heat sinks/cold plates, and connectors. Pressure drop increases as the flow rate increases, as seen in the Flow Characteristics graph. Chilldyne cold plates were designed to strike a delicate balance between pumping losses (the pressure drop of the system) and heat transfer.
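The trade-off described by the two graphs (thermal resistance falls as flow rate rises, while pressure drop climbs) can be illustrated numerically. Every coefficient below is an invented placeholder, not a Chilldyne specification; the point is only the shape of the trade-off:

```python
def thermal_resistance(flow_lpm):
    """Toy model: resistance (degC/W) falls as flow rate (L/min) rises."""
    return 0.02 + 0.05 / flow_lpm

def pressure_drop(flow_lpm):
    """Toy model: pressure drop (kPa) grows roughly with the square of flow."""
    return 1.5 * flow_lpm ** 2

# Find the lowest flow rate that keeps a 300 W chip within a 45 degC rise
# above coolant temperature, then report the pumping cost of that choice.
chip_power_w, max_rise_c = 300, 45
flow = next(q / 10 for q in range(1, 100)
            if chip_power_w * thermal_resistance(q / 10) <= max_rise_c)

print(f"flow = {flow:.1f} L/min")
print(f"temperature rise = {chip_power_w * thermal_resistance(flow):.1f} degC")
print(f"pressure drop = {pressure_drop(flow):.2f} kPa")
```

Choosing the lowest flow that satisfies the thermal limit is one way of expressing the "delicate balance" the text describes: any extra flow buys a smaller temperature rise, but at a quadratically increasing pumping cost.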
Internal Audit 101: This series explores the foundations of internal audit by industry, including basic definitions and concepts relative to auditors in specific sectors.

What Is a Financial Audit?
A financial audit typically refers to the annual audit of an organization's financial statements to ensure that its records are a fair and accurate representation of the organization's financial transactions. The audited financial statements that are reviewed yearly include the income statement, balance sheet, and cash flow statement. A financial audit can also include an audit of the organization's internal control over financial reporting, which is commonly integrated with an audit of the financial statements. Both internal auditors and external auditors can conduct financial audits. The biggest difference between external and internal audits is the objectivity and independence of the external audit firm's opinion on the financial statements and internal controls audited.

History of Financial Audit
Most companies receive a yearly audit of their financial statements to satisfy debt covenants to lenders. For publicly traded companies, financial audits are a legal requirement under the Sarbanes-Oxley Act (SOX) of 2002. In addition to requiring an audit of the company's financial statements, SOX also requires public companies to receive an audit of management's assessment of the effectiveness of the company's internal control over financial reporting. SOX established the Public Company Accounting Oversight Board (PCAOB), which oversees rules and standards for such audits. SOX audit programs can vary in maturity and status based on when the organization went public and whether the organization has updated its SOX program since it was initially required in the early 2000s.
Organizations that are planning for an initial public offering (IPO) will usually perform audit readiness activities in order to ensure that they can meet SOX compliance once required.

Types of Financial Audits

External Financial Audit
External financial audits are usually conducted by employees of an independent certified public accountant (CPA) firm and include an audit of both financial statements and internal controls over financial reporting. External audits seek to identify whether there are any material misstatements in the financial statements, as well as evaluate the effectiveness of internal controls over financial reporting. An external auditor's findings result in an auditor's opinion that is included in the financial audit report. This opinion is a crucial accompaniment to the financial statements in helping analysts and investors gain comfort in an organization's financial condition and performance as stated by management.

Internal Financial Audit
Internal financial audits are conducted by employees of the organization known as internal auditors in order to provide management with an assessment of the effectiveness of financial reporting processes and internal controls over financial reporting. Internal audit teams may complement the work of external auditors based on a pre-agreed plan and meetings.
Internal audits help an organization improve its processes and internal controls by performing projects and controls assessments to identify areas of improvement or deficiencies in the controls and reporting process. This gives the organization the opportunity to remediate those issues before they become a material error. (Under generally accepted auditing standards, misstatements and omissions are considered material if they could "influence the judgment made by a reasonable user based on the financial statements.") The results of an internal audit, along with the internal audit team's recommendations for improvement, are recorded in a financial audit report that is provided to the organization's management and board of directors.

Financial Audit Procedures
Substantive procedures are the procedures performed to support financial audits. A substantive procedure may be a process, step, or test that creates conclusive evidence regarding the completeness, existence, disclosure, rights, or valuation (the five audit assertions) of the financial statements. To qualify as a substantive procedure, enough documentation must be collected so that another qualified auditor could conduct the same procedure on the same documents and come to the same conclusion. Financial audit procedures are built around the five audit assertions at the account or asset level. Planning for a financial audit involves performing scoping and risk assessments prior to the audit project in order to understand areas that are material to the organization as well as evaluate areas of significant risk. External auditors will usually determine their level of reliance on the work of the internal audit function in obtaining audit evidence ahead of the audit, evaluating the extent of that reliance against requirements set forth by the American Institute of Certified Public Accountants (AICPA).
Financial Statement Review: Checklist
While there is variance across industries, the generic worksteps of a typical financial statement review would include:
- Audit Planning: Risk Assessment and Scoping
  - For financial scoping, a determination of materiality in light of the financial review process is required. Any accounts identified over that benchmark would be considered individually. Additionally, the remaining accounts should be assessed in the aggregate to determine appropriateness of coverage. Teams should confirm that the remaining balance of accounts not tested is below the materiality threshold determined by the team.
- Reconciliation: Compare the subledger balances received to the general ledger balance.
- Subledger analysis: Analyze all of the detailed transactions from the subledger, and ensure that the sum of all transactions agrees to the reconciliation. The subledger should be at the lowest level of detail.
- Sampling of transactions: Select a sample of transactions, typically using statistical analysis, to obtain comprehensive evidence over the execution of the transaction. Each sample item should be a single transaction; if more than one transaction rolls up into a selection, consider whether you have selected a sample of a sample.
  - Within the sampling of transactions, consider the coverage obtained from controls in place and the potential reduction of testing procedures based on control activities that are performed.
- Performance of account-specific procedures: for example, comparing transactions to the source invoice and confirming the completeness, accuracy, and validity of the transaction.
- Issue Management and Follow-up
  - Errors identified should be analyzed and extrapolated to determine the impact to the organization.
  - Remediation plans should be developed to remediate the current issue and to prevent it from happening again in the future.
- Prepare for the Formal External Audit
  - Hold conversations with the external audit team to discuss findings, and be prepared to share documentation of testing procedures performed.
- Leverage Technology to Streamline the Process

Some of these steps can be reduced if control coverage is identified to be sufficient (for example, for a fully automated transaction type).

Optimizing Financial Audits Using Technology
Performing a financial audit without technology can lead to breakdowns in version control, team communication, and prior-year comparisons. For organizations performing financial audits not related to SOX, leveraging internal audit management software can help streamline the entire financial audit process and create automated workflows to promote efficiency and effectiveness throughout the end-to-end audit lifecycle. SOX-compliant organizations can easily link controls testing and financial audit testing to identify efficiencies. Research performed over the last decade by global consulting firm Protiviti consistently reveals rising key control counts, increased hours spent on compliance, increasing internal and external costs, and the continued inefficiency of manual processes specific to SOX. Organizations that have successfully implemented audit management software report time savings of 33% to 50% on administrative audit work performed during testing and documentation, savings that can ultimately convert into more value-add projects for the business. This ongoing research points to one conclusion: the time has never been better to embrace SOX and audit automation software. First-rate audit management software can not only help strengthen internal controls, but also seamlessly link together controls and substantive testing, which can reduce the amount of financial audit testing that auditors need to perform to accomplish audit goals.
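The reconciliation, materiality, and sampling steps in the checklist above can be sketched in a few lines of Python. The tolerance, seed, and toy data below are illustrative assumptions, not audit standards; the point is only the shape of the checks.

```python
import random

def reconcile(subledger, gl_balance, tolerance=0.01):
    """Reconciliation: does the sum of subledger transactions tie to the GL balance?"""
    return abs(sum(t["amount"] for t in subledger) - gl_balance) <= tolerance

def sample_transactions(subledger, n, seed=2024):
    """Sampling: select n individual transactions at random, reproducibly,
    so another auditor can re-perform the selection and reach the same result."""
    rng = random.Random(seed)
    return rng.sample(subledger, min(n, len(subledger)))

# Toy subledger of 50 transactions at the lowest level of detail.
txns = [{"id": i, "amount": 100.0 + i} for i in range(50)]
gl_balance = sum(t["amount"] for t in txns)

ok = reconcile(txns, gl_balance)       # subledger agrees to the GL
picked = sample_transactions(txns, 5)  # 5 single transactions pulled for testing
```

Note that each sampled item is one transaction from the lowest-level subledger, matching the checklist's warning against selecting "a sample of a sample."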
SBVR Speaks on Rules: Kinds of Guidance

Figure 1. Kinds of Elements of Guidance

business rule
Definition: rule that is practicable and that is under business jurisdiction
Note: A rule's being under business jurisdiction means that it is under the jurisdiction of an authority that can opt to change or discard the rule at its own discretion. Laws of physics may be relevant to a company; legislation and regulations may be imposed on it; external standards and best practices (other than business rules) may be relied upon. These things are not business rules from the company's perspective, since it does not have the standing to change them. The company will decide how to react to laws and regulations, and will create or adopt business rules to ensure compliance with them. Similarly, it will create or adopt business rules to ensure that standards or best practices (other than business rules) are implemented as intended.

element of guidance is practicable
Definition: the element of guidance is sufficiently detailed and precise that a person who knows the element of guidance can apply it effectively and consistently in relevant circumstances to know what behavior is acceptable or not, or to what things a concept corresponds
- The sense intended is: "It's actually something you can put to use or apply."
- The behavior, decision, or calculation can be that person's own.
- Whether or not some element of guidance is practicable is decided with respect to what a person with legitimate need can understand from it.
- For a behavioral business rule, this understanding is about the behavior of people and what form compliant behavior takes.
- For a definitional rule, this understanding is about how evaluation of the criteria vested in the rule always produces some certain outcome(s) for a decision or calculation as opposed to others.

A practicable business rule is also always free of any indefinite reference to people (e.g., "you," "me"), places (e.g., "here"), and time (e.g., "now"). By that means, if the person is displaced in place and/or time from the author(s) of the business rule, the person can read it and still fully understand it, without (a) assistance from any machine (e.g., to "tell" time), and (b) external clarification.

advice
Definition: element of guidance that is practicable and that is a proposition that permits a state of affairs or that acknowledges as possible a given state of affairs
- No business policy is an advice.
- No business rule is an advice.

business policy
Definition: element of governance that is not directly enforceable and whose purpose is to guide an enterprise
Note: Compared to a business rule, a business policy tends to be:
- less structured
- less discrete or not atomic
- less carefully expressed in terms of a standard vocabulary
- not directly enforceable.
Necessity: No business policy is a business rule.
Examples:
- The policy expressed as "A prisoner is considered to be on a hunger strike after missing several meals in a row."
- The policy expressed as "The prison medical authority will intervene if a hunger striker's life is in danger."
- The EU-Rent policy expressed as "Rental cars must not be exported."
- The policy expressed as "Each customer who complains will be personally contacted by a representative of the company."

element of governance
Definition: element of guidance that is concerned with directly controlling, influencing, or regulating the actions of an enterprise and the people in it

element of governance is directly enforceable
Definition: violations of the element of governance can be detected without the need for additional interpretation of the element of governance
Note: "Directly enforceable" means that a person who knows about the element of governance could observe relevant business activity (including his or her own behavior) and decide directly whether or not the business was complying with the element of governance.
Necessity: Each element of governance that is directly enforceable is practicable.
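The taxonomy above can be modeled as a small data structure. The sketch below shows how the key distinctions and the stated necessities might be checked in code; the class and field names are my own shorthand, not SBVR vocabulary, and the flags stand in for judgments that SBVR leaves to human interpretation.

```python
from dataclasses import dataclass

@dataclass
class ElementOfGuidance:
    statement: str
    practicable: bool = False
    directly_enforceable: bool = False
    under_business_jurisdiction: bool = False

    def is_business_rule(self) -> bool:
        # SBVR: a business rule is a rule that is practicable
        # and that is under business jurisdiction.
        return self.practicable and self.under_business_jurisdiction

    def satisfies_necessity(self) -> bool:
        # Necessity: each element of governance that is directly
        # enforceable is practicable (enforceable implies practicable).
        return (not self.directly_enforceable) or self.practicable

# A business policy: under business jurisdiction, but not practicable as stated.
policy = ElementOfGuidance("Rental cars must not be exported.",
                           practicable=False,
                           under_business_jurisdiction=True)

# A (hypothetical) business rule refining that policy into something
# a person could apply directly and consistently.
rule = ElementOfGuidance("A rental car must not be driven outside the country of rental.",
                         practicable=True,
                         directly_enforceable=True,
                         under_business_jurisdiction=True)
```

The check in `satisfies_necessity` mirrors the closing Necessity of the section: anything you can enforce directly must first be precise enough to apply.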
Over the last couple of years, as security vulnerability reports have piled up on products from such big vendors as Microsoft Corp., Oracle Corp. and Cisco Systems Inc., open-source advocates have snickered. If only those vendors would release their source code and let the open-source community at it, all their problems would go away, they said. And when the Code Red and Nimda worms chewed their way through hundreds of thousands of unpatched Microsoft Internet Information Services servers last year, Apache users sat back and smiled, believing nothing like that could happen to them.

Then it did. In late July, researchers found several flaws in the OpenSSL tool kit, which is commonly used for secure transmissions on Apache servers. About six weeks later, someone released a worm called Slapper that exploited the vulnerability and not only installed a back door on each infected server but also turned machines using OpenSSL into a waiting army of zombies by dropping in a DDoS (distributed-denial-of-service) tool kit as well. The infected machines can communicate with one another via a private, peer-to-peer network. Security experts predicted that it was only a matter of time before someone used the thousands of compromised servers to launch a devastating DDoS attack.

Despite the mantra that open-source software is more secure thanks to its communal writing and review process, the vulnerabilities in OpenSSL were all buffer overruns, the most common and, many say, most preventable flaws in software. That such flaws were found in an open-source tool kit and subsequently exploited by a destructive worm comes as no surprise to some experts. Still, it's enough to prompt some to question the long-held belief that open-source software is more secure.

"Linux is awful. There are no design specs.
Everybody and their half-brother who knows some [C code] writes code for it, and they all have the same lack of knowledge," said Gene Spafford, professor of computer science at Purdue University, in West Lafayette, Ind., and an expert on network security. "It's who writes it and whether it's planned [that makes a difference], not who looks at the code."

Despite such rumblings, however, few open-source believers are ready to drop Linux or other open-source products because of newly spawned security concerns. Mike Prince, for example, thought long and hard about security before deciding, in 1999, to roll out Linux companywide to thousands of users in hundreds of locations across the country. By the time Prince made the call, however, the CIO at Burlington Coat Factory Warehouse Corp. had no doubts about the reliability of the new software. As a longtime user of a variety of back-office open-source applications, Prince said he believed the security of the software was a given. And he hasn't changed his mind.

"The security of the open-source software hasn't been an issue. It's excellent," said Prince, at Burlington's headquarters in Burlington, N.J. "On the operating system side, although there are loopholes found, the speed with which they're fixed and the commitment to making the problem known and resolved are excellent. The stability rivals the best of the proprietary Unix systems. The whole security model in Linux is better than in Windows."

Who's Right?
So who's right? Does patent-protected development behind closed doors produce more secure software? Or does the collaborative, open-source community, where thousands of smart, independent developers are poised to spot and fix security problems? Many IT managers and security experts say it's not that simple. Security, they insist, comes down to attention to detail and careful coding, not whether the code is freely available on the Internet or locked in a vault on a corporate campus.
"Unless there's a great deal of discipline underlying the development, there's no difference in the security [of proprietary and open-source software]. Open source is not inherently more secure," said Peter Neumann, principal scientist at SRI International, in Menlo Park, Calif., and a security and networking expert who in 1965 helped design the file system for Multics, which is still considered one of the most secure and reliable operating systems ever written. "If everyone has the same bad skills, all the eyeballs in the world won't help you. Unless there's discipline, you still come up with garbage."

Advocates of Linux and other open-source software often cite users' ability to modify the code and adapt it to their environments as a key advantage of open-source applications. However, that can be a drawback if the people doing the modifications aren't well-trained. Some devotees say the real strength of open source lies in its transparency and the flexibility it gives customers.

"The transparency gives you security because you can pick and choose what's in your environment," said John Alberg, co-founder and vice president of engineering at Employease Inc., an Atlanta-based developer of human resources software and a user of numerous open-source applications. "Commercial software tends to have a lot of doors you don't know about," Alberg said. "What open source does is allow you to manage a more secure environment. There are fewer moving parts in the products, and, hence, you have fewer problems."

"Open-source software is developed by people who are more attuned to security. Commercial software vendors are trying to hit feature sets and target dates," said Dan Agronow, vice president of technology at Weather Channel Enterprises Inc.'s Weather.com site, in Atlanta, which uses Linux, Apache and other open-source software. "With open source, it isn't released until it's ready, and that's it. But we still pay a lot of attention to security.
You have to."

To the extent that open-source products such as Linux still suffer security holes, however, they may soon get help from a small number of startups dedicated to hardening the operating system. Guardian Digital Inc., of Allendale, N.J., recently released EnGarde Secure Linux Professional, which features a litany of added security functionality, such as a network gateway firewall, a network IDS (intrusion detection system) and a host IDS, and a security control center. Even the National Security Agency, of Fort Meade, Md., has gotten in on the act, producing its own Security Enhanced Linux distribution.

For as much criticism as Microsoft takes for the lack of security in its products, some Linux distributions have begun to experience more problems. Red Hat Inc., of Raleigh, N.C., for example, has issued fixes for 35 security problems in its Red Hat Linux 7.3 since June, while Microsoft, of Redmond, Wash., has released six patches in the same time period for Windows XP Pro. However, the list of patches included in the new Service Pack 1 for XP Pro shows 30 security-related fixes, including several that were never publicized or issued separately.

Is All Software Insecure?
But, some observers say, comparisons of bug reports simply prove that all software is insecure. The real determinant of security is competent programming and code review, they say.

"I don't think it's a good idea to have one rule as to whether code should be open. If Microsoft opened the [Internet Explorer] code now, it would probably be very bad because it's full of all kinds of bugs. But if it had been open from the start, that would have been good," said Avi Rubin, a principal researcher in the secure systems research department at AT&T Labs-Research, in Florham Park, N.J. "Apache is a good example. Anything like that that has a formal structure and people working on it is good," Rubin said.
"Part of the beauty of the open-source process is that they take into account that vulnerabilities will happen, so they're prepared for it. The people making decisions are responding out of pride, not from a business perspective."

Indeed, the response to open-source software security problems is one of the things that convinced Burlington Coat Factory's Prince that the open-source community was more dedicated to security than commercial vendors. Prince once found a bug in an open-source operating system utility and posted a question about it to a newsgroup. The author of the utility soon replied, confirming the problem, telling Prince how to work around it and saying he had a new version of the utility on the way that would fix the bug.

"That's what open source does. They have brilliant people who, once they understand the problem, are probably in competition with each other to fix it," Prince said. "There hasn't been a minute of time wasted being jerked around."

However, even hard-core advocates of open-source software concede that simply making source code available doesn't make an application more secure. "What really makes a difference is having someone who knows what they're doing writing the code and looking at the code," said Crispin Cowan, chief scientist at WireX Communications Inc., in Portland, Ore., a developer of secure Linux solutions. "But I think that the open-source process does enable greater security."
QuintessenceLabs – The Move to Quantum Random Number Generation (SPONSORED from Quintessence Labs)

The performance and characteristics of random number generators have a strong impact on security. Inferior quality or an insufficient quantity of random numbers makes crypto systems vulnerable, reducing security to well below its designed level. As computing power increases, so too will the importance of strong, reliable random number generation, particularly with a growing number of practical applications in quantum computing.

Two important parameters to look for in a random number generator are entropy density and throughput. Entropy is a measure of the randomness of the data, while throughput is a measure of the quantity of random data delivered. For a given throughput, lower entropy will result in keys that are less random, making them more vulnerable to hacking. Similarly, the throughput at a given entropy density determines how much high-quality random data can be delivered over a certain time interval; this determines, for example, the frequency at which keys can be rotated. Some random number generators with seemingly high maximum throughputs and maximum entropy will only deliver those high entropy levels at exceptionally low throughputs. Every method of random number generation has its strengths and weaknesses, but most approaches struggle to deliver high entropy and high throughput at the same time.

One technique for QRNG is quantum tunneling, a well-known quantum phenomenon in which charged particles travel ("tunnel") through a barrier that classical (or Newtonian) physics predicts they shouldn't be able to cross. Within the QRNG, a voltage is applied to a forward-biased diode junction, which serves as the barrier through which some of the charged particles tunnel. This tunneling within the diode creates random fluctuations in the current flowing through it.
Though many particles tunnel through the barrier, the exact number at a given time can't be predicted, yielding a truly random quantum effect and an ideal source of natural entropy. This effect is measured, then digitized and processed to ultimately generate ultra-high-bandwidth random numbers.

The U.S. National Institute of Standards and Technology (NIST) currently does not have standards that result in certification for random number generators. NIST does have recommendations for random number generators, described in Special Publications 800-90A (Deterministic Random Bit Generators), 800-90B (Entropy Sources Used for Random Bit Generators), and 800-90C (Recommendation for Random Bit Generator Constructions – Draft).

For information on qStream, the world's fastest quantum random number generator (QRNG), or our portfolio of quantum-safe crypto solutions and services, visit www.quintessencelabs.com. We also invite you to listen to our Founding CTO John Leiseboer, speaking on Quantum Encryption Hardware Evolution, November 1st at 1:00 PM ET.
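Entropy density, as described above, can be estimated empirically from a sample of generator output. A minimal sketch using the Shannon entropy of the byte-value distribution follows; note this is a crude estimator for illustration only, and real entropy assessment follows the far more thorough tests of NIST SP 800-90B.

```python
import math
from collections import Counter

def entropy_bits_per_byte(data: bytes) -> float:
    """Shannon entropy of the byte-value distribution, in bits per byte (max 8.0)."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A stream in which every byte value appears equally often scores the maximum 8.0...
uniform = bytes(range(256)) * 4
# ...while a heavily biased stream scores far lower, however fast it is produced.
biased = b"\x00" * 1000 + b"\x01" * 24
```

A low score at high throughput is exactly the failure mode the article warns about: plenty of bytes delivered, but little randomness in them.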
The Internet of Things (IoT) continues to digitize our already internet-dominated business and consumer landscapes. Indeed, physical objects such as lights, gates, and even cars can be equipped with sensors, processing chips, and other technologies that allow them to be part of the IoT. IoT has offered numerous new opportunities across virtually all industries, allowing them to streamline and enhance the efficiency of their processes.

Automobiles and the Internet of Things: A Long History
IoT has had a dramatic impact on the automotive industry, introducing new concepts such as smart cars, self-driving cars, and partially autonomous vehicles – all of which rely on rapid data processing. While electronic computer units in cars were introduced in the 1970s, computer systems that store data for future retrieval became popular in the 1990s. Initially, 16-pin data ports allowed maintenance personnel to examine the history and other recorded data of a vehicle's computer network. IoT then began permeating the automotive industry with the development of computer-equipped dashboards in 1995. These dashboards – which represented the first step in making cars "digital" – were initially designed to help diagnose car trouble. Then came game-changing new concepts such as integrated and interconnected intelligent cars and self-driving cars. The introduction of electric cars has provided a unique opportunity for IoT in cars, with app makers developing ways for users to view the status (such as battery life) of cars remotely. In fact, this introduced the "sharing economy" concept – a society where ownership of particular objects such as vehicles is shared via peer-to-peer networks. These networks allow users to acquire, provide, or share access to goods and services via a community-based online platform. Mobility-as-a-service providers such as Uber, as well as everyday service providers such as Grab, function similarly.
This trend is a response to the fact that resources are growing ever more expensive, and our awareness of the need to reduce consumption has grown. In fact, the sharing economy is estimated to reach $300 billion by 2025.

Auto Rental Industry Speeding to Leverage IoT
With data as the centerpiece of IoT, car rental companies – which comprise a good chunk of the automotive industry's revenue – are taking advantage of its many benefits. To be sure, automotive IoT solutions – with their ability to quickly address customer concerns – can potentially maximize profits within the auto rental industry. Specifically, IoT allows maintenance to be planned more efficiently, as real-time data from vehicles can be sent to a single mobile application. Meanwhile, many car manufacturers have incorporated artificial intelligence (AI) into their vehicles; this dramatically improves driving safety, which, of course, the auto rental industry can take advantage of. While car insurance is a standard option, many small auto rental companies' greatest fear is that reckless drivers could damage their rental vehicles. To combat this fear, AI-enabled smart cars could reduce the risks of profit-damaging accidents.

IoT Integration Coming in Clutch in the Auto Rental Industry
Let's explore the other potential benefits that IoT can bring to the auto rental industry…

- Simplification of Customer-Company Relations
In addition to making smart cars safer, IoT can transform companies' relationships with their customers into ones that are more interactive and personal. For example, smartphone apps can facilitate easier online transactions, with customer information being securely stored in an online database. This allows for easier management with fewer risks of error, giving customers a smooth and superior experience. Gone are the days of tedious manual cross-checking of hand-written records.

- More Efficient Car Maintenance
Tedious and menial car maintenance work can be made more efficient using IoT.
Sensors, for example, allow auto rental workers to monitor their fleets of cars in real time. Additionally, vital car stats such as tire pressure, fuel level, oil status, and battery strength can be easily – and securely – monitored, thanks to a smart cloud system. Web or mobile applications designed for auto rental operators can easily access this data, allowing management to make decisions based on actual data. IoT can also use data trends to predict the impending failure of car parts. Similarly, data regarding scheduled maintenance can be used to order spare parts in advance, reducing downtime and scheduling delays. Partnerships between companies that are part of the general auto industry (for example, car parts manufacturers) also benefit from data being shared via IoT, which can help streamline the maintenance process.

- Effective Use of Operational Costs
With maintenance streamlined, many operational costs can be used more efficiently. Profit loss during downtime (for example, car repair) can also be reduced, thanks to more efficient IoT-driven maintenance scheduling. IoT applications – with optional embedded security features – can also make the process of deploying rental cars more manageable and more secure. These IoT apps can also manage reservations and bookings, which can effectively reduce labor costs. Additionally, since most IoT app developers try to build apps with a smooth user experience in mind, only a handful of employees will need to be trained in their use.

- Enhanced Marketing
An IoT-equipped car rental web or mobile app can be integrated with social media websites or applications for enhanced marketing. This is crucial since younger, tech-savvy generations rely on social media for information. Indeed, a solid social media presence can give a car rental business an edge over its competitors.
Newer smart auto designs can allow a vehicle to recognize a customer’s phone, which can then act as a digital “car key.” IoT apps can also guide cars to areas with light traffic. The conveniences that IoT affords are practically endless.

IoT Offers the Auto Rental Industry a Big Brake

IoT is driving the auto rental industry into the future with the convenient, profitable and secure benefits it affords both businesses and end users.
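The fleet-monitoring and predictive-maintenance idea described above can be sketched in a few lines: compare each vehicle’s telemetry against simple safe ranges and flag units that need service. This is a minimal illustration only; the field names, threshold values, and fleet records are assumptions, not a real rental-fleet API.

```python
# Illustrative safe ranges for a few telemetry fields (assumed values).
THRESHOLDS = {
    "tire_pressure_psi": (30, 36),
    "oil_life_pct": (20, 100),
    "battery_volts": (12.0, 14.8),
}

def maintenance_flags(reading: dict) -> list[str]:
    """Return the telemetry fields that fall outside their safe range."""
    flags = []
    for field, (lo, hi) in THRESHOLDS.items():
        value = reading.get(field)
        if value is not None and not (lo <= value <= hi):
            flags.append(field)
    return flags

# Hypothetical readings pushed from two vehicles in the fleet.
fleet = [
    {"car_id": "A12", "tire_pressure_psi": 28, "oil_life_pct": 55, "battery_volts": 12.6},
    {"car_id": "B07", "tire_pressure_psi": 33, "oil_life_pct": 12, "battery_volts": 12.4},
]

for reading in fleet:
    flags = maintenance_flags(reading)
    if flags:
        print(f"{reading['car_id']}: schedule service for {', '.join(flags)}")
```

A real system would replace the hard-coded thresholds with learned trends per part, which is what lets IoT platforms order spares before a failure rather than after.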
Recently, English- and Russian-speaking people were attacked with a new ransomware Trojan called Ded Cryptor. It’s voracious, demanding a whopping 2 bitcoins (about $1,300) as ransom. Unfortunately, no decryption solution is available to restore files held hostage by Ded Cryptor.

When a computer is infected with Ded Cryptor, the malware changes the system wallpaper to a picture of an evil-looking Santa Claus. A scary image and a ransom demand — sounds like any other ransomware, right? But Ded Cryptor has a really interesting origin story, kind of a thriller, with good and bad guys battling it out, making mistakes, and facing consequences.

Ransomware for all!

It all started when Utku Sen, a security expert from Turkey, created a piece of ransomware and published the code online. Anybody could download it from GitHub, an open and free Web resource that developers use for collaborating on projects (the code was later removed; you’ll see why in a bit). It was a rather revolutionary idea, making source code freely available to criminals, who would undoubtedly use it to make their own cryptors (and so they did). However, Sen, a white-hat hacker, felt certain that every cybersecurity expert needs to understand how cybercriminals think — and how they code. He believed his unusual approach would help the good guys oppose the bad guys more efficiently.

Sen’s earlier Hidden Tear ransomware project was also part of this experiment. From the very beginning, Sen’s work was meant for education and research. With time, he developed a new type of ransomware that could work offline. Later came EDA2, a more powerful model. EDA2 had better asymmetric encryption than Hidden Tear did. It could also communicate with a full-fledged command-and-control server, and it encrypted the key it transferred there. It also displayed a scary picture to the victim.
EDA2’s source code was also published on GitHub, which brought a lot of attention and criticism to Utku Sen — and not for nothing. With the source code freely available, wannabe cybercriminals who hadn’t even learned to code properly could use Sen’s open-source ransomware to relieve people of their money. Didn’t he understand that? He did: Sen had inserted backdoors in his ransomware that enabled him to retrieve decryption keys. That meant that if he heard about his ransomware being exploited for malicious purposes, he could obtain the command-and-control server’s URL, extract the keys, and give them to the victims. There was a problem, however: to decrypt their files, the victims needed to know about the white-hat hacker and ask him for the keys, and the vast majority of victims had never even heard of Utku Sen.

You made the ransomware, now pay the ransom!

Of course, third-party encryptors created with Hidden Tear and EDA2 source code were not long in coming. Sen dealt with the first one more or less successfully: he published the key and waited for victims to find it. But things did not go so well with the second cryptor.

Magic, ransomware based on EDA2, looked just like the original and promised to be nothing of interest. When Sen was informed about it, he tried to extract the decryption key as he had done before (through the backdoor) — but there was no way in. The cybercriminals using Magic had chosen a free host for their command-and-control server. When the hosting provider received complaints about the malicious activity, it simply deleted the criminals’ account and all of their files. Any chance of recovering the encryption keys disappeared with the data.

The story doesn’t end there. The creators of Magic reached out to Utku Sen, and their conversation developed into a long and public discussion. They began by offering to publish the decryption key if Sen agreed to remove the EDA2 source code from the public domain and pay them 3 bitcoins.
In time, both parties agreed to leave the ransom out of the deal. The negotiations turned out to be rather interesting: readers learned about the hackers’ political motivations — and that they almost published the key when they heard from a man who had lost all the photos of his newborn son to Magic. In the end, Sen removed the EDA2 and Hidden Tear source code from GitHub, but he was too late; many people had already downloaded it. On February 2, 2016, Kaspersky Lab expert Jornt van der Wiel noted in an article on SecureList that there were 24 encryptors based on Hidden Tear and EDA2 in the wild. Since then, the number has only increased.

How Ded Cryptor emerged

Ded Cryptor is one of those descendants. It uses EDA2 source code, but its command-and-control server is hosted in Tor for better security and anonymity. The ransomware communicates with the server over the tor2web service, which lets programs use Tor without a Tor browser. In a way, Ded Cryptor, created from various pieces of open code published on GitHub, recalls Frankenstein’s monster. The creators borrowed the proxy-server code from another GitHub developer, and the code for sending requests was originally written by a third developer. An unusual aspect of the ransomware is that it doesn’t send requests to the server directly; instead, it sets up a proxy server on the infected PC and uses that.

As far as we can tell, the Ded Cryptor developers are Russian speaking. First, the ransom note exists only in English and Russian. Second, Kaspersky Lab senior malware analyst Fedor Sinitsyn analyzed the ransomware code and found the file path C:\Users\sergey\Desktop\доделать\eda2-master\eda2\eda2\bin\Release\Output\TrojanSkan.pdb. (By the way, the aforementioned Magic ransomware was also developed by Russian-speaking people.)

Unfortunately, little is known about how Ded Cryptor spreads. According to the Kaspersky Security Network, the EDA2-based ransomware is active mostly in Russia.
Next come China, Germany, Vietnam, and India.

Also unfortunately, there is no known way to decrypt files maimed by Ded Cryptor. Victims can try to recover the data from shadow copies, if the operating system made them. But the best protection is proactive — it’s much easier to prevent infection than to deal with the consequences. Kaspersky Internet Security detects all Trojans based on Hidden Tear and EDA2 and warns users when it encounters Trojan-Ransom.MSIL.Tear. It also blocks ransomware operations and does not allow them to encrypt files. Kaspersky Total Security does all that plus automates backups, which can be useful in all sorts of cases, from ransomware infection to sudden hard-drive death.
In Part 1 of our series, we talked about how the IoT is already creating a fundamental shift in how consumers will soon think about their relationship with technology, and how Qualcomm is helping drive that future. In Part 2, we explored the four fundamental pillars of the platform-thinking business model that has made technology-centric companies the new kings and queens of business, and outlined how Qualcomm is leveraging those four pillars to empower the world’s most innovative companies to build the technology ecosystems of the future.

Today, let’s start talking about what makes devices and environments “smart” – and how Qualcomm (again) appears to be at the forefront of the next evolutionary leap in IoT, including areas such as machine learning and artificial intelligence. Specifically, let’s talk about where smart computing lives, and why that matters within the context of the IoT’s future.

1 – Smart devices vs. connected devices

Before we get too far into today’s discussion, it is important to note that not every IoT device is (or needs to be) “smart.” Smart devices are a subset of the IoT. Many IoT devices today, such as sensors, are merely connected, as opposed to “smart.” It is also worth noting that significant computing power doesn’t by itself make a device smart. An entire category of connected IoT devices – parking meters, industrial sensors, agriculture monitors, utility meters – may not have been built with the ability to process and analyze data, self-diagnose, self-organize, or self-optimize. Just because a device is connected to the Internet or a wireless network, and technically belongs to the IoT, doesn’t make it “smart.” It can easily be made into a smart device, but it doesn’t have to be.
For the sake of simplicity, what makes a device “smart” can be broken down into three traits:

- The ability not only to collect but also to process, parse, and analyze data (sometimes without being prompted by a user), and to communicate only certain data.
- The ability to autonomously optimize its own performance and/or behavior based on that analysis.
- The ability to learn, autonomously, how to improve its own performance and/or better deliver positive outcomes over time.

Some smart devices can do all three, while some are limited to only one or two. That’s all right. Intelligence comes in many different forms. Broken down a little further, these traits can take the form of complex functions like voice and face recognition, real-time language processing, autonomous driving, network self-organization, and even the sorts of human-machine interactions that blend yesterday’s science fiction with tomorrow’s reality. But they can also be as simple as a robot vacuum cleaner’s ability to recognize and avoid dog poop, or a smart camera that alerts you when there is an intruder at home.

Today, I want to focus on where in physical space this computing power actually lives, versus where many people may think it lives. Why? Because unless you understand this clearly, your investments in IoT technologies, as well as your understanding of which companies bring the most value (not only to the IoT ecosystem as a whole but to yours), won’t be what they need to be.

2 – Edge vs. Cloud: What you need to know about the structure of the IoT

For months now, we have been writing about the Cloud: Cloud services. Big Data and the Cloud. Big Compute and the Cloud. Storage. Applications. The democratization of computing and IT. Deep Learning. Watson. Cloud this, Cloud that, and then some. That won’t stop anytime soon, because the Cloud is one of the core technologies driving IT today, and digital business, and most business and productivity services.
Without the Cloud, there would be no mobile internet, no e-commerce, no app stores, no social media. The world we live in today would not work the way it does. And yes, even machine learning and AI technologies, especially the ones that require massive amounts of computing power and process gargantuan volumes of data, tend to be cloud-based. But not all machine learning and AI is cloud-based. A lot of it is already device- and network-based.

Here’s something to think about: as the IoT continues to grow and become an integral layer of the increasingly interwoven world around us, the lion’s share of the kind of “smart” computing power we just talked about will increasingly not live in the cloud at all. It will live on devices. It will live on gateways and networks. Those “smart” capabilities will increasingly live out here in the physical world, right alongside us. Break open any device and you will see it: this computing power is already embedded in sensors and phones, in cameras and speakers, in thermostats and appliances, in cars and traffic lights, and hidden inside the shelves of your favorite retailers.

This is what’s called the edge (or edge computing, if you want to be a little more formal and old school). The edge is essentially computing power that doesn’t live in the Cloud and doesn’t require the Cloud to function. The edge is what makes objects truly “smart.”

And there are some serious economics involved as well. Sending data to and storing it in the cloud is expensive, especially if you are talking about terabytes of data coming from, say, sensors and cameras on an oil rig. By processing some of that data at the edge and sending back only what is truly important (the guy breaking into the building, for instance), “smart” devices can greatly reduce the amount of data being analyzed at the cloud level.
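The data-reduction economics described above can be sketched with a toy edge filter: the device keeps a rolling summary of recent sensor samples and forwards only readings that deviate sharply from the recent average. The window size, tolerance, and the notion of “upload” are all illustrative assumptions, not any vendor’s actual pipeline.

```python
from collections import deque

class EdgeFilter:
    """Toy edge-side filter: forward only samples far from the rolling mean."""

    def __init__(self, window: int = 20, tolerance: float = 0.25):
        self.recent = deque(maxlen=window)  # rolling window of past samples
        self.tolerance = tolerance          # allowed deviation, as a fraction of the mean

    def process(self, sample: float) -> bool:
        """Return True if the sample is anomalous enough to send upstream."""
        send = False
        if self.recent:
            mean = sum(self.recent) / len(self.recent)
            if abs(sample - mean) > self.tolerance * abs(mean):
                send = True
        self.recent.append(sample)
        return send

edge = EdgeFilter()
stream = [50.0, 51.0, 49.5, 50.2, 80.0, 50.1]  # routine readings with one spike
uploaded = [s for s in stream if edge.process(s)]
print(uploaded)  # only the outlier is forwarded to the cloud
```

Real deployments use far richer models than a rolling mean, but the principle is the same: routine data stays local, and only the interesting events pay the cost of a cloud round trip.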
If your new smartphone is capable of being unlocked by your voice, your face, or your iris, but no one else’s, that’s because it has built-in (on-device) processing capabilities that enable it to do this. It can learn to recognize your voice in a crowded room. It can process natural language, or at the very least learn to recognize and process specific keywords. It knows on its own when to power down to conserve energy, and it can simultaneously set itself to passive listening mode in case you decide to wake it up by talking to it. If it is equipped with an advanced enough chipset (like many of Qualcomm’s high-end mobile SoC platforms), its built-in digital signal processor can isolate your voice in a noisy space, cancel out ambient sound, and digitally enhance your voice so that it sounds crystal clear during calls or recordings. (Handy if you need to take a call or shoot a video in the middle of a noisy stadium, in a car with the windows rolled down, or in a crowded conference hall.)

Other examples: if you own smart speakers, a smart TV, or a smart thermostat; if you own a smart drone that flies itself, follows you around, and snaps pictures when you tell it to; if you own a car that can drive or park itself; if you have interacted with voice-activated devices, including those that talk back, you have experienced some of what edge computing can do. This type of smart functionality is made possible by advanced chipsets and platforms designed to make objects not merely connected but smart.

If we drew a distinction between “smart” objects and “connected” objects earlier, it is because merely being connected to the cloud, the web, or a network is no longer enough. User and consumer expectations have already evolved from being impressed by marginal smart capabilities in connected devices and environments to expecting reliable, consistent, uninterrupted, 24/7 smart capabilities from those same devices and environments.
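The edge-first split implied here — handle common interactions on the device, escalate the rest — can be sketched as a small command router. The command set and the cloud stub below are assumptions for illustration, not a real voice-assistant API.

```python
# A tiny on-device command library: requests it covers work even offline.
LOCAL_COMMANDS = {
    "lights on": "turning lights on",
    "lights off": "turning lights off",
    "set thermostat": "adjusting thermostat",
}

def cloud_lookup(utterance: str) -> str:
    # Stand-in for a round trip to a hypothetical cloud NLU service.
    return f"cloud result for: {utterance}"

def handle(utterance: str) -> tuple[str, str]:
    """Route a request locally when possible; fall back to the cloud."""
    key = utterance.strip().lower()
    if key in LOCAL_COMMANDS:
        return ("edge", LOCAL_COMMANDS[key])   # instant, connectivity-independent
    return ("cloud", cloud_lookup(key))        # needs a network round trip

print(handle("Lights on"))                    # handled on-device
print(handle("what's the weather in Oslo"))   # escalated to the cloud
```

The design choice this illustrates is exactly the load balancing the article describes: a basic library of interactions stays on the device, so core functions keep working when the connection drops or lags.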
Think about it: if your smart speaker or smartphone’s voice detection depends solely on a connection to a cloud-based service, it isn’t going to deliver the kind of reliable performance you need. If your thermostat stops working properly whenever it loses its internet connection, it won’t be all that useful to you. If your smart home loses the ability to turn lights on and off whenever there is an Internet outage, that’s an annoying problem. Even when it comes to natural language processing (think of the sort of interaction you have with Cortana or Siri), lag between your device and the cloud, and from the cloud back to your device, can make a “conversation” with a virtual assistant choppy and untenable. At least some fundamental language processing and machine learning capability has to live on the device in order to balance the load between a basic library of voice-AI interactions and far more complex requests that may require internet searches or complex analysis of an unexpected topic.

The point is that while connectivity to the cloud can enhance both the functionality and the utility of smart objects, the value of smart objects is fundamentally built on their ability to be smart independently of the cloud. That is why edge computing is so important to the IoT, and why IoT chip manufacturers, not cloud-based deep learning and cognitive computing companies, may have a significant edge (no pun intended) in both the development and growth of the IoT.

3 – SoCs, SONs, and the Cloud: Understanding the fundamental hierarchies of machine learning and human-to-machine interactions across the IoT ecosystem

If you aren’t familiar with the SoC acronym yet, now is as good a time as any to change that. SoC stands for System on Chip. The difference between an SoC (System on Chip) and a CPU (Central Processing Unit) is a topic we will get to in another post soon.
What you need to know for the purposes of today’s discussion is that because SoCs bring a lot more functionality to small, portable, mobile electronics than traditional CPUs do, they essentially open the door to a completely different type of computing: more versatile, more compact, more energy-efficient, and far more flexible when it comes to overall connectivity – the so-called heterogeneous computing. SoC technology is at the heart of the IoT, and because of this, leaders in SoC innovation are driving the IoT. SoCs with layered software and security bring a high level of flexibility to device manufacturers as well.

Structurally speaking, this is how you should think about the IoT. [Diagram from the original article not reproduced here.] Yes, that diagram is an oversimplification. It doesn’t clearly show all the ways in which smart and connected devices interact. It doesn’t mention the various connectivity standards, like LTE or Wi-Fi. It doesn’t mention SoC hierarchies like Mobile SoCs, Application SoCs, LTE SoCs, Connectivity SoCs, and Bluetooth SoCs. There’s a lot that this diagram doesn’t show, and we will get to all of that in future posts. For now, we just want you to think about the basic layout of machine-learning real estate across the IoT ecosystem. In its simplest form, what we have are four principal layers / hierarchies:

- Edge computing in devices
- Edge computing in local networks
- Edge computing in wide area networks
- Cloud-based computing

4 – Key takeaways

First, we want you to start thinking about how much of the IoT’s “smart” functionality is bound to be edge-based as opposed to Cloud-based, and what that means for where IoT innovation is most likely to come from in the next two decades. (Companies like Qualcomm – whose SoCs and fundamental connectivity-driven technologies make the IoT possible.)
If you need a few metrics to put this insight into perspective: Qualcomm is now shipping 1 million chips to the IoT industry every single day, and to date, over 1.5 billion IoT devices from hundreds of brands already leverage Qualcomm technologies.

We also want you to think about how the IoT’s smart functionality has been distributed until now: edge devices used to have only basic autonomous computing and analysis capacity, while big compute, big analysis, and deep learning resided in the Cloud layer.

Lastly, we want you to think about what comes next. Specifically, how improvements in SoC technologies and the transition from 4G LTE to 5G will shift much of what we generally expect from the Cloud to the edge in the next few years:

- A transition from today’s basic autonomous computing and analysis capacity in smart devices and smart hyperlocal environments to deep learning and complex decision-making making their way into those same devices and environments.
- An exponential improvement in the utility and speed of the intermediate (network) layer with the introduction and deployment of 5G.
- A decrease in the IoT’s reliance on Cloud-based big compute, big analysis, and deep learning capabilities over time, proportional to devices’ ability to perform compute, analysis, and deep learning on their own.
- A dissemination of the Cloud layer relative to the IoT, as Cloud capabilities migrate to the edge.

If you weren’t thinking about the IoT in this way until now, our job here is done – at least for today. Stay tuned for Part 4. There is a lot more to unpack, and we’re just getting started.

This article was first published on Futurum Research.
Sexually transmitted diseases (STDs) are infectious diseases that spread from person to person through intimate contact. STDs can affect guys and girls of all ages and backgrounds who are having sex — it doesn’t matter if they’re rich or poor.

Unfortunately, STDs (sometimes also called STIs, for “sexually transmitted infections”) have become common among teens. Because teens are more at risk for getting some STDs, it’s important to learn what you can do to protect yourself.

STDs are more than just an embarrassment. They’re a serious health problem. If untreated, some STDs can cause permanent damage, such as infertility (the inability to have a baby) and even death (in the case of HIV/AIDS).

How STDs Spread

One reason STDs spread is that people think they can only be infected if they have sexual intercourse. That’s wrong. A person can get some STDs, like herpes or genital warts, through skin-to-skin contact with an infected area or sore. Another myth about STDs is that you can’t get them if you have oral or anal sex. That’s also wrong, because the viruses or bacteria that cause STDs can enter the body through tiny cuts or tears in the mouth and anus, as well as the genitals.

STDs also spread easily because you can’t tell whether someone has an infection. In fact, some people with STDs don’t even know that they have them. These people are in danger of passing an infection on to their sex partners without even realizing it.

Some of the things that increase a person’s chances of getting an STD are:

- Sexual activity at a young age. The younger a person starts having sex, the greater his or her chances of becoming infected with an STD.
- Lots of sex partners. People who have sexual contact — not just intercourse, but any form of intimate activity — with many different partners are more at risk than those who stay with the same partner.
- Unprotected sex.
Latex condoms are the only form of birth control that reduces your risk of getting an STD, and they must be used every time. Spermicides, diaphragms, and other birth control methods may help prevent pregnancy, but they don’t protect a person against STDs.

How common are STDs?

STDs are common, especially among young people. There are about 20 million new cases of STDs each year in the United States. About half of these infections are in people between the ages of 15 and 24. Young people are at greater risk of getting an STD for several reasons:

- Young women’s bodies are biologically more prone to STDs.
- Some young people do not get the recommended STD tests.
- Many young people are hesitant to talk openly and honestly with a doctor or nurse about their sex lives.
- Not having insurance or transportation can make it more difficult for young people to access STD testing.
- Some young people have more than one sex partner.

What can I do to protect myself?

People who are considering having sex should get regular gynecological or male genital examinations. There are two reasons for this:

- First, these exams give doctors a chance to teach people about STDs and protecting themselves.
- Second, regular exams give doctors more opportunities to check for STDs while they’re still in their earliest, most treatable stage.

In order for these exams and visits to the doctor to be helpful, people need to tell their doctors if they are thinking about having sex or if they have already started having sex. This is true for all types of sex — oral, vaginal, and anal. And let the doctor know if you’ve ever had any type of sexual contact, even if it was in the past.

Don’t let embarrassment at the thought of having an STD keep you from seeking medical attention. Waiting to see a doctor may allow a disease to progress and cause more damage. If you think you may have an STD, or if you have had a partner who may have an STD, you should see a doctor right away.
If you don’t have a doctor or prefer not to see your family doctor, you may be able to find a local clinic in your area where you can get an exam confidentially. Some national and local organizations operate STD hotlines staffed by trained specialists who can answer your questions and provide referrals. Calls to these hotlines are confidential.

Not all infections in the genitals are caused by STDs. Sometimes people can get symptoms that seem very like those of STDs, even though they’ve never had sex. For girls, a yeast infection can easily be confused with an STD. Guys may worry about bumps on the penis that turn out to be pimples or irritated hair follicles. That’s why it’s important to see a doctor if you ever have questions about your sexual health.

- The surest way to protect yourself against STDs is to not have sex. That means not having any vaginal, anal, or oral sex (“abstinence”). There are many things to consider before having sex, and it’s okay to say “no” if you don’t want to have sex.
- If you do decide to have sex, you and your partner should get tested for STDs beforehand. Make sure that you and your partner use a condom from start to finish every time you have oral, anal, or vaginal sex. Know where to get condoms and how to use them correctly. It is not safe to stop using condoms unless you’ve both been tested for STDs, know your results, and are in a mutually monogamous relationship.
- Mutual monogamy means that you and your partner both agree to have sexual contact only with each other. This can help protect against STDs, as long as you’ve both been tested and know you’re STD-free.
- Before you have sex, talk with your partner about how you will prevent STDs and pregnancy. If you think you’re ready to have sex, you need to be ready to protect your body. You should also talk to your partner ahead of time about what you will and will not do sexually. Your partner should always respect your right to say no to anything that doesn’t feel right.
- Make sure you get the health care you need. Ask a doctor or nurse about STD testing and about vaccines against HPV and hepatitis B.
- Girls and young women may have extra needs to protect their reproductive health. Talk to your doctor or nurse about regular cervical cancer screening, and chlamydia and gonorrhea testing. You may also want to discuss unintended pregnancy and birth control.
- Avoid mixing alcohol and/or recreational drugs with sex. If you use alcohol and drugs, you are more likely to take risks, like not using a condom or having sex with someone you normally wouldn’t have sex with.

Can STDs be treated?

Your doctor can prescribe medicine to cure some STDs, like chlamydia and gonorrhea. Other STDs, like herpes, can’t be cured, but you can take medicine to help with the symptoms. If you are ever treated for an STD, be sure to finish all of your medicine, even if you feel better before you finish it all. Ask the doctor or nurse about testing and treatment for your partner, too. You and your partner should avoid having sex until you’ve both been treated. Otherwise, you may continue to pass the STD back and forth. It is possible to get an STD again (after you’ve been treated) if you have sex with someone who has an STD.

What happens if I don’t treat an STD?

Some curable STDs can be dangerous if they aren’t treated. For example, if left untreated, chlamydia and gonorrhea can make it difficult—or even impossible—for a woman to get pregnant. You also increase your chances of getting HIV if you have an untreated STD. Some STDs, like HIV, can be fatal if left untreated.

What if my partner or I have an incurable STD?

Some STDs, like herpes and HIV, aren’t curable, but a doctor can prescribe medicine to treat the symptoms. If you are living with an STD, it’s important to tell your partner before you have sex. Although it may be uncomfortable to talk about your STD, open and honest conversation can help your partner make informed decisions to protect his or her health.

What Is It?
Chlamydia (pronounced: kluh-MID-ee-uh) is a sexually transmitted disease (STD) caused by bacteria called Chlamydia trachomatis. Although you may not be familiar with its name, chlamydia is one of the most common STDs. Because there often aren’t any symptoms, though, lots of people can have chlamydia and not know it.

The bacteria can move from one person to another through vaginal, oral, or anal sex. If someone touches body fluids that contain the bacteria and then touches his or her eye, a chlamydial eye infection (chlamydial conjunctivitis) is possible. Chlamydia also can be passed from a mother to her baby while the baby is being delivered. This can cause pneumonia and conjunctivitis, which can become very serious for the baby if not treated. You can’t catch chlamydia from a towel, doorknob, or toilet seat.

How Does a Girl Know She Has It?

It can be difficult for a girl to know whether she has chlamydia because most girls don’t have any symptoms. Because of this, it’s very important to see a doctor and get tested for chlamydia at least once a year if you are having vaginal, oral, or anal sex. Your doctor can tell you how to test for chlamydia, even if you don’t have any symptoms. Much less often, a girl can have symptoms, such as an unusual vaginal discharge or pain during urination (peeing). Some girls with chlamydia also have pain in their lower abdomens, pain during sexual intercourse, or bleeding between menstrual periods.

How Does a Guy Know He Has It?

It also can be difficult for guys to know if they have chlamydia. Many who do have it will have few or no symptoms, so any guy who is having vaginal, oral, or anal sex should be tested by a doctor at least once a year. When symptoms are there, guys may have a discharge from the tip of the penis (the urethra — where urine comes out), or itching or burning sensations around the penis. Rarely, one of the testicles may become swollen.

When Do Symptoms Appear?
Someone who has chlamydia may see symptoms about a week later. In some people, the symptoms take up to 3 weeks to appear, and many people never develop any symptoms.

What Can Happen?

If left untreated in girls, chlamydia can cause an infection of the urethra (where urine comes out) and inflammation (swelling and soreness caused by the infection) of the cervix. It can also lead to pelvic inflammatory disease (PID), which is an infection of the uterus, ovaries, and/or fallopian tubes. PID can cause infertility and ectopic (tubal) pregnancies later in life. If left untreated in guys, chlamydia can cause swelling and irritation of the urethra and epididymis (the structure attached to the testicle that helps transport sperm).

How Is It Treated?

If you think you may have chlamydia — or if you have had vaginal, oral, or anal sex with a partner who may have chlamydia — you need to see your family doctor, adolescent doctor, or gynecologist. Some local health clinics, such as Planned Parenthood, also can test and treat people for chlamydia. It’s now routine for doctors to check all teens 15 years of age and up for chlamydia, regardless of whether they say they’re having sex — this is to make sure that everyone who needs treatment gets it. Doctors usually diagnose chlamydia by testing a person’s urine.

If you have been exposed to chlamydia or are diagnosed with chlamydia, the doctor will prescribe antibiotics, which should clear up the infection in 7 to 10 days. Anyone with whom you’ve had sex will also need to be tested and treated for chlamydia because that person may be infected but not have any symptoms. This includes any sexual partners in the last 2 months, or your last sexual partner if it has been more than 2 months since your last sexual experience. It’s very important for people diagnosed with chlamydia to abstain from having sex until they and their partner have been treated.
If a sexual partner has chlamydia, quick treatment will reduce his or her risk of complications and will lower your chances of being reinfected if you have sex with that partner again. (You can become infected with chlamydia again even after you have been treated — having chlamydia once does not make you immune to it.) It’s better to prevent chlamydia than to treat it, and the best way to prevent the infection is to abstain from all types of sexual intercourse. If you do have sex, use a latex condom every time. This is the only birth control method that will help prevent chlamydia. What Is It? Genital herpes is caused by a virus called herpes simplex (HSV). There are two different types of herpes virus that cause genital herpes — HSV-1 and HSV-2. Most cases of genital herpes are caused by HSV-2. But a person with HSV-1 (the type of virus that causes cold sores or fever blisters around the mouth) can transmit the virus through oral sex to another person’s genitals. Genital herpes is a sexually transmitted disease (STD). It can cause sores in the genital area and is spread through vaginal, oral, or anal sex — especially during unprotected sex when infected skin touches the vaginal, oral, or anal area. Occasionally, it can cause sores in the mouth, and can be spread through saliva (spit). Because the virus does not live outside the body for long, you cannot catch genital herpes from an object, such as a toilet seat. Symptoms of an Outbreak Someone who has been exposed to the genital herpes virus might not be aware of being infected and might never have an outbreak of sores. However, if a person does have an outbreak, the symptoms can cause a lot of discomfort. Someone with genital herpes may first notice itching or pain, followed by sores that appear a few hours to a few days later. The sores, which may appear on the vagina, penis, scrotum, buttocks, or anus, start out as red bumps that soon turn into red, watery blisters. The sores might make it very painful to urinate (pee).
The sores may open up, ooze fluid, or bleed; during a first herpes outbreak, they can take from a week to several weeks to heal. The entire genital area may feel very tender or painful, and the person may have flu-like symptoms (such as fever; a headache; and tender, swollen lymph nodes in the groin area). If future outbreaks happen, they tend to be less severe and don’t last as long, with sores healing faster. How Long Until Symptoms Appear? Someone who has been exposed to genital herpes will notice genital itching and/or pain about 2 to 20 days after being infected with the virus. The sores usually appear within days afterward. What Can Happen? After the herpes blisters disappear, a person may think the virus has gone away — but it’s actually hiding in the body. Both HSV-1 and HSV-2 can stay hidden away in the body until the next herpes outbreak, when the virus reactivates itself and the sores return, usually in the same area. Over time, the herpes virus can reactivate itself again and again, causing discomfort and episodes of sores each time. The number of future outbreaks can vary (some people might have four or five a year; others might have one or none) and usually lessen over time. At this time there is no cure for herpes; it remains in the body and can be passed to another person with any form of unprotected sex. This is the case even if blisters aren’t present, but more likely if they are. A person can lessen the chance of spreading the infection to someone else by taking an antiviral medicine. This is a medication that must be prescribed by a doctor. Genital herpes also increases a person’s risk of HIV infection because HIV can enter the body more easily whenever there’s a break in the skin (such as a sore) during unprotected sexual contact. If a pregnant woman with genital herpes has an active infection during childbirth, the newborn baby is at risk for getting it. To prevent this, she may have a C-section to avoid passing the infection to the baby. 
Herpes infection in a newborn can cause meningitis (an inflammation of the membranes that surround the brain and spinal cord), seizures, and brain damage. How Is It Prevented? The best way to prevent genital herpes is abstinence. Teens who do have sex must properly use a latex condom every time they have any form of sexual intercourse (vaginal, oral, or anal sex). Girls receiving oral sex should have their partners use dental dams as protection. These sheets of thin latex can be purchased online or from many pharmacies. If one partner has a herpes outbreak, avoid sex — even with a condom or dental dam — until all sores have healed. Herpes can be passed sexually even if a partner has no sores or other signs and symptoms of an outbreak. Finally, one way to lessen this risk is to take antiviral medication even when no sores are present if you know you have genital herpes. How Is It Treated? If you think you may have genital herpes or if you have had a partner who may have genital herpes, see your family doctor, adolescent doctor, gynecologist, or health clinic for a diagnosis. Right now, there is no cure for genital herpes, but a doctor can prescribe antiviral medication to help control recurring HSV-2 and clear up the painful sores. The doctor can also tell you how to keep the sores clean and dry and suggest other methods to ease the discomfort if the virus reappears. Genital Warts (HPV) What Are They? Genital warts are warts that are near or on a person’s genital areas. For a girl, that means on or near the vulva (the outside genital area), vagina, cervix, or anus. For a guy, that means near or on the penis, scrotum, or anus. Warts appear as bumps or growths. They can be flat or raised, single or many, small or large. They tend to be whitish or flesh colored. They are not always easy to see, so people who have genital warts often don’t know they have them. Genital warts are caused by a group of viruses called HPV (short for human papillomavirus). 
There are more than 100 types of HPV. Some of them cause the kind of warts you see on people’s hands and feet. Genital warts and the kinds of warts on hands and feet are usually caused by different types of HPV. More than 40 types of HPV cause genital warts. Genital warts can be passed from person to person through intimate sexual contact (touching someone’s genitals or having vaginal, oral, or anal sex). In some rare cases, genital warts are transmitted from a mother to her baby during childbirth. HPV infections are common in teens and young adults. The more sexual partners someone has, the more likely it is that the person will get an HPV infection. How Do People Know They Have HPV? Most HPV infections have no signs or symptoms. So someone can be infected and pass the disease on to another person without knowing. Some people do get visible warts. Although warts might hurt, itch, or feel uncomfortable, most of the time they don’t. This is one reason why people may not know they have genital warts. Doctors can diagnose warts by examining the skin closely (sometimes with a magnifying glass) and using a special solution to make them easier to see. Tests like Pap smears can help doctors find out if someone has an HPV infection. Experts believe that when a wart is present, the virus may be more contagious. But HPV can still spread even if you can’t see warts. When Do Symptoms Start? Warts can appear any time from several weeks to several months after a person has been exposed to them. Sometimes they might take even longer to appear because the virus can live in the body for a very long time before showing up as warts. 
When to See a Doctor See your doctor or gynecologist, or visit a health clinic, if:
- you are having sex or have had sex in the past or have touched someone’s genitals
- you have a bump or lump “down there”
- you think you might have genital warts
- you have had a partner who might have genital warts
Because many people who are infected with HPV don’t show any symptoms, everyone having sex should get regular medical checkups and tell their doctor about their sexual history. Not all bumps on a person’s genitals are warts. Some can be pimples, other infections, or growths. Turn to your doctor for help — he or she can help figure out what a bump is and what you can do. What Can Happen? If a person doesn’t get treated, genital warts can sometimes grow bigger and multiply. Even if warts go away on their own, the virus is still in the body. That means warts can come back or the virus can spread to other people. How Are They Prevented? The best way to avoid genital warts is not having sex of any kind (abstinence). That means not having vaginal, oral, or anal sex. Preventing HPV infection also means not touching the genitals of someone who is infected with HPV. People who have sex should use a condom every time to protect against STDs. Condoms are a good defense against warts, but they can’t completely protect against them. That’s because the virus can spread from or to the parts of the genitals not covered by a condom. Doctors recommend that girls ages 11 through 26 and guys 11 through 21 get the HPV vaccine. The vaccine protects against some types of HPV that cause genital warts and certain types of cancer. How Are Warts Treated? There is no cure that gets rid of the human papillomavirus completely. But treatments can reduce the number of warts — or help them go away faster. When the warts go away, the virus is still there. It could still spread to someone else. A doctor will do an examination, make a diagnosis, and then provide treatment, if necessary.
A number of different treatments might be used depending on where the warts are, how big they are, and how many there are. The doctor might put special medications on the warts or remove them with treatments like laser therapy or chemical “freezing.” Sometimes warts can come back, so you might need to visit the doctor again. Anyone you’ve had sex with also should be checked for genital warts. What Is Gonorrhea? Gonorrhea (pronounced: gah-nuh-REE-uh) is a sexually transmitted disease (STD) caused by bacteria called Neisseria gonorrhoeae. The bacteria can be passed from one person to another through vaginal, oral, or anal sex, even when the person who is infected has no symptoms. Gonorrhea also can be passed from a mother to her baby during birth. You can’t catch it from a towel, a doorknob, or a toilet seat. What Are the Signs of Gonorrhea in Girls? A girl who has gonorrhea may have no symptoms at all or her symptoms may be so mild that she doesn’t notice them until they become more severe. In some cases, girls will feel a burning sensation when they pee, or they will have a yellow-green vaginal discharge. Girls also may have vaginal bleeding between menstrual periods. If the infection spreads and moves into the uterus or fallopian tubes, it may cause an infection called pelvic inflammatory disease (PID). PID can cause abdominal pain, fever, and pain during sex, as well as the symptoms above. What Are the Signs of Gonorrhea in Guys? Guys who have gonorrhea are much more likely to notice symptoms, although a guy can have gonorrhea and not know it. Guys often feel a burning sensation when they pee, and yellowish-white discharge may ooze out of the urethra (at the tip of the penis). How Long Until There Are Symptoms? Symptoms usually start 2 to 7 days after a person is exposed to gonorrhea. In girls, they might start even later. What Problems Can Happen? 
Gonorrhea can be very dangerous if it’s not treated, even in someone who has mild or no symptoms:
- In girls, the infection can move into the uterus, fallopian tubes, and ovaries (causing PID) and can lead to scarring and infertility (the inability to have a baby). Gonorrhea infection during pregnancy can cause problems for the newborn baby, including meningitis (an inflammation of the membranes around the brain and spinal cord) and an eye infection that can result in blindness if it is not treated.
- In guys, gonorrhea can spread to the epididymis (the structure attached to the testicle that helps transport sperm), causing pain and swelling in the testicular area. This can create scar tissue that might make a guy infertile.
In both guys and girls, untreated gonorrhea can affect other organs and parts of the body, including the throat, eyes, heart, brain, skin, and joints, although this is less common. How Is Gonorrhea Diagnosed? Doctors now test teens 15 and older for STDs as part of annual checkups, regardless of whether the teens disclose they are having oral, anal, or vaginal sex. This is to make sure that everyone who needs treatment gets it. All teens who are having oral, vaginal, or anal sex should get tested at least once a year for gonorrhea. If you think you may have gonorrhea or if you have had a partner who may have gonorrhea, you need to see your doctor or gynecologist. He or she will do an exam that may include checking a urine (pee) sample. In some cases, testing may require swabbing the opening of the penis, the vagina, or the cervix for discharge. Talk to your doctor about which test is best for you. The doctor also may test for other STDs, such as HIV, syphilis, and chlamydia. Let the doctor know the best way to reach you confidentially with any test results. How Is Gonorrhea Treated? If you have gonorrhea, your doctor will prescribe antibiotics to treat the infection. Any sexual partners should also be tested and treated for gonorrhea immediately.
This includes any partners in the last 2 months, or your last sexual partner if it has been more than 2 months since you last had sex. If a sexual partner has gonorrhea, quick treatment will reduce the risk of complications for that person and will lower your chances of being reinfected if you have sex with that partner again. (You can become infected with gonorrhea again even after treatment — having it once doesn’t make you immune to it.) Don’t have sex for at least 7 days after you and your partner have both finished taking your antibiotics. If you have sex earlier than that, you could be reinfected. Can Gonorrhea Be Prevented? It’s better to prevent gonorrhea than to treat it, and the best way to completely prevent the infection is to not have sex (oral, vaginal, or anal). If you do have sex, use a latex condom every time. This is the only birth control method that will help prevent gonorrhea. What Is Hepatitis B? Hepatitis B is an infection of the liver caused by the hepatitis B virus (HBV). In some people, HBV stays in the body, causing chronic disease and long-term liver problems. How Do People Get Hepatitis B? Most commonly, HBV spreads through:
- sexual activity with an HBV-infected person
- shared contaminated needles or syringes for injecting drugs
- transmission from HBV-infected mothers to their newborn babies
Who Is at Risk for Hepatitis B? In the United States, the most common way people get infected with HBV is through unprotected sex with someone who has the disease. People who share needles also are at risk of becoming infected because it’s likely that the needles they use will not have been sterilized. What Is Chronic Hepatitis B? Doctors refer to hepatitis B infections as either acute or chronic:
- An acute HBV infection is a short-term illness that clears within 6 months of when a person is exposed to the virus.
- A person who still has HBV after 6 months is said to have a chronic hepatitis B infection.
This is a long-term illness, meaning the virus stays in the body and can cause lifelong health problems. An estimated 850,000 to more than 2 million people in the U.S. have chronic HBV. The younger someone is when infected, the greater the chances for chronic hepatitis B. What Are the Signs & Symptoms of HBV Infection? HBV can cause a wide range of symptoms, from a mild illness and general feeling of being unwell to more serious chronic liver disease that can lead to liver cancer. Someone with hepatitis B may have symptoms similar to those caused by other viral infections, like the flu. The person might:
- be extra tired
- feel like throwing up or actually throw up
- not feel like eating
- have a mild fever
HBV also can cause darker than usual urine (pee), jaundice (when the skin and whites of the eyes look yellow), and abdominal (belly) pain. Someone who has been exposed to hepatitis B may start to have symptoms from 1 to 6 months later. Symptoms can last for weeks to months. In some people, hepatitis B causes few or no symptoms. But even someone who doesn’t have any symptoms can still spread the disease to others. What Problems Can Hepatitis B Cause? Hepatitis B (also called serum hepatitis) is a serious infection. It can lead to cirrhosis (permanent scarring) of the liver, liver failure, or liver cancer, which can cause severe illness and even death. If a pregnant woman has the hepatitis B virus, her baby has a very high chance of having it unless the baby gets a special immune injection and the first dose of hepatitis B vaccine at birth. Sometimes, HBV doesn’t cause symptoms until a person has had the infection for a while. At that stage, the person already might have more serious complications, such as liver damage. How Is Hepatitis B Diagnosed? If you think you may have hepatitis B or you might have been exposed to the virus through sex or drug use, see your doctor or gynecologist to get tested (they can test you for other infections as well).
The blood test also can tell whether someone has an acute infection or a chronic infection. Let the doctor know the best way to reach you confidentially with test results. How Is Hepatitis B Treated? There’s no cure for HBV. Doctors will advise someone with a hepatitis B infection on how to manage symptoms — like getting plenty of rest or drinking fluids. A person who is too sick to eat or drink will need treatment in a hospital. In most cases, teens who get hepatitis B recover and may develop a natural immunity to future hepatitis B infections. Most feel better within 6 months. Health care providers will keep a close eye on patients who develop chronic hepatitis B. What Happens After a Hepatitis B Infection? Some people carry the virus in their bodies and are contagious for the rest of their lives. They should not drink alcohol, and should check with their doctor before taking any medicines (prescription, over the counter, or supplements) to make sure these won’t cause further liver damage. Anyone who has ever tested positive for hepatitis B cannot be a blood donor. Can Hepatitis B Be Prevented? Yes. Newborn babies in the United States now routinely get the hepatitis B vaccine as a series of three shots over a 6-month period. There’s been a big drop in the number of cases of hepatitis B over the past 25 years thanks to immunization. Doctors also recommend “catch-up” vaccination for all kids and teens younger than 19 years old who didn’t get the vaccine as babies or didn’t get all three doses. Anyone who is at risk for hepatitis B (including health care and public safety workers, people with chronic liver disease, people who inject drugs, and others) also should be vaccinated. If someone who hasn’t been vaccinated is exposed to HBV, doctors may give the vaccine and/or a shot of immune globulin containing antibodies against the virus to try to prevent the person from becoming infected. 
That’s why it’s very important to see a doctor immediately after any possible exposure to the virus. To prevent transmission of hepatitis B through infected blood and other body fluids, teens should:
- abstain from sex (oral, vaginal, or anal)
- if sexually active, always use latex condoms when having sex (oral, vaginal, or anal)
- avoid contact with an infected person’s blood
- not use intravenous drugs or share needles or other drug tools
- not share things like toothbrushes or razors
- research tattoo and piercing places carefully to be sure they don’t reuse needles without properly sterilizing them
HIV and AIDS What Is It? Infection with the human immunodeficiency virus (HIV) is one of the most serious, deadly diseases in human history. HIV causes a condition called acquired immunodeficiency syndrome — better known as AIDS. HIV destroys a type of defense cell in the body called a CD4 helper lymphocyte (pronounced: LIM-foe-site). These lymphocytes are part of the body’s immune system, the defense system that fights infections. When HIV destroys these lymphocytes, the immune system becomes weak and people can get serious infections that they normally wouldn’t. As the medical community has learned more about how HIV works, researchers have been able to develop medications to inhibit it (meaning they interfere with its growth). These medicines have been successful in slowing the progress of the disease. If people with HIV get treated, they can live long, relatively healthy lives — just as people who have other chronic diseases like diabetes can. But, as with diabetes or asthma, there is still no cure for HIV and AIDS. How Do People Get It? Thousands of U.S. teens and young adults get infected with HIV each year. HIV can be transmitted from an infected person to another person through body fluids like blood, semen, vaginal fluids, and breast milk.
The virus is spread through things like:
- having unprotected oral, vaginal, or anal sex (“unprotected” means not using a condom)
- sharing needles, such as needles used to inject drugs, steroids, and other substances, or sharing needles used for tattooing
Other risk factors:
- People who have another sexually transmitted disease (STD) (such as syphilis, genital herpes, chlamydia, gonorrhea, or bacterial vaginosis) are at greater risk for getting HIV during sex with infected partners.
- If a woman with HIV is pregnant, her newborn baby can catch the virus from her before birth, during the birthing process, or from breastfeeding. When doctors know a mom-to-be has HIV, they can do things to try to stop the virus from spreading to the baby. That’s why all pregnant women should be tested for HIV so they can begin treatment if necessary.
How Does HIV Affect the Body? A healthy body has CD4 helper lymphocyte cells (CD4 cells). These cells help the immune system function normally and fight off certain kinds of infections. They do this by acting as messengers to other types of immune system cells, telling them to become active and fight against an invading germ. HIV attaches to these CD4 cells. The virus then infects the cells and uses them as a place to multiply. In doing so, the virus destroys the ability of the infected cells to do their job in the immune system. The body then loses the ability to fight many infections. When a person with HIV has an extremely low number of CD4 cells or certain rare infections, doctors call this stage of disease AIDS. People who have AIDS are unable to fight off many infections because their immune systems are weakened. They are more likely to get infections like tuberculosis and some rare infections of the lungs (such as certain types of pneumonia), infections of the surface covering of the brain (meningitis), or the brain itself (encephalitis).
People who have AIDS tend to keep getting sicker, especially if they are not taking antiviral medications properly. AIDS can affect every body system. The immune defect caused by having too few CD4 cells also permits some cancers that are stimulated by viral illness to happen — some people with AIDS get forms of lymphoma and a rare tumor of blood vessels in the skin called Kaposi’s sarcoma. Because AIDS is fatal, it’s important that doctors detect HIV infection as early as possible so a person can take medicine to delay the onset of AIDS. How Do People Know They Have HIV? How long it takes for symptoms of HIV/AIDS to appear varies from person to person. Some people may feel and look healthy for years while they are infected with HIV. It is still possible to infect others with HIV, even if the person with the virus has absolutely no symptoms. You cannot tell simply by looking at someone whether he or she is infected. When a person’s immune system is overwhelmed by AIDS, he or she might notice:
- extreme weakness or fatigue
- rapid weight loss
- frequent fevers that last for several weeks with no explanation
- heavy sweating at night
- swollen lymph glands
- minor infections that cause skin rashes and mouth, genital, and anal sores
- white spots in the mouth or throat
- chronic diarrhea
- a cough that won’t go away
- trouble remembering things
- in girls, severe vaginal yeast infections that don’t respond to usual treatment
How Can It Be Prevented? One of the reasons that HIV is so dangerous is that a person can have the virus for a long time without knowing it. So making smart choices about sex and not using drugs is the best way to avoid HIV/AIDS. HIV transmission can be prevented by:
- not having oral, vaginal, or anal sex (abstinence)
- always using latex condoms when having oral, anal, or vaginal sex
- avoiding contact with the body fluids through which HIV is transmitted
- never sharing needles
How Do Doctors Test for and Treat HIV?
Doctors now recommend that all people have at least one HIV test by the time they are teens. If you are having sex, have had sex in the past, or shared needles with someone else, your doctor will probably recommend that you get tested at least once a year. If you have questions about HIV and want to get tested, you can talk to your family doctor, pediatrician, adolescent doctor, or gynecologist. People also can get tested for HIV/AIDS at pretty much any clinic or hospital in the country. Clinics offer both anonymous testing (meaning the clinic doesn’t know a person’s name) and confidential testing (meaning they know who a person is but keep it private). Most clinics will ask you to follow up for counseling to get your results, whether the test is negative or positive. The HIV test can be either a blood test or a swab of the inside of your cheek. Depending on what type of test is done, results may take from a few minutes to several days. Let the doctor know the best way to reach you confidentially with any test results. If you had unprotected sex with someone you know has HIV or if you were raped or forced to have sex by someone, see your doctor or go to the emergency room right away. They might be able to give you medications to prevent HIV infection (within 72 hours), and do the appropriate follow-up testing. If you’re not sure how to find a doctor or get an HIV test, you can contact the National AIDS Hotlines at 800-448-0440 (Monday-Friday 1pm-4pm EST). A specialist there will explain what you should do next. There is no cure for HIV. That’s why prevention is so important. Combinations of antiviral drugs and drugs that boost the immune system have allowed many people with HIV to resist infections, stay healthy, and prolong their lives, but these medications are not a cure. Right now there is no vaccine to prevent HIV and AIDS, although researchers are working on developing one. 
Pelvic Inflammatory Disease (PID) Pelvic inflammatory disease (PID) is an infection of the fallopian tubes, uterus, or ovaries. Most girls with PID develop it after getting a sexually transmitted disease (STD), such as chlamydia or gonorrhea. Girls who have sex with different partners or don’t use condoms are most likely to get STDs and be at risk for PID. If PID is not treated, it can lead to internal scarring that might cause ongoing pelvic pain, infertility, or an ectopic pregnancy. What Are the Symptoms of PID? PID can cause severe symptoms or very mild to no symptoms. Girls who do have symptoms may notice:
- pain and tenderness in the lower abdomen
- bad-smelling or abnormally colored discharge
- pain during sex
- spotting (small amounts of bleeding) between periods
- chills or fever
- nausea, vomiting, or diarrhea
- loss of appetite
- backache and perhaps even difficulty walking
- pain while peeing or peeing more often than usual
- pain in the upper abdomen on the right
What Can Happen? Any girl who has signs of an STD should get medical care as soon as possible. An untreated STD is more likely to develop into PID. If PID is not treated or goes unrecognized, it can continue to spread through a girl’s reproductive organs. Untreated PID may lead to long-term reproductive problems, including:
- Scarring in the ovaries, fallopian tubes, and uterus. Widespread scarring may lead to infertility (the inability to have a baby) and chronic pelvic pain. A teen girl or woman who has had PID multiple times has more of a chance of being infertile.
- Ectopic pregnancy. If a girl who has had PID does get pregnant, scarring of the fallopian tubes may cause the fertilized egg to implant in one of the fallopian tubes rather than in the uterus. The fetus would then begin to develop in the tube, where there is no room for it to keep growing. This is called an ectopic pregnancy.
An untreated ectopic pregnancy could cause the fallopian tube to burst suddenly, which might lead to life-threatening bleeding.
- Tubo-ovarian abscess (TOA). A TOA is a collection of bacteria, pus, and fluid in the ovary and fallopian tube. Someone with a TOA often looks sick and has a fever and pain that makes it difficult to walk. The abscess will be treated in the hospital with antibiotics, and surgery may be needed to remove it.
How Is PID Diagnosed and Treated? If you think you may have PID, see your gynecological health care provider (your family doctor or nurse practitioner, gynecologist, or adolescent doctor) right away. The longer a girl waits before getting treatment, the more likely it is that she will have problems. If a doctor thinks a girl has PID, he or she will do a physical exam, including a pelvic exam. The exam can show if a girl has a painful cervix, abnormal discharge from the cervix, or pain over one or both ovaries. The doctor may also take swabs of fluid from the cervix and vagina, and this fluid will be tested for STDs. He or she may also do a pregnancy test. Sometimes health providers take blood or do urine tests to look for signs of infection, including STDs like chlamydia and gonorrhea. Sometimes doctors need an ultrasound or CAT scan of the lower abdomen to see what’s going on with a girl’s reproductive organs. Ultrasounds are often used to diagnose a TOA or ectopic pregnancy. If doctors find that a girl has PID, they will prescribe antibiotics to take for a couple of weeks. It’s vital to take every dose of the medicine to completely treat the infection, even if a girl’s symptoms go away before she finishes the medicine. It’s also important that girls with PID get rechecked 2–3 days after beginning treatment to make sure that they are improving. A girl who has taken all her medicine for PID but still isn’t feeling better should follow up with her doctor.
Girls with more severe cases of PID might have a fever or vomiting, and not respond to medicines by mouth. They, and girls with PID who are pregnant, often are treated in the hospital for a few days with antibiotics given directly into a vein through an IV. Surgery is sometimes needed if a girl has an abscess. Ectopic pregnancies can require emergency surgery. If a girl has PID, her sexual partners should be checked for STDs right away so they can get treatment. And, a couple should hold off on having sex again until at least 7 days after both partners have finished treatment. An untreated partner is likely to reinfect a girl with the same STD again. Can PID Be Prevented? The best way to prevent STDs or PID is to not have sex (abstinence). For those who choose to have sex, it’s important to use protection and to have as few sexual partners as possible. Using latex condoms properly and every time you have sex helps protect against most STDs. However, it’s also very important to have regular checkups with your doctor. And if either partner has any symptoms of STDs, both partners should be tested and treated as soon as possible. So when you’re making choices about sex, be smart and be safe. Pubic Lice (Crabs) What Are They? Pubic lice are tiny insects that can crawl from the pubic hair of one person to the pubic hair of another person during sexual contact. People also can catch pubic lice from infested clothing, towels, and bedding. Once they are on a person’s body, the insects live by sucking blood from their host. Pubic lice are sometimes called “crabs” because when seen under a microscope they look like tiny crabs. What Are the Symptoms? Pubic lice can cause intense itching. A person who has been exposed to pubic lice may notice tiny tan to grayish-white insects crawling in their pubic hair. He or she may also see tiny oval-shaped, yellow to white blobs called nits clinging to the hair. Nits are about the size of a pinhead, and are the louse eggs. 
Nits can’t be easily removed from the hair with the fingers — “nit combs” made especially to remove the eggs are sold at drugstores and many grocery stores. Someone who has been exposed to pubic lice may not notice symptoms for a few weeks. The primary symptom of pubic lice is itching, especially at night, but lice can also leave bluish-grayish marks on the thighs and pubic area from bites.

What Can Happen?
It’s unusual for pubic lice to create any serious health problems, but the itching can be very uncomfortable, and it’s easy to transmit pubic lice to others. The female louse survives an average of 25 to 30 days and each can lay 20 to 30 eggs. Lice can also live away from the body for 1 to 2 days. So it’s important to get properly diagnosed and treated, or it can take a while to get rid of them.

How Is It Treated?
If you think you may have pubic lice or if you have had a partner who may have pubic lice, see a doctor or gynecologist right away. If the doctor diagnoses pubic lice, you may be prescribed medication or told to buy an over-the-counter medicine that kills the lice and their eggs. The important thing to remember is that the treatment you use may need to be repeated after 7 to 10 days to kill any lice you didn’t get the first time. And anyone who is treated for pubic lice should be tested for other STDs as well. You will also need to dry clean or use very hot water and a hot dryer cycle to wash and dry all your bedding, towels, and recently worn clothing to properly kill the lice and their eggs. Anyone with whom you’ve had sexual contact (oral, anal, or vaginal) in the last month should check for pubic lice immediately. Properly using condoms every time you have sex is always important. But while condoms help protect against other STDs, they do not cover the entire pubic area, so someone who has pubic lice can still pass them to a partner.

What Is It?
Syphilis (pronounced: SIFF-ill-iss) is a sexually transmitted disease (STD) caused by a type of bacteria known as a spirochete (through a microscope, it looks like a corkscrew or spiral). It is extremely small and can live almost anywhere in the body. The spirochetes that cause syphilis can be passed from one person to another through direct contact with a syphilis sore during sexual intercourse (vaginal, anal, or oral sex). A person also can get syphilis by kissing or touching someone who has sores on the breasts, or on or inside the mouth or genitals. A mother can pass the infection to her baby during pregnancy. You cannot catch syphilis from a towel, doorknob, or toilet seat. In the 1990s, there was a decrease in the number of people infected with syphilis. More recently, though, there has been a steady increase in reported cases of syphilis, especially in young adults and in men who have male sexual partners. In its early stages, syphilis is easily treatable. But if left untreated, it can cause serious problems — even death. So it’s important to understand as much as you can about this disease.

What Are the Symptoms?
Syphilis happens in several different stages:
In the first stage of syphilis, red, firm, painless and sometimes wet sores appear on the vagina, rectum, penis, or mouth. There is often just one sore, but there may be several. This type of sore is called a chancre (pronounced: SHANG-ker). Chancres appear on the part of the body where the spirochetes moved from one person to another. Someone with syphilis also may have swollen glands during this first stage. After a few weeks, the chancre will disappear, but the disease doesn’t go away. In fact, if the infection hasn’t been treated, the disease will get worse. Syphilis is highly contagious during this first stage.
Unfortunately, it can be easy to miss because the chancres are painless and can appear in areas that may not be easy to see, like in the mouth, under the foreskin, on the cervix, or on the anus. This means that people may not know that they are infected, and can pass the disease to others without realizing it. If syphilis hasn’t been treated yet, the person will often break out in a rash (including on the soles of the feet and palms of the hands). The infected person might get flu-like symptoms, such as fever and achiness. This can happen weeks to months after the chancre first appears. Sometimes the rashes can be very faint or look like rashes from other infections and, therefore, may be ignored or not even noticed. Sores sometimes appear on the lips, mouth, throat, vagina, and anus — but many people with secondary syphilis don’t have sores at all. The symptoms of this secondary stage will go away with or without treatment. But if the infection hasn’t been treated, the disease can continue to get worse. Syphilis is still contagious during the secondary stage. If syphilis still hasn’t been treated yet, the person will have a period of the illness called latent (hidden) syphilis. This means that all the signs of the disease go away, but the disease is still very much there. Even though the disease is “hiding,” the spirochetes are still in the body. Syphilis can remain latent for many years. If the disease still hasn’t been treated at this point, some develop tertiary (or late-stage) syphilis. This means the spirochetes have spread all over the body and can affect the brain, the eyes, the heart, the spinal cord, and bones. Symptoms of late syphilis can include difficulty walking, numbness, gradual blindness, and possibly even death.

How Long Until Symptoms Appear?
A person who has been exposed to the spirochetes that cause syphilis may notice a chancre from 10 days to 3 months later, though the average is 3 weeks.
If it’s not treated, the second stage of the disease may happen anywhere from about 2 to 10 weeks after the original sore (chancre). Remember, many people never notice any symptoms of syphilis. This means it is important to let your doctor know that you are having sex, so that he or she can test you for syphilis even if you don’t have any symptoms.

What Can Happen?
Syphilis can be very dangerous if left untreated. In both guys and girls, the spirochetes can spread throughout the whole body, infecting major organs. Brain damage and other serious health problems can happen, many of which can’t be treated. A woman who is pregnant and hasn’t been effectively treated is at great risk of putting her baby in danger. Untreated syphilis also can cause major birth defects. Syphilis also increases the risk of HIV infection because HIV can enter the body more easily when there’s a sore present.

How Is It Treated?
If you think you may have syphilis or if you have had sexual contact with someone who might have syphilis, see your doctor or gynecologist right away. It can sometimes be difficult to spot chancres. So it’s important to get checked on a regular basis, especially if you have had unprotected sex and/or more than one sex partner. Depending on the stage, the doctor can make a diagnosis by examining the discharge from chancres under a special microscope or by doing a blood test to look for signs of infection. Let the doctor know the best way to reach you confidentially with any test results. Early stages of syphilis are easily cured with antibiotics. Someone who has been infected for a while will need treatment for a longer period of time. Unfortunately, damage to the body from the late stage of syphilis cannot be cured. However, even in the late stage, it is important to get treatment to prevent further damage. Anyone with whom you’ve had sex also should be checked for syphilis immediately.

How Is Syphilis Prevented?
The best way to prevent any STD is to not have sex.
However, for people who decide to have sex, it’s important to use protection and to have as few sexual partners as possible. Latex condoms are effective against most STDs; however, if there are any sores or rashes, avoid sex until the person has seen a doctor for treatment.

What Is It?
Trichomoniasis is one of the most common sexually transmitted diseases (STDs). The germ that causes trichomoniasis can be passed from one person to another during sexual intercourse. The good news is that trichomoniasis can be prevented and is curable.

How Does a Girl Know She Has It?
A girl with trichomoniasis can get vaginitis, which is the medical term for irritation of the vagina. A girl who has trichomoniasis may have vaginal discharge that can be gray, yellow, or green, and may be foamy. This discharge may have a foul odor, and a girl’s vagina may feel very itchy. A girl with trichomoniasis may find it very painful to urinate (pee). Trichomoniasis also can cause an achy abdomen and pain or bleeding during sexual intercourse. Some girls do not have any symptoms.

How Does a Guy Know He Has It?
In most cases, guys won’t notice any symptoms. However, a guy who has trichomoniasis may notice some temporary irritation inside his penis or a mild burning feeling when he pees or after sex.

When Do Symptoms Appear?
Symptoms usually appear 5 to 28 days after a person has been exposed.

What Can Happen?
Untreated trichomoniasis can turn into an infection of the urethra or bladder. Trichomoniasis can make someone more susceptible to getting HIV. In pregnant women, trichomoniasis can cause the baby to be born early or to be born with a low birth weight. If a patient has trichomoniasis, a doctor usually will also test for other STDs like gonorrhea and chlamydia because these STDs sometimes happen together.

How Is It Treated?
If you think you may have trichomoniasis or if you have had a partner who may have trichomoniasis, you need to see your family doctor, adolescent doctor, or gynecologist.
He or she will do an exam and swab the vagina or penis for secretions, which will then be tested. Doctors usually prescribe antibiotics for people who are diagnosed with trichomoniasis. Sexual partners should be treated at the same time, and people being treated should not have sex until they have finished their treatment and no longer have symptoms. It’s better to prevent trichomoniasis than to treat it, of course. The only way to completely prevent infection is to not have oral, anal, or vaginal sex (this is called abstinence). People who choose to have sex should use a latex condom every time, and limit their number of sexual partners. Condoms are the only birth control method that will help prevent trichomoniasis.
TikTok Content Could Be Vulnerable to Tampering: Researchers
Video-Sharing Service Does Not Always Use TLS/SSL Encryption

TikTok, a Chinese video-sharing social networking service, has been delivering video and other media without TLS/SSL encryption, which means it may be possible for someone to tamper with the content, researchers Talal Haj Bakry and Tommy Mysk say. That could be especially damaging with a service as globally popular as TikTok and in the current pandemic environment, where misinformation and confusion abound. A demonstration video shows how it would be possible to make it appear that misleading content, including messages such as “Washing hands too often causes skin cancer” and “Covid19 is a hoax,” came from widely followed accounts.

HTTPS: Not Quite Everywhere
There have been many efforts over the last years to get the world’s apps and websites to use TLS, including projects such as the Electronic Frontier Foundation’s HTTPS Everywhere and Let’s Encrypt, which offers free digital certificates. Use of TLS is indicated by HTTPS in the URL window, showing that content is encrypted while in transit. An absence of TLS means that attackers could modify content while in transit, known as a man-in-the-middle attack, or observe what content someone is requesting. Mysk tells Information Security Media Group that he and Bakry did not notify TikTok before releasing their findings. That’s because they did not discover a vulnerability in the usual sense but rather a questionable design decision. The decision may have been made for performance reasons. “The way TikTok uses HTTP is clearly by design and not by mistake,” says Mysk, a computer scientist who focuses on software for the automobile industry.
“This is why we decided to address the public and raise awareness.” In a statement, TikTok says that it “prioritizes user data security and already uses HTTPS across several regions, as we work to phase it in across all of the markets where we operate.” It would appear that TikTok is in a position to easily flick on HTTPS. Mysk says TikTok’s website transmits all content over HTTPS. That’s perhaps because a browser will display a warning if there’s no HTTPS, a measure put in place by browser makers to encourage its use. Mysk says he was able to find an HTTPS URL for every HTTP URL that’s used when someone is on a mobile device. He says TikTok may intentionally use HTTP connections on mobile for some reason. TikTok does use TLS for some network traffic, writes Paul Ducklin, principal research scientist at Sophos, in a blog post. But much of the content that comes back from its CDN, including profile photos, videos and still frames from the videos, is not encrypted, he writes. Ducklin, who used Wireshark to look at the traffic, writes that he was able to replicate the findings on Android version 15.5.44. Mysk and Bakry used Android version 15.7.4 and iOS version 15.5.6. There would be multiple opportunities to tamper with traffic. Some methods depend upon hijacking DNS services on a victim’s network or tampering with a router in between a victim and TikTok’s CDN. Mysk and Bakry set up a server that mimicked TikTok’s CDN and loaded it with misleading content. They then directed TikTok’s app to the fake server by modifying a DNS entry to map one of its CDN domain names to the IP address of the bogus server. “To make it simple, we only built a scenario that swaps videos,” Mysk and Bakry write. “We kept profile photos intact, although they can be similarly altered. We only mimicked the behavior of one video server.
This shows a nice mix of fake and real videos and gives users a sense of credibility.” Any entity that sits in between a user and TikTok could have opportunities to do the same kind of trick, they write. That would include free Wi-Fi hotspots, ISPs, VPN providers and government or intelligence agencies. The researchers offer many timely examples of how tampering could be harmful. For example, another fake message they created was: “Staying home is the main cause of claustrophobia.” Some of their examples showed how harmful messages could be planted to appear as if posted by the World Health Organization, the British Red Cross and American Red Cross. “TikTok, a social networking giant with around 800 million monthly active users, must adhere to industry standards in terms of data privacy and protection,” they write.
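A first pass at spotting this kind of weakness can be sketched in a few lines of Python. The endpoint URLs below are hypothetical stand-ins, not TikTok’s real CDN hosts; the snippet simply flags any URL whose traffic would travel unencrypted and suggests the HTTPS equivalent that, per the researchers, usually exists:

```python
from urllib.parse import urlparse, urlunparse

def audit_urls(urls):
    """Flag plain-HTTP endpoints and suggest their HTTPS equivalents.

    Returns (original_url, suggested_url) pairs for every URL whose
    traffic would travel unencrypted and so could be tampered with
    in transit by a man-in-the-middle.
    """
    findings = []
    for url in urls:
        parts = urlparse(url)
        if parts.scheme == "http":
            findings.append((url, urlunparse(parts._replace(scheme="https"))))
    return findings

# Hypothetical endpoints, loosely modeled on the mixed traffic described above
endpoints = [
    "http://cdn.example-video.com/v1/clip.mp4",   # unencrypted: tamperable
    "https://api.example-video.com/v1/feed",      # encrypted: OK
]
print(audit_urls(endpoints))
```

A real audit would of course capture live traffic (as Ducklin did with Wireshark) rather than start from a URL list, but the decision rule is the same: any `http://` scheme means the response can be rewritten on the wire.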
The concept of “The Enemy at the Gates” has been used to describe the security threats facing organizations around the world. The problem with this analogy is that the threats are no longer limited to just the “gates”. Today’s borders are fluid because our endpoints live everywhere. In the traditional model, we focused on (and were limited to) protecting what’s inside our offices. With people working from anywhere and data everywhere, we can no longer depend on a single gate of firewalls to protect our entire environment. Let’s look back at a medieval concept, and see how we can apply it to the modern threats we face today, creating a layered defense able to fortify the modern day castle. According to Ponemon’s The State of Cybersecurity in Local, State and Federal Government, in the last ten years, there have been over 10,000 breaches in the United States resulting in around $1.6 trillion in losses. Sadly, California has been hit more than anybody else. In addition, the report highlights how ill-prepared states are when it comes to preventing and recovering from attack. The report also charts the difference in preparedness between federal and state respondents in four key areas. So how do state and local governments improve their preparedness in a world where attacks are coming from multiple directions and locations? We believe the answer is what we call Defense in Depth. Defense in Depth takes into account your information security: how you manage it and how you protect it. But it also applies to your infrastructure: everything from your data centers to your literal end points – the PCs and Macs and mobile devices that are out in the field. Organizations need to account for both to understand how best to apply security broadly. Defense in Depth also incorporates an offensive and defensive approach.
Not only should you be able to recover quickly from a failure, but also have the tools, capabilities and visibility to know an attack is happening early and prevent it, block it or stop it before it takes hold and causes damage. In medieval castles, perimeters would provide a first layer of defense against an enemy entering through the gates. Next would be a moat and a drawbridge. Then archers on the ramparts shooting oncoming attackers. Finally, there would be guards in the towers monitoring long distances for oncoming enemies. These layers of defense for a castle translate to today’s layers of security defense. However, rather than being a physical construct, it is a technological one. The moats are our firewalls, the gates are the ports that we open up and the ACLs that we set up in those firewalls. The guard towers are our SIEMs. And the security guards who are standing in those towers are our SOC analysts. Even with all of these tools in place, there is no guarantee that you can prevent adversaries from coming inside your network. So how do you respond? By building incident response plans, tools and systems. And to ensure you know when something, even a zero-day, is happening in your network, User Behavior Analytics is critical. It is important to note that a SIEM without a SOC is like a guard tower without a guard. Being able to have people who are looking constantly at your environment, monitoring events, looking for threats and investigating those threats is the key to understanding what is happening in your environment and reacting before something happens. This multi-layer “in depth” approach protects every layer of your platform and also proactively monitors it. If you have all the right tools in place and you’re not monitoring and interpreting the data, you really don’t have visibility into threats that are happening.
Unfortunately, the number of tools that are out there is massive, and many organizations find it difficult to pick the right tools to fit in each of those categories and address the problems in their environment. With years of real-world experience with these products across many companies, DataEndure helps our customers understand what’s out there, what works and what doesn’t. Other organizations have made significant investment in tools already, yet still lack confidence in their security posture. A security controls validation can provide critical insight, determining if the tools are doing what they are supposed to. Are your defense systems robust enough to withstand today’s threats? Sign up for our free Security Health Check and get answers within 14 days.
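The castle metaphor above can be made concrete with a toy sketch: every inbound event must pass a stack of independent checks, so one failed layer does not expose the environment. The rules, addresses, and thresholds below are purely illustrative, not a real policy:

```python
# Defense in depth as code: each check is one "layer" of the castle.
# Any single layer can reject an event; passing all layers still gets logged
# so the SOC (the guard in the tower) can review it later.

ALLOWED_PORTS = {443, 22}          # "the gates": ports opened in the firewall ACL
BLOCKED_IPS = {"203.0.113.9"}      # "the moat": known-bad sources
MAX_LOGINS_PER_MIN = 10            # "the guard tower": behavioral threshold

def evaluate(event):
    """Return the first layer that rejects the event, or None if it passes."""
    if event["src_ip"] in BLOCKED_IPS:
        return "perimeter: blocked source"
    if event["port"] not in ALLOWED_PORTS:
        return "firewall: port not open"
    if event.get("logins_last_min", 0) > MAX_LOGINS_PER_MIN:
        return "analytics: anomalous login rate"
    return None  # passed every layer; still logged for SOC review
```

The point of the sketch is the ordering and independence of the checks, not the specific rules: a zero-day that slips past the perimeter can still trip the behavioral layer, which is why the article stresses User Behavior Analytics alongside firewalls and SIEMs.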
Apple is expanding its renewable energy and environmental protection initiatives in China, currently the company’s largest overseas market and the base for the majority of its manufacturing activities. Initiatives include collaboration with the World Wildlife Fund (WWF) to protect forests. The partnership aims to protect about one million acres of working forests, which offer fiber for pulp, paper and wood products. Apple, which has carried out similar projects in the US, plans to achieve a net-zero impact on the world’s supply of sustainable virgin fibre and make 100% of its global operations run on renewable energy. WWF China CEO Lo Sze Ping said: "This collaboration between our two organisations will seek to reduce China’s ecological footprint by helping produce more wood from responsibly managed forests within its own borders." The tech giant has also unveiled its plan to expand its renewable energy projects to manufacturing plants in China. Last month, Apple launched a solar energy project in Sichuan Province, China. The project, which will feature two 20MW solar farms, will generate enough energy to power all of the company’s corporate offices and retail stores in the country. Apple CEO Tim Cook said: "We’ve set an example by greening our data centres, retail stores and corporate offices, and we’re ready to start leading the way toward reducing carbon emissions from manufacturing. "This won’t happen overnight — in fact it will take years — but it’s important work that has to happen, and Apple is in a unique position to take the initiative toward this ambitious goal."
Intangible information assets can represent more than 80% of an organisation’s total value, yet many organisations remain focused on protecting their security perimeter and IT infrastructure rather than actual data. This leaves organisations blind to the movement of data – both within and out of the network – and therefore unable to effectively protect data from exfiltration, or even detect when data has leaked. In today’s world of remote working, external collaboration and use of personal devices, data is accessed by multiple users through different devices and networks, which may not be controlled by an organisation. Meanwhile, the advent of cloud services and file-sharing applications has not only contributed to the ease of sharing data, but also created more locations for data to reside. Keeping track of where data flows, how it is used on endpoint devices, who accesses the data and where it resides is not a straightforward undertaking, but it is essential to prevent data from leaving an organisation. Monitoring the usage and movement of data also helps to safeguard intellectual property, such as trade secrets, and enables compliance with regulatory requirements like the General Data Protection Regulation (GDPR). The proliferation of data, coupled with technological advances, means data now exists in multiple places in different formats. Consequently, there are a range of channels through which data can leak, such as email, social media, instant messaging, web posts, cloud storage, screen capture and portable storage devices. By mapping data flows, organisations can establish where data is at risk of leaking and put in place the requisite controls to treat the risk. This may involve fixing insecure business processes, educating users on the proper handling of data or remediating user activities through technology, such as data leakage prevention (DLP) tools.
DLP tools can detect data according to specified parameters and apply protective actions to stop users from leaking data (e.g. block the transfer of a message, encrypt data or move a file to a secure location). Over recent years, DLP tools have made a resurgence (driven in part by regulatory requirements) and become a common security control for protecting data that is in transit over the network, in use on endpoints and at rest in storage. By deploying DLP tools, organisations can gain visibility of their data, including where it is located and how it is processed. Importantly, research by the Information Security Forum identified that DLP tools deliver value and succeed in reducing the risk of data leakage only if they are implemented as part of a holistic DLP programme that incorporates a range of elements spanning technology, people and process. Integral to the success of a DLP programme is executive buy-in and the ongoing input of business stakeholders. DLP is a security control dedicated to addressing a business problem, so to realise its benefits, it needs to be properly deployed and maintained in a way that aligns with the business. As DLP continues to surge in popularity, those organisations who do not deploy some form of DLP technology will struggle to claim their information security is contemporary, robust and comprehensive.
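To illustrate the kind of content inspection a DLP tool performs when it "detects data according to specified parameters", the sketch below scans outbound text for card-like numbers and validates them with the Luhn checksum before raising a hit. It is a toy example of the technique, not how any particular DLP product implements it:

```python
import re

def luhn_ok(digits: str) -> bool:
    """Luhn checksum, used to tell real card numbers from random digit runs."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:   # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# Runs of 13-16 digits, optionally separated by spaces or hyphens
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def scan_outbound(text: str) -> list:
    """Return card-like numbers that would trigger a DLP action
    (e.g. block the transfer, encrypt the data, or quarantine the file)."""
    hits = []
    for m in CARD_RE.finditer(text):
        digits = re.sub(r"[ -]", "", m.group())
        if luhn_ok(digits):
            hits.append(digits)
    return hits
```

A production DLP engine layers many such detectors (regexes, dictionaries, document fingerprints, machine-learned classifiers) across email, web, endpoint and storage channels; the pattern-plus-validation step shown here is the common core.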
The means by which communication is conveyed in every field, including engineering, healthcare, and research, must be as effective as possible in order to drive healthy, result-yielding discussions. With the big data challenge currently being experienced by most organizations, representation of data becomes more important than ever before. If you have ever worked with architects and urban planners, you have probably admired the manner in which they present their models. These kinds of 3D models are beneficial not only to them as professionals but to their clients as well. Every organization needs to make informed decisions and advancements. They can only do this by analyzing, manipulating, and leveraging whatever data they have at hand. Unfortunately, valuable insights are lost as the days go by when viewing data with limited degrees of freedom. What 3D visualization solutions do is put the cognitive powers of the human mind to maximum use. According to scientists and psychologists, vision is our primary sense of learning and accounts for approximately 80% of our cognition capabilities. Visual cognition is such a powerful tool that different industries, including Building Information Modeling, Manufacturing, Process Power and Marine, Healthcare, and Business, among others, must fully exploit it to their advantage. In fact, if possible, all processes can be supported with 3D visualization tools from renowned vendors such as www.Spatial.com. Some of these tools include 3D Data Translations, 3D Modelling, 3D Visualization, and 3D add-ons. The main advantage that comes with 3-dimensional representations of complex structures is that they work best for discussions and even selling of elaborate solutions to potential clients.
2-dimensional diagrams such as those produced by CAD software are fine for engineers and other professionals with an in-depth understanding of what is taking place, but they are not business-friendly; molecular biologists discovered this a long time ago. Viewing 3D data on a screen provides an immediate, immersive visualization that enables teams of researchers, engineers, disaster managers, and other groups to quickly gain insights and solve the problem at hand. 3D visualization allows users to find the outliers that really matter in their huge chunks of information. For example, in the manufacturing industry, not even the smallest details can be ignored, since they might severely impact the final product. When interacting with the big data that many organizations currently handle, what matters most is seeing trends, linking patterns, discovering outliers, and uncovering unanticipated relationships between different datasets, and doing all of this quickly and effectively. The result is a seamless decision-making experience for organizations, which helps them sustain their engagements and enhance their collaborations by producing greater outcomes. With 3D visualization solutions, engineers will quickly evaluate their new products and process designs, oil and gas professionals will quickly unearth opportunities hidden in well locations, healthcare organizations will succeed in turning their enormous data into information, and governments will understand where they are in comparison with where they aim to be; every industry will realize faster progress as a result of informed decision-making.
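The outlier-hunting idea mentioned above can be made concrete with a small sketch: given 3D data points, flag anything far from the bulk of the data on any axis. This is a simplified numeric stand-in for what a 3D scatter view lets an analyst spot at a glance; the data and threshold are illustrative only:

```python
from statistics import mean, pstdev

def find_outliers(points, threshold=3.0):
    """Flag 3D points lying more than `threshold` standard deviations
    from the mean on any axis -- the kind of candidates a 3D scatter
    plot would let an analyst spot visually."""
    axes = list(zip(*points))                   # transpose to (xs, ys, zs)
    centers = [mean(a) for a in axes]
    spreads = [pstdev(a) or 1.0 for a in axes]  # avoid divide-by-zero on flat axes
    outliers = []
    for p in points:
        if any(abs(v - c) / s > threshold
               for v, c, s in zip(p, centers, spreads)):
            outliers.append(p)
    return outliers
```

In practice such a numeric pre-filter and an interactive 3D view complement each other: the filter narrows millions of points down to candidates, and the visualization lets a human judge whether an outlier is noise or the insight that matters.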
OpenSSL flaw unlocks the internet's 'crown jewels'

If all of your system administrator friends are looking worried today it isn't the usual post-Patch Tuesday blues, it's because of a bug in something that you may never have heard of, but which almost certainly affects your everyday use of the web. OpenSSL is a cryptographic library that is used to secure large chunks of the internet. If you use sites or apps that send and receive encrypted data then it’s very likely they use OpenSSL to do it. It's used by open source web servers like Apache as well as by mail protocols including SMTP, POP and IMAP. Thanks to a bug uncovered by Google Security that researchers are calling "Heartbleed" it's possible to fool OpenSSL systems into revealing part of the data in their system memories. This might be things like credit card transactions but it’s potentially even more serious than that. Security specialist Graham Cluley writing on his blog says, "...it could also disclose the secret SSL keys themselves. These are the 'crown jewels', and could be used by malicious hackers to do even more damage, without leaving a trace". If you want a detailed look at the flaw and how hackers may be able to exploit it, Elastica's CTO Dr Zulfikar Ramzan has posted a detailed walkthrough on his company's website. He stresses that the flaw is not inherent in the SSL/TLS protocol itself but in the specific OpenSSL implementation. Finnish security specialist Codenomicon has a Heartbleed website with details of the bug and which lists vulnerable versions. It also lists operating systems that have shipped with potentially vulnerable OpenSSL versions, these include major Linux distros like Debian and Ubuntu. It advises that site admins need to update to OpenSSL 1.0.1g immediately, and regenerate private keys. If an update to the latest version of OpenSSL isn’t possible it advises developers to recompile OpenSSL with the compile time option OPENSSL_NO_HEARTBEATS.
The bug has been in OpenSSL since December 2011 though it was only publicly announced yesterday. Whilst it isn’t known if it’s been exploited in the wild, Heartbleed leaves no trace in the server’s logs so it’s hard to know if a system has been compromised. The official security advisory is available here.
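Administrators triaging systems could start with a simple version check. The sketch below classifies an OpenSSL version string against the vulnerable range (1.0.1 through 1.0.1f, fixed in 1.0.1g); note that a string check cannot see whether a build was compiled with OPENSSL_NO_HEARTBEATS, so a match means "potentially vulnerable", not "confirmed exploitable":

```python
import re

def heartbleed_vulnerable(version: str) -> bool:
    """Return True if an OpenSSL version string falls in the Heartbleed
    range: 1.0.1 through 1.0.1f (the fix shipped in 1.0.1g).
    Earlier branches (0.9.8, 1.0.0) never had the heartbeat bug."""
    m = re.match(r"OpenSSL (\d+)\.(\d+)\.(\d+)([a-z]?)", version)
    if not m:
        return False
    major, minor, patch, letter = int(m[1]), int(m[2]), int(m[3]), m[4]
    # An empty letter ("1.0.1" with no suffix) sorts before "g" and is vulnerable
    return (major, minor, patch) == (1, 0, 1) and letter <= "f"
```

On a live host you could feed it `ssl.OPENSSL_VERSION` from Python's standard library to check the OpenSSL build that Python itself links against, though patched distro packages sometimes backport the fix without bumping the version string, so treat the result as a first filter only.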
Two factor authentication adds an extra step to the process of logging into your accounts. It makes you safer.

Until recently two factor authentication was only used by techies and high value targets in government and enterprises. The world has gotten more dangerous and two factor authentication has become easier to use. Now many of us should think about doing more to protect our identity, our data, and our clients' secrets. Most descriptions of two factor authentication (AKA "two step verification") make it sound complicated. Let's see if we can de-mystify it.

What is two factor authentication (short version)?

When two factor authentication is turned on for one of your accounts – Google, LastPass, your bank – you have to enter your password, PLUS you have to supply one more thing. The extra thing might be a code that is sent as a text message to your phone, or a number generated by an app on your phone, or something else.

You'll go to a website and put in your password like usual. When two factor authentication is enabled, you'll then be prompted for a code. You can't get into the account until you put in the code.

Why should you use two factor authentication?

When you set up two factor authentication, your account is still secure even if the password is hacked. Security starts with good password habits. Before you do anything else, you should start using LastPass and make sure your passwords are unique and complex. But after you do that, you are still at risk, because passwords are hacked all the time.

To say that your password is "hacked" just means that some bad guys have learned what it is. That generally happens in one of two ways:

• You might give away your own password if you are fooled by a phishing message. More than 90% of successful cyberattacks start with phishing emails.

• The bad guys might get your password when a big company gets hacked. There is nothing you can do to prevent that.
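On the service's side, checking that extra code is a small but security-sensitive step. A minimal Python sketch (our illustration, not any particular vendor's code) using a constant-time comparison, so that response timing reveals nothing about how many digits matched:

```python
import hmac

def code_matches(submitted: str, expected: str) -> bool:
    """Compare a submitted second-factor code against the expected one.
    hmac.compare_digest runs in constant time, so an attacker cannot
    learn how close a guess was from how long the check takes."""
    return hmac.compare_digest(submitted.strip(), expected)

print(code_matches("123456", "123456"))  # True
print(code_matches("123450", "123456"))  # False
```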
It has happened many times in the last few years and it will happen many more times to come.

If an account is secured by two factor authentication, then the bad guys can't get into the account even if they get the password. They'll be asked for the other thing – the text message code or the number from the app on your phone – and they won't have any way to supply it.

What is two factor authentication (long version)?

Two factor authentication means your account can only be opened if you supply something you know with something you have or something you are.

You haven't thought about it, but your bank uses two factor authentication every time you walk up to an ATM. You insert something you have – your debit card – and you type in something you know – your PIN. That's two factor authentication.

Online accounts almost always start with a password. Your password is something you know. Your phone is something you have. Since everyone always has their phone, it's become the most common way to add an additional step to authenticate you. There are three ways that you can use your phone for two factor authentication – SMS text message, an authenticator app, or biometrics.

• Code sent by text message to your phone

This is the most common kind of two factor authentication. Your account is set up so it cannot be opened until a six or seven digit code is typed in. The code is sent by text message to the phone number that you have on file.

• Code created by an authenticator app on your phone

You can install an app on your phone that generates codes every 30 seconds. The best known and most widely used is Google Authenticator. There are also apps from LastPass, Microsoft, and others. Authenticator apps are considered to be more secure than SMS text messages. This should be your first choice, if it's available.

After you install the authenticator app on your phone, setting it up for one of your accounts is usually easy. Log into your LastPass Vault, for example.
In Settings / Account Settings / Multifactor Options, choose two factor authentication with Google Authenticator. In the next window, you'll see a barcode. On your phone, you'll open Google Authenticator and hold it up so the camera can see the barcode. That's all there is to it. In a second or two, the Google Authenticator app will begin generating codes for LastPass.

• Biometrics

The second factor doesn't have to be a number. It's also possible to set up two factor authentication with something you are – your fingerprint, or facial recognition, or even retinal or iris scans from a camera scanning your eye. Voice identification? Maybe, someday. Although these are rare today, lots of companies are working in this area right now, trying to improve your security and help us get out of our password hell.

There's one more type of authentication – and this one is considered to be the most secure.

• A hardware key

Services with the highest level of security (for example, LastPass and Google) let you set up two factor authentication with a hardware key that you carry with you on your keychain. The best known is the YubiKey, made by Yubico. They range from fingernail size to the size of a thin USB stick, and typically are inserted into a USB port when required as part of logging into a website or online service. You can get a YubiKey with NFC built in that can be tapped on a phone.

Last month Google introduced its own hardware key, the Titan Security Key. In time, it will likely be accepted at as many places as the YubiKey, but at the moment it's not quite as widely supported.

Security keys provide the best defense against account breaches. A hacker on the other side of the world trying to break into your account needs not only your password but also your physical hardware key. Hardware keys have long been used by high risk targets like journalists, human rights activists, and government officials.
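The codes an authenticator app displays come from the TOTP algorithm (RFC 6238): an HMAC over a counter derived from the current 30-second window, truncated to six digits. A minimal Python sketch, checked against the test vector published in the RFC:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, for_time=None, step=30, digits=6) -> str:
    """Generate an RFC 6238 time-based one-time password (SHA-1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time() if for_time is None else for_time) // step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T = 59s.
SECRET = base64.b32encode(b"12345678901234567890").decode()
print(totp(SECRET, for_time=59, digits=8))  # 94287082
```

Because the code depends only on the shared secret and the clock, the app works offline; the server runs the same calculation and compares results, usually accepting the adjacent time window as well to tolerate clock drift.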
Google has been issuing hardware keys to its employees since 2012 and recently highlighted the effectiveness of hardware keys with a remarkable statistic: "At Google, we have had no reported or confirmed account takeovers due to password phishing since we began requiring security keys as a second factor for our employees."

Each key costs about $50. You have to buy two and set up both of them, then leave one in a safe place where it won't get lost. If you only buy one and you lose it, you'll be locked out of your account. That's the point.

I've started using a YubiKey whenever possible. I have to tap it to my phone to open LastPass. I have to insert it into a USB port on my computer to log into my Google account. It's inconvenient. No getting around it. If I don't have the key nearby, I can't log in. What a pain!

Is there something that makes this easier?

Some services (including LastPass and Google) allow you to check a box for the service to trust the device that you're using at that moment – perhaps permanently, perhaps for two weeks or a month. You still have to supply a password but you won't be asked for the other factor – the code, the hardware key – because you're using something that you trust and that is secure.

If someone steals your phone, they can't unlock it because they don't have your fingerprint or your face, so it's reasonable to decide that you don't want the hassle of holding up a YubiKey every time you want to open LastPass on that phone. If someone steals your laptop but they don't know your login password, they can't get to Chrome and your Google account, so you might want to make the laptop a trusted device so you can stay logged into your Google account.

The effect is that the inconvenience is minimized day to day but you still get increased protection. The extra step will still be required if you or anyone else tries to sign into your account from another computer.

When should you use two factor authentication?
Start with adding two factor authentication to LastPass. If you're using LastPass properly, it has the keys to everything important in your life and deserves very high security.

Your Google account should be protected by two factor authentication. Google has information that could compromise your privacy and your security. Your Google account might be linked to other services. Google might have passwords, files, or mail in addition to all of its personal information about you.

If you use an online service to store confidential files – Dropbox, Box, OneDrive, Google Drive – then two factor authentication increases the security of those files. This is especially important if you have an obligation to protect your client documents. Lawyers and CPAs, for example, should be taking extra steps to make sure online files are secure.

Obviously, anything to do with financial services deserves the extra security of two factor authentication – banks, QuickBooks Online, or PayPal, for example.

Facebook strongly encourages use of two factor authentication. It's especially important if your Facebook account is linked to other services.

Microsoft supports two factor authentication for its business Office 365 services, which is particularly important for larger businesses that use a wide variety of Microsoft business services.

Once you get in the habit, it's easy to use two factor authentication and you may want to enable it whenever it's available. I use Google Authenticator on my phone dozens of times a day as I go in and out of services that might affect my clients. Perhaps you should too.
The introduction of the General Data Protection Regulation (GDPR) back in May 2018 set a high bar in privacy protection for individuals within EU member states. The data privacy landscape in the U.S. has changed considerably in recent years and data protection rules are now aligned increasingly with a European approach, although some big differences remain. This article looks at the differences between modern data privacy laws in the European Union and the US.

GDPR: Brief Overview

GDPR is a comprehensive data privacy law that applies to organizations that collect, store, or hold personal data belonging to data subjects in EU member states. The European Commission defines personal data as any information that relates to an identified or identifiable natural person (data subject). Organizations operating within EU countries, organizations that sell goods or services to EU citizens, and organizations that monitor the behavior of data subjects all must comply with GDPR.

The rules for GDPR compliance are substantial and are based on seven key principles, including minimization in data collection, storage limitation, and accountability. Specific categories of sensitive data require extra protection.

Penalties for non-compliance with GDPR are split into two tiers based on the severity of the violations. Standard violations lead to penalties of up to €10 million or 2% of annual global turnover, whichever is higher, while the penalties for more severe violations can be up to €20 million or 4% of annual global turnover, whichever is higher.

GDPR replaced the Data Protection Directive, as that law was deemed insufficient in scope and strength for modern data privacy protection in Europe. Since the GDPR's implementation, several important rulings by the European Court of Justice have further bolstered individual rights, including allowing consumer protection associations to take representative actions on behalf of consumers affected by GDPR infringements.
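Under Article 83 of GDPR, each tier's ceiling is the greater of the fixed cap and the turnover percentage. A quick Python sketch of that arithmetic (the function name is ours):

```python
def gdpr_fine_ceiling(annual_global_turnover_eur: float, severe: bool) -> float:
    """Maximum possible GDPR fine: the greater of a fixed cap and a
    percentage of annual global turnover (Art. 83).
    Standard tier: EUR 10M / 2%; severe tier: EUR 20M / 4%."""
    fixed_cap, pct = (20e6, 0.04) if severe else (10e6, 0.02)
    return max(fixed_cap, pct * annual_global_turnover_eur)

# For a company with EUR 5 billion in global turnover, the percentage
# prong dominates; for a small company, the fixed cap does.
print(gdpr_fine_ceiling(5e9, severe=True))     # 200000000.0
print(gdpr_fine_ceiling(100e6, severe=False))  # 10000000.0
```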
Data Protection Authorities in each member state typically handle complaints lodged against GDPR violations.

US Data Privacy Laws and Differences with EU

Arguably the most significant difference in US legislation versus the EU is the lack of a comprehensive data privacy law that applies to all types of data and all U.S. companies. Instead, American law takes a more fragmented approach with various regulations governing different sectors and types of data, including:

- The Health Insurance Portability and Accountability Act (HIPAA)—this federal law protects sensitive patient healthcare information by specifying how healthcare providers must secure such data against fraud and theft. The law also sets limits on how organizations can use or disclose protected health information. Updates to HIPAA appear likely to be announced sometime during 2022 or 2023 at the latest.

- The Gramm-Leach-Bliley Act (GLBA)—this act applies to financial institutions and sets out responsibilities and standards to protect the confidentiality and security of consumers' nonpublic personal information. The Federal Trade Commission (FTC) announced important changes to the GLBA's Safeguards Rule (due to become mandatory in November 2022) detailing more prescriptive data security measures financial institutions need to take to protect customer data.

- The Federal Information Security Management Act (FISMA)—this federal law requires federal agencies to develop, document, and implement an agency-wide program that provides information security. FISMA 2022 is a bipartisan update to FISMA that takes a cutting-edge and strategic approach to ensure federal IT systems can better prepare for and respond to today's cyber challenges that threaten federal information and information systems from unauthorized access, use, and disclosure.
Changing US Laws

A notable trend is the recent or impending changes to several existing U.S. data protection laws, reflecting an increasingly interconnected world with larger volumes of data than ever moving around a more complex information ecosystem. The necessity of these changes exemplifies a different approach between laws in the EU and the US. GDPR arguably sets the standard for data privacy worldwide, and it hasn't had to be amended yet. But the lack of a true privacy-first approach in America's disparate data privacy regulations makes it necessary to update them in line with the fundamental rights people now expect around how their data is used, shared, or disclosed.

CCPA and More

In recent years, state laws have emerged that attempt to provide stronger protection of personal data to individuals in those jurisdictions and greater transparency around how data is being shared. The U.S. law that's most comparable to GDPR is the California Consumer Privacy Act (CCPA), which applies to consumers who are California residents. CCPA became effective in January 2020, but the impending California Privacy Rights Act (CPRA) amends the privacy legislation to expand opt-out rights and introduce other changes that bring it into even closer alignment with GDPR. CPRA becomes effective in January 2023. Interestingly, Virginia and Colorado are the only two other U.S. states that have signed a comprehensive data privacy law.

There are important cultural differences that can't be ignored when assessing the differing data privacy laws between the EU and US. Exemplifying the different approaches is how the EU Charter of Fundamental Rights establishes data protection as a fundamental right. This privacy-first mindset probably stems from a history of individuals' information being used for nefarious purposes, stretching back to the days of National Socialism and Communism. In contrast, the U.S.
has traditionally taken a more hands-off approach that favors the companies that collect and use personal data; the commercial use of personal data has tended to outweigh the importance of data privacy. Recent years have seen the mindset shifting somewhat towards better protecting individuals as data breaches continue to cause havoc, but the underlying cultural differences will take more time to dissolve and bring the US into fuller alignment with the EU's mindset and laws.

Replacing the Privacy Shield Framework

An important regulatory change was announced in March 2022, with the Trans-Atlantic Data Privacy Framework set to replace the EU-U.S. Privacy Shield framework. Both of these regulatory frameworks relate to transfers of EU personal data to the United States. The European Court of Justice invalidated the Privacy Shield in 2020 after an Austrian activist successfully claimed that the framework did not protect Europeans from U.S. surveillance. This ruling led to uncertainty for many companies, including the likes of Google and Facebook, about cross-border data transfers.

The Trans-Atlantic Data Privacy Framework introduces safeguards that limit access to data by U.S. intelligence authorities to what is necessary and proportionate to protect national security. The result is likely to be freer cross-border data flows and less regulatory uncertainty for businesses operating in both regions.

Navigating a Complex Data Economy

Businesses today must navigate a complex data economy in which an increasing number of regulations require them to be very careful about how they collect, store, and use customer data. There are notable gaps in the scope and strength of US data privacy laws compared to Europe, but the tide continues to turn as existing US laws are amended and new ones come into effect.
Regardless of which law(s) your business needs to follow, regulatory compliance with today's data privacy laws is essential for maintaining customer trust and avoiding substantial legal and financial consequences.

Frequently Asked Questions

The 7 principles of data protection upon which GDPR is based are:

1. Lawfulness, fairness, and transparency
2. Purpose limitation
3. Data minimization
4. Accuracy
5. Storage limitation
6. Integrity and confidentiality
7. Accountability

Download our free ebook: a comprehensive guide for all businesses on how to ensure GDPR compliance and how Endpoint Protector DLP can help in the process.
A time period is the definition of a time interval for each day of the week. These time periods enable the functionalities of the scheduler over a given period of time. Time periods apply to two types of actions:

- Execution of check commands
- Sending of notifications

The configuration of time periods is done in the menu: Configuration > Users > Time periods.

- The Time period name and Alias fields define the name and description of the time period respectively.
- The fields belonging to the Time range sub-category define the days of the week for which it is necessary to define time periods.
- The Exceptions table enables us to include days excluded from the time period.

Syntax of a time period

When creating a time period, the following characters serve to define the time ranges:

- The character “:” separates the hours from the minutes. E.g.: HH:MM
- The character “-” indicates continuity between two time periods
- The character “,” serves to separate two time periods

Here are a few examples:

- 24 hours a day and 7 days a week: 00:00-24:00 (to be applied on every day of the week).
- From 08:00 to 12:00 and from 14:00 to 18:45 on weekdays: 08:00-12:00,14:00-18:45 (to be applied on weekdays only).

Time range exceptions

The exceptions allow us to include exceptional days in the time period (overriding the definition of the regular functioning of the day). E.g.: an administrator wants to define a time period which covers the times when the offices are closed, i.e.:

- From 18:00 to 07:59 on weekdays
- Round the clock at weekends
- National holidays and exceptional closure days

To be able to define the national holidays and the exceptional closure days, it is necessary to use the exceptions. To add an exception, click on the add button. For each exceptional day, you will need to define a time period.
The table below shows some possible examples:

| Day(s) | Time range | Meaning |
|---|---|---|
| january 1 | 00:00-24:00 | All day on the 1st of January, every year. |
| 2014-02-10 | 00:00-24:00 | All day on 10 February 2014. |
| july 1 - august 1 | 00:00-24:00 | All day, every day from July 1 to August 1, every year. |
| november 30 | 08:00-19:00 | From 08:00 to 19:00 every November 30, every year. |
| day 1 - 20 | 00:00-24:00 | All day from the 1st to the 20th of every month. |
| saturday -1 | 08:00-12:00,14:00-18:45 | Every last Saturday of the month, during opening hours. |
| monday -2 | 00:00-24:00 | All day every second-to-last Monday of the month. |
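The time-range syntax above is simple enough to parse mechanically. A small Python sketch (our illustration, not Centreon code) that converts a range string into minute-of-day offsets:

```python
def parse_time_ranges(spec: str):
    """Parse a time-range string such as '08:00-12:00,14:00-18:45'
    into a list of (start_minute, end_minute) pairs."""
    def to_minutes(hhmm: str) -> int:
        hours, minutes = hhmm.split(":")     # ":" separates hours from minutes
        return int(hours) * 60 + int(minutes)

    ranges = []
    for part in spec.split(","):             # "," separates two time periods
        start, end = part.split("-")         # "-" joins a range's start and end
        ranges.append((to_minutes(start), to_minutes(end)))
    return ranges

print(parse_time_ranges("08:00-12:00,14:00-18:45"))
# [(480, 720), (840, 1125)]
```

Note that the full syntax also allows day expressions such as `saturday -1`; this sketch only handles the HH:MM range portion.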
It's Not If, But When

When it comes to information security incidents, it's not a matter of if, but when. It's impossible to protect yourself from every single incident and attack vector that can occur, so it's important to put the focus on risk management rather than risk elimination. When risks occur, you'll want the proper policies and procedures in place to ensure the effects of the incident are mitigated.

Security Incident Classification: The Basics of Managing Incidents

When risk happens, the initial reaction is often to call for assistance from an information security expert immediately. Before doing so, however, there are steps we recommend the organization take on its own. Much like a tech support help desk, it's difficult for information security experts to assist with the issue without initial troubleshooting measures already in place. Understanding the basics of incident management is the simplest and most critical way to improve your response to risks.

Incidents vs. Events

Understanding the basics of incident management begins with understanding what an incident is. Events happen constantly throughout each day, but very few of them should be considered incidents. We define information security incidents as a suspected, attempted, successful, or imminent threat of unauthorized access, use, disclosure, breach, modification, or destruction of information; interference with information technology operations and computer networks; or a significant violation of policy. Before responding to an incident, make sure that it is something that needs a response.

Security Incident Classification

Not all events are incidents, and not all incidents are the same. There are numerous variations of incidents. Once an incident has been identified, the next step is to investigate and classify the incident in question.
Your organization can handle incidents however they see fit, but we've developed a methodology for classifying information security incidents that can be used as a starting point that you can tweak for your organization. We rate each incident based on its type, criticality, and sensitivity.

Type is broken out into three separate classifications and includes both the type of breach and the number of people it affected. We use a scale from one to three with one being the most severe. A category one incident type would typically involve system access or data loss affecting five or more people or user accounts.

Criticality reflects the systems that the incident has affected. Again, we give this a number on a scale of one to three with one being the most severe. A one in criticality is a confirmed incident that affects critical systems.

Sensitivity involves the information collected or impacted. This category also gets a rating from one to three. The most critical information that an incident can impact is extremely sensitive and confidential information, such as patient records at a hospital.

It's imperative that as your organization works through categorizing the incident, you document as much as you can and as early as you can. This will ensure everyone remembers facts and can clearly explain events as necessary.

Determine the Severity

Once classified, a response to the incident needs to take place. Take the time to understand what your previous categorizations mean to your organization. How impactful is the incident to your organization, your partners, your customers, or civilians? Measuring the impact or severity of the incident sets the framework for communicating a given incident: who to communicate the incident to, and how rapid their response must be. Manage expectations, responses, and ownership of incidents within your organization by classifying the incident as high, medium or low risk.
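As a rough illustration, the three 1-to-3 ratings can be rolled up into a severity tier. The combination rule below (the worst rating wins) is our assumption for the sketch, not a prescribed formula:

```python
from dataclasses import dataclass

@dataclass
class Incident:
    type_rating: int   # 1 = most severe, e.g. data loss affecting 5+ people
    criticality: int   # 1 = confirmed incident affecting critical systems
    sensitivity: int   # 1 = extremely sensitive data, e.g. patient records

def severity(incident: Incident) -> str:
    """Map the worst (lowest) of the three 1-3 ratings to a tier."""
    worst = min(incident.type_rating, incident.criticality, incident.sensitivity)
    return {1: "high", 2: "medium", 3: "low"}[worst]

# A confirmed incident on a critical system drives high severity even if
# the other two dimensions rate lower.
print(severity(Incident(type_rating=2, criticality=1, sensitivity=3)))  # high
```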
Once the security incident classification, impact, and severity have been determined, how will it be communicated? Is it something that can be kept internal and handled over time? Should it be escalated to your internal Incident Response Team (IRT) and shared immediately? Or is it time to call an external information security expert to help resolve the issue? The answers to these questions will be determined by how your organization feels it's appropriate to handle each classification of incident.

External communication is an added element and part of the communication process that an organization must develop when managing risk. As consumers and civilians, we typically don't see what goes on behind the scenes of an incident; we often only see what is communicated through various news outlets. You can learn a lot about how you think an organization should handle its external communication by studying and familiarizing yourself with the incident responses of high-profile organizations like Target, Chipotle, Equifax, and Uber.

When risks occur, you'll want the proper policies and procedures in place to ensure the effects of the incident are minimized. Use these four basic steps of incident response and management as a framework for ensuring your organization is prepared to handle incidents that occur. Remember, it's not a matter of if, but when.
The strategy was used to provide wireless powering of the tags.

Although most people who have heard of NFC technology think of it in terms of advertising and mobile payments, as well as pairing smartphones to other devices, it has now been made possible to use this tech for the wireless powering of an E-ink display.

A team of students and researchers came together in order to create this unique high tech tag. The team was made up of individuals from the University of Washington, the University of Massachusetts Amherst, and Intel Labs. It created the NFC-WISP E-Ink Display Tag, which is based on NFC technology as well as a low power E-ink panel, so that Android smartphones can transfer data (both sending and receiving) as well as power without the need for any cables or wires.

The NFC technology achieves this goal through the use of inductive coupling. By applying NFC technology in that way, it can provide power to otherwise passive tags. The E-ink display can then take advantage of this capability through the use of a microchip that provides wireless power harvesting and a 1mAh battery. As one can expect, the initial form of this tech doesn't provide a tremendous amount of power, but it remains very promising for the future.

The current use of NFC technology for power transfer doesn't provide a huge amount of power, but it was capable of offering enough to power a 2.7 inch display, with enough stored energy to cycle through images even when not actually paired with the smartphone. As of yet, using NFC technology for that purpose is of limited use beyond providing power to a secondary smartphone display, but it does hold some potential for the future development of power transfer tech.
The E-ink screen could end up becoming popular for things such as maps and directions and shopping lists without having to draw on your limited battery power from the power-pig of a smartphone screen. The device’s 0.5MB of memory can hold an estimated 20 images.
As the impact of global warming intensifies, countries and international organizations have been quick to draft and announce ambitious climate action plans. At the COP26 summit, more than 100 world leaders committed to ending deforestation by 2030, and the US pledged to slash emissions by 30% by 2030. More than 35 world leaders have signed up to scale and speed the deployment of clean technologies.

The promises have evoked mixed responses from environmentalists and the public, as in most cases they are not backed by solid execution plans. Many promises appear to be a form of greenwashing aimed at assuaging people. Similarly, around 200 organizations have pledged net zero emissions by 2040, but very few have elaborated on their plan of action to reach that goal.

Given the urgency of the matter, countries and organizations must not only declare lofty sustainability targets but also put in place standard procedures and tools to monitor progress towards those goals. A significant hurdle in monitoring climate action plans is the lack of visibility, data, and standardized monitoring tools. Digital technologies like data analytics, blockchain, and machine learning can help organizations mine data and track the progress of their climate action plans.

The scope of digital technologies in climate action and sustainability initiatives is not limited to providing a monitoring and reporting framework based on empirical data. It also provides the foundation of exigent data, paving the way for improved efficiencies in many industries and helping them reduce their emissions and footprint through rampant digitalization. The technologies we discuss have the greatest scope in the industries traditionally considered hard to decarbonize, which are responsible for up to 80 percent of current global carbon emissions. These sectors constitute the socially and economically vital sectors of power, transport, industrial manufacturing, and construction.
According to a 2019 United States Environmental Protection Agency report, the power and electric utility sector alone accounted for up to twenty-five percent of global greenhouse gas emissions. Thus, for the scope of the current discussion, we focus on how digital technologies can help achieve and monitor progress on these climate action goals for the power industry without compromising sustainable growth or the interests of all concerned stakeholders.

While the power sector interacts with all 17 UN Sustainable Development Goals (SDGs) from 2015, four SDGs make the priorities for utility companies: SDG 7 (clean and affordable energy), SDG 9 (industry, innovation and infrastructure), SDG 12 (responsible consumption and production), and SDG 13 (climate action). These are the SDGs where the power sector can make a tangible impact by driving digital innovation and incorporating new-age technologies across the value chain. From power generation to retail and consumer services, utility companies can use digital technologies to create meaningful impact, unlock new efficiencies, and reduce material consumption and energy usage across the value chain for a climate-positive future.

We have classified the impact of digital technologies on the power utility sector into three broad categories: (P) process optimization, (E) energy monitoring and operational efficiency, and (A) audit/reporting and analytics.

Process optimization and operational efficiency

A report by the International Energy Agency suggests that digital data and analytics can reduce operations and maintenance costs, improve plant and network efficiency, reduce unplanned outages and downtime, and extend the operational lifetime of assets.
"The overall savings from these digitally-enabled measures could be in the order of USD 80 billion per year over 2016-40, or about 5% of total annual power generation costs based on the enhanced global deployment of available digital technologies to all power plants and network infrastructure," says the report. Utility companies can use sensors in their distribution network to provide real-time power consumption data. Companies can use this data for controlling the supply, network configuration, and switching load. They can further automate these decisions. Sensors on the grid can alert operators to outages, allowing them to turn off power to damaged lines to prevent electrocution, wildfires, and other hazards. Digitalization can substantially help the power sector increase the lifetime of power plants and distribution networks through condition-based constant preventive maintenance. IEA suggests that "if the lifetime of all the power assets in the world extends by five years, then close to USD 1.3 trillion of cumulative investment could be deferred over 2016-40. On average, investment in power plants would grow by USD 34 billion per year and in networks by USD 20 billion per year." We help our clients conceptualize, develop, and implement solutions that employ machine learning models and advanced data analytics to reduce fuel usage, optimize work processes, and minimize material resource usage. This, in turn, helps in lowering the carbon footprint. One of our recent projects involved working with an Australian waste management and recycling company to help them optimize the movements of their fleet of waste collection trucks through advanced machine learning models. The client witnessed a significant reduction in fuel consumption and resulting CO2 emissions. 
In another example, we helped a leading European utility digitize their field operations and maintenance rounds by equipping field inspectors with a state-of-the-art Google Glass-based inspection solution, making the entire process efficient, paperless, and sustainable.

Energy monitoring and efficiency

Advanced data analytics and visualizations can help utilities and customers monitor their energy consumption and usage patterns. They can use this information to derive insights, aim for better energy efficiency, and optimize usage behaviours to lower emissions and utility bills. Beyond advanced analytics, IoT has several other applications in the energy sector. A recent estimate by GE suggests "that $1.3 trillion of value can be captured in the electricity value chain from 2016 to 2025 globally by IoT." Two major applications of IoT in the energy sector are smart meters and smart thermostats. IoT coupled with advanced analytics can forecast the generation capacity of renewable energy sources such as solar and wind, allowing power generators to adjust their operations in case of a contingency. Internet-connected smart meters collect data on power consumption and share it remotely with utilities and customers. This allows utility companies to share detailed reports about energy usage with their users, which helps customers track and reduce their consumption. When combined with the cloud, utilities can use IoT for demand-side management to achieve energy efficiencies at scale: organizations can remotely manage the usage of high-powered appliances for their users based on real-time grid power demand. A solution as simple as an internet-connected smart thermostat can noticeably cut HVAC (heating, ventilation, and air conditioning) bills for an average household: Nest, a manufacturer of smart thermostats, has reported a drop in electricity bills of 10-12% for heating and about 15% for cooling.
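As a rough illustration of the demand-side management pattern described above, the following sketch pauses deferrable high-powered appliances when real-time grid demand crosses a threshold. The appliance fleet, capacity figures, and shedding rule are all hypothetical, not any utility's actual control logic.

```python
GRID_CAPACITY_KW = 1000.0
SHED_THRESHOLD = 0.9  # start shedding at 90% of capacity (assumption)

# Hypothetical fleet of remotely controllable loads.
appliances = [
    {"id": "ev-charger-17", "load_kw": 7.2, "deferrable": True},
    {"id": "hvac-03", "load_kw": 3.5, "deferrable": True},
    {"id": "medical-fridge-01", "load_kw": 0.8, "deferrable": False},
]

def loads_to_shed(current_demand_kw):
    """Return deferrable appliances to pause, largest loads first."""
    shed, demand = [], current_demand_kw
    for a in sorted(appliances, key=lambda a: -a["load_kw"]):
        if demand <= SHED_THRESHOLD * GRID_CAPACITY_KW:
            break
        if a["deferrable"]:
            shed.append(a["id"])
            demand -= a["load_kw"]
    return shed

print(loads_to_shed(905.0))  # ['ev-charger-17']
```

Note that non-deferrable loads (like the medical refrigerator) are never touched; the scheme trades a small, consented delay on flexible loads for grid-level stability.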
A Gartner study also suggests that an integrated building management system managing cooling, heating, and lighting can help reduce energy consumption by 50%. In another example, one of our customers crafted a demand response system that leverages consumption data patterns to smooth grid fluctuations by aggregating and controlling distributed energy resources (DERs), storage systems, and even residential heating systems through their proprietary smart grid controllers.

Audit/reporting and analytics

Technologies like blockchain and smart contracts can help trace the source of materials and the origin of energy through the supply chain. These solutions open many opportunities to optimize and reduce material and energy consumption. Users can now ask for eco/green-labeled energy and verify that the electricity they are using comes from a renewable resource. They can also demand to know the carbon footprint of their energy usage. This makes them more aware of their usage and ecological footprint, pushing them to make environment-conscious decisions in their daily lives. Organizations now have access to various digital energy auditing, analytics, and reporting tools that enable monitoring of energy usage, consumption patterns, and carbon footprint for their buildings, and facility managers can optimize energy consumption and make better decisions when choosing energy providers. Distributed energy resources and microgrids now offer customers the option of selecting green energy to power their homes, helping them reduce their carbon footprints. The benefits of green energy don't end with homes: as utility companies provide energy for e-mobility and smart buildings, they support the transition to sustainability in other sectors. Utility companies can also deploy analytics to study and analyze consumption trends.
This is especially beneficial for renewable energy sources such as wind and solar, as organizations can better manage demand to mitigate the inherent variability of renewables, ensuring long-term grid stability and energy security. By analyzing historical equipment data, utilities can assess the health of their grids and distribution networks. Energy-loss analysis, forecasting, portfolio management, and analytics can help utilities cut aggregate technical and commercial (AT&C) losses, improve reliability, and boost profitability. We helped a leading energy technology and sustainable solutions company create innovative energy expense and data management tools that allow their customers to track and manage their critical energy usage and utility resource data at scale.

We at Nagarro lead the way

Amidst all the noise over climate change, we want to focus on feasible solutions to sustainability problems. We aim to do that by incorporating sustainability as a factor from the very beginning when developing and delivering digital solutions. The idea is to weave sustainability into these solutions and offer what we call 'engineered sustainability.' Whether working with OEMs on shared mobility solutions or helping our utility clients reduce their carbon footprint, we are already walking the path toward a sustainable future. An individual organization cannot achieve sustainability alone; it requires combined effort. Let's save the planet together. If you are interested in knowing more, write to us at email@example.com

The digital roadmap to sustainability in power utilities
<urn:uuid:2b6351e9-c54a-4c7f-92ac-927ec1383738>
CC-MAIN-2022-40
https://www.nagarro.com/en/blog/sustainability-digital-technologies-power-industry
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00310.warc.gz
en
0.933757
1,940
3.484375
3
CAMBRIDGE, Mass. — In the last 20 years, a massive wave of companies has offshored manufacturing jobs from the U.S. and Europe to developing countries in search of lower costs. However, research by MIT Sloan School of Management Visiting Prof. Suzanne de Treville finds that offshoring is often more costly than companies think. Using a new finance tool, she shows how the mismatch costs that result from extending the supply chain may well be higher than the lower cost offered by the offshore supplier, leading to reduced profits. “This research makes an important contribution to the current search for ways to encourage manufacturing in developed countries, where workers earn a living wage,” says de Treville. “The Obama administration in the U.S. and many European governments are committed to supporting local manufacturing jobs, but the perception remains — when the mismatch costs aren’t entered into the calculations — that manufacturing locally results in companies either losing money or in governments needing to subsidize. Our results show that there are a lot more options for competitive local manufacturing than most people realize.” To explain how this works, she gives the example of a typical offshore supplier. That supplier offers a substantially lower price than a local manufacturer that produces to order. Extending the supply chain in this way increases the time between order and delivery, often to several months. As a result, the buyer must place the order based on a forecast. As the lead time gets longer, the range of demand levels that must be considered becomes wider. The order quantity must also take into consideration the possibility that actual demand will be several times the forecast. This leads to costly stockouts or overstocks, she says. “Companies are aware of these mismatch costs, which have long been recognized as substantial.
However, up until now managers have lacked a tool to calculate the mismatch cost and make a direct comparison between it and the cost reduction,” observes de Treville. Her tool provides this missing information. Take, for example, a jacket that sells for $100. It costs $44 to manufacture locally to order, and has a salvage value of $20 if not sold during the season. The offshore manufacturer offers to make the jacket at a cost of $31, which appears to be a compelling 30% cost reduction relative to the local manufacturer. Demand for the jacket for a given style, color, and size might peak at four times forecast one time in 10. This estimate of the demand peak allows a company to roughly calculate demand-volatility exposure and thus the mismatch cost. The tool shows that the cost differential required to compensate for that mismatch cost is around 40%, so the 30% cost differential is “far from sufficient,” she notes. “Further, this mismatch cost assumes that there is no supply risk, no loss of innovation from separating production and R&D by 12 time zones, no increase in supply-chain management costs, and no loss of intellectual property.” De Treville says, “Managers are typically amazed by how much money is being left on the table with distant manufacturing. This $44 local manufacturing cost is often enough to pay a middle-class manufacturing wage to workers. Our results help managers know where local manufacturing makes sense, and to regain their hope that local manufacturing is a realistic option. This information is also useful to policymakers in developed countries, as it provides support for local manufacturing and the middle-class jobs it can create.” Suzanne de Treville is a visiting professor at the MIT Sloan School of Management from the University of Lausanne.
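De Treville's actual tool is not reproduced in the article, but the jacket numbers above can be plugged into a standard newsvendor-style simulation to see the effect she describes. The lognormal demand model below (calibrated so demand exceeds four times the forecast one time in ten) and the Monte Carlo approach are our own illustrative assumptions, not her published methodology.

```python
import math
import random
from statistics import NormalDist

PRICE, LOCAL_COST, OFFSHORE_COST, SALVAGE = 100.0, 44.0, 31.0, 20.0

# Calibrate lognormal demand (median = forecast = 1 unit) so that demand
# exceeds 4x the forecast one time in ten, as in the article's example.
sigma = math.log(4) / NormalDist().inv_cdf(0.9)

# The offshore buyer must commit to a quantity up front; the classic
# newsvendor order balances lost sales against unsold stock.
underage = PRICE - OFFSHORE_COST   # margin lost per unit short
overage = OFFSHORE_COST - SALVAGE  # loss per unsold unit
q = math.exp(sigma * NormalDist().inv_cdf(underage / (underage + overage)))

random.seed(0)
n = 200_000
local = offshore = 0.0
for _ in range(n):
    d = random.lognormvariate(0.0, sigma)
    local += d * (PRICE - LOCAL_COST)  # make-to-order: no mismatch cost
    sold = min(d, q)
    offshore += sold * PRICE + (q - sold) * SALVAGE - q * OFFSHORE_COST

print(f"avg profit per forecast unit: local={local/n:.1f}, offshore={offshore/n:.1f}")
```

With these inputs, the make-to-order local option earns a noticeably higher expected profit per forecast unit than the offshore option despite the 30% lower unit cost, which is directionally consistent with the article's claim that a roughly 40% differential is needed to break even.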
<urn:uuid:6480fd1d-058c-456c-b9bd-9b3771cb62c4>
CC-MAIN-2022-40
https://www.mbtmag.com/global/video/13211892/mit-prof-more-costly-to-offshore-than-many-companies-think
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00310.warc.gz
en
0.950482
738
2.90625
3
AI, or artificial intelligence, seems to be on the tip of everyone’s tongue these days. While I’ve been aware of this major trend in tech development for a while, I’ve noticed AI appearing more and more as one of the most in-demand areas of expertise for job seekers.

Copyright by forbes.com

I’m sure that for many of us, the term “AI” conjures up sci-fi fantasies or fear about robots taking over the world. The depictions of AI in the media have run the gamut, and while no one can predict exactly how it will evolve in the future, the current trends and developments paint a much different picture of how AI will become part of our lives. In reality, AI is already at work all around us, impacting everything from our search results, to our online dating prospects, to the way we shop. Data shows that the use of AI in many sectors of business has grown by 270% over the last four years. But what will AI mean for the future of work? As computers and technology have evolved, this has been one of the most pressing questions. As with many technological developments throughout history, the advancement of artificial intelligence has created fears that human workers will become obsolete. The reality is probably a lot less dire, but maybe even more complicated.

What is AI?

Before we do a deep dive on the ways in which AI will impact the future of work, it’s important to start simple: what is AI? A straightforward definition from Britannica states that artificial intelligence is “the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.” “AI” has become a catchall term to describe any advancements in computing, systems and technology in which computer programs can perform tasks or solve problems that require the kind of reason we associate with human intelligence, even learning from past processes. This ability to learn is a key component of AI.
Algorithms, like the dreaded Facebook algorithm that replaced all our friends with sponsored content, are often associated with AI. But there is a key distinction. As software journalist Kaya Ismail writes, an algorithm is simply a “set of instructions,” a formula for processing data. AI takes this to another level, and can be made up of a set of algorithms that have the capacity to change and rewrite themselves in response to the data inputted, hence displaying “intelligence.”

AI will probably not make human workers obsolete, at least not for a long time

To put some of your fears to bed: the robots are probably not coming for your jobs, at least not yet. […] Read more: www.forbes.com
<urn:uuid:55cb5767-90f0-4caf-be0a-398de97d3a52>
CC-MAIN-2022-40
https://swisscognitive.ch/2021/04/29/ai-and-the-future-of-work-and-life/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00310.warc.gz
en
0.956907
567
3.078125
3
A load balancer distributes network traffic across multiple servers to improve network and application performance. During reconnaissance, it is important to determine whether the target domain sits behind a load balancer so that penetration testing does not misdirect your probes or attacks. It is therefore recommended to check whether the domain uses a load balancer, intrusion prevention system, reverse proxies, firewalls, or content switches, as all of these can cause false results on security scans.
- A load balancer acts as a reverse proxy that distributes application or network traffic across a number of servers.
- It ensures reliability and availability by monitoring the health of applications and sending requests only to servers that can respond in a timely manner.
- Load balancers operate at the network and transport layers (IP, TCP, FTP, UDP) and at the application layer (HTTP).
Standard industry algorithms:
- Round robin is one of the simplest methods for distributing client requests across a group of servers: going down the list of servers in the group, the load balancer forwards a client request to each server in turn. It does not always result in an accurate or efficient distribution of traffic, because it assumes that all servers are the same: currently up, currently handling the same load, and with the same storage and computing capacity.
- Weighted round robin: a weight is assigned to each server based on criteria chosen by the site administrator; the most commonly used criterion is the server’s traffic-handling capacity.
- Least connections: even if two servers in a cluster have exactly the same specification, one can still become overloaded considerably faster than the other, so requests go to the server with the fewest active connections.
- Random: when the load balancer receives a large number of requests, a random algorithm can distribute the requests roughly evenly across the nodes.
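Two of the scheduling policies above can be sketched in a few lines. The server addresses and connection counts are made-up examples, not output from any real load balancer:

```python
import itertools

servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

# Round robin: cycle through the list, one request per server in turn.
rr = itertools.cycle(servers)
print([next(rr) for _ in range(5)])
# ['10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.1', '10.0.0.2']

# Least connections: pick the server with the fewest active connections.
active = {"10.0.0.1": 12, "10.0.0.2": 3, "10.0.0.3": 7}

def pick_least_connections(conns):
    return min(conns, key=conns.get)

print(pick_least_connections(active))  # 10.0.0.2
```

The sketch makes the trade-off visible: round robin needs no state about the backends, while least connections needs a live view of each server's load but adapts when one server falls behind.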
Load Balancer Check:
- The figure above illustrates that we have successfully found a load balancer on the target domain.
- Type lbd followed by the target domain name, e.g., lbd tamilrockers.pl
- We found HTTP and DNS load balancers for the tamilrockers.pl domain.
Before starting penetration testing, it is essential to perform this reconnaissance work on the target domain to detect possible network and application security devices.
<urn:uuid:1446db35-c734-4567-9e9e-88c5facdf014>
CC-MAIN-2022-40
https://gbhackers.com/load-balancer-reverse-proxy/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334528.24/warc/CC-MAIN-20220925101046-20220925131046-00510.warc.gz
en
0.86195
487
2.703125
3
An analog telephone adapter (ATA) provides an interface that allows you to use a standard telephone to communicate over an IP network such as the Internet. The name "analog telephone adapter" was originated by Cisco for their adapter, and other manufacturers may use slightly different acronyms, but essentially they mean the same thing. Conventionally, the mention of the words ‘PBX phone systems’ brings to mind expensive equipment, miles of entangled wires, and complex connections. For a large corporation with many employees, managing the PBX system requires a lot of expertise, and a company has to maintain dedicated staff for these specific tasks. Additionally, the system has to be updated periodically to meet the ever-increasing needs of the growing company. A company can avoid these constant upgrade and maintenance tasks by switching over to a hosted PBX system. IP stands for Internet Protocol. IP phones allow the user to speak over IP networks such as the Internet or a company intranet. Other common terms for IP phones include VoIP phone, Internet phone, and web phone. You're at an internet café or at the airport and get an important business call--on your laptop. You're on the road and receive an urgent voice mail--in your e-mail inbox. Your business has a phone number with a New York area code--even though your office is in Texas. Welcome to the world of VoIP. With a VoIP service, your phone calls travel over the internet as data, just as e-mail does. This type of service can dramatically lower your phone costs while increasing your productivity. It also provides a range of useful features and capabilities that conventional phone technology just cannot offer. Obviously, the minimum requirement to use VoIP is a connection to the internet. If your connection to the internet is through a standard dial-up modem, you will also need a computer to access the internet.
Keep in mind that a dial-up connection can only provide a maximum of 56 Kbps of bandwidth, which limits the quality of VoIP services. For the best call clarity, the minimum speed for VoIP phone calls ranges from 90 kbps (kilobits per second) to 156 kbps. Generally speaking, the higher your upload speed (which is really what is meant by your internet connection speed), the more reliably consistent the quality of your VoIP phone calls will be. Typically, only a broadband internet connection can provide the minimum bandwidth for VoIP calls. There are several advantages to using Voice over IP (VoIP), including the availability of advanced features that standard telephone systems are not capable of and the ability to have a phone number usually associated with a particular local area anywhere in the world. Voice over IP (VoIP) is a technology that allows voice traffic to be transmitted over a data network, such as the public internet. Using VoIP, usually in conjunction with a broadband internet connection (cable modem or DSL), it is possible to use a wide range of equipment to make telephone calls over the net.
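The bandwidth figures above translate into simple arithmetic for capacity planning. The sketch below uses the article's 90-156 kbps per-call range as the per-call cost and assumes a 20% headroom factor; the headroom value is our own assumption, and real requirements vary by codec.

```python
def max_concurrent_calls(upload_kbps, per_call_kbps=90, headroom=0.8):
    """Reserve some headroom so calls don't saturate the uplink."""
    return int(upload_kbps * headroom // per_call_kbps)

print(max_concurrent_calls(56))    # dial-up: 0 -- below the minimum
print(max_concurrent_calls(1000))  # 1 Mbps uplink: 8 calls
```

This also shows why the article singles out upload speed: each outgoing voice stream consumes uplink bandwidth continuously for the duration of the call.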
<urn:uuid:f077edad-7376-412e-bf2c-ba8ee4eab2ec>
CC-MAIN-2022-40
https://www.myvoipprovider.com/faq-topic/frequently-asked-question
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335350.36/warc/CC-MAIN-20220929100506-20220929130506-00510.warc.gz
en
0.935292
630
2.515625
3
Google’s plan to build a modular cellphone with interchangeable parts, known as Project Ara, has been suspended indefinitely. While Google may not release a device, there are rumors of Google working with partners through licensing agreements to bring the technology to market. Project Ara could potentially lay the groundwork for more sustainable device design. Modular design is an overall positive development for building a stronger hardware ecosystem, but there are still roadblocks along the way. Bryan Ma, vice president of devices research at IDC, explains these challenges. “While the idea of reducing electronic waste by using modularly upgradable phones was a noble cause, it would have had to swim upstream against the fast phone upgrade cycles seen in the smartphone market, where vendors like Apple and Samsung keep dangling new flashy goodies each year,” Ma said. While there are many concepts being proposed in the area of product design that have the potential to significantly reduce the environmental footprint of electronics, the reality of the market is that consumers will buy products based on performance and cost. If an environmentally preferred product does not provide superior, or at least comparable, performance at a comparable cost, the design concept is destined to fail. It is not that we don’t know how to design more environmentally friendly products; it is just that in many cases, these products are not seen as superior to the less environmentally friendly products that already exist in the market.
<urn:uuid:ff349249-2fbc-4690-bf34-42752d8d4cfa>
CC-MAIN-2022-40
https://hobi.com/project-ara-cancellation-demonstrates-sustainable-device-design-challenges/project-ara-cancellation-demonstrates-sustainable-device-design-challenges/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00510.warc.gz
en
0.969103
280
2.5625
3
Due to the recent high volume of phishing and scam emails, it is essential for everyone to be aware of and cautious with email messages. Please NEVER enter your credentials in a link you receive in your email. Any suspicious emails should be forwarded to Creative Tech <email@example.com>. Typically in a phishing email scam, you receive an email that appears to come from a reputable organization, such as:
- Social media (Facebook, Twitter)
- Online games
- Online services with access to your financial information (e.g., iTunes, Amazon, accounting services)
- Departments in your own organization
To protect against phishing attacks, it’s good practice not to click on links in email messages. Instead, you should enter the website address in the address field and then navigate to the correct page, or use a bookmark or a Favorite link. Phishing emails may also include attachments, which if opened can infect the machine.

How to avoid being phished

Curate your emails. Each time you notice an email from a spam source, mark it as spam so the sender can't send anything else. Never respond to emails that request personal financial information. You should be suspicious of any email that asks for your password or account information, or includes links for that purpose. Look for signs that an email is “phishy”. Some phishing emails are generic, using greetings like “Dear valued customer.” They may also include alarming claims (e.g., your account numbers have been stolen), use suspiciously poor spelling or grammar, and/or request that you take an action like clicking a link or sending personal information to an unknown address. Don’t follow links embedded in an unsolicited email. Phishers often use these to direct you to a bogus site. Instead, you should type the full address into the address bar in your browser. Don’t open or reply to spam emails, as this lets the sender know that your address is valid and can be used for future scams.
What to look for
- Spelling & grammar errors
- Sender address
- Things that sound too good to be true
- Beware of unsolicited messages
- Login pages

Use your common sense
- Always be suspicious
- Don't click on suspicious links or attachments
- Delete suspicious messages immediately
- Never respond to email requests for personal info
- Does that email look suspicious? Read it again
- Be wary of threats and urgent deadlines
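One of the telltale signs above, a link whose visible text doesn't match where it actually points, can be checked mechanically. This is an illustrative heuristic only (the example URLs are invented), and real phishing detection is far more involved:

```python
from urllib.parse import urlparse

def suspicious_link(display_text, actual_href):
    """Flag a link whose shown domain differs from its real destination."""
    shown = urlparse(display_text if "://" in display_text
                     else "https://" + display_text).hostname
    real = urlparse(actual_href).hostname
    return shown is not None and real is not None and shown != real

print(suspicious_link("www.amazon.com", "https://amaz0n-login.example.net/"))  # True
print(suspicious_link("www.amazon.com", "https://www.amazon.com/account"))     # False
```

Most mail clients expose the real destination when you hover over a link, which is the manual version of the same check.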
<urn:uuid:08739864-5470-4f72-a049-698194b74025>
CC-MAIN-2022-40
https://helpdesk.creativetech.com/hc/en-us/articles/360046577634-Phishing
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338073.68/warc/CC-MAIN-20221007112411-20221007142411-00510.warc.gz
en
0.911787
556
2.640625
3
Carrier Ethernet technology requires specific testing standards and procedures because Ethernet was not initially designed for carrier networks. These procedures include the RFC 2544 test suite, filtering, truncation and capture triggers. The RFC 2544 test suite outlines the tests required to measure and prove performance criteria for carrier Ethernet networks. The standard provides an out-of-service benchmarking methodology to evaluate the performance of network devices using throughput, back-to-back, frame loss and latency tests, with each test validating a specific part of a service level agreement. The methodology defines the frame size, test duration and number of test iterations. Once completed, these tests provide performance metrics for the Ethernet network under test. The ability to filter Ethernet traffic means capturing only the traffic that fits a specific profile, therefore making efficient use of the available memory. A filter engine offers basic and advanced filtering capabilities. In basic mode, the user can filter traffic based on a single trigger value, while advanced mode provides the ability to customize a filter using up to four triggers combined with logical operands (AND, OR, NOT). In both cases, the set of triggers includes MAC, IP, UDP and VLAN. Truncation means limiting the capture of data to a specific number of bytes and therefore capturing only the data that is necessary. Since proprietary information cannot be decoded by test equipment, network engineers will limit the capture to the header, or add more bytes to include higher-layer information, for more in-depth testing. With most typical capture tools, capture starts as soon as the tool is enabled. However, this could mean that an event occurs after the memory buffer has filled up, providing no useful information whatsoever. There are three types of capture triggers: pre-trigger, mid-trigger and post-trigger. The triggering position determines the position of the triggered frame within the captured data.
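Returning to the filter engine described earlier, its advanced mode (up to four triggers combined with AND, OR, and NOT) can be sketched as composable predicates. Frames are simplified dictionaries here, and the field values are invented; real capture tools match on raw header bytes.

```python
def trigger(field, value):
    """Single trigger: match one frame field against one value."""
    return lambda frame: frame.get(field) == value

def AND(*preds): return lambda f: all(p(f) for p in preds)
def OR(*preds):  return lambda f: any(p(f) for p in preds)
def NOT(pred):   return lambda f: not pred(f)

# Capture frames from VLAN 100 that are NOT destined for the management MAC.
flt = AND(trigger("vlan", 100), NOT(trigger("dst_mac", "00:11:22:33:44:55")))

frames = [
    {"vlan": 100, "dst_mac": "aa:bb:cc:dd:ee:ff"},
    {"vlan": 100, "dst_mac": "00:11:22:33:44:55"},
    {"vlan": 200, "dst_mac": "aa:bb:cc:dd:ee:ff"},
]
captured = [f for f in frames if flt(f)]
print(len(captured))  # 1
```

Because the operands compose, the four-trigger limit mentioned above is purely a hardware constraint; the logical structure itself nests arbitrarily.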
Specifying where the trigger event is located in a capture is useful when performing pre- and post-analysis because it is important to understand what happened before and after a failure. Traditional capture tools only provide a post-trigger position, but today’s Ethernet capture tools also provide mid- and pre-trigger positions. In pre-trigger mode, the last frame of the capture is the trigger event; therefore, the captured output contains all the frames leading up to the event. This mode can be used to determine what led to the specific event. Mid-trigger mode is a very powerful application that provides a snapshot of the traffic before and after the trigger event. In this mode, the trigger event is usually in the middle of the captured traffic. In post-trigger mode, the first frame of the capture is always the trigger, and the remaining frames are those that follow the trigger event. This mode is typically used to analyze content after the event.
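The three trigger positions map naturally onto a capture ring buffer: the number of history frames retained before the trigger event determines whether the capture is pre-, mid-, or post-trigger. The sketch below is a simplified model of that idea, not any vendor's implementation.

```python
from collections import deque

def capture(stream, is_trigger, buf_size, pre_frames):
    """pre_frames = buf_size-1 -> pre-trigger; 0 -> post-trigger; else mid."""
    ring = deque(maxlen=pre_frames + 1)  # trigger frame plus history
    post_needed = buf_size - pre_frames - 1
    it = iter(stream)
    for frame in it:
        ring.append(frame)
        if is_trigger(frame):
            out = list(ring)
            for _ in range(post_needed):  # keep frames after the event
                try:
                    out.append(next(it))
                except StopIteration:
                    break
            return out
    return []

frames = list(range(20))
# Mid-trigger: the event (frame 10) sits in the middle of an 8-frame capture.
print(capture(frames, lambda f: f == 10, buf_size=8, pre_frames=4))
# [6, 7, 8, 9, 10, 11, 12, 13]
```

Setting `pre_frames=0` reproduces post-trigger behaviour (the event is the first frame kept), while `pre_frames=buf_size-1` keeps only the history leading up to the event.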
<urn:uuid:146238d2-8993-478f-8a31-70e2f2838e88>
CC-MAIN-2022-40
https://www.exfo.com/en/resources/glossary/carrier-ethernet-testing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00710.warc.gz
en
0.921537
568
2.65625
3
Machine learning, deep learning and artificial intelligence (ML, DL and AI) are related technologies that are changing the face of how many industries manage themselves and make decisions. Clearly, they are very important and complex processes that are evolving very quickly. It is important to understand the differences between them. Unfortunately, you almost need to use one of them to do so. The map was laid out well earlier this week by Hope Reese at TechRepublic. AI is a general term that “can refer to anything from a computer playing a game of chess, to a voice-recognition system like Amazon’s Alexa interpreting and responding to speech.” ML refers to machines that use data to teach themselves, Reese writes. The highest-profile example is Google’s DeepMind, which beat the world Go champion in Korea last March. Deep learning is a subset of ML, Reese writes, that solves problems in a way that simulates human decision-making. It is important to track how these tools are developing. Al Gharakhanian, the managing director of Cogneefy, offers some very valuable, high-level perspective on trends in ML and DL at InformationWeek. He points to three high-level trends. The first is the emergence of unsupervised learning. Today, he writes, the predominant method of training ML/DL tools is supervised learning, which requires large amounts of labeled data. The nascent trend is unsupervised learning, and its big benefit is that it doesn’t require huge datasets. A second trend is the growth of generative adversarial networks (GANs). To understand GANs, it is necessary to understand discriminative models, which use labeled historical data and their accumulated knowledge to infer, predict, or categorize. Generative models rely less on stored knowledge.
They synthesize or generate ideas based on “insights gained during training.” GANs are a refinement: they are really not a new model category, but simply an extremely clever and effective way of training a generative model. This strength reduces the need for large training datasets. The third trend is reinforcement learning (RL). This is learning through experimentation and exploration. It differs from supervised learning in that it doesn’t start with preconceived notions of “how the world works,” or “good training data,” Gharakhanian writes. Of course, it’s impossible to understand these concepts from a single article. The important thing to understand is that this cutting-edge material is growing and changing almost in real time. Indeed, the time between the birth of the ideas and their use by commercial, military, governmental and other users is very short. In mid-February, for instance, Forbes highlighted two ways in which companies are using AI: While these companies dominate the headlines—and the war for the relevant talent—other companies that have been analyzing data or providing tools for analysis for years are also capitalizing on recent AI advances. Cases in point are Equifax and SAS: the former developing deep learning tools to improve credit scoring and the latter adding new deep learning functionality to its data mining tools and offering a deep learning API. This is difficult material. It is important to understand at a high level, however, as the boundaries of research and development expand. Carl Weinschenk covers telecom for IT Business Edge. He writes about wireless technology, disaster recovery/business continuity, cellular services, the Internet of Things, machine-to-machine communications and other emerging technologies and platforms. He also covers net neutrality and related regulatory issues. Weinschenk has written about the phone companies, cable operators and related companies for decades and is senior editor of Broadband Technology Report.
He can be reached at [email protected] and via Twitter at @DailyMusicBrk.
To those who are not immersed in the complex and constantly evolving world of IT, many of the roles filled by tech experts might appear to be the same. However, as these positions become increasingly relevant for companies, it is crucial to understand the difference between jobs and why they might be needed.

In this article, we’re talking about the roles and responsibilities of system administrators and security administrators. Though the names and jobs are similar, there are distinct differences in these IT-focused administrator roles.

System administrator is often shortened to the buzzy title of sysadmin. More formally, some companies refer to their sysadmin as a network and computer systems administrator. A security administrator, on the other hand, can have several names, including security specialist, network security engineer, and information security analyst. As always, the job title is less important than the specific roles and responsibilities that a company may expect from the position.

What’s a system administrator (sysadmin)?

Computer networks are crucial to business, and they require a dedicated employee or several employees to manage the day-to-day operations of the network. That’s where system administrators come in.

A system administrator—often shortened to sysadmin—is an IT professional who supports the computing environment of a company and ensures the continuous and optimal performance of its IT services and support systems. Sysadmins are essentially in charge of “keeping the lights on” for the organization, in turn limiting work disruptions.

Roles & responsibilities

As their responsibilities focus on daily network operations, system administrators are charged with a wide swath of computer work: organizing, installing, and supporting the computer systems, which can include local area networks (LANs), wide area networks (WANs), network segments, intranets, and other data communication systems.
Several metrics—like uptime, performance, resources, and security—can help a sysadmin determine that the system meets the users’ needs within the company’s budget.

Responsibilities of a system administrator may include:
- Anticipating needs of the network and computer systems before setting them up
- Installing network hardware and software
- Ensuring and implementing upgrades and repairs in a timely manner
- Maintaining network and computer system security
- Understanding and solving problems as automated alerts occur
- Collecting data to help evaluate and optimize performance
- Adding and assigning users and network permissions, as determined by the organization
- Training users in proper use of hardware and software

Peers & reporting

A system administrator likely reports to an IT department head. Unlike some IT positions, sysadmins have a unique responsibility to communicate and problem solve with colleagues both within and beyond the IT team. Because a sysadmin solves problems for and trains all users, including non-IT employees, communication is imperative.

System administrator job skills

In terms of skills needed, sysadmins need to know a little bit of everything. Beyond formal education, strong system administrators will need to possess several vital skills, including analytical, communication, multitasking, and problem-solving skills. System administrators must also have skills such as:
- Leadership abilities
- Hardware knowledge
- Strong management skills
- Attention to detail
- Critical thinking
- Basic programming skills

Education & requirements

Some businesses may require that a system administrator hold a BS in a computer-related field, though some companies may only require a post-secondary degree. Specific training and certifications alongside hands-on experience can strengthen a candidate’s position, especially when he or she hasn’t earned a BS.
Common training and certifications for system administrators are offered by Microsoft and Cisco, including the Microsoft Certified: Azure Administrator Associate and the Cisco Certified Network Associate (CCNA) certification.

The Bureau of Labor Statistics (BLS) projects that the employment of system administrators will grow by 5% by 2030, a rate that is slower than the average growth rate across all national occupations. Despite this limited growth, it is still estimated that there will be close to 25,000 job openings a year for network and computer systems administrators. It is also projected that the demand for IT workers should continue to grow as companies invest in newer, faster, and more advanced technology. The median annual wage for network and computer systems administrators was $84,810 in May 2020.

What is a security administrator?

The information stored within computers and infrastructure is crucial to business. In turn, security is of the utmost importance—particularly today, when both individuals and sovereign nations launch cybersecurity attacks. Security administrators are employees who test and protect the hardware, software, and data within the computer networks, ensuring they remain secure.

A security administrator is the lead point person for the cybersecurity team. They are typically responsible for the entire system and ensure that it is defended as a whole. They will often install, administer, and troubleshoot an organization’s security solutions, and then make certain the system is kept secure from any type of outside, or inside, threat.

Roles & responsibilities

Where a system administrator knows a lot about many sectors of IT, a security administrator specializes in the security of the computers and networks. In general, computer security, also known as IT security or cybersecurity, includes protecting computer systems and networks from the theft of, and damage to, hardware, software, or information.
It also includes preventing the disruption or misdirection of these services. This should include knowledge of specific security devices, like firewalls, as well as the security implications of technologies such as Bluetooth, Wi-Fi, and the IoT. This also includes general security measures and an ability to stay abreast of new developments in the security sector.

Specific roles and responsibilities of a security administrator may include:
- Monitoring networks for security breaches, investigating violations as they occur
- Developing and supporting organizational security standards, best practices, preventative measures, and disaster recovery plans
- Conducting penetration tests (simulating cyberattacks to find vulnerabilities before others can find them)
- Reporting on security breaches to users, as necessary, and to upper management
- Implementing and updating software to protect information
- Staying up to date on IT security trends and information
- Recommending security enhancements to management and C-suite executives

Peers & reporting

Due to the necessity of network and data security, security administrators often report directly to upper management, which could be a CIO or CTO. Security administrators frequently partner with sysadmins to implement new changes to the network for security purposes.

Education & requirements

At a minimum, security administrators are expected to hold a BS in computer science, programming, or a similar field. Some companies prefer to hire candidates who hold an MS in computer systems or an MBA in information systems. In addition, companies frequently prefer candidates who are certified in specific security fields. A common certificate is the Certified Information Systems Security Professional (CISSP), offered by the International Information Systems Security Certification Consortium (ISC)². The CISSP is one of the most sought-after cybersecurity certifications and it is designed to prove the candidate’s deep expertise in the field.
Other top cybersecurity certifications focus on more specific areas, such as systems auditing or penetration testing.

Security administrator job skills

Work skills are just as important as formal education for the role of a security administrator. Candidates should be detail-oriented and analytical, as security vulnerabilities are often tiny, hard-to-notice parts of a program or network. Problem-solving and communication skills are necessary as well, especially when training or helping non-IT colleagues. It is also important for security administrators to have:
- Strong leadership capabilities
- Technical expertise and experience with the ability to develop a security plan, coordinate and implement it, and monitor the IT environment
- A dedication to a collaborative approach and mindset
- An understanding of regulatory standards and how to ensure the business achieves compliance

The BLS anticipates significant growth in the security administrator role, predicting employment will expand by 33% by 2030. This growth rate is much faster than the average for all occupations nationwide, which is currently sitting at 8% from 2020 to 2030. As our economy relies more on hardware, software, and information, the need to protect them grows exponentially. With this, the need for security analysts and administrators will continue to be extremely high. As cyberattacks grow in frequency and complexity, it will be crucial that these professionals be able to come up with innovative and effective solutions. The median annual wage for information security analysts was $103,590 in May 2020.

Security vs system administrators: Both critical

Whether you are a company looking for assistance with IT and security, or you are looking for a new role, understanding the difference between system administrators and security administrators can be an important factor in ensuring all company needs are being met.
The future of these jobs is secure, and the need for strong IT professionals will only continue to grow.
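Much of the day-to-day monitoring work described in this article gets scripted, which is why basic programming skills appear among the sysadmin requirements. As a flavor, here is a minimal disk-capacity check of the kind a sysadmin might schedule; the 90% alert threshold is an arbitrary example, not a standard:

```python
# Minimal disk-capacity check of the kind a sysadmin might run on a
# schedule. The 90% alert threshold is an arbitrary example value.
import shutil

def disk_usage_percent(path="/"):
    """Return how full the filesystem holding `path` is, as a percentage."""
    usage = shutil.disk_usage(path)
    return 100.0 * usage.used / usage.total

def check_disk(path="/", threshold=90.0):
    """Produce a one-line status suitable for a log or an alerting hook."""
    pct = disk_usage_percent(path)
    status = "ALERT" if pct > threshold else "OK"
    return f"{status}: {path} at {pct:.1f}% capacity"

print(check_disk("/"))
```

A real deployment would run such checks from cron or a monitoring agent and feed the alerts into a ticketing or paging system rather than printing them.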
Smart apps have been built primarily to provide consumers with enthralling functionalities which encourage convenience, ease of use, real-time services and many other benefits. Developers essentially want to please customers with the motive of making their app successful. This often leads to giving less importance to the security of the application and could jeopardize a customer’s personal or private identity. In this article, we take a look at why a simple functionality like caching sensitive data can cause a world of trouble for businesses if not done right, or if done at all.

There have been many instances in the past where hackers have been able to weave their way into the personal information of users through various loopholes in applications, without users even being aware, destroying businesses in the process. One such way is when sensitive user data is cached within the app. Whether financially or competitively motivated, hackers will stop at nothing to achieve their objective, especially when there is a functionality as easy to exploit as cached sensitive data.

How hackers can use cached sensitive data to exploit your business

1. Caching web application data may result in URL histories, HTTP headers, HTML form inputs, cookies, transaction history and other such web-based data easily being exposed. Although not as easy to access on mobile as on the web, mobile applications still open multiple entry channels by storing cached information.

2. Words entered by a user via the keyboard are stored in the Android user dictionary for future auto-correction. The user dictionary is available to any app without requiring any permission, and this could lead to sensitive data being leaked. Recorded passwords and usernames from one app can sometimes be exploited by other apps.

3. Apps may cache camera images which remain available after the app has finished.
Cached images pose a threat of leaking personal and private information to hackers, which could ruin not only a company’s reputation but also the personal identity of an individual. The recent iCloud hack revealed the personal and private images of many celebrities, giving the general public access into their lives. Other threats that could arise out of this are the bullying and blackmailing of an individual.

4. Application screens retained in memory enable transaction histories to be viewed by anyone with access to the device who can directly launch the transaction view activity. Malicious applications are sometimes created and launched by hackers. These apps can read data from the retained screens of another application, which sometimes hold payment transaction history, account numbers and similar details.

If you think you are really making things convenient for consumers by caching their data, think again: there is a bigger price to pay that no convenience can compensate for. Convenience can take you only so far; accountability for consumer privacy and security is a key ingredient in making you successful in the long haul.
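One way to act on the warnings above is to make the cache layer itself refuse sensitive fields, rather than trusting every caller to remember the policy. A minimal sketch in Python follows; the deny-list and field names are illustrative inventions, not from any specific mobile framework:

```python
# Sketch of a cache layer that refuses to persist sensitive fields.
# The deny-list and field names below are hypothetical examples.

SENSITIVE_KEYS = {"password", "card_number", "account_number", "session_token"}

class SafeCache:
    """In-memory cache that silently drops keys marked as sensitive."""

    def __init__(self):
        self._store = {}

    def put(self, key, value):
        # Never cache anything on the deny-list; callers must re-fetch
        # sensitive values from the authoritative source each time.
        if key in SENSITIVE_KEYS:
            return False
        self._store[key] = value
        return True

    def get(self, key):
        return self._store.get(key)

cache = SafeCache()
cache.put("username_display", "alice")   # cached
cache.put("card_number", "4111-0000")    # refused
print(cache.get("card_number"))          # None
```

On real devices, secrets that genuinely must persist belong in the platform keystore (for example, the Android Keystore or iOS Keychain), not in application caches.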
When staying safe online, it’s important not to go “out of bounds” for communication. Simply put, going out of bounds can be a recipe for your users falling victim to a phishing attack. For example, if you or your users are buying something on eBay, stick to eBay for bidding, negotiating, and payment.

Because it's Cybersecurity Awareness Month, I wanted to also share a quick video explaining why it's so important to stay in bounds and what can happen if you don't:

Criminals and scammers want nothing more than to take communications with your users out of bounds. The reason is that going out of bounds strips away any protection the platform is offering.

Let’s look at our eBay example. eBay holds a lot of personal information such as your name, address, phone number, email address and your browsing history -- and through PayPal, it also has bank account information. However, all of that information is shielded from other parties, and only the relevant information is revealed to them when it is necessary. Even then, certain information such as a bank account or credit card number will always remain confidential. If the other party reneges on a deal, or sends the wrong goods, or there is any other form of dispute, then eBay offers a certain amount of protection and can help refund or block any further communications.

Compare this to what happens when the bad guys take communication with your users out of bounds. All the protections have been removed, so not only does the other party need access to your user’s details to proceed with any transaction, they can also send malicious links through other channels, resulting in your users being phished. And if there is a dispute, there is no arbitration available.

It can be tempting to go out of bounds; after all, sometimes the platform takes a commission from each sale, and you may want to avoid that. But is saving a few pence really worth potentially losing a lot more to a fraudulent transaction?
But this isn’t just restricted to commercial transactions. We see this kind of activity occur in the corporate world all the time. Sometimes it’s just a colleague asking for a favour, wanting something done without going through the trouble of raising a ticket with IT. While this can be an innocent request, if it comes from a criminal, then there is no audit trail of where the request originated from and why it occurred.

Processes and official communication channels are often put in place for a reason, and staying within the confines of those channels offers protection for all parties involved. Therefore, at all times, it’s important for your users to stay within bounds and avoid unauthorized channels, regardless of whether that’s to buy or sell something online or to interact with colleagues in the office.
There’s a lot to “green” in the global IT industry. A growing number of leading IT companies active in business lines spanning the entire industry value chain — from chip fabrication and equipment manufacturing to retailing consumer electronics — are now putting themselves at the forefront of the green tech wave.

For years, tech companies have been making use of a host of toxic materials, releasing a range of pollutants into the air, land and water, and employing energy-intensive processes to put out product that leaves hectares of e-waste upon disposal. The potential is there for the IT industry to make a huge difference in efforts to reduce greenhouse gas emissions and environmental degradation.

Energy demand is expected to grow 2 percent per year through 2030, rising from 400 to more than 700 quadrillion BTUs (British thermal units) per year, according to the base case reference scenario built into the U.S. Energy Information Administration’s International Energy Outlook 2006. Developing countries account for most of the possible increase. Today, the developed world, as represented by the Organization of Economic Cooperation and Development, uses half the world’s total energy supply to produce half its GDP (gross domestic product). More than 80 percent of the world’s population is projected to live in developing countries by 2030.

The good news is that green tech has become a priority for a growing number of IT companies. Chip fabs and high-tech corporate parks are making use of electricity from photovoltaic cells, hydrogen fuel cells and biogas generation projects. Data centers are making use of virtualization and new heating and cooling systems to reduce electricity usage, reduce emissions and save space. Wireless sensor networks are being used to minimize the use of natural resources, such as water, by monitoring machines and facilities on demand.
Reinvigorated efforts to reduce the mountains of e-waste accumulating in toxic hotspots around the world through equipment take-back, reuse and recycling programs have also been set in motion.

It’s not easy to minimize resource use and polluting emissions. With increasing economic activity comes growing demand for more polluting, energy-intensive products and services. Due to a combination of factors — newer, better technology and rising fuel costs high up on the list — the economics of increasing energy efficiency and reducing greenhouse gas emissions has improved steadily during recent decades. While the goodwill value of “going green” has appreciated considerably, more IT companies are finding that doing so can cut costs and improve the bottom line. That’s proving to be the biggest motivator.

“Green technologies are becoming very hot, especially in the West and migrating to the East as well. Many global companies view green initiatives as necessary to stay competitive today, which is quite a change from the philosophical viewpoints from a decade ago,” observed Mareca Hatler, director of research at ON World.

Private sector investment in clean tech and renewable energy has passed a growth curve inflection point and is accelerating because of better economics and government incentives. “Investments in alternative energy sources are expected to grow to between US$6.2 billion and $8.8 billion by 2009 as decades of U.S. and European research and development mature,” according to the Cleantech Venture Network.
There is substantial interest in the clean tech/energy sector that is being driven by several factors: rising gas and oil prices; increased awareness of global warming; a commitment by several large global corporations to go green, including GE, Wal-Mart, Dow and BP, among others; and increased incentives and policies by governments in the U.S., Europe and Asia to support the development of clean technology, said Pradeep Halwar, director of the Energy and Environmental Technology Applications Center at the College of Nanoscale Science and Engineering (CNSE).

“We see growing interest among leading global high-tech companies in pursuing clean tech research, often in collaboration with CNSE’s Energy and Environmental Technology Applications Center. These industry-university partnerships at CNSE will serve to advance research and commercialize alternative and renewable energy technology more rapidly for the benefit of both industry and consumers,” he added.

As it has been for the IT industry, Silicon Valley has become a center for clean tech and renewable energy capital and finance. “Much of the activity and interest is centered in Silicon Valley,” Kevin Brosnahan, spokesperson for the U.S. Dept. of Energy’s Office of Energy Efficiency and Renewable Energy, told TechNewsWorld. “Technology venture capitalists are often quoted as viewing clean energy technology as the ‘next big thing.’”

Green Tech and the Developing World

A lot has been made of the potential developing nations have in terms of taking advantage of the latest generation of technology to “leapfrog” phases of the development cycle and launch themselves into the 21st century. In large degree, however, these nations are simply making the same mistakes as their predecessors in order to grow their economies as quickly as possible.
They continue to measure progress using traditional neoclassical economic measures that ignore the public costs associated with the degradation of critical natural resources, including clean air, water and arable land. If emerging market countries are to avoid the mistakes developed industrialized nations have made in the past, they will need to take a more imaginative, forward-looking approach to energy and natural resource use — and they’ll need help.

Transnational IT companies are typically among the largest sources of fixed capital investment in developing countries. As such, they have the potential to be natural leaders when it comes to promoting clean tech, sustainable energy and resource management. Though Africa, for instance, currently accounts for only 1.4 percent of global carbon emissions, foreign investment has been pouring into countries across the continent. While demand for increasingly costly natural resources is a primary driver, there also seems to be a growing realization among both investors and recipients that more has to be devoted toward developing domestic infrastructures and economies that are sustainable and do not irresponsibly compromise the natural heritage that is Africans’ birthright and draws millions to the continent every year.

Increasing environmental awareness and promoting efficient, sustainable resource use is a big part of the equation if efforts to conserve these natural resources and minimize environmental degradation are to have any chance of succeeding. Leading IT companies, including IBM, are playing a growing role in this effort.

Best Practices Applied Worldwide

“IBM is uniquely positioned to work in all industries and all sectors bringing thought leadership and industry best practices,” Maureen J. Baird, business development, winback and solutions executive at IBM South Africa, told TechNewsWorld. “Research has shown that there is a direct correlation between economic growth and carbon emissions.
As economic growth accelerates in Africa, IBM is positioned to help companies understand their current carbon footprint, to help customers develop their policies and governance models for carbon footprinting and to identify areas for immediate carbon footprint improvements,” Baird said.

IBM has devised the IBM Carbon House Solutions toolkit to help customers find answers in what it has identified as seven key areas related to carbon footprinting: strategy, customer and product, supply chain, people, IT, property and information. The company is not just evangelizing, however; it is practicing what it preaches. “IBM South Africa is responsible for managing, evaluating and reporting on our internal green initiatives and our environmental affairs,” Baird noted.

In its Country Annual Environment Report, IBM’s operations are measured and assessed according to a range of environmentally related criteria: climate and energy, air, waste management, ground and water, chemical management, regulatory activity, environmental cost and supplier evaluation.

IBM’s Big Green program extends to its suppliers and service providers, multiplying its impact even further. “Supplier evaluations are done every second year by procurement, under the guidelines of CI109 (Corporate Instruction 109). Suppliers are required to have certain certificates and permits, completed an environmental impact assessment and require insurance [coverage] for environmental impact damage control,” Baird elaborated.
What if your cells could actually see?

Today, scientists at IBM Research’s Almaden lab in the Silicon Valley are pioneering a new discipline – cell engineering – that uses cells as sensors to transmit new information to researchers about disease and the environment. In collaboration with the University of California-San Francisco, our lab will play a major role in the new Center for Cellular Construction, an initiative that the National Science Foundation has funded to the tune of $24 million. The grant also includes researchers from San Francisco State University, Stanford University, UC Berkeley and the Exploratorium science museum.

Researcher Simone Bianco holds a scale model of a normal cell, left, and a cancerous cell, right, as he describes his work at IBM Research-Almaden. (Photo credit: Patrick Tehan/Bay Area News Group)

“Our collaboration will produce an unprecedented amount of data,” said Simone Bianco, research staff member, IBM Research – Almaden. “Using cells as sensors, we can develop cognitive maps that help us understand the relationships between cell structures and functions that IBM Watson can further analyze. In a sense, we can use cells to give Watson ‘microscopic eyes’ so we can better understand cellular behavior in different conditions, from complex environments to human diseases.”

With “microscopic eyes,” IBM Watson can analyze hundreds of thousands of microscopic images to help accelerate the pace of research in biology. Cell engineering combines several existing disciplines — from physics and biochemistry to cell biology, mathematics and computer science — and relies heavily on massive amounts of data. Watson can help uncover relevant, sometimes “invisible,” features of the data with unprecedented speed and accuracy.

“If we understand how cells structure themselves in normal conditions, we can use this information to infer any abnormal state,” said Bianco.
“For example, the exposure of a bacterium to a toxin may cause it to shrink, while exposing it to a gas may make it bigger. Without even knowing what causes the changes, and with our machine learning algorithms, we now have a way to better study abnormal conditions. The information can help with a number of applications, from uncovering new advances in disease therapeutics to increases in efficiency of biochemical reactions.”

IBM Research will drive several aspects of the new center, including:

Computer Aided Design (CAD): Researchers aim to create a computational platform — modeled on the CAD software used by mechanical engineers — to aid researchers around the world in precisely and predictably designing cells and multicellular structures with desired functions based on knowledge of their inner workings.

Cell State Inference Engine: The center will design image analysis software to enable researchers to use cells as living sensors of environmental conditions — with real-world applications such as monitoring air pollution or serving as industrial quality control sensors.

“It is another unusual feature of our center to have a major industrial partner as a core member of the center itself,” said Wallace Marshall, PhD, a professor of biochemistry and biophysics at UCSF who will serve as director of the new center. “We feel it is an important component for getting knowledge into the real world, while being able to take advantage of the major strengths of IBM in big data analysis.”
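The kind of inference Bianco describes, flagging an abnormal state purely from a change in measured cell morphology, can be illustrated with a toy example. The baseline measurements, units and z-score threshold below are invented for illustration; the center's actual models are far more sophisticated:

```python
# Toy illustration of inferring an abnormal state from cell-size
# measurements alone. Baseline values and the z-score threshold are
# made-up numbers for demonstration, not real biological data.
from statistics import mean, stdev

baseline_areas = [10.2, 9.8, 10.5, 10.1, 9.9, 10.3]  # hypothetical normal cell areas
mu, sigma = mean(baseline_areas), stdev(baseline_areas)

def classify(observed_area, threshold=3.0):
    """Flag a cell as abnormal if its area deviates strongly from baseline."""
    z = (observed_area - mu) / sigma
    if z <= -threshold:
        return "abnormal: shrunken (e.g., possible toxin exposure)"
    if z >= threshold:
        return "abnormal: enlarged (e.g., possible gas exposure)"
    return "normal"

print(classify(10.0))  # within baseline variation
print(classify(6.5))   # far below baseline
```

Real pipelines would extract many such morphological features from segmented microscope images and feed them to trained models rather than a single threshold.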
The shift of daily production workloads from on-premises to cloud has also impacted approaches to the protection of business assets and operations. When it comes to protecting critical information and achieving the ability to recover following a disruptive event such as a natural disaster, power outage, technical failure, or cyber incident, the methods and innovations of recovery have evolved to meet the demands of modern business and customers. One of the more recent evolutions under the umbrella of business continuity is Disaster Recovery as a Service (DRaaS).

The origins of disaster recovery

Disaster recovery (DR) services began to be offered in the 1970s, as a means of preparing a business for possible disaster events. Leaders recognized that keeping paper copies of critical information or purchasing insurance was not enough to protect against loss, since some IP can’t be replaced. So, IT groups developed tape backups to place valuable information in an off-site vault.

But the scope of disasters spread, and IT groups needed to account for regional disasters, like hurricanes and tornadoes. They needed the ability to recover into a geographic region that was unaffected by the disaster. This gave rise to data centers specifically built as “timeshares”. They provided the organization a data center and a set of hardware to use during a disaster. Most times this meant traveling to the out-of-region data center and bringing tape copies of all your data. Imagine driving for three and a half hours to your recovery data center, only to discover no one thought to bring the tapes. This happened to us once; thank goodness it was only a test.

Eventually, driving to a different region to retrieve physical copies of backups or conduct a failover of operations for recovery became too cumbersome and time-consuming. Technologies advanced, allowing organizations to send copies of their data to other data centers without the need for tapes.
Thus, DR emerged with the intent to recover quickly from disruptions in a technology-to-technology transfer from one server to another.

The emergence of DRaaS

As the tactics of recovery grew to accommodate even more scenarios of downtime, especially in the 2000s with innovations in architecture and server virtualization, testing also grew in complexity and scope. The more systems and data changed in any given day, the faster and more frequent replication needed to be to capture those changes. Because all this led to increased complexity across the board, Disaster Recovery as a Service (DRaaS) emerged as an option to offload some of the cumbersome activities of managing the DR solution to a third party in a remote location.

In the days in which DRaaS was born, it was not unusual for companies to maintain duplicate sets of hardware in an off-site location. Yes, they could replicate the data from their production site to the off-site location, but the expense of procuring and maintaining the secondary site was prohibitive. This led many to use the secondary location for old and retired hardware, or even to use less powerful computer systems and less efficient storage to save money.

DRaaS is essentially DR delivered as a service. Expert third-party providers delivered tools or services, or both, to enable organizations to replicate their workloads to data centers managed by those providers. This cloud-based model allowed for greater agility than previous iterations of DR could easily provide, empowering businesses to run in a geographically different location as close to normal as possible while the original site was made ready for operations again. And technology improvements over the course of the 2010s only made the failover and failback process more seamless and granular.
The move from hosted to hyperscale clouds

There is currently a shift in the marketplace: DRaaS providers are increasingly interested in delivering their solutions in hyperscale cloud environments such as AWS or Azure, and not just in hosted environments in data centers. Right now there is only one DRaaS provider with such a targeted offering, but more providers will soon have solutions available.

Furthermore, companies with cloud-first strategies have started to recognize that DRaaS can even serve as a first step into the cloud, providing a lower-stakes environment for staff to become acquainted with the intricacies of cloud before making a full production shift. This approach helps to relieve skills gaps and skepticism across the IT department, demonstrating an early win to board members that the cloud-first journey is on track. Plus, the replication tools DRaaS uses are sometimes the same tools used to migrate workloads to production environments, which lets the technology team practice its cloud migration strategy.

Defining your strategy to find the right option

The story of DRaaS is one of a shifting threat landscape and new innovations to meet those challenges. This evolution has come in tandem with new customer expectations of uptime: a business can no longer shut its doors for days or weeks to recover from a disaster, as this will result in loss of revenue and reputation.

If you are considering whether DRaaS is the right solution for your business, do your research to determine which cloud, technology, and resiliency service model is right for you, given your long-term organizational goals. There are three DRaaS service models (self-service, partially managed, and fully managed), each shifting more work from your team to the DRaaS provider. The more you shift, the more your team can focus on adding value to your business.
https://www.helpnetsecurity.com/2021/09/24/draas-evolution/
Just about every information security standard or regulation contains requirements for the use of multifactor authentication when restricting access to sensitive information or resources. These requirements may seem daunting at first, but complying with them, and thereby strengthening your agency's security posture, is a goal that is clearly within reach.

Multifactor authentication is a straightforward concept in which a system confirms a user's identity to a high degree of confidence by using more than one type of proof. For example, if you pass through a turnstile on your way into the office that requires you to both present your identification card and enter a secret personal identification number (PIN), you're using multifactor authentication.

Identification, Authentication and Authorization

Computer systems, file storage, cloud services and other resources all require access control systems to limit their use to approved individuals. Whether it's an agency allowing only specific IT leaders to access confidential data or a cloud video streaming service restricting access to paid subscribers, the three components of access control remain the same: identification, authentication and authorization.

Identification is the process in which a user makes an assertion about his or her identity. In most cases, this is as simple as entering a user name into the system. In the offline world, it's the equivalent of walking up to someone and saying "Hi, I'm Mike Chapple." At this point in the process, the other party has absolutely no assurance that the claim of identity is authentic. I could just as easily walk up to someone and say "Hi, I'm Barack Obama," just as I could attempt to log into a computer system with a coworker's user name.

Authentication is the process in which a user proves that he or she is who he or she claims to be.
In the case of a basic access control system, the user might do this by providing a secret password known only to the user and the service. Returning to the offline example, I could authenticate my claim of identity by showing the other person my driver's license. Different authentication techniques offer different degrees of confidence: you will likely be more confident that I am who I claim to be if I show you my driver's license than if I simply tell you who I am.

Authorization occurs after an access control system authenticates the user's claim of identity. Once the system is confident that it is dealing with a legitimate user, it must determine what resources or services the user is permitted to access. For example, an individual from an organization's accounting department should not be able to access human resources records, and vice versa. This is the role of authorization.

Understanding these three concepts is critical to understanding how access control systems work. Each is a discrete process with a specific purpose, and IT professionals must understand the goals of specific components of access control systems when implementing them.

You're already familiar with the most common method of authentication: a user name and password. More likely than not, you used this method to access your computer this morning and will use multiple passwords throughout the day to access other systems. While passwords are clearly the predominant authentication technique, there are actually three different categories of authentication methods:

- Something you (and only you!) know: Passwords are the most common example of this authentication factor, but they're not the only one. Something you know could also include the answer to a security question, a PIN, or any other secret information. The critical characteristic of a strong "something you know" authentication factor is that it must be known only to the user and not easily guessed.
- Something you have: Another means of authentication involves asking the user to present something that only he or she possesses. Common examples of this authentication factor are a smartcard, security token or identification badge.
- Something you are: The final authentication factor relies upon unique biological characteristics of the individual. These techniques, known as biometric authentication, can include fingerprint scanning, iris recognition or voice analysis.

The strength of an authentication factor depends upon the answers to two questions: How hard is it for someone to impersonate another individual, and how difficult is it to reuse someone else's credentials if you eavesdrop on their authentication session? Each of the three authentication factors has inherent strengths and weaknesses. Passwords can be guessed, identification cards can be stolen and voiceprints can be recorded. For this reason, many security standards recommend the use of multifactor authentication: the combination of authentication factors from more than one of the categories described above.

It is very important to understand that multifactor authentication does not simply mean that you are using more than one authentication technique. It requires the use of factors from different categories. For example, requiring a user to answer a secret question and enter a password is not multifactor authentication; rather, it is an example of using two factors from the "something you know" category. Here are some common examples of multifactor authentication:

- An identification card (something you have) and a PIN (something you know)
- A fingerprint scan (something you are) and a password (something you know)
- A security token (something you have) and a password (something you know)

When used in combination, multiple authentication factors add a greater degree of security to a system by minimizing the likelihood that an intruder will be able to compromise more than one technique.
While someone can pick your pocket to get your ID card and look over your shoulder to obtain your password, it's much more difficult to do both without attracting your attention.

Two Isn't Always Better Than One

One final word on multifactor authentication: it's not always the way to go. While the multifactor approach provides greater security than single-factor authentication in most cases, this is not always true. For example, iris recognition is fairly foolproof. Unless an intruder can somehow steal your eyeball (something you guard with your life!), they won't be able to defeat this authentication technique. This single authentication factor would likely be stronger than a two-factor approach requiring an ID card and PIN. When you evaluate potential multifactor security solutions for your environment, keep this in mind. While security regulations might require you to use a multifactor approach, you should always consider the strength of each component as well.
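To make the "factors from different categories" idea concrete, here is a minimal sketch of a two-factor check combining something you know (a password) with something you have (an HOTP one-time code per RFC 4226, the kind a hardware token or phone app generates). The user record and names are hypothetical, for illustration only, and a real system would use a proper password-hashing scheme rather than plain SHA-256:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password: the 'something you have'
    factor, normally computed independently by a token or phone app."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                  # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def authenticate(username, password, otp, user_db, counter):
    """Two-factor check: the password (something you know) AND the
    one-time code (something you have) must both verify."""
    record = user_db.get(username)                           # identification step
    if record is None:
        return False
    pw_ok = hmac.compare_digest(
        hashlib.sha256(password.encode()).hexdigest(), record["pw_hash"])
    otp_ok = hmac.compare_digest(hotp(record["secret"], counter), otp)
    return pw_ok and otp_ok

# Hypothetical user record for illustration.
db = {"mchapple": {
    "pw_hash": hashlib.sha256(b"correct horse").hexdigest(),
    "secret": b"12345678901234567890"}}

print(authenticate("mchapple", "correct horse",
                   hotp(b"12345678901234567890", 0), db, 0))  # True
print(authenticate("mchapple", "correct horse", "000000", db, 0))  # False
```

Note that the security-question-plus-password anti-example from the article would not qualify here: both checks would draw on the "something you know" category.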
https://fedtechmagazine.com/article/2012/01/multifactor-authentication-made-simple
Reducing Hazardous Situations by Using the Proper Camera Enclosures

An oil rig can be a scary place. On January 22, 2018, fear turned into reality when an oil rig in eastern Oklahoma exploded, leaving many dead and injured. The accident was the result of an uncontrolled release of gas that caught fire.

It is essential to take every precaution to prevent accidents. Flammable contaminants in the air can cause explosions; gas and dust are the usual culprits. If the concentrations are high enough, it takes just one spark to cause a catastrophic explosion. This article reviews the certifications and standards that define what type of explosion-proof camera system should be used.

A hazardous location (HAZLOC) is an area in which the atmosphere contains sufficient quantities of flammable or explosive gases, dust, or vapors. For example, a dangerous environment could be found on:

- Oil and gas platforms
- Oil and gas drilling rigs
- Oil tankers
- Refineries and petroleum terminals
- Military facilities
- Chemical and pharmaceutical factories
- Food processing plants, as many solid food and beverage ingredients produce explosive dusts, including sugar, starch, flour, spices, tea, grain, and proteins

The presence of flammable gases or combustible dust doesn't necessarily define a hazardous area; the quantity or concentration must be sufficient to create the potential hazard. The standards are based on the likelihood of an explosion. It's all in the statistics: areas that contain gases or dust for an hour per year are less hazardous than an area that contains dangerous gas levels all the time.

The Standards that Rate Explosion Environments

A hazardous area is defined by someone knowledgeable who is assigned to mark an area with a specific level of explosion probability. This Authority Having Jurisdiction (AHJ) can be a Fire Marshal, operational risk assessment engineer, occupational safety authority, or even an insurance underwriter.
Once the area is labeled, all equipment, including IP cameras, must conform to protection methods that mitigate the risk of explosion. Several standards are used to define hazardous areas. The US, Europe, and the UK have similar specifications; they distinguish environments by class, division, and zone.

In the USA, the National Electrical Code (NEC) provides the markings defined by "divisions" and "zones."

Division 1: an area where ignitable concentrations of flammable gases, vapors, or liquids can exist most of the time (over 10 hours per year) during normal operation.

Division 2: an area where ignitable concentrations of flammable gases, vapors, or liquids are not likely to exist under normal operating conditions (under 10 hours per year).

The UL 60079 specification uses "zones" to provide more detail about the hazards of an environment:

Zone 0: an area where ignitable concentrations of flammable gases, vapors, or liquids are present continuously or for long periods (over 1,000 hours per year) under normal operating conditions.

Zone 1: an area where ignitable concentrations of flammable gases, vapors, or liquids are likely to exist for between 10 and 1,000 hours per year.

Zone 2: an area where ignitable concentrations of flammable gases, vapors, or liquids are not likely to exist under normal operating conditions (less than 1 hour per year).

Explosion-Proof Enclosures and Cameras

There are explosion-proof enclosures (to which you add a camera) and explosion-proof camera systems (that include the IP camera). In many cases it's best to purchase a complete system (with the camera) because, according to UL, the cameras must be installed in the enclosure in a UL-approved facility. The international and European standards don't have this requirement, so it's possible to purchase just the enclosure and use your own camera. It's important to note, though, that the camera should also be approved for use in this type of application.
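The zone thresholds above lend themselves to a simple lookup. The helper below is an illustrative sketch, not part of any standard's text; it classifies an area by the expected hours per year that an ignitable gas atmosphere is present, using the thresholds quoted in this article (the article leaves a small gap between its Zone 1 and Zone 2 wording, so the sketch uses 10 hours/year as the dividing line):

```python
def gas_zone(hours_per_year: float) -> int:
    """Map expected hours/year of ignitable gas presence to a zone number,
    using the thresholds quoted in this article:
      Zone 0: over 1,000 h/yr (present continuously or for long periods)
      Zone 1: roughly 10 to 1,000 h/yr (likely under normal operation)
      Zone 2: under 10 h/yr (not likely under normal operation)
    """
    if hours_per_year > 1000:
        return 0
    if hours_per_year >= 10:
        return 1
    return 2

# A wellhead area with gas present ~2,000 h/yr calls for Zone 0 equipment;
# a ventilated pump room with rare leaks (~5 h/yr) can use Zone 2 equipment.
print(gas_zone(2000), gas_zone(500), gas_zone(5))  # 0 1 2
```

The actual zone designation is always made by the AHJ after a risk assessment; a lookup like this only illustrates how the probability thresholds relate.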
IP Camera System Protection Methods

The explosion-proof IP camera system should have the following characteristics to maintain protection:

- The explosion-proof camera system must prevent a spark from reaching the combustible fumes or dust. The enclosure should be sealed, including all the places where cables enter the enclosure.
- The camera system should not allow combustible gases and dust to enter the enclosure. This is usually accomplished by pressurizing the enclosure with inert gas.
- The camera should be designed so there is no source of ignition. This means the camera should not have a battery that could explode, and the power levels should not be too high.
- If an explosion does occur inside the enclosure, the enclosure should be strong enough to contain the damage. These enclosures are manufactured to be stronger than standard camera enclosures.

Check the Certifications

Make sure that the equipment conforms to the certifications that meet the environmental requirements. The equipment must meet the specific certifications of the country where it is installed. Here are some of the things to consider for the explosion-proof camera system:

- The enclosure and camera should carry the certification that indicates explosion-proof housing (Ex d) for a potentially explosive environment.
- Dust ignition protection (tD A21) for zones 21-22.
- The enclosure should prevent excessive temperature that could cause ignition in a gas environment (ATEX: T6).
- The enclosure mustn't exceed 85° C in a dust environment.

Summary of Explosion-Proof Camera Systems

Installing IP camera systems in an explosive environment is challenging. The camera system must be carefully constructed to meet the stringent standards defined by the explosive environment. The potential for an explosion defines the environment: the more often an explosive condition is present, the higher the level of protection required. This probability is determined by the Authority Having Jurisdiction (AHJ).
Check that the explosion-proof certifications meet the hazards of the environment. To learn more about the best explosion-proof IP camera system, please get in touch with us at 800-431-1658 in the USA or 914-944-3425 everywhere else, or use our contact form.
https://kintronics.com/explosion-proof-ip-camera-systems/
Nearly 60 percent of the fake $88.7 million recouped last year was created using inkjet or laser printers.

The U.S. government recouped more than $88 million in counterfeit currency last year, and more than half of it was made on regular old inkjet or laser printers. That's according to Bloomberg, which tells the story of a woman who pleaded guilty to counterfeiting up to $20,000 in fake bills over a two-year period. She took $5 bills, soaked them in degreaser, scrubbed off the ink with a toothbrush, dried them with a hairdryer, then reprinted them as $50 and $100 bills on a Hewlett-Packard printer, the news service said.

While the counterfeiting business used to be specialized, these days it's easy for anyone with a printer to give it a try. And that's just what's happening in the United States. More from Bloomberg: "Statistics highlight the growth: In 1995, less than 1 percent of fake bills were produced on digital printers. In the last fiscal year, nearly 60 percent of the $88.7 million in counterfeit currency recovered in the U.S. was created using inkjet or laser printers, the Secret Service says."

That's not the case outside of the United States, where most of the fake $68 million (in U.S. dollars) recovered last year was made with commercial-grade offset presses. The Secret Service says those kinds of machines are super efficient and can "more easily escape detection by U.S. authorities and even operate with the backing of corrupt governments," according to Bloomberg.

Counterfeit currency has a long history in the United States. During the Civil War, one-third of all the money in U.S. circulation was fake, according to the Secret Service. (The agency was established in 1865 as a response to widespread counterfeiting.)

The best way to spot a fake bill these days? Look for blurry borders. Many of the images on counterfeit money aren't as crisp as the real deal. Even the face on a bill will look different in a fake.
From the Secret Service's website: "The genuine portrait appears lifelike and stands out distinctly from the background. The counterfeit portrait is usually lifeless and flat."

You can also look for tiny red and blue fibers that are embedded in real bills. "Often counterfeiters try to simulate these fibers by printing tiny red and blue lines on their paper. Close inspection reveals, however, that on the counterfeit note the lines are printed on the surface, not embedded in the paper," the Secret Service says. There are even canine units trained to sniff out fake bills.

All in all, there is some $1.27 trillion in circulation. Only a tiny fraction of that money is fake, according to Bloomberg, but the Secret Service still made more than 3,600 counterfeiting arrests last year.
https://www.nextgov.com/emerging-tech/2014/05/how-inkjet-printers-are-changing-art-counterfeit-money/84151/
Today, machine learning (ML) is a commonly used term in almost every area of IT, and ML has proven invaluable in a variety of applications, including cybersecurity. It is frequently used to make sense of big data, enhance business performance and processes, and aid in prediction.

Complexity is the driving force behind the demand for machine learning. Many organizations now own an increasing number of Internet of Things (IoT) devices that IT is not aware of or managing. Since hybrid and multi-cloud are the new standards, not all data and applications run locally. And with the widespread acceptance of remote work, users are no longer primarily in offices.

ML is well understood and frequently used in a variety of contexts. The most common are Natural Language Processing (NLP), which helps to understand what a person or a piece of text is saying, and image processing for object recognition. In several ways, cybersecurity is unique among ML use cases: utilizing machine learning for cybersecurity has its own requirements and obstacles. The following three issues make it difficult to use ML in cybersecurity.

Because of the crucial role cybersecurity plays in every industry, it is more important than ever for firms to ensure that the ML they use for cybersecurity is secure in its own right. Machine learning aims to increase security's scalability and efficiency in order to reduce human costs and stop unidentified attacks. Machine learning makes it simple to scale up to billions of devices, which is difficult to do with manual labor. And that kind of scale is what companies actually need to protect themselves from the evolving threat environment. In many critical infrastructures, ML is essential for identifying unknown attacks.

Substantially stricter accuracy standards

For instance, if a system misidentifies a dog as a cat while processing images for a business, it might be unpleasant but probably won't have a life-or-death effect.
The impact of misclassification can be severe if a machine learning system mistakes a fraudulent data packet for a valid one, resulting in an attack against a hospital and its devices. Organizations observe numerous data packets passing through firewalls every day, and a business can mistakenly block enormous amounts of normal traffic even if only 0.1% of the data is misclassified by machine learning. This would have a detrimental consequence for the company.

It makes sense that some firms were apprehensive at the beginning of machine learning that the models wouldn't be as precise as human security researchers. Training a machine learning model to achieve the same level of accuracy as a highly trained human requires a lot of time and data. But since people cannot scale, they are currently one of the most in-demand resources in the IT industry. Firms depend on ML to scale out cybersecurity solutions effectively. Additionally, because ML can establish baseline behaviors and identify any irregularities that depart from them, it can assist in detecting unknown attacks that are challenging for humans to identify.

For ML to be effective in any industry, enterprises must combine domain expertise with ML knowledge. It is difficult to locate professionals who are knowledgeable in both machine learning and security; skill in machine learning alone, or in security alone, is not enough. Firms have discovered that it is crucial to ensure that ML data scientists and security researchers collaborate, despite the fact that they don't share a common language, employ distinct methodologies, and think and act in different ways. It is critical that they understand how to cooperate with one another. Applying machine learning to cybersecurity effectively depends on collaboration between these two groups.
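The "only 0.1% misclassified" point is easy to quantify. The sketch below uses illustrative numbers of our own choosing to show how a seemingly tiny false-positive rate translates into a large volume of blocked legitimate traffic at firewall scale:

```python
def blocked_legitimate_per_day(packets_per_day: int,
                               false_positive_rate: float,
                               fraction_malicious: float = 0.001) -> int:
    """Number of *legitimate* packets wrongly blocked each day when a
    classifier flags benign traffic at the given false-positive rate."""
    legitimate = packets_per_day * (1 - fraction_malicious)
    return int(legitimate * false_positive_rate)

# A hypothetical firewall seeing 1 billion packets/day with a 0.1% FP rate:
print(blocked_legitimate_per_day(1_000_000_000, 0.001))  # → 999000
```

Nearly a million legitimate packets blocked per day from a rate that sounds negligible on paper, which is why cybersecurity ML carries much stricter accuracy requirements than, say, image tagging.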
https://itsecuritywire.com/featured/three-major-stumbling-blocks-to-employing-ml-for-cybersecurity/
The portfolio of cancer-inducing habits and substances seems to expand by the day, with one scare-mongering British daily newspaper in particular venturing that everything from flip-flops to potatoes and crayons could up your risk of contracting the frequently deadly condition.

According to a new infographic from radiation safety experts Tawkon (bottom), you can now add your iPhone to the threat list. The data shows Apple's present flagship smartphone, the iPhone 4S, to have the second highest specific absorption rate (SAR), the measurement used to convey how much radiation your body is absorbing, of the devices investigated. The 4S clocked in with a SAR level of 1.11 watts per kilogram (W/kg), with only the BlackBerry Bold 9700 faring worse at 1.37 W/kg.

In fairness, both phones are still below the maximum levels recommended by various regulatory bodies. In Europe, 2 W/kg is cited as the most radiation you should expose yourself to, while America's Federal Communications Commission is more cautious, venturing a 1.6 W/kg guideline.

Still, Apple is likely to be far from pleased with the latest findings: not long ago, the company removed Tawkon's radiation-measuring app from its App Store, presumably because it failed to show the iPhone in a positive light. Worse still for the Cupertino-based tech giant, bitter rival Samsung fares very well according to the infographic. Smartphone enthusiasts who want to play it safe on the radiation front will find that the Korean company's latest flagship device, the Galaxy S3, is the least harmful mobile, emitting a relatively minuscule 0.34 W/kg, which sounds roughly on par with eating a bag of chips, in our opinion.
https://www.itproportal.com/2012/08/06/iphone-4s-three-times-more-likely-to-give-you-cancer-than-the-galaxy-s3-according-to-safety-specialists/
A new research report published by cybersecurity specialists BestVPN.com shows the state of online privacy in the United States. BestVPN surveyed 1,000 U.S. consumers to assess the state of online privacy in 2018. The report reveals a significant knowledge gap and suggests that, despite their fears, US citizens are not protecting themselves against the ever-growing number of cyber threats.

In light of the 2018 data breaches and revelations, consumers were asked to detail their cyber hygiene habits. There is significant distrust of social media platforms: 45% of consumers report feeling uncomfortable about using platforms that track and sell their information. Yet despite this mistrust of corporations, a lack of comprehension is evident, with a substantial 46 percent of respondents not adjusting their privacy settings on social accounts in the aftermath of the 2018 corporate cyber breaches.

The report also details the hazards people encounter on WiFi. More than half (52 percent) of respondents acknowledge they often join public WiFi networks, yet lack an understanding of the danger this exposes them to from hackers or exploitation of their private, confidential information.
https://cyberexperts.com/more-data-shows-that-americans-are-lackadaisical-about-security/
Geotagging adds geographical information to media through the use of metadata. Geotagging data often includes latitude and longitude coordinates, but may also include altitude, distance, and physical location names. Geotagging is most commonly used for photographs and can help people get specific information about where a picture was taken or the exact location of a user who logged on to an online service. Some digital rights management software relies on geotagging information to permit or deny access to content.

Most cellphones are equipped with Global Positioning System (GPS) receivers, used for finding directions, local weather reports, or local restaurants to eat at. These mobile devices tie their GPS to the phone's camera and automatically record the location data (geotagged data) within each image's metadata. Law enforcement officials often use geotagging metadata in their investigations.
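In photo EXIF metadata, GPS coordinates are stored as degrees/minutes/seconds values plus a hemisphere reference (N/S or E/W). A minimal sketch of converting that representation into the decimal degrees used by mapping services; the sample coordinate is made up for illustration:

```python
def dms_to_decimal(degrees: float, minutes: float, seconds: float, ref: str) -> float:
    """Convert an EXIF-style degrees/minutes/seconds GPS value to decimal
    degrees; southern ('S') and western ('W') hemispheres are negative."""
    decimal = degrees + minutes / 60 + seconds / 3600
    return -decimal if ref in ("S", "W") else decimal

# Example geotag: 40° 26' 46.0" N, 79° 58' 56.0" W (made-up values).
lat = dms_to_decimal(40, 26, 46.0, "N")
lon = dms_to_decimal(79, 58, 56.0, "W")
print(round(lat, 4), round(lon, 4))  # 40.4461 -79.9822
```

This conversion (read straight from a photo's GPSInfo block by forensic tools) is what lets investigators drop a geotagged image onto a map.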
https://cyberhoot.com/cybrary/geotagging/
What is capacity testing?

Capacity testing is the process by which network infrastructure is tested for its capacity to handle the maximum anticipated load specified in SLAs (service level agreements) without affecting the customer experience. A capacity test determines the maximum number of concurrent users (users of an application at once) that the application can support without "breaking." This kind of test is often related to capacity planning, which identifies the application's architecture, including the number of servers, their memory capacity, and their speed, depending on the business requirements.

It is critical in this era of impatient customers. Statistically, close to 50% of customers will move to a different website if a page does not load in less than four seconds, and a second's delay can result in a loss of more than a quarter of a million dollars every year. The data from capacity testing is crucial in analysing the performance of existing infrastructure against peak load. This helps to figure out whether the existing infrastructure should be enhanced so that the desired level of customer experience can be kept or achieved.

Capacity testing is sometimes confused with load testing, which is an entirely different process. Load testing measures the performance of network infrastructure under real-world loads: metrics such as average response time, requests per second, peak response time, error rate, and throughput are measured under a load of N users. Knowledge of programming is necessary to perform the test.

What are the best practices in capacity testing?

The 80/20 rule

The stakeholders that are critical and influence the design process are to be identified by analysing existing data. The rule allows you to find and resolve the issues that have the highest business impact.
Setting proper SLAs (service level agreements) and other conditions

The automation team should analyse real-time use cases by having conversations with product owners and business analysts so that every problem is predicted and prepared for. This allows the automation tools to better model the business process and find the parts that generate maximum traffic.

Defining test specifications

Information such as software version, client type, volume test, type of data volume and hardware specification is to be documented. The type of environment is also a crucial factor: the closer it is to the real world, the better the chances of finding and resolving all potential issues.

With the current shortage of resources that companies are facing, it makes sense to integrate an automated capacity test into your pipeline. Automation tools that can provide both black-box and white-box results are preferable, as they are better at root cause analysis.

Reporting and analysing results

Diligent reporting and analysis of capacity tests helps improve collaboration within the organization as well as problem resolution. A detailed report allows organizations to analyse designs and innovate better.

What are the advantages of capacity testing?

- Improved planning: Capacity testing allows you to predict infrastructure upgrades.
- Optimization of processes: Leverage the full potential of your infrastructure by supplying all the necessary information on real-world operations.
- Enhanced planning: The tests allow you to predict load conditions and infrastructure requirements more efficiently so that you can formulate a sound plan.
- Better peak-time performance: Capacity testing eliminates unexpected roadblocks during peak-load operations.
- Compliance: SLAs and other agreements are met, as there will seldom be any unexpected issues.

Capacity testing allows you to iron out all the bugs in your system and maximise your profit.
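To make "the maximum number of concurrent users the application can support" concrete, here is a small sketch (a hypothetical helper with made-up ramp-up data, not from any particular testing tool) that finds the highest user count in a ramp-up test whose measured response time still meets the SLA:

```python
def max_supported_users(ramp_results, sla_ms):
    """Given (concurrent_users, avg_response_ms) pairs from a ramp-up test,
    return the largest user count at which the average response time still
    meets the SLA, or 0 if even the smallest load breaches it."""
    passing = [users for users, ms in ramp_results if ms <= sla_ms]
    return max(passing, default=0)

# Hypothetical ramp-up measurements: response time climbs with load.
ramp = [(100, 310), (500, 420), (1000, 690), (2000, 1450), (4000, 5200)]
print(max_supported_users(ramp, sla_ms=1000))  # → 1000
```

In this example the application "breaks" its 1-second SLA somewhere between 1,000 and 2,000 concurrent users, which is exactly the number capacity planning would feed back into server sizing.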
https://alertops.com/articles/capacity-testing-what-is-capacity-testing-alertops/
Find Nearest Tool

One Tool Example

Find Nearest has a One Tool Example. Visit Sample Workflows to learn how to access this and many other examples directly in Alteryx Designer.

Use Find Nearest to identify the shortest distance between spatial objects in one file and the objects in a second file. There are many use cases for this tool. For example, use the tool to find the closest store locations to consumers in the customer file (both point files), identify the closest cell towers (point files) to LATAs (polygon files), or select congressional districts (polygon files) within 50 miles of a major thoroughfare (line file).

A Universe input connection into this tool is optional, as this file can be specified with an input path. If you are using DriveTime, visit Guzzler Drivetime Methodology for more information.

Configure the Tool

The Find Nearest tool accepts 2 spatial inputs.

- Select the Spatial Object Field to use for the Targets (T Input). Any object type can be chosen for the Target, but if it is not a point type object, the centroid is used for analysis.
- Specify the Universe object.
  - Use Records from U Input: Select the Spatial Object field from the data going into the tool.
  - Use Records from File or Database: When reading in spatial objects from a data source, make sure the data source that is being brought in has already been sorted on the spatial object. Ensure no connection is going into the U input.
- To specify the input data source, either enter the file path location of the input or browse to the data source's location.
- Select the Spatial Object Field from the input data source to calculate the nearest distance to.
- How many nearest points to find?: Specify how many nearest Universe objects to find for each Target. The default is 1.

The Find Nearest tool can return more records than selected in the "How many nearest points to find?" field if there are multiple records in the Universe the same distance from the Target object.
- Set the Maximum Distance and units of measure these objects can be from the Target. If drive time is to be calculated, you can specify the dataset to use for the calculation. If only one dataset is installed, you will not have the option to select another dataset. You can specify the default dataset from User Settings. Go to Options > User Settings > Edit User Settings and select the Dataset Defaults tab.
- Choose whether or not to Ignore 0 Distance Matches. When checked, a point never matches to itself. If you are expecting the same number of input records as output records, ensure you do not have any duplicates in your data stream.
- Use the table to modify the incoming data stream. Each row in the table represents a column in the data.

Select, Deselect, and Reorder Columns
To include a column in the data, select the check box to the left of the column name. Deselect the check box to exclude the column. To reorder the columns of data...
- Select to highlight a row, or drag down to highlight multiple rows.
- Click the up or down arrows, or right-click and drag, to move the rows to a new location.
The Unknown column is selected by default. It allows new columns into the data. Move the column to the location where you want a new column to be.

Modify Data Type and Size
To change the supported length (characters for string and numeric fixed decimal types) or measurement (bytes for other numeric types) of data in a column, select Size and enter a number. Size varies by data type and can be edited for fixed decimal numeric types and all string types. Use the [data type]: Forced option to ensure a column always contains the expected data type; this is helpful when creating macros.

Rename a Column or Add a Description
- To change the name of a column, select a field in the Rename column, and enter the new name.
- To add a description, select a field in the Description column and enter a description.
View More Options
After you select or highlight rows (columns of data) in the table, select Options to view more configuration options:
- Save/Load: Save the column configuration as a .yxft file. The Alteryx Field Type File is a text file that can be used in other workflows using the Load Field Names or Load File Names and Types options.
- Select: Select or deselect all or highlighted columns. Options include Select All and Deselect All.
- Change Field Type of Highlighted Fields: Change the data type of all highlighted columns at once.
- Sort: Sort the column order in ascending or descending order. Options include Sort on Original Field Name, Sort on New Field Name, Sort on Field Type, or Revert to Incoming Field Order.
- Move: Move highlighted columns to the top or bottom of the list.
- Add Prefix to Field Names: Add a prefix to the selected or highlighted column name.
- Add Suffix to Field Names: Add a suffix to the selected or highlighted column name.
- Remove Prefix or Suffix: Remove the prefix or suffix from the selected or highlighted column name.
- Clear All Renames: Remove the new name for all columns.
- Clear Highlighted Renames: Remove the new name for all highlighted columns.
- Revert All to Original Type & Size: Undo all changes to type and size in all columns and use the original values.
- Revert Highlighted to Original Type & Size: Undo changes to type and size in the selected or highlighted columns and use the original values.
- Forget All Missing Fields: Remove all columns that are no longer included in the data.
- Forget Highlighted Missing Fields: Remove all highlighted columns that are no longer included in the data.
- Deselect Duplicate Fields: Deselect the second column when duplicate column names exist. This option is only available with multiple inputs.
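The Target-to-Universe matching the tool performs can be sketched in a few lines of Python. This is an illustrative brute-force version, not Alteryx's implementation: the coordinates and names below are made up, distance is straight-line (haversine) rather than drive time, and unlike the real tool it does not return extra records when two Universe objects tie on distance.

```python
import math

def haversine_miles(a, b):
    """Great-circle distance in miles between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 3958.8 * math.asin(math.sqrt(h))  # Earth radius ~3958.8 mi

def find_nearest(targets, universe, n=1, max_distance=None, ignore_zero=False):
    """For each target point, return the n nearest universe points within max_distance."""
    results = {}
    for t_name, t_pt in targets.items():
        dists = []
        for u_name, u_pt in universe.items():
            d = haversine_miles(t_pt, u_pt)
            if ignore_zero and d == 0:
                continue  # mirror "Ignore 0 Distance Matches": a point never matches itself
            if max_distance is None or d <= max_distance:
                dists.append((d, u_name))
        dists.sort()
        results[t_name] = dists[:n]
    return results

# Hypothetical customer and store coordinates (lat, lon)
customers = {"cust_a": (41.88, -87.63)}                             # downtown Chicago
stores = {"store_1": (41.89, -87.62), "store_2": (42.36, -71.06)}   # Chicago, Boston
print(find_nearest(customers, stores, n=1, max_distance=50))
```

Only the Chicago store falls within the 50-mile maximum distance, so it is the single record returned for the customer.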
<urn:uuid:0ee6b41b-df65-4e31-abaf-d02d7a071eb1>
CC-MAIN-2022-40
https://help.alteryx.com/20221/designer/find-nearest-tool
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00311.warc.gz
en
0.738542
1,271
3.015625
3
Computer crashing problems are a significant inconvenience, and they could cost you a lot, especially if you are not sure what is causing them. Having a clear idea of the problem makes it easier to resolve it faster and prevent further damage, including data loss. LA IT support experts have identified the most common causes and potential remedies to help you troubleshoot your system or prevent further damage. Excess heat is one of the common causes of random crashes of otherwise well-functioning computers. If your computer is not getting enough airflow or the fan is overworking, the computer will overheat and fail to function correctly, causing a crash. All computers come with a fan to mitigate the heat the computer produces. However, fans will fail if they are covered with dust and other obstacles, or if something is blocking the vents. If you are continuously using your computer and it starts to heat up and produce noise, then your fan is overworking and you need to let the machine cool down first.

Low Disk Space
Disk space is often at a premium on any computer. The more you get, the more is taken up by files and software. Games, large software, and multimedia files usually take up so much space that, over time, there is little room left for your programs to run, resulting in crashes. At times, you could also be running several programs that consume significant disk space while they are running, again resulting in crashes. You can check your storage settings and the task manager to see the disk resources left, and take measures like deleting some files and closing other programs. Viruses and malware are present in many networks and connections. They could intrude into your system through online means like websites, emails, or downloaded software, or offline through infected peripheral devices. Whatever their source, once they infiltrate your system, they hog resources and cause many core functions to malfunction.
Fortunately, LA IT support providers have various resources to prevent malicious software attacks, and to identify and remove them if they infiltrate a system. Windows operating systems have a large pool of files that helps the system function properly, called the registry. With time, the registry can become corrupted and disorganized, and some files can go missing. These problems result in frequent crashes whenever you are using your computer. You can use verified Windows cleaner programs to remove obsolete files and automatically carry out repairs. If that does not help, or you do not have access to the premium products, call in experts to help you out. Sometimes you may not be able to fully resolve these issues on your own, and other times expert help prevents them from happening in the first place. If you want expert help in LA to prevent and resolve computer crashes, or if you need IT support, don't hesitate to contact us at Advanced Networks. Let us help you maintain your productivity.
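The disk-space check mentioned earlier (storage settings and the task manager) can also be scripted so low space is caught before it causes crashes. A minimal sketch using Python's standard library; the 90% warning threshold is an arbitrary example, and on Windows you would pass a drive letter such as "C:\\" instead of "/":

```python
import shutil

def disk_report(path="/", warn_pct=90):
    """Report disk usage for a path and flag it when free space runs low."""
    usage = shutil.disk_usage(path)
    used_pct = usage.used / usage.total * 100
    status = "LOW SPACE" if used_pct >= warn_pct else "OK"
    return {
        "total_gb": round(usage.total / 1e9, 1),
        "free_gb": round(usage.free / 1e9, 1),
        "used_pct": round(used_pct, 1),
        "status": status,
    }

print(disk_report("/"))
```

Run it on a schedule (Task Scheduler, cron) and the "LOW SPACE" status becomes an early warning instead of a surprise crash.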
<urn:uuid:22637a9d-944e-4ea7-8a33-7b2cfc4cc515>
CC-MAIN-2022-40
https://adv-networks.com/la-it-support-identifying-main-causes-computer-crash/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00511.warc.gz
en
0.942512
579
2.6875
3
How will this speed up or increase detection of cybercrime?
Although the details are scarce in the original press release, IBM applies Watson to other areas of science as a kind of knowledge base. It guides its user toward a potential solution by showing highly relevant articles from its corpus. If we apply the same idea to IT security, the user would be a security analyst tasked with analyzing a specific potential incident. This work can definitely be made more efficient; however, that doesn't mean Watson alone would be able to protect the organization.

If this approach gains broad acceptance, will it result in thousands of job losses in the cyber security industry?
Cyber security is struggling with acquiring and retaining talent. Anything that makes security processes more efficient is welcomed by the security community.

Will it be more effective than a highly-trained cybersecurity professional? How else could it change cybersecurity?
The odds in security operations are often on the attacker's side, because the defense team only needs to make one mistake and the attackers are in. Traditional security tools are good enough to thwart automated, broad attacks, but struggle with determined, innovative human attackers. Artificial intelligence is yet to show its full potential within security. I believe that this potential is high, and that it can deliver benefits that will help our quest against targeted attacks.
<urn:uuid:5625dc4f-81ff-4e80-8359-e3e40017c201>
CC-MAIN-2022-40
https://informationsecuritybuzz.com/expert-comments/eibm-watson-trained-spot-cybercrime-attacks/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00511.warc.gz
en
0.943576
338
2.53125
3
Data discovery refers to the use of advanced algorithms to analyze data and detect patterns that would otherwise go unnoticed. Data discovery is about seeing the larger picture across multiple data sources, sometimes hundreds of in-house and third-party sources. Data insights are then translated into better decision-making and business strategy. At its best, data discovery automatically discovers data sources in an organization's data environment, methodically and algorithmically sifting through databases and files to uncover specific predefined patterns and keywords laid out by classification and identification rules. This method is increasingly important in the face of the massive volumes of structured and unstructured data generated in many business cases.

Data discovery and classification
Data discovery is the process of identifying data sources within an organization's changing data environment. By leveraging automation, and often cloud-based systems to support data environments, data discovery becomes a foundational aspect of agile businesses. A data discovery platform makes business operations transparent, and sets up future success by creating a hub for other data innovations. Data classification is a step following data discovery where data is identified and categorized by type, using pattern and keyword rules that apply labels to identified data. In the health industry, for example, medical ID patterns are used to categorize records.

What is the purpose of data discovery?
Data discovery has multiple purposes, serving data stakeholders in different ways. In all of them, the aim is a more accurate and complete picture of the organization as a whole, and insight into the operational side of the business. For businesses, the iterative data discovery process helps extract valuable insights from several data streams, and centralizes those insights so top leadership can make better strategic decisions.
For data users, data discovery and data sharing allow multiple user tiers to access insights relevant to their operations among all the data insights produced. This means each department can view and analyze data specific to its needs without being bogged down with searching, cleaning, and preparing data.

Data discovery process
Technically, data discovery is the process of consolidating raw data from multiple sources, each of which may be fundamentally different, such as combining structured and unstructured data. Because of the significant volumes of data involved, organizations rely on smart data discovery tools to digest operational information and visualize it. Popular data visualizations include graphs, charts, tables, maps, infographics, dashboards, etc. Generally, there is a five-step process that takes in raw data and produces valuable insight.

Connect And Blend Data Sources — Enumerating all data sources is the starting point. This includes listing and understanding the necessary measurements and metrics to be collected. Typically, these data are ingested and stored within a data warehouse—a place where disparate data types can merge. This stage is where discovery occurs.

Clean And Prepare Raw Data — Raw data is typically unreadable, and processing prepares it for analytical use. Raw data in this stage is cleaned, standardized or normalized, in order to detect and remove data errors, distortions, and corruptions. Data is also put into alignment, such as using correct units of measure.

Data Sharing — Data sharing among data stakeholders is a key benefit of data discovery. At this level, analysis has not yet occurred, and data can be categorized into informational domains which benefit different levels in the organization. Sometimes this comes in the form of a data mart, something like a data warehouse except for a specific singular data domain. Teams can then access the data most relevant to their day-to-day work and planning.
Analyze And Develop Business Insights — At this point, management teams and data scientists can access and analyze data sets relevant to their needs. To analyze, teams deploy data discovery tools that can perform distributional analysis, predictive analytics, or market basket analysis. Depending on the need, custom analysis is often performed.

Visualize Insights — Hand in hand with data analysis come data visualizations, which support the main aim of data discovery: to deliver insights as clearly and quickly as possible. From this point, teams can develop action plans and react to these insights with confidence.

Types of data discovery
There are many tools and vendors to assist in data discovery and analysis, but the data discovery process initially began as manual. Manual data discovery and smart data discovery are the two types of data discovery processes today, and going forward, more and more businesses are likely to adopt smart data discovery.

Manual data discovery — As the name suggests, this is the manual, human, tedious process of discovering data patterns within data sets. It typically requires a highly qualified and trained data technician, popularly given the role title "data steward". These caretakers of the data have to manually map, prioritize, and prepare data for analysis, including creating and categorizing metadata, documenting rules and standards, and ultimately conceptualizing the entire data strategy and company data models.

Smart data discovery — The idea of a data steward has evolved with the advent of modern automated data processing. Today, AI and machine learning have augmented data discovery, making data more robust, accurate, and usable while reducing human-produced error. The role of the data steward has changed with smart data discovery; its emphasis is now on ensuring the fitness of data and data governance.
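The pattern-and-keyword classification rules described earlier can be sketched in a few lines of Python. The categories and regular expressions below are illustrative examples, not the rules any particular discovery platform ships with:

```python
import re

# Hypothetical classification rules: category -> regex patterns
RULES = {
    "restricted": [r"\b\d{3}-\d{2}-\d{4}\b",   # US SSN-style identifier
                   r"\b4\d{15}\b"],            # 16-digit card number starting with 4
    "internal":   [r"(?i)\bconfidential\b"],
    "public":     [],                          # default when nothing matches
}

def classify(text):
    """Label a document with the most sensitive category whose pattern matches."""
    for category in ("restricted", "internal"):
        if any(re.search(p, text) for p in RULES[category]):
            return category
    return "public"

docs = {
    "memo.txt": "This memo is CONFIDENTIAL until Friday.",
    "patients.csv": "name,ssn\nJane,123-45-6789",
    "press.html": "Quarterly results are out.",
}
labels = {name: classify(body) for name, body in docs.items()}
print(labels)
```

Checking categories from most to least sensitive means a document that matches both "restricted" and "internal" rules gets the stricter label, which is the usual convention in classification policies.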
Smart data discovery
Smart data discovery is a popular term for AI and machine learning advancements in data discovery. Before machines could perform data discovery, these tasks were conducted manually by data stewards. AI functions within the data discovery domain reached a tipping point, and Gartner identified a new category of business intelligence software capable of dramatically organizing company data and discovering sensitive data that can then be secured and made compliant with regulations. Gartner defines smart data discovery: "Automatically finds, visualizes and narrates important findings such as correlations, exceptions, clusters, links and predictions in data that are relevant to users without requiring them to build models or write algorithms. Users explore data via visualizations, natural-language-generated narration, search and natural-language query technologies."

Data discovery platform
A data discovery platform (sometimes called a sensitive data discovery platform), such as Hitachi Content Intelligence, provides a complete set of data tools for detecting deep patterns within disparate data sources using advanced analytics. These patterns are then put into context using other relevant systems, and subsequently visualized for data users, or otherwise presented using clear delivery methods such as dashboards, charts, and tables, to clarify underlying business insights. These platforms include the following features:
- Automated data discovery tools
- Data monitoring in real time and sensitive data discovery algorithms
- Contextual search functions and other metadata search functions
- Compliance functions that enable organizations to adhere to industry regulatory standards (GDPR, CCPA, HIPAA, etc.)
<urn:uuid:e070a821-67bb-4700-aed7-601a592989ed>
CC-MAIN-2022-40
https://www.hitachivantara.com/en-anz/insights/faq/what-is-data-discovery.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00511.warc.gz
en
0.903753
1,774
3.0625
3
In surveillance technology, WDR refers to wide dynamic range. This is a camera's capability for digital exposure adjustment so that a wider range of image details can be identified, even when subjects are surrounded by shadows or bright light. Cameras with WDR play a key role in surveillance applications, since in these situations a camera's view may transition from dark to normal to very light areas. With wide dynamic range features, equipment can adapt to intense illumination around subjects or backlighting behind them. The WDR camera capability prevents recorded subjects from appearing too dark or overly exposed.

Wide Dynamic Range Camera Images
A surveillance camera's field of view can include very bright and very dark areas. Wide dynamic range camera images are properly exposed. WDR features can tone down the natural and artificial illumination surrounding an object or person, so details appear clearly. The WDR camera function is beneficial if natural light enters a space from various angles, such as in a business with large windows and glass doors. A camera with wide dynamic range functionality is important in security recording solutions for stores, restaurants and other variably lit locations.

Surveillance Camera with WDR
Cameras without WDR may produce images that are missing important details. Surveillance cameras with WDR support subject visibility and detail identification. Having WDR in a camera means a consistently clear image is captured, even when a person moves through extremely lit or heavily shadowed areas of a room. WDR is often achieved using dynamic contrast and dynamic capture techniques. Commercial imaging equipment, such as D-Link and Axis surveillance cameras, features wide dynamic range functionality.
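One of the capture techniques behind WDR is combining a short and a long exposure of the same scene. The toy sketch below (plain Python, made-up luminance values) weights each pixel by how close it is to mid-gray; this is a deliberately simplified stand-in for what real WDR hardware does, not any vendor's algorithm:

```python
def fuse_exposures(short_exp, long_exp):
    """Blend a short and a long exposure of the same scene, pixel by pixel.

    Each input is a list of 0-255 luminance values. Pixels closer to mid-gray
    (128) are considered better exposed and get more weight.
    """
    fused = []
    for s, l in zip(short_exp, long_exp):
        w_s = 1 - abs(s - 128) / 128   # weight: 1 at mid-gray, 0 at the extremes
        w_l = 1 - abs(l - 128) / 128
        total = w_s + w_l
        if total == 0:                  # both pixels fully clipped
            fused.append((s + l) // 2)
        else:
            fused.append(round((s * w_s + l * w_l) / total))
    return fused

# A bright window (blown out in the long exposure) next to a shadowed corner
short_exp = [200, 10]   # window readable, shadow too dark
long_exp = [255, 90]    # window blown out, shadow readable
print(fuse_exposures(short_exp, long_exp))
```

The fused result keeps the window detail from the short exposure and the shadow detail from the long one, which is the effect the article describes: neither too dark nor overly exposed.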
<urn:uuid:b8745efc-9f65-4967-b9e8-ffa907eb2026>
CC-MAIN-2022-40
https://www.comms-express.com/infozone/article/wdr/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00511.warc.gz
en
0.924918
326
2.578125
3
Unstructured data consists of any type of data that exists beyond the scope of an organization's application or database. Such data includes things like Word documents, audio files, videos, photos, webpages, presentations, and so on. The amount of unstructured data that companies store has exploded in recent years due to the rapid increase in storage capabilities. While it is true that many companies are still not engaging in any form of unstructured data governance – largely because they are already struggling to govern their structured data – more companies are at least starting to recognize the value of this type of data. As companies grapple with a barrage of complex and varied types of data, they are seeking new and improved methods to help them get their house in order. There are a number of tools available which can help them process and store vast amounts of unstructured data. Such tools typically provide dashboards, data mining features, as well as searching and indexing features. There are also hardware solutions available.

Discovery and Classification
The main problem companies have in trying to keep track of unstructured data is that they don't know where to start, as they often don't know what they are looking for or how to accurately assess the value of the data they hold. It should be noted, however, that it is not necessary to govern every piece of unstructured data that exists, only the most important types. It is therefore very important that you classify your data based on a set of pre-defined categories. These categories typically include public, internal and restricted data. Of course, you will first need to discover your sensitive data, and then implement a data classification policy which outlines the objectives, workflows, categories, data owners and details about how the data should be correctly handled. There are a number of commercial tools available which can help you discover your sensitive data.
Such tools can discover data in network file shares, SharePoint, Dropbox, OneDrive etc. They often come with features that enable you to identify the value of the data, classify the data based on its value, and apply protection measures to the sensitive data (PCI, PII, PHI) based on certain properties. They typically come with tools which enable you to quarantine or flag certain files that are stored in a manner which may present a security threat. They also usually come with built-in analytics and reports. Due to the sheer amount of unstructured data that typically resides on a corporate network, you will also need to ensure that you have the most sophisticated suite of auditing tools available. In order to sufficiently protect your sensitive data, there are certain questions you will need to ask, which include:
- Who has access to what files, and what privileges do they have?
- Who has been viewing, modifying and deleting these files?
- Why/when were these files accessed, modified or deleted?

So, Where Do You Start?
You might think that implementing a data governance program will be expensive, time-consuming and resource intensive, and there are many vendors out there that would have you believe this is the case. Fortunately, you probably already have the tools to get started. The File Classification Infrastructure in File Server Resource Manager is terribly underutilized, and actually provides a pretty powerful method of discovering, tagging and classifying your sensitive data. Simply input a load of regular expressions related to all manner of PII (you can find a list here), and FSRM will continually scan and enable reports to be generated listing your sensitive data and its relative criticality. This is a fantastic place to start. Now you will know where your most sensitive data is. However, it's only the first step and unfortunately, until Windows builds the functionality in, you will have to look for third parties to make sense of this data.
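The regular-expression approach FSRM relies on can be illustrated with a short Python sketch. The patterns below are simplified examples, not the full PII list the article refers to, and the file name and sample text are made up:

```python
import re

# A few illustrative PII patterns, similar in spirit to the regular
# expressions you would feed into FSRM.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def scan_text(name, text):
    """Return (file name, pattern name, match count) for every PII hit."""
    hits = []
    for label, pattern in PII_PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            hits.append((name, label, len(matches)))
    return hits

sample = "Contact jane.doe@example.com, SSN 123-45-6789."
print(scan_text("notes.txt", sample))
```

Running such a scan across a file share gives you exactly the starting inventory the article describes: which files hold sensitive data, and of what kind.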
Many vendors will try and charge you extortionate prices to implement data governance solutions, and that’s because they believe the discovery and classification element to be very valuable. If you already have this in place through FSRM, then you can look for vendors that integrate this functionality already. This is where Lepide comes in. Lepide’s File Server Auditing solution has a built-in integration with FSRM that enables you to run reports and set alerts on changes occurring to critical, at risk data. This means that if a file containing vast amounts of PII is accessed, moved, deleted or modified in any way, you’ll know. You are also able to see who has permissions to the critical files and folders and when these permissions change. Best of all, the integration with FSRM comes for free with Lepide Data Security Platform, making it a very competitive choice for a Data Access Governance solution. Take a look for yourself!
<urn:uuid:445c4f58-720e-40d8-a1eb-0606a6146340>
CC-MAIN-2022-40
https://www.lepide.com/blog/what-part-does-data-access-governance-play-in-securing-unstructured-data/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00511.warc.gz
en
0.949894
983
2.515625
3
Financial security refers to the peace of mind you feel when you are not worried about having enough income to pay your expenses. It also means having enough money set aside for your future financial objectives and emergencies. In today's challenging economic times, financial security has become a key element of our daily lives, more important than ever before. The current economic environment has put a tremendous amount of stress on most households and is affecting their ability to make ends meet, putting even the richest of people at risk. The good news is that there are ways to keep your financial situation from getting out of hand and to protect your financial future. The first step to financial security is planning and budgeting for your monthly bills. It is essential to keep track of all bills and other expenses so that you can make sure they do not exceed your monthly income. Make a list of all of your expenses, such as food, transportation, entertainment, and home repair and upkeep. The list can be organized into categories or customized according to your needs. Once you know your monthly income and expenses, divide your spending into minimum and maximum amounts. The minimum amount is what your family needs to live comfortably, and you should start by covering that level each month. If you can't manage that on your current income, you can work toward it by earning extra money during your free time. You can then begin working towards your financial goals by setting aside an amount each month for those goals and expenses. It is important to set aside enough money for your long-term goals and emergencies. Your goals should be as specific as possible, so that you know where you want to be in two or three years. Having a detailed plan will help you avoid missing opportunities and stay on top of things when they go wrong.
Having these goals clearly defined helps you stay on track and focus on what your family needs to reach. If you have set aside enough money to make ends meet and plan a solid future, you will find it becomes much easier to manage your emergency savings, as you will see what is available to you in the event of a major emergency. It is important to understand how you can save more money for an emergency, so that your financial situation is protected and you are not faced with a major expense that would wipe out all of your savings. With proper planning and good financial habits, you can create a secure financial situation. The best way to achieve that is to learn everything there is to know about saving and creating a financial future that will allow you to enjoy security. You may not know it, but there are some things that will keep you financially healthy and safe no matter what happens in your future. This includes knowing your family's financial situation, understanding your creditors, and learning about credit cards, loans, and other ways to spend and save money. Knowing your family's situation can help you be more informed about what is going on in their lives and how they are reacting financially. If you are constantly worrying about what your child is doing, you may not be teaching them how to save properly, and they could end up in trouble down the road. If your spouse doesn't show interest in making decisions with their money, you may find yourself in debt before you know it. If you and your spouse are having problems managing your finances, talking to them about it can help you figure out their current financial situation. You may also be able to offer tips to get your partner to learn better spending habits, so that they will be better able to pay off their bills and have more control over their money.
If you need to create emergency savings for your family, make sure that you set up a system that both of you can handle. Having a joint checking account is a great way to ensure that everyone can manage their money. If one person cannot handle managing their own money, they can rely on the other person to do it for them. They will be able to handle their money better and will be able to make better decisions.
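The budgeting arithmetic described above (track income against expenses, then direct the surplus toward an emergency fund) is simple enough to sketch in code. The figures and categories below are illustrative examples only:

```python
def monthly_budget(income, expenses, emergency_target):
    """Summarize a month: can the bills be paid, and how long until the fund is built?

    `expenses` maps category -> monthly cost; `emergency_target` is the total
    emergency fund the family is working toward.
    """
    total_expenses = sum(expenses.values())
    surplus = income - total_expenses
    months_to_target = (emergency_target / surplus) if surplus > 0 else None
    return {
        "total_expenses": total_expenses,
        "surplus": surplus,
        "covers_bills": surplus >= 0,
        "months_to_emergency_fund": round(months_to_target, 1) if months_to_target else None,
    }

expenses = {"housing": 1400, "food": 600, "transport": 300, "other": 400}
print(monthly_budget(income=3200, expenses=expenses, emergency_target=5000))
```

With these example numbers, the household clears its bills with a $500 monthly surplus and would build a $5,000 emergency fund in ten months.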
<urn:uuid:aa3e1b3f-b4db-4fc6-897d-00746eaddc24>
CC-MAIN-2022-40
https://globalislamicfinancemagazine.com/how-to-create-financial-security-for-your-family/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00511.warc.gz
en
0.975546
862
3.234375
3
In English-speaking countries a so-called QWERTY keyboard layout is commonly used for text input. QWERTY refers to the first six keys on the top row of letters on a keyboard. Depending on the country, QWERTY keyboards are available with different arrangements for punctuation marks, special characters or language-specific letters. The key assignment of a telephone differs in the way the numbers are arranged when compared to keypads on calculators, cash registers or the numeric keypad on the computer keyboard. The numbers 1, 2 and 3 are located at the top and the 9 is bottom right. In addition, the star and hash keys are at the bottom left and right, on either side of the zero. These two keys can be used to access special features on the phone or telephone network. Often, in addition to numbers, keys also have three or four letters printed on them. This enables phone lines to be selected not only via numbers, but also through combinations of letters or names. Dialling using letter sequences is very common in North America. Although telephony with a cloud telephone system no longer requires a physical phone, as any computer with the appropriate software can be used to make phone calls, for example, by clicking on an entry in the address book, a virtual keypad with a typical phone keypad layout is usually available. This enables calls to be set up by entering individual subscriber numbers.
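The letter-to-digit translation used when dialling by name follows the standard North American keypad mapping, and can be sketched in a few lines of Python:

```python
# Standard North American letter-to-digit keypad mapping
KEYPAD = {
    "2": "ABC", "3": "DEF", "4": "GHI", "5": "JKL",
    "6": "MNO", "7": "PQRS", "8": "TUV", "9": "WXYZ",
}
LETTER_TO_DIGIT = {letter: digit for digit, letters in KEYPAD.items() for letter in letters}

def vanity_to_number(vanity):
    """Translate a vanity number such as 1-800-FLOWERS into the digits actually dialled."""
    out = []
    for ch in vanity.upper():
        if ch in LETTER_TO_DIGIT:
            out.append(LETTER_TO_DIGIT[ch])
        elif ch.isdigit():
            out.append(ch)
        # separators such as '-' are simply dropped
    return "".join(out)

print(vanity_to_number("1-800-FLOWERS"))  # → 18003569377
```

This is exactly what the phone network sees when a North American caller dials a lettered number: only the digit sequence is transmitted.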
<urn:uuid:16587407-86c8-4ea4-b1b1-7758a02c9f5b>
CC-MAIN-2022-40
https://www.nfon.com/en/get-started/cloud-telephony/lexicon/knowledge-base-detail/key-assignment
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00511.warc.gz
en
0.918477
287
3.515625
4
Cyberattacks continue to rise, threatening the educational sector
As the 2022-2023 school year looms, so do ongoing cyber threats directly targeting schools, universities and school district administrations. In 2021, there were an average of over 1,500 attacks on education and research organizations per week, and these numbers are expected to continue to rise through 2022. The educational sector is massive and varied to begin with: it reaches urban, suburban, and rural areas, and spans varied populations of students, faculty, staff and alumni with varying amounts of awareness about cyber security. This heterogeneous population often makes applying consistent cybersecurity policies a challenge. The spike in attacks on education has also been exacerbated by the pandemic-fueled shift to online learning technologies. As IT departments needed to pivot rapidly to digital platforms, the number of possible cyber entry points increased.

Why are educational institutions being targeted?
There are a few reasons schools and universities are the perfect targets for cyber criminals.
- A Glut of Data: Schools of all sizes are hotbeds of personal data. Student, teacher, administrator, and staff data is usually all stored in a single network. Other industries (like the financial sector) may offer better payouts but often mount a better cyber defense. Educational institutions, on the other hand, are often extremely vulnerable, and cybersecurity continues not to be managed as the huge threat it is.
- Home-made Entry Points: While the majority of lockdowns are in the past, the entire landscape of working and learning from home has changed. The combination of WFH and digital learning means unsecured collective networks. Shared and unsecured home networks are also targets for cybercriminals. At-home networks where both children and parents access the same systems for a combination of work, education, and entertainment often mean a messy and easy-to-breach situation.
When one person using the network has their password stolen, or clicks a phishing link by mistake, it has the potential to affect other accounts. Cyber criminals are eager to use credentials for one account to access others.

- Lagging Software and Resources: Many educational institutions already suffer from a lack of funding and support for technological investment. IT departments may not have the resources to properly secure their fleet of devices; often, students of all ages are using out-of-date equipment. The threat is compounded because old software is no longer eligible for security patches or tech support, making entry into systems even easier as time passes.

What to Do

Educational institutions of all sizes must devote funds and time to cybersecurity concerns and defenses. Requesting budget increases, reallocating existing funds and, when possible, investing in defensive solutions is the path forward. Institutions like NIST can provide IT teams at educational facilities with guidelines on best practices, such as driving better password hygiene. Given that upwards of 60% of data breaches involve stolen credentials, strengthening the password layer can rapidly address vulnerabilities for an educational organization. The vast majority of people (teachers, students and administrators included) reuse passwords across many accounts. Cybercriminals are wise to this, and to the many easy patterns most users choose for their passwords, so this layer is a priority. Fortunately, scanning for compromised credentials is an effective way for IT teams to evaluate and improve password hygiene. IT teams previously had little visibility into the problem of password reuse; now they have a way to quickly detect unsafe credentials, a problem at the root of the majority of hacking-related attacks.
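One concrete way to scan for compromised credentials without ever transmitting the password itself is the k-anonymity scheme used by services such as Have I Been Pwned's Pwned Passwords API: only the first five characters of the password's SHA-1 hash leave the machine. A minimal sketch of the client-side pieces (the helper names are mine, and the actual network call is left out):

```python
import hashlib

def hibp_prefix_and_suffix(password: str) -> tuple:
    """Split the uppercase SHA-1 hex digest of a password into the
    5-character prefix sent to the API and the suffix kept locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def breach_count(suffix: str, range_response: str) -> int:
    """Given the API's plain-text response for a prefix, one
    'SUFFIX:COUNT' pair per line, return how often our hash appears."""
    for line in range_response.splitlines():
        candidate, _, count = line.strip().partition(":")
        if candidate == suffix:
            return int(count)
    return 0

prefix, suffix = hibp_prefix_and_suffix("password")
# A real check would GET https://api.pwnedpasswords.com/range/<prefix>
# and pass the response body to breach_count(suffix, body).
```

Because only a hash prefix is transmitted, the service never learns which password was checked, which is what makes this style of scan acceptable for institutional use.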
One of the most exciting technologies in the history of our species has to be artificial intelligence. The sheer, almost limitless potential of AI, machine learning and deep learning in almost every aspect of our world means that it is vitally important for us to learn as much as we can about AI and how it could affect us all in both the short and long term. In this series of articles, we've been looking at artificial intelligence, machine learning and deep learning and how these technologies could affect us both now and in the not-too-distant future.

The future of artificial intelligence looks set to depend on the way machine learning and deep learning are both used and developed in the near future. With a strong focus on both industrial and cyber-based applications for AI, many advances will likely stem from sectors such as cybersecurity, financial services, manufacturing and transportation, with other industries such as healthcare also being likely candidates. With this being said, artificial intelligence technologies are already present in a wide variety of environments, enhancing the capabilities of systems and networks around the world, so what's next? Even better machine learning? Predictive behavioral systems and AI chatbots that can not only predict our behaviors but mimic them, too? The answer is yes, almost certainly. However, one of the reasons artificial intelligence is such an interesting area of technology is that it enables machines and algorithms to learn for themselves, breaking free from the limitations of the human mind.

In this final article of our three-part series on artificial intelligence, we'll be looking at what the future might hold for AI and how these technologies could go on to shape our future. Let's start by taking a look at how AI could bring about much more advanced automation.
Near Future (1-5 Years): Advanced Automation

Automation is already enhancing the way businesses and organisations operate and produce their products and services. With the benefits brought by the kind of automation we already see today, enterprises and institutions are looking for new and improved ways to enhance their automation systems. Artificial intelligence is capable of doing just that. One of the biggest ways in which AI technologies could further improve automation is through the management of ever-larger data sets. Human beings make mistakes; this is an unfortunate fact of life, and one that seemingly gets worse the more we have to do, remember or solve. This is not the case with AI. Given its ability to manage and interpret enormous amounts of data, artificial intelligence is now the perfect partner for automation systems and could very well be the key to other autonomous services such as predictive cybersecurity systems and driverless vehicles, too. As the number of connected devices we use continues to skyrocket, the data generated by all of our connected technologies will need to be understood, analysed and used to generate actionable results. AI-powered automation systems would not only be able to collect, interpret and then automate certain processes and operations, they would also be able to learn from each task they were assigned, making them an invaluable tool.

Not-So-Near Future (5-15 Years): Quantum Machine Learning

Any sentence or phrase that contains the dreaded "Q" word is often met with automatic confusion or skepticism; after all, defining quantum can sometimes be tricky. In this instance, quantum machine learning (QML) is a relatively recent combination of machine learning and quantum mechanics. The idea behind QML is to look for or devise ways in which quantum software could be used to further enhance machine learning.
The idea, based on the fact that quantum systems generate atypical patterns within data that classical systems cannot efficiently produce, is that quantum computers would provide a superior alternative to classical computers when it comes to machine learning. Quantum computers are still not well understood by the general population and remain, in the eyes of many, what the Large Hadron Collider at CERN was in its first few years: an exotic, high-concept gadget with no real-world use as of yet. However, this notion might just be about to be turned on its head. Rigetti Computing, a California-based company whose work focuses on quantum integrated circuits, has recently demonstrated its ability to run a clustering algorithm (a machine learning technique used to organize data into similar groups) using a prototype quantum chip it had developed. While quantum computing may still be a few years away from becoming as widespread as early AI technologies have become, it seems likely that continued progress, alongside machine learning technologies as well as other fields, could see quantum computers and quantum machine learning move from the physics of the far future to the technology of tomorrow a lot sooner than we may realize.

Distant Future (25+ Years): Superintelligence

Of the three future technologies featured in this article, AI superintelligence is both the closest and the furthest from being realized. When considering artificial superintelligence, most people may think of the AI featured in numerous TV shows and Hollywood films, such as HAL 9000 from 2001: A Space Odyssey, Ava from Ex Machina, or some of the hosts in Westworld. In the real world, an artificial superintelligence would likely not feature any of the human personification of those examples, such as human bodies or voices. In fact, an artificial superintelligence may not look like anything other than a box with a screen to allow it to communicate with its developers.
The reason for this would initially be safety. When creating an artificial superintelligence, it is inevitable that you encounter what Nick Bostrom, author of Superintelligence: Paths, Dangers, Strategies, has dubbed the control problem: how do you control something that is so utterly superior to you? While this may all sound like something straight out of a science fiction novel, many experts believe that, with the creation of ever-more powerful machine learning and deep learning systems, what has become known as an "intelligence explosion" is probably inevitable. The ability of machines to learn and improve themselves will eventually reach the point where their cognitive capabilities dwarf those of any human being. While nowhere close to being artificial superintelligences, various game-playing computers have reached superhuman levels of ability at specific games such as chess, checkers and, more recently, Go. These machines have ushered us into an era in which the most proficient players of these games will never again be human. While still incredibly impressive, these AIs excel only at a single task, whereas an artificial superintelligence would outperform human beings at everything. In his book, Bostrom advises that a useful way of thinking about superintelligence is to consider human IQ levels: if the average human has an IQ of 130, an artificial superintelligence would likely have an IQ of around 6,000. With the right kinds of control systems and goals in place, however, an artificial superintelligence would quite literally be able to solve almost any problem or computation and be capable of inventing or discovering almost anything. This would completely change the way we live on Earth, as such an entity would likely be able to connect to and manage the entire planet's industrial, commercial and personal systems, technologies, vehicles, buildings and anything else it was capable of connecting to.
Artificial superintelligence is still a very long way off, but artificial intelligence technologies have been here for years and are still evolving and developing in real-world environments. The Fourth Industrial Revolution, as well as the Internet of Things and fifth-generation (5G) wireless networks, will also likely shape the direction of AI development in the near future. With the future seemingly destined to intertwine with artificial intelligence, it makes sense for us all to stay informed about these technologies and how they may affect us.
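For readers curious what the clustering algorithm mentioned above actually does, here is a classical sketch. This is a generic k-means implementation of my own for illustration, not the specific algorithm Rigetti ran on its quantum chip:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: repeatedly assign each point to its nearest
    centroid, then move each centroid to the mean of its members."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)           # pick k initial centers
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # index of the centroid with the smallest squared distance
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        for j, members in enumerate(clusters):
            if members:                          # keep old center if empty
                centroids[j] = tuple(sum(axis) / len(members)
                                     for axis in zip(*members))
    return centroids, clusters

# Two obvious groups of points: the algorithm recovers their centers.
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers, groups = kmeans(pts, 2)
```

The quantum-hardware version aims at the same goal, grouping similar data points, but explores the assignments using quantum circuits rather than this classical loop.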
Public key cryptography, a system used to secure online traffic, carries a significant flaw, a group of European and American mathematicians and cryptographers has found. Public key cryptography requires the sender and the receiver of a message to each have a digital key to encrypt and decrypt it, respectively. One of these keys is kept private. For this to work securely, the keys have to be generated totally at random. However, the researchers found that some of the keys had duplicates, which might allow the owner of one of the duplicates to hack into the messages of the other.

Except in cases where people had the same key re-signed by a different certification authority, "the problems observed represent security problems, and there really isn't any acceptable norm for security defects," Paul Kocher, president and chief scientist of Cryptography Research, told TechNewsWorld.

What the Researchers Discovered

The researchers collected 11.7 million openly accessible public keys from the Web. These contained 6.4 million distinct RSA moduli. RSA is an algorithm for public key cryptography. Users create and then publish the product of two large prime numbers, together with an auxiliary value, as their public key, but keep the prime factors secret. The team found that 4 percent of 6.6 million distinct X.509 certificates and PGP keys had duplicate RSA moduli. Further, 1.1 percent of the moduli were duplicated more than once, with some being duplicated thousands of times.

X.509 is a standard established by the International Telecommunication Union that specifies formats for public key certificates, attribute certificates and other elements of a public key infrastructure. PGP, or Pretty Good Privacy, is a data encryption and decryption program used for encrypting emails, files, directories and whole disk partitions, among other things.
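The article describes outright duplicate moduli, but the same body of research highlighted an even sharper failure mode: two moduli that share just one prime factor. Anyone can then recover that factor with a single GCD computation over public data, breaking both keys. A toy sketch (the primes here are tiny illustrative values; real RSA primes run to hundreds of digits):

```python
from math import gcd

# Two parties whose key generators happened to produce the same
# prime p (e.g. from a low-entropy random source), with different
# second primes q1 and q2.
p, q1, q2 = 10007, 10009, 10037
n1 = p * q1           # first party's public modulus
n2 = p * q2           # second party's public modulus

shared = gcd(n1, n2)  # computable by anyone from the public keys
assert shared == p    # the common prime falls out immediately

# With p known, both moduli factor completely:
assert n1 // shared == q1 and n2 // shared == q2
```

Against millions of collected keys this is done pairwise, or with batch-GCD tricks, which is how studies of this kind locate vulnerable moduli at scale.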
What the Findings Mean

"This is a problem that should never occur if systems are properly designed, so it's evidence of defects in systems that are in day-to-day use," Cryptography Research's Kocher remarked. The findings also "give some real statistical evidence about the prevalence of a security issue, which is usually fairly hard to come by."

However, there's disagreement over how serious a problem this is. "The report talks about generally theoretical attacks that may be practical in an exceptionally small set of targeted attack scenarios and would usually require more work than using simple social engineering or other standard surveillance tactics to obtain the desired content," Randy Abrams, an independent security consultant, told TechNewsWorld.

"Any duplication of keys represents a theoretical threat," Rob Enderle, principal analyst at the Enderle Group, pointed out. However, "the authors did not demonstrate an actual breach, and … it may still be easier for an attacker to use a brute force attack, since determining which keys were duplicated, let alone getting hold of the duplicates, might prove unacceptably difficult."

Some devices and appliances will need security updates, and "users who rely on SSL security to secure remote login into appliances may want to consider adding another layer of security, like a [virtual private network]," Cryptography Research's Kocher said.
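To see why the keys "have to be generated totally at random", consider what happens when two devices seed an ordinary pseudorandom generator from the same low-entropy value, such as a coarse boot timestamp. The toy_keygen helper below is a hypothetical stand-in for real key generation, used only to show the collision:

```python
import random
import secrets

def toy_keygen(seed: int) -> int:
    """Stand-in for key generation: derive a 128-bit 'secret' from a
    deterministic PRNG seeded with the given value."""
    return random.Random(seed).getrandbits(128)

# Two devices booting in the same second derive the same seed...
boot_second = 1_600_000_000
key_a = toy_keygen(boot_second)
key_b = toy_keygen(boot_second)
assert key_a == key_b        # ...and hence identical "private" keys

# A cryptographically strong entropy source avoids the collision:
strong_a = secrets.randbits(128)
strong_b = secrets.randbits(128)
assert strong_a != strong_b  # equal only with probability 2**-128
```

This is why embedded devices and appliances, which often boot into key generation with very little accumulated entropy, featured prominently among the duplicated keys.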