October is National Cyber Security Awareness Month in the United States. Federal, state and local agencies use this time to educate businesses, organizations and individuals about safe internet use. It is relevant to everyone who connects to the internet, from individuals to companies, organizations, charities, schools and universities. Online safety is a familiar concern for computer users, but mobile security remains a low priority for many. Although many people are aware that their mobile devices can be hacked, the most basic protective steps are often skipped: a large portion of the population still has not set a password to lock their devices, and those who have a mobile antivirus app installed remain in the minority. Cyber Security Awareness Month is designed to help improve this situation.

There are many threats the campaign hopes people will begin to recognize. The threats faced by mobile device users include identity theft, viruses, phishing attempts and online harassment. October is a good time for people to think about these concerns and take action to protect themselves. Many of these measures are exceptionally easy to implement:

• Parents can speak with their children about staying safe when using a mobile phone or tablet.
• Computers, smartphones and tablets should have antivirus and firewall software installed and activated.
• The security features built into the majority of smartphones should be activated and used.
• Apps should be kept up to date so you are always running the most secure versions.
• Pay attention to suspicious ads, activities and behaviors, and avoid opening or clicking on them.
• Back up files regularly and keep passwords strong – or use a secure password manager.

Cyber Security Awareness Month isn't designed to frighten people, only to educate them so they can keep their data safe. By taking these steps now, they will soon become second nature.
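On the password tip above: a password manager can generate strong, unique passwords for you. Purely as an illustration (this is not any particular manager's algorithm), a strong random password can be produced with a few lines of Python's `secrets` module:

```python
import secrets
import string

def make_password(length=16):
    """Generate a random password drawn from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(make_password())  # a different strong password on every call
```

Using `secrets` rather than `random` matters here: it draws from the operating system's cryptographically secure source of randomness.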
July 31, 2020

How Amazon Storage Gateway Works: Complete Walkthrough

Amazon Web Services (AWS) is used by many users and organizations given its scalability, reliability, and other advantages. When migrating data to the AWS cloud, you should take certain features into account. By default, Amazon provides a web interface for managing the cloud environment and uploading/downloading files. However, using the web interface to regularly upload large amounts of data can be inconvenient. If you use Amazon S3 as the cloud storage for your data, you can mount an S3 bucket as a network disk in your operating system, which can significantly simplify copying files to the Amazon cloud. This solution is suitable for small companies and individual users, but if you are going to use Amazon cloud storage in a large enterprise environment, you need a more scalable solution. Another consideration is that rebuilding an entire infrastructure is not easy if most workloads run on physical servers in your company's data centers. Fortunately, Amazon provides a special tool that allows you to use your traditional physical infrastructure to copy data to and from the Amazon cloud. This tool is called AWS Storage Gateway, and this blog post explains how to configure and use it.

What Is AWS Storage Gateway?

AWS Storage Gateway is a solution that acts as a bridge between your traditional physical or virtual machines and cloud storage in AWS, providing seamless integration between on-premises and cloud environments. It provides access to unlimited Amazon S3 storage, Amazon S3 Glacier, Amazon S3 Glacier Deep Archive, and Amazon EBS (Elastic Block Store). This concept of integrating on-premises storage and cloud storage is also called hybrid storage. An internet connection is required for Amazon Storage Gateway because a connection to AWS servers must be established.
Types of Storage Gateway

There are three types of AWS Storage Gateway: File Gateway, Volume Gateway, and Tape Gateway.

File Gateway. This storage gateway type provides access to files that are stored as objects in an Amazon S3 bucket via SMB shares (protocol versions 2 and 3) and NFS shares (protocol versions 3 and 4.1). An SMB (Server Message Block) or NFS (Network File System) mount point must be configured in your operating system to access files/objects in an S3 bucket. The File Gateway supports the following Amazon S3 storage classes: S3 Standard, S3 Standard-Infrequent Access (IA), and S3 One Zone-IA. Versioning is supported: you can edit, delete, and rename files over NFS or SMB, and each file modification is stored as a new version in the S3 bucket. The main advantage of enabling versioning for a file share is extended recovery capabilities. In addition to versioning, you can enable lifecycle management and cross-region replication for objects stored in Amazon S3. You can deploy one Storage Gateway VM on server 1 in data center 1 and another on server 2 in data center 2. If both gateways are connected to the same bucket, and each server is connected to its gateway, you can upload a file to the S3 bucket from server 1 and see that file on server 2 through the NFS or SMB share. This is possible thanks to the RefreshCache API call, which initiates a re-inventory on the second File Gateway.

Volume Gateway allows your on-premises servers and applications to connect to AWS block storage (EBS volumes) in the cloud by using the iSCSI protocol (Internet Small Computer Systems Interface). While SMB and NFS, used by a File Gateway, are file-level sharing protocols, iSCSI works at the block level. There are two types of Volume Gateway: Stored Volumes and Cached Volumes.

Stored Volumes.
Your local storage, such as a hard disk drive on a physical server or a virtual disk of a virtual machine, is used as the main data storage, and data is backed up asynchronously to Amazon S3 as EBS snapshots. Stored Volumes give you low-latency access to your storage. The size of a stored volume can be between 1 TB and 16 TB, and a stored volume is mounted as an iSCSI device.

Cached Volumes (Cached Gateway). Frequently accessed data is stored in EBS volumes, and infrequently used data is migrated to Amazon S3. This approach is more cost-effective because Amazon S3 storage is cheaper than EBS volumes. The maximum size of a volume is 32 TB, and a volume is mounted as an iSCSI device. When using a Volume Gateway for block storage, volumes can be attached to or detached from the gateway. This allows you to migrate volumes between gateways, for example when upgrading storage hardware on local (on-premises) servers.

Tape Gateway is used to back up data for long-term archival to virtual tapes; the data is actually stored in Amazon S3 Glacier or Amazon S3 Glacier Deep Archive. The physical interface used to write data to tapes via tape drives and tape libraries is replaced with a compatible virtual tape library interface that stores data in the Amazon cloud. The iSCSI protocol is used to connect existing backup devices to the Tape Gateway, so your existing backup configuration and workflow can be preserved. You can save data to the cloud directly via the Tape Gateway or by using specialized data backup applications. Tape Gateways can be used to back up data without significant changes to an existing backup configuration, or as a cost-effective alternative to physical tape drives and libraries.
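The RefreshCache API call mentioned above can also be invoked on demand, for example after another client has written objects directly to the bucket. The sketch below only assembles the corresponding AWS CLI invocation; the share ARN used in the comment is a placeholder, and actually running the command requires configured AWS credentials:

```python
import subprocess  # only needed if you actually execute the command

def refresh_cache_cli(file_share_arn, folders=("/",), recursive=True):
    """Build the AWS CLI command that triggers RefreshCache for a file share."""
    cmd = ["aws", "storagegateway", "refresh-cache",
           "--file-share-arn", file_share_arn,
           "--folder-list", *folders]
    cmd.append("--recursive" if recursive else "--no-recursive")
    return cmd

# Placeholder ARN; with credentials configured you could run:
# subprocess.run(refresh_cache_cli(
#     "arn:aws:storagegateway:us-east-1:123456789012:share/share-ABCD1234"))
```

Refreshing only the folders you know have changed (instead of the whole share recursively) keeps the re-inventory cheap on large buckets.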
Supported Host Platforms

AWS Storage Gateway is provided as a virtual appliance (a virtual machine image/template) that can be deployed on different platforms. Supported host virtualization platforms are VMware ESXi, Microsoft Hyper-V, and Linux KVM, and a gateway can also run as an Amazon EC2 instance. However, there is also a hardware appliance that can be used if an organization doesn't have any hypervisors in its infrastructure. You can purchase the hardware Amazon Storage Gateway appliance on the Amazon website and have it delivered to you.

Amazon's pricing policy for its cloud services is pay for what you use, and Amazon Storage Gateway is no exception. Charges depend on the type of storage (Amazon S3 or EBS) and the AWS Region. If you store data in Amazon S3, the price depends on the S3 storage class and the number of requests, and is calculated per GB per month. If you store data in EBS volumes, snapshots are billed when they are taken. For the gateway itself, you are charged for gateway usage (per gateway per month). You can check the current prices on the Amazon website at any time.

Advantages of AWS Storage Gateway

The main advantages of using Amazon Storage Gateway are:
- Integration with existing hardware and software configurations, with no hardware changes.
- The ability to combine on-premises storage and Amazon cloud storage (the hybrid storage concept).
- Smooth migration from physical infrastructure to the AWS cloud.

How to Deploy AWS Storage Gateway?

Let's find out how to deploy AWS Storage Gateway to access files stored as objects in Amazon S3 by connecting to a File Gateway via NFS. You need an AWS account and an ESXi host to run the Storage Gateway VM.

Downloading the image

Open the web interface of the AWS console. Click Services and select Storage Gateway in the Storage category. On the Storage Gateway page, click the Create Gateway button in the Gateways section of the navigation pane. On this page you can also find any previously created storage gateways. The Create gateway wizard opens.
Select gateway type. Select File gateway to store files as objects in Amazon S3. Click Next at each step of the wizard to continue.

Select host platform. There are five supported host platforms. Select VMware ESXi and click the Download image button, then save the file. In our case, the name of the downloaded file is aws-storage-gateway-latest.ova, and we save it to D:\virtual\ on a local machine. Don't close the current browser tab with the Create gateway wizard displayed in the AWS web interface, because you will need to continue configuring the storage gateway from this step later.

Deploying the virtual appliance on an ESXi host

Now you have to deploy the downloaded aws-storage-gateway-latest.ova template on an ESXi host. In our example, ESXi hosts are managed by vCenter, and we use the web interface of VMware vSphere Client to deploy the AWS Storage Gateway virtual appliance. Connect to your vCenter, go to Hosts and Clusters, and select an ESXi host that has enough free resources. Requirements: the File Gateway requires 16 GB of RAM, 4 virtual processors (vCPUs), one 80-GB virtual disk, and one additional 150-GB virtual disk for the storage cache. In our example, we select the host with IP address 10.10.10.90. After selecting the ESXi host in vSphere Client, click Actions > Deploy OVF Template. The Deploy OVF Template wizard opens.

1. Select an OVF template. Select Local file and click Browse to select the downloaded OVA file. In this case we select aws-storage-gateway-latest.ova in D:\virtual\. Hit Next at each step of the wizard to continue.
2. Select a name and folder. Enter a virtual machine name, for example aws-storage-gateway, and select a location for the VM in vCenter.
3. Select a compute resource. Select the ESXi host where the Storage Gateway VM will run. We select 10.10.10.90 in this example.
4. Review details. Verify and review the template details of the AWS Storage Gateway virtual appliance you are about to deploy.
5.
Select storage. Select the datastore with enough free space to store the virtual disk files and other VM files, then select the virtual disk format. It is recommended to select Thick Provisioned as the virtual disk format, because all storage space needed for the virtual disk is then allocated immediately. Read more about thick and thin provisioning in this blog post.
6. Select networks. Select a vSwitch that is connected to a router and provides an internet connection. A virtual network adapter of the virtual machine will be connected to this vSwitch and the appropriate network after deployment.
7. Ready to complete. Review the configuration of the VM that will be deployed from the template and hit Finish to start VM creation. Read the blog post about VM templates to learn more.

Wait until the Storage Gateway VM is deployed from the template. You can see the job status in the Recent Tasks toolbar in vSphere Client. Once the VM is deployed, you can see the VM name you defined earlier in the list of VMs on the appropriate ESXi host (10.10.10.90 in our case). Right-click the VM (aws-storage-gateway is the name of the Storage Gateway VM deployed from the template in this example) and in the context menu hit Edit Settings.

Now you have to add a new virtual hard disk for the cache. This disk is used to store recently and frequently accessed files, reducing latency when accessing that data. In the Virtual Hardware tab of the Edit Settings window, click Add new device and select Hard disk. The recommended minimum size for the cache disk used by AWS Storage Gateway is 150 GB, and you should create it as a Thick Provisioned virtual disk. In the New Hard disk field, enter 150 GB; in the Disk Provisioning field, select one of the Thick Provisioning options. Hit OK to save the settings and create the virtual disk. Make sure that time is set correctly on the Storage Gateway VM, ESXi hosts, and vCenter servers.
Time on the VM must be synchronized to avoid issues and to ensure successful gateway activation. Select your aws-storage-gateway VM in the list of VMs and click Edit Settings. On the Edit Settings screen, select the VM Options tab, click VMware Tools to expand the settings, and select the "Synchronize guest time with host" checkbox. Hit OK to save the settings.

Testing network connectivity

It is recommended to test the network connection between the Amazon Storage Gateway VM running locally and AWS cloud storage. Power on the Storage Gateway VM and log into the AWS appliance VM using the default credentials. You can then check the IP address. If there is a DHCP server in your network, the IP configuration is obtained automatically, but it is recommended to set a static IP address for long-term use of the Amazon Storage Gateway. To set a static IP address, press 2 (Network Configuration), then 3 (Configure Static IP), and follow the prompts. In our example, the IP address of the Storage Gateway virtual appliance is 192.168.17.122 and the netmask is 255.255.255.0. After configuring the IP settings, test the network connectivity: in the main menu select 3 (Test Network Connectivity), then select the endpoint type (1). The test passed in our case, as you can see on the screenshot below.

Creating a bucket

Before you can continue, ensure that a bucket has been created in Amazon S3 for your account. You can use this link to create a bucket. The name of the bucket used in this walkthrough is blog-bucket01. Your AWS account must have sufficient permissions. AWS access keys should be generated if you are going to use other applications to access the bucket; you can get the AWS access key ID and secret access key for your account on this AWS page.

File Gateway activation

Now you have to define the IP address of the Amazon Storage Gateway and activate the File Gateway. Go back to the AWS console web interface.
As you recall, we stopped at the second step of the Create gateway wizard. If you have closed that page, open the AWS console, go to Services > Storage Gateway, and click Create Gateway. In the first step of the wizard (Select gateway type), select File gateway. These steps are explained, with screenshots, at the beginning of the walkthrough.

Select host platform. Select VMware ESXi and click Next.

Select service endpoint. Select Public as the endpoint type, then hit Next. Network access from your web browser to TCP port 80 on the Storage Gateway VM must be allowed.

Connect to gateway. Check the IP address of the gateway VM (virtual appliance). You can find the IP address of the VM in the interface of VMware vSphere Client by selecting the virtual machine. In this example, the internal IP address of the AWS Storage Gateway VM is 192.168.17.122. Enter the IP address of the VM (the Storage Gateway virtual appliance), not the external (WAN) IP of your router, and click Connect to gateway.

Activate gateway. Activation securely associates your gateway with your AWS account. Select the gateway time zone and enter the gateway name, for example Storage Gateway AWS. The name can be different from the name of the VM and the DNS name of the appliance. Remember that TCP port 80 must be open on the gateway VM. Click Activate gateway and wait until the cache disks are identified.

Configure local disks. Ensure that your 150-GB virtual disk is allocated to cache, then hit Configure logging.

Configure logging. Logging provides additional capabilities for troubleshooting and auditing. Select Create a new log group and click Verify VMware HA.

Verify VMware HA. Click Verify VMware HA if your Storage Gateway VM is running in a VMware High Availability cluster. We hit Exit, as the Amazon Storage Gateway virtual appliance is not deployed in a VMware HA cluster in our example. The File Gateway has now been successfully created and is running.
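Behind the Connect to gateway step, the console retrieves an activation key over HTTP from the gateway on port 80, which answers with a redirect whose query string carries the key (this is why that port must be reachable). As a rough illustration of that mechanism only, with a hypothetical redirect URL, the key could be extracted like this:

```python
from urllib.parse import urlparse, parse_qs

def activation_key_from_redirect(location_url):
    """Pull the activationKey query parameter out of a gateway redirect URL.
    The URL shape below is illustrative, not a captured AWS response."""
    query = parse_qs(urlparse(location_url).query)
    return query.get("activationKey", [None])[0]

# Hypothetical redirect a gateway might return on port 80:
redirect = ("https://console.aws.amazon.com/storagegateway/activate"
            "?activationKey=ABCDE-12345-FGHIJ-67890-KLMNO&gatewayType=FILE_S3")
print(activation_key_from_redirect(redirect))  # → ABCDE-12345-FGHIJ-67890-KLMNO
```

In the console-driven flow above you never handle the key yourself; it only becomes relevant if you script activation instead of using the wizard.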
Creating a file share

It's time to create a file share in order to connect to a bucket by using the standard NFS or SMB (CIFS) protocols. Let's configure the connection to an Amazon S3 bucket via NFS. Select your File Gateway and click Create file share (see the screenshot above). Enter the S3 bucket name; the name of the bucket used in this example is blog-bucket01. For Access objects using, select Network File System (NFS). For Gateway, select the deployed Storage Gateway in the drop-down list. Adding tags is optional and can be skipped. Hit Next to continue.

Storage. Configure how files will be stored in Amazon S3:
Amazon S3 bucket name: blog-bucket01
Storage class for new objects: S3 Standard
Object metadata: the Guess MIME type and Give bucket owner full control checkboxes must be selected.
Access to your S3 bucket: Create a new IAM role.
Encryption: S3 managed keys (SSE-S3).

Review. You can leave the default values for the IAM role and other settings, except the following:
Allowed clients: 0.0.0.0/0 allows access from any IP address by default. It is recommended to define custom allowed IP addresses for security reasons.
Squash level: click Edit in the Mount options and select All squash to make sure that everything will work properly.

Click Create file share. If the Create file share button is not active, click Previous and then click Next again. The NFS file share is now created on your file gateway. At the bottom of the File shares page you can see examples of commands that can be used to mount your file share on Linux, Windows, and macOS.

Connecting to the file share

Let's create a directory that will be used as the mount point on a Linux machine (Ubuntu 18.04) and set the needed permissions. In this example, the name of our Linux user account is user1.
If the mount point directory does not exist yet, create it first:

sudo mkdir -p /mnt/s3-gateway

Set the owner and permissions for the created directory:

sudo chown user1:user1 /mnt/s3-gateway
sudo chmod 0775 /mnt/s3-gateway

Mount the NFS share provided by the AWS Storage Gateway:

sudo mount -t nfs -o nolock,hard 192.168.17.122:/blog-bucket01 /mnt/s3-gateway

An error can occur:

mount: /mnt/s3-gateway: bad option; for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount.<type> helper program

In this case, install the nfs-common package:

sudo apt install nfs-common

Then run the mount command again:

sudo mount -t nfs -o nolock,hard 192.168.17.122:/blog-bucket01 /mnt/s3-gateway

You can check whether the S3 bucket (blog-bucket01) is mounted as the NFS share provided by AWS Storage Gateway at /mnt/s3-gateway with the commands:

mount | grep gateway
ls -al /mnt/s3-gateway/

As you can see on the screenshot below, the bucket is mounted successfully, and the same contents are displayed in the web interface of AWS. You can verify that everything works by copying a file to the bucket from the Linux console and then checking the contents of the bucket in the AWS web interface. You can configure automatic mounting at Linux boot by editing /etc/fstab, for example with an entry like: 192.168.17.122:/blog-bucket01 /mnt/s3-gateway nfs nolock,hard,_netdev 0 0. Similarly, you can configure an SMB (CIFS) share on your AWS Storage Gateway and mount that share in different operating systems. If you select the SMB option for a file gateway, you can also join your AWS Storage Gateway to an Active Directory domain.

Amazon Storage Gateway is a hybrid cloud solution that allows you to use your current physical and virtual infrastructure with Amazon cloud storage without significant changes to your current hardware and software configuration. Standard storage protocols are used: SMB and NFS provide file-level access to files stored as objects in Amazon S3, and iSCSI provides access to block storage (Amazon EBS volumes). You can also connect to virtual tape libraries in Amazon S3 via iSCSI instead of using physical tape libraries.
This blog post has covered the working principle of Amazon Storage Gateway and explained how you can deploy the File Gateway on VMware ESXi and connect to an Amazon S3 bucket through the File gateway via NFS from Ubuntu Linux. Amazon Storage Gateway can be used to copy your data backups to AWS manually or with special backup solutions that can work with NFS, SMB or iSCSI protocols. NAKIVO Backup & Replication is a universal data protection solution that allows you to back up data to Amazon S3 and Amazon EBS. NAKIVO Backup & Replication can back up data to Amazon S3 directly without using AWS Storage Gateway. Download the free trial and perform AWS EC2 backup and backup to Amazon S3 in your organization.
Low code no code is a way to allow people without much development experience to create software with minimal training. While it's getting a lot of attention now, it's not exactly a new idea: many people use programs that depend on this exact type of environment every day. However, new platforms that encourage citizen developers to create their own solutions have shined a spotlight on it. The prevailing feeling is that low code no code is what will make software creation accessible to all.

It's not a process that comes without risk. On the contrary, it's a development environment seeing massive, rapid adoption by many individuals who aren't entirely familiar with security and compliance. It will be up to industry leaders to provide the tools and support low code developers need to create new programs safely.

What Is Low Code No Code?

Low code no code is not as new as it seems. A good example is Excel: anyone who has ever worked in a spreadsheet, created a pivot table, or entered a formula has done low code no code. Over the years, the approach has seeped into just about every industry, though mostly for completing smaller, specific tasks, like adding a column of numbers or putting a picture on a website. Now, low code no code platforms are emerging that allow organizations to develop entire software programs based on their own specific needs, and they bring both benefits and risks.

Low code no code is a fantastic development, but rapid adoption will lead to increased risk. It's up to industry leaders to smooth the way by establishing processes that make security part of the environment.

Using DevSecOps with Low Code No Code Platforms

DevSecOps can still apply in a low code no code environment, but it needs to happen at the platform level. Companies that provide low code no code environments must be aware of the risks and cater to their users.
Someone who hasn't coded before won't necessarily know that most good developers write tests in conjunction with their code to verify its stability and accuracy as they go. This "test-driven development" allows creators to fix issues while they're small and minimizes the risk of broken code. Many low code no code environments do not natively support such testing practices; their interfaces focus on the drag-and-drop elements the individual needs to build a basic program. It's up to the people who developed the platform to build the tests necessary to guarantee code efficacy and security. This is the DevSecOps best practice of automation: creators of the platform can develop automated tests for each unit of code.

As this is a rapidly accelerating area with no clear best practices, someone must take ownership of security. The most reasonable solution is for this to be the domain of the creators of no code low code platforms. They can create the standards that all will follow and mandate their use through automation. The platform has inherent security, but custom pieces have to attend to their own security. This is where low code no code platforms become risky: they put largely untrained people in the driver's seat, and there's usually little-to-no oversight of their work. Companies pursuing this approach will need to think about this reality and pivot their strategies accordingly.

Low code no code is what will allow entire industries to adopt new programs customized for their specific needs. However, as with any other rapid technology advancement, it's not without its threats. Those who provide no code low code platforms need to understand the risks and be prepared to offer security. Sticking to DevSecOps principles based on regular and consistent testing will help facilitate the adoption of this coding environment while limiting risks.
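To make the test-driven practice described above concrete, here is a deliberately tiny, invented example: the function and its checks stand in for a unit of logic a platform might generate on behalf of a citizen developer.

```python
def apply_discount(price, pct):
    """Apply a percentage discount; reject out-of-range values early."""
    if not 0 <= pct <= 100:
        raise ValueError("pct must be between 0 and 100")
    return round(price * (1 - pct / 100), 2)

# Tests written alongside the code catch mistakes while they're small.
assert apply_discount(100.0, 10) == 90.0
assert apply_discount(19.99, 0) == 19.99
try:
    apply_discount(50.0, 150)
except ValueError:
    pass  # invalid input is rejected as intended
```

In a low code platform, the user would only drag the "discount" block into place; generating and running checks like these automatically is the platform vendor's job.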
Privacy is a fundamental human right, and one that has deep roots in early American history. The colonists felt so strongly about it, in fact, that it became one of the main pressure points that provoked the Revolutionary War. Once the war was over, the Founding Fathers made sure to protect the people's right to privacy in the Fourth Amendment, which states:

"The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized" – Constitution of The United States of America, Amendment 4

The Founders were interested in protecting their property and papers, and we can only imagine that, had they lived in the 21st century, they would have been just as interested in protecting their electronic communications. An attempt to protect electronic communications was legislated in the United States in 1986: the Electronic Communications Privacy Act (ECPA), which has been amended and expanded in the decades since. It is drastically more modern than the Fourth Amendment, but because it predates the modern internet, it is understandably lacking. When this bill was written, nobody knew the extent to which the use of email would grow, or that it would become a primary form of business communication.
Here are some areas where the ECPA is lacking:
- The government is allowed to investigate emails that are considered "outdated."
- Any email older than 6 months is considered outdated.
- Any data stored in the cloud (not just email) is subject to federal investigation without a warrant, provided it also classifies as outdated.
- Users aren't notified when their information is subject to search.

The ECPA and the risk it poses to our privacy must have Paul Revere, John Adams, and the rest of the Revolutionaries turning in their graves.

The Email Privacy Act (EPA)

A solution to fix the shortcomings of the Electronic Communications Privacy Act is in the works: the Email Privacy Act. The bill attempts to codify the ruling in U.S. v. Warshak, a federal appeals court decision which upheld that a warrant is required for the government to access emails stored by cloud service providers such as Google, Microsoft, and Dropbox. The Email Privacy Act flew through the House of Representatives in February 2017, receiving unanimous approval. This isn't the first time the bill has been proposed, however. It has already failed on two previous occasions: the first time it failed to pass the House, and the second time it unanimously passed the House but failed in the Senate, solely because senators tried to change the bill at the last minute in a way that would have been harmful to our privacy. It is hoped that this time around the bill can become law, a needed step toward greater privacy for email and cloud storage in the United States.

Other threats to our privacy exist in addition to the holes in the ECPA. Identity thieves, hackers, and other criminals exploit weaknesses in passwords or email mailbox security for profit. It is important to understand that information theft carries significantly higher consequences than we may think.
Here are some threats that an insecure mailbox could pose:

Identification information: Your inbox might contain very sensitive information such as your social security number (we highly recommend never sending this information via email) as well as your date of birth, height, physical appearance, family members, and more. Piecing all of this together makes it very easy for an identity thief to pretend to be you.

Financial information: Bank emails, emails from credit card providers, bank card information, and the like all pose a significant financial threat. If there are enough unprotected financial emails and details in your inbox, there is a real risk that somebody could make an unauthorized withdrawal from your savings account, or that your credit card could fund their next shopping spree.

Passwords: How many websites require your email for verification when signing up? Once you sign up, they send you a verification email that often contains your username and password for your records. If you use one password for multiple services (a practice we also highly advise against), a hacker could access multiple accounts by obtaining just one password.

To a cyber criminal, finding an unsecured mailbox could mean hitting the motherlode. Besides loopholes in the ECPA (which will hopefully be closed shortly), the privacy of your mailbox is completely up to you. If you take the right measures and install the right protection software, your mailbox will be protected from hackers and criminals. We outlined some preventative measures that you can take in a related blog post here: Secure Your Business Email.

As for enterprise email protection, Micro Focus can help. GWAVA Secure Web Gateway is our antivirus/antispam solution that prevents malware from ever reaching your inbox or spreading throughout your server. Cyber criminals become more and more crafty each year.
They are experts at creating innocent-looking emails with viruses hidden in executable files. These executables can be concealed in attachments or disguised as links, such as an unsubscribe button. Even the most cautious users are at risk. This is why it is so important to block these messages before they ever enter your mailbox or system! With GWAVA, you can enable high-performance email scanning that threads scan processes asynchronously across all available resources on the server, preventing dangerous programs from ever infecting your environment and obtaining your sensitive information. The solution also monitors internet traffic to block illicit images and defends against DoS/DDoS attacks. All of this is managed from an easy-to-use, scalable web interface.
In the recent past, several security vulnerabilities have been discovered in widely used software products. Because these products are installed on a significant number of internet-connected devices, they entice threat actors to build botnets, steal sensitive data, and more. In this article we explore:

- Vulnerabilities detected in some popular products.
- Target identification and exploitation techniques employed by intrusive threat actors.
- Threat actors' course of action upon identifying a flaw in widely used internet products/technology.

Popular Target Vulnerabilities and their Exploitation

Ghostcat: Apache Tomcat Vulnerability

All Apache Tomcat server versions are vulnerable to local file inclusion and potential RCE. The issue resides in the AJP protocol, an optimised version of the HTTP protocol. The years-old flaw exists because a component handled a request attribute improperly. The AJP protocol, enabled by default, listens on TCP port 8009. Multiple scanners, exploit scripts, and honeypots surfaced within days of the original disclosure by Apache. Stats published by researchers indicate a large number of affected systems, far more than originally predicted.

Citrix ADC, Citrix Gateway RCE, Directory Traversal

Recently, directory traversal and RCE vulnerabilities in Citrix ADC and Gateway products affected at least 80,000 systems. Shortly after the disclosure, multiple entities (ProjectZeroIndia, TrustedSec) publicly released PoC scripts, which engendered a slew of exploit attempts from multiple actors in the wild.

Jira Sensitive Data Exposure

A few months ago, researchers found Jira instances leaking sensitive information such as names, roles, and email IDs of employees.
Additionally, internal project details, such as milestones, current projects, owner and subscriber details, etc., were also accessible to anyone making a request to unauthenticated Jira endpoints. Avinash Jain, from Grofers, tested the vulnerability on multiple targets and discovered a large number of vulnerable Jira instances revealing sensitive data belonging to various companies, such as NASA, Google, and Yahoo, and their employees.

Spring Boot Data Leakage via Actuators

Spring Boot is an open-source Java-based MVC framework. It enables developers to quickly set up routes to serve data over HTTP. Most apps using the Spring MVC framework now also use the Boot utility, which helps developers configure which components to add and set up the framework faster. An added feature of the tool, called Actuator, enables developers to monitor and manage their applications/REST APIs by storing and serving request dumps, metrics, audit details, and environment settings. In the event of a misconfiguration, these Actuators can be a back door into the servers, making exposed applications susceptible to breaches. A misconfiguration in Spring Boot versions 1 to 1.4 granted access to Actuator endpoints without authentication. Although later versions secure these endpoints by default and allow access only after authentication, developers still tend to overlook the misconfiguration before deploying the application.
The following Actuator endpoints leak sensitive data:

- /dump: performs a thread dump and returns it
- /trace: returns a dump of the HTTP requests received by the app
- /logfile: returns the app-logged content
- /shutdown: commands the app to shut down gracefully
- /mappings: returns a list of all the @RequestMapping paths
- /env: exposes all of Spring's ConfigurableEnvironment values
- /health: returns the application's health information

There are other such defective Actuator endpoints that allow an attacker to:

- Gain system information
- Send requests as authenticated users (by leveraging session values obtained from the request dumps)
- Execute critical commands, etc.

Webmin RCE via Backdoored Functionality

Webmin is a popular web-based system configuration tool. A zero-day pre-auth RCE vulnerability affects some of its versions between 1.882 and 1.921. The flaw lies in the remote password-change functionality: the Webmin code repository on SourceForge was backdoored with malicious code allowing remote command execution (RCE) on an affected endpoint. The attacker sends commands piped into the password-change parameters through `password_change.cgi` on the vulnerable host running Webmin. And if the Webmin app is hosted with root privileges, the adversary can execute malicious commands as an administrator.

Why do threat actors exploit vulnerabilities?

- Breach user/company data: exfiltration of sensitive/PII data
- Computing power: infecting systems to mine cryptocurrency or serve malicious files
- Botnets, serving malicious files: exploits aimed at adding more bots to a larger botnet
- Service disruption and, eventually, ransom: locking users out of their devices
- Political reasons, cyber war, an angry user, etc.

How do adversaries exploit vulnerabilities?

On disclosure of such vulnerabilities, adversaries probe the internet for technical details and exploit code to launch attacks.
RAND Corporation's research and analysis on zero-day vulnerabilities states that, after a vulnerability disclosure, it takes 6 to 37 days (a median of 22 days) to develop a fully functional exploit. When an exploit disclosure comes with a patch, developers and administrators immediately patch the vulnerable software; auto-updates, regular security updates, and large-scale coverage of such disclosures help contain attacks. However, several systems run unpatched versions of software or applications and become easy targets for such attacks.

Steps involved in vulnerability exploitation

Once a bad actor decides to exploit a vulnerability, they have to:

- Obtain a working exploit, or develop one (in the case of a zero-day vulnerability)
- Utilize the proof of concept (PoC) attached to a bug report (in the case of a bug disclosure)
- Identify as many hosts as possible that are vulnerable to the exploit
- Maximise the number of targets, to maximise profits

Even though the respective vendors patch reported vulnerabilities, a search of GitHub or of specific CVEs on Exploit-DB turns up PoC scripts for the issues. Usually a PoC script takes a host/URL as input and reports whether the exploit or examination succeeded. Adversaries identify vulnerable hosts through their signatures and behaviour, to generate a list of exploitable hosts. The following components carry signatures that reveal whether a host is vulnerable:

- Default ports
- Distinct paths
- Subdomain patterns
- Indexed content/URLs

Most commonly used software has a specific default installation port (or ports). If a port is not configured, the software installs on a pre-set port, and in most cases software is installed on that default. For example, most systems use default port 3306 for MySQL and port 9200 for Elasticsearch. So, by curating a list of all servers with port 9200 open, a threat actor can find systems likely running Elasticsearch. However, port 9200 can be used by other services/software as well.
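The default-port fingerprinting described above can be sketched in a few lines of Python. This is an illustrative sketch, not a real scanner: the service-to-port mapping and the helper names are assumptions for this example.

```python
import socket

# Illustrative mapping of software to its default install port
# (assumption: a small sample, not an exhaustive list).
DEFAULT_PORTS = {"mysql": 3306, "elasticsearch": 9200, "webmin": 10000, "ajp": 8009}

def check_port(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def candidate_targets(hosts, service):
    """Keep only the hosts that have the service's default port open."""
    port = DEFAULT_PORTS[service]
    return [h for h in hosts if check_port(h, port)]
```

An attacker-style sweep would run `candidate_targets(ip_list, "webmin")` over large address ranges; defenders can run the same check against their own address space to find unintentionally exposed services.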
Using port scans to discover targets for the Webmin RCE vulnerability:

- Determine the default port Webmin listens on after installation: port 10000.
- Get a working PoC for the Webmin exploit.
- Execute a port scan for port 10000 across all hosts connected to the internet.
- This yields a list of all possible Webmin installations that could be vulnerable to the exploit.

In addition, tools like Shodan make port-based target discovery effortless. If Shodan does not index the target port, attackers leverage tools like Masscan or Zenmap and run an internet-wide scan; the latter approach hardly takes a day if the attacker has enough resources. Similarly, an attacker in search of an easy way to find systems affected by Ghostcat will port-scan the target IPs and narrow down on machines with port 8009 open.

Software and services are also commonly installed on a distinct default path, so the software can be fingerprinted by observing that signature path. For instance, WordPress installations can be identified if the path 'wp-login.php' is detected on the server. A distinct path also makes the service easy to reach from a web browser: when the phpMyAdmin utility is installed, by default it is served at the path '/phpmyadmin', and a user accesses the utility through this path. In this case a port scan won't help, because the utility isn't bound to a specific port.

Using distinct paths to discover targets for the Spring Boot data leakage:

- Gather a list of hosts that run Spring Boot. Since Spring Boot applications start on port 8080 by default, a list of hosts with this port open is a useful starting point; it allows threat actors to spot a pattern.
- Hit specific endpoints like '/trace' and '/env' on those hosts and check the responses for sensitive content. Web path scanners and web fuzzers such as Dirsearch or ffuf facilitate this process.
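As a rough illustration of the path-based discovery steps above, the following hedged Python sketch requests well-known Actuator paths on a host and applies a simple signature check. The marker strings and function names are assumptions chosen for this example, not a real tool's behaviour.

```python
import urllib.request

# Well-known Actuator paths (from the endpoint table earlier in the article).
ACTUATOR_PATHS = ["/env", "/trace", "/dump", "/mappings", "/health"]

def looks_like_actuator(body: str) -> bool:
    """Heuristic signature check used to weed out false positives.

    The markers are assumed examples of strings that tend to appear in
    real actuator responses (environment keys, health status JSON).
    """
    markers = ("java.version", "systemProperties", "profiles", '"status"')
    return any(m in body for m in markers)

def probe(host: str, port: int = 8080):
    """Return the Actuator paths on `host` that appear to be exposed."""
    exposed = []
    for path in ACTUATOR_PATHS:
        url = f"http://{host}:{port}{path}"  # 8080 is Spring Boot's default
        try:
            with urllib.request.urlopen(url, timeout=3) as resp:
                body = resp.read().decode("utf-8", "replace")
                if resp.status == 200 and looks_like_actuator(body):
                    exposed.append(path)
        except OSError:
            continue  # connection refused, timeout, or HTTP error
    return exposed
```

The signature check is exactly the false-positive filtering discussed next: a 200 response alone proves little, so the body is matched against content an actuator would realistically return.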
Though responses may include false positives, actors can use techniques such as signature matching or static rule checks to narrow the list of vulnerable hosts. As this method operates on HTTP requests and responses, the process is much slower than mass-scale port scans. Shodan can also fetch hosts from its index based on HTTP responses.

Software is also commonly installed on a specific subdomain, since that is an easier, standard, and convenient way to operate it. For example, Jira is commonly found on a subdomain such as 'jira.domain.com' or 'bug-jira.domain.com'. Even though there are no rules when it comes to subdomains, adversaries can identify certain patterns. Similar services usually installed on a subdomain include GitLab, FTP, webmail, Redmine, Jenkins, etc. SecurityTrails, Circl.lu, and Rapid7 Open Data hold passive DNS records. Other scanners that maintain such records are sites such as crt.sh and Censys, which collect SSL certificate records regularly and have add-on features that support queries.

The content published by services is generally unique. If we employ search engines such as Google to find pages serving specific content, based on particular signatures, the results will list URLs running a particular service. This is one of the most common techniques for hunting down targets easily, and it is commonly known as 'Google dorking'. For instance, adversaries can quickly curate a short list of all cPanel login pages using the following dork in Google Search: "site:cpanel.*.* intitle:"login" -site:forums.cpanel.net". The Google Hacking Database contains numerous such dorks, and after understanding the search mechanism it is easy to write such queries.

There have been multiple honeypot experiments to study mass-scale exploration and exploitation in the wild.
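The subdomain-pattern and certificate-transparency discovery described above can be sketched as follows. The service list and the naming variants are illustrative assumptions; crt.sh's query interface (where '%25' is a URL-encoded '%' wildcard) is used as in the article.

```python
# Services commonly hosted on their own subdomain, per the article.
COMMON_SERVICES = ["jira", "gitlab", "ftp", "webmail", "redmine", "jenkins"]

def candidate_subdomains(domain: str, services=COMMON_SERVICES):
    """Generate likely service subdomains to test for a given domain."""
    names = []
    for svc in services:
        names.append(f"{svc}.{domain}")
        names.append(f"bug-{svc}.{domain}")  # variant like 'bug-jira.domain.com'
    return names

def crtsh_query_url(domain: str) -> str:
    """Build a crt.sh certificate-transparency search URL covering
    all subdomains of `domain` ('%25.' is the encoded '%.' wildcard)."""
    return f"https://crt.sh/?q=%25.{domain}&output=json"
```

Passive sources such as crt.sh let an adversary (or a defender auditing their own footprint) enumerate subdomains without sending a single packet to the target.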
Setting up honeypots is not only a good way of understanding attack patterns; it also helps identify the malicious actors out there trying to exploit systems in the wild. The IPs and networks identified enumerating targets or exploiting vulnerable systems end up in various public blacklists. Various research efforts have set up diverse honeypots and studied the techniques used to gain access. Most attempts try to gain access via default credentials and originate mainly from blacklisted IP addresses. Another interesting observation is that most honeypot-detected traffic seems to originate from China. It is also very common to see honeypots specific to a zero-day surface on GitHub soon after the release of an exploit: the Citrix ADC vulnerability (CVE-2019-19781) saw a few honeypots published on GitHub within a short time after the first exploit PoC was released. Research carried out by Sophos highlights the high rate of activity on exposed targets using honeypots. As reported in the research paper, the first attack on an exposed target took anywhere from less than a minute to 2 hours. Therefore, if an accidental misconfiguration leaves a system exposed to the internet, even for a short period of time, it should not be assumed that the system was not exploited.
FERPA (Family Educational Rights and Privacy Act) compliance mistakes are expensive. What does that look like in practice?

- Assuming FERPA compliance is commonplace knowledge: Don't assume that FERPA issues are covered in teacher education programs or that all administrative staff members know their obligations.
- Weak access controls over FERPA records: Poor access controls might have been excusable in the past. Today, that's no longer the case. You need to develop a systematic approach to access.
- Failure to fulfill inspection requirements: If you're unable to provide FERPA records upon request to eligible students, you've failed. Efficient information management matters in FERPA.
- Loose definitions of key FERPA terms: Like it or not, FERPA allows for some discretion in compliance. That means it's up to impacted institutions to define and govern roles such as "school official."
- An incomplete inventory of student data: If you're like most schools, you probably have more than one database that contains student data. You need to have strong FERPA governance in place for all student data, not just the data in the primary database.
- Reliance upon manual processes: Unless you're a tiny organization, a manual approach is unworkable. You're just begging for a compliance requirement or record to be missed on a busy school day.

First, Discover Your FERPA Compliance Levels

Leaping into action before you confirm your current situation doesn't make sense. You're likely to waste time and money on tactics that won't make a difference. Instead, you need to first find out where you are. It's like setting off on a road trip; your GPS can't provide directions until it knows your current location.

Tip: To assess a single school, plan to conduct this assessment over the course of a few days. If you're assessing a school district or a large organization, you'll need more support to do the assessment.
- Assess management: Question the school's management to determine whether they understand the basics of FERPA compliance.
- Assess front-line staff: Choose a random selection of teachers, librarians, and other staff to interview. Alternatively, you may want to consider a short survey sent by email.
- Review IT systems and staff: IT has a crucial role to play in supporting FERPA. For the best results, spend at least half of your assessment effort on IT processes.
- Review recent FERPA complaints and issues: Few schools have a perfect record with FERPA; mistakes will happen. What's more important is having a follow-up process to learn from these issues and improve.
- Assess remaining FERPA compliance gaps: In a few cases, you might be able to coach an employee to improve and close an issue. In other situations, a systematic weakness calls for a more thorough improvement.

What can you do if you find deep problems? You need to leverage the right technology, and that's where Avatier comes in.

How to Improve FERPA Compliance with Avatier

Some teachers are used to having "universal access." They like the ability to check in on past students and see who might be coming to their classroom next year. Unfortunately, these habits are going to land you in hot water with FERPA. Satisfying your curiosity about Jimmy's math skills after Jimmy has left your class isn't going to look good on a FERPA compliance memo.

Changing these old teacher habits takes work. Fortunately, you can use one shortcut: cybersecurity software solutions. Here are some ideas that can save you time and help you avoid compliance headaches.

- Standardize access based on roles: Do you spend hours every semester setting up new teachers and assistants with accounts? Those days are over. Use Group Requester for set-it-and-forget-it access at the group level.
- Save time on passwords: Mandating strong passwords comes with a trade-off, as users are more likely to forget their passwords.
You can address this concern by providing a self-serve password reset.

- Keep perfect access change records: If and when you face a FERPA investigation, records matter! If you can prove when and how access privileges were managed, that shows you have a professional organization. Audit logs and changes are automatically tracked when you use Avatier.

If you run a large institution, you may need even more tools and support to address FERPA. We have an idea to cover you: use containers.

Leveraging Docker Containers to Improve FERPA Compliance

Efficiency and management oversight are essential components of a successful FERPA compliance program. There's just one problem with that approach: it takes time to carry out proper oversight. Inspecting your critical servers regularly for security flaws is an important task, and it's not one to rush. What's the solution? You need to find reliable ways to improve IT productivity.

We recommend using containers as a way to save time. Containers save time by eliminating configuration problems on new servers. Since containers make it easier to standardize configuration, you also get improved security as a result. Let's say you save five hours per week of work effort by adopting containers. What could you do with that extra time to improve FERPA compliance? Check out these possibilities:

- Eliminate inactive user accounts: Getting rid of this type of account is an easy win in security. You just need the time to make it happen.
- Improve employee password training: What happens when you don't provide training? Your employees are going to reuse their personal passwords at work. Regularly delivering cybersecurity best-practices training is one way to stop that unsafe practice.
- Review third-party user activity: Do third parties such as consultants and developers use your platform and software? If so, we recommend spending some time to educate these stakeholders on your cybersecurity practices.
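As one concrete illustration of eliminating inactive user accounts, a few lines of Python can flag accounts whose last login is past an idle threshold. The data shape (a user-to-date mapping) and the 90-day cutoff are assumptions for this sketch; a real identity-management system would pull this from its audit logs.

```python
from datetime import date, timedelta

def inactive_accounts(last_login_by_user, today, max_idle_days=90):
    """Return user IDs whose last login is older than the idle threshold."""
    cutoff = today - timedelta(days=max_idle_days)
    return sorted(user for user, last_login in last_login_by_user.items()
                  if last_login < cutoff)
```

Running a report like this on a schedule turns "we should clean up old accounts" into a routine task rather than a once-a-year scramble.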
Suppose you need help winning support to bring container technology to your organization. We have got you covered there. Check out our article: Improve Developer Productivity Using Containers: The Two-part Strategy.
Your Password Tricks Are Not Protecting You from Today's Hackers

Passwords are not a new thing; they've been around since the early 1960s. They help prevent unauthorized people from accessing files, programs, and other online resources, provided, of course, that you use them according to best practices. Cybersecurity experts remind us to use strong and unique passwords, referring to the same best-practice guides. Despite this, passwords are still one of the top causes of data breaches. You may ask why. The answer is simple: most of these guides lead you in the wrong direction.

What do we know about good passwords so far?

The internet is full of guides on how to create strong passwords that protect you from brute-force attacks. Usually you will be advised to:

- Use 8 or more characters. The more characters, the better.
- Mix uppercase and lowercase letters.
- Add some numbers.
- Include at least one special character, such as . , ! @ # ?
- Mix lookalike characters to protect against password glimpses. For example, the letter O and the number 0, or the letter S and the $ sign.

There is nothing truly wrong with the above list of to-dos. But nothing stops hackers from applying exactly the same patterns. They add various language dictionaries, even urban ones, plus numbers and special characters to their databases. And if your password is something like Password12345!, it will take them roughly 10 minutes (depending on the algorithm they are using) to crack it.

What do we need to avoid in our passwords?

Creating a completely uncrackable password is getting almost impossible. Reduce the chance of your password being compromised by avoiding the following bad practices. Sometimes that's all you need.
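The "roughly 10 minutes" figure depends entirely on the attacker's guess rate. A back-of-envelope worst-case estimate can be sketched as follows; the 10-billion-guesses-per-second default is an illustrative assumption, and real dictionary attacks on predictable passwords like Password12345! finish far sooner than this upper bound.

```python
def brute_force_seconds(length: int, charset_size: int,
                        guesses_per_second: float = 1e10) -> float:
    """Worst-case seconds to exhaust a keyspace at a fixed guess rate.

    Predictable passwords (dictionary words plus common suffixes such
    as '12345!') fall much faster, because attackers try those first.
    """
    return charset_size ** length / guesses_per_second
```

At this assumed rate, an 8-character lowercase-only password (a 26**8 keyspace) is exhausted in well under a minute, while every added character multiplies the work by the charset size.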
- Don't use words you can find in the dictionary, especially if your password is made up of one word.
- Don't reuse passwords listed in various articles as strong-password examples.
- Don't use your name, birth date, or any other personal information.
- Avoid keyboard patterns, such as 12345 or qwerty.
- Don't use common acronyms, such as ASAP, TLTR or PANS.
- Don't use repeating characters, such as 555.
- Don't use passwords that were used in various guides as examples of good passwords.

And above all, don't reuse the same password on other platforms.

What can you do to make hackers work harder?

- Make your password out of a sentence; this way it's easier to remember, too. It could be the first line of your favorite song or a random sentence. For example: Zaragotnicetrousersonsalefor$49.99 or Causeifyoulikedit,thenyoushouldhaveputaringonit (and yes, it's the first line from Single Ladies by Beyoncé).
- Use password generators to generate strong passwords.
- Enable two-factor authentication (2FA) where possible. It adds an extra layer of security that is difficult for hackers to crack.
- Change your passwords periodically, once every three to six months. And we mean changing them, not just adding an extra number or character to the end of the current password.
- Be cautious with your passwords and never leave them exposed in any obvious places. Hackers are not some mysterious species living in the dark; they can be ordinary people around you.
- Be vigilant when using computers in public places, such as libraries or cafes. Consider using a VPN. And never save your passwords on a computer that is used by more than one person.
- Be cautious about where you store your passwords. Don't store them in a plain-text file on your computer. Consider using a secure password manager; they can help you remember, manage, and store your passwords securely.

Using stronger passwords won't keep you secure from all the threats out there, but it's a good first step in the right direction.
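The password-generator advice above can be illustrated with Python's standard `secrets` module, which uses a cryptographically secure random source. The alphabet and the word list are assumed examples; adjust them to a site's rules or to your own dictionary.

```python
import secrets
import string

# Assumed alphabet for generated passwords; tailor to the site's rules.
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*?"

def generate_password(length: int = 16) -> str:
    """Generate a random password with a cryptographically secure RNG."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def generate_passphrase(words, count: int = 5, sep: str = "-") -> str:
    """Join randomly chosen words: the sentence-style advice above,
    but with the words picked at random instead of from a known lyric."""
    return sep.join(secrets.choice(words) for _ in range(count))
```

A passphrase built from a large word list resists dictionary attacks far better than a famous song lyric, since attackers feed popular lyrics into their wordlists too.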
Covid-19 has affected everyone's mental health. Most people have lost the people they were closest to, or have watched others lose someone while being able to do nothing about it. The worst part about Covid-19 is not being able to have a last touch with someone who will not make it out alive. The last memory engraved in your heart is just seeing their face, thinking it's just a fever, that they will be alright in no time and you will continue with your daily life. No one would have thought how wrong you were.

Now imagine the immense mental stress healthcare workers have been under: seeing people die right in front of their eyes, risking their own lives to provide treatment, and staying away from their families so that the infection is not carried to them. Providing help to people in their time of need is their job, but that does not make it easier. They are humans who feel emotions too, yet they have to conceal those emotions and be strong in front of their patients so that the patients do not lose hope.

Even with everything healthcare workers are doing for us, some people are making it more difficult for them: not wearing a mask in public, gathering in large crowds, and not taking their vaccines. Along with this, there are also people who refuse to get tested even if they have Covid-19 symptoms.

All in all, being a healthcare worker at this time is not easy, but no healthcare worker has stepped down or backed out of their responsibilities. That does not mean they are not fighting a constant battle in their minds; they just know how to conceal it. Bottling up all that stress becomes tiring after a while, which is bad for mental as well as physical health. This is where stress management comes in, so that healthcare workers' performance is not affected in any way and their health does not deteriorate.
The following stress management tips could be adopted by healthcare workers:

1. Exercise

Healthcare workers have very little free time, and exercising might seem like extra work on an already filled schedule. But healthcare workers know the benefits exercise has on the body: during exercise, our body releases hormones that lift our mood and give us energy. The workout does not need to be long; take 20-30 minutes out of your schedule and you will surely feel a positive effect. The exercise routine can include anything. If proper facilities for exercising are not available, healthcare workers can:

- Use stairs instead of elevators
- Walk to stores instead of taking the car
- Park the vehicle at some distance from home

2. Meditation and Yoga

If exercising seems like a tedious task, healthcare workers can turn to meditation instead. Meditation requires a quiet place where you can focus on yourself or an object for some time. It helps reduce anxiety and keeps negative thoughts at bay, and it takes no more than ten to fifteen minutes. Along with meditation, various yoga postures can help you relax and refresh your mind. When you feel that you are not in control of what is happening around you, try taking deep breaths; it will get better after some time.

3. Proper Sleep

This stress management tip may feel like an impossibility for healthcare workers, but it is very important. Their schedules are filled to the brim, and many times they have to attend emergency calls that are not on their schedules, which means very few hours of proper sleep. Proper sleep means 7-8 hours a day, but if that is not possible, try taking short naps whenever you can. Short naps have an immediate energizing effect.
The duration of a short nap should not exceed 1 hour, as a longer nap can make you feel more sleepy and exhausted. If your schedule allows, never compromise on sleep and take the proper amount of it. Oversleeping, though, can also affect your body negatively.

4. Proper Diet

Healthcare workers are so engrossed in giving their best to their patients that they sometimes forget their own needs. Having proper meals filled with nutrients and other important minerals is very important. Food is the fuel of our body; without it, no one can function effectively. Lay out a diet chart from the beginning so that you remember all your meals, and follow it with your heart. A cheat day or two is fine, but healthy eating will make your body active and less exhausted.

5. Connect with Others

At a time like this, cutting yourself off from people can have a severe impact on your mental health. Humans are social animals who need others to talk to and to listen to their problems; sometimes just being present is enough. There may be no time to meet people outside your healthcare facility, but try to have as much contact as you can. Text or call your family members and friends, catch up on what they are doing; it will surely take your mind off things for a while. Social media can also help refresh the mind; it is beneficial in these times because people all around the world can post what they are going through and display their real emotions.

6. Make Time for Your Hobbies

The situation is so overwhelming that healthcare workers do not spend enough time on themselves. Hobbies make people happy, as people are most comfortable performing them without any fear of judgment. Take out 15-20 minutes and dedicate it solely to a hobby; painting, reading, dancing, singing, or playing an instrument will surely reduce your stress levels.
7. Express Gratitude

We are very thankful to all the healthcare workers who have worked around the clock to save our lives. In the same spirit, healthcare workers should make it a practice to list the things and people they are grateful for. This technique helps replace negative emotions with positive and hopeful ones. It is even more helpful to express your gratitude to people in person, as it will not only make you feel lighter but can also make the other person's day.

8. Take It Easy

This stress management tip is easier said than done, but if you adopt it in your daily life it will surely reduce your stress levels. Healthcare workers see horrific scenes at their facilities, and many times, even when they give 100 percent, it is not enough to save the person. Healthcare workers have to understand that it is not their fault, because they tried. Effort is what matters most, and healthcare workers in recent times have shown that they have put in more than enough effort. Take it easy, and with time the situation will hopefully be back to normal.

In the End

If the mental stress becomes unbearable, try taking a break for some time. Rejuvenate yourself, talk to your family about your problems, and if you feel the need, talking to a therapist will also prove helpful.

Sunny Chawla is a Managing Director at Alliance International. He specializes in helping clients with international recruiting, staffing, HR services, and careers advice for overseas and international businesses.

Binary Blogger has spent 20 years in the information security space, currently providing security solutions and evangelism to clients. From early web application programming, system administration, and senior management to enterprise consulting, I provide practical security analysis and solutions to help companies and individuals figure out HOW to be secure every day.
Have you ever wondered how information travels on the internet, or how data traffic flows from one point to another? Have you thought about what exactly is needed between these points to allow them to interface with each other? The public internet, or any other data network, performs a very basic (though not so simple) task: transferring data from one end (e.g. your smartphone) to the other (e.g. a website) and back. This task is performed by several network segments: the access network, the backhaul, and the core network.

Backhaul, therefore, is the connection between an access node and the core network. A backhaul network is planned according to a number of factors, including the required transfer rate, known as bandwidth, and the time it takes for data to travel from one point to another, known as latency. Interference, reliability, scalability and speed are traffic requirements that have a great impact on end users.

Looking into backhaul options, network operators typically use a combination of two common infrastructures: fixed-line or wireless. Based on fiber, fixed-line backhaul involves deploying fiber infrastructure or leasing unused (dark) fiber, such as a wavelength or a certain capacity, from a third party that already owns fiber infrastructure. Leasing existing lines significantly increases operating expenses and, in some cases, compels the network operator to depend on a direct competitor. Laying new fiber-optic lines is capital-intensive, and these lines cannot be rapidly deployed.

Wireless backhaul infrastructure includes point-to-point (PtP) and point-to-multipoint (PtMP) microwave (MW) and millimeter wave (mmW) equipment. It is widely used where deploying fiber is not feasible, cost-efficient or possible due to time constraints. Wireless backhaul relies on microwave systems that use radio frequencies as the transmission medium. The radio spectrum in the MW band covers 6-42 GHz and is widely used to transfer multiple Gbps over distances of up to 250 kilometers.
Higher bands, such as the mmW E-band (71-86 GHz), are used to transfer larger amounts of data over shorter distances (up to 20 Gbps over a few kilometers). According to Dell'Oro, wireless systems revenue will expand at a 3% CAGR over the next five years, reaching $2.7B by 2024. Fiber systems revenue is forecast to grow at an average annual rate of 6% through 2022, and then decline for the duration of the forecast period. Fiber-based networks can easily support the rapid growth in bandwidth demands, but they carry high initial deployment costs and take much longer to deploy than wireless. As a result, most network operators rely heavily on wireless backhaul solutions.

Mobile network evolution towards 5G is driven by the business opportunity to grow profitability, expand market share and increase competitiveness. This potential is realized by new service offerings that create new revenue streams and enhance business success. However, several network trends – including the introduction of new services, open network architectures and higher cellular spectrum – pose new challenges for mobile operators. These challenges range from providing up to 100 times more capacity in every network domain, through achieving ultra-low latency, to using mid-bands and mmW to manage network densification in 5G.

These challenges must be addressed under several key constraints, which calls for a highly flexible and cost-efficient solution. Specifically, deploying more sites with higher capacity and reduced latency while supporting multiple services requires backhaul that lets operators eliminate dependency on fiber availability and feasibility when planning, locating and acquiring new cell sites. Wireless (microwave and millimeter wave) solutions bring the flexibility, agility and efficiency required for 5G network deployment while answering all of the aforementioned 5G-specific network challenges.
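The capacity-vs-distance tradeoff between MW and mmW bands described above follows directly from free-space path loss, which grows with frequency. The sketch below is illustrative only: the specific frequencies and hop distance are our own assumptions, not operator data.

```python
import math

def fspl_db(distance_km: float, freq_ghz: float) -> float:
    """Free-space path loss in dB, for distance in km and frequency in GHz."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

# Compare a 5 km hop in a mid microwave band vs. the E-band.
mw_loss = fspl_db(5, 18)    # 18 GHz, within the 6-42 GHz MW range
mmw_loss = fspl_db(5, 80)   # 80 GHz, within the 71-86 GHz E-band

print(f"18 GHz loss: {mw_loss:.1f} dB")
print(f"80 GHz loss: {mmw_loss:.1f} dB")
print(f"E-band penalty: {mmw_loss - mw_loss:.1f} dB")
```

All else being equal, the roughly 13 dB of extra path loss at 80 GHz (plus rain attenuation, which this simple model ignores) is why E-band links are confined to shorter hops despite their much larger channel bandwidths.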
Private data networks – serving enterprises, small businesses, municipalities, educational institutions and other organizations – require reliable, high-speed, low-latency, high-density broadband connectivity. Enterprise networking is becoming more and more crucial to organizational business continuity, and the demand for capacity is enormous. Business applications, multimedia traffic and even basic intra- and inter-organizational communication make massive demands on capacity, which in turn drives significant growth in enterprise networking scales. These capacity and connectivity requirements are addressed by either leased lines or private networks.

Private networks are increasingly becoming the preferred approach for delivering broadband connectivity to enterprise, campus and industrial IoT environments. The need for greater security, enhanced reliability and lower costs is the main driving force behind private networks based on 5G or other broadband technologies. A major part of private network infrastructure is backhaul, also often referred to as transmission.

Critical infrastructure users have very demanding needs, and they want their communications to be available and secure at all times. Critical communications users include public safety agencies, utilities, transportation companies and other professionals. For them and the people they protect and serve, communications can be a matter of life or death. Existing mission-critical networks are based on specialized digital technologies, such as TETRA, Tetrapol and P25, and by their nature are voice-centric and narrowband. However, the critical infrastructure segment is evolving at a rapid pace, and many public safety organizations are looking to 4G and 5G technologies to deliver real-time video, high-resolution imagery, multimedia messaging, situational awareness, unmanned asset control and other broadband capabilities.
The Emergency Services Network (ESN) in the UK, FirstNet in the US and SafeNet in the Republic of Korea are among the nations’ first public safety broadband networks that are based on standard industry technologies. As in the previous cases, a main pillar in building mission-critical networks is backhaul.
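Latency, one of the backhaul planning factors mentioned earlier, also differs between the two media: light travels roughly 1.47x slower inside optical fiber than radio waves travel through air, so a line-of-sight wireless hop actually has lower propagation delay than fiber over the same distance. A back-of-the-envelope sketch (the refractive index and distance are typical assumed values, and equipment latency is ignored):

```python
C_KM_PER_S = 299_792.458   # speed of light in vacuum, km/s
FIBER_INDEX = 1.47         # typical refractive index of optical fiber

def one_way_delay_us(distance_km: float, medium: str) -> float:
    """One-way propagation delay in microseconds over air or fiber."""
    speed = C_KM_PER_S if medium == "air" else C_KM_PER_S / FIBER_INDEX
    return distance_km / speed * 1e6

distance = 100  # km
print(f"Fiber:    {one_way_delay_us(distance, 'fiber'):.0f} us")
print(f"Wireless: {one_way_delay_us(distance, 'air'):.0f} us")
```

Real links add serialization, processing and (for fiber) routing-path overhead, but the point stands: "wireless backhaul" does not inherently mean higher latency.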
June 6, 2017 | Written by: Miles Ludwig and Satya Nitta
Categorized: Cognitive Computing

Since 1969, Sesame Street has been a part of our lives in the U.S. and around the world. We watched it as children, and we're now watching it with our children and even grandchildren. We've seen our lives reflected on the street and through the compassion of the loveable Sesame Street Muppets. For almost 50 years, Sesame has known precisely when to introduce new friends to the neighborhood to ensure it reflects the lives of all children.

[Photo: Students at Georgia's Gwinnett County Public Schools play with a new cognitive vocabulary learning app from IBM and Sesame Workshop.]

This generational impact began on the new technology of the day, the television. Today, we want to continue that tradition, connecting more individually with children, parents, and teachers. To do this, IBM and Sesame Workshop, the nonprofit educational organization behind Sesame Street, are introducing another new friend: Watson.

For about a year now, our two organizations have been collaborating to find the "just-right fit" between IBM Watson's cognitive computing technology and Sesame's whole-child curriculum, in order to help each child become smarter, stronger, and kinder – Sesame's expressed mission. We're thrilled to share that the result of our collaboration is an intelligent play and learning platform. The platform, which is currently under development, will be hosted on the IBM Cloud and enable software developers, researchers, publishers, educational toy companies, and educators to create individualized learning experiences. The need for tailored solutions is critical: it allows educators to present content that is specific to each student, including the educational challenges they face and the content style that resonates with them most.
Georgia’s Gwinnett County Public Schools, one of the nation’s top urban school districts and the largest school district in the state, has recently experienced the benefits of this collaboration by piloting a new adaptive cognitive vocabulary app that’s enabled by this new platform. During the pilot, kindergarten students and their teachers had the opportunity to engage with the app, which is focused on enhancing students’ vocabulary development. Imagined as a teacher’s assistant, the app features learning design methods from Sesame Workshop’s established practices, as well as Sesame Street’s beloved characters in research-based videos and interactive learning games. The Gwinnett pilot focused on teaching words, specifically words that would otherwise be challenging for kindergarteners, such as “arachnid,” “amplify,” “camouflage,” and “applause.” Over the course of the pilot, teachers observed the engagement and learning that we had hoped for: students acquired new vocabulary and incorporated the new words into their everyday language and interactions. For example, during recess, teachers found students referring to spiders on the playground as “arachnids” and noting the camouflage on bugs’ bodies. Acting as a virtual teacher’s assistant, this app makes it easy to monitor children’s vocabulary development through a secured dashboard in real-time. Teachers can also adjust lessons, pacing, and the curriculum, all based on each student’s needs. Gwinnett County Public Schools’ pilot of the cognitive vocabulary app is only the beginning. Looking ahead, IBM and Sesame expect the platform to support educational toys, apps, and games that will feature Watson’s speech- and image-recognition capabilities. These offerings will engage directly with children, delivering context-rich play experiences in areas like literacy, emotional learning, and school preparedness, all adapted to each child’s preferences and learning patterns. 
Best of all, they'll be usable anytime, anywhere, giving parents and caregivers the opportunity to experience cognitive learning technology and understand how it can impact their kids' development at home. Sesame Workshop and IBM share a belief that providing children with the best education possible is imperative, and that cognitive computing solutions like Watson will enhance and personalize learning in ways we have never seen before.
Internet of Things (IoT) cybersecurity is becoming an issue of increasing concern as these devices continue to secure a larger marketplace presence, because IoT solutions are a cost-effective means of integrating connected devices. IoT devices include smart home products, wearable technology, health monitoring devices, alarm systems, and transportation equipment. They can also be found in industrial controls, agriculture, military, and infrastructure applications. IoT devices are functional, inexpensive, and easy to implement, and as a result this market has seen remarkable growth: Fortune Business Insights predicts that IoT technology will grow from 478 billion dollars in 2022 to 2.4 trillion dollars in 2029.

IoT Device Core Baseline Cybersecurity
To address the vulnerabilities of IoT platforms, the National Institute of Standards and Technology (NIST) has released recommendations to help manufacturers of IoT systems improve how securable the devices they make are. The IoT Device Cybersecurity Capability Baseline provides six actionable activities: four that should be conducted pre-market, and two with primarily post-market impact. Because these activities affect the process by which design specifications are created, the document is primarily intended for the development of new devices.

Pre-Market Activities for Baseline IoT Security
IoT product manufacturers should consider the security of a product throughout its life cycle. This includes examining integration into the customer's probable usage and overall system requirements. Because these factors will vary widely from product to product, the following steps should be conducted:

- Identify expected customers and users, and define expected use cases.
- Research customer cybersecurity needs and goals.
- Determine how to address customer needs and goals.
- Plan for adequate support of customer needs and goals.
IoT Considerations After Product Release
It is important to define methods for communicating cybersecurity risks and recommended protocols, including a declaration of risk-related assumptions. Remember that the manufacturer and the consumer share responsibility for implementing and maintaining security.

NIST has provided a list of six recommended security features that manufacturers should build into IoT devices and that consumers should consider when selecting a device:

- Device Identification: The IoT device should have a unique identifier when connecting to networks.
- Device Configuration: An authorized user should be able to change the device's configuration to manage security features.
- Data Protection: The device should protect internally stored data, which can often be accomplished using encryption.
- Logical Access to Interfaces: The device should limit access to its local and network interfaces by authenticating users attempting to access the device.
- Software and Firmware Update: The device's software and firmware should be updatable using secure protocols.
- Cybersecurity Event Logging: IoT devices should log cybersecurity incidents and provide this information to the owner and manufacturer.

Additional Protective Steps
Because IoT devices often do not expose their built-in management tools, deploying them can open access points into networks that contain sensitive data. Additionally, preventing unauthorized access to devices can be a challenge in large industrial settings. Therefore, segregating and isolating these devices on Virtual Local Area Networks (VLANs) should be considered when installing devices in a business setting.

Cybersecurity of Increasing Concern for Businesses
Because many incidents go unreported, real losses to U.S. manufacturing from cybercrime are difficult to assess.
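The six NIST baseline features listed above lend themselves to a simple self-assessment. The sketch below is our own hypothetical illustration; the field names and the example device are not part of the NIST document.

```python
from dataclasses import dataclass, fields

@dataclass
class IoTBaseline:
    """One flag per NIST core-baseline capability."""
    device_identification: bool
    device_configuration: bool
    data_protection: bool
    logical_access_to_interfaces: bool
    software_firmware_update: bool
    cybersecurity_event_logging: bool

def missing_capabilities(device: IoTBaseline) -> list[str]:
    """Return the baseline capabilities the device does not provide."""
    return [f.name for f in fields(device) if not getattr(device, f.name)]

# A hypothetical smart sensor that cannot log security events.
sensor = IoTBaseline(True, True, True, True, True,
                     cybersecurity_event_logging=False)
print(missing_capabilities(sensor))  # ['cybersecurity_event_logging']
```

A procurement team could run such a checklist against vendor datasheets to flag devices that fall short of the baseline before they reach the network.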
Even the most statistically reliable data is derived from a small survey of businesses conducted by the Bureau of Justice Statistics. In a recent report by Douglas Thomas of NIST, estimated losses across all industries range between 0.9% and 4.1% of total U.S. gross domestic product (GDP), or between $167.9 billion and $770.0 billion.

The unfortunate reality is that businesses implementing IoT systems often do not fully comprehend the vulnerabilities these devices present. As with cloud computing, proper implementation is essential. Common issues include insecure interfaces, lack of consistent device updates, and weak password protection. It is therefore essential that those who select, install, and service IoT devices be trained and follow documented best practices to prevent data breaches.

Other actions can be taken to mitigate malicious threats at sites where IoT applications are used. Performing data analytics can often allow an organization to identify threats before they become critical. Another tool for protecting data is Public Key Infrastructure (PKI), which provides effective encryption for IoT networks.

Call for IoT Certification and Labeling
Because consumer-based cybersecurity measures are at best reactive, there has been an effort to initiate a Certification & Voluntary Labelling Scheme to set a standard for manufacturers of IoT devices. A labeling system would give developers of IoT applications an easy way to gain the confidence of consumers. This international certification framework would involve third-party assessments at accredited test facilities and would be internationally recognized. Currently, a pilot program is open for applications for case studies.

CVG Strategy Cybersecurity
There are many applications where the benefits of IoT have yet to be fully explored.
As development of IoT sensors continues, they will contribute to the enhancement of technologies such as Artificial Intelligence (AI) and even smart cities. However, because they rely on internet connectivity, they have inherent vulnerabilities. Many manufacturers implement such devices to control processes and gather critical data, so the risk these devices present should be taken into consideration by an effective Information Security Management System (ISMS). CVG Strategy can help your business implement ISO 27001 to exercise due diligence and comply with contractual and regulatory data security requirements. CVG Strategy is committed to assisting organizations doing business with the Department of Defense in achieving CMMC to keep our defense manufacturing supply chain's information secure. As industry leaders in cybersecurity, ITAR, and risk-based management systems, we have experience with companies of all sizes and understand the importance of innovating flexible approaches to meeting CMMC requirements, establishing effective programs, and achieving certification.
The Cloud Security Alliance releases new research report: A Day Without Safe Cryptography

What would happen to our daily lives if our most commonly used methods of encryption were to suddenly disappear? At last week's RSA Conference in San Francisco, the Cloud Security Alliance (CSA), the world's leading organisation dedicated to defining and raising awareness of best practices for ensuring a secure cloud computing environment, released its newest research report, entitled "A Day Without Safe Cryptography".

"The effect of broken encryption tools and services cannot be overstated. If bad actors can overcome current defensive objectives, it will threaten every aspect of our daily social, business and governmental activities, the operational and economic impact of which may take years to rebuild," said Bruno Huttner, Quantum-Safe Security Working Group co-chair and Director of Quantum Space Programs in the Quantum-Safe division of ID Quantique. "Those that are first to implement quantum-safe security will reap both the rewards of quantum computing, as well as see dramatically reduced costs for insurance and security defences."

What does the paper explore?
The paper examines topics including:
- What is quantum computing?
- How will quantum computing place existing cryptography and encryption at risk?
- What would our digital lives look like if bad actors use quantum computing to break encryption?
- What will quantum-safe encryption look like, and what are the next steps forward?

The work of the Quantum-Safe Security Working Group addresses critical generation and transmission methods to help the industry understand quantum-safe approaches for protecting their networks and their data. The Working Group examines two differing technologies: quantum key distribution and post-quantum cryptography.
Download your copy: A Day Without Safe Cryptography About the Cloud Security Alliance The Cloud Security Alliance (CSA) is a not-for-profit organization with a mission to promote the use of best practices for providing security assurance within Cloud Computing, and to provide education on the uses of Cloud Computing to help secure all other forms of computing. The Cloud Security Alliance is led by a broad coalition of industry practitioners, corporations, associations and other key stakeholders.
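To make the quantum threat to existing cryptography concrete: Grover's algorithm effectively halves the security level of symmetric ciphers and hashes, while Shor's algorithm breaks RSA (and elliptic-curve schemes) outright. The sketch below uses standard textbook estimates; the figures are ours, not taken from the CSA report.

```python
# Classical security level (bits) of common primitives, and a rough view of
# what remains once a large-scale quantum computer exists.
CLASSICAL_BITS = {"AES-128": 128, "AES-256": 256, "SHA-256": 256, "RSA-2048": 112}

def quantum_security_bits(name: str) -> int:
    """Rough post-quantum security estimate for a primitive."""
    if name.startswith("RSA"):
        return 0  # Shor's algorithm factors the modulus in polynomial time
    # Grover's quadratic speedup roughly halves symmetric/hash security
    return CLASSICAL_BITS[name] // 2

for name in CLASSICAL_BITS:
    print(f"{name}: {CLASSICAL_BITS[name]} -> {quantum_security_bits(name)} bits")
```

This is why current guidance generally treats AES-256 as quantum-safe for confidentiality, while RSA and ECC must eventually be replaced by post-quantum algorithms or supplemented by quantum key distribution.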
By: Khushboo Kumari

Social media threats are an important topic to discuss. Social media has created a new platform for communication and interaction and has become a part of our social lives, helping us communicate with others. The arrival of platforms like Facebook, Twitter, and WhatsApp brought a very big change in how we use the internet for personal and professional purposes, and social media now plays an important role in modern-day communication. But it is easy for attackers to get hold of user information if users are not careful about security. This is very common because people are careless about it: they use passwords that are easy to guess, such as the name of a family member or some familiar place, or they download and open files containing malware without confirming the sender's identity. There are ways to prevent these threats, such as using algorithmic tools that can detect them. Looking at the history of social media, we find that attacks have been carried out on a large scale: on Facebook in 2007, on Facebook and MySpace around 2010, on Facebook in 2014, and most recently on Facebook in 2021. So there is always a possibility that attackers can strike and get hold of users' sensitive information.

Threats on online social media

Multimedia content threats
People share data such as photos, videos, interests, and activities; multimedia data is one type of this sharing. Users share photos and other media at high resolution, but advances in multimedia techniques, such as user location inference and face recognition, increase the chance that these items will be illegally exploited.
Multimedia content threats include sharing of ownership, manipulation of multimedia content, steganography, metadata leakage, exposure of multimedia content, sharing of links to multimedia content, static links, lack of transparency of data centers, video conferencing, tagging, and disclosure of unauthorized data [1, 2].

Traditional threats encompass different attack techniques, such as phishing, malware, identity theft, and viruses, used to obtain users' sensitive information. Such information is very valuable to an attacker, who can use it to reach other confidential data: the security number a user registered with on social media, login IDs and passwords, and bank account details. Once attackers obtain this information, they can use it to commit different types of crimes and serious attacks, such as phishing and identity theft. Traditional threats used by attackers include spamming, de-anonymization attacks, clickjacking, inference attacks, profile cloning, Sybil attacks and fake profile creation, phishing, and malware.

Social threats arise because attackers on online social media can exploit social relationship features and interact with different kinds of users, such as minors and corporate employees. An attacker can have several motives, including blackmail, cyber harassment, and spying. The various social threats are cyberbullying and cyber-grooming, cyberstalking, and corporate espionage [3, 4].

Social media security solutions
Threats on social media increase as its use increases. To prevent these threats, algorithms have been proposed that protect user data, along with other solutions for detecting and preventing threats [1, 5-7].

Watermarking superimposes a logo or text on top of a document or image file, providing copyright protection and marketing of digital works. Co-ownership means multiple users co-own data, and every user applies their own privacy policy to that co-owned data.
Steganalysis detects the malicious hidden information that is abundantly present on social media. Digital oblivion prevents attackers from accessing user-sensitive information after the data's expiration time. Storage encryption efficiently stores and retrieves user information without exposing any sensitive information to a third party. Malware detection addresses malware propagation on social media. For Sybil defense and fake profile detection, many tools and techniques detect fake profiles and defend against Sybil attacks; these methods rely either on social graphs, performing a limited number of random walks, or on the concept of random routes. Phishing detection covers anti-phishing methods that find and prevent phishing attacks, often built on machine learning techniques. Spammer detection extracts a feature set that separates legitimate users from spammers and feeds it to different machine learning classifier models to identify inappropriate activities.

- Identity misuse: the attacker impersonates the identity of another user, resulting in identity theft, and can thereby gain access to the victim's personal information.
- Third-party applications: some applications ask the user for permission to access sensitive resources, such as the camera or the photo gallery, and some may contain malware that is downloaded onto the user's personal device without consent.
- Trust in the operators of social networking sites: when users upload or post to social networking sites, that content is available to the sites' operators. If operators want to retain account data, they can keep it even after deletion and use it to extract user information.
- DDoS attacks: attackers perform DDoS attacks to exploit the availability of information on social media [9-11].
- Legal issues: users sometimes post content that may displease an individual, a community, or even a country, and invading someone's privacy or leaking confidential information can carry legal risk.
- Viruses, phishing attacks, and malware: ads shown on social media are a common path for these attacks onto the user's device. Once there, the attacker gains access to the network and can also harvest sensitive information through spam mail.
- Privacy of user data: the information users post or upload can become a privacy issue, because attackers can gain access to it using various methods.
- Phishing attacks
- Identity federation challenges
- Clickjacking attacks

To make social media more secure, we can use cryptography: algorithms such as RSA and AES, end-to-end encryption, digital signatures, and so on. Still, more work is needed in this area to give user data stronger security. In this overview, we have described the different types of attacks that can target a user on social media and the solutions already available to prevent them. Algorithms such as RSA, AES, and digital signatures can help stop attackers from getting hold of users' sensitive information.

References
- Rathore, S., Sharma, P. K., Loia, V., Jeong, Y. S., & Park, J. H. (2017). Social network security: Issues, challenges, threats, and solutions. Information Sciences, 421, 43-69.
- Rathore, S., Sharma, P., Loia, V., et al. Social network security: Issues, challenges, threats, and solutions.
- Barinka, A. (2017). Bad Day for Newsweek, Delta Amid Social-Media Hackings.
- El Asam, A., & Samara, M. (2016). Cyberbullying and the law: A review of psychological and legal challenges. Computers in Human Behavior, 65, 127-141.
- Gupta, S. S., Thakral, A., & Choudhury, T. (2018, June). Social media security analysis of threats and security measures. In 2018 International Conference on Advances in Computing and Communication Engineering (ICACCE) (pp. 115-120). IEEE.
- Zhang, Z., Sun, R., Zhao, et al. (2017). CyVOD: a novel trinity multimedia social network scheme. Multimedia Tools and Applications, 76(18), 18513-18529.
- Gupta, S., et al. (2018). Robust injection point-based framework for modern applications against XSS vulnerabilities in online social networks. International Journal of Information and Computer Security, 10(2-3), 170-200.
- Khan, A., & Chui, K. T. What is Mobile Phishing and How to Detect it?. Insights2Techinfo, pp. 1.
- Chhabra, M., et al. (2013). A novel solution to handle DDOS attack in MANET. Journal of Information Security, 4(3), Article ID 34631, 15 pages. DOI: 10.4236/jis.2013.43019.
- Zargar, S. T., Joshi, J., & Tipper, D. (2013). A survey of defense mechanisms against distributed denial of service (DDoS) flooding attacks. IEEE Communications Surveys & Tutorials, 15(4), 2046-2069.
- Tripathi, S., et al. (2013). Hadoop based defense solution to handle distributed denial of service (DDoS) attacks. Journal of Information Security, 4(3), Article ID 34629, 15 pages.
- Pal, R. S. (2021). Phishing Attack in Modern World. Insights2Techinfo, pp. 1.

Cite this paper as: Khushboo Kumari (2021) Online social media threat and its solution, Insights2Techinfo, pp. 1
Artificial intelligence (AI) has become a cornerstone of recent technological innovation. It exists all around us, automating simple tasks and dramatically improving our lives. But as AI and automation become increasingly capable, how will this alternative labor source affect your future workforce? There have been major industrial innovations in the past that disrupted the workforce; how is AI different from these? In this article, we'll take a look at both optimistic and pessimistic views of the future of our jobs amid increasing AI capabilities.

Technology-driven societal changes, like what we're experiencing with AI and automation, always engender concern and fear, and for good reason. A two-year study from the McKinsey Global Institute suggests that by 2030, intelligent agents and robots could replace as much as 30 percent of the world's current human labor. McKinsey suggests that, in terms of scale, the automation revolution could rival the move away from agricultural labor during the 1900s in the United States and Europe, and more recently, the explosion of the Chinese labor economy. McKinsey reckons that, depending upon various adoption scenarios, automation will displace between 400 and 800 million jobs by 2030, requiring as many as 375 million people to switch job categories entirely.

How could such a shift not cause fear and concern, especially for the world's vulnerable countries and populations? The Brookings Institution suggests that even if automation only reaches the 38 percent mean of most forecasts, some Western democracies are likely to resort to authoritarian policies to stave off civil chaos, much like they did during the Great Depression. Brookings writes, "The United States would look like Syria or Iraq, with armed bands of young men with few employment prospects other than war, violence, or theft." With frightening yet authoritative predictions like those, it's no wonder AI and automation keep many of us up at night.
“Stop Being a Luddite” The Luddites were textile workers who protested against automation, eventually attacking and burning factories because, “they feared that unskilled machine operators were robbing them of their livelihood.” The Luddite movement occurred all the way back in 1811, so concerns about job losses or job displacements due to automation are far from new. When fear or concern is raised about the potential impact of artificial intelligence and automation on our workforce, a typical response is thus to point to the past; the same concerns have been raised time and again and have proved unfounded. In 1961, President Kennedy said, “the major challenge of the sixties is to maintain full employment at a time when automation is replacing men.” In the 1980s, the advent of personal computers spurred “computerphobia,” with many fearing computers would replace them. So what happened? Despite these fears and concerns, every technological shift has ended up creating more jobs than were destroyed. When particular tasks are automated, becoming cheaper and faster, you need more human workers to do the other functions in the process that haven’t been automated. “During the Industrial Revolution more and more tasks in the weaving process were automated, prompting workers to focus on the things machines could not do, such as operating a machine, and then tending multiple machines to keep them running smoothly. This caused output to grow explosively. In America during the 19th century the amount of coarse cloth a single weaver could produce in an hour increased by a factor of 50, and the amount of labour required per yard of cloth fell by 98 percent. This made cloth cheaper and increased demand for it, which in turn created more jobs for weavers: their numbers quadrupled between 1830 and 1900. 
In other words, technology gradually changed the nature of the weaver’s job, and the skills required to do it, rather than replacing it altogether.” — The Economist, Automation and Anxiety Impact of Artificial Intelligence — A Bright Future? Looking back on history, it seems reasonable to conclude that fears and concerns regarding AI and automation are understandable but ultimately unwarranted. Technological change may eliminate specific jobs, but it has always created more in the process. Beyond net job creation, there are other reasons to be optimistic about the impact of artificial intelligence and automation. “Simply put, jobs that robots can replace are not good jobs in the first place. As humans, we climb up the rungs of drudgery — physically tasking or mind-numbing jobs — to jobs that use what got us to the top of the food chain, our brains.” — The Wall Street Journal, The Robots Are Coming. Welcome Them. By eliminating the tedium, AI and automation can free us to pursue careers that give us a greater sense of meaning and well-being: careers that challenge us, instill a sense of progress, provide us with autonomy, and make us feel like we belong, all research-backed attributes of a satisfying job. At a higher level, AI and automation will also help to eliminate disease and world poverty. Already, AI is driving great advances in medicine and healthcare with better disease prevention, higher-accuracy diagnosis, and more effective treatment and cures. When it comes to eliminating world poverty, one of the biggest barriers is identifying where help is needed most. By applying AI analysis to data from satellite images, this barrier can be surmounted and aid focused most effectively. Impact of Artificial Intelligence — A Dark Future I am all for optimism. But as much as I’d like to believe all of the above, this bright outlook on the future relies on seemingly shaky premises. Namely: - The past is an accurate predictor of the future. 
- We can weather the painful transition. - There are some jobs that only humans can do. The Past Isn’t an Accurate Predictor of the Future As explored earlier, a common response to fears and concerns over the impact of artificial intelligence and automation is to point to the past. However, this approach only works if the future behaves similarly. There are many things that are different now than in the past, and these factors give us good reason to believe that the future will play out differently. In the past, technological disruption of one industry didn’t necessarily mean the disruption of another. Let’s take car manufacturing as an example: a robot in automobile manufacturing can drive big gains in productivity and efficiency, but that same robot would be useless trying to manufacture anything other than a car. The underlying technology of the robot might be adapted, but at best that still only addresses manufacturing. AI is different because it can be applied to virtually any industry. When you develop AI that can understand language, recognize patterns, and problem solve, disruption isn’t contained. Imagine creating an AI that can diagnose disease and handle medications, address lawsuits, and write articles like this one. No need to imagine: AI is already doing those exact things. Another important distinction between now and the past is the speed of technological progress. Technological progress doesn’t advance linearly; it advances exponentially. Consider Moore’s Law: the number of transistors on an integrated circuit doubles roughly every two years. In the words of University of Colorado physics professor Albert Allen Bartlett, “The greatest shortcoming of the human race is our inability to understand the exponential function.” We drastically underestimate what happens when a value keeps doubling. What do you get when technological progress is accelerating and AI can do jobs across a range of industries? An accelerating pace of job destruction. 
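Bartlett’s point about the exponential function is easy to make concrete. The sketch below is purely illustrative (the numbers are not a forecast of anything): it just counts how few doublings it takes for a quantity to grow a millionfold, and what that means at the Moore’s-Law-style rate of one doubling every two years.

```python
# Illustrative only: how quickly a quantity grows when it doubles
# at a fixed interval, as in the popular statement of Moore's Law.

def doublings(start, factor_goal):
    """Number of doublings needed for `start` to grow by `factor_goal`."""
    count = 0
    value = start
    while value < start * factor_goal:
        value *= 2
        count += 1
    return count

# Growing a millionfold takes only 20 doublings...
steps = doublings(1, 1_000_000)
# ...which, at one doubling every two years, is just 40 years.
years = steps * 2

print(steps, years)  # 20 40
```

Our linear intuition expects a millionfold increase to take an eternity; repeated doubling delivers it within a working lifetime, which is exactly why exponential technological progress is so easy to underestimate.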
“There’s no economic law that says ‘You will always create enough jobs or the balance will always be even’; it’s possible for a technology to dramatically favour one group and to hurt another group, and the net of that might be that you have fewer jobs.” —Erik Brynjolfsson, Director of the MIT Initiative on the Digital Economy In the past, yes, more jobs were created than were destroyed by technology. Workers were able to reskill and move laterally into other industries. But the past isn’t always an accurate predictor of the future. We can’t complacently sit back and think that everything is going to be OK. This brings us to another critical issue … The Transition Will Be Extremely Painful Let’s pretend for a second that the past actually will be a good predictor of the future: jobs will be eliminated, but more jobs will be created to replace them. This brings up an absolutely critical question: what kinds of jobs are being created, and what kinds of jobs are being destroyed? “Low- and high-skilled jobs have so far been less vulnerable to automation. The low-skilled jobs categories that are considered to have the best prospects over the next decade — including food service, janitorial work, gardening, home health, childcare, and security — are generally physical jobs, and require face-to-face interaction. At some point robots will be able to fulfill these roles, but there’s little incentive to roboticize these tasks at the moment, as there’s a large supply of humans who are willing to do them for low wages.” — Slate, Will robots steal your job? Blue-collar and white-collar jobs will be eliminated — basically, anything that requires middle skills (meaning that it requires some training, but not much). This leaves the low-skill jobs described above, and high-skill jobs that require high levels of training and education. There will assuredly be an increasing number of jobs related to programming, robotics, engineering, etc. 
After all, these skills will be needed to improve and maintain the AI and automation being used around us. But will the people who lost their middle-skill jobs be able to move into these high-skill roles instead? Certainly not without significant training and education. What about moving into low-skill jobs? Well, the number of these jobs is unlikely to increase, particularly if the middle class loses jobs and stops spending money on food service, gardening, home health, and the like. The transition could be very painful. It’s no secret that rising unemployment has a negative impact on society: lower volunteerism, higher crime, and more drug abuse are all correlated with it. A period of high unemployment, in which tens of millions of people are incapable of getting a job because they simply don’t have the necessary skills, will be our reality if we don’t adequately prepare. So how do we prepare? At a minimum, by overhauling our entire education system and providing means for people to re-skill. To transition from 90 percent of the American population farming to just 2 percent during the first industrial revolution, it took the mass introduction of primary education to equip people with the necessary skills to work. The problem is that we’re still using an education system that is geared for the industrial age. The three Rs (reading, writing, arithmetic) were once the important skills to learn to succeed in the workforce. Now, those are the skills quickly being overtaken by AI. For a fascinating look at our current education system and its faults, check out this video from Sir Ken Robinson. In addition to transforming our whole education system, we should also accept that learning doesn’t end with formal schooling. The exponential acceleration of digital transformation means that learning must be a lifelong pursuit, constantly re-skilling to meet an ever-changing world. 
Making huge changes to our education system, providing means for people to re-skill, and encouraging lifelong learning can help mitigate the pain of the transition, but is that enough? Are We F*cked? Will All Jobs Be Eliminated? When I originally wrote this article a couple of years ago, I believed firmly that 99 percent of all jobs would be eliminated. Now, I’m not so sure. Here was my argument at the time: The claim that 99 percent of all jobs will be eliminated may seem bold, and yet it’s all but certain. All you need are two premises: - We will continue making progress in building more intelligent machines. - Human intelligence arises from physical processes. The first premise shouldn’t be at all controversial. The only reason to think that we would permanently stop progress, of any kind, is some extinction-level event that wipes out humanity, in which case this debate is irrelevant. Excluding such a disaster, technological progress will continue on an exponential curve. And it doesn’t matter how fast that progress is; all that matters is that it will continue. The incentives for people, companies, and governments are too great to think otherwise. The second premise will be controversial, but notice that I said human intelligence. I didn’t say “consciousness” or “what it means to be human”. That human intelligence arises from physical processes seems easy to demonstrate: if we affect the physical processes of the brain, we can observe clear changes in intelligence. Though a gloomy example, it’s clear that poking holes in a person’s brain results in changes to their intelligence. A well-placed poke in someone’s Broca’s area and voilà — that person can’t process speech. With these two premises in hand, we can conclude the following: we will build machines that have human-level intelligence and higher. It’s inevitable. We already know that machines are better than humans at physical tasks: they can move faster and more precisely, and lift greater loads. 
When these machines are also as intelligent as us, there will be almost nothing they can’t do — or can’t learn to do quickly. Therefore, 99 percent of jobs will eventually be eliminated. But that doesn’t mean we’ll be redundant. We’ll still need leaders (unless we give ourselves over to robot overlords), and our arts, music, etc., may remain solely human pursuits too. As for just about everything else? Machines will do it — and do it better. “But who’s going to maintain the machines?” The machines. “But who’s going to improve the machines?” The machines. Assuming they could eventually learn 99 percent of what we do, surely they’ll be capable of maintaining and improving themselves more precisely and efficiently than we ever could. The above argument is sound, but I now believe the conclusion that 99 percent of all jobs will be eliminated was over-focused on our current conception of a “job”. As I pointed out above, there’s no guarantee that the future will play out like the past. After continuing to reflect and learn over the past few years, I now think there’s good reason to believe that while 99 percent of all current jobs might be eliminated, there will still be plenty for humans to do (which is really what we care about, isn’t it?). “The one thing that humans can do that robots can’t (at least for a long while) is to decide what it is that humans want to do. This is not a trivial semantic trick; our desires are inspired by our previous inventions, making this a circular question.” — The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future, by Kevin Kelly Perhaps another way of looking at the above quote is this: a few years ago I read the book Emotional Intelligence, and was shocked to discover just how essential emotions are to decision making. Not just important, essential. People who had experienced brain damage to the emotional centers of their brains were absolutely incapable of making even the smallest decisions. 
This is because, when faced with a number of choices, they could think of logical reasons for doing or not doing any of them, but had no emotional push/pull to choose. So while AI and automation may eliminate the need for humans to do any of the doing, we will still need humans to determine what to do. And because everything that we do and everything that we build sparks new desires and shows us new possibilities, this “job” will never be eliminated. If you had predicted in the early 19th century that almost all jobs would be eliminated, and you defined jobs as agricultural work, you would have been right. In the same way, I believe that what we think of as jobs today will almost certainly be eliminated too. But this does not mean that there will be no jobs at all; the “job” will instead shift to determining what we want to do, and then working with our AI and machines to make our desires a reality. Is this overly optimistic? I don’t think so. Either way, there’s no question that the impact of artificial intelligence will be great, and it’s critical that we invest in the education and infrastructure needed to support people as many current jobs are eliminated and we transition to this new future. Originally published on April 1, 2017. Updated on December 23, 2021.
Top Big Data Security tips A massive amount of data is being collected every day. Every business that has ever existed online has collected customer data. This data streams from a range of smart devices interconnected as the IoT (Internet of Things). Computer capacities are growing worldwide, so the amount of data is also increasing exponentially; and as the amount of data increases, so do security concerns. A vast volume of available online data is sensitive and up for grabs by whoever knows the nooks and corners of the web, which is worrisome for most people. So, what is Big Data? Big data is made up of complex, large data sets that need to be analyzed and characterized for the information to benefit businesses or individuals. There are a few factors inherent to big data that can further simplify the concept. 1. Big data is comprised of information that grows exponentially 2. Conventional data processing procedures cannot be used to analyze big data because of its sheer volume 3. Data mining, data analysis, data storage, data sharing, and data visualization are all parts of the big data analysis procedure 4. Big data is a comprehensive term including the data itself and the frameworks, tools, and techniques used to analyze it Types of Big Data Although big data is an all-inclusive term, there are several types of big data. Let’s have a look: Structured data: When the data can be processed, stored, and eventually retrieved in a secure, pre-ordained fashion, the information is called structured data. This data can be easily accessed from a database using a simple search engine algorithm. Unstructured data: Data without a defined form or structure is described as unstructured data. It is difficult and time-consuming to process and analyze this data. An example of unstructured data is email. Semi-structured data: Data containing both structured and unstructured formats is called semi-structured data. Although this data does not fall under any database, it might provide vital information that segregates individual elements within the data. 
The security challenges of big data Big data is doing great things, but building and tearing down businesses every second is no game. Are you prepared to take a hit when big data collapses around you or becomes a death trap for your business? No, right? And you shouldn’t have to be; instead, you should prepare to fight off any challenge arising from big data, especially the big security challenges. These are some of the challenges: · Fake data generation Cyber criminals can fabricate data and pour it into your data pool to misguide you into dismissing valuable trends while embracing non-existent ones. · Untrustworthy mappers After collection, big data goes through parallel processing. Data might be split into several bulks first, after which a mapper processes them and allocates them to specific storage. If cyber criminals get hold of a mapper’s code, they can use that code to make mappers create inadequate lists of value pairs, and outsiders can gain access to sensitive information. · Cryptographic protection and its problems Although the promise of end-to-end encryption to protect confidential information is common these days, the actual procedure is often ignored or kept on the back foot. A lot of data is stored in the cloud without proper encrypted protection. · Information mining Perimeter-based security protects systems from outside entry; but what about inside the system? What IT specialists do inside the system is not only unprotected, it is also a mystery in most cases. · Security audits It is advised that businesses hold security audits regularly. However, most companies do not follow this advice, so unnoticed security gaps grow every day. Top Big Data Security Tips 1. Security first You cannot wait for a data breach to assess your security measures or to secure your data. 
Before starting a big data project, your IT security team and everybody else involved should have the scary but immensely important data security discussion. 2. Accountability centralization There is a possibility that your data currently resides in diverse organizational silos and data sets. It would be best if you centralized accountability for data security, which will ensure consistent policy enforcement and access control. 3. Data encryption Data flow must be protected at entry points and while it is in motion inside the system as well. So, you can add transparent data encryption at the file layer, and SSL encryption to the information as it moves between nodes and applications. 4. Separate encryption keys and data You cannot store your encryption key and encrypted data together on the same server; this would be like locking your door and leaving the key hanging by the lock. You need a key management system to keep your encryption keys separate and secure. 5. Authentication gateways should be protected Most data breaches are the result of weak authentication. A hacker can easily access sensitive data by exploiting vulnerabilities in the authentication function. If the implementation of the user authentication process is flawed, the chances of a breach increase tenfold, which is why you must ensure that there are no broken authentication tokens to be exploited by unauthorized users. 6. Implement the principle of least privilege Ideally, tiered access control and the principle of least privilege, or PoLP, should be maintained consistently in a business. This limits each user’s access to the minimum level that still allows normal functioning. In short, your users should only get the specific privileges that enable them to complete their responsibilities without hiccups. In the end Data breaches are increasing every day because of increased automated data collection. Still, your company does not need to fear big data and the possibility of a data breach. 
A big data solution is an answer to all your data breach concerns. Employee training, encryption techniques, and a big data strategy created at the inception of a big data project can genuinely help your business. You can also use real information to analyze the current big data situation in your company and create unique solutions.
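As a rough illustration of tip 6 above, the principle of least privilege boils down to checking every action against the privileges a role has explicitly been granted. The sketch below is a deliberately minimal model (the role names and privilege sets are invented for the example, not taken from any product):

```python
# Minimal sketch of the principle of least privilege (PoLP):
# each role holds only the privileges needed for its duties,
# and every action is checked against that explicit set.

ROLE_PRIVILEGES = {
    "analyst": {"read"},                  # can query data only
    "engineer": {"read", "write"},        # can also modify pipelines
    "admin": {"read", "write", "grant"},  # can also manage access
}

def is_allowed(role: str, action: str) -> bool:
    """Allow an action only if the role explicitly includes it.
    Unknown roles get no privileges at all (deny by default)."""
    return action in ROLE_PRIVILEGES.get(role, set())

print(is_allowed("analyst", "read"))   # True
print(is_allowed("analyst", "write"))  # False: not in the analyst's set
```

The key design choice is the deny-by-default posture: a privilege must be granted explicitly, so a misconfigured or unknown role can do nothing rather than everything.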
How can businesses absorb disruptive impact and begin to integrate these new business models in existing and new streams of business opportunity? NFTs have taken the world by storm. Rejuvenating the blockchain movement started by Bitcoin and followed by the smart contract platform Ethereum, NFTs seem to be a natural progression in the explosion of asset tokenization: the tokenization of all kinds of things we value. NFT stands for non-fungible token, and to understand NFTs we need to understand the concept of fungibility! What is an NFT? An asset is considered fungible when it is mutually interchangeable with another identical item; for instance, money, which can be exchanged in other forms and denominations for the same value. In the crypto-sphere, most native assets such as Bitcoin and Ethereum serve the same function: besides their core purpose of providing utility, they are also fungible tokens. By this definition, non-fungible tokens, or NFTs, invert fungibility and provide a unique perspective on other things that are as valuable to us as money yet cannot be measured in fungible units. See how blockchain adds value to trade and finance NFTs solve the issue of uniqueness for one-of-a-kind asset types. Examples include not only popular collectibles such as digital art, music, and other digital collectibles — which we value and treasure — but also practical objects such as digital IDs, healthcare records, credit histories and others, which are unique, valuable to us, and serve a purpose in digital networks powered by blockchain. So, while blockchain is the underlying technology that provides a transaction system by maintaining a digital ledger and enforcing rules of engagement via smart contracts, it also extends tenets such as immutability, transaction records and transparency to facilitate verification and asset movement with an embedded trust system. What can NFTs be used for? 
NFTs are also a type of token and asset class online — other tokenized assets can include stable coins, security tokens, tokenized securities and others. NFTs are unique in that they can be tradable (art and collectibles) or hypothecated (healthcare records or digital history), and this is where things get interesting. As NFTs are also things of value, when they make their way into marketplaces they need a fungible token, like a utility coin or stable coin, for derivation of value in measurable terms. These marketplaces will usually need integration with either banking rails or existing crypto rails such as digital exchanges or a DeFi ecosystem to facilitate trade and transfer. I address here the complex issue of the fleeting and rapid rise of NFTs, following a similar meteoric rise of decentralized finance (DeFi): amazing innovations with the immense promise of democratization, new business models, and global marketplaces with global access, all fueled by the basic premise of decentralization and the fundamental constructs of tokenization and wallets. NFTs may be characterized as unique, one-of-a-kind cryptographic tokens with some intrinsic value to the holder (an ID, a health record) or to a market (art, a collectible). Where are NFTs going next? NFTs have an intrinsic value and are essentially tokens that serve as simple proof-validations of the existence, authenticity, and ownership of digital assets. Fungible tokens are valued on various bases, such as the sum total of economic activity in the network (cryptocurrency), utility (smart contracts and transaction network processing), assigned values (as in stable coins and security tokens), and so on. NFTs represent both transferable entities and non-transferable tokens that we value. We begin to realize the promise of blockchain, which envisioned the digitization, tokenization and democratization of finance by enabling networks that are capable of moving value with reduced friction and intermediation. 
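The fungible/non-fungible distinction introduced earlier can be made concrete with a toy ledger. This is a deliberately simplified sketch, not how any real blockchain stores state: fungible holdings are interchangeable amounts in a balance, while each NFT is a unique identifier mapped to exactly one owner.

```python
# Toy ledger contrasting fungible balances with non-fungible tokens.

# Fungible: only the amount matters; any unit is interchangeable.
balances = {"alice": 10, "bob": 5}

def transfer(sender, receiver, amount):
    """Move an interchangeable quantity between accounts."""
    assert balances[sender] >= amount, "insufficient balance"
    balances[sender] -= amount
    balances[receiver] += amount

# Non-fungible: each token ID is unique and owned by exactly one party.
nft_owners = {"artwork-001": "alice", "health-record-42": "bob"}

def transfer_nft(token_id, new_owner):
    """Move a unique token; there is no notion of "5 units" of it."""
    nft_owners[token_id] = new_owner

transfer("alice", "bob", 3)
transfer_nft("artwork-001", "bob")
print(balances)                   # {'alice': 7, 'bob': 8}
print(nft_owners["artwork-001"])  # bob
```

The asymmetry is the whole point: a fungible transfer is arithmetic on amounts, while an NFT transfer reassigns ownership of one indivisible, identified thing.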
The question to ponder now is: how do businesses, individuals and enterprises understand the transformative and disruptive impact and begin to integrate these new business models into existing and new streams of business opportunity?
In the world of cryptography, data is only safe as long as the keys used to protect that data are kept secure. While, on one hand, this means that keys must be protected against unauthorized access, it also means that keys must be created in a way that makes them difficult for an attacker to guess. To produce cryptographically strong keys, cryptographic modules use random number generators, or RNGs, which in turn rely on random data as input. This random input data is called entropy, and is the foundation of a secure cryptographic module. I had the opportunity to discuss entropy with the great group over at Computer Sciences Corporation (CSC). The panelists included Lachlan Turner, Jason Cunningham, and Maureen Barry. In the first of a two-part series, our panel answers some questions to offer insight into what you need to know about entropy and how it could affect your Common Criteria or FIPS evaluation. What does entropy mean? The term entropy loosely translates to “the degree of disorder or randomness in a system.” This is how we describe entropy in computing, although the term is also used in thermodynamics. For our purposes, however, it is the random data collected from electronic sources for use in computing applications. Entropy is a measure of randomness often expressed and measured in bits. The more entropy you have feeding into a given value, the more random that value will be. Why is entropy receiving so much attention when programs are already testing cryptographic algorithms? The self-tests implemented to test cryptographic algorithms are considered to be a health check, which ensures that they can mathematically and procedurally operate as they were intended to. This concept is different from that of entropy testing. Most modern-day cryptographic implementations rely on the use of sufficiently random data in order to ensure a high degree of secrecy when establishing shared secrets, or creating the data required to generate cryptographic keys. 
The random number generators that typically rely on this input to be random can only produce sufficiently random numbers if the input they require also contains a high degree of randomness. The entropy serves as that high degree of randomness. When it comes to entropy, an old saying applies. “You will get out of it, what you put into it.” Since the quality and quantity of entropy is the foundation of cryptography, it’s vitally important that entropy be considered as part of the testing process. What challenges do vendors face when trying to measure their product’s entropy? The information coming from NIAP, CMVP, and the other validation program bodies is that vendors have to understand what sources contribute to their product’s overall entropy and how many bits of entropy are contributed by each source. That can be quite difficult. Quite often the crypto modules that are used in products are created by third parties, and vendors don’t really know what happens “under the hood.” Another challenge comes from the need to measure the entropy at the appropriate point in the overall process. Many systems will take a value produced from entropy sources and “condition” it before using it as input to the random number generator. However, testers want to see entropy measurements performed on the pure, pre-conditioning value, but these values cannot always be captured. What are the requirements for entropy in Common Criteria and FIPS evaluations? Thus far, the entropy requirements for CC and FIPS have only been loosely defined through draft publications. That is not to say, however, that there isn’t a framework in place. The Computer Security Division at NIST has completed a publication that encompasses the testing of entropy. It is anticipated that the concepts in the publication will soon form the basis of all future entropy testing for FIPS 140-2 (and possibly Common Criteria). 
From a Common Criteria perspective, there is an NIAP-approved Protection Profile (PP) and within that PP is an annex with an entropy profile. From a practical standpoint, a vendor has to describe the entropy; that is, the vendor needs to document what entropy source is actually producing random data. Examples could be ring oscillators, keyboard key presses, noisy diodes, mouse movement, or disk input/output operations. The requirements are to describe what the sources are and then describe what is done with those random event values (i.e., what is done to condition them), and what is the interaction between the entropy source and the crypto module. There are also requirements around health testing. In the end, vendors are required to provide a justification (supported either by test data or mathematical models) that demonstrates how many bits of entropy are being generated. That justification must include a good argument for why it’s sufficient. This justification area is currently evolving and is a bit grey. For FIPS, things are very similar to Common Criteria. The CMVP released guidance that says any type of analysis that provides information regarding sufficiency of a crypto module’s entropy will be considered — they understand that there is no perfect way to quantify it. Statistical analyses can be conducted or source code can be analyzed to mathematically support a vendor’s claim that their entropy is sufficient for generating random numbers. NIST doesn’t really come right out and call it entropy. This process is part and parcel of the strength of the key generation method. They want to know everything that happens before the data goes to an approved RNG. There is quite a bit of confusion right now about entropy — hopefully, we can clear a bit of it up. 
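To give a feel for what "how many bits of entropy are being generated" means in practice, the sketch below computes a crude min-entropy estimate from the frequency of the most common output value. This is only a loose illustration of the most-common-value idea (a real evaluation would use the full statistical test suite, far larger samples, and raw pre-conditioning data, as the panel notes); the sample data here is fabricated for the example.

```python
import math
from collections import Counter

def min_entropy_per_sample(samples):
    """Estimate min-entropy in bits per sample as -log2(p_max),
    where p_max is the observed probability of the most common value."""
    counts = Counter(samples)
    p_max = max(counts.values()) / len(samples)
    return -math.log2(p_max)

# A perfectly uniform 8-bit source approaches 8 bits per sample;
# a heavily biased source yields far less.
uniform = list(range(256)) * 4           # every byte value equally often
biased = [0] * 700 + list(range(100))    # value 0 dominates the output

print(round(min_entropy_per_sample(uniform), 2))  # 8.0
print(round(min_entropy_per_sample(biased), 2))   # 0.19
```

The biased source illustrates why quantity is not quality: 800 raw samples that almost always read zero contribute only a fraction of a bit of entropy each, which is exactly the kind of shortfall the justification requirements are meant to expose.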
In our next post, we’ll dive a bit further into entropy testing, touching on what vendors need to do to meet the entropy requirements, what entropy testing tools are available, and how much time entropy testing is adding to evaluations. Panel members from Computer Sciences Corporation (CSC) are: Lachlan Turner is the Technical Director of CSC’s Security Testing and Certification Labs with over 10 years of experience in cyber security specializing in Common Criteria. Lachlan served as a member of the Common Criteria Interpretations Management Board (CCIMB) and has held roles as certifier, evaluator and consultant across multiple schemes – Australia/New Zealand, Canada, USA, Malaysia and Italy. Lachlan provides technical leadership to CSC’s four accredited CC labs and is passionate about helping vendors through the evaluation process to achieve their business goals and gain maximum value from their security assurance investments. Jason Cunningham leads the FIPS 140-2 program at CSC and has over 10 years of experience in IT security. Throughout his career, Jason has been involved in numerous security related projects covering a wide range of technologies. Maureen Barry is the Deputy Director for CSC’s Security Testing and Certification Labs (STCL) and primarily manages the Canadian laboratory. She is also a Global Product Manager responsible for developing, managing, and executing the Cybersecurity Offering program for STCL across four countries: Canada, USA, Australia and Germany. She has almost 10 years of experience in Common Criteria in addition to over 10 years of experience in IT. Corsec Lead Engineer Darryl Johnson was also a member of the panel discussing entropy testing and contributed to the writing of this post. For help with your FIPS 140-2 or Common Criteria evaluation or for additional questions about entropy testing and how it might affect your next certification, contact us.
Nowadays cybersecurity personnel are well aware of the importance of security zones. Compliance processes based around the NIS directive and the main cybersecurity standards, such as IEC 62443 and the upcoming TS 50701, are constantly there to remind them that it is indispensable to segregate zones according to different security levels. If it’s mandatory, you might think that everyone should know how to do it. Unfortunately, many railway infrastructure managers and operators have been struggling to implement concrete zoning measures as prescribed by these standards. Why so? Until recently, technology couldn’t separate asset and network dataflows without affecting overall network performance. Let’s now look at how railways can become cyber compliant, especially in safety-critical environments.

What’s a security zone? In railways, when we talk about a System under Consideration (SuC), we either refer to a complete network comprising many sub-systems with different security levels, or to any one of these sub-systems with its own assets. To account for this diversity of network complexity, the standards have introduced the concepts of zones and conduits. A Zone is defined as the logical or physical grouping of railway assets (i.e., physical assets, applications, or information) sharing identical security requirements. Each zone has a unique set of characteristics and security requirements with various attributes (e.g., security policies and levels; asset inventory; access requirements and control; threats and vulnerabilities, etc.). A Conduit can be considered a specific type of zone which groups the communication devices (e.g., switches, routers, firewalls, communications gateways, etc.) enabling the dataflow between zones. On top of a zone’s attributes, a conduit also possesses a set of characteristics and security requirements linked to the interconnected zones and communications protocols.
The IEC 62443 standard defines security levels as a qualitative method for comparing and managing security across different zones of an organization. Through a risk assessment, professional service experts will assess three types of security level. First, they will identify the security level at which the system must operate, called the Target Security Level. Once a system design is established or already implemented, these experts will measure and rank the Achieved Security Level. Finally, professionals will determine whether or not an asset or sub-system is capable of reaching the Target Security Level natively, when configured correctly, without any additional countermeasure. The ability of an asset or sub-system to provide that protection is called the Capability Security Level. Within their risk and vulnerability assessment, service professionals will consider these three level types and assign one of five Security Levels (0 to 4) to any given zone and conduit. In other words, every asset in the same zone, and every conduit dataflow, receives the same Security Level, from 0 to 4, established on the basis of similar cybersecurity requirements across all three security types.

The complexity of zone and conduit partitioning. Because of their complexity, railway and public transport operations must usually go through yearly detailed risk and vulnerability assessments. Each time, the process requires assets to be checked against the three security types and, if needed, reassigned coherently to the right security zone or conduit. IEC 62443-3-2 proposes a general set of partitioning guidelines, which the future standard TS 50701 has adapted to railways.
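As an illustration of these definitions, the zone-assignment logic can be sketched as follows. The data model and asset names are hypothetical, not taken from the standard:

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    target_sl: int      # Target Security Level (0-4) from the risk assessment
    capability_sl: int  # Capability Security Level (0-4) the asset reaches natively

def group_into_zones(assets):
    """Group assets that share the same Target Security Level into one zone."""
    zones = {}
    for a in assets:
        zones.setdefault(a.target_sl, []).append(a.name)
    return zones

def needs_countermeasures(asset):
    """If capability falls short of target, extra countermeasures are required."""
    return asset.capability_sl < asset.target_sl
```

An interlocking controller with a target of SL 3 but a native capability of SL 2 would be flagged for additional countermeasures, while assets sharing target SL 3 land in the same zone.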
The standards recommend several zoning criteria, along with other pertinent criteria for segmentation. All this means that to be cyber compliant, a railway or public transport operator must rely on technology enabling easy enforcement of rules that prohibit assets from communicating with each other unless they share the same requirements according to most of these criteria.

Why will OT and IT solutions not solve it? Firewalls have been around for a long time, doing just that: authorizing outgoing and incoming packets to flow between assets by comparing them with very limited pre-established criteria (e.g., IP addresses, packet type, port number, etc.). Even Next Generation Firewall (NGFW) technologies cannot meet the more sophisticated TS 50701 partitioning criteria, because they only provide security analysis for the TCP/IP-based protocols they support. TCP handshake checks and packet inspection on common protocols are great features in IT environments, but no NGFW technology can read and interpret specific OT protocols or railway applications, which is the only way to provide zoning compliance in railways. Furthermore, in safety-critical systems, low latency is an absolute prerequisite. The extra step (hop) involved when security gateways check, block, or drop packets creates significant slowdown and risk to operations and therefore prohibits the use of NGFWs within these networks. Even OT solutions with proper support for IoT protocols mainly use shallow packet inspection to extract protocol parameters without their context in the overall rail signaling and rolling stock safety applications. Therefore, they won't be able to map the monitored components into their desired zones and conduits.
Compliance made simple with rail-focused cybersecurity solutions. On the other hand, some rail-focused monitoring systems are made for railway OT and safety-critical networks. These purpose-built cybersecurity solutions use machine learning algorithms and deep packet inspection technology to address TS 50701 requirements. Such continuous monitoring systems can automatically map all these assets into zones and conduits, based on best practices and standards. In a matter of minutes, all policies and blocking rules are established, simplifying a process that is complex and lengthy with firewall technology. Obviously, manual fine-tuning is possible, enabling, for instance, zoning per departmental responsibility. In fact, with some continuous monitoring systems, updating zone partitioning becomes child's play. As the system is non-intrusive, it has no impact on latency, so railway CISOs have an evolving solution that will never affect the performance of operations. Railway CISOs can stay in their comfort zone, knowing that technology exists to support them in meeting 24/7 cybersecurity compliance.
In November 2017, the Strava fitness tracking app published a visualization map to show where users exercise across the world. However, that map also revealed location information about military bases and spy posts around the world, military analysts report. The company lets users record running, walking, or biking activity on their smartphones or wearables, and upload it to the internet. Military analysts noticed that the map, which was constructed using more than three trillion individual GPS data points, has enough detail to give away potentially sensitive data on where soldiers on active duty are located. Users in locations like Afghanistan and Syria seem to be exclusively military personnel, they say. "If soldiers use the app like normal people do, by turning it on and tracking when they go to do exercise, it could be especially dangerous," says Nathan Ruser, analyst with the Institute for United Conflict Analysts. On Strava's map, the Helmand province of Afghanistan shows the layout of operating bases via exercise routes; these bases are absent from satellite views on both Google Maps and Apple Maps. These findings arrive the day after Data Privacy Day, which was created to encourage both individuals and businesses to respect user privacy and protect data. Strava's decision to publish sensitive location data is part of a growing discussion around how companies should handle the massive amount of information they collect on users. Read more details here.
What is Secondary DNS? Secondary DNS is a technical term for an additional Domain Name System (DNS) server that can be used in case there’s an outage or disruption on your main domain name server at any point in time. As we all know, DNS plays a major role in directing traffic across the internet by translating website names into IP addresses, which are easier for computers to understand. Secondary DNS is needed primarily for load balancing. If more than one or two websites are hosted by your web hosting company, then it is possible that all the websites' connections will go through load balancers. If secondary DNS isn’t deployed in this case, users accessing each website may get different server IP addresses for every request, resulting in performance issues.

What are secondary DNS records? A secondary DNS record is a record for a hostname that’s managed as an alias to another, original name. If the original domain name is deleted or changed, the secondary name can still be used with no disruption in service. Secondary DNS records are created to allow a hostname to have more than one domain assigned to it. This is useful when you want to change the primary or canonical name of a site without breaking links. Instead, you can create secondary names for the site before changing them over time until they are no longer necessary.

How many types of secondary DNS records are there? There are two types of secondary DNS records: the CNAME record and the MX record. CNAME stands for Canonical Name, and MX stands for "mail exchange". A CNAME record is used to point from one domain name to another, usually from a subdomain to its parent domain. The most common reason for using a CNAME instead of an A record with a wildcard is to avoid having to manage service downtime during a switchover: if the name of a service changes, only its CNAME needs to be changed.
MX records are used to point a domain name to one or more mail servers responsible for receiving and delivering email for that domain. Each MX record carries a preference number; the lower the number, the more preferred the server.

How do they work? A secondary DNS server is a DNS server that stores an updated list of IP addresses for the computers on the internet. A secondary DNS server can store both IPv4 and IPv6 addresses. When a domain name needs to be resolved into an IP address, the computer requests that information from a DNS server. The first DNS server to respond with the location of the IP addresses is called a primary DNS server. The request for an IP address is then sent to the primary DNS server. This process can take several seconds – sometimes up to several minutes. The IP address returned by the primary DNS server is cached on the secondary server, however, to ensure that it can be quickly looked up should another request for the same address come through again. If a computer has set up multiple DNS servers to use in case one fails, any other computers using this machine as their DNS server will query the secondary DNS server for an IP address. They will also use the cached information to ensure that all of their requests are answered quickly. If another computer using this machine as a DNS server needs to look up an IP address, it will be added to the list on the original computer’s secondary server.

Why would you want to use them? There are many reasons for using secondary DNS. One of the most common reasons for running multiple DNS servers is to provide increased redundancy and availability. Other common reasons for running secondary name-resolution systems include providing higher performance or distributing the workload across different networks linked by WAN links, geographic regions, etc. Another common reason companies might want to use secondary DNS systems is for off-boarding purposes.
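The failover-and-cache behaviour described above can be sketched in a few lines. This is a toy model with made-up server callables, not a real DNS implementation:

```python
class CachingResolver:
    """Toy resolver: ask the primary server first, fall back to the
    secondary, and cache answers for a fixed TTL (in seconds)."""

    def __init__(self, primary, secondary, ttl=300):
        self.servers = [primary, secondary]  # each maps name -> IP string or None
        self.ttl = ttl
        self.cache = {}  # name -> (ip, expiry_time)

    def resolve(self, name, now):
        hit = self.cache.get(name)
        if hit and hit[1] > now:          # fresh cache entry: answer locally
            return hit[0]
        for server in self.servers:       # primary first, then secondary
            ip = server(name)
            if ip is not None:
                self.cache[name] = (ip, now + self.ttl)
                return ip
        raise LookupError(name)
```

Note that real secondary DNS servers obtain their copy of the zone through zone transfers (AXFR/IXFR); this sketch only illustrates the failover and caching idea.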
Let’s say you’re a company that has recently acquired another, and the new acquisition uses an internal DNS system that would take too long to get ready for production use with your environment. A workaround could be to set up a secondary DNS server for this purpose so you can off-board the old DNS servers slowly.

When should you not use them? You should not use a secondary DNS on your computer if the network you’re on is unreliable. Also, if you have more than one computer and need to access the same website from different networks, then secondary DNS will not work for you. This is because you will have multiple IP addresses, each of which will point you to a different secondary DNS. The end result will be that you’re still unable to access the website, because your computer does not know which secondary DNS to choose.

Secondary DNS is a way to speed up the process of loading websites. It’s primarily used for caching purposes, which essentially means that your browser stores certain files from popular sites on its own hard drive so that it doesn’t have to constantly fetch them over and over again. The best time to use secondary DNS is when you want faster load speeds but don’t plan on changing location much while viewing the site – whether because you’re at home or in an office building with a fast internet connection. However, if you’ll be moving locations often during browsing sessions (such as when using public wifi), then this won’t work well for you, since each new network will require downloading all those cached files anew.

Secondary DNS can also help bypass geographical restrictions, since it can disguise your true location. This is especially useful for people who travel often and want to access geo-blocked sites while abroad.
Using secondary DNS will allow you to unblock websites like Netflix, Amazon Prime Video, Hulu, and BBC iPlayer without having to change any network settings or install additional software on your devices.
In the first half of the twentieth century, two researchers working independently derived a relatively simple equation that serves as a Moore’s Law for the wireless industry: the Shannon-Hartley theorem. This theorem gives an upper bound on the amount of information that can be transmitted over the wireless channel, where the individual channel capacity depends on only two parameters: the channel bandwidth (BW) and the signal-to-noise ratio (SNR). While capacity scales linearly with the channel bandwidth, it scales only logarithmically with the signal-to-noise ratio:

C = BW log2(1 + SNR)

From the Shannon-Hartley theorem, there are three basic methods to increase network capacity:
- Increase the channel bandwidth: In 4G, carrier aggregation was used to increase the available signal bandwidth, and 5G FR2 uses the mmWave frequencies to obtain larger capacities.
- Increase the number of channels: MIMO utilizes the multipath scattering inside the network to transmit concurrently on several channels at the same time. As with channel bandwidth, network capacity scales linearly with this effect, but with an upper limit determined by the correlation (or similarity) of the multipath inside the network. 5G FR1 relies on scaling up MIMO to provide increased data rates.
- Increase the output power of the network: Due to the presence of noise in the SNR, the asymptotic log scaling of the SNR, and health/safety concerns about high electromagnetic energy, this method has its limits. One safer method of increasing the SNR throughout the network is the use of femtocells in areas of decreased coverage. By targeting the energy to a specific user, the energy efficiency of the network can be increased; this is referred to as "beamforming", and it is a key technology for both 5G FR1 and FR2 base stations.

Source: End-To-End 5G Test Solutions, sponsored by Rohde & Schwarz
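The theorem is simple enough to evaluate directly; the helper names below are our own, not from the cited source:

```python
import math

def channel_capacity_bps(bandwidth_hz, snr_linear):
    """Shannon-Hartley upper bound: C = BW * log2(1 + SNR), in bits/s."""
    return bandwidth_hz * math.log2(1 + snr_linear)

def snr_db_to_linear(snr_db):
    """Convert an SNR in decibels to a linear power ratio."""
    return 10 ** (snr_db / 10)

# Doubling the bandwidth doubles capacity, while doubling an already-high
# SNR adds only about one extra bit/s per hertz of bandwidth.
```

For example, a 1 MHz channel at 30 dB SNR (linear SNR of 1000) carries at most about 10 Mbit/s, and the linear-vs-logarithmic scaling is easy to check numerically.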
Delegates to a United Nations climate conference on Saturday committed 195 nations to substantially reducing their greenhouse gas emissions in order to stifle rising global temperatures. The accord, reached after two weeks of talks at the conference in Paris, stipulates that countries set individual emissions targets every five years beginning in 2020. Developed nations would set stricter limits, while developing nations would be expected to ease emissions growth and adjust their goals as circumstances warrant. Proponents hope that the targets would be increased in five-year increments as renewable energy becomes more prevalent. The deal also establishes disclosure requirements and calls for wealthier nations to provide funding to help poorer nations reduce their emissions and cope with the effects of climate change. Ultimately, the proposal aims to cap the global rise in temperatures at 1.5 degrees Celsius and to eventually achieve net-zero emissions, ensuring that man-made emissions can be absorbed by the planet. World leaders promptly characterized the conference as a landmark diplomatic achievement that included all nations in discussions about climate change for the first time. “This is truly a historic moment,” UN Secretary General Ban Ki-moon told The New York Times. “For the first time, we have a truly universal agreement on climate change, one of the most crucial problems on earth.” Achieving its goals, however, will prove to be extremely difficult. Global temperatures have increased by about 1 degree Celsius since the dawn of the Industrial Revolution, and experts said capping the overall increase at 1.5 degrees could already be impossible. The deal also would not impose penalties on nations that fail to meet their emissions standards, and the economic cost of eliminating more than 7 billion tons of carbon production could be staggering. "The problem is not solved because of this accord," President Obama said following the deal.
"But make no mistake, the Paris agreement establishes the enduring framework the world needs to solve the climate crisis." Republicans on the campaign trail and in Congress, however, largely dismissed it. “The president is making promises he can’t keep, writing checks he can’t cash and stepping over the middle class to take credit for an agreement that is subject to being shredded in 13 months,” said Senate Majority Leader Mitch McConnell, R-Kentucky.
For many of us, the cloud is part of our daily lives. We use these virtual storage servers to hold our pictures, our memories and our work documents, just to name a few. Cloud storage is also making its mark in the medical industry, with electronic health records making patient care easier no matter where you’re making your appointments. This utilization of virtual information storage is also being used to improve the speed and accuracy of DNA sequencing. How can cloud storage change the way we look at DNA?

The Importance of DNA Sequencing. DNA, which stands for deoxyribonucleic acid, is the smallest building block of life. It’s found in almost all living things on the planet. Your DNA, found in every cell in your body, holds the blueprint that governs why you are the way you are. Do you have red hair, or blue eyes? That’s written into your DNA. Are you tall, short, fat, skinny or athletic? You guessed it — that’s written into your DNA as well. Do you hate cilantro and think it tastes like soap? Believe it or not, that’s written into your DNA too. In that DNA blueprint, there are answers to thousands of questions that we’ve been posing for centuries, including things like how long we’ll live, what diseases we may be predisposed to, and many others. That is where DNA sequencing comes in. To stick with our same metaphor from a moment ago, you wouldn’t be able to read a blueprint without a key to tell you what the different symbols mean, right? DNA sequencing provides researchers with the key to our DNA blueprint. By learning the order of the four nucleotide bases that make up DNA, researchers can determine which combinations of genes produce which results.

Old Tech, New Tech. Until now, DNA sequencing was performed on non-networked computers. While breakthroughs were being made, they were limited by the small subset of information available and insufficient computer processing speeds.
In other words, individual computers used for DNA sequencing are limited by the amount of processing power that they can possess. Moore’s Law, coined by Gordon Moore — one of the founders of Intel — suggests that computers are limited by the number of transistors that can be placed on a single chip. He stated that this number would likely double every two years, and all current trends show that even with today’s advances, Moore’s Law still holds true. Advances in DNA sequencing are appearing exponentially, and in many cases are only being limited by the available processing power. Predictive analytics, or the study of patterns to make predictions, has already made its way into the medical field. When applied to DNA sequencing, it’s often dubbed Predictive Genomics. Cloud computing is a key component in the success of predictive genomics for a variety of reasons, including:
- The amount of data — The sheer amount of data in one human being’s genome is almost mind-boggling. Each individual’s genome has up to 25,000 genes, and the genome as a whole contains about 3 billion base pairs. When you break that down into digital data, you’re looking at upwards of 100 gigabytes of data per person.
- The cost — Right now, having your personal genetic code sequenced costs between $1,500 and $4,000. This also plays a large role in the high cost of testing for specific genetic markers, like the BRCA1 and BRCA2 genes that indicate a higher chance of breast cancer.
The use of cloud computing and predictive genomics can reduce costs, ensure quality and improve accuracy throughout the world of DNA sequencing. Amazon, our favorite online shopping mall, is doing what they can to help in the world of cloud computing and genomics. Amazon Web Services provides a cloud computing service that a number of companies, including DNAnexus and Helix, are using to improve the speed and accuracy of their genome sequencing.
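A rough back-of-the-envelope calculation shows why the per-person figure lands in that range. The coverage depth and bytes-per-base figures below are our assumptions, not numbers from the article:

```python
GENOME_BASES = 3_000_000_000   # ~3 billion base pairs in a human genome
COVERAGE = 30                  # typical sequencing depth (assumed)
BYTES_PER_BASE = 1             # ~1 byte per base call plus quality data (rough)

raw_bytes = GENOME_BASES * COVERAGE * BYTES_PER_BASE
raw_gigabytes = raw_bytes / 1_000_000_000  # on the order of 100 GB per person
```

Compression and the choice of file format (FASTQ, BAM, CRAM) change the exact number considerably, but the order of magnitude holds.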
There’s an App for That While sending off a saliva-soaked q-tip to have your DNA tested isn’t a new concept, this is the first time it’s heading to both the cloud and the App Store. A new startup from Silicon Valley named Helix has recently hit the DNA sequencing market with a new twist on the DNA game. Now, not only can you have your DNA tested for all sorts of information, but you can also have your genetic ancestry analyzed by the minds at National Geographic. As the icing on the cake, all of your information will be stored on the cloud and accessible through Helix’s app. Cloud computing is becoming an invaluable tool for a variety of different industries, with DNA sequencing as just the latest in a long line of innovations. As this advancement becomes more mainstream, only time will tell what secrets our DNA holds, and what we’ll be able to do with them once we find them. By Kayla Matthews Kayla Matthews is a technology writer dedicated to exploring issues related to the Cloud, Cybersecurity, IoT and the use of tech in daily life. Her work can be seen on such sites as The Huffington Post, MakeUseOf, and VMBlog. You can read more from Kayla on her personal website.
The advent of VMware virtual machines has taken computing technology to a whole new level. Presenting a virtual machine as a set of files makes configuration, storage and implementation fast and seamless. That is why, in order to understand VMware virtual machines better, it is essential to get an insight into the basic file settings and the formats in which they are executed in the system. There are basically two formats in which the set of files are saved: VMFS (Virtual Machine File System) and RDM (Raw Device Mapping). Both of these formats let you access a VMDK (virtual machine disk), but it is the comparison of RDM vs VMFS in VMware that we are going to emphasize here. There are several features on the grounds of which we can draw a comparison between VMFS and RDM, including storage. Here, we also try to understand why VMFS is recommended by VMware for the majority of virtual machines. When we look at VMware's studies, we find that there is not much performance difference between the VMFS and RDM formats. Studies based on different performance tests reveal that both VMFS and RDM deliver similar I/O throughput for the maximum workloads tested. So what are the features where the difference lies? Let us find out!

RDM vs VMFS in VMware

From the comparison above, it is quite evident that apart from raw performance there are certain features on the basis of which we can differentiate RDM vs VMFS in VMware. The performance difference observed between the two formats is negligible, but when we take into consideration the disk data, the assigned role, as well as the storage capacity, the variation can be highlighted more easily. As far as the recommendation is concerned, it is pretty clear that VMFS is advised by VMware for the maximum number of virtual machines. The features and utilities are well suited for the long run, and that matters a lot. But that does not mean the RDM format is out of the equation. It is still advisable in a few special situations; one such example is a SAN-aware virtual machine.

Related – VMware Interview Questions
Data migration is the process of moving data from one location to another, one format to another, or one application to another. Generally, this is the result of introducing a new system or location for the data. The business driver is usually an application migration or consolidation in which legacy systems are replaced or augmented by new applications that will share the same dataset. These days, data migrations are often started as firms move from on-premises infrastructure and applications to cloud-based storage and applications to optimize or transform their company.

Why is data migration challenging? The short answer is "data gravity." Although the concept of data gravity has been around for some time, the challenge is becoming more significant because of data migrations to cloud infrastructures. In brief, data gravity is a metaphor describing how applications and services accumulate around large datasets, making those datasets progressively harder to move. To move applications and data to more advantageous environments, Gartner recommends "disentangling" data and applications as a means of overcoming data gravity. By making time at the beginning of the project to sort out data and application complexities, firms can improve their data management, enable application mobility, and improve data governance.

The main issue is that every application complicates data management by introducing elements of application logic into the data management tier, and each one is indifferent to the next data use case. Business processes use data in isolation and then output their own formats, leaving integration for the next process. Therefore, application design, data architecture, and business processes must all respond to each other, but often one of these groups is unable or unwilling to change. This forces application administrators to sidestep ideal and simple workflows, resulting in suboptimal designs. And, although the workaround may have been necessary at the time, this technical debt must eventually be addressed during data migration or integration projects.
Given this complexity, consider promoting data migration to "strategic weapon" status so that it gets the right level of awareness and resources. To ensure that the project gets the attention it needs, focus on the most provocative element of the migration – the fact that the legacy system will be turned off – and you’ll have the attention of key stakeholders, guaranteed. There are numerous business advantages to upgrading systems or extending a data center into the cloud. For many firms, this is a very natural evolution. Companies using cloud are hoping that they can focus their staff on business priorities, fuel top-line growth, increase agility, reduce capital expenses, and pay for only what they need on demand. However, the type of migration undertaken will determine how much IT staff time can be freed to work on other projects.

Data migration involves three basic steps: extracting data from the source system, transforming it, and loading it into the target system. Moving important or sensitive data and decommissioning legacy systems can put stakeholders on edge. Having a solid plan is a must; however, you don’t have to reinvent the wheel. You can find numerous sample data migration plans and checklists on the web. For example, Data Migration Pro, a community of data migration specialists, has a comprehensive checklist that outlines a 7-phase process. This may appear to be an overwhelming amount of work, but not all these steps are needed for every migration. Each situation is unique, and each company approaches the task differently.

Even though data migration has been a fact of IT life for decades, horror stories are still reported every year. Here are the top 10 challenges that firms encounter in moving data:

Not contacting key stakeholders. No matter the size of the migration, there is someone, somewhere who cares about the data you’re moving. Track them down and explain the need for this project and the impact on them before you get going on the task.
If you don’t, you’ll certainly hear from them at some stage, and chances are good that they’ll disrupt your timeline. Not communicating with the business. Once you’ve explained the project to the stakeholders, be sure to keep them informed of your progress. It’s best to provide a status report on the same day every week, especially if things get off track. Regular communication goes a long way in building trust with all those affected. Lack of data governance. Be sure you’re clear on who has the rights to create, approve, edit, or remove data from the source system, and document that in writing as part of your project plan. Lack of expertise. Although this is a straightforward task, there's a lot of complexity involved in moving data. Having an experienced professional with excellent references helps the process go smoothly. Lack of planning. On average, families spend 10 to 20 hours planning their vacation, while IT teams may spend as little as half that time planning a small data migration. Hours spent planning don't always guarantee success but having a solid data migration plan does save hours when it comes to actually moving the data. Insufficient data prep software and skills. If this is a large migration (millions of records or hundreds of tables), invest in first-class data quality software and consider hiring a specialist firm to assist. Good news: An outside firm will probably rent you the software to help conserve costs. Waiting for perfect specs for the target. If the implementation team is sorting out design criteria, press on with steps 2 and 3. Target readiness will matter later in the project, but don’t let it stop you now. Unproven migration methodology. Do some research to be sure that the data movement procedure has worked well for other firms like yours. Resist the temptation to just accept the generic procedure offered by a vendor. Supplier and project management. Vendors and projects must be managed. 
If you're still doing your day job too, be sure that you have the time to manage the project and any related suppliers.

10. Cross-object dependencies. Even with the technology and capabilities of data management tools available today, it's still shocking to learn about a dependent dataset that wasn't included in the original plan. Because cross-object dependencies often are not discovered until very late in the migration process, be sure to build in a contingency for them so that your entire delivery date isn't thrown off.

The terms data migration and data conversion are sometimes used interchangeably on the internet, so let's clear this up: they mean different things. As pointed out earlier, data migration is the process of moving data between locations, formats, or systems. Data migration includes data profiling, data cleansing, data validation, and the ongoing data quality assurance process in the target system. In a typical data migration scenario, data conversion is only the first step in a complex process.

The term data conversion refers to the process of transforming data from one format to another. This is necessary when moving data from a legacy application to an upgraded version of the same application, or to an entirely different application with a new structure. To convert it, data must be extracted from the source, altered, and loaded into the new target system based on a set of requirements.

Another term that is sometimes confused with data migration is data integration. Data integration refers to the process of combining data residing at different sources to provide users with a unified view of all the data. Integrating data from multiple sources is essential for data analytics. Examples of data integration include data warehouses, data lakes, and NetApp® FabricPool, which automates data tiering between on-premises data centers and clouds, or between AWS EBS block storage and AWS S3 object stores.
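The extract, alter, load sequence described above can be sketched in a few lines of Python. This is a toy illustration, not NetApp tooling; the legacy CSV layout and the target field names are invented for the example:

```python
import csv
import io
import json

def convert_records(legacy_csv: str) -> str:
    """Extract rows from a legacy CSV export, alter them to match a
    new target schema, and return JSON ready for loading."""
    rows = csv.DictReader(io.StringIO(legacy_csv))
    converted = []
    for row in rows:
        # Alter step: rename fields and normalise types for the
        # (hypothetical) target system.
        converted.append({
            "customer_id": int(row["CUST_NO"]),
            "name": row["CUST_NAME"].strip().title(),
            "active": row["STATUS"] == "A",
        })
    return json.dumps(converted)

legacy = "CUST_NO,CUST_NAME,STATUS\n1001, alice smith ,A\n1002,BOB JONES,I\n"
print(convert_records(legacy))
```

Even in this miniature form, the profiling and validation concerns from the checklist apply: a single row with a non-numeric `CUST_NO` would stop the load, which is exactly why data quality work comes before the move.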
Move to Infrastructure as a Service (IaaS):

Move to Platform as a Service (PaaS):

Choosing a deployment model that aligns with business requirements is essential to make sure that any data migration is both smooth and successful and delivers business value in terms of performance, security, and ROI.

NetApp's data management solutions drive efficiency with software management tools designed to work together. Integrate your applications with new services in the cloud and even pay off some of that technical debt! Accelerate your data center migration, reduce risk, eliminate or minimize disruption, and ensure your data center is cloud-ready.

From artificial intelligence to data centers in the cloud, learn why NetApp is the gold standard for data storage and management. AI requires efficient management and processing of huge volumes of data. NetApp designs AI solutions to meet the most challenging needs. NetApp leads the storage industry with its all-flash arrays that deliver robust data services. Modernize your IT environment with the world's leading data management experts and specialists. You need a solid foundation for your seamless hybrid cloud. NetApp® ONTAP® data management software gives you every advantage possible.
This is 2017, and you're not pressing the buttons on your smartphone's keyboard anymore. You're using your voice to order your phone to carry out the required function. A few years ago, we could only dream of science and technology producing something like this. Devices like Google Home, Alexa, Apple Siri and Amazon's Echo can hear you amid noise and laughter, from anywhere in the room. Though this is everything we've ever dreamed of – convenient, effortless, and straightforward – it is important to highlight and be aware of the threats this new invention has introduced into our lives.

Have you ever dictated valuable, secret information such as your email passwords or your credit card numbers? Ever thought somebody else might be hearing you and recording this information to make some money off you? If not, you had better start thinking now, because this poses not just security and privacy risks but also physical hazards. Your location, your home address, your daily routine: someone could be keeping tabs on you and gaining unauthorized access to all your personal details. Scary, huh?

Security experts have time and again highlighted the threats posed by other internet-connected devices that collect your personal information and connect the dots to take advantage of you. Attackers have been hijacking webcams for years. Smart speakers, for example, "hear" from afar in a noisy environment, even while playing music, and Google Assistant has no problem understanding the pronunciation of a three-year-old child. Also, these systems do not know how to distinguish people by voice, which is why Alexa performs the commands of everyone who is nearby. As a result, children sometimes make unplanned purchases in online stores through Echo; for them, it's just a game. Think about it. We're placing our trust in the hands of software!
We get so skeptical about trying on clothes in the changing rooms of clothing outlets, but we don't think twice before allowing a live microphone into our private space. The remotest threat to our privacy from surveillance cameras makes us angry, yet we're not doing anything about what's going on inside our own houses. We're being betrayed by the devices we put our trust in.

Besides this, the introduction of voice assistants in smartphones is nothing but harmful to the development of young people. Children are becoming more and more dependent on technology for their everyday tasks. Getting things done is becoming easier by the day. Young minds are being dulled, and intellectual growth among children is slowly becoming something that parents are starting to feel extremely worried about. While voice assistants might be a convenient way to order your phone around, they are also making you lazier than ever. And that is not okay.

It has become more important now than ever for us to pay attention to the mental health, privacy and security of our lives. We must be aware of the channels through which we are being controlled or surveilled. Otherwise, we risk the violation of a fundamental human right: the right to life and personal liberty.
Possibilities of Emerging Quantum Technologies

Electrons, Atoms, and Q-Bits, Oh My!

Quantum technologies rely on properties that do not readily show themselves in our daily lives. And at least right now, they are not easily understood by the public. I will undoubtedly be the first to admit that I have no more understanding than the average layperson regarding anything quantum. That fact, however, does not stop me from being entirely enthralled by having discussions about it or hearing experts speak on the possibilities that lie within the realm of quantum technologies, and even mulling over what that means, especially in terms of national security and defense. Technologies like these involve manipulating components (think atoms and electrons) at scale, in complex experiments, at which point certain fundamental properties can be observed and subsequently utilized in specific ways. Just as bits describe information in classical computing, qubits (quantum bits) are the objects of information referred to in quantum computing.

A New ‘Wave’ of Thinking – It’s a Feature, Not a Bug

As research develops and concepts become more understood, the government intends to replace old tech with newer tech and harness the possibilities within quantum technology in ways that are not yet entirely known, because it has not been explored deeply enough – yet. Recently, I attended a webinar hosted by the former Principal Director for Quantum Science at the Department of Defense, where guests briefly heard about certain quantum technologies presently drumming up excitement within the defense and science communities. Current hot topics from a defense perspective involve atomic clocks, quantum sensors, quantum computers, and quantum networks.

- Atomic Clocks – Provide resilient timing in contested environments and novel precision timing applications. The ability to synchronize clocks is essential for positioning and timing attack prevention.
- Quantum Sensors – Still in research stages.
Improved ISR sensors, better PN, and newer/cheaper manufacturing techniques.
- Quantum Computing – There have been significant investments and some early demonstrations. Still in the discovery stage for new classes of algorithms and applications, research, adoption of new machines, and post-quantum cryptography.
- Quantum Networks – Early stages. Long-term proposals include parallelizing quantum computing, connecting networks, and potentially a quantum internet.

Even if we are still quite a long way from a mainstream understanding (some experts say about ten years or more), the exciting potential quantum technologies hold and their role in securing our nation – whether on the battlefield or in cyberspace – is just waiting to be explored. Thinking and learning about emerging, future-oriented technology like this is part of the reason NetCentrics has recently established our very own Tech Gurus employee resource group, because getting the right people with the right knowledge to hang out in rooms together is how innovation blooms. We're always on the hunt for big thinkers – check out our open positions here.
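The bit-versus-qubit distinction mentioned above can be made concrete with a toy state-vector calculation. This is a minimal sketch using NumPy, not a real quantum device: a qubit is represented as a two-component complex vector, and a Hadamard gate puts the |0⟩ state into an equal superposition:

```python
import numpy as np

# A classical bit is 0 or 1; a qubit is a normalised 2-component
# complex vector. |0> is the first basis state.
ket0 = np.array([1, 0], dtype=complex)

# Hadamard gate: sends |0> to an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0

# Born rule: measurement probabilities are the squared amplitudes.
probs = np.abs(state) ** 2
print(probs)  # both outcomes equally likely, 0.5 each
```

A classical bit can never be in this in-between state, which is the starting point for the new classes of algorithms the defense community is excited about.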
It's a bird, it's a plane... it's a Flying COW. Despite its funny name, we expect our Flying COW will be doing serious, potentially life-saving things for our customers. We've used drones to inspect cell sites, measure network strength in sports stadiums, and now we've built a Flying COW. A Flying COW – which stands for Cell on Wings – is a cell site on a drone. It is designed to beam LTE coverage from the sky to customers on the ground during disasters or big events. Last week, our drone team completed what we believe is an industry first: a successful live test flight of the Flying COW transmitting and receiving high-speed data above a field outside Atlanta. Check out the video above to see it in action.

Here's how it works. The drone we tested carries a small cell and antennas. It's connected to the ground by a thin tether. The tether between the drone and the ground provides a highly secure data connection via fiber and supplies power to the Flying COW, which allows for unlimited flight time. The Flying COW then uses satellite to transport texts, calls, and data. The Flying COW can operate in extremely remote areas and where wired or wireless infrastructure is not immediately available. Like any drone that we deploy, pilots will monitor and operate the device during use.

Once airborne, the Flying COW provides LTE coverage from the sky to a designated area on the ground. Compared to a traditional COW, in certain circumstances, a Flying COW can be easier to deploy due to its small size. We expect it to provide coverage to a larger footprint because it can potentially fly at altitudes over 300 feet – about 500% higher than a traditional COW mast. Once operational, a Flying COW may ultimately provide coverage to an area up to 40 square miles. We may also deploy multiple Flying COWs to expand the coverage footprint. We see the Flying COW playing an important role within our Network Disaster Recovery (NDR) team.
We can transport, deploy, and move it quickly to accommodate rapidly changing conditions during an emergency. For example, at the direction of first responders, it could follow firefighters battling a quickly moving wildfire line – keeping them connected while they fight blazes. The Flying COW is tough. It can fly and provide coverage in bad weather – from high winds to heavy smoke.

We'll also look to use Flying COWs to enhance coverage at big events like music festivals. Used in conjunction with traditional COWs, the Flying COW may allow us to extend coverage to the outlying areas of the festival grounds. The sky is truly the limit when it comes to the use of drones on our network. The Flying COW is an exciting next step in how we're using drones to bring strong wireless connectivity to those who need it most.

Art Pregler - Unmanned Aircraft Systems (UAS) Program Director
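As a rough sanity check on the altitude claims above (this rule of thumb is not from AT&T's post), the line-of-sight radio horizon grows with the square root of antenna height, so flying higher extends reach considerably:

```python
import math

def radio_horizon_km(height_m: float) -> float:
    """Approximate line-of-sight radio horizon (km) for an antenna
    at height_m metres, via the common 3.57 * sqrt(h) rule of thumb.
    Actual LTE reach also depends on transmit power and terrain."""
    return 3.57 * math.sqrt(height_m)

# A Flying COW at just over 300 feet (~92 m) versus a traditional
# COW mast at roughly one-sixth that height (~15 m).
print(round(radio_horizon_km(92), 1))   # about 34 km
print(round(radio_horizon_km(15), 1))   # about 14 km
```

The quoted 40-square-mile footprint (a radius of roughly 3.6 miles) sits comfortably inside that horizon, so the practical limit is link budget rather than the curvature of the earth.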
VoIP simply stands for Voice over Internet Protocol and means the use of the internet for telephony. It's the basis of many modern phone systems and allows voice calls through well-known apps and services. At Gradwell, we've had almost 20 years of experience in supplying and supporting the technology to over 8,000 customers nationwide and beyond.

What is VoIP?

Ever used Facebook's call function? That's VoIP. FaceTime or FaceTime Audio? VoIP again. Skype? Well, you get the idea. VoIP phone systems allow you to make low-cost calls over the internet; using a multitude of different networks and operators, you can connect with members of staff, clients and customers across the globe both cost-effectively and efficiently.

Back in the early days of VoIP, the unreliability of dial-up connections meant that landline calls were highly superior – VoIP was beset by poor audio quality and a high drop rate. However, with modern internet speeds, including Fibre Optic broadband and Leased Lines, VoIP phone systems far surpass landline connectivity in quality and innovation. If you run a business, you know how beneficial this is: vital sales calls, multi-partner conference calls, being available for your customer or client when they need you the most. VoIP can also be a great addition to your home – in fact, it may become essential, with Openreach planning to switch off their ISDN network by 2025, migrating nearly all UK landlines over to a SIP trunk-based VoIP service.

How does VoIP work?

How does VoIP differ from traditional phone calls? First, we'll need to unpack some jargon. In terms of landline calls (ISDN and PSTN), VoIP bypasses the national landline network's copper wire by transmitting data packets of information (usually audio) across a network.
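The idea of carrying voice as data packets can be shown with a toy sketch. Real VoIP stacks use RTP over UDP with codec-specific framing; this simplified Python example only shows a stream being chopped into sequenced packets and reassembled:

```python
# Toy illustration of packetising audio for VoIP. Real systems use
# RTP/UDP; this only shows the idea of chopping a stream into
# sequenced packets and reassembling it at the far end.

SAMPLE_RATE = 8000      # samples/second (narrowband telephony)
FRAME_MS = 20           # a typical VoIP packet covers 20 ms of audio
SAMPLES_PER_PACKET = SAMPLE_RATE * FRAME_MS // 1000  # 160

def packetise(audio: bytes) -> list[tuple[int, bytes]]:
    """Split a byte stream into (sequence_number, payload) packets."""
    return [
        (seq, audio[i:i + SAMPLES_PER_PACKET])
        for seq, i in enumerate(range(0, len(audio), SAMPLES_PER_PACKET))
    ]

def depacketise(packets: list[tuple[int, bytes]]) -> bytes:
    """Reassemble payloads in sequence order, as the receiving
    endpoint does before playback."""
    return b"".join(payload for _, payload in sorted(packets))

audio = bytes(range(256)) * 5          # 1280 bytes of fake audio
packets = packetise(audio)
assert depacketise(packets) == audio   # round-trips losslessly
print(len(packets))
```

At 8 kHz narrowband audio, a 20 ms packet carries 160 samples, which is why VoIP traffic is a steady stream of small, frequent packets rather than one long transmission.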
These IP packets are received over the packet-switched network and 'de-packetised' into the voice you hear through your VoIP endpoint – the technical term for a VoIP desktop phone, VoIP mobile phone, softphone application (VoIP software), or other receiver. Mobile phones operate calls through a connection to the PSTN, maintained via a terrestrial network of base stations. When you use VoIP, your voice is converted into digital information which is then transmitted as data over the internet, unlike the way traditional phone lines work via a local phone provider.

For business phone systems, a Private Branch Exchange (PBX) refers to the main piece of tech that routes all your calls, using Session Initiation Protocol (SIP) in SIP trunks to make those connections. Modern PBXs are also called IP PBXs to indicate they use VoIP rather than analogue connections (although now this is often assumed). Your PBX can be a physical bit of hardware, run and maintained by you (or your IT department); however, with modern cloud solutions, your VoIP provider can usually run your PBX via the cloud – essentially a phone system combined with Software as a Service (SaaS). This allows you to control your system with a personal online control panel, resulting in instant access to your add-ons and personal features, as well as a real-time overview of your business telecoms.

What are the different types of VoIP phone systems?

VoIP as a technology exists to power calling. Typically a business user will come across it in the following ways:

These include Skype, WhatsApp, Facebook Messenger, FaceTime and any other voice application designed to run primarily on a mobile or tablet device. This also includes mobile apps for business communications, like Slack, Zoom and Microsoft Teams.

On-premise IP PBX systems

An IP PBX is essentially a phone system that uses VoIP to place and receive calls.
Traditional PBXs used analogue connections, whereas IP PBX systems use internet connections and SIP Trunking (or alternative technology) to route calls to and from the traditional phone networks. On-premise refers to the fact that the system is housed or managed internally. This approach usually requires more up-front spend (capital expenditure) as well as in-house expertise. These systems will also need a SIP Trunk provider to function.

Hosted PBX systems

There are many names for these systems, including Cloud Hosted PBX, Hosted Voice, Cloud Calling and so on. Hosted PBX systems are cloud-hosted, meaning the provider takes care of all functionality, security, updates and delivery. All the user has to do is configure the setup of their system and users as they see fit. In many cases, the provider will even pre-provision phone hardware for you. Hosted systems are becoming increasingly popular, particularly for small businesses, as they require almost no capital expenditure or in-house expertise. With COVID-19 changing the way that businesses operate, there has never been a better time to think about a cloud PBX.

At Gradwell, we offer two types of hosted phone systems:

Wave: for smaller businesses who want an affordable VoIP system with fast setup

3CX: for businesses that want a fully-featured phone system with customised setup

Benefits Of Using A VoIP Phone System

There's very little to get to grips with when making a VoIP call – you dial a number and connect through to the person(s) on the other end – most of the time you won't even realise the difference! So what's the point? Well, VoIP is incredibly cheap (sometimes free!), usually faster, more reliable and, what's more, affords numerous flexible features. With VoIP, you can talk to anyone, anywhere as long as you have an internet connection. This means, with the right provider, you can choose any number and make and receive calls with it from anywhere in the world.
This is particularly common with small businesses who want a global footprint. Remote extensions are usually standard with VoIP; with PSTN they are expensive, as additional dedicated line installs are needed. VoIP services are perfect for protection from disasters, as all data and functionality is provided in the cloud. Gradwell's hosted VoIP service offers a comprehensive disaster recovery service, including remote access to your system, cloud backup, and rerouting to business mobiles.

Ultimately, the outstanding benefit of VoIP is the price. Not only is there the capacity to conduct a variety of free calls, but the monthly costs are phenomenally lower. The average business line rental for PSTN is almost 4 times greater than the VoIP equivalent. As bandwidth is utilised efficiently, there is less wastage – one internet connection is all that's necessary to transmit all voice information. Using data connections for voice creates an opportunity to unify communications technologies into fewer systems. With something like Direct Routing, Microsoft Teams combines chat, file management, video and voice calls into one application.

What features do VoIP phone systems include?

A VoIP phone system provides you with the unique opportunity to take advantage of a wide variety of additional services.

Call recording

With a VoIP phone system, you can record both inbound and outbound calls. Often, you can link call recording with a CRM system and contact records. Having the ability to listen back to your sales and support calls helps you to improve your call handling processes and increase customer satisfaction. It's also helpful when training new staff, evaluating employees and increasing productivity.

Custom hold music

When a caller is on hold or in a queue, the wait is typically accompanied by music or a pre-recorded message.
Being able to customise your music or message reassures callers that their call will be answered while also keeping them slightly more entertained than listening to silence. No business wants to use the same hold music or message as everyone else, so being able to customise this is a helpful feature to have.

Intelligent call routing

VoIP allows you to route incoming calls depending on a set of predetermined rules or conditions, such as time of day, location, users and more. Call routing with a VoIP phone system means you can quickly modify the routing rules as and when you need to.

Virtual receptionist

Also known as Interactive Voice Response or IVR, a virtual receptionist allows customers to interact with a company's host system and to enter options via voice or keypad. Their duties range from taking messages, directing calls to the right people and providing information to customers, to managing sales and support enquiries. A virtual receptionist responds with pre-recorded or generated messages and directs users on how to proceed. This makes answering a high volume of calls far more efficient and helps the user calling in get the right answer or reach the right person.

High quality audio

With a strong internet connection and greater bandwidth, VoIP calls have high-quality audio that rivals that of traditional analogue phones. VoIP call quality relies on a reliable internet connection, so as long as that's in place, your VoIP calls will be clear and better sounding.

Voicemail to email

A lifesaver for a mobile workforce, VoIP systems can send recordings of voicemails to your email address and other applications. You can then download the file and view and 're-listen' to your voicemails – sometimes with text transcription.

With a VoIP phone system, the VoIP number (essentially a virtual telephone number) isn't connected to just one device but is instead connected with a user.
This user can access the same number for both inbound and outbound calling using any device you want, like a laptop, computer, tablet or smartphone. This is an extremely attractive feature, as you can make and take work calls from wherever you are – especially helpful for remote workers.

What equipment do I need?

If you opt for a physical (on-premise) PBX, you'll need a location at your business to house the PBX, as well as actually purchasing the PBX. VoIP phones may be the most obvious bit of hardware you need; these are phones designed specifically for VoIP networks, and come in a variety of formats. Whether they're desktop, cordless or conference phones, you can guarantee a stable, high-quality connection. You may wish to invest in headsets (both wired and wireless), or other accessories, including DECT clips and specific receivers.

Adapters work over an IP network, connecting analogue phones and fax machines to a VoIP network. A great choice for any business with a set of analogue phones already at its disposal.

Software phones, or softphones, are downloadable applications that live on your computer or mobile device. They usually consist of a keypad for making calls, plus numerous other VoIP functions, such as call recording. Softphones such as Zoiper can be downloaded and utilised for very low cost – a great way to integrate your communication solutions as an alternative to physical VoIP phones. One service in particular, 3CX, comes with a very sophisticated softphone as standard.

If you choose to go with a hosted PBX service, the provider will house and service the PBX for a monthly fee; many customers opt for this, as maintaining an on-premise PBX can take a lot of time and technical know-how. A Virtual PBX is a possible third option. While it lacks some of the functions of a full hosted PBX, the costs are substantially lower, and the use of SIP Trunking means that calls can still be routed appropriately.
With the advent of cloud systems, a virtual PBX may be a more appropriate solution in the future. You'll need a good-quality internet connection to support a VoIP service, particularly for high-quality, reliable calls. FTTP and Ethernet connections, like Leased Lines, are well worth looking at.

How to set up VoIP for your office

To start the initial set-up, you first need to check your bandwidth and assess what you need. Checking your bandwidth basically means checking your internet connection. You need to do this to ensure that your call quality and speed will be up to scratch and that your VoIP phone system will cover everyone who needs it in the office. Do this by running an internet speed test – simply type this into Google.

Assessing what you need is relatively simple. How many people will be using the device? How many lines will you need? Can your internet support the call volume? Once you figure this out, you'll have a general idea of the features you want and can explore any add-ons, like other media communications.

Next, choose a provider that is right for you. Do a bit of research or call up providers and find the deal and package that is right for you. It's also a good idea to choose a provider that has good customer service. If anything were to go wrong with your VoIP system, you want to be able to contact someone who can help you fix the problem. Once you decide on your provider and system, you can order the phones and necessary software. Finally, you need to set up and configure the systems. Most of the time, you can plug your phone into the ethernet, configure the settings on the phone and you're ready to go. If you're considering buying with Gradwell, we will set everything up for you!

Glossary of VoIP terms

ATA stands for Analog Telephone Adaptor. It is a type of hardware that converts audio, video and data signals into IP packets that can then be sent over the internet. It connects standard phone lines to high-bandwidth lines to make VoIP calls.
Bandwidth is the maximum rate of data transfer during a given time period, measured in bits per second. A high bandwidth means that data is transferred at a fast pace, whereas a low bandwidth means it is transferred at a slow pace.

The cloud, or cloud computing, means storing and accessing hardware and software services over the internet.

A codec is a device or program that compresses or decompresses and encodes or decodes data packets so they can be delivered, received, used, stored and encrypted. Codecs are most used in voice and video services.

Ethernet is a family of computer networking technologies that connects computer systems to form a local area network.

A gateway is a router in a computer network that is a key stopping point for data on its way to or from other networks. It sends data back and forth and blocks harmful traffic from infiltrating your network.

A hosted solution is a service delivered by a service provider from their own private data centres; the provider supplies the physical servers to run your phone network.

IAX stands for Inter-Asterisk Exchange. It's a protocol used by Asterisk telephone systems to connect multiple Asterisk servers and devices.

IP stands for Internet Protocol, a communications system that routes data from one computer to another over the internet using a set of rules and formats. You may have also heard of an IP address: a fixed or dynamic number associated with an internet-enabled device. An IP address is essential for connecting different devices over the internet and for voice and data communications.

ISDN stands for Integrated Services Digital Network. It's a set of communication standards that uses digital transmission to make phone and video calls, transmit data and deliver other network services over the circuits of the PSTN.
IVR stands for Interactive Voice Response. It's an automated telephony system that interacts with callers, gathers information and routes calls to the right person or team.

LAN stands for Local Area Network. A LAN connects computers to each other within a group or area, for example, offices and schools.

Modem is an abbreviation for modulator-demodulator. It's a hardware device that converts data into formats that can be transmitted from computer to computer.

Opposite to a hosted solution, an on-premise solution requires a business to install and look after its physical hardware in its own office.

A packet is a collection of data that's used by computers to communicate with each other.

PBX stands for Private Branch Exchange. It is a private telephone network exchange and a well-known business telephone system. Those who use a PBX phone system can communicate within their company and outside it, using communication channels like VoIP or ISDN.

Provisioning is the configuration of an IP phone through the IP telephony server or PBX.

PSTN stands for Public Switched Telephone Network. It is the system that has been in use since the 1800s and is, essentially, the world's combined circuit-switched telephone network. It has developed from an analogue system to a completely digital one.

A service provider is a company that provides organisations with communications.

SIP, or Session Initiation Protocol, is a system that transmits voice and video information across a data network. It is involved in the VoIP process, allowing VoIP users to take advantage of communications flexibility and shared lines.

A softphone is a type of software for desktops, tablets and laptops that provides VoIP call services. Rather than being a physical phone, it runs on a computer. A softphone receives input from a microphone and outputs through speakers or headphones.

Telephony is the field of technology that involves the development, application and deployment of telecommunication services.
VoIP stands for Voice over Internet Protocol. VoIP enables the delivery of voice as data over the internet.

Frequently Asked Questions

What is a VoIP phone system?

VoIP is a phone system (or "PBX") that uses the internet to place and receive calls.

How much does VoIP cost?

Costs are hugely reduced by using VoIP for business, and calls between internal numbers (in any location) are free, as no connection to the traditional phone networks is needed.

Do I have to maintain my VoIP business phone system?

No! You can host your VoIP system yourself (on-premise) or get a provider to host it for you (hosted).

Does VoIP require a special phone?

No. Although IP (or SIP) enabled phones are extremely widespread, you can use an ATA (Analogue Telephone Adaptor) to connect analogue phones to VoIP systems.

Can I use my mobile phone with VoIP?

You can use VoIP on your mobile phone, given you have the right app and the right VoIP provider. It routes your calls through the internet rather than a phone connection, so you need a package that will allow you to do that.

Who can I call on my VoIP phone?

Your VoIP phone can call any number in the world, whether it's business or personal, local or long distance. The main difference between a VoIP phone and a regular phone is the way the information travels, but essentially, VoIP can do anything a normal phone can do.

Why would businesses use a VoIP number?

The most common benefits of using VoIP include cost savings and being able to make and receive calls from anywhere in the world and on any device.

Can I keep my number?

Yes. VoIP providers allow number portability, where you can keep the same number you had with your regular phone provider. You can also port or transfer over any saved numbers you have. This process is called number porting.

Will VoIP affect my internet speed?

The most important aspect of VoIP is having an internet connection – it won't work otherwise!
You should assess your internet speed before you install a VoIP connection, to ensure your connection and speed will work with VoIP and that the two won’t interfere with each other. If you have a fast internet connection, VoIP will work perfectly and won’t affect your internet speed.

How fast does my broadband speed need to be for high-quality calls? For regular, landline-quality calls, your broadband speed should be 90-200 Kbps.

How secure are VoIP calls? VoIP transfers your data over the internet, so it makes sense that people worry about security. VoIP providers fit your network with the right security measures, like Session Border Controllers, which protect your data and monitor what goes into and out of your network.

Is VoIP better than a landline? In most cases, yes. Good-quality internet connections mean modern VoIP calls are much clearer than traditional landline calls.

Does VoIP have any downsides? VoIP is an extremely effective technology and far outperforms analogue telephony. One obvious downside of using an internet connection to place calls is that if your connection goes down, you won’t be able to make calls at all. This is why resilient and secure internet connections are so important to modern businesses.

What does VoIP have to do with the ISDN switch-off? ISDN, the Integrated Services Digital Network, has been one of the most widely used ways to make phone calls, video calls and other services across the world. However, ISDN is set to be switched off in 2025, mainly because it is an outdated system. Telecommunications technologies like VoIP have come to the forefront instead. VoIP is a modern, progressive upgrade that relies only on an internet connection. It is also less expensive, requires fewer physical lines and is more scalable and flexible.
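The broadband guidance above can be turned into a rough capacity check for an office running several simultaneous calls. A minimal sketch: the per-call codec bitrates below are common published figures (not taken from this guide), and the 20% headroom is an assumed safety margin, not a rule.

```python
# Rough VoIP bandwidth planner: estimate the broadband needed for a
# number of concurrent calls. Per-call rates (including IP/UDP/RTP
# overhead) are typical published figures for each codec.
CODEC_KBPS = {
    "G.711": 87,  # uncompressed, landline quality
    "G.729": 32,  # compressed, lower bandwidth
}

def required_kbps(concurrent_calls: int, codec: str = "G.711") -> int:
    """Return the up/down bandwidth (Kbps) needed for the calls alone."""
    return concurrent_calls * CODEC_KBPS[codec]

def connection_ok(available_kbps: int, concurrent_calls: int,
                  codec: str = "G.711") -> bool:
    # Keep ~20% headroom so calls don't degrade under other traffic.
    return available_kbps * 0.8 >= required_kbps(concurrent_calls, codec)
```

For example, ten simultaneous G.711 calls need roughly 870 Kbps, comfortably within a modest business broadband connection.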
With 5G networks, billions of devices and the IoT (internet of things) become interconnectable, enabling use cases like smart cities, AR/VR on mobile networks, remote medicine and much more. The potential is practically unlimited. On the other hand, IoT introduces new security risks due to the number of devices, the potential impact of an attack and the lack of appropriate security controls. As the IoT continues to expand, the number of threats will continue to increase. The IoT attack surface spans the entire IoT system, including the individual device profile, the scale of devices, network interfaces, the IoT application, the IoT platform and shared resources in the cloud. 5G offers several significant security enhancements compared to its predecessors, 4G and LTE. Let’s look at some of the most important 5G security enhancements:
- Network slicing allows different networks and services to share the same infrastructure while remaining isolated from each other. Each network slice is an isolated end-to-end network tailored to fulfil the diverse requirements of a particular application.
- 5G is more capable of protecting your identity. For the first time, your connection is shielded from the unauthorized devices that may capture phone calls by mimicking cell towers. With 5G, your subscriber ID is encrypted.
- With 5G, data and voice traffic within the 5G infrastructure are protected with a robust encryption algorithm, which means that even hackers with powerful computers won’t find it worthwhile to try to decrypt your information.
- Then there is edge computing, which is all about where data is processed. With traditional or cloud computing, data may have to travel to a server far away. With edge computing, data is processed much closer to the source, enabling improved threat detection.
With these 5G security enhancements, you’ll be able to provide mobile users with safe access to the internet, deliver secure access to applications, improve productivity and provide a consistent user experience.
What is Data Center Efficiency?

In a field that is always evolving and shifting, new data center metrics must be defined to evaluate data center quality. Data centers, much like cars or airplanes, need plenty of fancy-looking gauges and dials that spit out useful information: data center efficiency measures like power usage effectiveness (PUE), carbon usage effectiveness (CUE), water usage effectiveness (WUE), CUPS and flops—but we’ll get to these later. Looking forward, what will be the new data center metrics for efficiency? In the past, factors such as data center density, automation, and storage density were the basic guidelines. However, times are changing—measures attempting to value the rapid growth, concentration of computing power, and storage will no longer be sufficient to provide the whole picture. But what data center metrics do you actually need to measure when deciding where to colocate or purchase a dedicated server? Parameters beyond staffing, space, compute cycles, and storage utilization will now need to be included in the metric system. Energy prices have risen significantly over the years, and financial and environmental awareness has increased. As we aim to increase the efficiency of the IT and data center industry, other measures with a new emphasis on energy consumption and heat production have come to the forefront.

Heat Efficiency: Measured in cycles per BTU of heat produced, computational heat efficiency compares computation to its main waste product, heat. It overlaps somewhat with the measurement of energy efficiency, because the energy required to cool the data center is also included in the energy draw data. However, computational heat efficiency is a useful direct measurement of the environmental impact of the data center’s computations, especially in colder climates, where cooling can be a matter of circulating air rather than chilling it.
Despite cooling efforts, heat is still dumped into the environment, and this should not be a metric that is overlooked.

Data Center Metrics: Computational Power and Heat Efficiency

Computational Power: Measured in cycles per kilowatt-hour, computational data center power efficiency compares computing activity to the energy required to power it. To see a true picture of energy input, data center managers measure the actual electricity used, rather than making calculations based on component ratings. Additionally, they measure all inputs, including power and lighting. This is a great metric for determining data center power efficiency. Take a look at Google’s Data Center Efficiency Best Practices video for a bit more information.

Data Center Metrics: Storage Power and Heat Efficiency

Storage power efficiency is measured in terabytes per kilowatt-hour. Storage heat efficiency is measured in terabytes per BTU. Data centers must only count terabytes that contribute to the actual work accomplished. The most honest measurement of energy consumption must account for empty disk space, so the measurement should be terabytes used per kilowatt-hour consumed or per BTU thrown off. Unless an enterprise can separate storage from computation in isolated power and cooling domains, both storage and computational consumption/waste should be measured against the same energy inputs or heat outputs. This makes it more convenient to create a complete composite, simple or weighted, that reflects the shared inputs and outputs.

Network Efficiency: The network should not be neglected either, as data center networks draw a lot of power and thus create a lot of heat. There should be a metric of cumulative output on the data center network in order to get the complete profile. This is a simple yet essential metric.

Staffing: It is not just the equipment that generates heat; the workers generate heat as well.
In fact, the average data center staff member will generate over 350 BTU per hour. The more people working in the data center, the more heat they add to it and the lower its heat efficiency. Efficiency in staffing and human operations is essential, and will help reduce heat waste in the efficiency metrics.

Data Center Efficiency Factors

To maximize your business outcomes, consider these data center metrics and efficiency factors:
- Data Center's Age -- If the facility is older, it may not be able to capture the data and information needed to feed today's more advanced metrics without additional investment.
- Data Center's Tier -- The facility's tier standard rating will help you judge its metrics quite well. For example, a Tier II facility will have fewer features for capturing data than a Tier IV data center.
- Business Model -- How the data center is run, and who operates it, is a key factor in analyzing the data center as a whole. For example, if the owner offers the facility's services to resellers, then they are more likely to laud the data center's overall efficiency and Service Level Agreements (SLAs).
- Regulatory Compliance -- Some data center operators are subject to more regulations than others due to carbon reporting, size, power, etc.
- Data Center Costs -- Data centers need to have a solid "green" plan, or a plan to reduce the cost of running the facility.
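The metrics discussed above are simple ratios, so they are easy to compute once the inputs are measured. A sketch of PUE plus the article's cycles-per-kWh and terabytes-used-per-kWh measures; the function names are illustrative, and any sample figures would be invented rather than real facility data:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT energy.

    1.0 is the theoretical ideal; everything above 1.0 is cooling,
    lighting and other overhead.
    """
    return total_facility_kwh / it_equipment_kwh

def computational_power_efficiency(cycles: float, kwh: float) -> float:
    """Compute cycles delivered per kilowatt-hour of measured input."""
    return cycles / kwh

def storage_power_efficiency(terabytes_used: float, kwh: float) -> float:
    """Terabytes actually used (not raw capacity) per kilowatt-hour,
    per the article's 'honest measurement' recommendation."""
    return terabytes_used / kwh
```

A facility drawing 1,500 kWh in total while its IT equipment consumes 1,000 kWh has a PUE of 1.5, meaning half as much energy again is spent on overhead as on computing.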
In the war between machines and mankind, the machines have gained the upper hand. It’s bad enough that computers can now beat us at chess, Jeopardy, and Go. Artificial intelligence-driven algorithms are now tackling jobs once considered the exclusive province of living, breathing bipeds. That includes doctors, lawyers, teachers, and, yes, IT professionals. McKinsey estimates that roughly half of all work activities could be automated using today’s technology, and that up to 30 percent of global workers could be displaced by 2030. The jobs of millions more will be changed forever by AI. But automation will also create new roles and opportunities that did not exist before. Whether those new jobs will be sufficient to replace the ones made obsolete is an open question. Are you at risk of being replaced by an algorithm? And if so, what can you do about it? Here’s what you need to know about our glorious robotic future.

Take this job and code it

As in virtually every industry, the IT jobs that will be automated first involve repetitive and often manual work that doesn’t require a lot of human discretion, notes Keith Strier, global and Americas advisory leader for AI at consulting giant EY. “If you’re a sysadmin, or tier one tech support, or even in cybersecurity but your primary job is to look for certain signals and indicators, your jobs are up for grabs,” he says. According to researchers at Oxford University and the Kellogg School of Management at Northwestern University, database administrators have a 39 percent chance of having their jobs automated. If you’re an IT operations tech, that number rises to 78 percent. Those numbers will also vary depending on where you live, says Dr. Hyejin Youn, assistant professor of management and organizations at Kellogg. The smaller the city, the more likely your job will be taken over by a machine. “These job titles won’t become extinct,” explains Youn.
“There will still be a need for humans in the occupation, but the tasks will change, and the number of people doing it will be much smaller.” Because IT is continually asked to do more every year, usually without a huge boost in budget, Strier says IT departments are more likely to reassign employees to more advanced tasks and use automation to fill the gaps. “It’s less about letting people go, and more about reduced hiring,” he says. “Doubling capacity without doubling headcount seems to be an increasingly popular way of looking at the savings automation creates.” But what’s changing is the types of tasks that can be automated, notes Forrest Brazeal, senior cloud architect for Trek10, a cloud consultancy. In a widely shared essay titled “The Creeping IT Apocalypse,” Brazeal wrote about the quiet decimation of low- and mid-level IT jobs brought about by the growth of cloud services and AI. While the loss of jobs through automation has been a byproduct of technology advancement since the industrial revolution, Brazeal says this time is different. “This is a sea change,” he says in a phone interview. “Entire disciplines will be going away. There will be much less call for Windows sysadmins, DBAs, and network engineers. That’s what a lot of people are missing, and it’s what I mean when I talk about the ‘creeping apocalypse.'”

Alexa, write me an application

One of the higher-level jobs that AI will soon take on is writing code. In fact, the quest to automate programming is already well underway. In 2017, Google’s AutoML research project demonstrated that it could generate machine-learning software that’s sometimes more accurate than similar programs written by humans. It’s now available as a cloud-based service that allows developers with limited machine learning experience to train ML models. Last year, computer scientists at Rice University unveiled BAYOU, an AI application that uses “Neural Sketch Learning” to generate code.
After studying 100 million lines of Java on GitHub, the DARPA-funded tool is able to recognize high-level patterns in programs and recreate similar ones on demand. Enter a few keywords to tell BAYOU the kind of program you want to create, and it will spit out Java code to fit the bill. AWS’s AppSync and Amplify “low-code/no-code” development automation services are another example of this, says Brazeal. “The idea is to take most of the work out of creating a traditional back end, so you can spin it up and have it happen automatically with just a few lines of config,” he says. Once that’s in place, he adds, you’ve eliminated an entire class of software developers. And the writing for many of the rest is clearly on the wall. “We’re starting to see the beginning of ‘conversational programming’ — the ability to build services by saying, ‘Alexa, take these components and put them together to give me an application,'” he says. “We’re not there yet, but it’s something to keep an eye on.”

When executives talk about automation, they invariably say that relieving IT employees of boring and repetitive tasks frees them to take on more strategic roles and responsibilities. But when you ask what those new roles will look like, and how employees will make that transition, you tend to get a blank stare. The fact is, few organizations have even thought about it, says Strier. “The majority of these projects are being championed by a mid-level executive who’s been told to reduce overhead or improve customer service or some other goal,” he says. “They’ve not been empowered to worry about the future of their workforce. To them, that’s an HR issue. They truly believe automation will free up their workforce to do more important things, but no one does the work to figure out what that is.” Strier says he has one client, a large telecommunications firm, that has thought through the implications of how automation will change what its employees do.
That firm was forced to address these issues because it was heavily unionized. “That enabled them to say to the unions, ‘Look, we’ve identified job classifications that will be more deeply impacted over the next three years. We can give notice to those employees and offer them an opportunity to think about retraining,'” he says. “That’s better than getting a letter on Monday saying, ‘We’re shutting down your department.'” Implementing this kind of broad organizational change is not a trivial task, says Stanton Jones, director of research for ISG, a technology and research advisory firm. “The vision many enterprises have is that they’re going to take that 30 percent of people’s tasks and repurpose them for something more important,” says Jones. “But doing that for hundreds or thousands of people is really hard work. I’m not saying it can’t be done — a small number of companies are really rethinking how their organizations can be run — but it requires they put people ahead of their cost savings and productivity goals. That’s pretty rare.”

Augmented, not replaced

It’s an article of faith that as jobs continue to be made obsolete by automation, new roles will emerge to replace them. And so it goes with AI, which will both dramatically change today’s jobs and create new ones that do not yet exist, notes Erik Brown, a senior director in West Monroe Partners’ technology practice. “In a few years it will be hard to find a job that’s not augmented by AI,” he says. “Think about financial services and fraud detection, or risk exposure in investments. AI will be used by utility companies to predict how weather will impact energy demand, and by insurance companies to process claims.
And there will be a lot of jobs in education, teaching people how to use their business knowledge to train algorithms.” Likewise, Brown adds, network engineering jobs could evolve into roles that use AI to manage data centers more efficiently, as Google did with the DeepMind algorithms it developed to defeat a human champion at the Chinese game of Go. One of the biggest sources of new jobs will be embedding AI into hardware such as robots or autonomous vehicles, says Strier. “Integration of that software and hardware is very complex, and it won’t happen on its own,” he says. “So while you might have used AI to write some of the software, ultimately humans will need to do the integration, modeling, and testing of these complex hardware/software configurations.” Newly emerging roles such as ITops data scientists, AIOps architects, and automation path designers will be created as a result of AI-driven automation, says Will Cappelli, CTO for EMEA at Moogsoft. “The human side of IT will have to shift from observation to analysis, which is why professionals in these future positions will need skills involving mathematics and an end-to-end understanding of how modern IT systems behave,” he adds. At the executive level, companies riding the AI wave are looking to hire data-savvy executives who combine business skills and analytics expertise, says Scott Snyder, a partner with Heidrick & Struggles, an executive search and consulting firm. “We place a lot of leadership positions like chief AI officer and chief data officer,” says Snyder. “We’re always looking for people with data-intensive backgrounds who can graft those skills with institutional or functional knowledge, such as HR, legal, or supply chain.”

Brave new jobs

AI is also likely to generate a raft of other jobs that are just barely visible on the horizon, says Amber Bouchard, director of talent acquisition for Maven Wave, a digital transformation consulting firm.
One of those roles could be “citizen data scientist,” she says. Such an employee would analyze data and extract insights, but without the need for an advanced degree in statistics — like a business analyst on steroids. Another new role would be “neutral AI assurance expert,” a master coder who can detect potentially biased algorithms in complex machine-learning models. And once the impacts of AI start to become felt, companies may also want a chief ethics officer to oversee the moral implications of machine learning and AI in the workplace. “Organizations will need a person who can partner with HR, senior management, and C-level executives to oversee the implementations of these new technologies,” Bouchard says. “Someone will have to erect virtual walls and help organizations navigate the waters of technological advancement.” She adds that there will remain tens of millions of jobs that can’t be easily automated. “There are many jobs that are susceptible to automation,” Bouchard says. “There are just as many jobs, especially within firms like ours, that require the judgment, social skills and hard-to-automate human capabilities that AI cannot take away.” In the meantime, IT pros worried about being replaced by robots should think seriously about diversifying their skill sets and consider becoming full-stack engineers, says ISG’s Jones. “People who can manage everything from the web server through middleware, the operating system, and even down to the virtual machine layer are in huge demand,” says Jones. “Organizations cannot find those people fast enough. But if you’re stuck on a single set of technologies, that’s going to be problematic.” Jobs that consist of “undifferentiated heavy lifting” are always the first to go, adds Brazeal. The more generic the tasks you perform each day, the more likely you’ll be replaced by code. Developing expertise in areas that add bottom-line value to the company are the best ways to ensure job security. 
And while you won’t need to become an AI expert, deep familiarity with the available AI solutions will become increasingly necessary. “The skill sets the tech department will need to remain relevant won’t be building AI systems,” says Strier. “They’ll need to be experts on the different solutions in the field, the strategic use of this technology, and how to integrate third-party AI services into their operations.” In other words, he adds, tech pros won’t necessarily need to know how to build a facial recognition algorithm, but they will need to know how to pick the right one for their company. That’s something robots can’t do… yet.
The LDAP protocol can deal in quite a bit of sensitive data: Active Directory usernames, login attempts, failed-login notifications, and more. If attackers get ahold of that data in flight, they might be able to capture legitimate AD credentials and use them to poke around your network in search of valuable assets. Encrypting LDAP traffic in flight across the network can help prevent credential theft and other malicious activity, but it's not a failsafe—and if traffic is encrypted, your own team might miss the signs of an attempted attack in progress. For example, if an attacker is using brute force to try to gain access to a restricted database or storage area, that attack will leave network artifacts such as "failed login" messages, which are also transmitted across the network using the LDAP protocol. If you've encrypted LDAP traffic as a protective measure, you'll need decryption capabilities to detect those failed-login messages associated with sensitive assets. Advanced LDAP encryption is key to good cybersecurity, but so are smart implementations and the ability to decrypt traffic without compromising your other security controls. Scroll down for more answers to your LDAP questions, or learn how to safely implement TLS 1.3 with passive decryption here.

Frequently Asked Questions About LDAP:

1.) Is LDAP encrypted? Short answer: no. Longer answer: While LDAP encryption isn't standard, there is a nonstandard version of LDAP called Secure LDAP, also known as "LDAPS" or "LDAP over SSL" (SSL, or Secure Sockets Layer, being the now-deprecated ancestor of Transport Layer Security). LDAPS uses its own distinct network port to connect clients and servers. The default port for LDAP is port 389, but LDAPS uses port 636 and establishes TLS/SSL upon connecting with a client.

2.) Is LDAP authentication secure? LDAP authentication is not secure on its own.
A passive eavesdropper could learn your LDAP password by listening in on traffic in flight, so using SSL/TLS encryption is highly recommended. 3.) Is LDAP port 389 secure? Not exactly. The port itself is no more secure than unencrypted LDAP traffic, but you do have some alternatives to LDAPS for increasing your security: you could use the LDAPv3 TLS extension to secure your connection, utilize the StartTLS mode to transition to a TLS connection after connecting on port 389, or set up an authentication mechanism to establish signing and encryption. 4.) What is the difference between LDAP and Active Directory? Both LDAP and Active Directory are directory services, but although the Active Directory protocol builds on the LDAP protocol, AD is proprietary to Microsoft and requires a Microsoft Domain Controller to function.
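The port and transport options covered in these answers can be summarised in a small helper. The function itself is purely illustrative (its name and shape are invented), but the facts it encodes come straight from the FAQ: plain LDAP on port 389, LDAPS on port 636 with TLS from the start, and StartTLS upgrading a port-389 connection to an encrypted one.

```python
# LDAP connection modes and their defaults:
#  - "plain":    ldap:// on 389, no encryption
#  - "ldaps":    ldaps:// on 636, TLS established on connect
#  - "starttls": ldap:// on 389, then upgraded to TLS in-band
def ldap_endpoint(host: str, mode: str = "plain") -> dict:
    modes = {
        "plain":    {"scheme": "ldap",  "port": 389, "encrypted": False},
        "ldaps":    {"scheme": "ldaps", "port": 636, "encrypted": True},
        "starttls": {"scheme": "ldap",  "port": 389, "encrypted": True},
    }
    cfg = modes[mode]
    return {"url": f"{cfg['scheme']}://{host}:{cfg['port']}", **cfg}
```

Note that "starttls" reports the same port as "plain": the whole point of StartTLS is that the connection begins unencrypted on port 389 and is upgraded, which is why monitoring tools can't tell the two apart by port number alone.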
If you’ve ever used technology, the power button has had a pretty consistent appearance, and an even more consistent use. However, there’s a reason that the power symbol we’re so familiar with looks the way it does. Furthermore, there’s more that the power button can ultimately do.

What the “Power” Symbol Means

The symbol that appears on the power button looks somewhat unique. However, it makes more sense when you consider that it’s just what you get when you smoosh the “|” for on and the “O” for off into a single symbol.

How the Power Button Can Be Used

Hopefully, you’ve already learned that your power button should really only be used to power up your system, or—if no other options are available—to power off the device after all your work is saved and your programs are all closed out (again, only as a last resort). Whenever you can, it is better to use the shut down option nestled into the operating system. We take this so seriously because abusing the power button is just a convenient way to abuse the device itself. Improperly powering down your system this way can lead to file corruption and potentially give the device a hard time when you start it back up. Of course, with help from a technician, it is possible to remap your power button to do something different when it is pressed if you so choose.

Remapping Your Power Button

You have the capability to change your power button’s functionality, allowing you to set it to do something other than turn off your system when it is pressed—or, if you’re working with a laptop, when your lid is closed while it’s plugged in or running on stored battery power. In your Control Panel, under Hardware and Sound, find your Power Options and Choose what the power button does. Your options as to its function include:
- Do nothing
- Shut Down (when pressing the power button on a laptop)
- Turn off the display (when pressing the power button on a laptop)

Make sure you Save changes so that your settings are properly applied.
Interested in finding out more about your technology and how it can most benefit your business? Give CTN Solutions a call at (610) 828-5500 to find out more.
What makes a smart city smart?

Half the world lives in cities. By 2050 that figure will rise to 70 percent, boosted by the 2.2 billion more people who will live on the planet at mid-century. Cities generate 80 percent of the global gross domestic product and swallow 75 percent of natural resources, producing around 80 percent of global greenhouse gas emissions. And as most cities are situated on coastlines, they are at high risk from the impacts of climate change, such as rising sea levels and powerful coastal storms. Clearly, cities face many challenges. As cities grow, they will need to boost their economies to sustain their booming populations. They must expand and renew critical infrastructure – for water and wastewater, energy, heating or cooling and transportation – while reducing their use of natural resources and cutting emissions. They must compete with other cities, domestically and internationally, for investment and talent. And they must deliver essential services like firefighting, policing and public safety to ensure the wellbeing of their citizens. In response to these challenges, cities are setting themselves goals to improve their sustainability, quality of life and economic growth. Becoming smart and digitalized – especially in the use of energy, water, and essential services – is the key to achieving these goals.

Vision, strategy and goals

A smart city provides quality of life for its citizens. It makes itself resilient to risk by driving sustainable economic growth and by integrating its utilities and services into a unified system to improve efficiency, reduce operating costs and lower its carbon footprint. These utilities and services include power, water, wastewater, heating and cooling, as well as future e-mobility infrastructure for vehicles, automation for factories and buildings, and networks for ultra-high-speed broadband. Digitalizing these utilities and services separately, without a unifying vision, achieves limited results.
The best outcomes are attained when they are coordinated in a smart city vision, with a common strategy and with clear goals based on the input of multiple stakeholders, including citizens, businesses and service providers.

Closing the loop

By interconnecting utilities and services like electricity, water and district heating, cities can unleash new powers of optimization that reduce operating costs, energy use and pollution. We have solutions that coordinate the operations of municipal water/wastewater, district heating/cooling and power systems, enabling them to operate as a unified, closed-loop system. Here’s how it works. The many energy-hungry pumps in a water treatment plant and a district heating network do not need to run around the clock. Even though the plant and network operate non-stop, the pumps can be scheduled to run when electricity prices are lowest, without risk of shortfall. Typically, the combined heat and power plant that produces the heat treats its electric power production as a by-product, selling it without taking price shifts and market volatility into account. By optimizing production to meet market needs, it could maximize revenues by delivering power to the market at peak periods. Remember: the plant is powering the city’s water and district heating pumps at off-peak times and has excess power to sell when demand is highest.

Smart city solutions for Sweden

Earlier this year, ABB was selected by Swedish multi-utility Mälarenergi to develop smart city solutions for Västerås, Sweden’s fifth largest urban area. Mälarenergi operates hydropower plants, the local power grid, a waste-to-energy plant, heating and cooling networks, water and wastewater treatment plants, a water distribution network and a fiber-optic network for the city’s 150,000 residents and businesses.
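The off-peak pump scheduling described above (run the pumps in the cheapest hours while still meeting the daily pumping requirement) can be sketched in a few lines. The price curve and hour count below are invented for illustration; a real scheduler would also respect hydraulic constraints such as tank levels and minimum flows.

```python
def schedule_pumping(hourly_prices, hours_needed):
    """Pick the cheapest hours of the day to run the pumps.

    Returns the chosen hour indices, sorted. Assumes the pumps may run
    in any hours as long as the daily total is met (no shortfall risk,
    as the closed-loop coordination above is meant to guarantee).
    """
    ranked = sorted(range(len(hourly_prices)), key=lambda h: hourly_prices[h])
    return sorted(ranked[:hours_needed])

# Example: 6 pumping hours against a simple day-shaped price curve
# (EUR/MWh, hour 0 = midnight). Prices are invented.
prices = [20, 18, 15, 14, 14, 16, 25, 40, 55, 60, 58, 50,
          45, 42, 44, 48, 55, 65, 70, 60, 45, 35, 28, 22]
cheap_hours = schedule_pumping(prices, 6)
```

With this curve, all six pumping hours land overnight, exactly the off-peak behaviour the article describes: the utility buys (or diverts) power when it is cheapest and keeps its own generation free to sell at the evening peak.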
A key objective of the project is to integrate the control rooms of the many automation systems that manage the city’s utilities and services into one unified operating environment. Another is to reduce the city’s water losses in the distribution network by 20 percent and cut energy use by the district heating network.

Unified energy and water management

In Germany, one of the country’s most progressive smart cities is Trier. There, the local municipality Stadtwerke Trier supplies electricity, gas, drinking water and district heating. It also treats wastewater and is responsible for the public transportation system. We have already developed a smart energy management system for the city’s diverse range of generation sources – wind power, hydropower, solar photovoltaic, biomass, combined heat and power (both large-scale conventional and micro CHP) – as well as for battery storage, heat pumps, electric vehicle chargers and industrial loads. The solution optimizes production, balances it with consumption and is connected to weather and load forecasting tools. It has the scalability and flexibility to seamlessly integrate new generation units, storage devices, vehicle charging stations and other loads without disruption to operations. In a new project (Interreg VA EnergiewabenGR), we’re working with Trier to connect the city to three other municipalities in France, Belgium, and Luxembourg, each of which operates its own power pool of diverse types of generation and storage. The solution will enable the pools to compensate for fluctuations in renewable energy by exchanging power with each other and using storage capacity intelligently. This, in turn, will maximize their use of renewables and minimize their dependency on the national grids.
This is another instance of how the coordination of utilities and services – in line with the city's vision, strategy and targets – creates value for the municipality and its citizens.

This article was originally published here and was written by Sleman Saliba, who is responsible for the global market introduction of energy optimization solutions for virtual power plants, industrials and commercials, and smart cities. In this way, he brings together his passions: protecting our climate for future generations and making ABB the market leader in green energy.
The term robot encompasses too many concepts. Is an autonomous drone a robot? Is a Roomba vacuum? What about Tesla's self-driving cars? Getting into the matter: what exactly does the word robot mean?

What is the first thing that comes to mind when we think of robots? Perhaps a machine or an android, as in the Star Wars, Star Trek or Terminator movies. Although these robots capture our attention, for now they exist only in science fiction.

Robota: the origin of the word

The origin of the word "robot" dates back to 1920, when the Czech writer Karel Capek used the term for the first time in his play Rossum's Universal Robots (R.U.R.). In Czech, robota means servitude, forced labour or slavery. It could be said that robots have always been designed with the aim of serving man and easing his day-to-day life. As for the term robotics, Isaac Asimov introduced it in literature in 1942, in his short story Runaround, where he enunciated three rules of robot behaviour that later became the Three Laws of Robotics in works of science fiction.

But what does a robot really consist of?

The term robot has been stretched to cover broader meanings: electromechanical devices in human form, automated vending machines, software that acts as an adversary on gaming platforms, computer bots and more. There is considerable disagreement among experts, and no universal definition has been agreed upon; the only common point is that robots are intelligent physical entities that can perform actions in such a way that they interact with the environment. Following this criterion, a robot is considered a machine that must make intelligent decisions with logical sense in the real world. This definition leaves out quite a number of devices mistakenly classified as robots. Intelligence, then, is the key factor that makes the difference between simple electronic devices and robots.

What are robots for and where can they be used?
The future of production

The first robots appeared in the early 1950s, despite their high cost. R.C. Goertz developed a programmable manipulator for handling radioactive elements. These early machines were intended to perform tasks that were repetitive, dangerous or toxic for human operators. From 1970, the space industry added its weight to the development of electronic technology and servo control. Today, advances in electronics and artificial intelligence allow the development of much more precise, fast and autonomous robots. Industrial, welding and surgical specialists compete in ingenuity to develop robots that assist them with delicate and complicated tasks. At the same time, intelligent machines are being developed for domestic use, taking over the most tedious tasks that eat into our productive time.

Robotics has many fields of application. Robots have been installed in industry, performing repetitive tasks with constant high precision. For years they were considered useful only in the industrial sector, but recently service robots have been adopted in scientific research, education and social welfare. Thanks to the evolution of technology, robots are now found in avant-garde sectors such as space exploration, medicine and the military.

How autonomous must a device be to be considered a robot?

There are many levels of autonomy. As machine learning algorithms become more sophisticated, robots equipped with this artificial intelligence will respond to the environment in more complex and unpredictable ways. Robots pick up external signals, process them, react and learn automatically. In this sense, their intelligence is "humanized". These robots do come close to the concept we know from science fiction movies: a robot capable of reacting to the unpredictable and learning. The revolutionary development in the use of robots as practical devices indicates that they will be a key factor in the future.
The development of AI allows intelligent and automated problem solving, interpreting information from sensors and significantly benefiting society. Welcome to the future.
Zero Water Consumption Cooling

For many years, data centers have used various water-consuming cooling methods, such as water towers and evaporative cooling, in order to reduce electricity costs. With the common industry focus on Power Usage Effectiveness (PUE), shifting the cooling burden from electricity to water meant that a facility could achieve a low PUE because the water was invisible to the calculation. While this does achieve the objective of reducing energy impacts, the concentration of many servers into a single hyperscale data center also concentrates the water consumption into one watershed. Many of these centers are located in areas where water is scarce, or will be in the future.

At CyrusOne, all of our newly built data centers are designed with zero water consumption cooling. This means no water towers, no evaporative cooling, and very low water use. While small amounts of water are still used for humidification, facility maintenance, and domestic water, this is minor compared to facilities that use water for cooling.

Water Usage Effectiveness (WUE) – the ratio of water used at the data center to the electricity delivered to the IT hardware – is a common measurement of how efficiently a data center uses water. While some companies strive to get a WUE below 1.3, CyrusOne operates ten data centers with a WUE of 1.1 or less.

Download the CyrusOne Waterless Cooling Whitepaper
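WUE as defined above is a straightforward ratio; a quick sketch, with annual figures invented purely for illustration:

```python
def wue(annual_water_liters, annual_it_energy_kwh):
    """Water Usage Effectiveness: liters of site water per kWh of IT energy."""
    return annual_water_liters / annual_it_energy_kwh

# Hypothetical facility: 11 million liters of water in a year against
# 10 million kWh delivered to the IT hardware (both figures invented).
facility_wue = wue(11_000_000, 10_000_000)  # → 1.1
```

A facility with waterless cooling drives the numerator toward the small residual uses (humidification, maintenance, domestic water), which is how sub-1.3 values are reached.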
(This is the first part in a series of articles that accompany my Security Summit presentation, HTML5 Unbound: A Security & Privacy Drama.)

The Meaning & Mythology of HTML5

HTML5 is the most comprehensive update in the last 12 years to a technology that's basically twenty years old. It's easy to understand the excitement over HTML5 by looking at the scope and breadth of the standard and its related APIs. It's easy to understand the significance of HTML5 by looking at how many sites and browsers implement something that's officially still in draft. It's also easy to misunderstand what HTML5 means for security. Is it really a whole new world of cross-site scripting? SQL injection in the browser? DoS attacks with Web Workers and WebSockets? Is there something inherent to its design that solves these problems? Or worse, does it introduce new ones?

We arrive at some answers by looking at the history of security design on the web. Other answers require reviewing what HTML5 actually encompasses and the threats we expect it to face. If we forget to consider how threats have evolved over the years, then we risk giving a thumbs up to a design that merely works against hackers' rote attacks rather than their innovation.

There's a mythology building around HTML5 as well. Some of these myths are innocuous. The web continues to be an integral part of social interaction, business, and commerce because browsers are able to perform with desktop-like behaviors regardless of what your desktop is. So it's easy to dismiss labels like "social" and "cloud" as imprecise, but mostly harmless. Some mythologies are clearly off the mark: neither Flash nor Silverlight is HTML5, but their UI capabilities are easily mistaken for the kind of dynamic interaction associated with HTML5 apps. In truth, HTML5 intends to replace the need for plugins altogether. Then there are counter-productive mythologies that creep into HTML5 security discussions.
The mechanics of CSRF and clickjacking are inherent to the design of HTML and HTTP. In 1998, according to Netcraft, there were barely two million sites on the web; today Netcraft counts close to 700 million. It took years for vulns like CSRF and clickjacking to be recognized, described, and popularized before their dangers were appreciated. Hacking a few hundred million users with CSRF has vastly different rewards than a few hundred thousand, and consequently more appeal. If CSRF is to be conflated with HTML5, it's because the spec acknowledges security concerns more explicitly than its ancestors ever did. HTML5 mentions security over eighty times in its current draft. HTML4 barely broke a dozen. Privacy? It showed up once in the HTML4 spec. (HTML5 gives privacy a little more attention.) We'll address that failing in a bit.

So, our stage is set. Our players are design and implementation. Our conflict, security and privacy.
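For context on the CSRF mechanics mentioned above: because a cross-site form can silently submit a victim's cookies, the standard defense that grew out of this design flaw is a per-session anti-forgery token that the attacking site cannot guess. A framework-agnostic sketch (the function names are mine, not from any particular library):

```python
import hmac
import hashlib
import secrets

def issue_token(server_secret: bytes, session_id: str) -> str:
    """Derive a per-session anti-CSRF token from a server-side secret."""
    return hmac.new(server_secret, session_id.encode(), hashlib.sha256).hexdigest()

def verify_token(server_secret: bytes, session_id: str, submitted: str) -> bool:
    """Recompute the expected token and compare in constant time."""
    expected = issue_token(server_secret, session_id)
    return hmac.compare_digest(expected, submitted)

secret = secrets.token_bytes(32)
token = issue_token(secret, "session-42")
```

The server embeds the token in its own forms and rejects any POST without it; a forged cross-site request carries the cookies but not the token, so it fails the check.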
Cyberattacks understandably cause dread in everyone affected by them. Dealing with the issue can ramp up costs in more ways than one. Besides the labour needed to fix the problem and restore systems, targeted parties may lose business and suffer reputational damage. As today's society has become more dependent on the internet, hackers have more opportunities to wreak extensive havoc. Here are some specific reasons why cybercrime costs continue rising.

A Sophos report revealed that more financially intensive ransomware recoveries are one factor behind costlier cybercrime. The data showed that average remediation costs more than doubled between 2020 and 2021, now averaging US$1.85 million. A related factor was the slight increase in the percentage of organisations paying the ransom, reaching 32% this year. However, only 8% of affected entities got all their data back with that approach. Moreover, the study showed that US$10,000 was the most common ransom paid. It could be a complete waste, however, if the hackers don't unlock the seized data after receiving payment.

The attention paid to the costs associated with cybercrime is similar to the interest in the financial ramifications of equipment downtime in a manufacturing plant. A hypothetical scenario in such a setting could cost a company 1.4% of its annual production capacity if a transformer failure leads to a 120-hour outage.

A key thing to remember about cyberattacks is that their effects do not always remain inside a company's boundaries. For example, a report estimating the average per-enterprise costs of companies affected by the SolarWinds cyberattack showed they totalled 11% of an organisation's annual revenue. Elsewhere, a report from IBM put the average cyberattack cost at US$4.24 million, the highest figure in the study's 17-year history.
That figure shows that company leaders must assume addressing an incident will cause a substantial financial loss, no matter how far the ramifications reach.

Hospitals have long been popular targets for cybercriminals, and such incidents have continued to affect health care organisations during the COVID-19 pandemic. For example, INTERPOL revealed seven types of significant cyberattacks appearing more frequently during the global health crisis. One incident affecting a Vermont hospital cost the organisation US$1.5 million per day in lost revenue and increased expenses, according to Stephen Leffler, the organisation's President. That estimated figure did not include the costs associated with getting systems operational again.

"If you told me more than a month [after the attack] we still would have functions that weren't normal, I would have bet you that you'd be wrong. We really did not anticipate the scope or the impact the attack had on our system and how far-reaching it was," Leffler admitted.

When a cyberattack drastically affects hospital operations, it could cause people not to get the treatment they need soon enough or force certain procedures to be cancelled. It could even lead to lawsuits from patients or their loved ones affected by subpar care.

The figures mentioned here are likely startling to most, but people should still keep in mind that even the most dedicated research efforts can't always accurately estimate cybercrime costs. That's largely because of the unknown factors at play. For example, how many customers were thinking about contacting a company about doing business with it but changed their minds after that organisation suffered a cyberattack? In how many cases did a cyberattack at a hospital directly affect a patient's complications or death? It's not easy to figure out those aspects. But it's clear that cyberattacks represent a significant financial burden to everyone affected by them. That's true now, and it's not likely to change anytime soon.
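The figures cited in this article can be combined into a rough back-of-the-envelope model of incident cost. This is a simplification: the seven-day outage below is an invented input, while the other numbers echo the reports quoted above.

```python
def incident_cost(downtime_days, daily_loss, remediation, ransom=0):
    """Rough total cost of a ransomware incident: revenue lost while down,
    plus remediation effort, plus any ransom actually paid."""
    return downtime_days * daily_loss + remediation + ransom

# ~US$1.5M/day of losses (the Vermont hospital figure), the US$1.85M
# average remediation from the Sophos report, the most common US$10k ransom,
# and an assumed week of downtime.
total = incident_cost(downtime_days=7, daily_loss=1_500_000,
                      remediation=1_850_000, ransom=10_000)
```

Even in this crude model the ransom itself is a rounding error next to downtime and remediation, which is one reason paying it so rarely pays off.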
Devin Partida is a technology writer and the Editor-in-Chief of the digital magazine, ReHack.com. To read more from Devin, check out the site.
Integration in the semiconductor industry

The semiconductor industry began when a trio of Bell Labs / AT&T researchers first successfully demonstrated the capabilities of a transistor in 1947. Their findings were published the following year, and they would eventually go on to win the Nobel Prize. Between 1950 and 1980, semiconductor companies became more vertically integrated. Companies like Texas Instruments, Fairchild, and Motorola designed, fabricated, and packaged their semiconductor chips for consumption largely by systems companies. By 1970, the industry faced its first wave of deconsolidation, as new entrants like National Semiconductor, Intel, and AMD used new microprocessor technologies to steal market share from dominant industry players by targeting new applications like minicomputers, microcomputers and, eventually, PCs.

Strategic options for semiconductor companies

The rapid transformation of end markets has threatened to disrupt every semiconductor company. Semiconductor companies must now get creative to maintain their growth trajectory or risk being commoditized by their customers. They have three competitive plays to capture value as more businesses bring their hardware development in-house.

Major trends driving vertical integration

Four trends have heightened the demand for system integration and have shifted the balance of power in favor of delivering targeted end-customer solutions:
Malware – Worm

In the last blog post we discussed a specific type of malware called a virus. This week we will discuss another type of malware called a worm. A worm has two distinguishing characteristics: it can replicate itself without human interaction, and it does not need to attach itself to (i.e., infect) a file to do damage. Worms usually exploit some sort of software vulnerability, such as a flaw in the operating system, and use that as the vector to spread very rapidly. The name "worm" was chosen deliberately because a worm is most often designed to spread, or wiggle its way, through an entire network, such as that of a government agency or company. Therefore, worms are often used to penetrate a high-value target and either destroy it (make the computer systems crash) or steal sensitive data.

Although worms are often used to infiltrate high-value targets, another popular use is to quickly infect large numbers of computers to form a botnet controlled by a central authority. This army of bots can then be used on demand to perform various nefarious acts, such as flooding a specific website with a large volume of traffic in order to bring it down (denial of service). Other uses for a botnet include sending spam email or stealing data such as passwords. The key is that the botnet is large and distributed, with new bots coming and going over time.
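Because each infected machine becomes a new source of infection, a worm's spread is roughly exponential until the pool of vulnerable hosts runs out. A toy discrete-time simulation makes the point (all parameters invented):

```python
def simulate_worm(vulnerable_hosts, spread_rate, steps):
    """Each time step, every infected host compromises (on average)
    `spread_rate` new victims from the remaining vulnerable population."""
    infected = 1
    history = [infected]
    for _ in range(steps):
        newly_hit = min(infected * spread_rate, vulnerable_hosts - infected)
        infected += newly_hit
        history.append(infected)
    return history

# 10,000 vulnerable hosts; each infected host finds 2 new victims per step.
growth = simulate_worm(10_000, 2, 10)
```

In this toy run the entire vulnerable population is infected within about nine steps, which is why real worm outbreaks are measured in minutes rather than days.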
Most cyber security professionals take for granted the information technology or IT nature of their work. That is, when designing cyber protections for some target infrastructure, it is generally presumed that protections are required for software running on computers and networks. The question of whether some system is digital or even computerized would seem to have been last relevant to ask in 1970. We all presume that everything is software on CPUs. The problem is that not everything is software that CPUs control. Cars include mechanical parts that can get only so hot; airplanes have wings that can bend only so far; factories include assembly lines that can go only so fast; and power plants include fluid piping that can only handle so much. These tangible entities consist of solids, liquids, and gases, rather than 1’s and 0’s, so their management requires a different type of component called an industrial control system or ICS. The supporting ecosystem that enables industrial control is referred to collectively as operational technology or OT, and this introduces a new set of cyber security concerns. OT protection is particularly intense, because the physical consequences of compromise may be completely unacceptable, and because many of the security mechanisms that are second nature on IT networks can in fact impair physical operations as badly as a cyberattack. This leads to both puzzles and headaches for cyber security engineers. Cyber security engineers have thus begun the journey of trying to determine how to apply the best elements of IT security, learned through practical experience over the past three decades, to the OT management and monitoring of ICS. In many cases, IT insights are directly applicable to OT/ICS security; but situations do emerge where the nature of industrial control infrastructure introduces novel malicious threats that require innovative new cyber solutions. 
Safety and security in IoT

The intimate relationship between security and safety concerns in OT environments cannot be overstated. Recall, in contrast, that IT security experts will reference the traditional confidentiality, integrity, and availability (CIA) model of threats. The goal of IT security thus becomes putting functional or procedural controls in place that will cost-effectively reduce the CIA-type risks to data assets.

OT experts have a different set of objectives in mind. Obviously, they must deal with the goal of preventing information leaks, malware infections, and availability attacks; but their primary mission emphasis is on safety. That is, to an OT security professional, the most critical objectives involve assurance of safe, sound operation of OT infrastructure in a manner that avoids human casualties and lost production for large, costly physical assets.

The emphasis on safety concerns tends to influence OT technology protection in ways that might differ from traditional IT. A commonly cited example is change management, which is important for assuring the application of security updates. An IT security team will often prioritize rapid deployment of such updates over all else, where an OT engineer might be more concerned with the risks that software changes pose to worker safety and to uninterrupted physical operations.

Purdue model of OT/ICS

To explore options for how OT/ICS infrastructure might include proper mitigation of cyber risk, it helps to use a common model of OT – and the most popular choice is the Purdue Enterprise Reference Architecture, established over two decades ago. The hierarchical model specifically includes four layers of networks to support decision-making and control for industrial applications in the context of both OT and IT monitoring and support.
Before reviewing the model, some brief encouragement for traditional IT security experts trying to navigate OT/ICS: while the terminology of manufacturing control, plant management, and industrial operations might look different and daunting, you should have little trouble extrapolating your own understanding of how an enterprise runs to these newer concepts. Don't get hung up on aspects of the model you might find confusing. Just move on.

The Purdue Enterprise Reference Architecture defines six levels:

- Level 0 includes the physical processes for the industrial application.
- Level 1 includes the basic instrumentation that controls physical-layer systems.
- Level 2 includes supervisory control and data acquisition (SCADA) functions and human interfaces.
- Level 3 includes support for site manufacturing and industrial operations.
- Level 4 supports business planning, logistics, and other management considerations.
- Level 5 involves enterprise IT and network systems.

As an overlay to these six ICS functional levels, four zones of operation are identified in the model: Levels 4 and 5 are referred to collectively as the enterprise zone; Level 3 is referred to as the manufacturing zone; Levels 2, 1, and 0 are referred to collectively as the cell/area zone; and a fourth safety zone is defined that includes air-gapped systems that monitor and manage physical Level 0 systems. None of these levels or zones are hard and fast; they are a guide.

It is worth emphasizing that the role of safety in the context of OT/ICS infrastructure cannot be overstated. Traditional safety procedures and mechanisms have become an essential component of the emerging cyber security programs. For example, if an administrator notices evidence of malware in critical control systems, then procedures for safety-focused emergency shutdowns are not only practiced, but might even be required by local laws.

Physical and perimeter security for OT/ICS

Unique security challenges emerge at each layer in the Purdue model.
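As a reference point for the layer-by-layer discussion, the level-to-zone overlay described above is simple enough to encode directly, which is how asset-inventory and monitoring tools often tag devices. A minimal sketch (my own encoding of the model; the air-gapped safety zone sits alongside Level 0 and is not modeled here):

```python
# Purdue levels mapped to the model's operational zones.
PURDUE_ZONES = {
    5: "enterprise",     # enterprise IT and network systems
    4: "enterprise",     # business planning and logistics
    3: "manufacturing",  # site manufacturing and industrial operations
    2: "cell/area",      # SCADA functions and human interfaces
    1: "cell/area",      # basic instrumentation
    0: "cell/area",      # physical processes
}

def zone_of(level: int) -> str:
    """Return the operational zone for a Purdue level."""
    return PURDUE_ZONES[level]
```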
First, it is obvious that any physical devices or systems must be locally protected against on-site physical tampering or hands-on sabotage by compromised staff. Motivation for such attacks can range from nation-state guidance to employee disgruntlement. While hands-on attacks do not cascade and cannot be done remotely, this does not make them any less dangerous when they do occur. As a result, ICS infrastructure generally includes mature, well-developed facility controls. Personnel are carefully vetted and authenticated before being given access to equipment and systems. Buildings, factory floors, equipment rooms, and physical plants are typically accessible only to badge-carrying personnel, and well policed by on-site security guards with the authority to act if necessary. For these reasons, most people see physical controls as essential to overall ICS security programs.

The challenge is that with the introduction of automated control and management, ICS security inherits the vulnerability challenges of remotely accessible software. Specifically, potential security exploits emerge across the so-called OT/IT interface that exists just beneath the highest layers in the Purdue model. It is this interface that connects traditional hackers with computers on IP networks to the OT-based devices in an ICS ecosystem. For this reason, most implementations of the Purdue model now include a separation function, expressed as a demilitarized zone (DMZ) or perimeter network, at this OT/IT interface. This separation includes firewall, intrusion detection, filtering, and other traditional network security functions. The implementation is usually generic, using addresses, ports, and protocols, but the control at least offers some opportunity to separate functions and enforce policy.
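The generic filtering just described (addresses, ports, and protocols) amounts to a default-deny allowlist check at the OT/IT boundary. A minimal sketch; the rules and address ranges below are invented for illustration:

```python
import ipaddress

# Flows the OT/IT perimeter will pass; everything else is dropped.
# Each rule is (source network, destination port, protocol).
ALLOW_RULES = [
    ("10.20.0.0/16", 443, "tcp"),   # historian replication up to IT
    ("10.20.0.0/16", 123, "udp"),   # NTP time synchronization
]

def permitted(src_ip: str, dst_port: int, proto: str) -> bool:
    """Default deny: return True only if some rule covers this flow."""
    src = ipaddress.ip_address(src_ip)
    for net, port, p in ALLOW_RULES:
        if src in ipaddress.ip_network(net) and dst_port == port and proto == p:
            return True
    return False
```

Note that nothing in such a rule inspects what the traffic actually does, which is precisely the weakness discussed next.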
Purdue Enterprise Reference Architecture with DMZ

The challenge with this perimeter-based security zone – as you would expect – is that IT security experts have already determined that software-based perimeters don't work. Sadly, this conclusion extends to OT/ICS environments as well. Service exceptions, compromised insiders, and unavoidable traffic entry and exit make perimeter firewalls look more like network cross-connects than traffic cops. This is no longer a controversial claim; everyone agrees.

Advanced, modern cyber security for OT/ICS

The challenge for modern cyber security engineers working in the OT/ICS area involves modernizing the weak or missing protection controls in existing infrastructure toward more advanced and effective solutions that will stop malicious actors. The good news is that many of these controls can be extended from mature IT security; but in the lower layers of the Purdue model, some new situations emerge that require new types of cyber risk management.

An important consideration in practical OT/ICS contexts is the belief held by many industrial experts that traditional IT security – including patching, anti-virus software, and password management – is simply inadequate for the serious consequences associated with industrial systems. This is a useful counterweight to the often-cited shortcomings of OT/ICS staff in modern cyber security expertise and training.

It helps to first partition OT/ICS into two categories – namely, (1) OT infrastructure consisting of non-traditional computing components such as analog signaling and electromechanical operation, and (2) IT infrastructure consisting of traditional computing components such as application software, physical and virtual servers, and packet networks running the TCP/IP protocol suite. OT/ICS threats exist within each of these domains, or across their boundary.
To ensure protection of these domains and the OT/IT interface, three basic security objectives provide optimal design guidance:

- Strong Entity Authentication – This involves strong validation of the reported identities of OT devices in IoT or ICS settings. No security architecture can possibly work without such assurance, and for IT-exposed systems, multi-factor usage is becoming more the norm than the exception.
- Domain Separation – This involves the creation of strongly separated architecture domains that can enforce desired policies. Unidirectional gateways are emerging as a useful technique to ensure provable separation between domains.
- Activity Monitoring – This involves gathering information about observable activity for threat analysis, compliance monitoring, and report generation. Nearly all compliance frameworks demand activity monitoring functionality, and this includes OT/ICS.

Achieving these basic security objectives within OT is by far the greater challenge, simply because any change in OT must be analyzed and tested extensively, while IT security best practices evolve at a rapid pace to stay ahead of attackers. Two important caveats are worth mentioning with respect to these security objectives:

First, in the presence of strong entity authentication, administrators might need workarounds to deal with emergency situations that require immediate, unimpeded access to safety systems that can save lives. OT/ICS security design must therefore account for this important consideration, if only because of the unique role that safe, assured operation plays in industrial systems.

Second, it should be recognized that domain separation – and perimeters in particular – plays a much more vital role in OT/ICS security design than in enterprise IT infrastructure. This follows from the well-defined input and output command and traffic requirements of an OT/ICS domain.
Unlike enterprise IT systems, these industrial requirements are more tractably supported by perimeter controls.

The prospects for achieving the three basic security objectives are much more promising within and across the IT/OT interface. Subsequent articles in this series will explore specifically how modern cyber security controls can be embedded in this aspect of the OT/ICS model to reduce cyber risk. Highlights of the next four articles in the series are listed below:

- Article Two offers an insight into how hackers have had success to date breaking into operational systems
- Article Three outlines the SCADA vulnerabilities associated with typical industrial control system architectures
- Article Four covers how innovations such as unidirectional gateways can be used to separate industrial networks from Internet-exposed IT networks
- Article Five provides a glimpse into the future of OT and SCADA systems in critical infrastructure.

The insights offered in these articles are intended to provide guidance both for traditional IT security experts and for OT engineers who might be new to cyber protection solutions. The optimal staffing arrangement in any OT/ICS environment would combine the OT experience and expertise of the engineers with the cyber security insights of the traditional enterprise IT security expert. These articles are intended to help both types of expert.

Click here to download the complete series of 5 articles.

Contributing author: Andrew Ginter, Vice President of Industrial Security at Waterfall Security.
Let's face it: storage is dumb today. Mostly it is a dumping ground for data. As we produce more and more data, we simply buy more and more storage and fill it up. We don't know who is using what storage at a given point in time, which applications are hogging storage or have gone rogue, what and how much sensitive information is stored, moved or accessed, and by whom, and so on. Basically, we are blind to whatever is happening inside that storage array. Am I exaggerating? Of course I am, but only to a degree.

Can we extract information from the storage array today? Yes, we can. But one has to use a myriad of tools from a variety of vendors and do a lot of heavy lifting to get some meaningful information out of storage. The information is buried deep inside, and some external application has to work hard to expose it. This activity is generally so cumbersome that most users simply don't attempt it unless it is required by law. In such cases (compliance or governance, for instance), external software is used to pull relevant information at great expense of money and time. Of course, over the past decade, technologies such as auto-tiering have helped in moving less active data to lower-cost storage, and one may even find software that automatically deletes files when their retention period has expired. But these are all one-off solutions, and the basic premise still stands: storage today is basically dumb.

What if storage were aware of the data it stored? What if all data were catalogued upon creation, indexed and analyzed? What if analytics were built in and real-time? What if storage were aware of all activity taking place inside? What if data protection were an inherent part of storage, with no need for media servers, tapes and separate disk systems? What if search and discovery were an integral part of the array? Wouldn't smart storage like this be a paradigm shift? Wouldn't it fundamentally change how we manage, protect and use storage? Of course it would.
Welcome to the new era of data-aware storage.

The Need For Data-Aware Storage

This advance could not have come at a better time. Storage growth, as we all know, is out of control. Granted, the cost per gigabyte keeps falling at about 40 percent per year, but we keep growing capacity at about a 60 percent rate, so both total spend and total capacity keep increasing every year. While the cost increase is certainly an issue, the bigger issue is manageability. And not knowing what, if anything, we have buried in those mounds of data is an even bigger issue. Instead of being an asset, data is a dead weight that keeps getting heavier. If we don’t do something about it, we will simply be overwhelmed, if we are not already.

Why is it possible to develop data-aware storage today when we couldn’t yesterday? Flash technology, virtualization and the availability of “free” CPU cycles make it possible to build storage that can do a lot of heavy lifting from the inside. While this was possible in the past, it would have slowed the performance of primary storage to the point of uselessness. But today we can build in a lot of intelligence without impacting performance or quality of service. We call this new type of storage data-aware storage.

When implemented correctly, data-aware storage can provide insights that were not possible yesterday. It can reduce the risk of non-compliance and improve governance. It can automate many of the storage management processes that are manual today. It can provide insights into how well the storage is being utilized. It can identify when a dangerous situation is about to occur, whether related to compliance, capacity, performance or SLAs. In this article we will define the attributes of data-aware storage, examine the business benefits of deploying these systems and provide an industry landscape of up-and-coming storage companies that are introducing these pioneering products.
Data-Aware Storage Defined

All storage systems are getting smarter with each new generation, but to be categorized as data-aware storage, Taneja Group believes they must meet most, if not all, of the criteria described below:
- Increased Awareness: The storage understands more about the content or attributes of the data stored on the device than traditional storage devices do. Examples include enhanced metadata about quality of service, file attributes and application-aware metrics, as well as actually scanning the data in real time looking for contextual patterns or keywords for security and regulatory compliance.
- Real-Time Analytics: It is not enough for these storage systems to gather enhanced metadata without making it useful in real time. Therefore these systems must provide instantaneous updates of the enhanced analytics so that administrators or policy engines can react before issues become critical. An example would be the detection and suppression of a rogue application before it can sap IOPS from a more important application. Another example would be understanding who is accessing which files and their relationship to others accessing the same files; this would help a business understand which types of data are more important, and to which groups of people.
- Advanced Data Services: In addition, the storage system should offer data services that enable better business outcomes based on the increased awareness. Examples would be archiving functions for dormant data, bursting an application to the cloud once a threshold has been met, or balancing QoS across different application workloads. Other examples could include triggering compliance workflows or alerts, or even built-in intelligent data protection.
- Open and Accessible APIs: In order for this new category of storage to flourish, all the capabilities of these new systems must be open and available, enabling a rich ecosystem of integrated applications and tools to come alongside and complement the data-aware storage. There are far too many vertical application requirements that could take advantage of unique data-aware features for any one company to provide it all. Over time, de facto industry-standard APIs will emerge for the most popular enhanced capabilities, similar to how the Amazon S3 data protocol became a standard.
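To make the "increased awareness" criterion concrete, here is a minimal sketch of how an array might attach enhanced metadata to an object at write time by scanning the payload for sensitive patterns. This is not taken from any real product; the pattern names, the rules, and the metadata fields are all illustrative assumptions.

```python
import re

# Hypothetical patterns a data-aware array might scan for at ingest time.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[- ]?){3}\d{4}\b"),
}

def tag_on_write(object_name, payload):
    """Return enhanced metadata for an object as it is written."""
    hits = sorted(name for name, pattern in SENSITIVE_PATTERNS.items()
                  if pattern.search(payload))
    return {
        "object": object_name,
        "size_bytes": len(payload.encode("utf-8")),
        "sensitive": hits,           # could drive compliance workflows/alerts
        "needs_review": bool(hits),  # could trigger an advanced data service
    }

meta = tag_on_write("hr/record.txt", "Employee SSN: 123-45-6789")
```

In a real data-aware system this kind of tagging would happen inline, so the metadata is available to real-time analytics and policy engines rather than requiring an after-the-fact crawl by external software.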
The future of predictability

The ancient Egyptians knew that when Sirius disappeared for a few weeks and then became visible before dawn, the banks of the Nile would overflow a few weeks later. For three thousand years, this prediction held. Except it wasn’t really a prediction. The Egyptians didn’t have an idea of the future that enabled them to make predictions in the modern sense. For example, when we talk about banks and note that they’re closed on Sundays, we’re not really making a prediction. We’re stating a fact of modern life in the United States. If we put on Carnac’s turban—mimicking the old Johnny Carson fortune-teller character—and were to say in a grave and mysterious voice, “I predict the banks will be closed on Sunday,” it would be either a joke or a misunderstanding of what it means to make a prediction. The same would be true if we were to “predict” that an apple will fall when dropped, that day will be followed by night, or that two plus two will turn out to equal four.

To make a prediction, the future has to be unpredictable. The ancient Egyptian future was cyclical, which is the opposite of unpredictable. But having an unpredictable future is not enough to enable the concept and practice of prediction to arise. For the ancient Egyptians, that which was not cyclical, such as the day the pharaoh will die, was too unknowable to be predictable. The ancient Hebrews, on the other hand, had a non-cyclical future. They could rely on it because it came straight from the mouth of God, but it came as a promise that someday they would return to their land and the world would be redeemed. How, and even if, they got there was up to them. That’s why the words of their prophets are generally too conditional to sound like predictions: If our people continue in these wrong-headed ways, then we will face deprivation and punishment, but if we follow the word of God, then we will be blessed.
The ancient Greeks, on the other hand, believed in a future that enabled predictions of a certain sort. When they looked up, they saw the same wheeling stars that the Egyptians did and believed in their regularity just as firmly. But at eye level, below the heavens, there was no telling what would happen. The Greek framework for making sense of this didn’t entirely cohere. The Fates determined your lifespan, as well as some of the broad-brush themes, such as whether your marriage was going to be happy. The gods could not undo the Fates’ decrees, but they could intercede in other ways relevant to your life. Then there were the daimons, who intervened in individual lives in unpredictable ways. So, quite a mix of super-human forces determined the turning points in your life, including just plain bad luck that might, for example, have you captured by an enemy army and turned into a slave. The preordination of events and the Greeks’ awareness that they were not fully in control of their future created a space for predictions … predictions famously delivered by the Oracle in pronouncements that typically could not be understood until the events they predicted had come to pass. Just ask King Oedipus, who knew the Oracle had said he would kill his father and marry his mother, and yet could not escape that fate. Even so, we probably wouldn’t say that the Oracle was making predictions so much as pronouncements, for there was no way for the Oracle’s statements not to become true.

For predictions in the modern sense we need a future that is determinate yet not fully knowable. We need a future that we can be right about, but that we can also be wrong about.

Knowable but not too knowable

That’s the idea of the future that we have grown up with. Predictions in our culture are always probabilistic, even if we don’t explicitly state the probability.
“The Democrats aren’t going to win any new Congressional seats in 2018” is a prediction because it’s understood that no matter how confidently I pronounce it, I recognize that the future is uncertain. For us to have the form of speech we call “predictions” we need a future that can be known only probabilistically. But that’s not enough for predictions. We also have to be able to say why we think the future will turn out this way instead of that. If you ask me why I’m so pessimistic about the Democrats’ chances and I say, “I dunno. I just think that,” then it’s such a weak form of prediction that it’s really just a guess, like my guessing that the next throw of the dice will be a four. So, for predictions to be a form of thought, we need a future that is knowable but not too knowable.

And that’s what we have now. We believe the future is determined by a set of scientific rules—what we used to call Laws of Nature—operating on a set of data too vast to be perfectly comprehended. We are now, however, on the cusp of changes in both those beliefs. First, we are well along in accepting that simple rules can quickly yield highly complex results. Second, our machines are now far better able to manage vast quantities of data, doing so without much reducing the complexity of their interrelationships. Some of those machines’ predictions are already being made without the possibility of human brains understanding how the machines came up with them: too many variables, too many contingent relationships. Yet those predictions are showing themselves to be highly accurate. We thus have a type of future within which predictions make sense. It’s just a future far more complex and interdependent than the Egyptians, Hebrews, Greeks, or we ourselves about 10 years ago could ever have predicted.
Online Voting for a New President? The trouble with OmniBallot and other voting platforms

Near the end of this year, there will be a new presidential election in the US. On the Democratic side, Biden seems to be leading according to some early polls, competing with Trump on the Republican side, who is seeking reelection. This election process may occur amid the pandemic that is currently affecting us. Thus, considering that we aim to maintain safe distances to prevent contagion, questions arise on how to carry out the voting processes. Could it be more convenient and more secure to perform such processes over the Internet?

A few days ago, the researchers Michael A. Specter, of MIT, and Alex Halderman, of the University of Michigan, published a paper that reports how an online election could be affected by undetected attackers. These authors made the first security review explicitly focused on OmniBallot, a platform used in different states for certain voting activities. Using reverse engineering to analyze the platform’s security, Specter and Halderman found that OmniBallot is vulnerable to specific attacks that can mean alteration of votes or theft of personal data. They also gave some recommendations to take into account for the coming elections.

Current health risks have led some states to consider the Internet as a means of running the coming elections. Generally, the Internet has been used to allow specific vulnerable populations, or those not present in the country, to participate in elections. Tools such as OmniBallot have been used for these purposes. OmniBallot is a web-based platform that can serve three modes of operation: blank ballot delivery, ballot marking, and online voting. Now, reportedly, it is going to be used for online voting for the first time in Delaware, West Virginia, and New Jersey with larger groups of voters. This is the riskiest mode in relation to cyberattacks.
Let’s clarify each of OmniBallot’s modes of operation:

Online blank ballot delivery: The voter downloads her corresponding blank ballot, and it is printed, manually marked, and physically returned to the election authorities.

Online ballot marking: The voter marks her ballot on the website and then downloads it to print it and return it physically. Some jurisdictions give the option to return it via fax or email.

Online ballot return (online voting): The voter marks her ballot and transmits it to the authorities over the Internet through a service of Democracy Live. Among OmniBallot customers, and in comparison with the two previous modes, this is the least used.

Following ethical and legal principles, Specter and Halderman limited their analysis to the publicly available parts of the platform, specifically the Delaware version. From this, they proposed a general description of the OmniBallot architecture. After having a clear understanding of the platform’s architecture and client-server interactions, the authors analyzed the risks created when OmniBallot is used in each of the three modes mentioned above.

Before we talk about that, let’s state the possible attackers or adversaries. First, adversaries may have access to the voter’s device. These attackers could be system administrators, abusive partners, or remote attackers that control certain malware, and could modify what the voter sees and submits. In second place are the attackers with access to the server infrastructure of OmniBallot. These adversaries could be, for example, internal staff from Democracy Live or Amazon, and external attackers able to access and affect the systems involved. In third place are the adversaries with control of third-party code. This involves attackers who may have access to third-party software and services which OmniBallot integrates, such as Google Analytics, AngularJS, reCAPTCHA, and Fingerprint JS.
Also, clients load some libraries from Amazon, Cloudflare, and Google, where there could also be malicious subjects willing to modify the code. So, what could these attackers end up doing in the different modes of operation?

Online blank ballot delivery: We can start with the fact that the attacker could manipulate the ballot design, for example, swapping or removing candidates. In a manipulation that is more difficult to detect, an attacker could even change bar codes to alter the records when tabulated by a scanner. On the other hand, there may be attacks directed not at the ballot itself but at the ballot return instructions. The attacker could make the ballot be sent to an inappropriate place, after learning the voter’s address, which is among the data verified by OmniBallot at the start. Additionally, the attacker could mail a different ballot (following their preferences) to the appropriate place employing the voter’s data.

Online ballot marking: Here, the attacker could know the voter’s selection before the ballot’s generation, and from this, modify that particular ballot to suppress the vote for a specific candidate. Attacks may also involve reordering the candidates and swapping the barcodes linked to each of them. In these online marking cases, the attacker could also simply alter the voter’s marking and select a different candidate. And while some might notice the change, many others would not detect the errors on their ballots and would return them as they are.

Online ballot return: OmniBallot does not use the “end-to-end verifiability (E2E-V)” approach for a secure remote voting protocol. Computer scientists have been working on this approach for several decades, and to some extent, it is the most recommended one. It “allows each voter to independently check that their vote is correctly recorded and included in the election result.” OmniBallot uses a protocol in which no one can verify that what the voters gave as a selection is the same as what the officials received.
Hence the possibility of the attacker changing the votes without being noticed. Finally, a risk associated with all modes of operation is the collection and storage of privacy-sensitive data, including names, addresses, and dates.

Recommendations and conclusion

Apparently, Democracy Live’s security controls are limited. Following the authors’ recommendations, OmniBallot’s online ballot return should be eliminated, and the physical ballot return should be improved in accessibility and efficiency. Also, online marking should be offered only to voters who need this mode in order to join the elections. Moreover, officials should carry out risk-limiting audits (RLAs) to test, at least in part, the accuracy of the computers’ work. Additionally, Democracy Live could reduce risks by eliminating unnecessary reliance on third parties that may constitute multiple routes of attack. And as a final tip related to some legal protections, OmniBallot should have a policy restricting the use of voters’ data by Democracy Live and third parties. The fact is that such data should only be used for the election process.

The public security review of OmniBallot by security experts is something of high value when its use —due to a positive record in much smaller procedures— is under consideration for the next presidential election. In the end, the authors warn us that with such high risks of election outcomes being altered without detection, and without sufficient tools to mitigate those risks, it is best that OmniBallot’s (or any similar voting platform’s) online ballot return doesn’t become the default option.
What, how, and why. As a consumer, when you’re attempting to understand the value of something (be it a product or a service), these are the three most important questions to ask. What is the product? How does it work? Why does it exist, and why do you need it?

As a refresher, DNS-based web filtering is the act of categorizing and sifting through online content, blocking unsafe and unwanted sites in the process. Every individual web page is assigned an identifying string of numbers known as an IP address. When we access content, however, we don’t use these numbers. Rather, we rely on words and names that are easier to remember. Instead of typing “18.104.22.168” into your address bar, you can type “facebook.com” and let DNS servers connect you.

DNS is a foundational element of the internet, considering that everything connected to the internet has an IP address — computers, smartphones, tablets, wearables, and websites. The best way to filter digital content, therefore, involves DNS. Given its versatility, DNS filtering offers users advanced customization features. Depending upon the needs of your organization, you can choose which types of content are permissible and which to block, specific to your company’s needs. In addition, by enabling DNS-based web filtering, you safeguard your users against malicious content. Let’s take a look at the four main benefits of filtering DNS.

DNS filtering helps solve a problem so many of us face on the internet: distraction. Everything is accessible to us with the click of a button. It’s useful for getting work done, but because we can access social media, dating sites, online shopping, entertainment, and news on the same device we use for work, it quickly slows productivity. More people than ever are working remotely and in less supervised capacities.
What’s more, a report that explored how COVID-19 changed the way people work found that over half of remote employees view inappropriate content on the same device they use for work. How, then, can your organization ensure that your employees remain on task? You implement a DNS filtering solution. DNS filters can prevent your users from accessing content categories known to distract them. Block content like personal blogs, games, sports, social media, and entertainment. With a simple set-up, you can keep your team from visiting sites like Facebook and Amazon without having to establish invasive surveillance.

Another reason you should use DNS filtering is to avoid unwanted and illegal content while you browse the web. Just as you can block websites that reduce your productivity, you can block inappropriate content. DNS filters will ban websites that contain pornographic material, triggering content, and racist, sexist, or violent information or experiences.

Along a similar vein, DNS filters can assist your organization in gaining compliance with internet regulations. NIST compliance and CMMC compliance have recently become major factors for companies looking to obtain work as government contractors. Protective DNS is an important part of both NIST and CMMC, meaning companies with PDNS deployed are one step closer to achieving compliance. Schools and libraries can become compliant with the Children’s Internet Protection Act (CIPA) by restricting access to inappropriate internet content. If your organization has specific company standards which forbid employees or network users from accessing certain content, DNS-based web filters are a simple way to maintain compliance. DNS filters are a fantastic tool for limiting a user’s access to unproductive, inappropriate, and illegal content. They’re also a crucial layer in an effective cybersecurity defense.
Protective DNS measures prevent you from interacting with malicious internet threats. When you filter DNS, you are protecting your network against malware, phishing scams, ransomware, cryptojacking, and other common cyber attacks. Each time you request access to a website, DNS protection services will compare the domain’s IP address to databases of known and suspected threats. If the destination contains content that has been deemed malicious, you will be rerouted to safety. If your DNS filtering solution utilizes artificial intelligence for threat categorization, you’ll also be protected against zero-day attacks. DNS filtering solutions are surprisingly quick to implement. They can be deployed in a matter of minutes, and are easy to customize to the needs of your organization—as long as you pick the right one. DNSFilter, as the name suggests, offers a powerful DNS filtering solution. We provide security and peace of mind to our customers, blocking more than one million malicious and deceptive websites each and every day. We’ve quickly grown to become a leader in DNS security with AI-powered content categorization and the industry’s largest DNS network. If you’re interested in adding DNS filtering to your network, start your 14-day free trial today.
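The per-query decision described above can be sketched in a few lines. The domains, categories, and sinkhole address below are made-up assumptions for illustration; a real service consults large, continuously updated threat feeds (and possibly AI-based categorization) rather than a hard-coded table.

```python
# Hypothetical category database and policy for a filtered DNS resolver.
CATEGORY_DB = {
    "social-example.com": "social_media",
    "malware-example.net": "malware",
}
BLOCKED_CATEGORIES = {"malware", "phishing", "social_media"}
SINKHOLE_IP = "0.0.0.0"  # address returned instead of the real answer

def resolve(domain, real_answer):
    """Return the IP a filtered resolver would hand back for `domain`."""
    category = CATEGORY_DB.get(domain, "uncategorized")
    if category in BLOCKED_CATEGORIES:
        return SINKHOLE_IP   # reroute the client away from the threat
    return real_answer       # pass the genuine DNS answer through
```

The key design point is that the filter sits at the resolution step, so every device that uses the resolver is protected without installing software on each endpoint.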
By Renee Tarun, deputy CISO, Fortinet.

The disruptions to our society due to the coronavirus pandemic include significant impacts on education. Universities and colleges around the world have had to adjust to the reality of remote learning, at least for the foreseeable future. The nation’s largest four-year college system, California State University, announced in May that instruction will primarily be conducted online this fall, and many other institutions are following suit. It’s now estimated that 70% of students are currently engaged in some form of online education.

This shift to digital learning has introduced a steep learning curve for many institutions that were unprepared for it. Schools are working quickly not only to build the curriculum and content necessary to support online courses, but also to build the distance learning infrastructure needed by faculty and students to ensure simple and seamless remote access to this content. The challenges are how to do this at scale, and how to do it securely. The need to provide distance learning, and to do it quickly, has introduced new risks for educational institutions while creating potential opportunities for cyber adversaries.

Schools have long been a target for cybercriminals. According to the 2019 Verizon Data Breach Report, education continues to be plagued by human errors, social engineering and denial-of-service attacks. The changes brought about by the pandemic only compound those existing challenges. Based on information released in the latest Global Threat Landscape Report from FortiGuard Labs, covering the first half of 2020, education comes in third, after only telecommunications providers and managed security service providers (MSSPs), in the percentage of institutions detecting ransomware.

Making Distance Learning Secure

Cyber adversaries have refocused their criminal efforts to take advantage of the new remote work and education environment resulting from the COVID-19 pandemic.
They’re targeting the vulnerable devices and home networks of remote users looking to use those systems to open a back door into the core network. This is evidenced by the significant increase in attacks targeting such things as consumer-grade routers, personal IoT devices, and components such as DVRs connected to home networks detected during the first half of 2020. Threat researchers are also seeing a spike in older attacks designed to exploit vulnerabilities in the often unpatched devices on home networks. In fact, 65% of detected threats were from 2018, and a quarter of all detected attacks targeted vulnerabilities from 2004. Naturally, the ability to securely support a remote learning policy is an essential component of any continuity and disaster recovery plan. However, to ensure that networked resources of colleges and universities, as well as those of remote faculty and students, are protected, these new realities need to be taken into account.
The Common Channel Signaling System 7 (SS7) is a set of application protocols for the transmission of service information over telephone networks. SS7 is used for routing connections in digital or analogue voice communication, and also facilitates the exchange of service data. It is also used for billing calls and for other purposes. The SS7 infrastructure forms the basis for services such as automatic number identification (ANI), call forwarding, and incoming call holding. Developed in 1975, it falls short of today’s security requirements: various studies have highlighted SS7’s vulnerabilities, which can be exploited to eavesdrop on conversations and determine caller location.
Choosing a prediction modeling technique

This article was originally published at Algorithmia’s website. The company was acquired by DataRobot in 2021. This article may not be entirely up-to-date or may refer to products and offerings no longer in existence. Find out more about DataRobot MLOps here.

We encounter the results of prediction models every day. We get offered the right incentive to buy a product. We get a text from the bank when something strange is happening with our accounts. And we rarely have to see spam in our inboxes. This is all thanks to the models built on the data that we generate every day. Read on to learn more about the business of prediction modeling and how you can develop and train your own.

Is predictive modeling machine learning?

Predictive modeling and machine learning are related, but have slightly different definitions. Predictive modeling is often defined as the use of statistical models to predict outcomes. Machine learning is a subset of artificial intelligence that refers to the use of computers to construct predictive models. In more recent years, however, the terms have been used synonymously.

How is prediction modeling used in business?

Here are a few examples of how predictive modeling is used in the business environment:

One of the most important first steps in outbound sales is identifying the right potential customers to contact. Sales teams often use lead scoring models based on predictive analytics to identify the most likely and lucrative future customers.

Predictive modeling is used in marketing in several different ways. Take, for example, the recommended products that a customer sees on an e-commerce site. These items are the results of models based on the purchasing behavior of the customer and of users with a similar profile. In email marketing, content like the subject line and the body text are often drafted based on models predicting the likelihood of someone opening the email and clicking through a call to action.
Customer churn is one of the most common use cases for prediction modeling in business. This is because it’s typically cheaper to keep an existing customer than to onboard a new one. Churn models predict how likely it is for a customer to discontinue using your product or service based on their previous actions. These models can also be used by customer service agents to offer relevant incentives to customers most at risk of churn, to keep them as customers longer.

Financial services companies use anomaly detection models to detect possible fraudulent activities. For example, when a credit card customer is sent an alert requesting that they verify a transaction, this is triggered by actions that don’t match the person’s typical behavior.

Operations and supply chain professionals use predictive analytics to help them determine the type and number of products to produce and ship. These models are based on past customer/buyer behavior in addition to more current global and economic factors.

What are common predictive modeling techniques?

Supervised learning models have a specified target output, which is either a classification (label) or a continuous variable. The purpose of supervised learning models is to predict a specified outcome. Unsupervised learning models, on the other hand, don’t have any sort of target variable. These models are often used during exploratory data analysis to uncover patterns or natural groupings within data. Although we are going to focus primarily on supervised machine learning models in this piece, note that unsupervised techniques do play a role in predictive modeling. For example, a customer churn model may actually begin with an unsupervised task like clustering, to uncover groups of similar people within a high-risk-for-churn group. This more nuanced information can help you build models to predict the right incentives to retain an at-risk customer.
Classification vs. regression

Supervised learning techniques can be split roughly into two categories: classification and regression. The target variable of a classification model is the class or category that a new observation belongs to. A variation on this is class probability estimation, which predicts the likelihood of a new observation belonging to a particular class. In regression models, the target variable is a numerical value. The following are a few examples of prediction algorithms. Note that some can be used for either classification or regression.

Linear regression is one of the easiest machine learning algorithms to understand. It models the relationship between the target response (dependent variable) and one or more independent variables. In a linear regression model used to predict a potential customer’s spending, independent variables could include factors such as income, age, and how frequently they’ve used your services over a period of time.

Nearest neighbor models can be used for classification or regression. Predictions are based on the distance between a new observation and existing data points. The “k” in this model represents the number of data points, or “neighbors,” to compare with the new observation. In a classification model, the new observation is put in the same class as the majority of its neighbors. In a regression model, the prediction is typically an average of the numerical values of the neighbors.

Decision trees can be used for classification or regression. They work by splitting a complete dataset into successive subsets based on classification features or a predetermined numerical value. The splitting continues until it reaches a terminal node, where it cannot be divided any longer. Banks sometimes use some form of a decision tree to make decisions about whether or not to offer a customer a loan. The very visual nature of decision trees is also useful when a modeler needs to transparently demonstrate how they came to a decision.
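The k-nearest-neighbor classifier just described fits in a few lines of pure Python. The customer features and labels below are invented for illustration; in practice you would use a library implementation with proper feature scaling.

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training
    points. `train` is a list of ((feature, ...), label) pairs."""
    by_distance = sorted(train, key=lambda item: math.dist(item[0], query))
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

# Hypothetical customers: (income_k, visits_per_month) -> bought before?
train = [((20, 1), "no"), ((25, 2), "no"), ((22, 1), "no"),
         ((80, 9), "yes"), ((75, 8), "yes"), ((90, 10), "yes")]
```

A new customer at (78, 9) lands among the "yes" cluster, so its three nearest neighbors all vote "yes"; for regression you would instead average the neighbors’ numerical targets.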
Logistic regression is actually a class probability estimation model. In marketing, logistic regression is often the basis of propensity models that predict how likely it is for a customer to make a purchase. Providing a more granular view of a customer’s possible choice gives marketers the information they need to develop more targeted and relevant outreach.

How do you train a predictive model?

You can find more on developing machine learning models here, but here are a few steps that are important to emphasize:

Exploring the data

Before diving into the data in an attempt to answer a question, it’s important to see and understand what’s there. You may want to visualize it with charts or graphs in order to uncover patterns that might not be obvious from a spreadsheet.

Dividing the data

Building a model will require you to divide your dataset into at least three different subsets.

- Training data – This is the largest subset and the one on which you build the model. The model learns from this data.
- Validation data – This is the subset against which the model is continuously evaluated. As you refine the model, you will continue to test it against the validation dataset.
- Test data – This is the final dataset you will use to evaluate the model’s fit. This dataset should only be used once.

Developing the model

The algorithms on which you choose to base your model need to be relevant to the question you are trying to answer. In addition, your algorithm options may be expanded or limited by the data that you have.
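The three-way split described above can be sketched as follows; the 70/15/15 proportions are a common convention, not a rule from the article:

```python
import random

def split_dataset(rows, train_frac=0.7, val_frac=0.15, seed=42):
    """Shuffle once, then carve into train / validation / test subsets."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)  # fixed seed keeps the split reproducible
    n_train = int(len(rows) * train_frac)
    n_val = int(len(rows) * val_frac)
    return (rows[:n_train],
            rows[n_train:n_train + n_val],
            rows[n_train + n_val:])

train, val, test = split_dataset(range(100))
print(len(train), len(val), len(test))  # 70 15 15
```

Shuffling before splitting matters: if the rows arrive sorted (say, by signup date), an unshuffled split would give the model a biased view of the data.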
Deep Reinforcement Learning–of how to win at Battleship

According to the Wikipedia page for the game Battleship, the Milton Bradley board game has been around since 1967, but it has roots in games dating back to the early 20th century. Ten years after that initial release, Milton Bradley released a computerized version, and now there are numerous online versions that you can play and open source versions that you can download and run yourself.

At GA-CCRi, we recently built on an open source version to train deep learning neural networks with data from GA-CCRi employees playing Battleship against each other. Over time, the automated Battleship-playing agent did better and better, developing strategies to improve its play from game to game. So far, we have established a framework for (1) playing Battleship from random (or user-defined) ship placement, (2) deep reinforcement learning with a Deep Q-learner trained from self-play on games starting from randomly positioned ships, and (3) collecting data from two-player Battleship games. Our experiments focused on how to use the data collected from human players to refine the agent’s ability to play (and win!) against human players.

There are various technical approaches to deep reinforcement learning, where the idea is to learn a policy that maximizes a long-term reward represented numerically. The learning agent learns by interacting with the environment and then figures out how to best map states to actions. The typical setup involves an environment, an agent, states, and rewards. Perhaps the most common technical approach is Q-learning. Here, our neural network acts as a function approximator for a function Q, where Q(state, action) returns the long-term value of the action given the current state. The simplest way to use an agent trained from Q-learning is to pick the action that has the maximum Q-value.
The Q represents the “quality” of some move given a specific state; the following pseudo-code outlines the algorithm:

In practice, when we are training the Q-learner, we do not always pick the action that has the maximum Q-value as the next move during the self-play phase. Instead, there are various exploration-exploitation methods designed to balance ‘exploring’ the state space in order to gain information on a wider range of actions and Q-values versus ‘exploiting’ what the model has already learned. One basic method is to start with completely random choices some percentage of the time and then slowly decay to a smaller percentage as the model learns. Playing Battleship, we found that starting at 80% and decaying to 10% worked well.

More Advanced Deep Q-learning Methods

To help with faster training and model stability, more advanced deep Q-learning methods use techniques such as experience replay and double Q-learning. With experience replay, games are stored in a cyclical memory buffer so that we can train on batches of moves and sample from games that were already played. This helps the model avoid converging to a local minimum, because the model won’t be getting information only from a sequence of moves in a single game. It also helps the model take into account past moves and positions, providing a richer source of training. Double Q-learning essentially uses two Q-learners: one to pick the action and another to assign the Q-value. This helps to minimize overestimation of Q-values.

To generate the sample data, we began with the open source phoenix-battleship, which was written in Elixir using the Phoenix framework. We modified phoenix-battleship to save logs of ship locations and player moves, and we made slight configuration changes for the sizes of ships and generated data. We hosted the app on Heroku, encouraged our co-workers at GA-CCRi to play, and saved logs of the games that were played using the Papertrail add-on.
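The tabular Q-update and decaying epsilon-greedy policy described above can be sketched as follows. The two-action toy environment, the reward assignment, and the exact decay schedule here are illustrative, not the article’s actual Battleship setup:

```python
import random

def epsilon_greedy(q, state, actions, epsilon, rng):
    """Explore with probability epsilon, otherwise exploit the best known action."""
    if rng.random() < epsilon:
        return rng.choice(actions)
    return max(actions, key=lambda a: q.get((state, a), 0.0))

def q_update(q, state, action, reward, next_state, actions, alpha=0.1, gamma=0.99):
    """Standard Q-learning update: move Q(s,a) toward reward + gamma * max_a' Q(s',a')."""
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

# Decay exploration from 80% down to 10%, as the article describes.
epsilon, eps_min, decay = 0.8, 0.1, 0.995
rng = random.Random(0)
q = {}
actions = ["fire_A1", "fire_A2"]              # hypothetical move set
for episode in range(500):
    a = epsilon_greedy(q, "start", actions, epsilon, rng)
    reward = 1.0 if a == "fire_A2" else 0.0   # pretend A2 always hits
    q_update(q, "start", a, reward, "done", actions)
    epsilon = max(eps_min, epsilon * decay)
```

In deep Q-learning, the dictionary `q` is replaced by a neural network that approximates Q(state, action), but the update target has the same shape.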
We collected data from 83 real, two-person games. The following shows one GA-CCRi employee’s view of a game in progress with another employee. Dark blue squares show misses, the little explosions show hits, and the gray squares on the left show where that player’s ships are.

We wrote the code in PyTorch with guidance from the Reinforcement Learning (DQN) tutorial on pytorch.org as well as Practical PyTorch: Playing GridWorld with Reinforcement Learning and Deep reinforcement learning, battleship. We trained an agent to learn where to take actions on the 10 by 10 board. It successfully learned how to hunt for locations and target the squares around a hit to sink the rest of the ship once one part of it has been found. Next, we plan to refine the agent using the collected data to perform better than human players.

Ship Placements for Collected Game Data

Heat maps of the ship locations show that there are clearly favored positions for the ships. In particular, players favored rows B and I and columns 2, 4, and 9. We can also see that players often tried to hide the size 2 ships in the corners. “Starting position” refers to the square that is the upper-left-most position on the ship. From the starting position the ship has two possible placements: extending horizontally to the right or extending vertically down.

The graph above is a scatterplot of the average number of moves it took for one player to win, calculated over the most recent 25 games, as the agent was training. We used a basic deep Q-learning reinforcement method to train a learning agent. Ship placements were randomly initialized to start each game, and then the games were played out using the learning agent. Initially, 80% of the moves were random to encourage the model to explore the possible move and hit locations; gradually we relied more on the model to pick the next move based on what it had learned from previous games. At the end of training, the learning agent averaged around 52 moves to win each game.
At this point our model has learned a method better than the hunt/target method (randomly shoot squares and then, once a square is hit, shoot the squares around it). The benchmark distribution for that strategy averages around 65 moves. But it has not yet achieved shorter games than a probability-based method which, rather than randomly shooting squares, first selects squares with a ‘higher likelihood’ of being hits (based on the total number of ship configurations that cover each square). The benchmark distribution for this strategy averages around 40-45 moves.

Game Play Strategies the Model Learned from Training

We watched the agent play out a few games to assess whether it had learned any strategies and what those strategies were. The play board is visualized as an ‘ocean’ where light blue represents unsearched squares. As the game is played, dark blue squares represent squares that have been searched but are misses (no ship) and white squares represent squares that are hits (a ship has been hit). As you can see even from a single frame (and which is more obvious when you watch a game unfold), the agent did indeed learn some strategies.

- Once a ship has been hit, the agent continues to target squares around that hit. After sinking a size 5 ship, the agent does not always continue targeting squares around it; see Player 2’s play for an excellent example. This is effective, since 5 is the largest size.
- The agent searches for ships using a diagonal or modified parity method. This makes sense because searching along adjacent diagonal squares but not adjacent squares covers more ground to hit a ship, given that the ships are size 2, 3, 3, 4, 5.

Watching a Sample Game using the Agent

|Starting ship positions for ships of size 5, 4, 3, 3, 2.|
|In move 1, Player 2 hit a ship!|
|Now Player 2 starts searching the squares around the first hit. Player 1 starts exploring another area of the board.|
|Player 2 sinks the first ship! In moves 3-6 Player 2 aimed hits around the initial hit and found the bounds on either side. The model understands how to hunt for the rest of a ship once it gets a hit.|
|After sinking its first ship, Player 2 starts exploring another area of the board. Notably, it does not keep trying to find hits around the ship it already sunk. The model has some understanding that a ship has been sunk.|
|Player 1 finally sinks its first ship! In moves 8-18, Player 1 continues searching the board along different rows and columns. Note that the diagonal is along the main diagonal, where ships have a higher probability of being placed. The model has a non-random search strategy.|
|Player 1 wins! Even though Player 1 took more moves to sink its first ship, Player 2 had trouble finding the last size 2 ship in the end.|
Before Chris Davis traveled to China for work some years ago, he was warned officials would take his electronics and make hard drive copies after he landed. Expecting this, Davis, VP of product management for cybersecurity and compliance company Caveonix, encrypted the contents of his laptop. While the data was accessible, it was unreadable, safeguarding his privacy and digital records.

Encryption protects data from unauthorized eyes if it is ever lost, stolen or breached. Without encryption, data lays bare and vulnerable. Cybersecurity specialist Bruce Schneier compares encryption to placing a lock on the front door of a home. It's unlikely burglars are willing to shuffle through keys until one unlocks the door, he said. "Most aren't even clever enough to pick the lock (a cryptographic attack against the algorithm)." Intruders don't always bother with locks when there are other methods of invasion. In Capital One's case, the intruder metaphorically "smashed the window," according to Davis, in an interview with CIO Dive.

A brief history of encryption

People have wanted to protect their secrets since the big bang, thus beginning the use of ciphers. A cipher is an algorithm for encryption and decryption. Early and uncomplicated methods, like the Caesar cipher, encrypted by substituting each letter with the next letter down the alphabet. For example, A would be replaced by B, B by C, and the pattern continues. Simplistic encryption like Caesar fails to make the data look like a flat histogram, said Davis; the letter frequencies still show through. "I want it to be completely unreadable and untraceable. And that's what true encryption is, that's what it does."
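The Caesar substitution described above is easy to sketch, which is exactly why it is weak:

```python
def caesar(text, shift=1):
    """Shift each letter down the alphabet, wrapping Z back around to A."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)  # leave spaces and punctuation untouched
    return "".join(out)

print(caesar("ATTACK AT DAWN"))      # BUUBDL BU EBXO
print(caesar("BUUBDL BU EBXO", -1))  # shifting back decrypts it
```

Note that every A in the plaintext becomes the same letter in the ciphertext, so letter frequencies survive intact; that preserved histogram is precisely what frequency analysis exploits and what modern ciphers are designed to destroy.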
There are two primary reasons for encryption: to control who has access to raw data, and to maintain the confidentiality of data if access is compromised.

Some regulators across industries — including the International Organization for Standardization, Payment Card Industry Data Security Standard, Health Insurance Portability and Accountability Act — require organizations to encrypt all data at rest and in motion. Though standards exist for encryption, there are no federal laws that mandate data encryption, Gary LaFever, CEO of data risk management company Anonos, told CIO Dive. There are laws that encourage encryption with incentives, such as reduced liabilities if the company can prove the data was encrypted at the time of the breach.

"More importantly, regulation can be spotty when it comes to what kind of encryption businesses are required to adopt," said LaFever. This means companies can be selective with what data remains encrypted while it's at rest. While there are solutions that provide encryption for data in transit, those practices limit protection while it's in use or "processed through company algorithms, cloud storage applications, or by third-party vendors," said LaFever.

Encrypting and decrypting databases impacts how companies access and process data, affecting things like performance and user experience. Encryption software is also expensive. "Countless chief information officers are simply willing to accept the financial risks of a breach given the expense of proper data stewardship," said LaFever. However, even with limitations, encryption's value is unrivaled. No matter the perceived impact on performance or budgets, the "reality is today, the cost, the CPU cost of encryption has stayed the same or gone down a little bit," said Davis. If a company has compliance requirements it has to adhere to, some variation of encryption is unavoidable. Solutions like modern pseudonymization and anonymization protect data in transit.
Pseudonymization is also a requirement of the General Data Protection Regulation (GDPR). Before data can be used, it needs to be decrypted, which immediately makes data vulnerable. Pseudonymization attempts to remove any attributable feature of a consumer, though GDPR still recognizes pseudonymized data as personal data. If pseudonymized data is "recombined" with its rightful owner, it negates the protection benefits, according to LaFever. Because of this loophole, and because of differences in key holders, pseudonymized or anonymized data is not considered encrypted data. In true encryption, the key belongs to the data generator.

But an encryption key is only as strong as its algorithm. Because all security solutions have flaws, companies must put multiple protections in place. Encryption is only a piece of the puzzle. If a company loses control of who has access to data and decryption keys, everything else is theoretically irrelevant, said Davis.
What constitutes a backdoor in software, firmware, or even hardware? This question nagged at us during a recent project that Duo Labs worked on. We managed to get ahold of three Android-based phones that were for sale in China only, so naturally we figured we’d:

- Look for some government backdoors that some government put in;
- perform some actions that involved informing the Internet about our findings and how awesome we were; and
- argue about how to spend the Nobel Peace Prize money.

So what did we find? A ton, but in reality, a lot of nothing. You see, it all depends on what your definition of a backdoor is.

What Exactly is a Backdoor?

This is actually a hard question to answer. We’ll start by breaking things down into several categories, and yes, this is an incomplete list.

There is an obviously bolted-on piece of code whose sole purpose is to provide some type of access (remote or otherwise) to an attacker. This is your traditional backdoor; it could come in the form of an extra program or app that is installed that allows a bad guy to operate on the system. This is usually considered real-time remote access - many of your traditional rootkits fit into this category - but it could allow for special access if the bad guy is holding the device in their hands. Of course the more obvious the backdoor, the easier it is to spot, and the more likely forensics could trace back and identify the attacker.

There is a backdoor, it isn’t supposed to be a backdoor, but when you see it, well, it is a backdoor. Is this vague enough? Sometimes the backdoor is rather obvious, particularly if the word “backdoor” is in the secret password, even if it is written backwards. Other times, the vendor adds a backdoor for “remote maintenance” or states that it was used during development and accidentally left in (which is often the truth, people do make both coding and design mistakes).
It is rather entertaining to go Googling for articles about backdoors in commercial products - virtually all of the vendors with a disclosed backdoor in their products release some type of statement on their website, and then later delete that statement after a few weeks. The whole backdoor issue makes a vendor look really bad, so it’s not surprising they remove it from the website as soon as they think they can get away with it. This issue has a long history, with operating systems shipping with default accounts and passwords since nearly the beginning of operating systems. Alas, these types of issues still appear in modern systems.

There is a coding flaw that has been introduced specifically by the attacker to allow that attacker to bypass normal access methods. This is a little more obscure. It could involve simply adding a subroutine that does a badguy check - “if (AreYouBadguy == true) then YouGetFullAccessYourTableIsWaiting();” - and then the bad guy gets to do whatever is desired. Every once in a while you hear about this type of thing happening. For example, there was the infamous equal sign backdoor attempt (which was caught). An attacker compromised a server containing Linux kernel source, and added a simple change that should have contained “current->uid == 0” (basically asking if the current process was root), but instead contained “current->uid = 0” (basically assigning the calling process root-level access). This was in 2003, and people argued and debated about it for days. I guess in modern terms, they “Interneted” about it, but nonetheless, it was a backdoor method to bypass restrictions and gain root access. Again, forensics can potentially allow someone to track down the attacker that installed this bit of code. And while we’ve never seen proof of this, it is possible that evil backdoor code could be in the form of seemingly legit but maybe questionable functions that are added.
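The "badguy check" pattern described above can be sketched like this. The code is entirely hypothetical, written only to illustrate the category (complete with the backwards "backdoor" password mentioned earlier):

```python
import hashlib
import hmac

SECRET_BYPASS = "roodkcab"  # planted constant ("backdoor" written backwards)

def check_password(stored_hash: str, supplied: str) -> bool:
    """Looks like a normal password check -- except for the planted bypass branch."""
    if supplied == SECRET_BYPASS:  # the backdoor: one extra comparison
        return True
    digest = hashlib.sha256(supplied.encode()).hexdigest()
    return hmac.compare_digest(digest, stored_hash)

legit_hash = hashlib.sha256(b"correct horse").hexdigest()
print(check_password(legit_hash, "correct horse"))  # True (legitimate path)
print(check_password(legit_hash, "roodkcab"))       # True (backdoor path)
print(check_password(legit_hash, "guess"))          # False
```

Two lines of code, and every account on the system is open to whoever knows the constant, which is exactly why these are so easy to plant and, once spotted, so damning for the vendor.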
While there are security checks in the code to prevent security issues, compiler optimization actually removes the security checks. Bugs like this do actually occur but are rare, since a security review of source code alone may not catch them. Personally, we feel this would be the most elegant route to go, since the source code looks like the bad guy did not intentionally introduce the flaw to begin with, and this makes proving the evil intent of the coder even harder. Also, for a smartphone where one is working off of a known code base, this might be the proper route to go to really hide that backdoor.

There is a coding flaw that has been introduced accidentally by a legitimate coder that will allow an attacker to bypass normal access methods. This is arguably the best backdoor - a 0day. A flaw exists that the attacker knows about on the target system. The attacker didn’t have to get the flaw introduced into the code specifically; they just pored over the existing code or reverse engineered compiled code and located the flaws. Granted, this is an odd science in itself, and if anyone else finds the 0day, it could end up getting patched, effectively burning the attacker’s “backdoor.” However, there is no evidence that says the attacker put the flaw there, so there is no trail back to the attacker to trace.

A library or subsystem has been included in the overall build process, and it contains a flaw that allows an attacker to bypass normal access methods. The phone manufacturer puts together their codebase, maybe starting from a fairly secure Android version, but they’ve added on additional programs that introduce flaws through third-party coding dependencies. This tends to happen a lot when people are in a hurry to build something and just make it work. You could easily combine this one with the previous one to a degree as well. Again, the attacker just takes advantage of the introduced flaw.
Normal processes are in place as they are on every phone, however due to configuration changes affecting options or timing, an attacker can bypass normal access methods. This is somewhat strange, but it happens. Let’s say it’s fine to run the questionable_program_that_backs_up_creds_to_the_cloud because the transmission is encrypted and the certificate is pinned. However, an attacker has made a change that weakens the encryption algorithm to one the attacker knows is breakable, and just has to capture the traffic. Or the attacker knows that this configuration is weak to begin with and just camps out waiting for the traffic. Now all the code on the system looks fine, but a configuration adjustment is the weak link. This type of backdoor could happen in an obvious way, or a not so obvious way.

A known piece of good code is included on the phone, but it has been altered so that its security has been weakened. This is somewhat similar to the previous one, but with a twist. Let’s say a known web browser has been added, but a number of security features that would normally be included or turned on have been purposely adjusted - it could happen. There is a reason for this type of concern.

Now here is where it gets weird - substitute all those “attacker bypasses normal access methods” statements with “uploads private data” and that may actually provide the same intended result. Exfiltration of critical data off of the phone may be the intended end result anyway. Further imagine that the phone is just violating privacy aggressively as a normal part of doing business - all you have to do is figure out how it does that and monitor it, and that’s your “backdoor.”

We’ve barely scratched the surface on this backdoor business - really, one could write a book on the subject. And people wonder why Duo Labs has such a large bar tab at the end of any given night - thinking about all of this makes one’s brain hurt.

So Those Results….
Ok, so we went off on a bit of a tangent there; let’s bring this back to our real-world example - those Chinese phones.

The Chinese smartphones were running outdated software; in fact, they were all based off of older versions of Android code (4.4) with its well-documented known flaws. One could simply query for a list of older Android CVEs, spend a few minutes searching the interwebs, and there is your pile of potential backdoors. Ok, well not quite, but running older versions of software is certainly a valid starting place, particularly if you are looking for an easy entry into a phone’s OS.

Were there questionable apps with sketchy-looking code included with a basic install? Of course there were - just like most phones. Were there violations of privacy that could be leveraged? You bet - but again, just like most phones. How does one even write that stuff up? Trust us, you can’t simply turn in a report to management that says “yup, they are all shit, but no, we didn’t find anything anywhere close to new.” But that is exactly what we found - your basic hot mess of questionable code but no unicorn. Just saying “4.4” or even just “non-current version of the OS” should be enough.

We had toyed with the idea of looking at the firmware, but this is a tricky thing in the grand scheme of backdoors. It violates the first rule of attack - use the easiest method to achieve the objective. It boiled down to this - there were issues with the phones that allowed someone with the right resources to get to information on individual phones. Modern tech systems all phone home (no pun intended). The Chinese government has the means to get data from private firms based in China, and to monitor the infrastructure that allows the phones to communicate. Most of the issues discussed above required either access to data being uploaded or being able to intercept and decrypt network traffic, so there would be no reason for the Chinese government to backdoor the firmware.
They can easily enough get to pretty much anything, or at least most of what they need. So then the question is, do you pursue pulling apart firmware looking for unicorns, a process that could take weeks or months (we’re also working on other projects at the same time), or do you spend your time looking at something a little more fruitful? After 30 days, we chose the latter. So no Nobel Peace Prize, no fascinating new bugs, just a lot of looking at older code that already has been picked clean of anything interesting. But we did learn a lot about how to approach backdoors, so not a total loss. Special thanks to Darren Kemp and Khang Nguyen who also worked on this phone project with me.
What does disaster data recovery mean? This term describes the method businesses use to regain access to stored information after a disruptive event: a cyberattack, ransomware, a natural disaster, or even something new like the Covid-19 pandemic. When data is lost, businesses can employ a variety of methods for their disaster data recovery plan.

How does disaster data recovery work?

Disaster data recovery relies on the data being replicated in an off-site location that has not been affected by the outage. When a server goes down due to a cyberattack, equipment failure, or a natural disaster, businesses can recover their lost data from a backup location. When the data is backed up on the cloud, businesses can access their data remotely so they can continue to operate.

What are some key elements of effective disaster data recovery plans?

We need a plan! A data recovery team will assign specialists to create, implement and manage the data recovery plan. Should a disaster occur, the data recovery team will facilitate communication with employees, customers, and vendors.

Risk evaluations. An effective data recovery plan needs to assess all potential hazards. Depending on the type of disaster, the risk assessment will dictate what needs to happen for the business to resume operations. For example, if there were a cyberattack, what measures will the data recovery team use in response? A natural disaster will require a different response.

Identification of critical assets. For a disaster data recovery plan to be effective, it needs to include a list of all assets. Vital resources, systems, and applications that are critical to the business are at the top of the list. Next, it’s important to have the steps that need to be implemented to recover the data.

Backing up your data. An effective data recovery plan needs strategies and procedures for backups. You should know who will perform the backups and how often they will be done.
Those responsible for data backups must also work out the business’s recovery time. Calculate the amount of time the organization can be ‘down’ after a disaster and work from there.

Optimization and testing. The data recovery strategy should be tested and updated continually to protect the business from new threats. In this way, the business will be able to navigate challenges successfully. Planning a response to a cyberattack ahead of time will make sure your team knows what to do.

Types of disaster data recovery

There are a variety of options when it comes to data recovery. Perhaps the simplest method is backup: your data is stored on-premises, off-premises, or both for extra safety. However, relying solely on data backup gives minimal protection for businesses. If there is no backup of the IT infrastructure as well, there could be even bigger issues. For example, are your critical programs backed up as well?

Using DRaaS – Disaster Recovery as a Service

DRaaS is another way in which businesses can protect their data and infrastructure in the event of a disaster. Your business’s computer processing happens on the DRaaS cloud infrastructure, which means that the business can continue to operate seamlessly, even if its servers are down. A DRaaS plan can follow either a pay-per-use or a subscription model. A similar solution is Backup as a Service, but this only backs up data, not infrastructure.

Why is IT disaster recovery important?

No business can afford to ignore disaster data recovery. Having a plan in place means that businesses can protect themselves from closure. Most businesses can’t even afford to close for one extra day. With a strategy in place for disaster data recovery, businesses will be able to get back to normal operations much more quickly. They might even be able to continue operating as normal. Why would anyone risk their business without a Backup Disaster Recovery plan?
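The recovery-time calculation described above comes down to comparing a backup schedule against two common targets: how much data you can afford to lose (often called the recovery point objective, RPO) and how long you can afford to be down (the recovery time objective, RTO). This sketch uses invented numbers purely for illustration:

```python
def meets_targets(backup_interval_hours, restore_hours, rpo_hours, rto_hours):
    """Worst-case data loss equals the gap between backups; downtime equals restore time."""
    worst_case_data_loss = backup_interval_hours
    return worst_case_data_loss <= rpo_hours and restore_hours <= rto_hours

# Hypothetical targets: lose at most 4 hours of data, be back up within 8 hours.
print(meets_targets(backup_interval_hours=24, restore_hours=6,
                    rpo_hours=4, rto_hours=8))  # False: nightly backups risk a day of data
print(meets_targets(backup_interval_hours=2, restore_hours=6,
                    rpo_hours=4, rto_hours=8))  # True
```

Working backwards from the targets like this tells you how often backups must run and how fast your restore process has to be, before a disaster forces the question.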
As your Managed Service Provider, we can assist you with your Backup Disaster Recovery (BDR) plan. You know how valuable your data is. Don’t run the risk of losing it! Contact us today and we can go over our data recovery solutions.
Cognitive computing and machine learning are some of the hottest topics in all of high tech. Many companies in the semiconductor industry have been looking for ways to improve how computers interact with us and, more importantly, understand our behavior and needs. Because without understanding humans or our behavior, computing devices will never truly become as effective at helping us as they could be. So, naturally, many companies are chasing different solutions that try to improve how computers interact with us and better understand our world.

Companies like IBM are working on solving the big data problem by incorporating cognitive computing into better understanding humans’ questions in order to supply them with the right information. NVIDIA is harnessing their experience and knowledge with GPUs and parallel computing to enable running neural networks that can help improve computer vision and object recognition in order to make transportation safer. Others like Qualcomm are focusing on setting the stage for a new level of intelligence and personalization for mobile devices, and expanding this into other areas, such as automotive, robotics and wearables. Qualcomm’s efforts in this space are being accelerated with their introduction of the Zeroth platform, which is the focus of this column.

Improving how computers understand us

Right now, most cognitive computing requires vast amounts of high-performance computing, which is generally cloud-based. NVIDIA’s Drive PX is arguably mobile in the sense that it’s embedded into a car, but once you leave the car it is no longer with you. However, there is no denying that a lot of cognitive computing in the mobile space has to do with computer vision: visually recognizing objects and using them to provide context to the device so it can provide more relevant and accurate data to the user.
Some of that occurs at the device end and eventually gets sent to the cloud for computing and recognition and then sent back to the device. This scenario has too much latency in mobile environments, doesn't actively and continually understand users' interactions and, above all, requires a constant internet connection and added power consumption. Lots of companies already have the existing silicon, tools and some APIs to make cognitive computing happen, as we're seeing with NVIDIA and Qualcomm, but it seems that a lot of the cognitive computing capabilities for the real world are accomplished through GPUs. And unsurprisingly, some of the most powerful mobile GPUs available today come from NVIDIA and Qualcomm. But there are other aspects of cognitive computing that require more than just GPUs with simulated neural networks. An example of this is mobile, where you need an optimized and balanced heterogeneous computing architecture, which Qualcomm provides as part of its Zeroth platform. This platform takes advantage of "the right engine for the right task" and is optimized for mobile environments.

Qualcomm Zeroth platform brings cognitive computing with the user

Qualcomm's Zeroth platform benefits from new hardware and software innovations that are at the heart of Qualcomm's heterogeneous computing technology and leading-edge connectivity within a highly integrated SoC. The Zeroth platform provides the foundation for more intuitive experiences and natural interactions through the addition of on-device intelligence designed for a range of key mobile experiences and cognitive capabilities. The next generation of Snapdragon SoCs should enable more seamless movement of data between different processing engines for low-latency, efficient processing.
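Qualcomm has not published Zeroth as a public API, but the "right engine for the right task" idea can be sketched in the abstract. Everything in the toy dispatcher below (the engine names and task categories) is hypothetical and purely illustrative, not Qualcomm's actual scheduler:

```python
# Illustrative sketch only: a toy dispatcher that picks a processing
# "engine" per workload type, mimicking the "right engine for the right
# task" idea behind heterogeneous computing. Names are invented.

# Hypothetical mapping of workload type to the engine best suited for it.
ENGINE_FOR_TASK = {
    "neural_network": "dsp",     # sustained vector math at low power
    "image_filter": "gpu",       # massively parallel pixel work
    "control_logic": "cpu",      # branchy, latency-sensitive code
    "always_on_sensing": "dsp",  # low-power background processing
}

def dispatch(task_type: str) -> str:
    """Return the engine a heterogeneous scheduler might pick."""
    return ENGINE_FOR_TASK.get(task_type, "cpu")  # CPU as the safe fallback

print(dispatch("neural_network"))  # dsp
print(dispatch("unknown_task"))    # cpu
```

A real scheduler would also weigh current thermal headroom and battery state, but even this toy version captures the core point: the win comes from routing each workload to the engine that does it most efficiently, rather than running everything on the CPU or GPU.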
As workloads move between different processing engines, Qualcomm's platform seeks to minimize the power consumption of these workloads while maximizing performance by choosing the optimal engine for each task.

Qualcomm using visual and auditory cognitive computing to improve experiences

Some of the cognitive computing enabled by the Zeroth platform will have to do with visual use cases, showing the inescapable need to see what's going on around you to understand your surroundings. Visual capabilities that Qualcomm is building into the Zeroth platform include visual perception, with the phone recognizing the environment around you so it can capture the things that matter most to you. Think about the phone knowing you're at a football game and therefore knowing to zoom in on the players' faces because it recognizes a football helmet. Through on-device deep learning, computer vision and cognitive camera technologies, devices can recognize objects, read handwriting, identify people and understand the overall scene and its context. This opens new possibilities, like enabling your phone camera to autonomously adjust its settings based on its understanding of the nature of the scene: imagine the different settings used when snapping shots at a football game versus a sunny beach versus a child's birthday party. The phone could aggregate knowledge gained by observing user behavior over time to personalize your pictures, changing exposure on the fly based on lighting, skin tones, facial recognition and other qualities users may find valuable. There are also additional capabilities that incorporate scene understanding, which may draw on other 'senses' like sound to give the device a better idea of exactly where it is. That could prove useful for giving the device improved context about its surroundings and possibly adjusting things like volume when something loud drives by.
Imagine your smartphone knowing you are driving in a car and optimizing the microphone and audio to compensate precisely for the situation. This shouldn't require the user to change anything; if the device understands the user's preferences through machine learning, it can accomplish such tasks automatically and adapt to the environment without user input. There are other applications, like handwriting recognition, where the Qualcomm Zeroth platform can help recognize users' handwriting without the use of a special stylus. Imagine being able to take a picture of a blackboard, a whiteboard or a page of handwritten notes and have the device recognize what someone wrote down, just as you and I are able to. Many have tried to do this before, but it didn't work well, in part because there just wasn't enough processing power.

On-device intelligence saves power, improves performance, improves privacy

There is simply no denying that on-device cognitive computing with something like Qualcomm's Zeroth platform makes more sense than constantly sending data to the cloud. Three great reasons jump to mind. First, there's much better responsiveness from a device-based solution. Waiting for information to travel to and from the cloud is just not an option in certain situations; on-device processing removes the latency of connecting to the cloud. Imagine driving a car on a highway at night with a moose crossing the road. You would want to be notified as soon as possible. Second, on-device processing improves security and privacy by letting users control their own data. You only share what you want and know when things get sent up to the network. Third, constantly contacting the cloud results in even more latency and data traffic through networks that are already overloaded in many cases. This also will not work very well for users who have data caps, which are fairly common in developed markets.
It also means added power consumption from constantly keeping the modem connection open. That doesn't mean you won't need the cloud; it just means you're more in control of both the data charges and security when using on-device cognitive services. The key to Qualcomm's cognitive computing approach is to push the intelligence as close as possible to the edge, because performance and experience are best when the computing is closest to the user. Now that smartphones have extremely capable mobile processors, there is no reason to leave that processing power untapped. By taking advantage of a highly optimized heterogeneous computing architecture, the performance required for on-device cognitive capabilities can be achieved within the power and thermal constraints of mobile devices; in fact, as a whole, it will probably save power when you consider how much computing won't be done in the cloud all the time. There will still be a need for the cloud to continually improve the intelligence and update the capabilities of platforms like Qualcomm's Zeroth, but it won't be used as constantly as some cognitive cloud solutions are today.

Differentiating with mobile

Qualcomm's approach to cognitive computing is unique in the sense that the company is harnessing many of its existing technologies and combining them with new hardware and software innovations in a way that enables better machine understanding of user needs. As it stands, Qualcomm is pretty much alone in the field in enabling mostly standalone cognitive computing done primarily on the smartphone. By bringing that capability to the device directly, Qualcomm could enable the best possible cognitive computing experiences with minimal power consumption, which is absolutely necessary in today's mobile world.
Qualcomm is making this a reality by optimizing the Zeroth platform for premium mobile devices based on its next-generation premium-tier SoC, the Qualcomm Snapdragon 820 processor. Cognitive computing is going to be what lifts the entire industry up from competing on gigahertz and core counts in the coming years, and companies will do battle to provide more unique and personal computing experiences. This will enable device manufacturers to continue to differentiate around user experience, but in brand-new ways, and consequently open new market opportunities. Ultimately, the companies that succeed in developing cognitive technologies will do so by improving their customers' experiences and enhancing their daily lives through simple yet impactful applications of intelligence to the things that matter most. Realistically, a lot of the processing capability in our smartphones and tablets sits idle most of the time, and we could be putting it to good use. That untapped compute capability could be helping us every day, making cognitive computing a constant companion, and we wouldn't even know it. Qualcomm's approach is a good one, and one I think the company can have a lot of success with given its IP, investment capability and platform delivery track record.
Cyberattacks are becoming more common, and their impact is quite severe. Security breaches are no longer limited to a few large tech companies; cybercriminals have rapidly altered tactics and started targeting small and medium enterprises (SMEs) as well. Today, companies big or small are targets of ransomware, viruses, malware, bots and more. Hence, it is important to understand common cybersecurity jargon. Knowing what these terms mean can help companies quickly become aware of their digital security needs and set up defences accordingly. Here's a complete list of cybersecurity jargon:

1) API (Application Programming Interface)
APIs are essentially communication platforms that allow two applications to communicate with each other. They are a software intermediary that lets a product or service interact with other products or services. There's an element of security to ensure the intermediaries remain unaware of the backend processing.

2) API Security
The security aspects concerning APIs specifically. It is the process of identifying possible vulnerabilities in APIs, getting them fixed, and protecting APIs from potential exploits.
Related Topic - Complete Guide on API Security for Mobile Apps

3) Agile
Agile is a project management method. It involves adopting a cross-functional approach throughout the Software Development Lifecycle (SDLC), usually with multiple methodologies. Agile broadly describes a set of values and principles for software development under which requirements and solutions evolve through collaboration.

4) Application Security Testing
Application security testing involves scanning and testing applications to discover and fix security vulnerabilities in web applications, mobile apps, or APIs. Adopting a continuous approach generally helps bridge the gap between development, operations and security.
Related Topic - Explaining Mobile App Security in Simple Terms

5) Binary Code Analysis
Binary code analysis involves testing at the binary code level to hunt for vulnerabilities. This means analyzing the raw binaries that make up a complete application, which is helpful when there is no easy or ready access to the source code.

6) Breach
Any incident that could potentially result in data, applications, networks, or devices being accessed without authorization. At its core, a breach is any unauthorized or unapproved access to sensitive information or a network.

7) Bot
A bot is essentially an automated or autonomous piece of software that runs without human intervention. A botnet is usually a cluster of bots designed to serve single or multiple purposes.

8) Black Hat
Any computer code writer who designs software tools with the intention of violating computer security for personal profit or malice.

9) BYOD
Bring Your Own Device (BYOD) means allowing employees to use their own devices, be it a laptop, smartphone, or tablet, to access company data, networks, etc.

10) Brute Force Attack
A brute force attack attempts to "guess" an unknown value, such as a username, password, or secret key. The most common technique is using an automated process to try many possible values.

11) Cache Poisoning
Cache poisoning, also known as DNS poisoning or DNS cache spoofing, involves corrupting an internet server's Domain Name System table to hijack visits to a legitimate domain. This is usually done by replacing a valid internet address with another, presumably rogue, address.

12) Code Injection
Code injection is used by an attacker to insert or "inject" code into a vulnerable computer program. When an application interprets and executes such code, it opens the door to unauthorized access and data exploitation.
In other words, code injection can cause data loss or corruption, lack of accountability, denial of access, and even a complete host takeover.

13) Command Injection
A form of attack that involves executing arbitrary commands on the host operating system via a vulnerable application. This is usually possible if the victim is using a compromised operating system that offers privileged access to certain remote users.

14) CI/CD
CI/CD generally refers to the combined practices of continuous integration and either continuous delivery or continuous deployment. This software development approach helps bridge the gaps between development and operations activities, as well as between different teams, by automating the building, testing and deployment of applications.

15) Cross-Site Scripting
Cross-site scripting (XSS) occurs when an attacker injects malicious script into an otherwise trusted website. The malicious script usually runs in the victim's web browser. The most sought-after information in cross-site scripting attacks is user information such as credentials, session cookies and other sensitive data.

16) Container Security
Container security involves deploying security tools and policies to protect containerized software against cybersecurity threats and to ensure that a container always runs as intended.

17) Common Vulnerabilities and Exposures (CVE)
Common Vulnerabilities and Exposures is a database system that keeps a record of vulnerabilities in software or firmware. Companies routinely refer to these publicly known information-security vulnerabilities and exposures.

18) Compliance Standards
Compliance standards are a set of government-mandated or corporate-defined guidelines. HIPAA and Europe's GDPR are good examples of compliance standards.

19) Clickjacking
Clickjacking is also known as a UI misrepresentation attack. It generally involves misusing a vulnerability in the UI or webpage: malicious code writers can edit the UI and add multiple transparent or opaque layers over it.
The intention is to fool a visitor into thinking they have visited a legitimate webpage.

20) Data Security
Data security is the process of shielding data from unauthorized access and intentional corruption. Common steps include data encryption, hashing, tokenization, and key management practices that protect data across all applications and platforms.

21) Denial of Service
Denial of service is an attack that seeks to disrupt access to a digital service. Its main goal is to make the service unavailable, usually by overloading the backend services.

22) Dynamic Analysis
Dynamic analysis, also known as dynamic program analysis, is the evaluation of a program or technology using real-time data. The analysis is performed while the program is running, in order to gather real-world behaviour.

23) DNS Spoofing
DNS spoofing involves corrupting or hijacking an internet server's Domain Name System (DNS) table by changing a valid internet address to another, presumably rogue, address. When a web user looks for the original page, the request is redirected to the different address.

24) Distributed Denial of Service (DDoS)
A form of attack on a website or internet-based service. Usually, the assailant begins by exploiting a vulnerability in one computer system and making it the DDoS master. The usual goal of a DDoS attack is to make a machine or network resource unavailable to its intended users.

25) DevOps
A software development culture or practice that helps transition an organization's approach from compartmentalized, traditionally adversarial groups to mutual or shared ownership. The primary goal is an automated software delivery mechanism where development, testing and release happen in a synergistic manner.

26) DevSecOps
An extension of DevOps with the goal of continuously integrating security within the development environment.
The tools and processes must be able to automate security structures and practices that would otherwise slow down the DevOps workflow.

27) Dynamic Application Security Testing (DAST)
An application security tool that analyzes a web application from the external-facing side, through the front end. The aim is to find vulnerabilities through simulated attacks.

28) Exploit
Any action or set of actions that causes software to deviate from its designed intent and functions, generally in order to take unauthorized actions.

29) Ethical Hacking
The art of finding security loopholes, vulnerabilities, bugs, and software faults with the intention of alerting the owners or developers.

30) Firewall
A security system for a computer network that monitors traffic to and from it based on predetermined security rules.

31) HIPAA
The Health Insurance Portability and Accountability Act (HIPAA) is United States legislation passed in 1996. It outlines data privacy and security provisions for safeguarding medical information.

32) Interactive Application Security Testing (IAST)
A combination of SAST and DAST techniques that promises quicker and more precise results. IAST looks for vulnerable code while the application is running.

33) Issue Severity
A classification system for the impact that a defect has on the development or use of a program. Levels are usually 'Critical', 'High', and 'Medium'.

34) Malware
Malware is any software that can intentionally cause harm or penetrate deeper into a network. Common malware includes viruses, Trojan horses, spyware, ransomware, adware, etc.

35) Malicious Code
An application security threat intentionally created to either create or exploit system vulnerabilities. Such code can negatively impact the confidentiality, integrity, or availability of an information system.

36) Microservices
A decentralized approach to software development in which larger applications are broken down into smaller components that are developed separately and concurrently.
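The brute force attack described in entry 10 is easy to demonstrate concretely. Below is a minimal sketch in Python; the 4-digit PIN and its SHA-256 hash are invented for the example, and real systems defend against exactly this by using slow, salted hashes and lockout policies:

```python
# Minimal brute-force demonstration (glossary entry 10): enumerate every
# candidate value until one hashes to the stored digest. The PIN is invented.
import hashlib

def sha256_hex(s: str) -> str:
    """Hex digest of a string's SHA-256 hash."""
    return hashlib.sha256(s.encode()).hexdigest()

stored_hash = sha256_hex("4071")  # pretend this digest was leaked

def brute_force_pin(target_hash):
    """Try all 10,000 possible 4-digit PINs against the target hash."""
    for candidate in range(10000):
        pin = f"{candidate:04d}"
        if sha256_hex(pin) == target_hash:
            return pin
    return None  # exhausted the search space without a match

print(brute_force_pin(stored_hash))  # 4071
```

The search finishes in well under a second, which is why short numeric secrets protected only by a fast hash offer essentially no security.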
37) Mitigation
Establishing a plan for handling threats to a computer, server, or network. Companies try to reduce impact by removing threats or reducing their potential impact through remedial actions, prevention, or outright solutions.

38) Manual Application Security Testing (MAST)
The process of finding and fixing security issues in mobile applications across devices, networks, and servers.

39) National Vulnerability Database (NVD)
The NVD is the U.S. government repository of standards-based vulnerability management data. It is represented using the Security Content Automation Protocol (SCAP).

40) Open-Source Software
Any code that is specifically designed to be commonly accessible and open to the public. Essentially, the creator and copyright holder openly grants any user the rights to see, use, modify, and freely distribute the code.

41) OWASP Top 10
A 'top 10' list of the most critical or dangerous security risks to web applications. Developers around the globe refer to, and can contribute to, the making of the list.

42) Patch
A set of code deliberately inserted into an otherwise running or executable program with the intention of addressing a vulnerability or flaw. Patches are needed when flaws are identified after the affected software is released.

43) Patch Management
The process of distributing and applying updates to the software in a network of computers.

44) Penetration Testing
A technique for finding flaws or security vulnerabilities in a computer system by simulating a cyberattack against it. The intention is to find exploitable vulnerabilities.

45) Phishing
The act of obtaining legitimate or authorized security credentials or other sensitive information by fraudulent means.

46) Quality Assurance
The process of ensuring that all software complies with predefined standards and parameters; companies must ensure the proper quality of the software.

47) Ransomware
Malicious code or a program that locks the owner out of their own information and demands payment in exchange for an unlock key.
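Injection (entry 12), a perennial fixture of the OWASP Top 10 (entry 41), comes down to untrusted input being interpreted as code rather than data. A minimal sketch using Python's built-in sqlite3 module; the table, rows, and attacker string are invented for illustration:

```python
# SQL injection in miniature: the same lookup built unsafely by string
# concatenation vs. safely with a parameterized query. Data is invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

attacker_input = "alice' OR '1'='1"

# Unsafe: concatenation lets the input rewrite the query's logic.
unsafe_rows = conn.execute(
    "SELECT name FROM users WHERE name = '" + attacker_input + "'"
).fetchall()

# Safe: a parameterized query treats the input strictly as data.
safe_rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (attacker_input,)
).fetchall()

print(len(unsafe_rows))  # 2 -> the injected OR clause matched every row
print(len(safe_rows))    # 0 -> no user is literally named the payload string
```

The fix is not input "cleaning" but keeping code and data separate, which is exactly what parameterized queries (and prepared statements generally) provide.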
48) Remediation
The process by which organizations recognize and address security threats to their systems, usually by addressing existing vulnerabilities.

49) Runtime Application Self-Protection (RASP)
Technology that remains vigilant for attacks on an application in real time. It detects attacks from the inside and prevents exploits from within.

50) Rootkit
A collection of computer software, typically malicious, designed to enable unauthorized access to a computer or an area of its software.

51) Secure Coding
A set of practices that applies security considerations to how software is coded and encrypted, to best defend against cyberattacks and vulnerabilities from the beginning.

52) Security Information and Event Management (SIEM)
A software solution that gathers and analyzes activity throughout an organization's technology infrastructure. The aim is to generate detailed and actionable reports on security-related incidents and events, since companies need to stay alert to potential security issues.

53) Social Engineering
Any attempt to trick people into voluntarily giving up confidential or sensitive information that can be used to attack systems or networks.

54) Software as a Service (SaaS)
A software distribution and licensing approach that relies on a subscription model and is usually centrally hosted. Users need not purchase and install the software on individual computers.

55) Security Operations Centre (SOC)
A centralized facility that deals with security issues on an organizational and technical level. It houses an information security team responsible for monitoring and analyzing the situation, usually in real time.

56) Software Vulnerability
An error in software that can be used by a hacker to gain unauthorized access to a system or network.

57) Software Weakness
Flaws, weak points, vulnerabilities, and other mistakes in software development, implementation, design, or architecture that can make systems and networks vulnerable to exploitation or attack.
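Cross-site scripting (entry 15) hinges on untrusted input being rendered as markup in the browser. One standard mitigation is output escaping; a minimal sketch using Python's standard library, with a classic demo payload string:

```python
# Escaping untrusted input before rendering (a standard XSS mitigation).
# html.escape converts markup-significant characters to HTML entities,
# so the payload displays as inert text instead of executing as script.
import html

payload = "<script>alert('xss')</script>"  # classic demonstration payload

escaped = html.escape(payload)
print(escaped)  # &lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;
```

Real applications combine this with context-aware templating (most template engines escape by default) and HTTP defenses such as a Content-Security-Policy header, since escaping rules differ between HTML bodies, attributes, and JavaScript contexts.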
58) Source Code
A human-readable list of code and commands that a programmer compiles into an executable computer program.

59) Static Application Security Testing (SAST)
An application security tool that analyzes an application from the "inside out" by scanning its source, binary, or byte code.

60) Software Composition Analysis
A methodology that gives users better visibility into the open-source components of their applications and judges potential areas of risk arising from third-party and open-source components.

61) Source Code Analysis
A methodology for analyzing source code, or compiled versions of software, to find potential vulnerabilities, weak spots, etc.

62) Spyware
Malicious code or software designed to look for information and relay it back to its creator.

63) Threat
Any potential negative action or event, usually facilitated by a vulnerability, with the potential of adversely impacting organizations.

64) Trojan Horse
A type of malicious code or application that looks and feels authentic but can cause harm or steal information.

65) Threat Modelling
A process for recognizing threats, and any missing safeguards, in order to prioritize risk mitigations.

66) Unified Threat Management (UTM)
A security solution that provides multiple security functions to a network as a single system. A UTM includes a number of network protections.

67) Virus
Malicious code or software that 'infects' a computer system and can potentially cause problems, steal data or corrupt systems.

68) Virtual Private Network (VPN)
A technology that can encapsulate and transmit network data over another network. Often used to access information not typically available through public internet access.

69) Vulnerability Assessment
The practice of defining, identifying, classifying, and prioritizing security holes (vulnerabilities) in a computer, a network, or IT infrastructure.

70) Vulnerability Management
The continuous process of identifying, classifying, and remediating security holes.
71) Web Application Firewall (WAF)
An application-specific firewall for HTTP or internet-based applications or exchanges. The intention is to protect against common attacks such as cross-site scripting (XSS) and SQL injection.

72) White Hat
Any hacker or computer security expert who seeks permission to try and break into a computer system in order to expose and report on the findings.

73) White Box Testing
Testing an application's internal coding and infrastructure. The process focuses primarily on strengthening security, the flow of inputs and outputs through the application, and improving design and usability.

74) Web Application Pentesting
A simulated cyberattack against a computer system offering a web application, to check for exploitable vulnerabilities.

75) Worm
Malicious software designed to replicate itself and spread deeper into a computer network after an initial infection.

76) Zero-Day
A flaw inside a computer system or software that is not yet known to the software maker or to antivirus vendors, and therefore has no patch available.

77) Zero False Positives
A result in which a security scanner reports no incorrect findings: nothing benign is mistakenly tagged as a threat, so every reported issue is real.

78) Zero False Negatives
A result in which a security scanner misses no real threats. A false negative occurs when a system mistakenly confirms there are no viruses or threats when, in fact, there are underlying security issues.
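Entry 20 lists hashing and key management among common data-security steps. A minimal sketch of salted password hashing with constant-time verification, using only Python's standard library; the iteration count and salt size shown are illustrative, not a tuning recommendation:

```python
# Salted password hashing (glossary entry 20) with PBKDF2 from the
# standard library. A per-user random salt defeats precomputed lookup
# tables; the slow key-derivation function resists brute force.
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative; pick a cost your hardware tolerates

def hash_password(password: str, salt: bytes = None):
    """Return (salt, digest) for storage; a fresh salt per user."""
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    """Re-derive and compare in constant time to avoid timing leaks."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, expected)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

Note the contrast with the brute-force example under entry 10: a fast unsalted hash of a short secret falls in milliseconds, while a salted, deliberately slow derivation makes the same attack vastly more expensive.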
To align a camera installed on a patrol vehicle, you must determine where to set up the tripod that holds the alignment target.

Before you begin

What you should know

Use the following illustration to help you measure X, Y, and D relative to your vehicle:
- Secure the end of the measuring tape to the camera using one of the hook and loop clips.
- From the height of the camera lens, measure the X distance. Measure parallel to the ground and perpendicular to the vehicle's wheel line. Use the plumb-bob weight and chalk to mark the ground where X falls.
- From the X point you marked on the ground, measure the Y distance. Measure on the ground, parallel to the vehicle's wheel line. Use the chalk to mark Y on the ground.
- To validate D, measure the D distance from the height of the camera lens. Measure parallel to the ground. Use the provided plumb-bob weight to see where D falls on the ground. If you are more than three inches off from the Y point, you must re-measure X and Y.
- Set up the tripod on the Y point you marked, raise the target so that its center is 36 in. (0.9 m) from the ground, and orient the target so it is facing the camera.
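Assuming D is intended as the straight-line distance from the camera lens to the target position (with X measured perpendicular to the wheel line and Y parallel to it, all at lens height), the three measurements form a right triangle, so D can be sanity-checked as the hypotenuse of X and Y. That geometric reading is an assumption here, not stated in the procedure; the sample values below are invented, and the tolerance mirrors the three-inch allowance above:

```python
# Sanity check for the D measurement, assuming X and Y are the two legs
# of a right triangle and D its hypotenuse. Sample values are invented.
import math

TOLERANCE_IN = 3.0  # the procedure allows up to three inches of error

def check_alignment(x_in: float, y_in: float, d_in: float) -> bool:
    """Return True if the measured D agrees with sqrt(X^2 + Y^2) within tolerance."""
    expected_d = math.hypot(x_in, y_in)
    return abs(d_in - expected_d) <= TOLERANCE_IN

# Example: X = 48 in, Y = 36 in gives an expected D of exactly 60 in.
print(check_alignment(48, 36, 60))  # True
print(check_alignment(48, 36, 65))  # False: re-measure X and Y
```

If the check fails by more than the tolerance, the procedure's instruction applies: re-measure X and Y before placing the tripod.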
Digital work is essential in government offices today, but they still have a great need for printers. Some agencies have a dozen set up on their network, while others have hundreds or even thousands of these devices. These aren't just any government office supplies: today's printers are multifunctional and capable of communicating information in both directions. When it comes to data security, printers can be one of the biggest vulnerabilities on a network, largely because of an underestimation of how big a threat they actually pose.

Recognizing the Importance of Cyber Security With Printers

The first issue with securing printers on a government network is recognizing that there is a threat. Printers are now multifunctional devices that connect to a network and access information. While an IT professional doesn't generally need someone to ask, "Why is security important?" they may need to hear the question, "Why is security important on a printer?" People often think of printers as single-function devices that receive a message, print it on paper, and that's it. Today, printers send information back and forth across a network. This means they have a lot of functionality that makes them work well in an office. It also means they have capabilities ripe for hackers to exploit when they're unsecured. For an example of why this matters, consider the types of documents printed. Often, these are contracts or other highly sensitive forms of information submitted to other government agencies on paper. That information is all going through a printer that could have malware on it. To prove just how many unsecured printers there are in the world today, and how easy it is to access them, security experts at CyberNews ran an experiment: they successfully hijacked 28,000 printers and made them all print a five-page document.
While they used their access to do something fairly harmless, the experiment shows how easy it was to take over that many devices. Because so few offices look at the security of their printers, hackers know this is one of the easiest ways into a network. When a system network administrator for a government office creates a security plan that doesn't include printers, they aren't purposely ignoring them; it just hasn't become standard to include printers in a cyber security plan yet. Today, offices need to catch up to the threats before malware, ransomware, or other issues get there first.

How to Secure Printer Networks

On a well-configured network, security becomes automated, and compliance with security standards, policies, and protocols is handled systemically. Today, cyber security technicians face the task of finding the best ways to automate and optimize security compliance for all government office supplies and devices, including printers. The options they choose need to work well with the other devices on the network, use trusted and proven methods, and stay up to date with cyber security standards. What issues do these practices need to address in order to properly secure printers on a network? The first step is to figure out where you are:
- How many printers are on your network? What capabilities do they have? Do your printers have hard drives? What access rights do they have?
- Who in your office is using these devices? Currently, are users authenticated, or can anyone use them? Are there different access rights for simply printing versus accessing data stored on your devices?
- What are the staff using your printers for? Are they printing confidential documents? Are they utilizing any of the other functions on these devices?
- Are there currently any measures at all in place to protect the printers? There may be some security installed that isn't enough, or there may not be anything at all.
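The assessment questions above boil down to a per-device checklist. The sketch below is purely hypothetical: the record fields and check names are invented for illustration and do not correspond to any real printer-management API, but the structure shows how an inventory audit could flag devices that fail the basics:

```python
# Hypothetical inventory audit: flag printers whose configuration fails
# the basic checks from the assessment above. Field names are invented.
SECURITY_CHECKS = {
    "default_password_changed": "default admin password still in use",
    "firmware_current": "firmware out of date",
    "hard_drive_encrypted": "storage not encrypted",
    "user_auth_required": "no user authentication for printing",
}

def audit_printer(record: dict) -> list:
    """Return the list of findings for one printer record."""
    return [msg for key, msg in SECURITY_CHECKS.items() if not record.get(key)]

fleet = [
    {"name": "HQ-3F-01", "default_password_changed": True,
     "firmware_current": True, "hard_drive_encrypted": True,
     "user_auth_required": True},
    {"name": "HQ-1F-07", "default_password_changed": False,
     "firmware_current": True, "hard_drive_encrypted": False,
     "user_auth_required": False},
]

for printer in fleet:
    print(printer["name"], audit_printer(printer))
```

In practice the records would come from the agency's asset inventory or a fleet-management tool, but even a spreadsheet-backed version of this check makes the gaps visible and auditable.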
Start by assessing the risks, and build from there.

How to Secure Printer Data on a Government Network

Securing a single printer for a home or business is not like securing printers on a network in a government building. For a home printer, it might be enough to limit network-wide printing capabilities, install a firewall on the network, update the firmware on the device, and change the default password. Not only is this not enough for securing the kind of confidential information on your particular network, it's also inefficient if you need to secure hundreds of devices. You do need to install printer software updates and patches. This may be an overwhelming task, particularly if you have a variety of devices on your network; many offices add to their inventory of printers over time and end up with multiple types of devices across the office. It is a hassle, but at the very least, all devices need to stay up to date. There should be a system in place through which the IT professionals in charge of your network receive notifications of software support or release updates. This may mean enforcing a schedule for regularly checking for updates across all the types of devices on your network. Your printer may have a storage device like a hard drive; these require encryption like any other hard drive on your network. Access rights to the printers need restrictions that only allow company-owned devices. You don't want people bringing in their home laptops and connecting to your printer; this is a security risk. It also means you'll need policies for employees using the printers, and you will need to enforce them. Do your printers have the option of requiring a PIN before people can print? Enabling this feature is a strong way to prevent unauthorized access to your secure devices. During the assessment of your network, you need to take inventory of what the staff uses the printers for.
If they are not using fax or email, you can isolate the printer from these functions. This may mean disabling out-of-network printing and better securing the devices. Have administrator login credentials been set up on your printers? If so, have you changed the passwords from their defaults? You want these securely set up so that your admins can always change any necessary functions and non-admins can’t.

Are any of your staff members using the printers for particularly classified information? It may be necessary to remove the device from the network altogether and connect it to a computer using cables. This may seem extreme, but it is the most secure method if you have work that is particularly sensitive.

Keeping Your Network Secure in the Long Term

While the above covers some of the essential work to secure printers and keep a network safe, it is really just a look at getting started. You’ll also need policies and protocols in place for moving forward in your agency. For example, you will need rules for personal printers. Staff may not consider purchasing their own printer for the office to be a security threat; it needs to be clear to employees that this is not allowed.

Your security standards will need to grow as technology does. We can count on technology to continue changing, and so your policies and procedures must as well. Regular cyber security checks and meetings should occur to review these standards, and these reviews must include printers.

For many offices, it will make sense to work with a solution specialist who is used to meeting the needs of a federal or state-run government office. Your needs are unique, so you need an expert who knows how to keep information secure to the standards of your office.

Are you ready to assess the data security of printers on your network? Schedule a free consultation with our solution specialist today.
The opportunity for Machine Learning in detecting warranty claims fraud

Machine learning (ML) is a branch of artificial intelligence that looks at patterns in data and draws conclusions. Once it gets good at drawing correct conclusions, it applies itself to new data sets to uncover hidden patterns. It is not a single technique or technology, but rather a field of science that incorporates numerous technologies to create systems that can learn from the data in their environment and then make predictions and take actions when confronted with a new situation.

How Machine Learning works

ML techniques can be categorized as supervised or unsupervised. Supervised algorithms require an analyst to provide both input and desired output, in addition to furnishing feedback about the accuracy of predictions during algorithm training. Data scientists determine which variables, or features, the model should analyze and use to develop predictions. Once training is complete, the algorithm applies what it learned to new data acquired from daily operations.

Unsupervised learning does not need to be trained with desired outcome data. It is used against data that has no historical labels. The system is not told the “right answer”; the algorithm must figure out what is being shown. The goal is to explore the data and find some structure within it. Unsupervised learning works well on transactional data. For example, it can identify warranty claims and create profiles of typical claims for certain types of repairs.

Warranty Fraud today

Warranty management has increased in importance dramatically over the years, and the general maturity of the warranty profession in the industry has improved during the past 15 years. However, the battle against warranty fraud continues to be a challenge for many warranty professionals. Industry studies show up to 10% of warranty costs are related to warranty claims fraud, costing manufacturers billions of dollars.
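As a toy illustration of the unsupervised profiling idea above (not a production fraud model), one can build a per-repair-type cost profile with the robust median/MAD rule and flag claims that sit far outside it. All claim data here is invented.

```python
from collections import defaultdict
from statistics import median

def flag_outlier_claims(claims, k=5.0):
    """Flag claims whose cost sits far from the typical cost profile
    for their repair type, using the robust median / MAD rule.

    claims: list of (claim_id, repair_type, cost)
    """
    by_type = defaultdict(list)
    for _, repair_type, cost in claims:
        by_type[repair_type].append(cost)

    # Build a (median, median-absolute-deviation) profile per repair type.
    profiles = {}
    for repair_type, costs in by_type.items():
        med = median(costs)
        mad = median(abs(c - med) for c in costs)
        profiles[repair_type] = (med, mad)

    flagged = []
    for claim_id, repair_type, cost in claims:
        med, mad = profiles[repair_type]
        if mad and abs(cost - med) / mad > k:
            flagged.append(claim_id)
    return flagged

claims = [  # hypothetical claims: (id, repair type, cost in dollars)
    ("C1", "screen", 100), ("C2", "screen", 105), ("C3", "screen", 95),
    ("C4", "screen", 110), ("C5", "screen", 1200),  # suspicious
    ("C6", "battery", 40), ("C7", "battery", 45), ("C8", "battery", 42),
]
print(flag_outlier_claims(claims))  # ['C5']
```

Real systems learn profiles over many more features than cost alone, but the principle is the same: the "right answer" is never supplied, only the structure of typical claims.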
Most companies suspect warranty fraud but are not sure of its extent or of ways to eliminate it. The existing tools and methods to detect warranty fraud are either rules-based, which only identifies known fraud strategies, or complex and expensive, causing manufacturers to unwillingly bear the excessive costs. To add to the equation, the sheer volume of claims makes it very difficult for knowledgeable warranty processors to thoroughly review and analyze individual claims. Business rules applied during the claims process can identify some errors in claims entry, but they are ineffective at mining the warranty data to detect anomalies and patterns that indicate fraud. Manufacturers need a powerful, easy-to-use, and cost-effective warranty fraud detection solution.

Today, fraudulent warranty claims occupy an estimated 3% to 15% of the average company’s warranty costs, which generally average between 1% and 4% of product sales. In individual companies, these figures can be much higher. Even at the low end of the range, this translates to several billion US dollars globally, making both warranty and warranty fraud major issues.

The case for using Machine Learning in Warranty Fraud detection

Businesses must improve the accuracy and speed of their decisions on fraudulent threats. The only mature technology available to achieve this is machine learning. Coupled with the fact that ML allows organizations to identify new fraud strategies as they are adopted, ML provides the perfect spoiler to organized fraud schemes in warranty claims. In fact, one of the most effective uses of ML in identifying warranty fraud is staying ahead of new fraud schemes that the organization has not witnessed before.
Manual review is most effective as the last defense against fraud. While it can be invaluable, particularly in cases where there is no substitute for human insight, manual review works best to help fine-tune the decisions of machine learning models and aid their detection of changing patterns of fraud, rather than serving as the sole fraud detection process. The feedback loop allows information from detected fraud threats to be used to enhance the detection engine so that instances of fraud of that type are identified more quickly and efficiently.

There is little doubt that where large amounts of data are concerned, machines are far more accurate and effective. They are able to detect and recognize thousands of patterns of fraud schemes instead of the few captured by hand-written rules. This is the reason why we use machine learning algorithms for preventing fraud for our clients. Three factors explain the importance of machine learning:

- Reacting at the speed of business – The velocity of commerce is rapidly increasing and will continue to do so for the foreseeable future, so it’s very important to have a quick way to detect fraud. Our merchants want results fast so that they can act fast. It is much easier to withhold payment until the issue is resolved than to pay and try to retrieve the payment after the fraud has been identified. Only machine learning techniques let us achieve that with the sort of confidence level needed to approve or decline a transaction.
- Handling increasing volumes of information – As is true in almost any field, the problem of identification does not lie in the extreme cases, because they are easy to spot. The problem lies in cases that seem to be ‘on the fence’ – which is probably the bulk of your transactions. Machine learning algorithms and models become more effective with increasing data sets.
Machine learning improves with more data because the ML model can pick out the differences and similarities between multiple behaviors. Once told which transactions are genuine and which are fraudulent, the system can work through them and begin to pick out those which fit either bucket. It can then make predictions when dealing with fresh transactions.

- Efficiency – Unlike humans, machines can perform repetitive tasks with the same degree of efficiency and accuracy throughout. ML algorithms do the dirty work of data analysis and only escalate decisions to humans when their input adds insight. ML can often be more effective than humans at detecting subtle or non-intuitive patterns that help identify fraudulent transactions. Moreover, unsupervised ML models can continuously analyze and process new data and autonomously update their models to reflect the latest trends.

The ideal combination – the smart assistant

Machine learning is not a panacea for fraud detection. It is a very useful technology that allows us to find patterns of anomaly in everyday transactions, and it is indeed superior to the human review and rule-based methods organizations previously employed. The ideal situation is for the ML-based fraud detection solution to be a feeder to the experienced human eye. The ML solution throws up ‘suspicious’ warranty transactions for human review. The expert human eye then judges whether or not the transaction is a fraud, bringing their knowledge to bear on the process while at the same time ‘teaching’ the ML solution to further refine its selection criteria to detect fraud more efficiently in the future.
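The reviewer feedback loop described above can be sketched, in a deliberately simplified form, as an escalation threshold that drifts with human verdicts. The scores and the adjustment rule are illustrative assumptions, not a real fraud engine.

```python
class AdaptiveReviewQueue:
    """Toy feedback loop: a model scores claims, humans give verdicts,
    and the escalation threshold drifts in response."""

    def __init__(self, threshold=0.8, step=0.01):
        self.threshold = threshold
        self.step = step

    def needs_review(self, score):
        """Escalate a claim to a human reviewer above the cutoff."""
        return score >= self.threshold

    def record_verdict(self, score, is_fraud):
        # Reviewer confirmed fraud below the cutoff: loosen the threshold.
        if is_fraud and score < self.threshold:
            self.threshold -= self.step
        # Reviewer cleared a flagged claim: tighten it slightly.
        elif not is_fraud and score >= self.threshold:
            self.threshold += self.step

queue = AdaptiveReviewQueue()
queue.record_verdict(0.75, is_fraud=True)   # missed fraud -> lower cutoff
queue.record_verdict(0.85, is_fraud=False)  # false alarm  -> raise cutoff
print(round(queue.threshold, 2))
```

Production systems feed verdicts back into model retraining rather than a single scalar, but the shape of the loop, detect, review, teach, is the same.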
It may seem like disaster strikes out of nowhere, but in many instances, there are signs emergencies are about to take place. Hurricanes are usually heralded by atmospheric conditions, financial crises stem from irregular market activities, and emergency room visits are prompted by preexisting health concerns and other factors. The problem isn't always a lack of information, but rather too much. It can be hard to tell which data indicates a problem is going to occur and which can be ignored as irrelevant.

IBM detailed how Eric Holderman, a disaster preparedness consultant, wants to overcome these information obstacles and use predictive analytics to simulate emergency situations to train response teams and inform the public. His strategies could be just what hospital emergency rooms need to prepare for the worst.

It's impossible to actually look into the future, but an organization can learn from the past. Holderman plans to improve disaster management by collecting and analyzing the data created by similar incidents. The goal is to protect lives and decrease the amount of money lost to emergency situations. By using analytical solutions and machine learning, the more relevant information fed into predictive modeling tools, the better they can compare data to spot trends and prepare users for incidents in the future. For example, studying the effects of storms can help people who operate seaports recognize weather conditions that should cause concern and secure the most likely points of damage.

Plans for emergency rooms

Predictive analytics can be applied to large and small medical emergencies. The National Center for Biotechnology Information detailed how organizations use machine learning and data collection to predict the spread of disease and prevent pandemics.
These predictive practices can also help health organizations prepare resources like pharmaceuticals for emergency demand. By studying the current consumption of pharmaceuticals and then comparing existing datasets to emergency models, organizations can make decisions about emergency stockpiles and alternative distribution.

The models used in predictive analytics should come from relevant information. In the seaport example, it's important that the information used for analytics comes from structures of similar size with comparable traffic. An emergency room can prepare for major disasters or disease outbreaks by comparing information in its healthcare management system to that provided by regulatory agencies and data collected from similar health organizations.

Whatever the results, the information must be made available to the parties potentially affected by future disasters. This could include doctors, resource managers, ambulance drivers, and patients. The information generated by predictive analytics is only helpful if it's visible and used to anticipate emergencies before it's too late.
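A minimal sketch of the stockpile decision above: compare recent consumption, scaled by a modeled surge, against what is on hand. Every number here is a made-up example, not real demand data.

```python
def stockpile_gap(daily_usage, surge_multiplier, surge_days, on_hand):
    """Estimate how far the current stockpile falls short of a modeled
    emergency: baseline demand scaled by a surge factor over surge_days.

    daily_usage: recent consumption history (units per day)
    """
    baseline = sum(daily_usage) / len(daily_usage)
    projected_demand = baseline * surge_multiplier * surge_days
    return max(0.0, projected_demand - on_hand)

# Hypothetical numbers: ~200 doses/day normally, a modeled outbreak
# triples demand for 14 days, and 5,000 doses are on hand.
usage = [190, 210, 205, 195, 200]
print(stockpile_gap(usage, surge_multiplier=3, surge_days=14, on_hand=5000))
```

A real model would draw the surge multiplier from historical outbreak data rather than a fixed constant, which is exactly the "comparing existing datasets to emergency models" step the article describes.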
Interested in learning how enabling IoT for manufacturing can benefit your operations? IoT, also known as the Internet of Things, is revolutionizing the way businesses across many industries operate, helping them work smarter and more efficiently. Access to data is at the core of the insights that will help your operation streamline its processes, but you need to really understand what to track and which processes can be improved by IoT.

IoT really just refers to internet-connected devices. In this context, it is known as “smart” manufacturing; it is also sometimes called the Industrial Internet of Things, or IIoT. Devices are deemed “smart” in the sense that they are online at all times and can be controlled and monitored. At home, IoT devices are things like meters, lighting, fridges, and televisions – everyday items kept on 24/7 to increase efficiency. In the commercial world, these devices can be embedded in products, equipment, and even processes. IoT for manufacturing creates “smart” facilities. Incorporating it in the ways described here can reduce wasted material, improve processes, and increase output as a result.

IoT Applications in Manufacturing

IoT for manufacturing plants and processes can be put in place for warehousing, shipping, logistics, transportation, assembly, packaging, and even administrative aspects like document management. In these settings, connected devices provide real-time updates on things like incoming and outgoing goods or production volumes and quality.

Benefits of IoT-Enabled Manufacturing

IoT may just be the next big thing in this industry, alongside analytics and machine learning. Bain & Company predicts that the combined IoT market will grow to at least $520 billion in 2021, more than doubling its 2017 value of close to $235 billion.
There are a number of benefits for businesses that put IoT in place, including:

- Energy efficiency, with insights down to device level
- Predictive maintenance, reducing unplanned downtime and improving efficiency
- Quality control for materials and products
- Enhanced performance and monitoring of equipment
- Ability to resolve problems remotely

Get Started with IoT

Prioritize and identify what your business problems are, what data you need to be able to analyze in real time, and what processes might be improved by connecting in this way. You can create small pilot applications of IoT to test and determine whether you will get a true return on investment.

Implementing and integrating IoT in manufacturing business environments is challenging in a number of ways. You could see resistance from staff who see it as a threat to their jobs, among other plausible issues. The real key here is to be open and honest about introducing any new technologies. Educate your employees about the benefits to the business and how those feed their success – even before you start testing.

Security of devices is also a huge potential issue that deserves its own entire piece. You really should start with an assessment of your manufacturing environment, followed by a comprehensive plan to keep it secure as you add these connected technologies.

IoT can potentially give your manufacturing business a huge competitive advantage by allowing you to operate and serve customers better; however, before all of that, you must determine the best way to use IoT for the greatest return. We can help you with everything – from deciding whether IoT options are right for your business, all the way down to the ongoing management and monitoring involved in securing and optimizing the technologies in your environment. Get in touch with us to get started.
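Of the benefits listed above, predictive maintenance is the easiest to sketch: watch a rolling average of a sensor reading and raise an alert before an outright failure. The window size, limit, and vibration values below are illustrative assumptions, not real equipment thresholds.

```python
from collections import deque

class VibrationMonitor:
    """Toy predictive-maintenance check: alert when the rolling average
    of a sensor reading crosses a limit, before outright failure."""

    def __init__(self, window=5, limit=7.0):
        self.readings = deque(maxlen=window)
        self.limit = limit

    def add(self, value):
        """Record a reading; return True when maintenance should be scheduled."""
        self.readings.append(value)
        average = sum(self.readings) / len(self.readings)
        return average > self.limit

monitor = VibrationMonitor()
stream = [5.1, 5.3, 5.0, 6.8, 7.9, 8.4, 9.0]  # made-up mm/s values
print([monitor.add(v) for v in stream])
```

Averaging over a window rather than alerting on single spikes is the usual first defense against noisy sensors; a pilot deployment could start with exactly this kind of rule before investing in anything model-based.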
- The NSA published two advisory reports on securing personal devices and networks from external threats. The reports are primarily designed for system administrators and teleworkers associated with National Security Systems (NSS) and the DoD.
- Researchers at NIST developed a new method called the Phish Scale to help organizations avoid phishing attacks. The Phish Scale uses a rating system based on the message content of a phishing email.
- NIST also published a cybersecurity practice guide to help organizations recover from ransomware and other malware attacks. The goal is to effectively monitor, detect, and retrieve data in case of attacks.
- The U.S. Federal Energy Regulatory Commission (FERC) and the North American Electric Reliability Corporation (NERC) released a report outlining the best cybersecurity practices for electric utilities. The guidelines are aimed at making these industries cyber-resilient.
- Software vendor Tyler Technologies, eyecare giant Luxottica, and laser company IPG Photonics were targeted in separate ransomware attacks, resulting in significant data loss and disruption of systems.
- Unsecured databases were responsible for data leaks at Midwest Property Management and Town Sports International. While Midwest Property Management exposed 1.2 million records, the leak at Town Sports International affected a terabyte of data associated with the company.
- The official website of the Ukraine National Police was temporarily taken down after an intrusion by malicious actors; the threat actors and the attack method are still unknown. In another incident, hackers leaked the personal details of 1,000 high-ranking Belarus police officers on Telegram. The leaked data included names, dates of birth, and job titles of officers.
- The online retail platform Shopify Inc suffered a customer data breach after two employees stole transaction records.
The exposed data included email addresses, names, and physical addresses of customers.
- ArbiterSports paid a ransom to hackers to prevent the leak of data on 540,000 sports referees. The attack occurred in July this year.
- Microsoft patched one of its backend servers that had exposed over 6.5 TB of log files containing 13 billion records originating from the Bing search engine.
- The encrypted email service Tutanota experienced a series of DDoS attacks, resulting in downtime of several hours for its users.
- The College of Nurses of Ontario fell victim to a cyberattack, forcing the governing body for nurses to shut down its services. Separately, Long Island’s tertiary care and regional trauma center, Stony Brook University Hospital, notified patients about a data breach stemming from Blackbaud’s ransomware attack.
- The University of Tasmania disclosed that the personal details of almost 20,000 students were compromised in a phishing attack; information belonging to 19,900 students was made public through the Microsoft Office 365 platform SharePoint.
- This week’s list of newly discovered malware includes the Taurus Project stealer, the Alien Android trojan, and TinyCryptor ransomware. The new Taurus Project information stealer was observed in a malspam campaign targeting users in the U.S., while the Alien trojan came with capabilities to steal credentials from 226 Android applications. TinyCryptor is a creation of the OldGremlin hacking group, which recently launched a successful attack on a Russian medical company.
- A new ransomware operation named Mount Locker has been active since July 2020, stealing victims’ files before encrypting them and then demanding multi-million-dollar ransoms. The ransomware uses ChaCha20 and RSA-2048 to encrypt files.
- Microsoft removed 18 Azure Active Directory applications from its Azure portal that were created and abused by a Chinese state-sponsored hacker group.
These apps were part of a spear-phishing campaign that used COVID-19 themes to target organizations.
- The return of Zebrocy and Emotet in different cyberespionage campaigns was also reported by researchers and federal agencies. While the Zebrocy campaign leveraged fake NATO documents to target government bodies in specific countries, the Emotet trojan made use of legitimate email threads to evade detection. Additionally, security agencies in Italy and the Netherlands issued advisories on the uptick in Emotet’s activities. Meanwhile, the recently discovered AgeLocker ransomware was uncovered targeting QNAP NAS devices and, in some cases, stealing files from victims.
- Speaking of returning malware strains, the Cybersecurity and Infrastructure Security Agency (CISA) warned of an uptick in attacks using the LokiBot information stealer. The alert issued by the agency highlighted its intrusion, detection, and prevention methods.
- In a recent report, IBM revealed that the Mozi botnet accounted for 90% of the attacks on IoT devices between October 2019 and June 2020. The targeted devices included Netgear, D-Link, and Huawei routers.

Posted on: September 25, 2020
Securing IoT Medical Devices

With the expansion of the Internet of Things come increased cybersecurity concerns. The technology is finding its way into every industry, elevating connectivity and making our lives easier. However, there are downsides to this, and ever-growing connectivity is becoming more of a problem as cybercriminals seek to exploit it. Perhaps this is most visible in the medical industry, where patient data security is of utmost importance. Even though it is a top priority for healthcare providers, patient data frequently gets compromised. This is why it’s imperative to secure IoT medical devices by preventing malicious exploits and adopting various security-related procedures. Let’s have a closer look at the requirements for securing IoT medical devices.

What Manufacturers Are Doing to Secure IoT Medical Devices

For manufacturers of medical devices, it’s crucial to stay on top of the latest trends that make it easier to implement stronger cybersecurity measures. They devote plenty of attention to integrating multicore systems-on-chip (SoCs) into their designs because there is an increasing need for systems to process information and tasks quickly. It’s especially important to choose the right silicon for the processor and to include processor security features. Secure boot and boot fuses, device partitioning, and crypto engines are used to prevent different kinds of cybersecurity problems.

There’s also the adoption of Linux as the operating system that fits medical devices best. Thanks to its proven toolchains and APIs, Linux is safer than most operating systems, and it’s also feature-rich.

Another critical challenge for medical device manufacturers and developers is making sure that IoT doesn’t make it easier for cybercriminals to invade the system despite increased connectivity. There are different ways in which they are doing that.
Protecting IoT Medical Devices

Developing security for a medical IoT device is no easy task. Those in charge need to assess weaknesses in the system and determine which surface area is most vulnerable to cyber attacks. To prevent cybersecurity issues, developers need to look at the three stages of data security.

Data at rest

From the moment a device is powered on to the point when it’s fully operational, data has vulnerabilities. Possible measures to take in this stage relate to the security of storage and to the root of trust and chain of trust. The root of trust and chain of trust need to be established (through software-based solutions if need be), and storage security should be kept at a high level, with anti-tampering methods and proper storage of the bootable image.

Data in use

When the device is operating normally, processing and generating data, the data is vulnerable to attacks. Developers can improve cybersecurity in this stage by using hardware-enforced isolation, software-enforced separation, userspace isolation, or information and data obfuscation. Physically isolating applications by keeping multiple SoCs side by side is an excellent solution for those who can afford it. Alternatively, you can separate applications using software solutions and add another level of security by obfuscating the data.

Data in transit

When the device is on and data is leaving or entering it, data may be at its most vulnerable. In this stage, it’s important to make sure that the receiver of the information has been properly identified, using mutual attestation. In case that fails and the device gets hijacked, developers have to ensure that data is still protected by encrypting it. That’s usually accomplished by using SoCs with crypto engines that are quite difficult for hackers to crack.
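The attestation step mentioned for data in transit can be illustrated with a simple HMAC challenge-response over a pre-shared key. This is a sketch only: a real device would keep the key in secure storage, and true mutual attestation runs the exchange in both directions so each side proves itself to the other.

```python
import hashlib
import hmac
import secrets

def make_challenge():
    """Generate a fresh random challenge (nonce)."""
    return secrets.token_bytes(16)

def respond(shared_key, challenge):
    """Prove knowledge of the shared key without sending it."""
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()

def verify(shared_key, challenge, response):
    """Check the response; compare_digest avoids timing leaks."""
    expected = hmac.new(shared_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# Device and server hold the same provisioned key.
key = secrets.token_bytes(32)
challenge = make_challenge()
print(verify(key, challenge, respond(key, challenge)))                     # legitimate device
print(verify(secrets.token_bytes(32), challenge, respond(key, challenge)))  # wrong key fails
```

Because only a keyed digest crosses the wire, a hijacked receiver that lacks the key can neither answer a fresh challenge nor replay an old answer.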
For any conscientious IoT medical device manufacturer and developer, cybersecurity should be at the top of the list of priorities. A specialized IoT security partner is your best choice to help you reach and maintain your IoT security goals while minimizing your costs and security risks.

Originally this article was published here.

This article was written by Roland Atoui, Managing Director & Founder of Red Alert Labs, an expert in information security and certification with more than 10 years of experience in the industry. From smart cards to smartphones to smart manufacturing, Roland is a new-technology enthusiast whose current mission is to bring trust to the Internet of Things.
Security and Privacy

Big cloud service providers are already thinking about how to bring the Internet of Things together on an enterprise scale. IBM, Microsoft, Amazon, and Samsung (they’re obsessed with the IoT) have all developed platforms to help businesses manage their Things. This allows your business to create a single point of access, reducing its attack surface. Moreover, because these companies are in the business of data, their efforts to keep that data secure will always be cutting edge.

A centralised platform also means greater control. While the cloud may not discourage the collection of private data, it can make it much easier to control who has access to it, how it is used, and how securely it is stored.

Data Storage and Analysis

The number of connected devices has grown from a few thousand beginning in the 1980s (think ATMs and pay-at-the-pump) to an expected 25 billion by 2020. Most of these devices can be used to gather data that can be useful in making a business run more efficiently, says Lance Spellman, president at Workflow Studios.

Currently, most IoT data are not used. For example, on an oil rig that has 30,000 sensors, only 1 percent of the data are examined. That’s because this information is used mostly to detect and control anomalies—not for optimisation and prediction, which provide the greatest value. – McKinsey

The amount of data made available by the Internet of Things is unprecedented. For companies to take advantage of it, they need a lot of space to store it and a lot of power to process it. Cloud storage provides a reliable, secure, and expandable option for the former, while SaaS offerings continue to provide more and more options for processing and analysis. This ability to actually use the information being gathered has implications for almost every other area of IoT concern.
There are a dozen ways for various devices to connect to the Internet and each other: 2G/3G/4G, NFC, WiFi, ZigBee, Bluetooth, WSAN, Z-Wave, good old-fashioned wires. There is even a developing technology (called LiFi) that uses light waves to transmit information. Unfortunately, most devices operate on a single channel, preventing them from interacting in ways that could be useful. The information they transmit comes to rest in a single-device app, and there is where the usefulness ends.

To deal with the first problem, the tech world has started to develop hubs (we know, yet another Thing) that are able to communicate through multiple channels and collect signals from a wide variety of devices, directing those communications through a single app. As these technologies become increasingly sophisticated, they will become more and more adaptable and can be made to interface with custom software designed to make those communications more useful.

Energy and Bandwidth

The ability to monitor precisely the amount of energy and bandwidth actually being used helps to identify areas of waste and eliminate them. The reality is, most IoT devices use very little bandwidth to operate, if any; it is transmitting the data they gather that may require a bump in your connection. Uniting devices on a cloud platform allows you to reduce redundancies in data transmission, which may help to keep this under control.

On the energy side, though the cloud may not help you get around the need for more outlets, it can eliminate the need for more servers, one of the largest consumers of energy in an office. The datacenters that make the cloud possible are, admittedly, huge consumers of power, but they also have huge resources at their disposal to make the generation of that power cleaner and its use more efficient. This reduces the environmental impact and the cost of the energy needed to power your Things.
Knowing exactly what you need can help you predict exactly what backup you need – what kind of battery or generator power, how much bandwidth to avoid maxing out. Centralised analysis can help you do that. What’s more, the cloud allows you to be flexible, which means that even if the power at your office goes out, workers can keep working from other locations and still have access to the entire business. Even a total loss at your physical office leaves your data and operating systems secure and functional.

Monitoring and processing data allows you to easily see where automation has actually improved efficiency and where human hands and minds are needed to push processes forward and create more innovative approaches. Centralised control puts the technology in your hands, at your disposal, making it the useful tool it is meant to be.

While we can’t claim that the cloud can keep computers from getting too smart, the clear picture it can provide of the landscape of our technology may keep us from feeling like we need an A.I. smarter than us to keep it all up and running. And if not, at least it could reduce the number of plugs we need to pull when it all goes to hell.

Original Article: http://workflowstudios.com/taming-the-internet-of-things-with-the-cloud/

The author of this blog is Lance Spellman, president at Workflow Studios.
We’ve entered a time when hard drives are becoming less important than data speeds, syncing, and remote storage. More and more end-users are saving their files in the cloud for convenience, safety, and cost savings.

That said, some people still have concerns about cloud computing -- namely around security. How safe are files that are stored hundreds or thousands of miles away, on some other organization’s hardware? Because of these concerns, many conversations still happen around cloud data protection, security threats, outages, and potential cloud data breaches.

All that aside, the truth is that data stored in the cloud is probably more secure than files, images, documents, and videos stored on your own local hardware. Your security can be even more assured by following a few best practices for cloud data storage. We’ll explore these questions in this article.

What this article will cover:
- What is cloud data?
- How secure is data in the cloud?
- Why a secure cloud is important
- Tips for securing cloud data
- Maintaining a secure cloud for yourself and for IT clients

What is cloud data?

Simply put, the cloud refers to any type of software or service that is delivered over an internet connection rather than being housed on your personal computer or device. Most end-users have some familiarity with this concept from using Dropbox, MS OneDrive, Google Docs, or even Netflix.

Through cloud services, users can access their files wherever they are by using a device connected to the internet. Documents can be easily shared between devices. Files can be accessed on the go. Many modern digital cameras even automatically push photos into cloud storage so they can be accessed from a phone or laptop immediately after shooting them.

There are many popular cloud service providers in the market who specialize in large-scale data storage and cloud infrastructure.
The top players include familiar names like Google Cloud Platform, Amazon Web Services (AWS), and Microsoft Azure.

How secure is cloud data?

For the most part, the cloud can be as secure as or more secure than your own hard drive, physical server, or data center. As long as the cloud provider has adopted a comprehensive, robust cybersecurity strategy that is specifically designed to protect against risks and threats, the modern cloud is extremely safe and reliable.

This truth does create a small problem. Many organizations haven’t realized that legacy security solutions and pre-cloud postures may not be enough to protect them when they’ve migrated, even partially, to the cloud. As you might expect, security planning must be updated to meet the requirements of this specific environment.

It’s also important to remember that the cloud provider is only partially responsible for data security. Cloud security falls under a shared responsibility model, which means the security of cloud data is the responsibility of both the cloud service provider (CSP) and its customers.

Cloud security risks

The benefits of the cloud are undeniable, but there are challenges that must be addressed. It’s important to know what risks exist for organizations that don’t take the proper security measures:

Cloud data breaches are executed differently than those against local hardware. Whereas traditional attacks make heavy use of malware, cloud attackers exploit misconfigurations, access controls, stolen credentials, and software vulnerabilities to gain access to data.

The top vulnerability in a cloud environment comes from improperly configured accounts and software. Misconfigurations can lead to superfluous privileges on accounts, insufficient logging, and other security gaps that can be easily exploited.

End-users and organizations often use APIs to connect services and transfer data between entities -- whether that be different applications or entirely different businesses.
Because APIs are built to pull and transmit data, changes to policies or privilege levels can increase the risk of unauthorized access.

Privileged access management

Organizations using the cloud should not maintain the default access controls of their cloud providers. This is especially troublesome within a multi-cloud or hybrid cloud environment. Inside threats should never be underestimated, and users with privileged access can do a great deal of damage.

8 tips for securing cloud data

What steps should you take if you’re partially responsible for the security of your data stored in the cloud? The following tips and best practices will help you keep your information safe:

1) Use encryption

A smart first step in cloud protection is using a cloud service that encrypts your files both in the cloud and on your computer. Encryption of data in motion and data at rest helps ensure that hackers or third parties -- including your cloud provider -- can’t make use of your data even when it’s stored on their systems.

2) Stay on top of updates

As a general rule, you should always keep your software up to date. Patching software is a major cybersecurity concern both inside and outside of the cloud, as out-of-date applications can leave doors open for intrusion or exploits. Although your CSP is responsible for updating software in their own data centers, some of your local software for accessing the cloud may still need to be updated locally. IT providers who are responsible for numerous updates across many machines are advised to use patch management tools to automate the important task of updating.

3) Configure privacy settings

Once you sign up with a cloud service provider, look for privacy settings that allow you to specify how your data is shared and accessed. These settings will usually let you choose how long data is stored and what information a third party is allowed to retrieve from your devices.
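To make the encryption tip concrete, here is a toy sketch of symmetric encryption at rest. The XOR cipher below is for illustration only and is not secure; real deployments should use a vetted library such as the `cryptography` package’s Fernet, and the plaintext here is a made-up placeholder:

```python
import os

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher: XOR each byte with the key, repeating the key.
    # Encrypting and decrypting are the same operation. NOT for real use.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = os.urandom(32)                     # keep this secret and backed up
token = xor_cipher(b"quarterly-report contents", key)   # safe to upload
restored = xor_cipher(token, key)        # only recoverable with the key
assert restored == b"quarterly-report contents"
```

The design point is the same regardless of the cipher: if encryption happens on your machine before upload, a breach of the provider exposes only ciphertext, and losing the key means losing the data.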
4) Always use strong passwords

The vast majority of successful cyberattacks are possible due to weak passwords. End users should always follow strong password practices for all of their accounts, but even more so when using cloud services that are designed to be accessible by anyone with the correct login credentials.

5) Use two-factor authentication

Along with strong passwords, multi-factor authentication or two-factor authentication provides a huge boost to cloud security. The most effective options are those which ping your phone or an app like Google Authenticator with a one-time use code whenever you try to log in. This ensures that even if a malicious actor gets your login credentials, they won’t be able to complete the login without access to your personal device.

6) Don't share personal information

Social media has been a boon for hackers who are extremely skilled at bypassing passwords and security questions by skimming personal information. Some trends on social media -- like “games” where users are asked to repost something with their first pet’s name or the street they grew up on -- are purposefully used to gain information that helps bypass security challenge questions. Avoid posting such personal information publicly, regardless of how innocuous it might seem.

7) Use a strong anti-malware and anti-virus tool

While traditional AV isn’t the cybersecurity cure-all that it once was, it’s still an important part of an overall cybersecurity plan. Look for solutions that offer comprehensive features like remote wiping and AI-powered threat detection.

8) Be cautious with public wifi

It’s difficult to prove that public wifi connections are safe, so use them sparingly. Never connect to a hotspot that you’re not 100% sure is legitimate. Cybercriminals often use portable wifi interceptors and spoofed hotspots to gain access to personal devices, especially in places like cafes and airports. Using a VPN can help protect against some of these dangers as well.
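The one-time codes mentioned in tip 5 are typically generated with the TOTP algorithm (RFC 6238), which apps like Google Authenticator implement. A minimal sketch using only the standard library:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, period=30, digits=6, t=None):
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF)
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T = 59
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", digits=8, t=59))  # 94287082
```

Because both sides derive the code from a shared secret and the current time, a stolen password alone is useless without the enrolled device holding the secret.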
NinjaOne and cloud security

Features like Ninja Data Protection are purpose-built for securing local and cloud data. Encryption and integrated backup solutions help ensure that your organization’s most valuable information assets remain protected around the clock.
- Full Image Backup
- Document, File, and Folder Backup
- Endpoint Management
- Patch Management
- Bitdefender Advanced Threat Security
- Comprehensive Ransomware Defense

With more data moving to the cloud, ensuring cloud security is more important than ever before. The cloud, while more secure overall than during its inception, still presents a lucrative target for hackers looking for intellectual property, trade secrets, and personal information.

It’s important to know that cloud security is a shared responsibility between the cloud provider and their customer. Choosing the right CSP and supporting tools will help you keep your cloud data secure, as will following the best practices outlined above. By taking a few smart steps and partnering with the right solution providers, you can rest easy knowing that your data in the cloud is safe.
ISPs do exactly what the name describes: provide internet service to customers, in both residential and commercial environments.

Internet service providers first emerged in the late 1980s and early 1990s, when internet access first started becoming widespread. Initially, telephone companies operated as ISPs, due to the prevalence of dial-up internet. Later, these companies expanded into broadband providers of DSL (Digital Subscriber Line). Then, in the late 1990s and early 2000s, cable broadband was introduced, and many cable television companies began offering internet service alongside TV and voice services.

Today, there are hundreds of different internet service providers, and they can be commercial companies, non-profits, or even publicly-owned utilities. Depending on where you are in the country, you may have a number of different ISPs to choose from to get internet service.
PCI Requirement 4: Encrypt Transmission of Cardholder Data Across Open, Public Networks

Welcome to PCI Requirement 4. The culture we live in revolves around satellite technology, cell phones/GSM, Bluetooth, laptops, wireless Internet, and more. We may consider these things private, but the PCI DSS deems them to be public. PCI Requirement 4 helps prevent organizations from becoming a target of malicious individuals who exploit vulnerabilities in misconfigured or weakened wireless networks.

To comply with PCI Requirement 4, sensitive data that your organization transmits over open, public networks must be encrypted. In these videos, you will learn about cryptography, security protocols, authentication, and unprotected PANs related to the transmission of cardholder data. Click on a video below to get started with PCI Requirement 4.
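As one illustration of the kind of control Requirement 4 is concerned with (a sketch, not PCI-mandated code), a Python client can be configured to refuse the weak protocol versions that attackers exploit:

```python
import ssl

# Build a client-side TLS context that rejects SSLv3, TLS 1.0, and TLS 1.1.
# Requires Python 3.7+ with a reasonably recent OpenSSL.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Hostname checking and certificate verification are on by default in
# create_default_context(), which is what you want for cardholder data.
assert context.check_hostname
assert context.verify_mode == ssl.CERT_REQUIRED
```

Any socket wrapped with this context will fail the handshake rather than silently downgrade to a deprecated protocol, which is the behaviour you want on open, public networks.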
Sharing Files Using the Public Folder

Windows Vista doesn’t have the Shared Documents folder that Windows XP offered; however, the Public folder is included, offering a very easy way to share files and documents with others on the same network in addition to other user accounts on the PC. As Figure 1 shows, you can access the Public folder from Windows Explorer or Computer. You can simply drag and drop (or copy and paste) files and folders into the Public folder (or one of its subfolders) to share them with users on the same PC and others on the same network.

Although Vista automatically shares the Public folder with other network users, there is a security measure in place to help prevent unintended sharing of your Public folder when on public and other un-trusted networks, such as Wi-Fi hotspots. As mentioned in Intro to Wi-Fi Networking Using Windows Vista, there’s a new network classification scheme where you’re prompted to classify the networks you connect to: Home, Work, or Public. For example, if you choose Public for your network location, Vista will automatically disable all network discovery and sharing (the Public folder and any manually shared folders) to protect your documents and privacy while on the unsecured network. Then if you go back home and connect to your network (which you most likely classified as Home), sharing will be re-enabled.

You can also easily disable the sharing of the Public folder at any time via the Network and Sharing Center, which can be accessed by right-clicking on the network status icon in the system tray. Then just scroll down to the green and/or gray status lights, click the arrow to the right of the Public folder sharing light, select your desired setting, and click Apply.

Sharing a Specific Folder

In addition to dragging files over to the Public folder, you can also enable the sharing of just about any folder on your PC, just like you could in Windows XP.
Setting up sharing for folders in Vista isn’t much more difficult than in XP, although it is a bit more confusing at first. Here’s how to do it:

1. Right-click on the folder you want to share and select the Share… option. The File Share window pops up. Figure 2 shows an example. The list box with the Name and Permission Level fields shows those who can access the shared folder (we’ll call it the Access List). The Windows account you’re currently logged on to is automatically added to the Access List.

2. Using the drop-down list (just above the Access List), select who you want to add to the Access List and click Add. To share the folder among network users (and consequently all other user accounts on the PC), select and add the Everyone entry from the drop-down list.

3. After adding an entry to the Access List, you can modify the Permission Level by clicking its arrow. Here are the attributes of the levels:
- Reader: Can view shared files, but not add, alter, or delete them.
- Contributor: Can view or add shared files, but can only alter or delete files he or she has contributed.
- Co-owner: Can view, add, alter, or delete any shared file.

4. Once you’re done, click the Share button to apply the changes. Then you’ll see a window letting you know the folder is now shared and its path.

5. Click Done to exit.

Sharing a Printer

Just like in Windows XP, you can easily set up a printer that’s connected to a PC to be shared among users on the network; here’s how:

1. Open the Printers folder from the Control Panel.
2. Right-click on the printer you want to share and select the Share… option. The printer properties window pops up with the Sharing tab selected.
3. Click Change Sharing Options. If you are prompted for an administrator password or confirmation, type the password or provide confirmation.
4. Check the Share this printer option.
5. Enter the name in the Share name field that you would like to show in the network resources.
6. Click OK.
Using a Shared Printer

Once you have enabled the sharing of a printer, you can add that printer to other PCs on the network so you can print from it. Here’s how to do it in Windows Vista:

1. Open the Printers folder from the Control Panel.
2. Click the Add a printer button on the toolbar.
3. Select the Add a network, wireless, or Bluetooth printer button. It will begin searching for any shared printers on the network.
4. Select the printer and click Next. If you don’t see the printer you want, click the appropriate button to manually find it.
5. Enter your desired name for the new printer.
6. If you don’t want the printer to be the default one selected/used when printing from the PC, uncheck the appropriate option.
7. Click Next. A window should appear indicating the printer was successfully added.
8. To ensure it’s set up correctly, click Print a test page.
9. Click Finish.

If you’re unable to find the shared printer during the setup, you may want to ensure that printer sharing isn’t disabled on the PC hosting the printer. You can check this by opening the Network and Sharing Center and scrolling to the appropriate entry on the status light area.

Enabling Password Protection

In Windows Vista you can enable password protection for your shared folders. When enabled, your shared resources aren’t openly shared with everyone on the network; they are only available to those who have a user account and password on the PC hosting the shares.

1. Right-click on the network status icon in the system tray and select Network and Sharing Center. The Network and Sharing Center pops up.
2. Scroll down to the green and/or gray status lights and click the arrow to the right of Password protected sharing. The settings will appear, as seen in Figure 3.
3. Select Turn on password-protected sharing and click Apply.

Viewing All Your Shared Folders

Unlike Windows XP, Vista allows you to easily and quickly see all the folders you’re sharing.
It’s very easy to forget which folders you’ve shared over time, and as a result this helpful feature enables you to always know exactly what is being shared and to whom. This enables you to better protect your data and privacy, which is particularly important for those who often use un-trusted networks such as Wi-Fi hotspots.

Here’s how to view the lists of shared files and folders:

1. Right-click on the network status icon in the system tray and select Network and Sharing Center.
2. Scroll all the way to the bottom of the Network and Sharing Center.
3. Click on the links, as pointed out by the red arrow in Figure 4, to view the files and folders you are sharing.

It’s a good idea to periodically check your shared folders, their permission settings, and their contents to make sure you don’t unintentionally share something that’s private or sensitive.

Stay tuned for more on networking using Windows Vista, including our Introduction to Wi-Fi Networking with Windows Vista as well as tips for Connecting to Wi-Fi Networks using Windows Vista.

Eric Geier is the founder and president of Sky-Nets, Ltd., which operates a Wi-Fi hotspot network serving the general aviation community. He has also been a computing and wireless networking author and consultant for several years. One of Eric’s latest books is Wi-Fi Hotspots: Setting up Public Wireless Internet Access, published by Cisco Press.

This article was first published on Wi-Fi Planet.
INTERNET OF THINGS TESTING

The number of connected devices has rocketed in the past few years and, as Nettitude documented in our 2016 threat intelligence report, the Internet of Things (IoT) has become a significant target for threat actors aiming to build botnets. Such botnets are then often employed to launch some of the largest Distributed Denial of Service (DDoS) attacks ever seen. For example, the Mirai malware discovered in 2016 infected hundreds of thousands of IoT devices and then utilised them to launch high profile, high bandwidth DDoS attacks against high profile websites.

Nettitude routinely work closely with the creators of smart devices in order to provide assurance around the security posture of their devices. Internet of Things testing services provide a valuable way to assess the security levels associated with a given connected device. Nettitude has extensive experience in IoT testing and assuring:
- Smart devices for domestic usage
- Smart devices for industrial usage
- Smart metering
- Connections for utilities
- Smart devices aimed at the automotive and transport sector

When Is IoT Testing Applicable?

Nettitude recommend an Internet of Things security test is performed for any device that will be connected to a network under normal use. From cameras to toothbrushes, connected devices are actively being targeted by threat actors aiming to:
- Build botnets
- Serve malicious or illegally obtained software
- Compromise individual and corporate privacy

An assessment also considers the details of the motivations and goals for the relevant threats. In particular, devices that are designed to be ‘plug and play’ should be subject to an Internet of Things penetration test; their low barrier to setup often means that they are deployed in suboptimal security configurations. For organisations that produce Internet of Things devices and are concerned about their security posture, Nettitude offer a world class penetration testing service.

How Do Nettitude Perform An IoT Security Test?
Compared with more traditional areas of penetration testing, the Internet of Things presents a number of unique challenges. One of the main challenges lies in diversity; varying architectures, communication protocols, coding and operating systems result in almost immeasurable combinations of technology. Therefore, Nettitude utilise only the most experienced penetration testers for IoT testing. Nettitude’s security consultants ensure that the full attack surface and all use cases are considered in order to give full levels of assurance. Broadly, an IoT test focuses on the following areas:

What’s The Output Of An IoT Security Test?

Any organisation that works with Nettitude on Internet of Things security testing can expect two fully quality-assured reports per engagement. The first is a management report, which is designed to be consumed by a non-technical audience and relays the overall security posture of the target device in terms of risk. The second is a technical report, which provides in-depth technical detail for each finding, including relevant and actionable remedial advice.

Of course, the engagement doesn’t stop there. Nettitude always encourage a debrief to ensure full comprehension has been achieved. It’s an opportunity to ask absolutely any questions at all. After the debrief, the organisation is welcome to stay in touch with Nettitude and receive top-quality security advice.

Get a free quote
Directory Traversal Defined

Directory Traversal (DT) is an HTTP exploit that malicious hackers use in order to gain access to account directories and the data contained within. A successful exploit can result in the entire web server being compromised, including access to directories that are used to control access to restricted areas. For example, the Root Directory is the top-level directory on the server's file system, and Directory Traversal can be used to gain unauthorized access to this sensitive directory. However, Access Control Lists (ACLs) can be used to control and manage user access for viewing, modifying and executing files.

This vulnerability occurs when browser input is not properly validated, thus allowing malicious attackers to gain access to privileged areas. The Directory Traversal vulnerability can be found in applications written in many languages, including Perl, PHP, Python and ColdFusion, as well as in web servers such as Apache.

How the DT exploit works

There are two main types of DT vulnerabilities - web server vulnerabilities and application code vulnerabilities.

- Web server: This type of attack typically targets the execution of files. A customized URL containing the name of the target file is sent to the web server along with specific escape codes and other malicious commands. These escape codes allow the attacker to bypass filtering software, resulting in unauthorized execution of the target file.

- Application code: This exploit is performed when an attacker sends a customized URL to the web server that commands the server to return specific files to the application. But first, the attacker must discover the correct URL that commands the application to retrieve the file from the web server. Once the URL has been discovered, it is modified with the name of the target file for the purpose of maliciously executing it.
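The standard application-side defence is to resolve the requested path and verify it stays inside an allowed base directory. A minimal sketch (the base directory and filenames are placeholders):

```python
import os

BASE_DIR = os.path.realpath("/var/www/uploads")

def safe_open(filename):
    """Resolve a user-supplied filename and refuse paths outside BASE_DIR."""
    target = os.path.realpath(os.path.join(BASE_DIR, filename))
    # realpath collapses "../" sequences and symlinks, so a traversal
    # attempt resolves to a path whose common root is no longer BASE_DIR
    if os.path.commonpath([target, BASE_DIR]) != BASE_DIR:
        raise PermissionError("directory traversal attempt blocked")
    return target

safe_open("notes.txt")            # resolves inside BASE_DIR, allowed
# safe_open("../../etc/passwd")   # resolves to /etc/passwd, raises
```

Checking the *resolved* path, rather than filtering `../` substrings out of the input, is what defeats the escape-code tricks described above, since all encodings collapse to the same resolved path.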
Why Email is Not Instantaneous — and Not Supposed to Be

The common perception is that email messages seem to arrive almost as soon as they are sent. Messages often appear to be delivered “instantaneously.” So, when an email delay occurs, it seems like something must be wrong. Sometimes there is a problem. Sometimes the delay is the result of normal email flow. If the messages never show up at all, that is a different situation altogether. See “Where’s the Email? The Case of the Missing or Disappearing Email” for diagnosing those issues.

The multi-server delivery path

When an email message is sent, it is given to an email server for processing and delivery. That email server may forward it on to another email server, and so on, until it ultimately arrives in the recipient’s mail box. Generally messages will pass through at least two email servers (the sender’s and the recipient’s). In some cases, when the sender and recipient are on the same machine and it is set up in a certain way, the email may be delivered on arrival. This is not unusual for internal corporate email, but rarely happens for general email messages. In most cases, the sender’s and recipient’s email systems may each pass the message through multiple servers for various reasons, resulting in the message traversing many servers (a.k.a. “hops”) in its delivery path.

Email is highly reliable – messages should always make it to their destinations, if at all possible. Each server in the delivery path that accepts the email message is responsible for ensuring that the message makes it to the next server. If nothing goes wrong, the hand off can take place very quickly (in less than a second in many cases). However, many times, things do happen, such as:

- DNS or network issues prevent the server from being able to determine what server is supposed to be next.
- Communications with the next server are temporarily failing due to network issues or internet congestion.
- The server itself is very busy at the moment.
- The next server is very busy at the moment and temporarily refusing connections.
- Additional processing of the message needs to take place before it can be relayed to the next server.

Each of these things can result in the message being delayed in reaching the next server. If the server itself is busy or needs to perform further actions on a message, it may defer (or queue) processing of the message for a short time until it has available capacity. This often happens if there is a “spike” where many messages arrive at a server in a short time, pushing its ability to process them all. In cases like this, servers respond by delaying some messages until they can catch up. If the next server cannot be determined or reached, or is not accepting email temporarily, then the message must be queued, and delivery retried later.

Queue processing delays

When messages are temporarily deferred for later retry or processing, they are often “queued.” This means that they are placed in a special location for pending messages. Mail servers will check their queues periodically and process the messages waiting there to get them going. However:

- It is up to the mail server administrators to manage how often the queue is checked and processed. This interval can be short (like once per minute) or longer (like once every 5-10 minutes, every hour, or worse).
- If the queues are getting full (e.g. lots and lots of messages are there), processing them all may take a long time.
- Even if a message is quickly retried, it will not be delivered until the next server is available and reachable.

Generally, messages are kept in mail queues and retried over and over for up to five days; however, some systems (such as public providers like AOL, Hotmail, Yahoo, Gmail, bulk mailers, etc.) may have much shorter grace periods for successful delivery.
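The queue-and-retry behaviour described above can be sketched as a retry loop with a growing interval and a grace period. The specific intervals and five-day cutoff below are illustrative defaults; real mail servers make all of these configurable:

```python
import time

def deliver_with_retries(send, max_age_seconds=5 * 24 * 3600, base_delay=60):
    """Retry delivery with growing intervals until a grace period expires.

    `send` is any callable returning True on successful hand-off to the
    next hop. Returns False (i.e. the message bounces) after the grace
    period runs out.
    """
    start, delay = time.time(), base_delay
    while time.time() - start < max_age_seconds:
        if send():
            return True
        time.sleep(delay)
        delay = min(delay * 2, 3600)  # back off, capped at one hour
    return False
```

This is why a temporarily unreachable next hop usually produces a delay of minutes to hours rather than a lost message: the queue keeps retrying until the hand-off succeeds or the grace period expires.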
Some other common reasons for apparent delays include:

- Large Messages: Message size can affect delivery times when the message is transmitted over relatively slow, low-bandwidth, or very busy connections. It may take some time to upload a large message to the server, resulting in an apparent delay. For example, if you have a 50MB email message to send and a 512 Kbps DSL line, it will take over 13 minutes to upload the message. Additionally, large messages will slow down processing and delivery at every stage in the process.

- Many Recipients: Messages addressed to large numbers of recipients take much more work to process, resulting in additional apparent delays.

- “Sender Offline” delays: Often when customers ask about message delays and we look into the cause, it appears that, for example, the message was sent last night but did not arrive until the following day. What has often occurred in these cases is that the sender composed and sent the message when offline. Their internet connection was down, they had already unplugged, etc. The “Date” stamp on the message is when they “sent” it (and it was put in their Outbox). However, the message never left the sender’s computer until the next day when the sender went back online and opened his/her email program. In this case, the apparent delay is the sender’s “fault” for not being online when sending.

Occasional delays aren’t uncommon

When multiple servers must each process the message, must be able to communicate with each other, and must have the capacity to manage the processing requirements, delays can and do occur. In fact, the more servers in the path, the more likely a delay is to happen. It is typical to see occasional delays of up to three minutes in email delivery due to random issues like server processing loads and network traffic spiking here and there on the internet.
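The arithmetic behind the 50MB example can be checked in a few lines. This is a lower bound: real uploads add protocol overhead and MIME encoding, so actual times are somewhat longer:

```python
def upload_minutes(size_mb, uplink_kbps):
    """Minimum upload time for a message of size_mb over an uplink in Kbps."""
    kilobits = size_mb * 8 * 1024   # 1 MB = 8,192 kilobits
    return kilobits / uplink_kbps / 60

print(round(upload_minutes(50, 512), 1))  # 13.3
```

So a 50MB message on a 512 Kbps uplink needs at least 13.3 minutes before the first mail server even has the full message, matching the figure above.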
Delays of tens of minutes to hours are most often the result of server maintenance or outage issues, or mail queues not being properly or promptly processed.

Where was the delay?

We are commonly asked to look at a message that was received to determine where/why the delay occurred. It is relatively easy to figure this out. First, you need to get the full headers of the received email message. See Viewing the Full Source/Headers of an Email message. An example delayed email message might contain header lines that look like this:

Received: via dmail for +INBOX; Tue, 3 Feb 2013 19:29:12 -0600 (CST)
Received: from abc.luxsci.com ([10.10.10.10]) by xyz.luxsci.com (8.13.7/8.13.7) with ESMTP id n141TCa7022588 for <email@example.com>; Tue, 3 Feb 2013 19:29:12 -0600
Return-Path: <firstname.lastname@example.org>
Received: from [192.168.0.3] (verizon.net [22.214.171.124]) (email@example.com mech=PLAIN bits=2) by abc.luxsci.com (8.13.7/8.13.7) with ESMTP id n141SAfo021855 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NOT) for <firstname.lastname@example.org>; Tue, 3 Feb 2013 19:18:05 -0600
Message-ID: <email@example.com>
Date: Tue, 03 Feb 2013 20:10:10 -0500
From: "Test Sender" <firstname.lastname@example.org>
MIME-Version: 1.0
To: "Test Recipient" <email@example.com>
Subject: Example Message
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
X-Comment: Lux Scientiae SMTP Processor Message ID - 1233710941-9110394.93984519

It is the “Date” and “Received” header lines that are of importance. The “Date” header is added by the sending email program (i.e. Outlook) and may be blatantly inaccurate, for example if the sender computer’s clock is off.
Also, the “Date” is added when the message is sent, not when the message leaves the sender’s “Outbox.” So, if the sender is offline for some reason, or the message is otherwise sitting in the Outbox, the message may look “delayed.” In this example, the delay is merely in getting the message off the sender’s computer.

The “Received” header lines are added by the mail servers that process the message. One Received header line is added each time a mail server accepts the message for processing. They are added from the “bottom up,” so the last added Received line is at the top and the first is at the bottom. You can detect where the delays happened by looking at these header lines, in order, and comparing the date and time stamps (once you correct for differences in time zones).

In this contrived example, we see that:

- The message was sent at 8:10:10 pm Eastern Time.
- The message was delayed in the sender’s Outbox for 7 minutes, 55 seconds, based on the first “Received” line. It also may have taken a long time for the message to be uploaded to the server, i.e. if the message was big and the connection was slow.
- For some reason, this mail server delayed the message for about 11 minutes before it made it to the recipient’s server, where it was accepted and delivered immediately.

One could then ask the IT staff running the example server “abc.luxsci.com” to look into their logs for information as to why the message was delayed for 11 minutes, if this delay is significant to you.
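Comparing the time stamps by hand works, but Python's standard library can do the time-zone correction for you. This sketch hard-codes the three time stamps from the example headers (a real script would first extract them with the `email` parser):

```python
from email.utils import parsedate_to_datetime

# Time stamps from the example headers, oldest first:
stamps = [
    ("Date (sender's clock)", "Tue, 03 Feb 2013 20:10:10 -0500"),
    ("Received by abc.luxsci.com", "Tue, 3 Feb 2013 19:18:05 -0600"),
    ("Received by xyz.luxsci.com", "Tue, 3 Feb 2013 19:29:12 -0600"),
]

prev = None
for label, raw in stamps:
    ts = parsedate_to_datetime(raw)  # timezone-aware datetime
    if prev is not None:
        gap = (ts - prev).total_seconds()
        print(f"{label}: +{int(gap // 60)}m{int(gap % 60):02d}s")
    prev = ts
# prints:
#   Received by abc.luxsci.com: +7m55s
#   Received by xyz.luxsci.com: +11m07s
```

Note that the first gap relies on the sender's “Date” header, which, as discussed above, is only as accurate as the sender's clock.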
While there’s nothing wrong with an abundance of caution toward the high case fatality rates we are seeing for the COVID-19 coronavirus, the data is not accurate. We keep hearing that 3.5% or 4% of the population is going to die. Why is the rate so high? The denominator is inaccurate. Most countries have not done broad testing to know how many cases are prevalent in the general population. So, let’s start with the definition of case fatality rate. The case fatality rate is calculated by dividing the number of deaths from a specified disease over a defined period of time by the number of individuals diagnosed with the disease during that time; the resulting ratio is then multiplied by 100 to yield a percentage.

1. Case Fatality Rate (or Mortality Rate) = Number of Deaths / Total Number of Cases x 100
2. Total Number of Cases = Prevalence
3. Prevalence is all the reported cases AND the estimated cases in the environment

The denominator here is very important. What makes up the total number of cases is all the reported cases that we know of in the hospital plus a broad sample of what’s in the environment.

A good example of why the rates look so scary at first can be seen in South Korea’s early reporting. The early cases were only the sick ones or those who fell ill. After broad testing in South Korea, the case fatality rate came out to 0.6%, much lower than the 3 or 4% rates seen in early reporting.

Public Response To Date Fails To Account For Accurate Prevalence In Case Fatality Rates

After broader testing, you could see how fast the virus had spread and how much lower the number of deaths was. Don’t get me wrong, this virus is very contagious, but the good thing is the virus is not as deadly as some may have first believed.
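The definition above translates directly into code. The numbers below are invented round figures chosen only to show how broadening the denominator moves the rate from a scary ~3.5% down toward 0.6%:

```python
def case_fatality_rate(deaths: int, cases: int) -> float:
    """CFR = number of deaths / total number of cases x 100."""
    return deaths / cases * 100

deaths = 60
narrow_cases = 1_700    # only the sick who sought care (hypothetical)
broad_cases = 10_000    # after population-wide testing (hypothetical)

print(f"{case_fatality_rate(deaths, narrow_cases):.1f}%")  # prints "3.5%"
print(f"{case_fatality_rate(deaths, broad_cases):.1f}%")   # prints "0.6%"
```

The deaths numerator barely moves with more testing; the cases denominator is what changes, which is exactly why early, narrow testing inflates the rate.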
That understanding doesn’t come from watching the media and folks on social media going nuts, screaming, “Oh my god, this is the Bill Gates 100-year Spanish Flu pandemic!” Once we understand how case fatality rates are studied, we can figure out the appropriate proportionality of response.

IN THE US, WE HAVE NOT DONE BROAD TESTING. WE COULD ALL BE CARRIERS AND NOT SHOW IT.

The Bottom Line: Understand Proportionality Of Response Before We Do More Self-Inflicted Damage To The Economy

Here’s another way to look at our response to this outbreak: in the US, prevalence of a specific type of flu was 15M as of Jan 2020. We had:

- 140k hospitalizations
- 8,200 deaths
- 54 pediatric deaths

What would you do in that situation?

- Quarantine everyone?
- Cancel events?
- Stop sports?
- Hunker down?
- Close schools?

That’s Influenza B, a known flu which we even have vaccines for, albeit they don’t always work so well. We don’t go crazy over the flu because we’re accustomed to the risk and have factored for it. Right now we’re going ape $sh!t because of imperfect data and taking a massive abundance of caution (nothing wrong with that). However, the response to this crisis is 10X what we do for the normal flu. Either we step up in the same manner whenever the regular flu shows up, shutting down everything and self-inflicting wounds of 0.5% to 1.0% of global GDP, or let’s get a grip on the panic. One more note, though: in a regular flu season, we may see 140k hospitalizations spread over 6 months; COVID-19 compresses that into 6 weeks, and our systems are not ready for it. Proportionality of response is key here. Stop going crazy, folks! Put in precautions and watch a little less TV during the election year.
Predictive maintenance as an asset management strategy relies on operational data to determine when an asset requires attention. Its goal is to reduce operational disruption caused by unplanned maintenance. However, data collection and analysis can also play a role when it comes to environmental, social, and governance (ESG) planning.

What is ESG?

Environmental, social, and governance (ESG) refers to a set of criteria that companies can voluntarily adhere to. By meeting these standards, companies demonstrate their commitment to environmental and social responsibility. Organizations participating in the movement receive an ESG score that informs investors and the public about their ongoing efforts.

Environmental criteria focus on how a company works to protect environmental resources. They evaluate the ecological risks a company may face and how it is mitigating those risks. Those efforts may include waste management, pollution reduction, energy conservation, or natural resource protection.

Social standards consider how companies interact with employees, customers, and communities, as well as suppliers. Evaluators look at volunteer initiatives and community involvement. They assess the work environment of employees and the standards in place for business partnerships. Social criteria focus on businesses’ responsibility to the society in which they operate.

Governance refers to an organization’s internal operations. Adhering to ESG standards means using transparent accounting methods and transparent board member appointments. Companies need to demonstrate that their internal culture supports their environmental and social initiatives.

Younger investors are increasingly using ESG scores as a barometer for investing in environmentally and socially conscious companies. Almost 90% of millennial investors wanted to pursue investments that reflected their values. However, the ongoing focus on climate change is moving even more investors to look at a company’s ESG score.
An estimated $120 billion was committed to sustainable investments in 2021. That is almost $70 billion more than the $51 billion in 2020. In 2022, it’s estimated that one-third of all assets will include sustainable investments. As more investors move towards sustainable investing, the ESG score will become critical. Although ESG scores are not required in financial reports, more publicly traded companies are including them. Many organizations are trying to get out in front of the requirement by voluntarily including the information. Most analysts believe that it is only a matter of time before ESG disclosures will be required. How Can Predictive Maintenance Support ESG Planning? Predictive maintenance solutions interface with the internet of things. They collect data from IoT-connected devices such as sensors to assess air quality, room temperatures, and lighting. They can analyze the data to determine energy use as well as air and noise pollution. Predictive maintenance informs facility managers of the condition of their assets. The data allows organizations to: - Extend the life of an asset - Minimize the risk of business disruption - Improve resource management It also ensures that assets are performing at an optimum level to decrease energy use, reduce pollution, and create a healthy work or living environment. Predictive maintenance not only helps organizations save money but also aids in evaluating ESG risks. This ability is essential as the environment changes and climate challenges become more prominent. IoT devices can pick up slight deviations in asset performance. Increased precipitation may shorten the life of a roof, or high winds may loosen shingles faster than expected. Being able to tie asset performance to climate change earlier enables management companies to adjust their behaviors to minimize risk. Part of environmental responsibility is minimizing the operational impact on the environment. 
Knowing that climate change is shortening a roof’s lifecycle enables management companies to explore different materials or alter maintenance cycles. By extending the roof’s lifecycle, companies can reduce the high environmental impact of the construction industry. With predictive maintenance technology, facility and property management firms can provide the scientific data to support environmentally friendly initiatives. Whether it is maintaining heating and cooling systems or managing lighting to lessen energy use, predictive maintenance can highlight changes in asset performance long before they are noticeable.

Hybrid work environments also present challenges for facility and property management. As more people work from home, office space may be reduced. However, most organizations assume that everyone will be in the office at least once or twice a week. Flexibility will become a crucial factor in cost-effective property and facility management. According to Microsoft, 70% of workers want more remote work flexibility, while 65% want some in-person time with co-workers. Facilities must be able to accommodate these fluctuations in occupancy. With integrated predictive maintenance, facility managers can monitor occupancy and adjust the air and temperature controls remotely. They can ensure that workspaces maintain their environmental quality while minimizing unnecessary energy use. Balancing environmental and social responsibility is an essential part of ESG planning.

Governance requires transparency. Outlining accounting methods lets investors evaluate a company’s financial stability. Providing environmental data through predictive maintenance solutions demonstrates transparency in how a business monitors its initiatives. Unfortunately, companies have not always been open to disclosing their ESG efforts, which makes it crucial that they have data to support their claims if they want to attract potential investors.
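The “slight deviations in asset performance” that IoT sensors pick up can be surfaced with something as simple as a rolling-baseline check. The sketch below is purely illustrative: the window size, threshold, and HVAC readings are invented, not taken from any vendor's product:

```python
from collections import deque

def deviation_monitor(readings, window=5, threshold=0.15):
    """Yield (index, value) for readings that deviate from the
    rolling-window mean by more than `threshold` (as a fraction)."""
    history = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(history) == window:
            baseline = sum(history) / window
            if abs(value - baseline) / baseline > threshold:
                yield i, value
        history.append(value)

# e.g. hourly energy draw (kW) of an HVAC unit; the spike at
# sample 6 could trigger an automated maintenance work request:
samples = [4.1, 4.0, 4.2, 4.1, 4.0, 4.1, 5.2, 4.1]
for i, v in deviation_monitor(samples):
    print(f"sample {i}: {v} kW deviates from baseline")
```

A production system would of course use far more sophisticated models, but the principle is the same: compare live readings against an expected baseline and act on the deviation before a failure disrupts operations.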
Deploying IoT devices and integrated predictive maintenance for interior or exterior assets can help organizations plan their ESG initiatives. Implementing Predictive Maintenance in Support of ESG Every property, facility, or field operation is different because every company has a different culture that guides its operations. Predictive maintenance solutions need to support those differences by providing a flexible solution that allows customization of workflows. The solutions should also supply data to support ESG initiatives. Gruntify‘s API enables an integrated solution that can take data from IoT devices and issue work requests. Its solution does not force a single management strategy but provides the flexibility to ensure that assets are maintained regardless of the approach. The collected data can be analyzed and used in support of ESG planning and reporting. If your organization is looking for a flexible solution for your maintenance needs, contact us to begin the process of implementing predictive maintenance processes that can ensure sustained growth through ESG planning.
Account Enumeration describes an application that, in response to a failed authentication attempt, returns a response indicating whether the authentication failed due to an incorrect account identifier or an incorrect password. In essence, it describes an authentication process in which the user is informed whether they provided a valid account identifier or not. Account Enumeration is so named because the presence of the vulnerability allows an attacker to iteratively determine (i.e. to enumerate) the valid account identifiers recognized by the application. If each failed attempt indicates the legitimacy of the identifier used, then it is possible to ascertain all valid accounts given sufficient time.

Account Enumeration Is A Vulnerability

Account Enumeration is a vulnerability because it facilitates the task of password cracking by allowing attackers to discern the valid set of account identifiers. As discussed in the article entitled “What Is Weak Authentication?“, password cracking is facilitated if one or more account identifiers are known in advance. It is for this reason, in fact, that one could argue that an Account Enumeration vulnerability is a subtype of Weak Authentication vulnerability. For insight into how to detect Account Enumeration, please see the article entitled “How To Test For Account Enumeration“. For insight into how to avoid or fix Account Enumeration vulnerabilities, please see the article entitled “How To Prevent Account Enumeration“.

About Affinity IT Security

We hope you found this article to be useful. Affinity IT Security is available to help you with your security testing and train your developers and testers. In fact, we train developers and IT staff how to hack applications and networks. Perhaps it was a network scan or website vulnerability test that brought you here. If so, you are likely researching how to find, fix, or avoid a particular vulnerability.
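The vulnerability can be shown in miniature. The messages and the toy credential store below are invented for illustration; a real system must also hash stored passwords and equalize response timing, since timing differences can leak the same information even when the messages are identical:

```python
USERS = {"alice": "s3cret"}  # toy credential store (never store plaintext)

def login_vulnerable(username, password):
    if username not in USERS:
        return "Unknown username"    # leaks whether the account exists
    if USERS[username] != password:
        return "Incorrect password"  # confirms the account exists
    return "OK"

def login_safe(username, password):
    # One generic message, regardless of which check failed:
    if USERS.get(username) != password:
        return "Invalid username or password"
    return "OK"

# An attacker enumerating accounts against the vulnerable endpoint:
for guess in ["alice", "bob", "carol"]:
    if login_vulnerable(guess, "x") != "Unknown username":
        print(f"{guess} is a valid account")  # prints only for "alice"
```

Run against `login_safe` instead, the same loop learns nothing, which is the essence of the fix discussed in the prevention article referenced above.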
We urge you to be proactive and ensure that key individuals in your organization understand not only this issue, but also are more broadly aware of application security. Contact us to learn how to better protect your enterprise. Although every effort has been made to provide the most useful and highest quality information, it is unfortunate but inevitable that some errors, omissions, and typographical mistakes will appear in these articles. Consequently, Affinity IT Security will not be responsible for any loss or damages resulting directly or indirectly from any error, misunderstanding, software defect, example, or misuse of any content herein.
NASA's Gateway space outpost will field two scientific investigations to study space weather and the sun's radiation. The agency said Thursday it has built a space weather instrument platform that will gather data from solar particles and solar wind to potentially augment space weather forecast capabilities. The radiation instrument was developed by the European Space Agency to help astronauts trace radiation exposure along the Gateway's orbit. Gateway is slated to orbit near the moon and house astronauts on a regular basis as part of the agency's lunar exploration efforts. NASA noted it will deploy additional payloads from the outpost in the future. “Using the Gateway as a platform for robotic and human exploration around the Moon will help inform what we do on the lunar surface as well as prepare us for our next giant leap – human exploration of Mars,” said Jim Bridenstine, NASA administrator. NASA is currently in talks with ESA, the Canadian Space Agency and the Japanese Aerospace Exploration Agency to seek their support in Gateway's construction.
Scientists have observed that severe weather events have increased in severity and/or likelihood in recent years. This includes record-breaking heat, heavy rainfall, and extreme windstorms. There’s no shortage of recent examples, from the Pacific Northwest heatwave in June 2021 to the Corn Belt derecho of August 2020 to the EF-4 tornadoes that ripped through the Midwest and southeastern states in December 2021. In fact, the decade between 2010 and 2020 saw record-breaking billion-dollar weather and climate disasters. Five of the six most disaster-ridden years occurred in this decade. To put things into context: there were more than twice the number of billion-dollar disasters in the 2010s (119) than in the 2000s (59). Figure 1: Courtesy of Climate.gov and the National Centers for Environmental Information. These disasters don’t just cost money. They impact our jobs and workplace safety, including (and perhaps especially) those in facility inspection. Here’s a closer look at how climate change affects weather, how severe weather threatens the safety of facility inspectors, and how software that lets inspectors do their jobs remotely can help mitigate that risk. Climate Change’s Effect on Severe Weather Our current climate is fundamentally different from the planet’s pre-industrialized climate. It’s increasingly hotter, and that’s why there’s extra heat and moisture in the air. As global temperatures rise, so do ocean temperatures. The warmer the ocean is, the more liquid water evaporates into the gaseous water vapor that fuels storms, especially tropical storms that form over oceans. Increasingly hot air temperatures compound the problem: warm air holds more moisture than cool air, providing an abundant supply of “jet fuel” to storms that then bring crushing rainfall and extreme winds. Although the planet has warmed overall, it has not warmed in a uniform way. The arctic has seen the biggest jump in average temperatures. 
Yes, this melts arctic ice and raises sea levels. But it also shifts the jet stream, the current of air that circles the globe and influences weather patterns, from its standard course, causing it to dip erratically. This causes extreme heat, extreme cold, extreme winds, sustained drought, and sustained rain in places that do not normally experience such weather.

Figure 2: Courtesy of NOAA Climate.gov using NOAA ESRL / PSD data

How Severe Weather Can Impact Workplace Safety

The severe weather we’ve discussed can bring dangerous conditions to facility inspectors from coast to coast. We often think of coastal areas as being among the most vulnerable to climate change-fueled storms and rising sea levels. But inland areas are also impacted by extreme weather, including…

- Sustained heat.
- Severe thunderstorms.

A warming world means more extreme weather events everywhere. Bad weather doesn’t stop facility inspectors. In fact, when facilities are suspected of taking on damage, inspectors may be called into action to check structures, equipment, and systems. After all, facility inspectors play a critical role when it comes to integrity assessment and repairs. Often, for the safety of facility employees or surrounding communities, this information must be obtained as quickly as possible, even (in some cases) before the severe weather has subsided. This allows facilities to assess damage and jump into action to resolve issues.

Commuting in severe weather or its aftermath is often dangerous. Sustained rains, for example, can cause flash floods that sweep cars and people away, leaving debris that blocks roads even after the waters have receded. Blizzard conditions can leave commuters stranded in dangerously low temperatures. The list goes on. Of course, commuting is just one part of it. On site, compromised structures, systems, and equipment can pose threats to facility inspectors as they complete their work.
Facility inspectors already incur a level of risk during the best of times: working around heavy machinery, electrical systems, and equipment in high locations is dangerous as it is. Severe weather adds an X-factor that can increase risk exponentially. Take the December 2021 tornadoes, for example. One caused major structural damage to the Amazon warehouse in Edwardsville, Illinois, which killed six people. Rescue and assessment were hampered by “loose power lines, unsecured concrete, and excess water from the warehouse’s fire suppression system.” Remote Video Technology Protects Inspectors from the Elements The changing climate requires new solutions to protect the health and safety of facility inspectors. Remote video inspection allows facility inspectors to do their jobs without ever stepping foot in the building. This remote video technology makes traveling in severe weather unnecessary, preventing dangerous commutes and costly traffic accidents. Inspectors can be connected to facilities with the click of a button and collaborate with those already present to complete their inspection work. A streamlined suite of inspection tools makes collecting data fast and easy, reducing inspection times and saving money. These tools can even improve the accuracy of inspections. For example, the inspection itself is automatically recorded and saved, which allows inspectors and managers to go back to the footage for review. As a bonus, remote video technology reduces a facility inspector’s carbon footprint. Companies doing 1,000 inspections a month save an average of 69.33 metric tons of CO2 emissions per year. Remote video inspection not only allows inspectors to avoid working during and after severe weather but also reduces a company’s contribution to the pollution that warms our planet. Remote Video Inspections Increase Workplace Safety As severe weather becomes more frequent, remote video inspection can become an integral part of your workplace safety strategy. 
It reduces inspectors’ risk while traveling and working and makes inspection footage easy to review. Want to learn more about how to mitigate workplace risk with remote video technology? Schedule a test drive with Blitzz today!
A bootkit is a type of malicious infection which targets the Master Boot Record, the first sector of the computer's boot disk. Attaching malicious software in this manner allows a malicious program to be executed prior to the loading of the operating system. The primary benefit of a bootkit infection is that it cannot be detected by standard operating system processes, because all of its components reside outside of the Windows file system. Bootkit infections are on the decline with the increased adoption of modern operating systems and hardware utilizing UEFI and Secure Boot technologies.

Well-crafted bootkit infections may provide little indication of compromise, as pertinent files may be hidden from the operating system and the security defenses present on the computer. More often, bootkit infections may cause system instability and result in Blue Screen warnings or an inability to launch the operating system. Some bootkit infections may display a warning and demand payment via digital currency to restore the computer to operational capacity. Malwarebytes recommends never paying these types of ransom.

Bootkits were historically spread via bootable floppy disks and other bootable media. Recent bootkits may be installed using various methods, including being disguised as a harmless software program and distributed alongside free downloads, or targeted at individuals as an email attachment. Alternatively, bootkits can be installed via a malicious website utilizing vulnerabilities within the browser. Infections that happen in this manner are usually silent and happen without any user knowledge or consent.

Malwarebytes can scan and detect the presence of some bootkit infections. These detections utilize a specific set of rules and tests to determine if a bootkit infection is present on the computer.
This testing method is more intensive and more effective, but including rootkit scans as part of your overall scan strategy increases the time required to perform a scan. Malwarebytes can detect and remove many bootkit infections without further user interaction. More advanced infections may require rebuilding of the Master Boot Record.
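As a concrete illustration of what the Master Boot Record actually is, the sketch below checks for the 0x55 0xAA boot signature that ends a valid 512-byte boot sector. It uses a synthetic buffer; reading the real sector requires raw device access and administrator rights, and note that a valid signature says nothing about whether the boot code itself is clean:

```python
def has_boot_signature(sector: bytes) -> bool:
    """An MBR occupies the disk's first 512-byte sector; a valid one
    ends with the boot signature 0x55 0xAA at byte offset 510."""
    return len(sector) == 512 and sector[510:512] == b"\x55\xaa"

# In practice the sector is read from the raw device, e.g. on Windows:
#   with open(r"\\.\PhysicalDrive0", "rb") as disk:  # admin rights needed
#       sector = disk.read(512)
# Here, a synthetic all-zero sector with a valid signature:
sector = bytes(510) + b"\x55\xaa"
print(has_boot_signature(sector))  # prints "True"
```

Actually detecting a bootkit requires comparing the boot code in that sector against a known-good copy, which is the kind of rule-and-test analysis anti-malware engines automate.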
Types of Single Sign-on Protocols

Single Sign-on (SSO) allows a user to use a single set of login credentials (such as a username and password, or even multi-factor authentication) to access multiple applications. This is a Federated Identity Management architecture, sometimes called identity federation. In order for SSO to work, most applications rely on open standard protocols to define how service providers (SPs) and identity providers (IdPs) can exchange identity and authentication information with one another. For more information on how SSO works and the benefits of one of the most common protocols, SAML, visit our SAML for Single Sign-on page.

To seamlessly integrate all applications, PortalGuard’s Single Sign-on solution supports many types of SSO protocols, including:

Central Authentication Service (CAS)

Developed by Shawn Bayern at Yale University, CAS differs from typical SAML SSO by enacting server-to-server communication. The client machine is used to initiate the token request, but the final verification is handled by back-end communication between the CAS server and the service provider. CAS is a typical SSO protocol in education organizations because of its reliance on that extra, more direct verification. Like SAML, no passwords are exchanged through the SSO token. CAS is a common SSO protocol for higher education; check out the SSO for Education page for more details.

Shibboleth is another SSO protocol typically seen in educational organizations, specifically where a high number of institutions are federated to share applications and/or services. Shibboleth is built with SAML as a foundation but uses a Discovery Service to improve upon SAML’s organization of data from a large number of sources.
Additionally, Shibboleth helps to automate the parsing of metadata to handle security certificate updates and other configurations that may be set by individual institutions within a federation.

Cookie-based SSO works by using Web-based HTTP cookies to transport user credentials from browser to server without input from the user. Existing credentials on the client machine are gathered and encrypted before being stored in the cookie and sent to the destination server. The server receives the cookie, extracts and decrypts the credentials, and validates them against the internal server directory of users.

Claims (aka “assertions”) are created by a claims issuer that is trusted by multiple parties. Claims are typically packaged into a digitally signed token that can be sent over the network using Security Assertion Markup Language (SAML).

It is possible for a user to prove they know their password without actually providing the password itself. NTLM achieves this using a challenge and response protocol that first determines what types of NTLM and encryption mechanisms the client and server mutually support, then cryptographically hashes the user’s password and sends it to the server requiring authentication.

Kerberos enables users to log into their Windows domain accounts and then receive SSO to internal applications. Kerberos requires the user to have connectivity to a central Key Distribution Center (KDC). In Windows, each Active Directory domain controller acts as a KDC. Users authenticate themselves to services (e.g. web servers) by first authenticating to the KDC, then requesting encrypted service tickets from the KDC for the specific service they wish to use. This happens automatically in all major browsers using SPNEGO (see below).

There are instances when the client application and remote server do not know what types of authentication the other one supports.
This is when SPNEGO (Simple and Protected GSSAPI Negotiation Mechanism) can be used to find out what authentication mechanisms are mutually available. Some of these mechanisms can include Kerberos and NTLM authentication.

Reduced Single Sign-On is widely used for limiting the number of times a user is required to enter their credentials to access different applications. With critical applications, reduced SSO also offers a technique to make sure that a user is not signed on without a second factor of authentication having been provided by the user.

A user logging into a website may choose to have their credentials permanently remembered for that site. This is accomplished by creating an encrypted cookie on the user’s machine for that web browser that contains the user’s credentials. This cookie persists across different browser sessions and restarts of the machine but will be set to expire after a set period. The next time the user accesses the website, the server recognizes the cookie, decrypts it to obtain the user’s credentials, and completely bypasses the login screen after validating them successfully.

Form-filling allows for the secure storage of information that is normally filled into a form. For users that repeatedly fill out forms (especially for security access), this technology will remember/store all relevant information and secure it with a single password. To access the information, the user only has to remember one password, and the form-filling technology can take care of filling in the forms.

Banner XE/Banner 9

Banner XE/Banner 9 supports CAS SSO. While not the newest SSO protocol, CAS SSO improves Banner’s usability. Additionally, CAS SSO simultaneously increases the integration points for Banner in various institutions. Higher education institutions that are looking for additional, more feasible options may land on using Banner XE/Banner 9 to fill the gap. CAS SSO opens Banner XE/Banner 9 up to more unique configurations and deployments.
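The common thread in the claims and token mechanisms above is that a trusted issuer signs an assertion which a service provider can verify without ever seeing the user's password. A minimal sketch of that idea using an HMAC-signed token (the shared key and claim names are invented; real deployments use SAML XML signatures or standard JWT libraries, not a hand-rolled format like this):

```python
import base64
import hashlib
import hmac
import json

ISSUER_KEY = b"shared-secret-between-issuer-and-sp"  # illustrative only

def issue_token(claims):
    """Package claims into a signed token, as an identity provider would."""
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(ISSUER_KEY, body, hashlib.sha256).hexdigest()
    return f"{body.decode()}.{sig}"

def verify_token(token):
    """Validate the signature, as a service provider would; no password needed."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(ISSUER_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # signature invalid: reject the assertion
    return json.loads(base64.urlsafe_b64decode(body))

token = issue_token({"sub": "jdoe", "role": "student"})
print(verify_token(token))        # prints {'sub': 'jdoe', 'role': 'student'}
print(verify_token(token + "0"))  # prints None (tampered signature)
```

Because the service trusts the issuer's key rather than the user's secret, the same token pattern underlies SAML assertions, CAS service tickets, and Kerberos service tickets alike.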
Atari's name is almost synonymous with video games and everything '80s. However, the video game titan also made home computers for over a decade. Had history turned out a little differently, this article might have been written on an Atari-made machine. You don't have to belong to a closed circle of enthusiasts to be familiar with the Atari brand. Even though the original company behind the name has been mostly dead for around two decades, its legacy lives on.

Sente, hanne, atari?

The familiar name entered the world in 1972, when Nolan Bushnell and Ted Dabney, creators of the arcade game 'Computer Space', decided to enter what was to become the video game market. Bushnell took the name Atari from his favorite board game, 'Go'. The term roughly means the same as 'check' in chess. According to Scott Cohen's book Zap! The Rise and Fall of Atari, Bushnell registered three different company names based on moves in 'Go': atari, sente (equivalent to 'checkmate'), and hanne (used to acknowledge an overtaking move). Atari stuck because it was the first one to get approved. The company's initial performance would have been in line with all three names; however, in 1972, the same year as its inception, Atari Inc. released Allan Alcorn's game 'Pong', the first truly commercially successful video game. The game's appeal came from a revolutionary multiplayer option. 'Pong' was an arcade game, which meant that every copy was sold with an arcade machine, making production a costly endeavor. Cohen notes that even though Atari sold over 8,000 machines instead of the planned 2,500, the company was always in dire straits.

Fathers of video games

By 1974, Atari, worth around $20M, had to lay off half its staff to avoid bankruptcy. The company was saved by the successful performance of the game 'Tank', the sale of the company's Japanese branch, and a successful merger with Kee Games.
The cash influx allowed the company to focus on development, such as adapting 'Pong' for home use. After the console's success, Bushnell started eyeing an opportunity to refocus the business towards home video games. To pursue his vision, Bushnell opted to sell Atari to Warner Communications for close to $30M in 1976. Cohen notes that Warner got interested in Atari after Warner's CEO Steve Ross saw how absorbed his wife and son were in playing 'Pong' with their friends. "Everybody was losing interest in the digital watch and the pocket calculator, and most of the people we went to wondered why video games would be any different. Warner Communications was the only one with the guts to put over $100 million into the company while everybody else was saying it was another CB radio," Bushnell told Cohen.

Warner invested $120M to develop the Atari Video Computer System (VCS), branded the Atari 2600 from 1982. The key idea behind the VCS was to sell a machine compatible with game cartridges, meaning one device could support several games. The concept was so successful that in the early '80s, Atari was synonymous with video games. From a business perspective, the machines were not as profitable as the game cartridges: the latter cost less than $10 to produce yet sold for over $20. By 1980, Atari's revenue was around $400M, and Atari made almost 70% of home consoles in the US.

A few tweaks in the history of computers would have allowed Atari to enter the PC market as early as 1975-1976. It would likely have put the company among the top competitors to Commodore, a now-perished titan of the early PC era. A couple of then-Atari employees, Steve Jobs and Steve Wozniak, floated the idea of developing a 'family-friendly' computer that would be suitable for playing games and would help parents with doing taxes and children with homework. A machine that would 'feel' personal. Bushnell scoffed. Only two years later, he would see how expensive his mistake was.
Commodore, RadioShack, and Apple, the last established by the former Atari duo, put forward the personal computers now called the '1977 trinity' to signify their importance to the dawn of the PC era. Of the three pioneers, only Apple managed to successfully navigate the treacherous waters of the home computing business. Atari's attempt to enter the PC market coincided with the departure of Bushnell. He was displeased that the company's management focused too much on developing games rather than updating the VCS. In 1979, Bushnell departed to become the successful owner of the Chuck E. Cheese chain of pizza restaurants.

Change of plans

According to Jamie Lendino's book Breakout: How Atari 8-Bit Computers Defined a Generation, after Bushnell's departure, his replacement, Ray Kassar, ordered the company's engineers to repurpose a planned upgrade for the VCS into a home computer. Unlike competing machines, Atari PCs were designed to be compatible with most TV sets. Atari set the devices to run on the MOS 6502 microprocessor, which supported advanced graphics chips developed by Jay Miner, future designer of the Amiga PC series. "They wanted their computer to be just as good at gaming as consumers would expect from the Atari brand while simultaneously delivering a real computing experience," Lendino notes. The operating system for the upcoming PC was developed by the future founders of Activision: Larry Kaplan, Alan Miller, Bob Whitehead, and David Crane. Bill Gates was supposed to write a version of BASIC that could fit into 8KB but was let go due to stalled deadlines. Chuck Peddle, an early tech visionary responsible for the game-changing MOS 65xx series of microprocessors who convinced Commodore to focus on PCs rather than calculators, told the Computer History Museum that at the time of its release, Atari machines were superior to Commodore's, especially in terms of video capabilities.
Atari released the first PCs of what was to become its '8-bit series' of computers in 1979: the Atari 400 and Atari 800. The 8KB RAM machines (later upgraded to 16KB) were sold with four controllers and were the first to have coprocessor chips. Atari marketed the 400 as a low-end machine and the 800 towards more advanced users. Since the 400 was expected to draw more interest from younger users, it was equipped with a spill-resistant membrane keyboard, eventually dreaded by system owners. The 800 was fitted with a full keyboard, two cartridge slots, and upgradable RAM. According to data by Jeremy Reimer, Atari PCs fared well upon release, seizing a significant portion of the PC market and outselling Apple 2 to 1 in 1979. Sales of Atari home computers peaked in 1982, with over 600,000 machines sold that year.

In December 1982, Atari released its newest machine, the 1200XL, a 64KB computer designed to look completely different from its predecessors. The 1200XL, together with the 800XL and 600XL, was Atari's answer to Commodore's VIC-20 and 64 computers, which came to dominate the PC market. Lendino notes, however, that by that time the majority of software developers had their eyes on Apple and Commodore computers, which meant that fewer products were made with Atari in mind. To make matters worse, Commodore owned MOS Technology, a prime supplier of microprocessor chips, which allowed Commodore to continuously undercut competitors' prices.

1983 was a catastrophic year for Atari. Coupled with unsuccessful Atari-made game releases, the saturated video game market created something akin to the '08 housing bubble crash. Atari reported $538M in losses by the end of the year, even though the company had pulled in $1.7 billion in operating profit a year prior. Six months later, after a threefold slump in stock value, Warner announced the sale of Atari's consumer products division.
Somewhat ironically, the buyer was Jack Tramiel, the ousted founder of Commodore, who agreed to take on the hundreds of millions in debt that Atari had accumulated. Atari Inc. was no more, as Atari Corporation was born in 1984. Under Tramiel, Atari slowly phased out its 8-bit computer family in favor of 16/32-bit machines called the Atari ST: 'S' for 'sixteen' and 'T' for 'thirty-two'. The XE was the company's last line of 8-bit computers, with the 65XE and 130XE discontinued in 1991 and the 800XE in 1992. Even though the computers were discontinued in the early '90s, their popularity was sustained by new markets in former socialist countries, where people did not have the spending power for a 16- or 32-bit computer. For example, out of 250,000 XE series machines sold in 1989, 70,000 were sold in Poland. The ST line did not fare any better, being discontinued in 1993, thus ending Atari's 14-year stint in the home computer market. Like many other PC manufacturers, Atari was not equipped to compete with what had become a market of IBM clones running Microsoft Windows.

End of an era

The Atari Corporation met its end in 1996. After a failed attempt to compete with Sega by releasing the Jaguar console, the company suffered almost $50M in losses in 1994. In 1995, Atari attempted to enter the PC gaming market, albeit unsuccessfully. With no new products scheduled for release and failing finances, Atari agreed to merge with JTS Inc., a maker of hard disk drives. After that, Atari's name disappeared. Currently, several corporations have rights to the Atari logo. To this day, Atari has a cult following, with troves of enthusiasts continuing the legacy of the once all-powerful brand. The name has remained on countless memorabilia t-shirts and in attempts to create modern nostalgia-based consoles such as the Ataribox. "Perhaps Atari's most significant contribution is that it paved the way for the personal computer, which is not a fad.
If nothing else, video games have prepared the world for the computer age. A computer is, after all, a video game, except it's smarter," Cohen notes in his 1984 book.
This course provides learners with an introduction to how JES3plus is initialized at startup and how the initialization stream is used to identify system resources to JES3plus. Following on, the learner is shown which resources need to be defined to JES3plus, such as spool data sets, checkpoint data sets, mains, storage, and buffers, among others, all of which are vital to JES3plus' processing functionality. This course is suitable for all system programmers who need to install, configure, and customize JES3plus.

Prerequisites: successful completion of Interskill's JES3plus Fundamentals course, or equivalent knowledge.

After completing this course, the student will be able to:
- Describe how JES3plus is initialized
- Discuss how to code the initialization stream
- Identify key resources required by JES3plus to function
- Identify other resources that can be defined to JES3plus

Defining the JES3plus Environment
- Introduction to JES3plus Initialization
- The JES3plus Cataloged Start Procedure
- The Initialization Stream
- Coding the Initialization Stream

Identifying JES3plus Resources
- Spool Data Sets
- Checkpoint Data Sets
- Defining I/O Devices
- Defining Mains and Storage
- Defining Multiple Console and Remote Job Processing Consoles
- Defining Network Nodes
- Defining Tape Libraries
Grip strength is a useful metric in a surprisingly broad set of health issues. It has been associated with the effectiveness of medication in individuals with Parkinson's disease, the degree of cognitive function in schizophrenics, the state of an individual's cardiovascular health, and all-cause mortality in geriatrics. At IBM Research, one of our ongoing challenges is to obtain a better understanding of the effects of diseases on an individual's overall health, as well as how AI can help clinicians to monitor individuals in their natural environments and potentially point to indicators and clues about the progression of a patient's conditions. In new research published in Scientific Reports today, our team details a first-of-a-kind "fingernail sensor" prototype to help monitor human health. The wearable, wireless device continuously measures how a person's fingernail bends and moves, which is a key indicator of grip strength.

[Photo caption: Fingernail sensor that can monitor your activities and health. IBM lab, Yorktown Heights, NY. (Feature Photo Service)]

The project began as an attempt to capture the medication state of people with Parkinson's disease. Getting a new therapy approved requires quantifying how people on the therapy are doing in relation to controls. The majority of people with Parkinson's are older, an age group with increasingly brittle, friable skin. Composed of skin, nails, and hair, the integumentary system covers most of our bodies. Its primary purpose is to protect our internal components from pathogens, toxins, ultraviolet radiation, dehydration, and changes in temperature. It also provides a structure for the sensory receptors of the somatosensory system of neurons across our bodies. One method to measure a disease's progression is to attach skin-based sensors to capture things like motion, the health of muscles and nerve cells, or changes in sweat gland activity, which can reflect the intensity of a person's emotional state.
But with older patients, such skin-based sensors can often cause problems, including infection. This is where the potential of a fingernail sensor comes into play. We interact with objects throughout the day using our hands, relying on tactile sensing of pressure, temperature, surface textures, and more. Our team realized it might be possible to derive interesting signals from how the fingernail bends throughout the course of a day as we use our fingers to interact with our environment, and to tap into the power of AI and machine learning to analyze and derive valuable insights from that data. One of the functions of human fingernails is to focus the fingertip pulp on the object being manipulated. It turns out that our fingernails deform (bend and move) in stereotypic ways when we use them for gripping, grasping, and even flexing and extending our fingers. This deformation is usually on the order of single-digit microns and not visible to the naked eye. However, it can easily be detected with strain gauge sensors. For context, a typical human hair is between 50 and 100 microns across, and a red blood cell is usually less than 10 microns across. Since nails are so tough, we decided to glue a sensor system to a fingernail without worrying about any of the issues associated with attaching to skin. Our dynamometer experiments demonstrated we could extract a consistent enough signal from the nail to give good grip force prediction in a variety of grip types. We also found it is possible to deconvolve subtle finger movements from nail deformation. We were able to differentiate typical daily activities that involve pronation and supination, such as turning a key, opening a doorknob, or using a screwdriver. An even more subtle activity is finger writing, and we trained a neural network to achieve very good accuracy (0.94) at detecting digits written by a finger wearing the sensor.
Our system consists of strain gauges attached to the fingernail and a small computer that samples strain values, collects accelerometer data, and communicates with a smartwatch. The watch also runs machine learning models to rate bradykinesia, tremor, and dyskinesia, which are all symptoms of Parkinson's disease. By pushing computation to the ends of our fingers, we've found a new use for our nails: detecting and characterizing their subtle movements. With the sensor, we can derive health-state insights and enable a new type of user interface. This work has also served as the inspiration for a new device modeled on the structure of the fingertip that could one day help quadriplegics communicate.
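As an illustration of the kind of calibration the dynamometer experiments imply, here is a minimal least-squares sketch mapping a single strain-gauge reading to grip force. The sample readings and the one-channel linear model are assumptions for illustration only; the actual system uses multiple channels and more sophisticated models.

```python
def fit_linear(xs, ys):
    """Ordinary least-squares fit y = a*x + b for one strain channel."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Hypothetical calibration data: strain readings vs. dynamometer force (newtons)
strain_readings = [1.0, 2.0, 3.0, 4.0, 5.0]
force_newtons = [10.0, 20.0, 30.0, 40.0, 50.0]

a, b = fit_linear(strain_readings, force_newtons)

def predict_force(strain):
    """Estimate grip force from a new strain reading using the fitted model."""
    return a * strain + b
```

In a real deployment, calibration against a dynamometer would be repeated per user and per grip type, since nail deformation varies with how an object is held.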
Eggs are high in quality animal protein and contain all the essential amino acids that humans need. Whole eggs are among the most nutritious foods on the planet, containing a little bit of almost every nutrient you need. Omega-3 enriched and/or pastured eggs are even healthier. Eggs are high in cholesterol, but eating them does not adversely affect cholesterol in the blood for the majority of people. Eating eggs consistently leads to elevated levels of HDL (the "good") cholesterol, which is linked to a lower risk of many diseases. Eggs are among the best dietary sources of choline, a nutrient that is incredibly important but that most people aren't getting enough of. Egg consumption appears to change the pattern of LDL particles from small, dense LDL (bad) to large LDL, which is linked to a reduced heart disease risk. The antioxidants lutein and zeaxanthin are very important for eye health and can help prevent macular degeneration and cataracts; eggs are high in both of them. Omega-3 enriched and pastured eggs may contain significant amounts of omega-3 fatty acids, and eating these types of eggs is an effective way to reduce blood triglycerides. For more tips, follow our Today's Health Tip listing.
It is a fact that the IT business is growing by leaps and bounds and has significantly changed how organizations use their devices within the last decade. With this growth, it has become essential to have security measures in place. IT professionals have warned about the risks associated with hyper-connectivity. Enterprises face great risks because the shortcomings of unpatched software expose data to cyber crime attacks. Therefore, it is the need of the hour to use up-to-date systems for data security. Windows XP has lost its value due to the security issues and instabilities associated with the operating system. When an OS developer like Microsoft can no longer provide patches and critical updates to cope with security issues, the need for a new and well-supported operating system emerges. What are the consequences of data breaches when complying with HIPAA (Health Insurance Portability and Accountability Act)?

HIPAA Breach Rule

HIPAA is a federal act covering health insurance portability and accountability. In order to give maximum protection and privacy to health information, federal law has set rules and regulations. The OCR (Office for Civil Rights) is responsible for enforcing this security law, and under the breach rule, organizations must give notification when certain protected information is breached. When businesses do not apply the latest software patches, customers can be affected by data breaches and have their Social Security numbers and/or credit card numbers stolen; as a result, a HIPAA penalty can be levied. Organizations have been fined by OCR because many have become victims of malware caused by a failure to apply software patches. Notably, neither HIPAA nor OCR explicitly instructs organizations to keep their software updated. But when companies do not pay attention to software flaws and keep running unpatched software, issues such as data breaches become unavoidable.

How to Run Systems Smoothly
To ensure data security, it is important to take a vigilant approach and monitor third-party applications for security vulnerabilities. Insecure data can be the result of vulnerable supporting software or operating systems functioning in the environment. To cope with these issues, security updates and patches should be applied promptly. For a small business or an enterprise, the assistance of an IT administrator is the surest way to get peace of mind. In a nutshell, the consequence of data breaches should not only be the levying of fines; there must also be rules that meet the demands of the industry. Organizations know how to find security risks and satisfactory solutions. In other words: patch your software and apply updates to keep your data safe and secure. Organizations can also improve data protection by using cloud computing. Compared to the public cloud, companies using a private cloud are much more satisfied with the security of their data.
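The patch audit described above boils down to comparing an installed-software inventory against the minimum patched versions published in security advisories. Below is a minimal sketch of that comparison; the inventory, product names, and version numbers are all hypothetical examples, not real advisories.

```python
def parse_version(v):
    """Turn a dotted version string like '2.4.1' into a tuple (2, 4, 1)
    so that versions compare numerically rather than lexically."""
    return tuple(int(part) for part in v.split("."))

def find_outdated(installed, minimum_patched):
    """Return the products whose installed version is below the patched baseline."""
    outdated = []
    for product, version in installed.items():
        baseline = minimum_patched.get(product)
        if baseline and parse_version(version) < parse_version(baseline):
            outdated.append(product)
    return outdated

# Hypothetical inventory and advisory baselines
installed = {"web-server": "2.4.1", "pdf-reader": "11.0.3", "db-engine": "5.7.30"}
minimum_patched = {"web-server": "2.4.10", "pdf-reader": "11.0.3"}
```

Note that numeric comparison matters: as a string, "2.4.1" sorts after "2.4.10", which would silently hide the outdated web server.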
What is Single Sign-on (SSO)?

Single Sign-on (SSO) Meaning

Single sign-on (SSO) is an identification method that enables users to log in to multiple applications and websites with one set of credentials. SSO streamlines the authentication process for users. It takes place when a user logs in to an application and is automatically signed in to other connected applications, regardless of the domain, platform, or technology being used. This eases the management of multiple usernames and passwords across various accounts and services. A good example is when a user logs in to Google and their credentials are automatically authenticated across linked services, such as Gmail and YouTube, without having to sign in to each separately.

How Does SSO Work?

A common question is: what does SSO stand for? It stands for single sign-on, and it is a federated identity management (FIM) tool, also referred to as identity federation. It performs identity verification, a crucial part of identity and access management (IAM), the framework that allows organizations to securely confirm the identity of their users and devices when they enter a network. This is critical to assigning user access permissions and ensuring users only have the level of access they need to carry out their role effectively. SSO works by sharing and verifying login credentials between service and identity providers. A service provider (SP) is typically a vendor who provides products, solutions, and services to users and organizations, such as an application or website. An identity provider (IdP) is a system that creates, manages, and maintains user identities and provides authentication services to verify users. These trusted providers enable users to use SSO to access applications and websites, and they improve the user experience by reducing password fatigue. SSO services do not store user information or identities.
Instead, they typically work by checking and matching a user's login credentials with information stored in an identity management service or database. Single sign-on solutions use the following steps to ensure a user's credentials are redirected from an SP to an IdP:

- The user accesses an SP, such as a website or application.
- The SP sends an authentication token to the IdP, such as the SSO system.
- The IdP sends an SSO response back to the SP.
- The user will be prompted to log in.
- When the user's credentials are validated, they will be able to access other websites and applications from the SP without having to log in separately.

What Is an Authentication Token?

When a user signs in to an SP using an SSO service, an authentication token confirming that the user is verified is created. An authentication token is digital information stored within the user's browser or on the SSO service's servers. Every application the user accesses will then check with the SSO service, which passes the token to the application to approve the request. Authentication tokens are passed back and forth between SPs and IdPs to share, confirm, and verify user identification information, such as the username, email address, and password. This is crucial to SSO protocols, which enable identity verification to occur away from other cloud services.

How Are SAML and OAuth Used with SSO?

Authentication tokens use communication standards to ensure they are valid. The main standard is Security Assertion Markup Language (SAML), the language used to write authentication tokens. The SAML standard uses Extensible Markup Language (XML) to enable user authentication and authorization data to be exchanged across secure domains. When used in SSO, SAML communicates between the user, an SP, and the IdP. The process of securely providing users access to multiple services with just one login requires the user's information to be authorized.
This happens through open authorization (OAuth), a framework that enables a user's account information to be used by various third-party services. When a user requests access to an application, the SP sends a request to the IdP, which then verifies and authenticates the request to grant the user access. A good example of this is choosing to use a Facebook account to sign in to a website instead of entering a username and password. OAuth and SAML are separate protocols that can both be used in conjunction with SSO: OAuth is used to authorize users, while SAML authenticates them.

Benefits of Single Sign-on (SSO)

There are many benefits for organizations that use SSO to verify user identities. The process is simple and convenient for users, and also highly secure. SSO ensures that users only have to enter one password to access multiple applications or services. This helps avoid password fatigue, whereby people struggle to remember different passwords for different accounts, which can lead them to recycle credentials across multiple services. That presents a major security risk because attackers exploit commonly used passwords to hack into additional accounts. Signing in only once also means users spend less time signing in to applications. This, in turn, lowers the risk of them using weak passwords or forgetting their login credentials, and it improves productivity.

Fewer Help Desk Tickets

Because users only have to log in once to access multiple services, they are less likely to forget their password and ask the IT help desk to reset their credentials. This means IT professionals spend less time handling help desk tickets for password resets. Instead, they have more time to focus on meaningful tasks that add value to the organization.

SSO encourages users to deploy stronger passwords on their accounts. It also helps them avoid repeating the same password on multiple accounts.
Only requiring one login password for several services makes it easier for users to remember their password. This also reduces organizations' risk of cyber attacks because websites have to store less user credential information. However, passwords should, at a minimum, be supported by two-factor authentication (2FA), which provides extra certainty that the user is who they say they are. When a user logs in with their username and password, 2FA requires them to provide an additional verification factor, such as a fingerprint or a code from an authenticator application on their phone. Requiring additional authentication factors before granting a user access to an application, service, or website enhances security compared to relying on usernames and passwords alone.

Less Shadow IT Risks

Shadow IT occurs when users circumvent their organization's security policies to use applications, devices, services, or software that have not been sanctioned for official use. SSO helps organizations avoid this by monitoring which applications employees are using, which reduces the chances of identity theft or data loss and enforces compliance rules.

How Fortinet Can Help

Centralized authentication services like SSO are crucial elements in establishing a zero-trust approach. Zero-trust security ensures that only the right people have the right level of access to the right resources, while simplifying the access process for users. SSO delivers exactly that, ensuring users sign in once and gain access to multiple services, all while increasing security by removing password reliance and building in additional security factors like multi-factor authentication (MFA). The Fortinet FortiAuthenticator is an access management solution that helps organizations prevent data breaches through effective security policies. It allows organizations to incorporate SSO across their internal and cloud-based environments and networks.
It also enables seamless, secure 2FA across organizations in tandem with FortiToken, an event- and time-based one-time password (OTP) generator application for mobile devices. FortiAuthenticator prevents unauthorized access to corporate networks and resources by providing a centralized authentication process for the Fortinet Security Fabric. This can be through SSO as well as other options like certificate management and guest access management. Using FortiAuthenticator, organizations can identify network users and implement identity-driven policies on Fortinet-enabled enterprise networks.

What is the meaning of Single Sign-on (SSO)?

Single sign-on (SSO) is an identification method that enables users to log in to multiple applications and websites with one set of credentials.

How does Single Sign-on work?

SSO works by sharing and verifying login credentials between service and identity providers.

Why do we need Single Sign-on (SSO)?

It eases the management of multiple usernames and passwords across various accounts and services.
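The time-based one-time passwords generated by authenticator apps like the one mentioned above follow the standard TOTP construction (RFC 6238, built on HOTP from RFC 4226). The sketch below is a generic illustration of that standard, not any vendor's actual implementation; the Base32 secret in the test is the well-known RFC test key, and real secrets would be provisioned per user.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, at_time=None, digits: int = 6, step: int = 30) -> str:
    """Generate a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32)
    now = time.time() if at_time is None else at_time
    counter = int(now) // step                        # current 30-second window
    msg = struct.pack(">Q", counter)                  # counter as 8 big-endian bytes
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

The server and the token app share the secret; because both derive the code from the current time window, the server can verify a submitted code without any network round trip to the device, typically also accepting the adjacent window to absorb clock drift.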
In new findings, criminals are using keystroke grabbers, also called keyloggers, on public computers to steal information. In one case, a man stole the information of 150 people, including names, birth dates, and Social Security numbers. With this type of information, the criminal was able to file tax returns for many of them and receive their money. A keystroke grabber is a device that's inserted into a USB port and then connected to the hardware, acting as a conduit between the computer and the keyboard. Not only that, but computers are blind to this hardware: you wouldn't even know it's there unless you check the back of the computer. The device tracks the victim's keystrokes, which provides a fraudster with account access information, passwords, and other personal information that can be used to verify an identity. The devices can be used anywhere and can send data wirelessly back to a remote location. A malicious employee or even a member of the cleaning crew could use this device in corporate offices and public facilities. All a fraudster has to do is get in the door. To protect yourself and your customers, we recommend checking your hardware on a regular basis, especially if the computers are open to the public. Always scan your entire system for viruses and spyware. Never log in to personal accounts, type credit card numbers, or use confidential passwords on public computers. Maintaining a fraud prevention strategy is an ongoing challenge most businesses face. Cyber attacks affect numerous industries, and criminals are constantly coming up with new ways to steal consumer information. The more information sharing that can be facilitated, the more effective fraud-fighting efforts will be. Increasing your awareness of these activities and protecting your customers (as well as yourself) from identity theft should always be a top priority.
Reinforce your fraud prevention strategy with education and awareness of these criminal activities. [Contributed by EVS Marketing]
You have two kinds of passwords you can create: General and Embedded.

A general password is a password that's created from the main Passwords section, and then usually linked as a related item to the relevant assets. These passwords have many uses, but should always be used whenever you have a password that can be linked to multiple assets. Think one-to-many relationships. For example, you have a password for a domain registrar (such as GoDaddy) that's associated with several domains. You could create embedded passwords in the relevant assets instead, but each time the same data is entered more than once, it causes a drop in productivity levels and also introduces the risk of data entry error.

Key benefits of general passwords:
- Eliminates data duplication.
- Reduces risk of accidental deletion.
- Can set security permissions on just the password itself.

When this kind of password can be particularly useful:
- Active Directory
- Domain registrar
- DNS hosting
- Web hosting

An embedded password is a password that is created from within configuration items and other assets through an Embedded Passwords section on the side panel. You may want to use an embedded password when you have a password that can only be used in one context, such as one device. Think one-to-one relationships.

When this kind of password may be useful:
- Administrative Web Interface (username, password, and URL) for a firewall or switch
- Local admin account on a Windows server

Protecting your passwords
We know how important your password security is to you. With general passwords, you will always have fine-grained control over who can access each individual password. But keep in mind that there are no permission settings for individual embedded passwords. To change who can access an embedded password, you will need to apply permissions to the containing item. For more information about managing network and server credentials in IT Glue, see Passwords.
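The one-to-many versus one-to-one distinction can be pictured with a small data-model sketch. This is not IT Glue's actual schema; the classes and fields below are invented purely to illustrate the relationships.

```python
# Illustrative data model (not IT Glue's real schema): a general password
# is stored once and linked to many assets (one-to-many), while an
# embedded password lives inside a single asset (one-to-one).
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    embedded_passwords: list = field(default_factory=list)  # only meaningful here

@dataclass
class GeneralPassword:
    name: str
    linked_assets: list = field(default_factory=list)  # one credential, many assets

registrar = GeneralPassword("GoDaddy account")
for domain in ("example.com", "example.net", "example.org"):
    registrar.linked_assets.append(Asset(domain))  # stored once, linked thrice

firewall = Asset("edge-firewall")
firewall.embedded_passwords.append("admin-ui credential")  # one device, one context

print(len(registrar.linked_assets))  # the single registrar login covers 3 assets
```

If the registrar credential changes, it is updated in one place, which is exactly the duplication problem general passwords avoid.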
As defensive technologies based on machine learning become increasingly numerous, so will offensive ones, whether wielded by attackers or pentesters. The idea is the same: train the system/tool with quality base data, and make it able to both extrapolate from it and improvise and try out new techniques.

Finding and exploiting vulnerabilities
At this year's edition of DEF CON, researchers from Bishop Fox have demonstrated DeepHack, their own proof-of-concept, open-source hacking AI. "This bot learns how to break into web applications using a neural network, trial-and-error, and a frightening disregard for humankind," they noted. "DeepHack works the following way: Neural networks used in reinforcement learning excel at finding solutions to games. By describing a problem as a 'game' with winners, losers, points, objectives, and actions, a neural network can be trained to be proficient at 'playing' it. The AI is rewarded every time it sends a request to gain new information about the target system, thereby discovering what types of requests lead to that information," the company explains. Apparently, DeepHack does not need to have any prior knowledge of apps or databases, and based on a single algorithm, it learns how to exploit multiple kinds of vulnerabilities. "AI-based hacking tools are emerging as a class of technology that pentesters have yet to fully explore. We guarantee that you'll be either writing machine learning hacking tools next year, or desperately attempting to defend against them," the researchers concluded.

Bypassing antivirus software
At the same conference, Hyrum Anderson, Technical Director of Data Science at Endgame, explained how an AI agent trained through reinforcement learning to modify malware can successfully evade machine learning malware detection. Most next-generation antivirus software relies on machine learning to generalize to detect never-before-seen malware.
Like DeepHack, Anderson's AI agent was able to "learn" by playing thousands of "games." Its "opponent" was a next-gen AV malware detector, and with each game the agent came closer to forming a solid idea of which sequence of functionality-preserving changes it could perform on a Windows PE malware file in order for it to bypass the detector. The final results were modest, with only 16 percent of the customized samples lobbed at the AV getting through, but Anderson believes others could do better.
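The reinforcement-learning framing both projects share (try an action, collect a reward when it yields progress, prefer what worked) can be sketched generically. This is illustrative pseudologic, not DeepHack's or Anderson's actual code; the action names and the toy environment are invented.

```python
import random

# Illustrative RL loop in the spirit described above: the agent tries
# actions, the environment returns a reward (e.g., 1 when new information
# was gained), and a simple tabular policy is nudged toward actions that
# paid off. Real systems use neural networks over far richer state.
random.seed(0)

ACTIONS = ["probe_param", "mutate_header", "append_payload"]

def play_episode(policy, env_reward, epsilon=0.2, lr=0.5):
    # epsilon-greedy: usually exploit the best-known action, sometimes explore
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(policy, key=policy.get)
    reward = env_reward(action)
    policy[action] += lr * (reward - policy[action])  # move estimate toward reward
    return action, reward

# Toy environment: only "append_payload" ever yields new information.
def toy_env(action):
    return 1.0 if action == "append_payload" else 0.0

policy = {a: 0.0 for a in ACTIONS}
for _ in range(500):
    play_episode(policy, toy_env)

best = max(policy, key=policy.get)
print(best)  # after training, the agent favors the rewarding action
```

The same loop, with a detector's verdict as the reward signal, is the shape of Anderson's evasion experiment.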
Databases are extremely attractive targets for cybercriminals due to the sensitive and valuable information they contain. This data can range from intellectual property and financial information to personal user data and mission-critical business data. A database vulnerability is a weakness in the security of your database that allows unauthorized access. This can lead to data loss, stolen information, and even identity theft. Data breaches are a growing concern for businesses, as malicious actors probe databases for vulnerabilities to access the information within. So it's imperative to know whether your database is vulnerable to data breaches and how to act to prevent them.

Common database vulnerabilities
There's a risk that even small mistakes can lead to a database compromise. In order to safeguard your company from any security breach, it's important to take the necessary measures that will provide maximum security. Among the most common types of database vulnerabilities are SQL injection, weak or default credentials, misconfigured permissions, unpatched software, and unencrypted sensitive data.

How to protect against database vulnerabilities
The first step to understanding data security is to find out where your company's vulnerabilities are and work to mitigate the risks of potential data breaches. Human error is responsible for most data breaches, which makes it even more important to implement robust security policies to protect your databases. While this won't completely eliminate risk, it reduces vulnerabilities and protects against potential data breaches. Database backups are also an important part of ensuring your data can be restored if a breach or failure does occur. Partnering with a specialist database service provider can give you peace of mind and satisfaction knowing you're taking the necessary steps to keep your database and critical information safe. Everconnect's database managed service team can perform a comprehensive security review of your system and protect your business from malicious attacks, keeping your sensitive information safe.
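To make one of these vulnerabilities concrete, the sketch below contrasts an injection-prone query with a parameterized one. SQLite and the table contents are used only to keep the example self-contained; the principle applies to any SQL database driver.

```python
import sqlite3

# Illustrative defense against SQL injection: pass user input as a bound
# parameter instead of concatenating it into the SQL string.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "alice' OR '1'='1"

# Vulnerable pattern: string concatenation lets the input rewrite the query.
vulnerable = conn.execute(
    "SELECT secret FROM users WHERE name = '" + attacker_input + "'"
).fetchall()

# Safe pattern: the driver treats the input as data, never as SQL.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (attacker_input,)
).fetchall()

print(len(vulnerable), len(safe))  # the injection leaks a row; the bound query returns none
```

The concatenated query collapses to `WHERE name = 'alice' OR '1'='1'` and matches every row, while the parameterized query searches for the literal (nonexistent) name.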
A database is a structured set of information stored on a computer and accessible in many ways. Relational, flat-file, hierarchical, network, and object-oriented are different types of database structures. SQL is the query language used by most relational database management systems today. This blog category discusses the different types of databases that are available in the market. We will also share some useful tips on how you can use these databases for your websites.

Healthcare organizations face constant pressure to deliver better care and results at a lower cost. To achieve that goal, data must be collected, analyzed, and...

Databases are essential to store and manage data across applications. However, traditional databases can be slow and limiting, especially for modern applications that support mobile...

Database systems are critical for businesses. They store and process data, and they play a significant role in the management of business information. Understanding how...

Data is the new gold and, as such, businesses need to invest in data management technologies to make their enterprise smarter, faster, and more productive.
Zero Trust is a security framework that requires all network users, internal and external, to be authorized, authenticated, and continuously validated for their security configuration and posture before being granted access to applications and data. The Zero Trust model is a "principle of least privilege" that's applied to network connectivity. "It assumes that assets and users will interact in hostile environments which, by their very nature, cannot be trusted," explains Bryan Fite, CISO at BT Americas. "Therefore, controls must be leveraged to offset this lack of trust." Zero Trust provides organizations with a multi-layered approach to securing their environment, says Steve Ryan, a senior consultant at BARR Advisory. "Through network segmentation, granular user-access controls, and continuous monitoring, organizations that implement a Zero Trust model are able to mitigate the risks of a security breach by minimizing the areas of their environment vulnerable to an attack." Even if an attacker manages to gain entry, a Zero Trust model requires re-validation at every entry point in the network, he adds. Whether an organization is a primary target, a victim of multi-target assault, or collateral damage, it's vulnerable to attack if it uses the public, unsecured Internet. "Fortunately, its ability to operate, be resilient, and thrive can be quantifiably improved by adopting Zero Trust principles and controls," Fite says. Zero Trust provides a model that creates appropriate risk coverage for all technology layers, says Nick Puetz, a managing director at global consulting firm Protiviti. "As adversarial attack complexity increases, and available skilled resources continue to lag, automation and orchestration are going to play a key role in scaling cyber operations," he notes. "Modern technology requires modern frameworks and capabilities to address risks. Zero Trust is one example of modern risk mitigation." 
Planning and execution
Organizations should begin their Zero Trust journey by defining a cyber capabilities architecture. "Don’t start with a specific technology in mind," Puetz cautions. "Instead, enumerate the capabilities you want to enable through technology." Take stock of the technologies that are already in place, Puetz suggests. Most modern network technologies can easily integrate with or already include Zero Trust functionality. "Start small, get some quick wins, prove out the model: crawl, walk, run," he advises.

Deciding what to deploy and where becomes the guidebook for your Zero Trust journey, says Scott Riccon, principal consultant, cybersecurity, with global technology research and advisory firm ISG. "Organizations that don't spend the time up front to establish a shared vision will rapidly find numerous Zero Trust projects sponsored by different teams within the organization." Such initiatives will eventually get to a level of Zero Trust, but only with duplicative capabilities, longer project times, and additional costs, he notes.

When embarking on their Zero Trust journey, network leaders need to remember that any areas left undone can easily turn into exploitable gaps and seams. Don't get lulled into a false sense of security. "You are reducing the threat surface, not eliminating it," Riccon warns. Nearly all network owners already possess some or all of the building blocks needed to begin their Zero Trust journey. "Organizations can accelerate their journey by getting more value out of their existing estate," Fite says. "Moreover, by integrating, optimizing, and automating existing controls, organizations can gain the confidence and credibility needed to properly transform and thrive in the Internet of Dangerous Things."

A new philosophy
Zero Trust is not a technology you can buy or a person you can hire. "It's a holistic philosophy that could take years to fully realize, and many companies will not fully realize Zero Trust nirvana," Puetz says.
"Treat this as a journey, not a specific destination, and your expectations will be well aligned." Don’t let Zero Trust's new approach keep you from exploring it, Riccon says. "Change can be good," he notes. "We are evolving from the blocking and tackling fundamentals of cybersecurity to the more advanced plays that allow us to move the ball farther and faster down the field." "Every organization’s Zero Trust journey will be different," Fite concludes.

Building a Five-Step Zero Trust Strategy
Joe McMann, global cybersecurity portfolio lead at business advisory firm Capgemini, offers the following five steps for building a Zero Trust network security strategy.
1. Define the Attack Surface. If you don’t fully understand what you have in terms of network resources and how everything is interconnected, you won't be able to devise an appropriate protection strategy.
2. Devise a Network Segmentation Plan. Include key business functions and required network traffic. Look to isolate functions that need to be protected and eliminate lateral movement if compromised.
3. Establish Firm Policies. Implement access control policies to better manage access to each new network segment.
4. Create Strong Zero Trust Control Practices. These practices should become an integral part of the security playbook that's used to help network team members fully understand the Zero Trust architecture and how it works.
5. Build a Managed Detection and Response Strategy. Zero Trust is an important concept that should be built into all existing network architectures. But don't forget that a managed detection and response strategy is still necessary to prevent attacks, as well as to respond to network breaches.
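The "never trust, always verify" decision made at every access attempt can be sketched as a simple policy check. The attributes and checks below are invented for illustration; real Zero Trust products evaluate far richer signals (identity, device posture, location, behavior) and enforce them at many control points.

```python
# Illustrative sketch of a per-request Zero Trust authorization decision.
# Every check must pass on every request, not just once at the perimeter.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated: bool   # identity verified (e.g., via MFA)
    device_compliant: bool     # security posture check passed
    segment_allowed: bool      # policy permits this user on this segment

def authorize(req: AccessRequest) -> bool:
    """Grant access only if every signal validates; deny by default."""
    return req.user_authenticated and req.device_compliant and req.segment_allowed

# A valid user on a non-compliant laptop is still denied: least privilege
# is re-evaluated continuously rather than granted once at login.
print(authorize(AccessRequest(True, False, True)))   # denied
print(authorize(AccessRequest(True, True, True)))    # granted
```

The deny-by-default shape is the point: trust is the output of the checks, never the input.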
Hair Analysis | Science or Objective Opinion

Scientific Method & Evidence
When evidence is presented in the courtroom, it must face severe scrutiny by all sides in a trial. This scrutiny has been defined by previous court decisions regarding what is considered valid evidence or forensics that can be used in trial courts. These decisions have confirmed that, regardless of the type of evidence presented, any analysis or conclusive results given in court regarding findings on the evidence must come from the scientific method. The first written court decision came in 1923 in Frye v. United States. An update came in 1975, when Rule 702 of the Federal Rules of Evidence gave judges some authority to review the validity of evidence. In later years this ruling was updated by the Daubert Standard, named for the 1993 case Daubert v. Merrell Dow Pharmaceuticals, Inc.; this standard keeps Rule 702 but adds more definition. The main goal of these decisions is that any forensics processed for analysis and presented to trial courtrooms as evidence should be based in science and follow the scientific method to achieve results. Conclusions must be validated and accepted in a variety of ways. These validations, or views of accepted science, can come from known, accepted standards or from professional peer review.

Hair Analysis: The Basics
"It's a match!" We all hold our breath during crime and mystery movies awaiting those words at just the right moment. It is much like the game of Clue. It was the candlestick in the library, and now we have matched a suspect's hair to the hair on the sofa. What were the chances? Indeed, forensic hair analysis was once a go-to for solving crime and providing proof in the courtroom. It has been a standard scene since the 1950s. Today, it is only suitable for television drama. Hair analysis, even by the top FBI crime labs, has all but been debunked.
There are different types of hair analysis, so it does continue as a valid means of research. Certain hairs may contain a root or other cells on which DNA testing can be done. Under these circumstances, a hair DNA test result is compared to the suspect's DNA, and this is conclusive science. Here is a case in which it is possible to match the hair to another human being with certainty.

Another type of hair analysis, which is also used in courtrooms as evidence, does not require a "match." The hair is removed from the person's head, labeled, and sent to the lab. There is no question as to whom the hair belongs. This type of testing can determine a past history of drug use or show environmental exposures. Such hair evidence is generally used when someone facing the judge may try to hide or lie about their use of illicit drugs, and the results can indicate more than a urine screen. A urine screen can tell what may currently be in the system, or what has been used in the last 24-72 hours, in most cases. However, a drug analysis done on hair can tell much more, like the types of substances used as well as how often. The results can show the judge a behavioral pattern, with a history of use up to three months before testing. The results for environmental exposures can be similar.

Forensic hair analysis for establishing the identity of a suspect at a crime scene is entirely different. Since the 1950s it has been used in courtrooms as a source of scientific certainty to identify murderers, rapists, and other criminals and sentence them to long periods in prison, sometimes life or the death penalty. In a lab setting, strands of hair are mounted on a microscope slide and examined in detail. Forensic scientists compare the hair found at the crime scene to hair from the suspect, looking at distinguishing features such as color, pigment depth, thickness, and texture.
From this study, these forensic scientists have presented to court judges and juries that this evidence identifies a specific individual as the defining suspect in the case. Many FBI agents have even testified that the results identify the suspect to a certainty of 1 in 10,000 people. Judges and juries alike have accepted these testimonies as scientific truth and have decided that individuals were guilty of the crime in question.

Today, the FBI admits that it is not science at all. No "uniform standard" exists in the study of matching hairs. Scientists have yet to determine the precise number of hair characteristics needed to know the certainty of a match. The "1 in 10,000" figures thrown out in court testimony over the years were utterly false. No one really knows, and if nearly 25% of the planet has the same hair structure, how can there be a scientific measurement designating a match of any kind?

The elite FBI crime laboratory has been under federal investigation since 2012. For years, its scientists overstated the validity of hair matches in trials. The problem is that when a scientist presents their findings as truth to a jury, a jury of peers made up of everyday people selected from the community, the jurors do not have the background to understand that what is presented to them is exaggerated. There may be other evidence presented at the trial, but when the jury hears the hair match as truth, they may base their conviction on that single piece of evidence. In many of these cases, people convicted solely on identification by hair analysis were declared guilty and later found innocent.

Innocent People Convicted by Junk Science
Decades have passed while the FBI used junk science results that it pushed upon unsuspecting courts, judges, and juries. Many individuals have been wrongfully convicted based on these findings.
In 2009, there was a push to change the position of these types of sciences presented in court, as well as to prove the innocence of these individuals. Yet when confronted by the fact that the science behind many types of forensic matching was junk, the FBI still wanted to hold on to some of its background and abilities to analyze results. The only thing pressing to the FBI seems to be getting a conviction, not getting to the truth. Many individuals now face the hard work of appealing convictions while waiting for acceptance by pro bono attorneys. However, according to research by Tim Cushing, "the damage has been done, and the FBI’s belated recognition of its contribution to the farce that is our criminal justice system isn’t going to give back years of wrongfully-obtained lives." He is right. There is no way to give back years of someone's life that were wrongfully taken.

The FBI has acknowledged that for decades it sent out experts who overstated the findings in court testimony. It was not just in one or two instances; it was nearly every forensic scientist sent out to testify in trials. In other words, they lied.

Lt. General John F. Sattler, the motivational speaker chosen for the 71st Scientific Meeting of the American Academy of Forensic Sciences (AAFS), brought his military background into the discussion of the junk science problem facing courtrooms across the country. Sattler began by pointing to the theme of the 2019 conference: "Diligence (to the Effort), Dedication (to the Handling of Details), Devotion (to the Field)." These were the words that headed the program handed out to all attendees. He spoke about how these principles would not fail a person in their daily goals, and how, if followed in forensics, they could solve many problems.
Sattler continued with emphasis on the words "Truth matters." He went on to describe how, in any area of life, settling for the status quo is dangerous and, in some cases, deadly. Sattler recalled the military bases in Iraq, where every base had a sign posted in large letters at the entrance: "Complacency kills." His point was that becoming complacent about the types of forensic evidence being used in court trials had killed innocent victims. He wanted to bring that point home to the attendees, who included lawyers, scientists, and forensic specialists from around the world.

Hair analysis results across the country have been rebuked and overturned since the 2015 revelation regarding the junk science coming out of the FBI's hair microscopy division. The news regarding wrongful convictions was more than alarming. According to testimony given by the FBI, its hair microscopy specialists had given erroneous statements on the stand in 33 death row cases that were later found to be wrongful convictions. Of these 33 cases, "Nine of these defendants have already been executed, and five died of other causes while on death row."

With this type of damning evidence, what then is the value of hair analysis at all? It is still a very relevant part of crime scene investigation. However, depending on the results, some of it is not considered science as recognized under the rules of the scientific method. Comparing hair microscopically can give some indication of the identity of a suspect, though a visual comparison cannot confirm it. Analysts can look for cells or roots still attached to the hair, which could lead directly to a DNA analysis that could conclusively include or exclude a defendant. Pulling all of this together: as a crime scene investigator, it is essential to gather as much evidence as possible to solve the crime.
This would include any available hair that can be analyzed. Be aware: those results cannot distinctively point out your suspect. They can be the icing on the cake. If you have other positive identifying evidence, a comparison can suggest that the hair is similar to your suspect's. You may even get lucky and find a way to DNA analysis through your hair samples. Every sample can be a clue. The idea is to allow the evidence to lead you to solve crime situations, not to use the clues to fit a specific scenario. If you follow this, and of course the 3 D's above, your career in forensics, law enforcement, or crime scene investigation is sure to go far.
Updating your beliefs
How Bayes Rule affects risk

Usually, changing our beliefs is seen as a negative thing. But when those beliefs represent our state of uncertainty regarding a particular cybersecurity risk, you'd better use all the tools at hand to reduce that uncertainty, i.e., to measure it.

Why do we speak of "belief" and not "probability" here? Intuitively, when we mention probabilities, we mean some belief or measure of uncertainty. For example, when giving a confidence interval, we say we believe the actual value is between the boundaries of the interval, up to a certain degree of confidence. When we simulate multiple scenarios in Monte Carlo simulations and finally aggregate the results, we're expressing that we believe the loss will be so many millions or larger.

In science, hypotheses are disproven through observable, measurable evidence. Similarly, testing in general, and pentesting in particular, can change our beliefs, that is, our initially proposed or prior probabilities, based upon evidence. The mathematical tool for updating these beliefs is a simple one: Bayes Rule. However, it does require us to discuss a few basic probability theory facts. If you're familiar with them, feel free to skip straight to the application to cybersecurity risk.

Let us consider a simple example for illustrating the basic rules of probability: we have a bag with 2 blue marbles and 3 red ones, and we're going to draw marbles from the bag (without looking!) and we want to find the probabilities of drawing each kind. Let us call R the event of picking a red marble and B the event of picking a blue one. Their probabilities are P(B) = 2/5 and P(R) = 3/5, in principle. What if now we draw a second marble? Now the probabilities are subject to the result of the first draw. For example, if we're given that the first marble picked was blue, then the probability of drawing a red marble is now 3/4, since there are only 4 marbles left.
This is a conditional probability: it is the chance of event R given that B happened, denoted P(R|B). This situation can be illustrated with a tree diagram like this:

Figure 1. Probability tree diagram. Via MathsIsFun

We can find the probability of a branch, that is, of the succession of two events, by multiplying the probabilities on the arrows, as seen above. And we can add related branches to make up single events: the probability of the third branch from top to bottom is 30%, so if we add that to the 10% of the first branch, we get that the probability of the second marble being blue is 40%. This is an application of the total probability theorem.

We know the conditional probabilities for the second marble given the first, but what if they show us that the second one is blue and we had to guess what the first one was? That's where Bayes Rule comes in:

Figure 2. Bayes Rule: P(cause | evidence) = P(evidence | cause) · P(cause) / P(evidence)

If we think of the first event as the cause and the second one as the effect, we have that P(evidence) = 40%. We know that the a priori chance of the first marble being red is 60%, and the probability of observing the evidence given the cause is P(B|R) = 50%. Hence

Figure 3. P(R first | B second) = (50% × 60%) / 40% = 75%

Notice how the extra piece of information, namely that the second marble is blue, updates the chance of the first marble being red from the prior probability of 60% to 75%. Hence the probability of the first being blue is the remaining 25%. So I would bet on the first one being red, and I would give you 3 to 1 odds. This is the power of Bayes Rule: observable evidence, whose likelihood generally depends on the assumed probabilities of the causes, can update or refine our estimates of the likelihoods of the causes.

So how does this apply to cyber risk? Since Bayes Rule helps us reduce our uncertainty, it works as a measurement technique.
While our initial estimates about an event such as suffering a denial of service or data breach may be way off, we can still get a measurement with those rough estimates, plus evidence, plus their probabilities. Consider the following random events:

V: there is a critical vulnerability leading to remote code execution,
A: suffering a successful denial-of-service attack (in a reasonable time period, e.g., a year),
T: penetration test results are positive, indicating the possibility of critical vulnerabilities.

Normally, the chain of events here would be that a positive pen test points to the existence of vulnerabilities, and such a vulnerability might lead to the threat (in this case, the denial of service) materializing. Suppose that we know, from the false positive rate, the probability of the existence of vulnerabilities based on a positive or negative pen test, i.e., P(V|T) and P(V|~T). Here the ~ symbol denotes an event not happening. Now, the existence of a vulnerability does not necessarily imply that the organization will suffer an attack, so we might estimate the probabilities of an attack in the case vulnerabilities exist and in the case they don't. Let P(A|V) = 25% and P(A|~V) = 1%. This, together with P(T) = 1%, the a priori probability that a given penetration test will yield positive results (which we may estimate based on historical data), is all we need in order to estimate the posterior probabilities for V, A, and, in fact, anything we might ask about this particular situation. We might draw a tree diagram like this to describe the situation:

Figure 4. Probability tree for the cyber risk scenario

Probabilities in blue are the given ones. Since branching in a probability tree implies that the involved probabilities are complementary, i.e., they add up to one, we can compute all the others, but we chose not to write them in the diagram to keep it tidy.
Recall that the probability of a single branch is the product of the probabilities that lead to it, so we can compute the probabilities of every branch that ends in A and add them, so that P(A) = 1.3%. If the pen test is positive, what is the probability of being attacked? We could fiddle with formulas, but it's easier to just look at the subtree after the T, the part of the tree that is framed above. In that case, we have shorter branches ending in A:

Figure 5. Probability of attack given a positive test

What if it is negative?

Figure 6. Probability of attack given a negative test

Whatever its results, penetration testing gives you more information about the risk your organization is facing. It is especially remarkable that the initial estimate of 1.3% goes up by more than 18 times when the test is positive. Suppose a year passed, and no denial-of-service attack happened. Does that mean there are no vulnerabilities? We know the probabilities of attack given the existence of vulnerabilities, but not the other way around. First, we find P(V) by total probability (ignoring all the A nodes in the third column):

Figure 7. Probability of vulnerabilities

We already know that P(A) = 1.3%, so P(~A) = 98.7%. Finally, by Bayes Rule:

Figure 8. Bayes Rule

So even if the threat does not materialize, there is still a latent risk of having vulnerabilities. This is yet another example of how we can measure risk, even when our initial estimates are bad, using basic probability theory facts and an appropriate decomposition of the problem. We can estimate the probabilities of events given certain assumed conditions, put that together in a probability tree diagram, and use the tools learned in this article to generate the rest.

References:
- Better Explained. An Intuitive (and Short) Explanation of Bayes' Theorem.
- D. Hubbard, R. Seiersen (2016). How to Measure Anything in Cybersecurity Risk. Wiley.
- D. Lindley (2006). Understanding Uncertainty. Wiley.
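The whole tree can be reproduced in a few lines of code. The article's values for P(V|T) and P(V|~T) appear only in the figures, so the sketch below assumes P(V|T) = 95% and P(V|~T) = 0.3%, values chosen because they reproduce the stated results (P(A) ≈ 1.3% and a roughly 18x jump after a positive test); treat them as illustrative, not as the article's exact numbers.

```python
# Numeric check of the pen-test scenario. P(V|T) and P(V|~T) below are
# assumptions consistent with the stated results; the rest are the
# probabilities given in the text.
p_t = 0.01                     # P(T): a priori chance of a positive pen test
p_v_t, p_v_nt = 0.95, 0.003    # assumed P(V|T) and P(V|~T)
p_a_v, p_a_nv = 0.25, 0.01     # P(A|V) and P(A|~V), as given

# Attack probability conditional on the test outcome
# (total probability over the V / ~V branches of the tree).
p_a_t = p_v_t * p_a_v + (1 - p_v_t) * p_a_nv     # P(A|T)
p_a_nt = p_v_nt * p_a_v + (1 - p_v_nt) * p_a_nv  # P(A|~T)

# Unconditional attack probability and P(V), again by total probability.
p_a = p_t * p_a_t + (1 - p_t) * p_a_nt
p_v = p_t * p_v_t + (1 - p_t) * p_v_nt

# Bayes Rule: latent chance of a vulnerability given no attack was observed.
p_v_na = (1 - p_a_v) * p_v / (1 - p_a)

print(f"P(A)    = {p_a:.4f}")    # ~0.013, matching the text
print(f"P(A|T)  = {p_a_t:.3f}")  # roughly 18x the prior
print(f"P(V|~A) = {p_v_na:.4f}")
```

Swapping in your own priors and re-running is exactly the "update your beliefs" loop the article describes.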
Earlier this month the National Security Agency (NSA) issued a technical advisory warning of an attack dubbed ALPACA. It highlights the cybersecurity risks of using wildcard TLS certificates. So how does this attack work and why does it make wildcard certificates more concerning? What is the ALPACA Attack? The application layer protocol content confusion attack (ALPACA) was first disclosed in June and presented at Black Hat USA 2021. To understand ALPACA, it's helpful to understand how TLS works: The protocol is designed to protect data in transit during a transaction, but it does not bind TCP connections to the intended application layer protocol—whether that's HTTP, SMTP, or any of the many other protocols often secured with TLS. In practice, this means that while TLS secures the data as it's transported and verifies the server name it's connecting to, it doesn't check the application the data is being sent to or even the validity of that data. The researchers estimate that 1.4 million web servers are vulnerable to these cross-protocol attacks. Of these, 119,000 web servers could be attacked via exploitable application servers. What Are Wildcard Certificates? A wildcard certificate is a public key certificate that is used to authenticate multiple hosts. For example, the certificate "*.example.com" can be used for "www.example[.]com", "smtp.example[.]com" and "ftp.example[.]com". Wildcard certificates have therefore become popular with administrators who are under pressure to rapidly roll out and manage servers across large numbers of hosts. They save time and money because a single certificate can be applied to all relevant servers. What Are the Cybersecurity Risks of Wildcard Certificates? If the same certificate is used both for HTTPS and another SSL-enabled protocol server such as SMTPS or FTPS, then an attacker can trick the browser into exfiltrating session cookies or executing a cross-site scripting (XSS) attack.
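As an illustration of how one certificate name can cover many hosts, here is a simplified wildcard-matching sketch in Python. This is not how a real TLS stack does it (real validation follows RFC 6125 and handles more edge cases); it only shows the one-label rule behind "*.example.com".

```python
# Simplified sketch of wildcard certificate name matching.
# The wildcard stands in for exactly one leftmost DNS label.

def matches_wildcard(cert_name: str, hostname: str) -> bool:
    """Return True if cert_name (e.g. '*.example.com') covers hostname."""
    cert_labels = cert_name.lower().split(".")
    host_labels = hostname.lower().split(".")
    if len(cert_labels) != len(host_labels):
        return False  # '*' never spans multiple labels
    return all(c == "*" or c == h
               for c, h in zip(cert_labels, host_labels))

# One '*.example.com' certificate authenticates many different servers:
for host in ("www.example.com", "smtp.example.com", "ftp.example.com"):
    print(host, matches_wildcard("*.example.com", host))    # all True
print(matches_wildcard("*.example.com", "example.com"))      # False
print(matches_wildcard("*.example.com", "a.b.example.com"))  # False
```

The breadth shown by the three True results is exactly what makes a compromised wildcard key so dangerous: one private key impersonates every matching host.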
XSS is a common technique for gaining initial access, and it can also be used for lateral movement. The convenience of sharing certificates also makes them a security risk. As the NSA explains, "If one server hosting a wildcard certificate is compromised, all other servers that can be represented by the wildcard certificate are put at risk. A malicious cyber actor with a wildcard certificate's private key can impersonate any of the sites within the certificate's scope and gain access to user credentials and protected information." Using wildcard certificates increases the risk of ALPACA by opening the door to the cross-protocol attacks described above. ALPACA threat actors can exploit the security weaknesses of wildcard certificates by redirecting traffic from one host to another. How Might This Attack Play Out? The researchers that disclosed this vulnerability focused their efforts on a few protocols, including SMTP, IMAP, POP3, and FTP. However, they noted that there are "hundreds of possible cross-protocol scenarios possible with current TLS enabled applications and servers." The NSA's report on the ALPACA attack also noted that "While the conditions permitting this complicated technique to succeed are uncommon, ongoing research in this area is likely to identify additional configurations vulnerable to this type of malicious activity." Check for Wildcard Certificates in Your Organization ExtraHop Reveal(x) 360 network detection and response (NDR) can discover wildcard certificates in use on your network, as well as expired certificates and self-signed certificates that also represent a security risk. In the Records tab, select SSL Open as your record type, then filter by Certificate Subject Starts With * and click View Records. Then add a filter for your own domain to view only your own certificates. This will yield a list of all SSL Open events that used a wildcard certificate in your environment during the selected time window.
You can explore the list to make sure you don't have any wildcard certificates that are leaving you vulnerable to the ALPACA attack technique by being too broadly scoped or being used across multiple server types such as HTTP and SMTP. Through passive observation of network traffic, Reveal(x) can identify which systems are using which certificates so that you can quickly identify any that are scoped too broadly, detect lateral movement, and remedy the situation to reduce your risk of future ALPACA attacks.
The ABCs of Phishing
This guide walks you through everything you need to know about phishing, starting with what it is, the different types, common techniques being used, and tips for prevention. If you are looking for information about a specific topic, feel free to skip to the section you are looking for using the shortcuts below.
What is phishing?
Phishing is a type of social engineering that refers to attempts to extract personal information such as usernames, passwords, credit card details and other information by impersonating a reputable entity or a known individual and making a request to the target. If the attacker is successful, they will then use the information to access accounts to transfer money, buy things, or carry out other damaging activities including identity theft. Phishing attacks attempt to stimulate emotions and call for immediate action to lure the victim into giving away credentials or other information before taking a moment to analyze the message and question its legitimacy. Attackers use a variety of different techniques and are masters of disguising their attacks. In some cases, attackers will research their targets and use any personal information they find to increase the perception of legitimacy in the eyes of the victim, improving the attack's likelihood of success. To stay safe from these cyber criminals, it is vital to know the different methods they use in order to be able to identify an attack.
The 3 Types of Email Phishing
1. Spear Phishing
What is spear phishing?
By far the most successful type of phishing, spear phishing targets specific individuals or companies by using fraudulent emails that are designed to appear as if they are from someone the recipient knows and/or trusts. The mission of the email is to have the recipient do something (click on a link, download a document, send login credentials, etc.) that the attacker cannot do themselves to achieve their goal.
For example, the attacker may need a recipient to click on and visit a website that will infect their system. The attacker can create the malicious website and construct a legitimate-looking email, and even reference a "mutual friend" or a recent purchase the target has made, but they cannot infect the system without the recipient's action. There are two tactics for spear phishing emails, dictated by the information the attacker is after.
- Mass Attack. If the attackers are looking for a foothold in an organization, they may gather as many email addresses as they can find and send an email to each of them.
- Targeted Data. If the objective is to obtain specific data, such as the names and addresses of customers, the attackers may only send emails to targeted individuals. In doing so, the likelihood of being detected/reported is kept to a minimum while the probability of finding the desired data on an infected system is high.
What does spear phishing look like?
Below is an email that was sent to me; the addresses have been removed to keep the anonymity of both organizations. The email address of the sender replicated this format, [email protected]. John is the name of a director at RealCompany. However, after visiting the website, the email address listed for the director is [email protected]. The attacker is hoping that the targets who know John will not think twice about the address and will assume the message was from John at RealCompany. Since the email appears to be automated, there is no need for the attacker to attempt to replicate John's communication and writing styles, keeping the recipient's suspicion low. The "View Scanned Documents" button brings the recipient to an email sign-in page. First of all, if we were sent a document over DropBox, why are we being brought to a Microsoft Office sign-in page? On top of that, why is the URL not login.microsoftonline.com but instead a very crowded and complicated domain?
The attacker is trying to extract the recipient's log-in credentials. This could be to read confidential emails, to use the account for other, even more targeted phishing attacks, or for other malicious reasons. See other examples in our Spot the Phish series.
Why does spear phishing work?
As the example above shows, spear phishing emails work because they are believable. The attackers search the internet to gather as much information as they can on their potential targets. The bits of personal information give the email a lot of credibility. According to a FireEye report, spear phishing emails have an open rate of 70 percent while mass spam emails have an open rate of 3 percent. Additionally, 50 percent of those who open the spear phishing email will also open the malicious links. The 2016 Q4 APWG Phishing Attack Trends Report shows that the number of phishing attacks in 2016 was 65 percent higher than in 2015. With spear phishing attacks increasing in volume combined with their high success rates, the data suggests that people are the weakest link in security. Since there is no silver bullet in cybersecurity, businesses must use multiple, layered solutions to minimize their risk. As the old saying goes, a chain is only as strong as its weakest link. A proactive approach to fortify security and combat spear phishing is to conduct individualized cybersecurity awareness training for all employees to aid in identifying possible threats.
2. Mass Phishing
What is mass phishing?
This method is the most common type of phishing and, as its name suggests, messages are sent to as many people as the attacker can find in order to extract their personal and/or financial information. These messages may ask the recipient to download a file with malware, visit a malicious website or respond directly with personal information.
BYOD ("bring your own device") is a popular concept among organizations; because of this, mass phishing is not only a threat at the consumer level but also a threat to businesses. While some of the other methods rely on attackers researching their victims to succeed, the engine that drives mass phishing is volume. With no specific target in mind, the entity that the attacker impersonates (banks, credit card companies, government agencies, etc.) will not apply to everyone who receives the message. As a result, only a small fraction of the messages are opened and an even smaller number acted upon.
What does mass phishing look like?
The attacker must decide which entity they will be impersonating. Below is an email that was sent to me in late 2016 where the attacker decided to impersonate PayPal. Ironically, the replica email claimed that my account experienced an unexpected sign-in attempt. The attacker poorly attempted to spoof the email address to look like the email is from PayPal, or should I say PlayPal. Before clicking on the link in question, mouse over the link (do not click) and the tooltip will show you where the link is really taking you. The "Go to Your Account" button contains a bitly URL address, which is a shortened version. For example, this link would bring you to our homepage: http://bit.ly/2uem2qf To find out where a shortened link will take you, visit www.unfurlr.com and plug in the link in question. This free tool will provide you with the real destination and a plethora of other information that may influence your decision to click on the link. The destination for the link in the email was "webapps-sumarry.billgateways.com" (even though it is no longer working, do not visit) and the page looked like the PayPal sign-in page. Now, this was not the best phishing attempt (especially since I do not have a PayPal account with this email) but it is not crazy to believe that this may trick an unsuspecting victim.
Why does mass phishing work?
Let's say the attacker above had a list of 50,000 active email addresses. If only 10 percent of the messages are opened and only 5 percent of the recipients who opened the message fulfil the request, the attacker would have successfully gained access to 250 accounts. PayPal accounts can be linked with bank accounts and credit cards and contain other personal information. Imagine you receive the email but you have never had a PayPal account; odds are you would delete the email and move along. Conversely, if you do have a PayPal account, you may be concerned and decide to investigate. You click on the link and the nearly identical log-in page opens, you fill in your credentials and then are redirected to your account page. There is no issue, which is strange, but you are relieved so you move along. What victims miss is that the fake website recorded their credentials and in the background used them to sign in to the real site. Now the attacker has control and can do as they please. The attacker isn't trying to fool everyone; instead, they are playing the numbers game. With enough emails sent, there is bound to be a situation where the minor details of the email are overlooked and the desired data is successfully extracted.
3. Whaling
What is whaling?
"Whale hunting" is used as a metaphor for landing large accounts that can transform a business. Instead of acquiring business from a large organization, cyber criminals target valuable individuals such as CFOs, celebrities, politicians, or others with high-value credentials. The phishing attempts are extremely targeted and attempt to scare and convince the recipient into acting, for example to avoid legal fees or to save their job. Although whaling is uncommon, it can be the costliest: if cybercriminals can successfully trick the high-profile target, they can gain access to confidential company information or take advantage of their privileged resources.
Facebook and Google fell victim to a phishing attack in which they wired over $100 million; it can happen to anyone.
What does whaling look like?
While other phishing scams may be designed to look like they come from PayPal and attempt to frighten the recipient with claims that their account has been compromised, whaling emails disguise themselves as the FBI requiring information, a C-level executive making a request, or another authority figure that the target is likely to serve. Below is a series of emails that was sent between the CFO and what appeared to be the CEO of the company. To keep their identities and company private, all identifiable information has been removed. The email address of the sender replaced an "i" in the domain with the letter "l" in an attempt to go unnoticed. The CEO told us that, "Dave, our CFO, has received several of these. They started after we began listing our staff members on our website. Interestingly, they only seem to arrive when I am out of the office. That might be a coincidence since I am out a lot but it is still interesting." Again, the little bits of information that attackers gather and use add a lot of credibility to the email. In this case, Dave responsibly called Bob, the real CEO, asking about the request before wiring the money.
Why does whaling work?
It works because the emails are believable. Although the targets for whaling are high-profile individuals, it does not mean that they are experts in cybersecurity.
Proactive tips to prevent falling victim to phishing emails
Implement several layers of defense. The unfortunate reality with cybersecurity is that there is not a single, catch-all solution. Therefore, to mitigate these attacks, it is necessary to have multiple layers of defense. The goal is to make a successful attack extremely difficult by forcing the attacker to jump through as many hoops as possible.
Implementing firewalls, anti-spam software and filters, encrypting sensitive information, employee awareness training, and so on narrows the attacker's margin for error, thus creating more opportunities to identify and eradicate threats. Below are several technology solutions to aid in preventing phishing attacks; however, know that relying on technology alone is not a solution. The best defense is the employees themselves, which makes training the biggest difference-maker.
- Use effective firewalls. Firewalls are the first line of defense against phishing emails. They are designed to protect private networks by preventing unauthorized Internet access, meaning that all messages leaving or entering the private network must pass through the firewall.
- Utilize anti-spam software. This type of software has a variety of functions that are designed to protect users from various threats. Although spam filters are not perfect and will not catch everything, they will identify and remove a good number of phishing emails that would otherwise end up in an inbox, thus reducing risk. This makes anti-spam software an effective layer of defense that is essential to protecting organizations from phishing attacks.
- Implement encryption. Any email that contains sensitive information should be encrypted. Doing so reduces the potential damage that phishing attacks can cause.
- Keep systems and software up to date. Whenever available, update your firewalls, spam filters, anti-virus, and malware detection, and ensure that your systems have the latest security protections and patches. Updates often include bug fixes and new features that help mitigate new types of threats. Otherwise, you run the risk of attackers taking advantage of known exploits and deficiencies of the outdated version.
- Create a security culture. Technical solutions are only half of the puzzle, the other half being human behavior.
The board and executives should create a meaningful security culture by constantly reinforcing cybersecurity awareness and best practices to all members of the organization.
- Educate employees. Implement a cybersecurity awareness training program to give employees the information necessary to identify potential threats they may encounter. The program should include clear company procedures for employees to follow once a threat has been identified.
- Stay updated on the latest phishing attacks. Cyber criminals are constantly devising new phishing methods. Keep an eye on the news for different phishing scams and relay the information to employees. Avoid being the latest victim by being aware of emerging threats.
- Measure the success of your security awareness program. Doing so will aid in identifying individuals who need additional training and will keep security top of mind throughout the organization.
Other Common Phishing Methods
In addition to phishing emails, attackers try to trick us using the communication channels on our phones. Compared to email phishing, the following two methods are not as common, but they can be just as damaging. Be aware of these scams so that you can identify and react to an attack when you or your organization becomes the target.
What is vishing?
Voice phishing – or vishing – is conducted over the telephone and, like the other types, attempts to trick people into giving up personal information or their money. Like mass phishing, vishing is a numbers game. Attackers cast a wide net by calling every phone number they can find to maximize their returns. Attackers will typically spoof their caller ID so that the incoming call appears to come from a legitimate or known phone number. With Voice over Internet Protocol (VoIP) services, spoofing a phone number is easier than ever. Some providers even allow users to configure the displayed number on the provider's configuration webpage.
What does vishing look/sound like?
Vishing is different; it requires the listener to make decisions in real time instead of having the time to analyze and verify the legitimacy of the message. Messages designed to stimulate emotions – such as fear or excitement – combined with processing verbal information leave the listener scrambling to organize their thoughts, creating the ideal opportunity for the attacker. It is common for the attacks to begin with a prerecorded message that will:
- Introduce the caller as being from an authoritative organization
- Alert the listener about an issue with their account or payment
- Ask the listener to press a specified number to resolve the issue
From there, either a prerecorded message or a real person will ask for the listener's Social Security number, bank account information, or any other form of personal information to "authenticate" them. After the attacker has extracted all the information they were looking for, they will either drop the call or transfer the call to the actual support line. This was just an example; there are many variations and methods.
Tips to avoid falling victim to a vishing attack
- Be suspicious of unknown callers. People should be just as skeptical of a phone call asking for personal information as they are of an email asking for personal information.
- Don't trust caller ID. Even if you recognize the number, remember that it can easily be modified to display the number of a legitimate organization.
- What if the phone call is real? After receiving an unsolicited phone call, it is important to proceed with caution. Before giving away any personal information, be sure to ask the caller questions that someone in their position should be able to answer. If your bank is calling, ask them for your account number. The person calling should have that information readily available.
Regardless of whether the individual provides the correct information, you should tell them that you will call them back in a moment. From there you will be certain that you are talking to an employee of the company in question and can ask about the previous call. Block automated calls to reduce your vulnerability to vishing scams. This can be done by visiting the National Do Not Call Registry. Plus, who wouldn't mind a few fewer automated calls?
What is smishing?
Smishing is short for SMS phishing. Attackers send text messages with the intent of deceiving the recipient into giving out their personal information or downloading malware onto their mobile phone. With BYOD ("bring your own device") increasing in popularity, smishing has become a threat to businesses in addition to being a consumer threat.
What does it look like?
Like the other types of phishing, smishing plays on emotions to trick the recipient. Smishing messages may use fear by claiming to be from the IRS, your bank, or another service giving notification of an urgent issue. Other messages use excitement and hope by claiming to be someone you gave your number to at a bar or that you won a prize. Attackers are creative and are always trying to come up with new angles. The goal is to have you click on a link or reply with your personal information.
Why does smishing work?
Most people are aware that there are risks in clicking on links and downloading files from an email while on their computer. However, people often overlook these risks when they are on their smartphone, especially when the message arrives as a text.
Tips to avoid falling victim to a smishing attack
- Smishing scams usually begin with a pressing call to action. If you receive a text message alerting you of an urgent security issue or an act-now-or-miss-out proposition, ignore and delete the message.
- Do NOT send personal, private or financial information via text message.
- No financial institution will send you a text message asking for your account information. If you have any concerns about your account, contact the institution via their website or their listed phone number.
- Never click a link in, or reply to, a message from an unknown sender that you are unsure about. Simply don't respond and delete the message.
- Look at the number the message is coming from. Keep an eye out for phone numbers that do not look like traditional 10-digit mobile phone numbers. Attackers often use email-to-text services instead of their real phone number to conceal their identity.
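Several of the attacks above hinged on lookalike senders: "PlayPal" for PayPal, an "l" swapped into a real domain. As a rough illustration of automating that check, here is a hypothetical helper using Python's standard difflib; the trusted list and similarity cutoff are made-up values, not a production heuristic.

```python
# Hypothetical helper that flags sender domains suspiciously close to
# domains you trust -- the lookalike tricks described above.
import difflib

TRUSTED = ["paypal.com", "realcompany.com"]  # example trusted list

def lookalike_of(domain, trusted=TRUSTED, cutoff=0.85):
    """Return the trusted domain this one imitates, or None."""
    domain = domain.lower()
    if domain in trusted:
        return None  # exact match: genuinely trusted, nothing to flag
    hits = difflib.get_close_matches(domain, trusted, n=1, cutoff=cutoff)
    return hits[0] if hits else None

print(lookalike_of("paypa1.com"))        # paypal.com (digit 1 for letter l)
print(lookalike_of("realcornpany.com"))  # realcompany.com ('rn' mimics 'm')
print(lookalike_of("paypal.com"))        # None -- the real thing
```

A mail gateway or awareness-training exercise could run every inbound sender domain through a check like this and flag near-misses for human review.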
When you open a web page, all sorts of things need to happen in the background before you get your shiny website on your screen. We will now see what happens in the networking system to make that possible. The TCP/IP protocol suite is what makes sending and receiving most of the data across the Internet possible. But how do data packets know how to find us, and how do we know how to find the IP addresses of the web servers where these pages are stored? Data may not even take the same route in each direction: when we send a request to the server, the packets may flow along one route while the server's response travels back to our computer along another. The Internet is the biggest computer networking system. At every moment it knows how to find the best route to any device connected among its nodes. But how is this data transferred across the wires, fibres and air? Data is divided into small packets. Every time we send a request towards a server, our request must first be divided into packets, mostly of the same size. Each of those packets needs the destination IP address written on it so that it can be routed through the network. In order to find out the destination IP address of the server – remember that we are typing a URL into the browser, not an IP address – your computer, before sending out all those packets, will contact a public DNS server (domain name server), which holds the information about the IP address to which packets must be forwarded in order to reach the page linked to your URL. Public DNS servers are organized into a hierarchical system that keeps track of the IP addresses for all URLs (domain names) registered on the Internet. With this database, DNS is able to translate our request for the web page URL into the IP address of the server on which the web page is stored.
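As a toy model of that hierarchical lookup, the sketch below simulates a resolver walking from a root server to a TLD server to an authoritative server. The zone data and the IP address are fictitious, and a real resolver of course speaks the DNS protocol over the network rather than reading a dictionary.

```python
# Toy model of hierarchical DNS resolution (zone data and IP are made up).

ROOT = {
    "com": {  # the TLD server for .com delegates to authoritative zones
        "howdoesinternetwork.com": {  # authoritative server's records
            "howdoesinternetwork.com": "93.184.216.34",
            "www.howdoesinternetwork.com": "93.184.216.34",
        },
    },
}

def resolve(name: str) -> str:
    """Walk root -> TLD -> authoritative, like a recursive resolver."""
    tld = name.rsplit(".", 1)[-1]         # step 1: ask the root for the TLD
    tld_zone = ROOT[tld]
    for zone, records in tld_zone.items():  # step 2: find the matching zone
        if name == zone or name.endswith("." + zone):
            return records[name]            # step 3: read the final record
    raise KeyError(f"NXDOMAIN: {name}")

print(resolve("www.howdoesinternetwork.com"))  # 93.184.216.34
```

The "forward the request to the next DNS server" behavior described below is exactly this walk: each level only knows enough to point the resolver one step closer to the answer.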
So, when you open your web browser on your computer and type in the URL of this page, howdoesinternetwork.com, the computer must connect to the server where this page is stored and download the page to your computer in order for you to see it. Your computer knows the IP address of the DNS server either because you configured it manually on your network card (NIC) or because it was assigned by your DHCP server, which at home is usually your Internet router. The computer's DNS request goes through the Internet and at some point hits a DNS server, which will look in its huge database and match the domain name you've entered with the IP address of the proper server. If the DNS server does not find a match, it will forward the request to the next DNS server, which may have more information. Not all DNS servers have the data about all domain names and their addresses. When the server is found and the request has reached it, it will respond by sending the requested files in a series of packets. Packets are files divided into small pieces that range between 1,000 and 1,500 bytes. Packets have some extra data called headers at the beginning and footers at the end; this data tells computers what is in the packet and how the information can be put together with other packets to recreate the entire file that was sent. Each packet travels through the Internet to your computer. Packets don't necessarily all take the same path; they will usually travel along the path of least resistance, which is the best path at that moment. In fact, that is the most powerful feature of the Internet. Packets can travel along different routes, avoiding congested links and reaching their destination even when an entire part of the Internet is down. In the case of different types of files, the way the network makes communication work is basically the same.
VoIP calls, e-mails and other kinds of files are also divided into packets and sent through the network, to be recreated at the other end using the information in the headers.
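The splitting and reassembly described above can be sketched in a few lines of Python; the 1,200-byte payload size is an arbitrary choice within the 1,000–1,500 byte range mentioned, and the sequence number stands in for the real header fields.

```python
# Illustrative sketch: split a payload into numbered packets and
# reassemble them regardless of arrival order.
import random

MTU = 1200  # payload bytes per packet, within the cited 1,000-1,500 range

def packetize(data: bytes):
    """Split data into (sequence_number, chunk) pairs -- a toy 'header'."""
    return [(seq, data[i:i + MTU])
            for seq, i in enumerate(range(0, len(data), MTU))]

def reassemble(packets):
    """Rebuild the original file even if packets arrived out of order."""
    return b"".join(chunk for _, chunk in sorted(packets))

message = b"x" * 5000            # a 5,000-byte 'file'
packets = packetize(message)
print(len(packets))              # 5 packets (4 full + 1 partial)
random.shuffle(packets)          # packets may take different routes
print(reassemble(packets) == message)  # True
```

Because each packet carries its own sequence number, the receiver can put the file back together no matter which routes the packets took, which is exactly why the Internet can route around congestion and outages.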
Threat actors are looking for ways to disrupt the election with tactics such as stealing sensitive information, modifying votes after they have been cast to sway the outcome and spreading misinformation. With less than a week until the 2020 presidential election, cybersecurity is top of mind as malicious actors look to undermine the nation's democracy. As voters make their final decisions, threat actors are setting their sights on vehicles for attack, disruption and misinformation. With the majority of today's cyberattacks including tactics such as establishing persistence, evading defenses by impeding security tooling and even deploying destructive malware, extra security measures are required to secure our democracy. According to VMware Carbon Black's 2020 Global Incident Response Threat Report, custom malware is now being used in 50% of the attacks surveyed, demonstrating that malware and malware services can be purchased to empower criminals, spies and terrorists, many of whom do not have the sophisticated resources to execute these attacks on their own. In response to these mounting threats, security concerns ahead of the election include the following:
- Voter registration systems and databases. Because these platforms are managed on a state-by-state basis, and often without built-in security, attackers could manipulate results (especially in swing states) or alter the integrity of voter records for a specific political party by changing names or addresses, ultimately preventing people from casting their ballot.
- Websites monitoring real-time election results.
While these sites help inform local media on the status of the election, they can also be easily manipulated by hackers to show false information, putting participation in the election at risk by causing confusion and distrust among voters.
- Major media outlets. Outlets with a strong partisan stance will be particularly attractive to hackers, who can manipulate their social channels to create deceptive accounts that promote disinformation or data mine these major outlets' followers for potential target lists.
This year has shown us that nothing is off-limits, and cybercriminals will continue to exploit events like the presidential election for personal gain. The threat report also found that 73% of respondents believe that there will be foreign influence on the 2020 presidential election, and 60% believe it will be influenced by a cyberattack. These threats to our democracy are exacerbated by increasingly sophisticated disinformation campaigns, designed to sow division and create conflict through propaganda. While these are some of the more prominent issues facing November's election, we are only scratching the surface and must continue our efforts toward a more secure democratic system, keeping cybersecurity at the forefront.
As mobile device users, we find ourselves more attached to our devices every day, not only for daily tasks, work and leisure, but also for security: our passwords, conversations and keys are in there, and in many respects cyber assaults are a problem not only for us but also for our companies. Every day we hear more news about cyber assaults, malware, or hackers seizing email account passwords, and they usually gain access to this information through cell phones, because we do not know how to use them safely. We only need to look at the data: according to the Pew Research Center, 30% of people do not lock the screen of their phone. If that is the easiest step in protecting ourselves, it says a lot about our interest in security in general. The proliferation of apps that remember passwords only underlines the point: being able to access any of our accounts, personal or professional, from the device has reduced our interest in keeping it safe, and many people have suffered for it.

The oldest method of stealing information from electronic devices is malware. This kind of malicious software comes with different levels of danger: it can interfere with normal phone use or disable the device completely. Although it is the oldest practice, malware keeps evolving, which makes it ever easier to plant on mobile devices.

Possible cyber assault breaches

A fact that may surprise many is how vulnerable our electronic devices are to hacking when we connect them to public charging stations. When we plug in our phone, the connection transmits not only energy but also data; exploiting it is a hacking technique known as "juice jacking". Another common way our phones are exposed to cyber assaults is through hotspots or access points. Because little interest goes into securing them and they are accessible to anyone, hackers are encouraged to load them with malicious software.
The lack of a firewall, such as the one provided by our own home router or by protected Wi-Fi areas, makes everything more dangerous for our devices.

Phishing is another practical way to steal information from a device: hackers create pirated apps that simulate the functionality of legitimate ones, and when these apps are installed, the bundled malware grants access to the phone. The best we can do is check and verify everything we install on our devices; using official sources and reading the permissions these apps request is very important.

Protecting your mobile devices from cyber assaults

Although digital security keeps improving, consumers are the last barrier against hackers, so it is important to maintain healthy practices on our devices to keep our personal information from leaking.

A simple way to prevent cyber assaults or security breaches is two-step authentication, a practice in which the app requests not one but two steps before granting access to sensitive content. One of the steps is usually a unique code sent via email or text message, like a temporary password. This prevents malware that copies data stored on the phone from obtaining enough to access our accounts.

Turn off Wi-Fi and Bluetooth

It is possible to receive cyberattacks through the access ports that connect us via Bluetooth or Wi-Fi. We tend to keep these features active even when we are not using them, and some malware can reach our phones this way. Keeping these features turned off removes that avenue of access, and double-checking each time you connect to a network or Bluetooth device ensures it is the right one.

Avoid "jailbreaking" cell phones

Many users love to personalize their devices, and others like to explore their phones' functionality as administrators. Yet this kind of practice increases vulnerability to cyberattacks and malware; although it is common, all its disadvantages speak against doing it.
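The one-time codes used in two-step authentication are typically generated with the standard TOTP algorithm (RFC 6238), which derives a short numeric code from a shared secret and the current time, so an intercepted code expires within seconds. A minimal Python sketch using only the standard library (the secret in the examples is the RFC 4226 test key, purely illustrative):

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    msg = struct.pack(">Q", counter)  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F        # dynamic truncation (RFC 4226, section 5.3)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based variant: the counter is the current 30-second window."""
    return hotp(secret, int(time.time()) // step, digits)
```

A server and an authenticator app sharing the same secret compute the same six-digit code for the same 30-second window, which is why data copied off the phone is not enough on its own to log in.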
Shield your digital environments against cyber assaults

The most effective way to protect your work environment is with digital security tools like Trend Micro. This program offers comprehensive security systems that cover a wide range of problems, and it supports multiple devices, protecting your company by shielding your employees' phones. With 28 years of experience and a security system that keeps evolving, Trend Micro is a strong option for protecting work environments: a safe bet that keeps sensitive, personal content from leaking to the public. If you want more information about this tool, do not hesitate to contact us. At GB Advisors, we offer only the best on the market, and our team of professionals is ready to share advice and help you on your way to a more efficient IT environment.
The cloud provides an alternative way of procuring IT services that offers many benefits, including increased flexibility as well as reduced cost. It extends the spectrum of IT service delivery models beyond managed and hosted services to a form that is packaged and commoditized. However, in a recent survey by global IT association ISACA, 30% of the 3,700 respondents said cloud computing is one of the top issues expected to impact their enterprise's security in the next 12 months. Clearly, a good understanding of the cloud is critical, as is effective governance over it.

The cloud is not one thing; it covers a wide spectrum of service types and delivery models, ranging from in-house virtual servers to software accessed by multiple organizations over the internet. For example, an organization can run its IT services in-house; this is the most flexible but usually the most expensive arrangement. It can contract the running of the services through a managed service or hosting agreement; this is less flexible but may be cheaper. Infrastructure as a service provides a commoditized and packaged hosting service, which requires no capital expenditure. A similar spectrum applies to business applications: an organization can develop its own applications, which can be designed to its exact requirements, but this is very expensive. It can use commercial applications tailored to its needs; this is usually cheaper, but still involves management and running costs. Software as a Service provides access to a packaged application which is managed and run by the service provider and can be bought on a charge-per-use basis.

Choose the right type of cloud service

Infrastructure as a Service (IaaS) provides basic computing resources that the customer can use to run software (both operating systems and applications) and to store data. IaaS allows the customer to transfer an existing workload to the cloud with minimal, if any, change needed.
The customer does not manage or control the underlying cloud infrastructure but remains responsible for managing the OS and applications. IaaS removes the need to buy, house and maintain physical servers and can give an organisation the ability to respond quickly to changing demand.

Platform as a Service (PaaS) provides an environment that the customer can use to build and deploy cloud applications. These applications may be for the customer's own use or offered as a service to others. Building applications on PaaS means they are inherently cloud-enabled, and the PaaS provider also provides the service upon which these applications run. The benefits include no need for capital hardware investment and rapid deployment. The major downside is "lock-in": most PaaS platforms are based on proprietary programming interfaces (APIs), so it can be very difficult to change provider at a later date.

Software as a Service (SaaS) provides an application and data that can be accessed via a network (usually the internet) using a variety of client devices such as web browsers and mobile phones. The major benefit of SaaS is the immediate availability of a working solution for a specific business problem with no need for up-front investment. This is particularly valuable for areas such as mature business processes, which are essential, well understood and need to be delivered at minimal cost. SaaS gives service vendors an opportunity to offer the best solution to this kind of problem at the lowest cost. The risks associated with SaaS include loss of governance, data privacy issues and the return of customer data. Mature business processes are often subject to regulations and laws, and organizations have invested heavily in IT to ensure compliance. Using SaaS means devolving control to the SaaS provider, so it is essential to have independent confirmation that the provider will comply with the regulatory requirements.
The SaaS provider also has control of the business data held by the service. Contracts need to specify how this data will be returned in a usable form at termination of the contract, to allow business continuity and provide the flexibility to switch provider.

Choose the right cloud deployment model

Public cloud services are available for anyone to subscribe to and use. The key benefit of a public cloud approach is one of scale: the cloud provider can potentially offer a better service at a lower cost because the scale of its operation means it can afford the skilled people and state-of-the-art technology. The public cloud model inherently provides service on demand, as the cloud provider can dynamically reallocate resources as they are required. Spreading service delivery across multiple locations also improves resilience; local problems with power supplies, telecommunications, natural disasters and so forth can be managed more effectively when there are several data centres in multiple geographies. The downside of the public cloud is the risk to compliance and data security. For example, data privacy laws in the EU mandate that personal data must be processed within defined guidelines. The cloud service customer, who is the "data controller", is responsible in law and needs to ensure that these guidelines are adhered to. Large cloud providers have recognized this need and can offer compliant services. Sharing applications and infrastructure with unknown co-tenants can lead to concerns over data security and data leakage; there are standards and best practices for this, and it is essential to check that the cloud provider is externally certified as adhering to them. The HMRC online tax filing service is Software as a Service with a public deployment model, and it has been praised by the Audit Office, although it is unclear whether it provides value for money.

A private cloud service is used exclusively by a single organization.
The private cloud allows organizations to outsource the management of their IT infrastructure while retaining tighter control over the location and management of the resources. The price to pay is that costs are likely to be higher, because there is less potential for economy of scale, and resilience may be lower because of the limit on available service resources. Isolation is one of the key techniques for ensuring security: while in the public cloud applications and data exist in a shared environment, the private cloud offers greater isolation by dedicating resources to a particular customer.

A community cloud service is for the exclusive use of a specific community of organizations that have shared concerns (e.g., mission, security requirements, policy and compliance considerations). A community cloud provides many of the benefits of scale of the public cloud while retaining greater control over compliance and data privacy. Community cloud services already exist, but under a different name. For example, NHSmail, the national email and directory service available to NHS staff in England and Scotland, is effectively Software as a Service with a community deployment model. As regards security, NHSmail is accredited to Government RESTRICTED status and is the only NHS email service secure enough for the transmission of confidential patient information.

When moving to the cloud it is important that the business requirements for the move are understood, and that the cloud service and deployment models are selected to meet these needs. Taking a good governance approach, such as COBIT, is the key to safely embracing the cloud and the benefits it provides:

- Identify the business requirements for the cloud-based solution. This seems obvious, but many organizations are using the cloud without knowing it.
- Determine the cloud service needs based on the business requirements. Some applications will be more business critical than others.
- Develop scenarios to understand the benefits and risks, and use these to determine the requirements for controls and the questions to be answered. Considering the risks may lead to the conclusion that moving to the cloud is not appropriate.
- Understand what the certifications and accreditations offered by the cloud provider actually mean and cover, and how they support your needs.
- In most organizations cloud computing will co-exist with other IT service delivery models, so an approach to governance and management is needed that covers both traditional and cloud models.
A newly discovered security vulnerability dubbed VENOM, found in widely used virtualization platforms, puts millions of virtual machines at risk of cyber attack. Geffner says the flaw could be exploited by an attacker to compromise any machine on a data center's network; according to the expert, millions of virtual machines are vulnerable.

What is VENOM?

VENOM is the acronym for "Virtual Environment Neglected Operations Manipulation". It is a flaw in the virtual floppy disk controller code of QEMU, an open-source machine emulator whose code is also used by several hypervisors to manage virtual machines. VENOM is considered a very dangerous and critical security issue: by exploiting it, an attacker can gain access to corporate intellectual property (IP) and sensitive, personally identifiable information (PII), potentially affecting thousands of organizations and the connectivity, storage, security and privacy of millions of end users.

CrowdStrike has already reported the issue to many of the affected vendors, and the following have released patches for the vulnerability:
- QEMU: http://git.qemu.org/?p=qemu.git;a=commitdiff;h=e907746266721f305d67bc0718795fedee2e824c
- Xen Project: http://xenbits.xen.org/xsa/advisory-133.html
- Red Hat: https://access.redhat.com/articles/1444903
- Citrix: http://support.citrix.com/article/CTX201078
Picture a computer sitting on a desk: main processor enclosure, monitor and keyboard. It has no networking cables going anywhere and no Wi-Fi. Not even a phone-line modem. It's alone and completely isolated. How long has it been since you saw such a thing? 20 years? Longer? These days, the thought of an isolated computer is difficult to imagine. Whether at home, in an office or on a plant floor, connectivity matters. The ability for users to reach other users and access the web is now a requirement for most computers. The question isn't whether to have networks; it's what are the best kinds of networks, and how can they be used most effectively? What are the best network designs to create and support a truly connected facility?

In manufacturing, the demand for control, data gathering and analytics is constantly driving the need for connectivity. The benefits to companies are substantial:
- Increased uptime and productivity
- Reduced costs
- Real-time decision making
- Improved mobility
- Improved safety

Networks have to move ever more critical data, which means more capacity and, simultaneously, improved security. Network managers have to deal with a wide variety of manufacturing assets, each with its own way of communicating and presenting data. This can lead to some interesting translation challenges as they try to get disparate devices and systems to talk to each other. Networks need to connect components such as programmable logic controllers (PLCs), remote terminal units and other automation system elements. The reach of networks is getting deeper all the time as the number and variety of components grow. This is critical to implementing modern manufacturing concepts, where it may be necessary to make regular adjustments to products and processes to keep up with changes in market demand, feedstock changes and other factors.
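To make the "translation challenge" concrete: Modbus is one of the protocols PLCs and remote terminal units commonly speak, and a request on the wire is just a small binary frame. Here is a minimal Python sketch of building a Modbus TCP read request; the unit id, register address and register count are illustrative, not tied to any particular device.

```python
import struct

def modbus_read_holding(transaction_id: int, unit_id: int,
                        start_addr: int, count: int) -> bytes:
    """Build a Modbus TCP 'read holding registers' request (function 0x03)."""
    # PDU: function code, starting register address, number of registers
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    # MBAP header: transaction id, protocol id (always 0),
    # length of what follows (PDU + unit id byte), unit id
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu

# Read 3 registers starting at 0x006B from unit 17 (values are illustrative)
frame = modbus_read_holding(transaction_id=1, unit_id=17, start_addr=0x006B, count=3)
```

In practice this frame would be sent over a TCP socket to port 502 of the device or gateway; getting dozens of such dialects to feed one analytics platform is exactly the translation work network managers face.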
As mentioned, most of the discussion related to connected factories concerns discrete manufacturing, but what about process environments?

Connected Versus Connectivity

Process manufacturers like to point out that they've had connected plants for a long time. But in many cases, this just means most analog field devices are connected to a distributed control system (DCS), and the only way to gather information from them is through the DCS, which, depending on its age, may be much easier said than done. Getting information from a process historian may help, but outside of the most modern DCS architectures, platforms designed 15 or 20 years ago did not anticipate the need for the kind of connectivity necessary to send device-level information to other users within the company. Making it happen often requires extensive custom code writing, if it can be done at all. Given the age of most DCS installations, many companies are in trouble. Even the PLCs and sub-controllers working under the DCS may be too old to offer Ethernet connectivity, making it difficult to reach anything below Level 2 (L2) as depicted in Figure 1. As a practical matter, some of these components may have been updated individually. This helps, but in most plants and facilities this type of functionality is hit-or-miss. So if a process manufacturer wants to reach down deeper and gather information directly from devices at network Level 1 and Level 0, are there any practical options? The answer usually requires going around existing legacy networks. The most practical approach is using a wireless instrumentation network to supplement the existing wired system, working hand-in-hand with wired and wireless plant Ethernet networks for maximum effectiveness.

Talking to Field Devices

Most DCS I/O is designed to handle only analog or simple binary (on/off) data. However, most field devices installed in the last 15-plus years have the capability of sending supplementary data via HART.
Since the I/O system usually can't handle this data, it gets stranded at the device, but a wireless adapter can be added to send it via WirelessHART. Adding the adapter does not interfere with the existing wired I/O, and the field device can send the same data as it always has to the DCS. The primary process variable and supplementary data can be sent simultaneously via the WirelessHART signal to a gateway. The gateway does not have the constraints of the native DCS I/O: using a standard Ethernet interface with the existing IT infrastructure, it can send data anywhere, all the way up through the levels to the corporate network, and even to the internet if this functionality is enabled. It can also interface with the DCS, which means a WirelessHART instrument can be added to the process unit and send its data via the wireless network. The WirelessHART network can serve both requirements as needed, and it can also provide a path to add instrumentation to a process unit when the conventional I/O is fully saturated. WirelessHART networks are self-organizing and secure, and can be fully redundant. This may come as a surprise to network designers and managers used to dealing only with Ethernet, but it makes WirelessHART an important tool for extending the reach of manufacturing networks far closer to individual field devices at the edge. Effective decision-making may hinge on having detailed information, and getting it depends on the right type of connectivity at all levels.

Josh Hernandez is a wireless product manager for Emerson.
The requirements HIPAA has for encryption are, at best, vague. Defined as "addressable" requirements, encryption of Protected Health Information (PHI) must be carried out by "covered entities" (CEs) whenever it is appropriate. "Addressable" is not equivalent to "optional"; instead, it means that if the required encryption cannot be provided, another safeguard should be implemented. Any CE that transfers information, either within or outside the company's own firewall, must encrypt its PHI. This ensures that there is minimal risk to the integrity of PHI. However, once the data is no longer held within the company firewall, encryption may be harder to employ. Nevertheless, it is necessary unless a patient has given their permission for their data to be transmitted without encryption.

Issues of Encryption

When the first Security Rule (part of HIPAA legislation) was enacted, technology was less sophisticated than it is today. However, those who wrote the rule had the foresight to include deliberately vague wording, allowing for future technological advances. The requirements are thus seen as "technology-neutral". By ensuring that the Security Rule stayed relevant irrespective of technological advancements, the Department of Health and Human Services also gave CEs the agency to decide the best course of action. To avoid a HIPAA violation, every aspect of the company's IT system must have some form of encryption. It is up to HIPAA CEs whether or not they will encrypt email: though the HIPAA Security Rule stipulates that the information must be adequately protected, it does permit PHI to be transmitted by email. The decision to encrypt is usually made on the basis of an organisation-wide risk assessment, and any encryption plan, or alternative safeguard, must be made available to the OCR should an audit occur. For more information on encryption, CEs and their business associates can consult the National Institute of Standards and Technology (NIST).
NIST recommends the use of Advanced Encryption Standard (AES) 128-, 192- or 256-bit encryption, OpenPGP, and S/MIME.

Secure Messaging Solutions

Maintaining workplace security has been complicated in recent years by the ubiquity of portable personal devices; it is estimated that around 80% of healthcare workers use mobile devices for work. Prohibiting employees from using such devices would have serious costs for companies. However, CEs and their associates may use a secure messaging platform to provide HIPAA-compliant encryption. Such platforms ensure that PHI is protected in transit as well as when it is stored on a device; should the information be accessed by an unauthorised device, it will be rendered unreadable.
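To make "rendered unreadable" concrete, here is a standard-library-only Python sketch of the encrypt-then-MAC pattern that secure messaging platforms build on: a keystream hides the content, and an HMAC tag lets the receiver detect tampering or the wrong key. This is a teaching toy, not AES and not HIPAA-grade; a real system should use a vetted AES implementation from an established cryptography library, and all key material and data here are illustrative.

```python
import hashlib
import hmac
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a keystream by hashing key || nonce || counter (CTR-style, toy PRF)."""
    blocks, counter = [], 0
    while sum(len(b) for b in blocks) < length:
        blocks.append(hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest())
        counter += 1
    return b"".join(blocks)[:length]

def seal(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    """Encrypt, then authenticate nonce + ciphertext with HMAC-SHA256."""
    nonce = secrets.token_bytes(16)
    ct = bytes(p ^ k for p, k in zip(plaintext, _keystream(enc_key, nonce, len(plaintext))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def open_sealed(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
    """Verify the tag first; only then decrypt."""
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed: tampered or wrong key")
    return bytes(c ^ k for c, k in zip(ct, _keystream(enc_key, nonce, len(ct))))
```

An unauthorised device that obtains the blob but not the keys sees only random-looking bytes, and any modification in transit makes the integrity check fail, which is the property the Security Rule is after.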