Advanced Addressing Scheme Securely Connects Billions of Devices and Things

Digitization and automation are now familiar features in many homes. Mobile connectivity is not just for phones anymore. Today, we have lots of things that generate data or environments that we want to control (locally or remotely). And our access point for control is not limited to our smartphones – televisions, tablets, smartwatches, health monitors and even kitchen appliances can all serve as “digital control points.” Ubiquitous connectivity and control are fundamental elements of the Internet of Things (IoT) value proposition.

The challenge of delivering seamless user experiences through communications between all of the devices and things we want to control is becoming broader and more complex. According to the 2017 Cisco Mobile Visual Networking Index (VNI), there will be nearly 12 billion global mobile-connected devices and machine-to-machine (M2M) connections by 2021, approximately 1.5 per capita. Globally, mobile networks will support about 4 billion new mobile-connected devices and connections from 2016 to 2021, and nearly a third of all mobile devices and connections (about 3.3 billion) will be some form of M2M by 2021.

However, the full vision and potential of IoT can only be realized if real-time information is transmitted securely to a wide variety of users and things. IPv6 is a key enabling component of this aspirational networking goal. Service providers around the world understand the fundamental importance of IPv6 and the inherent innovation possibilities that it can unlock. Service providers like Comcast see IPv6 as much more than just a more scalable addressing scheme. “The interesting thing with IPv6 is that we’re going to rethink how address space is used,” said Kevin McElearney, SVP of network engineering for Comcast. “Right now, everybody thinks that IP addresses are devices, but if the Internet of Things is really the Internet of virtual things then every device could have 100 or 1,000 addresses so it’s going to get interesting if you want to start addressing things like blocks of storage, application calls, or services.” The Cisco Mobile VNI also forecasts that, globally, there will be 8.4 billion IPv6-capable devices and connections by 2021, up from 3.4 billion in 2016.

Here is specifically how IPv6 addresses the three primary characteristics of successful IoT growth:

Real-time information: One of the key metrics for evaluating the quality of information is whether it can be acted upon in a timely fashion. Real-time, actionable information can be life-saving, whether it comes from the multitude of wearable devices monitoring a patient’s vital signs or from the communication systems that link pilots and air-traffic control. IPv6 enables faster communication by eliminating significant administrative overhead that exists in IPv4 networks today: faster packet processing through the elimination of IP checksums, faster routing through the elimination of multi-layered routing and shorter routing tables, and better bandwidth efficiency through multicasting in place of broadcasting, to name a few.

Security: While IPv6 provides many technological benefits in enabling IoT, one of the key ones is the processing and transport of information in a secure fashion. IPsec, which provides end-to-end confidentiality, authentication and data integrity, is already present in IPv6.
What that means is that data is secured and encrypted from the point where it originates to its destination, reducing the opportunity for cyber attacks in which data is intercepted in transit.

User adoption: With the plethora of devices and things that surround users, the key components of user adoption are ease of setup and ease of use. Users now expect devices to come without extensive product manuals and to work on first power-on, straight out of the box. IPv6 offers this ‘out of the box’ experience by giving each device or M2M connection its own IP address, which eliminates the need for extensive manual configuration to connect new digital devices or things to a network. IPv6 connections can be pre-configured for first-time use, thus enabling and simplifying IoT.

So, even though the initial value of the IPv6 protocol was seen as a solution to the acute IP address shortage, we now know that it delivers much more than just scalability. IPv6 can also help service providers build larger, more efficient networks with greater mobile connectivity and interoperability (especially for IoT). These networking transformations can support greater business innovation and revenue-generation opportunities for service providers.
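The “pre-configured, out of the box” behavior described above is usually delivered in practice by IPv6 stateless address autoconfiguration, in which a host derives its own address with no DHCP server or manual setup. The sketch below is an illustration, not part of the original article; the MAC address is an arbitrary example. It shows the classic modified EUI-64 derivation of a link-local address:

import ipaddress

def eui64_link_local(mac: str) -> ipaddress.IPv6Address:
    """Derive a link-local IPv6 address from a MAC address (modified EUI-64)."""
    octets = [int(x, 16) for x in mac.split(":")]
    octets[0] ^= 0x02                               # flip the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]  # insert FF:FE in the middle
    suffix = int.from_bytes(bytes(eui64), "big")
    return ipaddress.IPv6Address((0xFE80 << 112) + suffix)

print(eui64_link_local("52:54:00:12:34:56"))        # fe80::5054:ff:fe12:3456

Modern operating systems often substitute a randomized interface identifier for the EUI-64 value to protect privacy, but the zero-configuration effect is the same.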
https://blogs.cisco.com/sp/ipv6-enables-global-mobile-iot-innovation-and-proliferation
What Is UEM and Cloud Security?

As technological devices continue to diversify, there is increased demand for streamlining control systems for security. According to Techopedia, Unified Endpoint Management (UEM) is a digital system that integrates the range of devices available for use today, combining this wide range of software within a single organized program for increased efficiency and effectiveness. The system can therefore be used to improve control over computer systems used in workplaces, smartphones integrated with business systems, and other “Internet of Things” (IoT) or online devices that may be used for some aspect of business or system operations. Combining all of these controls into a single system makes them more convenient for administrators to use and oversee, and thereby safer.

With concepts such as “bring your own device” (BYOD) now commonplace for employee convenience (much as “plug and play” technology was in the past), there is greater potential for attacks and thus an increased demand for better security. UEM systems can control endpoints more completely than previous designs and support more proactive strategies to accomplish this. UEM practices now include security embedded within request processes, cross-functional strategies, cross-platform designs, and increased capacity to streamline cloud security. UEM can, therefore, be highly useful in helping to simplify a diverse range of security needs in the cloud.

The Origin of Cloud Computing

Cloud computing has been around for some time, and its security demands continue to diversify. According to Pianese’s 2010 study, cloud computing (the practice of managing information on remote rather than local servers hosted online in a network) demands software that emphasizes control and policies that provide improved information integration. In the past, there was no system capable of integrating the range of cloud resources in existence, so system administrators were unable to experience the flexibility and efficiency available with streamlined systems. The author of the study reported on his research team’s efforts to assess the significance of establishing and improving virtual distributed operating systems for cloud computing. UEM can now address these needs by meeting the demands for elasticity, fault tolerance, and autonomous decentralized management.

Can UEM Better Address Modern Cloud Security Demands?

As both cloud security demands and technological diversity increase, UEM can help to streamline cloud security and its growing needs. According to SecurityIntelligence, cloud computing, the diversity of technological devices, and the IoT continue to expand in both hardware and software types. This has increasingly given hackers new opportunities for exploitation, and there is an ever-growing need for better security all around. As a result, it has become more difficult for business leaders and IT specialists to maintain tight security over the growing range of otherwise effective programs and hardware integrations that can be networked through a cloud. In addition to the software security demands, these technological improvements make it harder to develop and maintain the relevant policies. Businesses generally need policies covering these technological aspects to be in place before software that addresses specific security needs is installed.
It can be challenging for businesses to keep up with the extent of new devices that are available for networking, especially when projects or outsourcing arrangements change frequently. UEM has been increasingly sought to address all of these demands because it was designed to streamline old and new software and hardware capacities within an IT network, combining the entirety of endpoints. The system therefore allows organizations to integrate desktop systems, networked laptops, smartphones, tablet devices, and the range of users and apps (including relevant content) that operate within a network into a single security system for network administrators or others supervising and securing the company technology.

Improved Productivity and Efficiency

With UEM, in addition to the increased efficiency in streamlining cloud security, organizations can experience improved productivity and output. Infrastructures previously considered complex because of their wide distribution can be managed more efficiently through centralization, freeing company resources to focus on output. Through this, end-user productivity can be increased as IT management costs are reduced. This approach is regarded as superior to strategies or models built on disparate point solutions, as the latter demand more cost and resources while delivering lower levels of efficiency.

Beyond these fundamental advantages, UEM adoption is expected to accelerate. According to SecurityIntelligence, over 80 percent of organizations are expected to use a form of cognitive computing or AI for these endpoint demands in the next two years, and just over half are expected to adopt the current UEM model for centralized management. Cheuvront has described other potentially beneficial UEM capacities as well. If your business needs include any of the above, then you may benefit from a closer examination or integration of UEM as research and development in the area continues.
https://www.go-concepts.com/blog/unified-endpoint-management-uem-can-this-help-streamline-cloud-security/
Snapshot technology is commonly defined as a virtual copy of a set of files, directories or volumes as they appeared at a particular point in time. Snapshots are often used in storage systems to enhance data protection and efficiency, and were originally created to solve several data backup problems, including recovering corrupted data, backing up large amounts of data, and increasing application performance while a backup is in process. And although snapshot backups can be used to replace data backup systems, snapshots are not equivalent to backups, and there are certain misconceptions users need to be aware of before they use snapshot technology in their data storage systems.

To help you learn more about snapshots, we've collected our best resources on snapshot technology. Find out how to properly use snapshots to replace your traditional backup system; see what you need to watch out for when using snapshot technology for data backups; learn about other data reduction techniques; and read about the new snapshot feature in CommVault Simpana 9.

Snapshot backups can be used in place of your traditional backup system to back up and restore your primary and critical data. Snapshots allow you to back up your data faster and offer fast recovery time objectives (RTOs) and recovery point objectives (RPOs). But users must understand that snapshots aren't backups; they are point-in-time copies of your data. Because of that, there are several things you need to be aware of if you plan on using snapshot technology in place of your backup system. Read the full column, and find out how to use snapshot backups properly.

Before using snapshot technology, users must understand the challenges that come with snapshots. In this SearchStorage.com Q&A, expert W. Curtis Preston explains the basics of snapshot technology. Find out what the main challenges associated with snapshot technology are, the differences between a snapshot and a copy, how you know if snapshots are right for your backup environment and where snapshots are used. Listen to the full Q&A on snapshot technology for data backups.

Deleting unwanted and unnecessary data is a good method of data reduction, but that's not always an option in today's industries. Some of the most popular data reduction techniques available today are data deduplication, snapshots, thin provisioning and compression. Find out more about these data reduction techniques and how they can improve your storage efficiency in this feature.

CommVault added a new array-based snapshot feature to CommVault Simpana 9 in October 2010. The new array-based snapshot feature can read the snapshot capability in a particular array. This feature was introduced to help offload the snapshot process from the server onto the array; snapshots are then moved into a tiered storage environment and can eventually be transferred to disk, tape or the cloud. Read more about CommVault Simpana 9's snapshot feature.
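Most storage snapshots get their speed from copy-on-write: the snapshot is created instantly as an empty overlay, and original blocks are preserved only when they are about to be overwritten. The sketch below is purely illustrative (it does not reflect any particular vendor's implementation) and exists only to show why a snapshot is cheap to take but is not an independent copy of the data:

class Volume:
    def __init__(self, blocks):
        self.blocks = list(blocks)
        self.snapshots = []

    def snapshot(self):
        snap = {}                       # block index -> preserved original data
        self.snapshots.append(snap)
        return snap                     # instant: nothing is copied yet

    def write(self, index, data):
        for snap in self.snapshots:
            snap.setdefault(index, self.blocks[index])   # copy-on-write
        self.blocks[index] = data

    def read_from_snapshot(self, snap, index):
        return snap.get(index, self.blocks[index])

vol = Volume(["a", "b", "c"])
snap = vol.snapshot()
vol.write(1, "B")
print(vol.read_from_snapshot(snap, 1))  # prints "b", the pre-snapshot value

Because the snapshot still depends on the live volume for every unchanged block, losing the volume loses the snapshot too, which is exactly why snapshots are not a substitute for independent backups.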
https://www.computerweekly.com/tutorial/Snapshot-technology-The-role-of-snapshots-in-todays-backup-environments
Edge computing optimizes cloud computing systems. But how? Read on and find out.

It used to be that computers were complex and bulky machines that you only saw in offices. If you were lucky enough to use a computer terminal, you were a pioneer in computing technology. It was the personal computer that first gave people the chance to own the hardware that came with a computer terminal. Since then, a lot of technologies have come and gone. We now live in a world where computers fit in the palm of your hand, and you can take them anywhere you want. There are still people who own a personal computer, but it is used in a limited number of situations. Everything is on the cloud now – from your files, to your documents, to the movies you watch, and the music you listen to. When was the last time you bought a DVD set for a TV show you liked?

The thing with cloud computing is that most organizations and people rely on a few providers. IBM, Amazon, Google, and Microsoft are the giants when it comes to machine learning, hosting, compute resources, and infrastructure in the cloud. Among public cloud providers, Amazon is king, with over $12 billion in revenue. All told, this means that there are very limited opportunities for growth when it comes to cloud computing. Everything that you can put on the cloud is already there. This means that there is no new service, no new product to be offered. And this is the reason why edge computing is fast becoming THE buzzword right now.

What is edge computing?

The thing with the cloud is that it is all centralized. If you need something, you access it from the cloud. If you need to process something, you do it through the cloud. If you want to store something, you put it on the cloud. All that information going back and forth takes time, eats up bandwidth, and makes you more vulnerable to cyber attacks. Edge computing helps you avoid all that. It optimizes cloud computing systems by processing your requests near the source rather than sending them to data centers hosted far away. For most cloud services, this means relying less on the provider’s servers and having locally powered artificial intelligence or machine learning services.

But edge computing does not just involve local processing. It can also entail having a server closer to where you are. So instead of sending your data halfway across the world (which takes more time, more bandwidth and gives hackers more chances to get at your data), you can send it to a server that is much closer, often a micro data center of 100 sq. ft. or less.

A good example to illustrate this is Amazon Alexa speakers. When you ask Alexa if it is going to rain today, Alexa sends your query to Amazon’s data centers in a compressed file. Amazon’s systems need to uncompress it, figure out what you asked (the weather), then connect to a weather API to get the answer. Only then is the answer sent back to your Alexa speaker. All that work! With edge computing, the processing happens locally. Instead of going through a middleman (Amazon’s servers), Alexa would figure out what you said and get the answers you want on its own. This is the reason why Amazon is currently working on artificial intelligence chips for its speakers.

Aside from making things a whole lot faster, edge computing is also more secure. Every bit of information you send to the cloud is an attack point that hackers can use to steal your data. Transmissions may be intercepted in man-in-the-middle attacks.
With edge computing, you are sending nothing over the air. For instance, smartphones send account details and passwords over the air. But look at how iPhones encrypt your biometric and facial recognition data on the device itself. With the information you need to unlock your phone stored on the device itself, the whole process is more secure.

Furthermore, you also save on bandwidth. When you send nothing to the cloud, you use up no bandwidth. Just think about it. A typical home Internet connection would have no problem sending the feed of one security camera to the cloud. But what if you have 20 security cameras, all of them sending HD video feeds to the cloud? You would probably curse at the amount and frequency of buffering you would experience on YouTube. When you have 20 security cameras, you would definitely want to cut down on the bandwidth requirements. Instead of sending everything to the cloud, your cameras would process the video locally and then decide which footage is important and which just shows your empty living room for hours on end. By sending only the important video clips to the cloud, edge computing saves you a whole lot of bandwidth.

All in all: what would edge computing entail?

With the Internet of Things (IoT), sensors are pretty much the most important components. The sensors get the data you need and send it to the cloud. The cloud then processes all that data to give you the information and reports you need. Which of your factory’s machines are close to breaking down? The cloud would answer that for you. With the advent of edge computing, mere sensors are not enough. Each of your machines would have a module that not only gathers information about possible issues, but is also able to process that information and alert you directly without having to go through the cloud. In the same manner, the age of installing software would be over. You would not need to put the software on your device; you would just use it as provided by Amazon, et al.

So what exactly is edge computing? Edge computing is the tech world’s way of saying that as much of the work as possible is done on your end, close to where the data is produced, rather than shipping everything off to be processed somewhere far away. It allows for faster and more secure cloud transactions, without expending too much bandwidth. Put another way, cloud computing is centralized computing: you send all the data to the cloud provider’s servers, where all the processing, AI, machine learning, and other work is performed, and then the results are sent back to you. Edge computing is just the opposite of centralized computing.
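To make the security-camera example concrete, here is a small illustrative sketch (not from the original article; the frame format, the threshold value and the upload_to_cloud() stub are all assumptions) of the kind of local filtering an edge device could apply, uploading only frames that differ noticeably from the previous one:

def frame_difference(prev: bytes, curr: bytes) -> float:
    """Average absolute per-pixel difference between two same-sized grayscale frames."""
    return sum(abs(a - b) for a, b in zip(prev, curr)) / len(curr)

def upload_to_cloud(frame: bytes) -> None:
    print(f"uploading {len(frame)} bytes")   # placeholder for the expensive network call

def process_stream(frames, motion_threshold: float = 12.0) -> None:
    prev = None
    for frame in frames:
        # Only frames that show enough change ever leave the device.
        if prev is not None and frame_difference(prev, frame) > motion_threshold:
            upload_to_cloud(frame)
        prev = frame

# Example: three tiny 4-"pixel" frames; only the one with real change is uploaded.
process_stream([bytes([10, 10, 10, 10]), bytes([11, 10, 10, 10]), bytes([90, 95, 92, 88])])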
https://fourcornerstone.com/what-is-edge-computing/
Second of two parts. Read Part One.

Today we leap right into smb.conf and configure our Samba primary domain controller. Remember: There Can Be Only One. Do not use this if there is already a PDC on your network. It may help to print and annotate smb.conf. Be sure to make a backup copy before changing anything. Samba’s man pages are exceptionally useful; start with man samba and man smb.conf. Some comments below are abbreviated, see smb.conf for the full text.

A complete list of global parameters is in man smb.conf. You can’t just invent them; you must use the official Samba parameter names. Put your domain name and server hostname here:

# workgroup = NT-Domain-Name or Workgroup-Name
workgroup = MYGROUP
netbios name = HOSTNAME

# server string is the equivalent of the NT Description field
server string = Samba PDC %v %h

%v displays the Samba version number, %h displays the hostname. This shows up in Network Neighborhood. See man smb.conf for a full explanation of all variable substitutions. Or say anything you want:

server string = Carla’s Samba server, and a darn fine one it is

# This option is important for security…
hosts allow = 192.168.1., 127.
hosts allow = 192.168.1.0, 127.0.0.1/255.255.255.0

The localhost 127.0.0.1 will always be allowed access, unless denied by a “hosts deny” option. Use space, comma, or tab delimiting. Individual IPs can be excluded here with the EXCEPT keyword:

hosts allow = 192.168., EXCEPT 192.168.1.100

# Put a capping on the size of the log files (in Kb).
max log size = 50

Side note: I like to isolate /var in its own partition, to prevent crashes if something causes a log file to grow hugely, such as a DOS attack or other mayhem.

# Security mode…
security = user
# You may wish to use password encryption….
encrypt passwords = yes
smb passwd file = /etc/samba/smbpasswd

# The following are needed to allow password changing from Windows to
# update the Linux system password also.
unix password sync = yes
passwd program = /usr/bin/passwd %u
passwd chat = *New*password* %n\n *Please*retype*new*password* %n\n *password*successfully*updated*

# Browser Control Options:
local master = yes
# OS Level …
os level = 64
# Domain Master specifies Samba to be the Domain Master Browser….
domain master = yes
# Preferred Master …
preferred master = yes

# Enable this if you want Samba to be a domain logon server for
# Windows95 workstations.
domain logons = yes

# Where to store roving profiles (only for Win95 and WinNT)
# %L substitutes for this servers netbios name, %U is username
# You must uncomment the [Profiles] share below
logon path = \\%L\Profiles\%U

Add these lines:

logon home = \\%L\%U
logon drive = H: (or whatever you like)
logon script = netlogon.bat

#=== shares ===

[homes]
comment = Home Directories
browseable = no
writable = yes
valid users = %S
create mode = 0664
directory mode = 0775

[netlogon]
comment = Network Logon Service
path = /home/samba/netlogon
writable = no
share modes = no

[Profiles]
path = /home/samba/profiles
browseable = no

Be sure to select a group ID that does not conflict with existing groups; groupadd won’t let you anyway. (In case you were wondering, my PC has a noisy fan, hence the hostname windbag.)

Next, create the directories as named in smb.conf:

[root@windbag carla]# mkdir -m 0775 /home/samba /home/samba/netlogon
[root@windbag carla]# chown root.admins /home/samba/netlogon
[root@windbag carla]# mkdir /home/samba/profiles
[root@windbag carla]# chmod 1757 /home/samba/profiles

Do this exactly as shown, for security reasons; please see Resources for information on Linux file permissions.
Now add machine accounts. Each computer on the network needs an account, as well as each user. This adds a Unix account:

[root@windbag carla]# /usr/sbin/useradd -g machines -d /dev/null -c “machine nickname” -s /bin/false test$

Which means: belonging to the machines group, no home directory, cutesy nickname of your choice, no shell access. I used “test” as the NetBIOS or hostname, and $ identifies it as a trust account.

Create authentication and lock the password:

[root@windbag carla]# passwd -l test$
Changing password for user test$
Locking password for user test$

Now add it to /etc/samba/smbpasswd:

[root@windbag carla]# /usr/bin/smbpasswd -a -m test

If /etc/samba/smbpasswd does not exist, smbpasswd will create it. Note that smbpasswd does not require $ appended to the machine name. smbpasswd may not be in /usr/bin/; use the locate command to find it. smbpasswd exists twice: as a command, and as a text file. A quick way to read a file is with the cat command:

[root@windbag carla]# cat /etc/samba/smbpasswd

For your human users, the procedure is the same: useradd and passwd to create a Unix account, only don’t lock the password, then smbpasswd for Samba. There is probably a clever way to automate this with a shell script. Unfortunately I’m a lousy scripter, so I’m afraid I can’t be helpful here (see the illustrative sketch at the end of this article for one possible starting point).

Run the command “testparm” to find syntax errors; see man testparm for all options.

Start Samba: as root, type /etc/rc.d/init.d/smb start
Stop: /etc/rc.d/init.d/smb stop
Test: smbclient -L localhost

That takes care of the server configuration. Now join Windows clients to your domain. Windows 9x/ME is easy: make sure that Client For Microsoft Networks is selected as the Primary Network Logon. Then Client For Microsoft Networks -> Properties -> Logon to NT Domain.

For Windows NT/2000, set the domain name, then be sure that your first logon is as root. An ordinary user will not work. After the initial root login, any user can log in on their own account. If the machine account was created manually, be sure to not select “Create a Computer Account In the Domain.” The Samba PDC howto tells how to create machine accounts “on the fly.”

Windows XP is a bear. The Home edition cannot be joined to a domain. XP Professional sometimes requires a registry patch to connect to Samba; sometimes it goes as easily as Win2000. Please visit the smb-clients mail list for the best help.

I left out creating user and printer shares on purpose; it’s simple and abundantly documented. The O’Reilly book “Using Samba” is invaluable, especially for troubleshooting, and so is the documentation on samba.org. The most common mistakes are typos in smb.conf. Be kind to yourself: get enough sleep and take it slowly.
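As mentioned above, the per-machine steps can be scripted. The following is an illustrative sketch (not from the original article) that simply wraps the same useradd, passwd and smbpasswd commands shown in the tutorial; it assumes the "machines" group already exists and that the commands live at the paths used above, so adjust it for your own system before running it as root:

#!/usr/bin/env python3
import subprocess
import sys

def add_machine(name: str) -> None:
    """Create the Unix trust account, lock its password, and add the Samba machine account."""
    trust = name + "$"
    subprocess.run(["/usr/sbin/useradd", "-g", "machines", "-d", "/dev/null",
                    "-c", name, "-s", "/bin/false", trust], check=True)
    subprocess.run(["passwd", "-l", trust], check=True)                    # lock the Unix password
    subprocess.run(["/usr/bin/smbpasswd", "-a", "-m", name], check=True)   # add to smbpasswd

if __name__ == "__main__":
    # Usage (as root): ./add_machines.py host1 host2 host3
    for host in sys.argv[1:]:
        add_machine(host)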
https://www.enterprisenetworkingplanet.com/guides/build-a-primary-domain-controller-with-samba-part-2/
GDPR: An Overview

Technology is changing at a rapid pace, and this has significant consequences for laws and regulations that are in place to protect consumers. The 1995 Data Protection Directive was quickly becoming obsolete; it was created in an era before the Internet was widespread, and lawmakers were unable to foresee ‘the age of big data’. Individuals now store an unprecedented amount of information online, and businesses hold tremendous amounts of data on their customers. Countries tried to adapt to this change by introducing laws or regulations that attempted to hold these companies responsible for the data they hold, but often these laws were weak and not robust to further technological developments. As a result, it was mainly up to organisations to implement their own data protection strategies.

To combat this problem, in January 2012 the European Commission set in motion plans to reform data protection laws across the European Union to make the law “fit for the digital age”. This process eventually produced the 99 articles of the General Data Protection Regulation, or GDPR. Since its implementation in May 2018, GDPR has already revolutionised the data security landscape across the globe. It is essential that any organisation that handles the personal data of individuals is aware of its requirements in order to remain fully compliant with its stipulations. The penalties for a violation are substantial: either €20 million, or 4% of the company’s global annual turnover, whichever is higher.

Who is protected by GDPR?

There is much confusion about whose data GDPR protects. It is commonly believed that GDPR only protects the data of EU citizens. This is incorrect; GDPR protects the data of any individual, regardless of their nationality, who has their data collected while they are within the borders of an EU country. Conversely, GDPR does not apply to the data of EU citizens if the data is collected outside of the EU’s borders.

An example may illustrate this point. If an American citizen is temporarily residing or travelling in an EU country, such as Ireland, and provides personal information during a transaction at a local business, such as exchanging some information to use a WiFi service, this personal information is covered by GDPR as the person is located within the EU. The American citizen still has rights over their data even if they travel back to America, as that data was collected in the EU. Conversely, an Irish citizen travelling in America would not be covered by GDPR. Any data that they provide to an organisation in a similar transaction to the above would be subject to American data protection laws.

Following its introduction in May 2018, GDPR has granted individuals new rights concerning their data. These include, but are not limited to:
- Right of access
- Right of rectification
- Right to object to how their data is handled
- Right to restrict processing
- Right to erasure (Right to be forgotten)
- Right to data portability
- Right to complain to a supervisory authority if they are dissatisfied with how their data is being handled
- Right to be represented by an independent, not-for-profit body when lodging a complaint

Who must comply with GDPR?

GDPR covers any organisation that collects or processes data within the EU, regardless of the physical location of its headquarters.
Even businesses that only collect or process data through a subsidiary or branch of the parent company that is based in the EU must comply with GDPR. Due to the nature of modern international business, it is clear that GDPR has had a significant impact worldwide. Organisations must comply with GDPR even if the EU is only a small part of the business’s consumer base. GDPR covers all types of organisations, including public agencies, governments, and companies of all sizes.

As a reminder, the EU Member States are: Austria, Belgium, Bulgaria, Croatia, Cyprus, Czechia, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, the Netherlands, Poland, Portugal, Romania, Slovakia, Slovenia, Spain, Sweden, and the UK. Although the UK is due to leave the EU in March 2019, GDPR was introduced into its law in May 2018 along with the other member states. Therefore, GDPR will remain a part of UK law after Brexit.

Data Processors, Data Controllers, and Data Protection Officers

Understanding the roles of data processors, data controllers, and data protection officers (DPOs) is critical to becoming compliant with GDPR. Each has a specific role to play in the protection of private data. Here, we give a brief outline of each role.

Data controllers are organisations that oversee the collection of data. Data controllers have specific responsibilities under GDPR, which include:
- Affording transparency to the data subject as to how they handle their data
- Ensuring that data may easily be transferred from one place to another
- Providing evidence to the data subject that they are fully GDPR-compliant
- Ensuring that they can uphold the rights of a data subject

Data processors are defined as bodies which process data on behalf of a data controller. They must:
- Have a pre-arranged contract with a data controller regarding the processing of data
- Ensure that the rights of the data subject are respected
- Have adequate safeguards in place to protect the integrity of sensitive data

GDPR requires that data controllers and processors based within the EU appoint a DPO to assist in monitoring their internal compliance. The DPO is usually appointed from the organisation’s staff and must have expert knowledge of data protection laws and practices. If an appropriate individual is not found within the organisation, a third-party contractor may be hired to act as the DPO. However, the DPO may not hold a conflict of interest and must be impartial in carrying out their role. All large businesses which are covered by GDPR must appoint a DPO. However, if a small business is processing sensitive information, as described in Article 9 of the GDPR, it may be required to appoint a DPO too.

The responsibilities of a DPO include:
- ensuring that data is protected to the standard outlined in GDPR
- the education of staff on data subject rights and their responsibilities under GDPR
- advising senior management on GDPR-compliant business practices
- monitoring activities across the organisation to ensure they are GDPR compliant
- cooperation with the Lead Supervisory Authority
- assessing IT systems, computer networks and data protection safeguards to ensure they are of the required standard
- notifying data subjects in the event of a data breach

GDPR and US Businesses

The EU-US Privacy Shield Framework was adopted in 2016 and concerns the protection of data shared across the Atlantic.
The EU has ruled US privacy laws to be inadequate and below its standards. Therefore, organisations must take extra measures to prove they have ‘adequate safeguards’ in place to protect data if they wish to use the data of EU citizens. The Framework allows private data to be transferred outside of the EU if the recipient organisation is certified by the US Department of Commerce or the EU Supervisory Authority. Certified organisations must process and use the data following the guidelines set out by the Framework. The US Federal Trade Commission or Department of Transportation is responsible for enforcing these rules. Organisations must conduct an annual review to self-certify that they are compliant with the Framework to prove they are still capable of handling EU data.

It is important to note that being Privacy Shield-certified does not guarantee that an organisation is also GDPR-compliant. Organisations may need to adopt new practices and procedures to comply with the new rules introduced by GDPR.

US companies may be required to hire a local GDPR representative. This role is somewhat comparable to a DPO for EU-based organisations. GDPR requires organisations that are based outside of the EU but collect or process the personal data of people in the EU to appoint a local GDPR representative based within the EU. For example, if a US-based company sells products to residents of France but does not operate through a French branch or subsidiary, it is required to appoint a local GDPR representative. In contrast, if a France-based organisation were to do the same thing, it would only need to appoint a DPO.

The primary role of an EU representative is to act as the mediator between the data controller on one side and the EU Data Protection Authorities and data subjects on the other. They do not carry the same responsibilities as a DPO. The primary tasks of an EU representative are:
- responding to any inquiries EU authorities or data subjects may have concerning data processing
- receiving legal documents for the company as an authorised agent
- maintaining records of processing activities
- giving data processing records to authorities upon request

Summary: GDPR-Compliance Checklist

1) Become familiar with the basics of GDPR and its implications for your organisation
2) Perform a comprehensive audit on data, and assess what data is being held and for what purpose
3) Check that all processes and procedures that involve consumer data are GDPR-compliant
4) Ensure that all consent-obtaining procedures follow the new rules
5) Recognise high-risk data and processes as described by Article 9 of GDPR and change business practices to handle this data in a safe and secure manner
6) Have a data breach response plan in place
7) Train staff in GDPR compliance
8) Consult with a data security expert to ensure that your organisation’s security framework meets GDPR’s standards
https://www.compliancejunction.com/gdpr-compliance-for-dummies/
Smart City is the concept of managing public infrastructure in cities from a centralized location by deploying information and communication technology (ICT) tools. This is expected to enhance the performance and quality of urban services such as energy, connectivity, transportation, utilities, and governance, among others. A smart city is a framework of an ‘intelligent network’ of connected objects and machines that transmit data to an integrated command and control centre (ICCC), which receives and manages the data in real time to help improve the quality of life. The communication network is the backbone of any ICT infrastructure, and even more so for the smart city.

The data for a smart city is typically generated at various touchpoints, including crossroads, notorious crime-prone locations, market streets, entry and exit points, control or monitoring points in water distribution systems, sewage treatment plants, waste collection and treatment facilities, streetlights, power supply distribution locations, bus stops, government offices, service buildings and many other such locations that are generally not well covered by existing commercial telecom networks. Millions of data points need to be gathered from these sources using CCTVs, devices and sensors and assimilated at the data center.

To provide for the aspirations and needs of the citizens, urban planners ideally aim at developing the entire urban eco-system, which is represented by the four pillars of comprehensive development: institutional, physical, social and economic infrastructure. This can be a long-term goal, and cities can work towards developing such comprehensive infrastructure incrementally, adding on layers of ‘smartness’.

In the approach of the Smart Cities Mission, the objective is to promote cities that provide core infrastructure and give a decent quality of life to their citizens, a clean and sustainable environment and the application of ‘Smart’ solutions. The focus is on sustainable and inclusive development, and the idea is to look at compact areas and create a replicable model which will act like a lighthouse to other aspiring cities. The Smart Cities Mission of the Government is a bold, new initiative. It is meant to set examples that can be replicated both within and outside the Smart City, catalyzing the creation of similar Smart Cities in various regions and parts of the country.

The core infrastructure elements in a smart city would include:
- adequate water supply
- assured electricity supply
- sanitation, including solid waste management
- efficient urban mobility and public transport
- affordable housing, especially for the poor
- robust IT connectivity and digitalization
- good governance, especially e-Governance and citizen participation
- sustainable environment
- safety and security of citizens, particularly women, children and the elderly
- health and education

Accordingly, the purpose of the Smart Cities Mission is to drive economic growth and improve the quality of life of people by enabling local area development and harnessing technology, especially technology that leads to Smart outcomes. Area-based development will transform existing areas (retrofit and redevelop), including slums, into better planned ones, thereby improving the livability of the whole city. New areas (greenfield) will be developed around cities in order to accommodate the expanding population in urban areas. Application of Smart Solutions will enable cities to use technology, information and data to improve infrastructure and services.
Comprehensive development in this way will improve quality of life, create employment and enhance incomes for all, especially the poor and the disadvantaged, leading to inclusive cities.
https://www.duraline.com/applications/smart-cities/
Could we program a cognitive computer to examine the problem of cancer, or the yearly flu outbreak, balancing the federal budget or just finding a way to reduce traffic congestion around the city?

John Breeden II is an award-winning journalist and reviewer with over 20 years of experience covering technology and government. He is currently the CEO of the Tech Writers Bureau, a group that creates technological thought leadership content for organizations of all sizes. Twitter: @LabGuys.

If you happen to have a free 30 hours or so, I would highly recommend watching Google’s AlphaGo program take on one of the best players in the world at the ancient Chinese board game Go. If you don’t have that much time, you could instead just watch the 6-hour third match, where the program wrapped up the best-of-five series. It’s literally history being made.

Some news outlets have covered this feat, but I don’t think many people understand how monumental this actually is. Back in 1997, when Garry Kasparov was beaten by IBM’s Deep Blue in chess, people were more excited about the future of computing. But when you look at the underlying technology, the AlphaGo win is so much more impressive, because it required a computer to actually think like a human to beat a human.

Deep Blue was a supercomputer able to use brute-force tactics to beat a grandmaster at the game of chess. Some say Deep Blue was programmed specifically to be able to beat Kasparov, and while I don’t know if that was true, it was likely within the realm of possibility for the program. AlphaGo could not have been similarly programmed to take on Korean master Lee Se-dol, considered the best in the world. To beat a human at Go, a program has to really think like a human.

You see, chess and Go are fundamentally different games. In chess, each time a player moves, he or she has an average of about 20 options. With Go, the average is about 200. Because of the more limited options in chess, it is easier for a computer to store specific game variations and act accordingly to follow a winning strategy. In chess, by each player’s second move, there are 72,084 possible game combinations. By the third move, that jumps to over 9 million possible games. By the fourth move, it jumps again to over 318 billion possible games. That seems like a lot, but a dedicated supercomputer could crunch those numbers within the time allotted for a turn in chess. Plus, Deep Blue could have had the most likely moves Kasparov would make, based on his history, set at the top of its memory stack, which leads to the accusations that it was designed to beat one person.

You simply can’t do any of that with Go, which is played on a 19-by-19 square grid with two players alternately placing white and black tiles. Players can capture their opponent’s tiles by surrounding them, and the winner is whoever controls the most territory at the end of play. In terms of game variations, those who study Go say the possibilities exceed the number of atoms in the universe.

To win at Go, and especially to beat the best human in the world, AlphaGo had to use superior pattern recognition and actual learned strategy, just like a human player. And AlphaGo played like a human, too. If you listen to the commentators for match three, at times they seemed confused by the strategy the program was taking, which at different points was even risky, though it got away with it. They concluded the program must have been “showing off” in the third match to sweep the series.
The real victory here is the ability to show that thinking machines are starting to use their intelligence much better, and within proper context. AlphaGo used qualitative reasoning, which is one part of artificial intelligence. Wikipedia defines the new science as an area of research within artificial intelligence that automates reasoning about continuous aspects of the physical world, such as space, time and quantity, for the purpose of problem solving and planning using qualitative rather than quantitative information. This allows the intelligence to contemplate infinite possibilities, at least within structured rulesets.

Unlike true artificial intelligence, AlphaGo will never become fully cognizant. It doesn’t care if it wins or loses, and only “knows” reality within the terms of the game. But think about how powerful that could be for humanity. Could we program a cognitive computer to examine the problem of cancer, or the yearly flu outbreak, balancing the federal budget or just finding a way to reduce traffic congestion around the city?

AlphaGo is hopefully the first of these superstar cognitive computers we will experience. It showed what was possible, and now we need to direct that technology at things which are more important than just a game.
https://www.nextgov.com/emerging-tech/2016/04/artificial-intelligence-flexes-its-cognitive-muscle/127396/
By David Ashamalla, Director of Security Operations

Most everyone is familiar with using a username and password to sign in to services online. This typical sign-in method is single-factor authentication: there is only one requirement you must satisfy to be given access, so the security it provides is moderate. Two-factor authentication (a form of multi-factor authentication, or MFA) provides additional security. This is when a service requires that you correctly submit a combination of two of the following: something you know, something you are, and something you have.
- Something you know: a username and password
- Something you are: generally known as “biometrics”: your fingerprint, retina, DNA, or whatever future items have yet to be thought of
- Something you have: a FOB, phone, USB key, smart card. Some physical device in your possession to help prove who you are

For example, you might first need to enter your username and password. When that is verified, you would be prompted to get a code off of your phone, then enter that code correctly to be signed in to the service. This second factor adds increased protection.

Standards reduce complexity, and the number of devices you may need

In the past, using MFA often meant carrying costly devices that generated a fresh string of digits every minute. Devices were specific to the services they were used for, which meant you could be carrying 3 or more devices for the services your company used. We have come a long way since those days. Luckily, the next development was the acceptance of the TOTP standard (time-based one-time password). This is a temporary pass-code, generated by an algorithm, that app writers could use to build a single mobile application supporting multiple services. You have seen this when using Google Authenticator, the Microsoft Authenticator, or the Duo app. These apps use the same standard for generating the short numeric pass codes used for multi-factor authentication. This simplified the process and meant that you could use one app instead of multiple costly code-generating devices.

A New Standard for the Web

The newest standard in two-factor authentication is called U2F. It is sponsored by the FIDO Alliance (Google, Microsoft, Intel, Facebook and Amazon are some of the members) and promises to simplify MFA even further. In the U2F protocol, physical USB tokens are used. These are very cost effective and much more automatic. U2F allows a web browser to access the keys and pass the authentication response to a web server automatically. Depending on the implementation there may be no need to press any button to confirm, and no need to type in a one-time password. If you lose your device, delete it and provision a new one. If your credentials are compromised, your key protects your accounts.

So how can these new developments in MFA protocols be beneficial to your business? The U2F standard helps block phishing attacks. During two-factor authentication setup, a “site signature” is generated using the URL, port, and TLS certificate, and the key will only answer authentication requests from the site with that signature. This is behind Google’s announcement that since implementing physical keys with the U2F standard they have not had a single phishing incident lead to an account takeover. While this, and MFA in general, won’t protect against all types of attacks, it does provide an extra layer of security to help increase your business’ safety.
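The TOTP scheme described above is small enough to show in full. The sketch below illustrates the published algorithm (RFC 6238 / RFC 4226) rather than any particular vendor’s app, and the base32 secret is only a placeholder:

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    """Return the current time-based one-time password for a base32-encoded secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period                  # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # placeholder secret; prints a 6-digit code such as "492039"

Because the server holds the same secret and clock, it can compute the code independently and compare, which is why each code only needs to stay valid for a short window.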
https://www.ciosolutions.com/username-and-password-is-it-enough/
The first ever study of the intakes of omega-3 polyunsaturated fatty acids in pregnant women in New Zealand has found that only 30 per cent were getting the recommended daily amount.

A group of 596 pregnant women in their last trimester of pregnancy volunteered to take part in the online study. Participants had to complete a food frequency questionnaire designed to investigate polyunsaturated fatty acid intakes over the last three months of their pregnancy. Dr Kathryn Beck from Massey’s School of Sport, Exercise and Nutrition says omega-3 fatty acids are important during pregnancy for a number of reasons. “They help form important building blocks for our cells, and are essential for the development of baby’s brain and growth. These fatty acids help support mothers to have a healthy pregnancy.”

Docosahexaenoic Acid (DHA)

The omega-3 fatty acid known as docosahexaenoic acid (DHA) is critical during the time when the neural tube closes, and it remains important throughout pregnancy as it accumulates in the fetal brain and retinal tissues. The amount of DHA accumulated by the fetus occurs mainly in the third trimester of pregnancy and is influenced by the maternal diet. The recommendation for combined omega-3 polyunsaturated fatty acids during pregnancy is 115mg per day. However, several international organizations recommend pregnant women should aim to achieve at least 200mg of DHA per day. Dr Beck says that while 77 per cent of participants met the lower target, only 30 per cent were meeting the international recommendation for DHA of 200mg per day.

150g per serve

Fish and seafood are the richest sources of omega-3 polyunsaturated fatty acids and also provide several other nutrients, including protein and iodine, all of which are important for fetal development. “Two serves of fish [150g per serve] per week can substantially contribute to meeting omega-3 polyunsaturated fatty acid recommendations. Despite guidelines encouraging fish and seafood as safe to eat during pregnancy, women may decrease or limit these foods due to concerns regarding food safety and the potential for mercury poisoning,” Dr Beck says. The likely reason for the deficiency was the low intake of fish and seafood.

“Women who are currently pregnant or planning to be should aim to eat a variety of healthy foods every day from each of the four food groups to get all the nutrients they need to protect the long-term health of both themselves and their baby. Those who have any concerns related to their diet should seek advice from their doctor, registered nutritionist or dietitian.” Dr Beck says there is little concern with canned tuna (skipjack or albacore tuna), canned salmon, mackerel or sardines, farmed salmon, tarakihi, blue cod, hoki, john dory, monkfish, warehou, whitebait and flat fish like flounder. According to the Ministry of Health nutrition guidelines for pregnant women, fish and seafood should be well cooked and served hot.
https://areflect.com/2017/09/24/more-pregnant-women-lack-of-omega-3/
While some companies focus on achieving the highest number of qubits, others try to develop a full quantum stack to make this technology available to a large number of industries and developers. Here we analyze the quantum computing leaders that are pushing this technology forward.

Conventional computers store information using bits represented as 0s or 1s. Quantum computers use quantum bits (qubits) to encode information as 0s, 1s, or both at the same time. This new dimension, along with superposition and entanglement, gives quantum computers the power to provide solutions to some of the world’s toughest problems. Understanding, controlling, and utilizing the quantum world is challenging but holds great potential for designing new forms of matter and new paths for information processing. Although the development of quantum computers is still in its infancy, many companies are running the race for quantum supremacy. Here we analyze the quantum computing leaders of the moment and the developments that will determine the evolution of quantum technology.

Top Organizations Innovating in Quantum Computing

Of the players working on and innovating in quantum computing, the majority are SMEs and startups (40%) and universities (33%). However, when looking at the top ten leaders in quantum computing, most are prestigious universities such as Oxford, Harvard, or MIT, and big tech corporations like IBM, Microsoft, and Google. Google itself has recently unveiled a new milestone, Bristlecone, a quantum computing chip with 72 qubits. They think the chip can achieve quantum supremacy: the point at which quantum computers can do calculations beyond the reach of today’s fastest supercomputers.

The University of Waterloo (Canada) is the one with the highest activity in publications and conference proceedings (229 and 61 respectively). One of their lines of research focuses on the precision measurement of photons, one of the main constraints on the development of quantum applications. And recently, they have published a study showing that a well-designed optomechanical device can non-destructively detect phonon signals across a wide range of frequencies and at the single-quantum level.

For its part, the University of Oxford (UK) is the entity that has gained the most public funding: $117.59M across 62 grants, the majority of which come from the national UK agency GTR. Since 2016, they have focused on lattice systems and high-performance sensors for the development of quantum computing. Finally, D-Wave Systems (Canada) has recently announced that it has received $50M in funding to deploy its next-generation quantum computing system with more densely connected qubits, as well as platforms and products for machine learning applications.

Top 10 entities worldwide leading the innovations and advancements in Quantum Computing. Source: Linknovate.com

The Quantum Computing Leaders

All five quantum computing leaders are well-known tech leaders from the USA. When we think of quantum computing, we think of IBM and their 50-qubit quantum computer. Plus, they have launched the IBM Q Experience as a forum for discussion and collaboration to build commercially available universal quantum computers. So what is on their horizon? Earlier this year they patented a new technique for assembling superconducting qubit devices for scalable quantum computing.
They are also participating in several grants to develop a quantum-resistant trusted platform module and to apply quantum computing to cryptography. However, the most active organization is Microsoft, which has the highest number of patent applications in the field since 2010: 45. Their approach uses topological qubits for an integrated, scalable solution that combines both quantum and classical computing. This is a different approach that focuses on Majorana fermions. These particles can be their own antiparticle and, when they are used in quantum computers, the error rates are expected to be significantly lower. As for MIT, since 2010 they have received $17.4M for 32 different quantum computing projects. One of the best-funded is EFRI ACQUIRE ($2M), for scalable quantum communications with semiconductor qubits. They want to develop solid-state technology for quantum networks using semiconductors (nitrogen-vacancy and silicon-vacancy color centers in diamond). In addition, the NSF Physics Frontier Center program has awarded the Center for Ultracold Atoms (CUA) $4M so far. This center is a joint effort between MIT and Harvard University that aims to harness the power of quantum mechanics for new physics and new applications. Through the NSF award, they will develop a variety of theoretical and experimental tools. With these tools, they will study strongly interacting many-body systems of ultracold atoms and molecules, as well as coherent single- or few-particle systems. Most Active Countries in Quantum Computing The USA is by far the most active country in quantum computing, with triple the activity of the second country in the ranking: China. Its position is mainly due to the work of its enterprises, both corporations and SMEs. Notably, the investment bank Goldman Sachs has two patent applications in quantum computing, which revolve around methods for fast computations using quantum computing. As for Intel, their quantum computing technology is on a level with IBM’s. They have developed the 49-qubit Tangle Lake test chip, with which researchers can simulate computational problems and improve error-correction techniques. Intel is also big in neuromorphic computing and has developed the powerful Loihi chip. Rigetti, for its part, develops quantum virtual machines and has 11 patent applications in the field since 2010. Top Quantum Computing Trends The main trends in quantum computing revolve around quantum optics and optical qubits. So far, these topics are mainly being researched in universities and research centers (80%); they have yet to reach the market. However, using Linknovate’s platform we can find interesting examples of startups that are applying quantum computing in novel ways. Start with Zapata Computing, an American startup that spun out of Harvard University and applies quantum computing to the chemistry industry. They have designed the CUSP algorithm, a quantum circuit optimizer that dramatically improves quantum algorithm efficiency, and the company has been selected as one of the first to use Google‘s Cirq software. Also from the USA, QC Ware offers quantum computing as a service (QC PaaS) and has received funding from the NSF to develop its platform. With this solution, they want to make quantum computing power available for commercial and research applications. And in Canada, 1QBit solves industry’s computational challenges by recasting problems to harness the power of quantum computing. 
They have several patent applications in the field, which revolve around systems for quantum-ready and quantum-enabled computations. Quantum Computing over Time As we can see, quantum computing has been experiencing constant growth since 2010. As new research is published and new players enter the field, we can expect the same trend to continue over the coming years. Programa Innovapeme 2018. Funded by GAIN, FEDER funds (EC).
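To make the qubit description at the start of this piece more concrete, here is a minimal, illustrative sketch in Python (not tied to any vendor's hardware or SDK mentioned above) showing a single qubit put into an equal superposition of 0 and 1 using plain linear algebra:

```python
import numpy as np

# Basis states |0> and |1> written as 2-component state vectors.
zero = np.array([1.0, 0.0])
one = np.array([0.0, 1.0])

# The Hadamard gate turns |0> into an equal superposition of |0> and |1>.
H = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2)

state = H @ zero                    # (|0> + |1>) / sqrt(2)
probabilities = np.abs(state) ** 2  # Born rule: measurement probabilities

print(state)          # [0.70710678 0.70710678]
print(probabilities)  # [0.5 0.5] -> a measurement yields 0 or 1 with equal chance
```

Frameworks such as Google's Cirq or the IBM Q stack build on exactly this state-vector picture, adding many qubits, gates, noise models, and real hardware backends.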
<urn:uuid:8bc6ebaa-05e7-49d3-ab31-3d2cd7b4ee7e>
CC-MAIN-2022-40
https://blog.linknovate.com/quantum-computing-leaders-must-know/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00759.warc.gz
en
0.918576
1,716
3.203125
3
In April 2018, criminals hit Atlanta, GA with a ransomware attack that cost the city $2.6 million in recovery costs. March of this year saw Jackson County, GA paying a $400,000 ransom to bring their computer systems back online. And Baltimore, MD has spent an astronomical $18 million to fix issues from a May ransomware attack. These three examples are just a small sample of the numerous cyberattacks that have targeted government organizations in the past few years. Cybercriminals are increasingly turning local- and state-level governments into victims of a particular type of cyberattack, called ransomware. In a ransomware attack, a hacker infiltrates an organization’s computer system, blocking access to critical information or services. After successfully entering the network, the criminal begins to encrypt essential files or even block resources until the victim pays a ransom. These ransoms are sometimes so significant that the organization continues to feel the impact months, or even years, after the hack. Although cyberattacks on government entities are nothing new, the magnitude and frequency of ransomware seem to have increased drastically over the past few years. There are a few causes we can attribute to this rise in crime: Budget Constraints Are Hindering Cybersecurity Efforts Often, government entities at the local level struggle to find room in the budget to implement proper cybersecurity practices. A 2016 International City/County Management Association (ICMA) survey revealed that 80 percent of local governments feel that a lack of funding is preventing them from achieving the highest possible level of cybersecurity. Over half of surveyed local governments cite a lack of funding as a somewhat severe or severe barrier to proper cybersecurity. | Source: ICMA Cybersecurity 2016 Survey As an additional effect of underfunding, local governments also report that they’re unable to hire a sufficient number of cybersecurity specialists due to an inability to pay competitive salaries. On a positive note, government officials appear to be aware of the gaps in their networks’ security and support fixing them. Respondents in the ICMA survey stated that more support from department managers and top appointed officials would have the least effect on improving cybersecurity, as they already support those efforts. Department managers and top appointed officials aren’t the bottlenecks to better government cybersecurity. | Source: ICMA Cybersecurity 2016 Survey Poor Disaster Recovery Plans Amplify Ransomware Damage Not only does insufficient funding lower the barriers to potential attackers, it also prevents organizations from implementing adequate disaster recovery plans. Many of the attacks on government organizations are as successful as they are because the victims fail to have a backup strategy in place. A lack of contingency plans can quickly turn a thousand-dollar mistake into one that potentially costs millions. The attack on Baltimore serves as a prime example of why organizations need to regularly back up their data and create comprehensive recovery plans. The ransomware attack infiltrated several systems, including a parking fines database, government communications, and other payment platforms before officials were able to stop it. If the Baltimore government had backed up these systems, operations most likely wouldn’t have been affected, as officials could then have worked from those backups. Instead, the ransomware blocked use of those systems. 
And the government had to shut down the rest to prevent the attack from spreading. By the end of the attack, the city had spent several months of working hours and millions of dollars to get the systems back online. If they had implemented a proper disaster recovery plan, those resources would have been saved. Government Computer Systems Are Typically Out-of-Date Inadequate cybersecurity funding isn’t the only issue, unfortunately. Government organizations at all levels fail to regularly update their software and hardware, sometimes using resources that are more than 50 years old. Antiquated computer resources often contain vulnerabilities that vendors will no longer (nor have an obligation to) patch, giving criminals effortless access to internal systems. Even at the national level, the U.S. government is using severely out-of-date technology. | Source: United States Government Accountability Office Several types of malware capitalize on these outdated systems. Many utilize the EternalBlue vulnerability, for instance. EternalBlue is a bug found in old Windows systems that allows attackers to execute code on a victim’s computer. And although Microsoft has released fixes for the vulnerability, computer systems that haven’t been updated remain at risk. Considering that some government computers are still running Windows 3.1 (1992), it’s likely that there are numerous systems exposed to the EternalBlue threat even today. Continuing with our Baltimore example, many people believe that the ransomware behind the attack on the city, RobbinHood, took advantage of the EternalBlue vulnerability. If the organization had updated its software regularly, it’s likely that the ransomware would not have been a threat. Or, it would at least have been contained. Government Organizations Are Relying More on Digital Systems Separately from cybersecurity practices, criminals are more often targeting government entities simply because those entities have become more reliant on their digital networks. We’re seeing an apparent disconnect between the amount of funds governments are allocating to critical IT networks and the budgets of the cybersecurity programs meant to protect them. Experts predict that local and state governments will spend almost $108 billion on information technology in 2019, up from the $103 billion spent in 2018. And this trend will continue as the world moves further into the digital era. 2019 IT spending is projected to grow nearly five percent from last year’s local and state spending. | Source: Govtech Navigator The actual ransom component of ransomware attacks is usually negligible in comparison to the cost of offline systems. The hackers in the Atlanta attack, for example, only demanded a $50,000 payment. So why was the final bill in the millions? The criminals caused disruptions in five government departments, which prevented residents from paying bills, interrupted the court system, and halted vital communications, among other things. In other words, hackers blocked government officials from accessing the computer networks on which they had become reliant. Doing so forced the Atlanta government to spend millions on emergency contracts to bring their infrastructure back online. How Can Governments Protect Their Computer Networks? The combination of limited budgets, a lack of cybersecurity staff, poor contingency planning, and years of technical debt presents a bleak picture for the state of government cybersecurity. 
As a government organization, you can take a few low-cost precautions to protect your network. Most importantly, you need to regularly back up your data and create contingency plans for common threat scenarios. You may not be able to stop every hacker from attacking your system, but you can be well prepared if, or when, they do. No matter the budget, it’s vital to become aware of the specific vulnerabilities your organization faces. Start by receiving a free cyber checkup today to discover where the security gaps in your organization reside.
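As a minimal sketch of the "regularly back up your data" advice above (the paths and schedule are hypothetical placeholders, not a complete disaster recovery plan), a scheduled job could write timestamped archives of critical directories to separate, ideally off-site or offline, storage:

```python
import tarfile
import time
from pathlib import Path

# Hypothetical source directories -- substitute the systems you actually need to protect.
SOURCES = [Path("/var/data/payments"), Path("/var/data/communications")]
BACKUP_DIR = Path("/mnt/offline-backups")   # ideally removable or off-site storage

def make_backup() -> Path:
    """Write one timestamped .tar.gz archive containing every source directory."""
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = BACKUP_DIR / f"backup-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        for src in SOURCES:
            if src.exists():
                tar.add(src, arcname=src.name)  # store each tree under its own top-level name
    return archive

if __name__ == "__main__":
    print(f"Backup written to {make_backup()}")
```

A real plan would also include off-site replication, retention rules, and, crucially, periodic test restores to confirm that the backups can actually be used when ransomware strikes.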
<urn:uuid:981148ae-deb8-4553-ae34-cf080b71cca6>
CC-MAIN-2022-40
https://ledgerops.com/blog/vicious-ransomware-attacks-are-costing-governments-millions-08-14-2019/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337421.33/warc/CC-MAIN-20221003133425-20221003163425-00759.warc.gz
en
0.952254
1,416
2.75
3
A group calling itself the International Electromagnetic Field Collaborative released an impassioned, 44-page report on Aug. 25 with the intent of drawing attention to studies showing a significant risk of brain tumors from cell phone use and exposing what it calls “design flaws” in the Interphone study protocol. The 13-country Interphone study is said to be the largest case-control study to investigate the relationship between brain tumors and cell phone use. The EMF Collaborative, which comprises the EM Radiation Research Trust, the EMR Policy Institute, ElectromagneticHealth.org and The Peoples Initiative Foundation, describes the Interphone study as funded by the telecom industry and biased in its methods and findings. Wanting to “raise red flags” to alert government officials and journalists to findings beyond those of Interphone, the group cites data from international sources, including a Swedish study that found a 280 percent increased risk of brain cancer after 10 or more years of digital cell phone use. The Swedish study reportedly also cites a 420 percent increased risk of brain cancer for users who began using a cell phone as teens or younger, and among adults, it found the risk of brain cancer to increase by 8 percent for every year of cell phone use. The EMF Collaborative report, “Cellphones and Brain Tumors: 15 Reasons for Concern,” includes concerns that research funded by the telecom industry has also found cell phone use to elevate the risk of brain tumors; that there have been warnings from governments, including those of the United Kingdom, Israel, Finland and Germany, about children’s cell phone use; that cell phone radiation is shown to damage DNA, an established cause of cancer; and the little-discussed fact that many cell phone manuals warn users to keep the phone away from their bodies when it’s not in use. The Collaborative additionally offers recommendations for public safety, in light of its concerns. “We wholeheartedly echo the European Parliament’s recent call for actions,” the group writes. These actions include reviewing the scientific adequacy of existing cell phone use limits, creating wireless-free areas, such as schools and day care centers, and creating awareness campaigns geared toward children and young people. John Walls, vice president of public affairs for CTIA, The Wireless Association, a nonprofit representing all aspects of wireless communication, issued a statement saying: “… The peer-reviewed scientific evidence has overwhelmingly indicated that wireless devices do not pose a public health risk. In addition, there is no known mechanism for microwave energy within the limits established by the FCC to cause any adverse health effects. That is why the leading global health organizations such as the American Cancer Society, National Cancer Institute, World Health Organization and the U.S. Food and Drug Administration all have concurred that wireless devices are not a public health risk.” A copy of the Collaborative’s report is available at RadiationResearch.org. Its author, Lloyd Morgan, told PC World: “Cell phones can be used appropriately and have a certain usefulness, but I fear we will see a tsunami of brain tumors, although it is too early to see that now since the tumors have a 30-year latency. I pray I’m wrong, but brace yourself.”
<urn:uuid:b7c1662d-b36e-42fb-baaa-8c49d32e0a7d>
CC-MAIN-2022-40
https://www.eweek.com/mobile/cell-phone-use-increases-risk-of-brain-tumors-new-study-finds/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00759.warc.gz
en
0.93572
680
2.578125
3
People have a natural inclination to open and respond to emails, and therein lies the reason for the tremendous success of social engineering attacks. Virtually all criminal-minded individuals who use email as their primary entry point to a computer network include social engineering among their main assault methods. Social engineering routinely causes all kinds of havoc with security systems and is a staple of business email compromise (BEC) fraud, advanced persistent threat (APT) campaigns, and financially driven cybercrime. There is a constant game being played between perpetrators and victims. Companies invest in educating their employees about the dangers of social engineering crimes, and that makes employees better equipped to identify potential threats in their inbox. On the other hand, cybercriminals recognize that workers are being trained and are not as naive, so they take steps to make their attacks more sophisticated. Criminals realize they have to stay one step ahead of employees, or their attacks will not bear fruit. Current Social Engineering Approaches Given the fact that most employees have become aware of the dangers posed by unknown email senders, cybercriminals have evolved their attacks to allay the suspicions and fears of employees. Here are some of the most popular approaches to social engineering: - Criminals frequently make use of themes in their emails which are known to be socially relevant, currently topical, or timely in nature. - They also incorporate phone technology into their attacks, especially since virtually everyone on the planet currently owns one. - When they hit on a successful strategy, e.g. using a trusted company’s services, they immediately exploit and expand that usage. - Criminals gain access to conversation threads which take place between co-workers, and they make use of this knowledge to exploit them. - They try to establish prolonged and continuing conversations with employees, so as to build trust and confidence, before making a move to exploit them somehow. Successful 2021 Social Engineering Attacks 2021 was a good year for cybercriminals perpetrating social engineering attacks. In November, an attacker phoned Robinhood, a commission-free investing app. He got his customer service call escalated to a higher level, and eventually was able to access some of the customer support systems for the trading platform. Five million customer email addresses were breached, and two million more investor names were exposed. As you might expect, the COVID-19 pandemic provided easy pickings for cybercriminals. All kinds of calls were made to people by criminals posing as government officials, requesting that people submit data about their vaccination status. Using deepfake technology, a cyberattacker was able to convince a bank employee that he was making a legal and legitimate transfer of $35 million to a business client – instead, that money went to the attacker. The phone conversation was supported by phishing emails previously sent by the perpetrator to help with the ruse. The transfer went through, and the bank lost the $35 million. Why Is Social Engineering So Effective? Despite the best efforts of companies to educate their workers, the fact remains that humans are almost always the weakest link in any network chain. It is very often far easier to dupe them and exploit them than it is to circumvent the sophisticated security systems in place at other endpoints of a network. 
Many cybercriminals have grown so sophisticated themselves that they’re now set up to emulate legitimate businesses, and that provides them with a stable platform from which to operate and to carry out their attacks. This has also allowed them to scale their operations so as to reap even greater profits than before. In short, it is becoming even more lucrative for cyberattackers to carry out their attacks than it was in the past. Having been reinforced in their belief that humans are the most easily exploitable part of any infrastructure, criminals have made it a point to continue preying on the emotions, instincts, and behaviors of human employees. And that means that social engineering will not go away anytime soon – in fact, for the foreseeable future, you can count on social engineering to remain one of the most effective tools in the arsenal of cybercriminals. What To Know About Social Engineering Attacks Social engineering can take a great many forms, because there are all kinds of ways that employees can be duped into providing something an attacker needs. Of all the possible forms social engineering may take, the three listed below are by far the most common: Baiting and scareware – baiting promises the victim something that piques their curiosity and convinces them to supply personal data or to do something that permits the entry of malware into the company network. One popular use is to convince the victim that their system is infected, and that they have to install some software (which actually carries malware) in order to clean it. Pretexting – this involves using a fake identity for the purpose of securing sensitive information. All kinds of data can be acquired using this approach, because it’s fairly easy to convince an employee that they’re actually talking to a company manager. Phishing and spear phishing – still the all-time champion among social engineering tactics, phishing persuades an email recipient to click on an attachment that quickly downloads malware into the network. Spear phishing is a bit more exclusive, targeting specific employees with a message that has been customized just for them. When the employee responds or performs some kind of action, that allows the attacker into the system, where all kinds of damage can be done. Outlook for the Future Social engineering is here to stay, just like human employees are here to stay. The only way to prevent or minimize damage done to a company network is to constantly train employees and increase their awareness of the risks posed by social engineering and the ways that attackers operate. With increased awareness and constant training, there is hope that severe social engineering attacks in the future can be avoided.
<urn:uuid:2d3322d7-22da-47d3-97bc-0c2305f04731>
CC-MAIN-2022-40
https://www.2secure.biz/2022/07/social-engineering-attacks/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334596.27/warc/CC-MAIN-20220925193816-20220925223816-00159.warc.gz
en
0.971603
1,194
2.609375
3
In this article, written by Maximilian Sand, Team Leader Artificial Intelligence at Dallmeier Electronic, some basic principles are highlighted that make it possible to assess the functionality and benefit of video analysis based on Artificial Intelligence. Video analysis based on Artificial Intelligence (AI) promises a quantum leap in technology with great benefit to the customer – but only if the critical, which is to say informed, user is able to evaluate the technology correctly. For a long time now, video security technology has incorporated procedures based on Artificial Intelligence. More and more new applications and products are using algorithms to deliver new analytics or make existing ones more robust. The goal is clear added value for users, and the results speak for themselves. Whereas in the past, for example with classic image processing, it took great effort to reliably recognize a tree moved by the wind as a false alarm, nowadays an AI does it without problems. The essential distinguishing feature between image or video analysis with classic image processing and analysis with artificial intelligence is that the algorithms are no longer just programmed but taught – with a lot of data. Using this data, the system learns to recognize patterns and so, for example, to differentiate a tree from an intruder. But the concept of machine learning also poses new problems and challenges. A prominent example is the difference in recognition quality across different ethnic groups, a problem that has even made headlines. The background is relatively simple: only when data is available in sufficient quantity, with sufficient diversity and an even distribution, can an Artificial Intelligence learn robustly. AI System Quality All this leads to questions about the performance capacity of a system that uses artificial intelligence. What measures are used to compare, for example, two procedures, different systems or manufacturers? What does it mean if a brochure promises, for example, "95% detection accuracy" or "reliable recognition"? How good is a precision of 95%? And, in short, what is reliable recognition? To answer this, you first have to understand how AI procedures can be evaluated. The first step is a specific definition, by the application and the client, of what "false" means and what "correct" means, especially in borderline cases. For example, in a person-recognition system, should a detection be assessed as correct if the image or video does not show a real person but only an advertising poster with a person on it? This and other parameters have to be established. Once this definition exists, you need a dataset for which the correct, expected results are known. The AI analyzes this dataset, and the ratio of correct to false detections is determined. Mathematics provides the user with different metrics, such as sensitivity (the proportion of expected detections that have actually been detected) or precision (the proportion of detections that are actually correct). So the "quality" of an AI, after all, is always a statistical statement about the evaluation dataset that has been used. Summer or winter? How useful this statement really is to the user or potential customer of a system depends on the distribution of the dataset. An assessment can certify good detection performance. 
But if the dataset is based exclusively on images from summer months, this assessment says nothing about the quality of the AI in winter, as light and weather conditions can differ considerably. As a result, statements about the quality of an AI analysis – particularly those with concrete numbers such as "99.9%" – should in general be taken with caution when not all the parameters are known. Without knowledge of the dataset used, the applied metric and other parameters, an unequivocal statement about how representative the result is cannot be made. There can be no exact indications Every system has its limits – including, naturally, AI systems. Knowing those limits is the basic requirement for making informed decisions. But here too, statistics and reality intersect, as the following example shows. Logically, an AI recognizes objects in an image or video less reliably the smaller they are. The first question posed to a user before the purchase of a system is therefore the maximum distance up to which objects can be detected, as it influences the number of cameras needed and, therefore, the cost of the entire system. But indicating an exact distance is not possible. There is simply no value up to which the analysis provides 100% correct results and another value beyond which detection is not possible. An evaluation here can only provide statistics such as, for example, detection accuracy as a function of object size. Better to compare directly In relation to system limits, manufacturers have chosen to describe them, as far as possible, with specific minimum or maximum values – for example in product data sheets. Among those would be a minimum distance or a minimum resolution. This is reasonable, as customers and installers need benchmarks to be able to evaluate a system. However, there is still a lot of uncertainty, for example whether these limit values are stated by the manufacturer rather conservatively or rather optimistically. The user will do well to always keep in mind that in video analysis there may be no clear, hard limits. With every system it is the same: errors will occur even within the specified parameters and, at the same time, under good conditions useful results can be achieved even beyond the stated limits. If a user wants to determine the true quality of an AI-based analysis, this will only be possible through a direct comparison; the numbers and parameters of different manufacturers are simply too different. In addition, the framework conditions and the input must, of course, be the same for all systems. A real-world test with demo or loaned products is a good way to do this. Additionally, system performance is then demonstrated directly in the required use case. That, by the way, is also the key when evaluating the performance of AI systems in general: it depends entirely on the use case, which should be specified as accurately as possible. Then, with the right solution, it will be possible to deliver real added value to the customer. Maximilian Sand, Team Leader Artificial Intelligence at Dallmeier Electronic
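As a rough illustration of the sensitivity and precision metrics discussed above, here is a small Python sketch; the detector outputs and ground-truth labels are invented toy data, shown only to make clear how such percentages are computed from an evaluation dataset:

```python
def evaluate(predictions, ground_truth):
    """Compare detections against a labelled evaluation dataset.

    predictions and ground_truth are lists of booleans, one entry per
    evaluated frame/object, True meaning "person present / detected".
    """
    tp = sum(p and g for p, g in zip(predictions, ground_truth))        # true positives
    fp = sum(p and not g for p, g in zip(predictions, ground_truth))    # false detections
    fn = sum(not p and g for p, g in zip(predictions, ground_truth))    # missed objects

    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0  # share of real objects that were found
    precision = tp / (tp + fp) if (tp + fp) else 0.0    # share of detections that are correct
    return sensitivity, precision

# Toy evaluation set: detector output vs. annotated ground truth for 10 samples.
pred = [True, True, False, True, False, True, True, False, False, True]
truth = [True, False, False, True, True, True, True, False, False, True]
print(evaluate(pred, truth))   # -> (0.833..., 0.833...) on this made-up data
```

Reporting only one such number, without the dataset and the definition of "correct" behind it, is exactly the kind of statement the article warns should be read with caution.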
<urn:uuid:9737d540-3603-46ca-bb1d-00b2ec8ffc83>
CC-MAIN-2022-40
https://www.digitalsecuritymagazine.com/en/2021/11/05/entre-expectativa-y-realidad-como-de-buena-es-la-inteligencia-artificial/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00159.warc.gz
en
0.919381
1,327
2.921875
3
Top 10 Languages Spoken in the World Here is the list of Top 10 Most Spoken Languages In The World. The ranking is based on the number of language speakers. There are approximately 6900 languages currently spoken around the world, the majority of which have only a small number of speakers. 10 Most Spoken Languages In The World This list of top 10 spoken languages is presented below in ascending order of the numbers of native speakers. 10. French - Number of speakers: 129 million Often called the most romantic language in the world, French is spoken in tons of countries, including Belgium, Canada, Rwanda, Cameroon, and Haiti. Oh, and France too. We’re actually very lucky that French is so popular, because without it, we might have been stuck with Dutch Toast, Dutch Fries, and Dutch kissing (ew!). To say "hello" in French, say "Bonjour" (bone-JOOR). 9. Malay Indonesian - Number of speakers: 159 million Malay-Indonesian is spoken in Malaysia and Indonesia. Actually, we kinda fudged the numbers on this one because there are many dialects of Malay, the most popular of which is Indonesian. But they’re all pretty much based on the same root language, which makes it the ninth most-spoken in the world. Indonesia is a fascinating place; a nation made up of over 13,000 islands it is the sixth most populated country in the world. Malaysia borders on two of the larger parts of Indonesia (including the island of Borneo), and is mostly known for its capital city of Kuala Lumpur. To say "hello" in Indonesian, say "Selamat pagi" (se-LA-maht PA-gee). 8. Portuguese - Number of speakers: 191 million Think of Portuguese as the little language that could. In the 12th Century, Portugal won its independence from Spain and expanded all over the world with the help of its famous explorers like Vasco da Gama and Prince Henry the Navigator. (Good thing Henry became a navigator. Could you imagine if a guy named "Prince Henry the Navigator" became a florist?) Because Portugal got in so early on the exploring game, the language established itself all over the world, especially in Brazil (where it’s the national language), Macau, Angola, Venezuela, and Mozambique. To say "hello" in Portuguese, say "Bom dia" (bohn DEE-ah). 7. Bengali - Number of speakers: 211 million In Bangladesh, a country of 120+ million people, just about everybody speaks Bengali. And because Bangladesh is virtually surrounded by India (where the population is growing so fast, just breathing the air can get you pregnant), the number of Bengali speakers in the world is much higher than most people would expect. To say "hello" in Bengali, say "Ei Je" (EYE-jay). 6. Arabic - Number of speakers: 246 million Arabic, one of the world’s oldest languages, is spoken in the Middle East, with speakers found in countries such as Saudi Arabia, Kuwait, Iraq, Syria, Jordan, Lebanon, and Egypt. Furthermore, because Arabic is the language of the Koran, millions of Moslems in other countries speak Arabic as well. So many people have a working knowledge of Arabic, in fact, that in 1974 it was made the sixth official language of the United Nations. To say "hello" in Arabic, say "Al salaam a’alaykum" (Ahl sah-LAHM ah ah-LAY-koom). 5. Russian - Number of speakers: 277 million Mikhail Gorbachev, Boris Yeltsin, and Yakov Smirnoff are among the millions of Russian speakers out there. Sure, we used to think of them as our Commie enemies. Now we think of them as our Commie friends. 
One of the six languages in the UN, Russian is spoken not only in the Mother Country, but also in Belarus, Kazakhstan, and the U.S. (to name just a few places). To say "hello" in Russian, say "Zdravstvuite" (ZDRAST-vet-yah). 4. Spanish - Number of speakers: 392 million Aside from all of those kids who take it in high school, Spanish is spoken in just about every South American and Central American country, not to mention Spain, Cuba, and the U.S. There is a particular interest in Spanish in the U.S., as many English words are borrowed from the language, including: tornado, bonanza, patio, quesadilla, enchilada, and taco grande supreme. To say "hello" in Spanish, say "Hola" (OH-la). 3. Hindustani - Number of speakers: 497 million Hindustani is the primary language of India’s crowded population, and it encompasses a huge number of dialects (of which the most commonly spoken is Hindi). While many predict that the population of India will soon surpass that of China, the prominence of English in India prevents Hindustani from surpassing the most popular language in the world. If you’re interested in learning a little Hindi, there’s a very easy way: rent an Indian movie. The film industry in India is the most prolific in the world, making thousands of action/romance/musicals every year. To say "hello" in Hindustani, say "Namaste" (Nah-MAH-stay). 2. English - Number of speakers: 508 million While English doesn’t have the most speakers, it is the official language of more countries than any other language. Its speakers hail from all around the world, including the U.S., Australia, England, Zimbabwe, the Caribbean, Hong Kong, South Africa, and Canada. We’d tell you more about English, but you probably feel pretty comfortable with the language already. Let’s just move on to the most popular language in the world. To say "hello" in English, say "What’s up, freak?" (watz-UP-freek). 1. Mandarin - Number of speakers: 1 billion+ Surprise, surprise, the most widely spoken language on the planet is based in the most populated country on the planet, China. It beats second-place English by a 2-to-1 ratio, but don’t let that lull you into thinking that Mandarin is easy to learn. Speaking Mandarin can be really tough, because each word can be pronounced in four ways (or "tones"), and a beginner will invariably have trouble distinguishing one tone from another. But if over a billion people can do it, so can you. Try saying hello! To say "hello" in Mandarin, say "Ni hao" (Nee HaOW). ("Hao" is pronounced as one syllable, but the tone requires that you let your voice drop midway, and then raise it again at the end.)
<urn:uuid:4145107f-5149-4d14-919b-9fdb190c59e1>
CC-MAIN-2022-40
https://www.knowledgepublisher.com/article/204/top-10-languages-spoken-in-the-world.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00159.warc.gz
en
0.929931
1,491
2.71875
3
Everyday Science Quiz Questions & Answers - Part 3 Question: Why is it more difficult to cook rice or potatoes at higher altitudes? Answer: Atmospheric pressure at higher altitudes is low, so water boils below 100°C. The boiling point of water rises and falls with the pressure on its surface. Question: Why is it difficult to breathe at higher altitudes? Answer: Because of low air pressure at higher altitudes the quantity of air is less, and so is that of oxygen. Question: Why are winter nights and summer nights warmer during cloudy weather than when the sky is clear? Answer: Clouds, being bad conductors of heat, do not permit heat radiated from the land to escape into the sky. As this heat remains in the atmosphere, cloudy nights are warmer. Question: Why is a metal tyre heated before it is fixed on wooden wheels? Answer: On heating, the metal tyre expands and its circumference increases. This makes fitting it over the wheel easier; cooling then shrinks it, fixing the tyre tightly. Question: Why is it easier to swim in the sea than in a river? Answer: The density of sea water is higher; hence the upthrust is more than that of river water. Question: Who will possibly learn swimming faster – a fat person or a thin person? Answer: The fat person displaces more water, which will help him float much more freely compared to a thin person. Question: Why is a flash of lightning seen before thunder? Answer: Because light travels faster than sound, it reaches the earth before the sound of thunder. Question: Why cannot a petrol fire be extinguished by water? Answer: Water, which is heavier than petrol, slips down, permitting the petrol to rise to the surface and continue to burn. Besides, the existing temperature is so high that the water poured on the fire evaporates even before it can extinguish the fire. The latter is true if a small quantity of water is poured. Question: Why does water remain cold in an earthen pot? Answer: There are pores in an earthen pot which allow water to percolate to the outer surface. Here evaporation of water takes place, producing a cooling effect. Question: Why do we place a wet cloth on the forehead of a patient suffering from high temperature? Answer: Because of the body's temperature, water evaporating from the wet cloth produces a cooling effect and brings the temperature down. Question: When a needle is placed on a small piece of blotting paper which is placed on the surface of clean water, the blotting paper sinks after a few minutes but the needle floats. However, in a soap solution the needle sinks. Why? Answer: The surface tension of clean water being higher than that of a soap solution, it can support the weight of a needle due to its surface tension. The addition of soap reduces the surface tension of water, resulting in the sinking of the needle. Question: To prevent multiplication of mosquitoes, it is recommended to sprinkle oil in ponds with stagnant water. Why? Answer: Mosquitoes breed in stagnant water. The larvae of mosquitoes keep floating on the surface of water due to surface tension. However, when oil is sprinkled, the surface tension is lowered, resulting in drowning and death of the larvae. Question: Why does oil rise up the cloth wick of an oil lamp? Answer: The pores in the cloth wick draw the oil up due to capillary action. Question: Why are ventilators in a room always made near the roof? Answer: The hot air, being lighter, tends to rise and escape from the ventilators at the top. 
This allows cool air to come into the room to take its place. Question: How does ink get filled in a fountain pen? Answer: When the rubber tube of a fountain pen immersed in ink is pressed, the air inside the tube comes out, and when the pressure is released the ink rushes in to fill the air space in the tube. Question: Why are air coolers less effective during the rainy season? Answer: During the rainy season, the atmospheric air is saturated with moisture. Therefore, the evaporation of water from the moist pads of the cooler slows down, so the air blown out of the cooler is not cooled. Question: Why does grass gather more dew at night than metallic objects such as stones? Answer: Grass, being a good radiator, enables water vapour in the air to condense on it. Moreover, grass gives out water constantly (transpiration), which appears in the form of dew because the air near grass is saturated with water vapour and slows evaporation. Dew is formed on objects which are good radiators and bad conductors. Question: If a lighted paper is introduced in a jar of carbon dioxide, its flame extinguishes. Why? Answer: Because carbon dioxide does not support burning. For burning, oxygen is required. Question: Why does the mass of an iron rod increase on rusting? Answer: Because rust is hydrated ferric oxide, which adds to the mass of the iron rod. The process of rusting involves the addition of hydrogen and oxygen elements to the iron. Question: Why does milk curdle? Answer: The lactose (milk sugar) content of milk undergoes fermentation and changes into lactic acid, which on reacting with milk protein (casein) forms curd. Question: Why does hard water not lather soap profusely? Answer: Hard water contains sulphates and chlorides of magnesium and calcium, which form insoluble compounds with soap. Therefore, soap does not lather with hard water. Question: Why is it dangerous to have a charcoal fire burning in a closed room? Answer: When charcoal burns it produces carbon monoxide, which is suffocating and can cause death. Question: Why is it dangerous to sleep under trees at night? Answer: Plants respire at night and give out carbon dioxide, which reduces the oxygen content of the air required for breathing. Question: Why does ENO's salt effervesce on addition of water? Answer: It contains tartaric acid and sodium bicarbonate. On adding water, carbon dioxide is produced, which when released into the water causes effervescence. Question: Why does milk turn sour? Answer: Microbes react with the milk and grow. They turn lactose into lactic acid, which is sour in taste.
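To put rough numbers on the lightning-and-thunder answer above, here is a short Python sketch using everyday approximate speeds; it estimates how long light and sound take to travel from a storm one kilometre away:

```python
SPEED_OF_LIGHT = 3.0e8   # metres per second (approximate)
SPEED_OF_SOUND = 343.0   # metres per second in air at about 20 °C (approximate)

def arrival_delays(distance_m: float):
    """Return (light_delay_s, sound_delay_s) for a flash at the given distance."""
    return distance_m / SPEED_OF_LIGHT, distance_m / SPEED_OF_SOUND

light, sound = arrival_delays(1_000)   # a storm 1 km away
print(f"light: {light * 1e6:.1f} microseconds, thunder: {sound:.1f} seconds")
# The flash arrives in roughly 3.3 microseconds; the thunder follows about 2.9 seconds later,
# which is why counting seconds between flash and rumble gives a rough distance estimate.
```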
<urn:uuid:253c2b96-101b-44d4-a2c6-effa3c620c75>
CC-MAIN-2022-40
https://www.knowledgepublisher.com/article/647/everyday-science-quiz-questions-answers-part-3.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00159.warc.gz
en
0.936914
1,417
3.171875
3
NAT (Network Address Translation) is an IPv4 tool that is not used in most IPv6 deployments. This has caused some users to ask whether IPv6 is as secure as IPv4. What is NAT? NAT is a tool that allows multiple computers behind an Internet connection to share the single address of that connection. Thus, if I have thirty computers in my office, they can all share a single connection. On the Internet, all the traffic comes from the address of the connection; inside the local network, each computer has an address that is unique on that LAN. The mechanism is simple to implement and is found in most routers, including home and small office routers. NAT was never intended as a security feature, per se! It was designed and deployed to "address" the diminishing number of IPv4 addresses available. It does have some security value, however: the addresses of the individual computers on a network do not appear on the Internet, so attackers cannot see them directly. Why doesn't IPv6 use NAT? The main reason IPv6 doesn't use NAT is that it doesn't need it! There are many more IPv6 addresses than IPv4 addresses (340,282,366,920,938,463,463,374,607,431,768,211,456 vs 4,294,967,296, though not all in each group are usable!) and thus every device can have a unique one. There is a specification for NAT with IPv6, but it is seldom used. One issue with not using NAT is that if a computer uses a consistent address over the Internet, it can be tracked and potentially identified. Fortunately, there is a fix for that. The fix is to use a temporary address that changes every so often. In that way, observers on the Internet cannot see and track a specific address. The first part of the temporary address will reflect the Internet provider and organization, and the last part will change. How big each of those parts is will depend on the size of the organization. In my case, the last 64 bits of the address change. The Windows tool ipconfig shows the addresses a device uses. Here is part of the output from my desktop PC (with the prefix obscured):
IPv6 Address. . . . . . . . . . . : xxxxxxxxxxxxxxxxxxxx:791a:da6a:a3b8:5759
Temporary IPv6 Address. . . . . . : xxxxxxxxxxxxxxxxxxxx:25b3:a2b4:167d:da10
Link-local IPv6 Address . . . . . : fe80::791a:da6a:a3b8:5759%6
Note the temporary v6 address. The third line is my address on the local network. Here is what the site https://whatismyv6.com/ shows as my address: Note that it is my temporary address. So is a temporary address enough? No. A temporary address provides some anonymity, but will not protect your computer. To protect your computer you need two things: a filtering firewall (as in a corporate firewall or the firewall in your home router), and a personal firewall on your computer (whether the one in Windows or a third party such as ZoneAlarm). Please use both: this is called "defense in depth" and is a core principle of good physical or network security. You can test to see whether the firewall is blocking ports by using an online scanning tool. I have used https://www.ipv6scanner.com/cgi-bin/main.py, but I'm not endorsing it, per se. So even though NAT is not generally a part of an IPv6 deployment, the available features provide the anonymity NAT did. Good practices such as a network and personal firewall continue to be necessary to protect local devices. To your safe computing,
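As an illustration of how such temporary addresses are built (operating systems do this themselves via IPv6 privacy extensions; this Python sketch only mimics the idea, using the reserved documentation prefix 2001:db8::/64 rather than any real allocation), the provider/organization prefix stays fixed while the last 64 bits are randomized:

```python
import ipaddress
import secrets

# A made-up /64 prefix standing in for whatever the ISP/organization assigns.
PREFIX = ipaddress.IPv6Network("2001:db8:1234:5678::/64")

def temporary_address(prefix: ipaddress.IPv6Network) -> ipaddress.IPv6Address:
    """Combine the fixed /64 prefix with a fresh random 64-bit interface identifier."""
    random_iid = secrets.randbits(64)          # the part that changes over time
    return prefix.network_address + random_iid

# Each call yields a different host portion under the same prefix,
# which is what makes long-term tracking of one specific address harder.
for _ in range(3):
    print(temporary_address(PREFIX))
```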
<urn:uuid:76c24705-270f-4a3e-b1e5-07ab8647955a>
CC-MAIN-2022-40
https://www.learningtree.ca/blog/is-ipv6-less-secure-without-nat/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335365.63/warc/CC-MAIN-20220929194230-20220929224230-00159.warc.gz
en
0.935616
803
3.75
4
Art Nouveau was an internationally popular style of art and architecture that reached its peak of popularity around the turn of the 20th century, roughly 1895 through 1905. It was a short-lived movement in its prime, but it remains very prominent over a century later. It features organic forms, curving shapes, features that led to the term "biomorphic whiplash art", and women represented in a very stylized fashion. The overall design included fonts for posters and signs, furniture, fixtures, and structural elements within homes and other buildings. The movement seems to have gained its popularity largely from one advertising poster displayed along the streets of Paris in 1894. Alphonse Mucha, a Czech graphic artist, had created a poster for the play Gismonda starring Sarah Bernhardt. Bernhardt was extremely popular. She was a stage actress in an era before the Internet, television, movies, and even widespread photography, but she has been described as "the most famous actress the world has ever known". Sorry about that, Angelina Jolie... Bernhardt's enormous fame made Gismonda an immediate success. She also brought a great deal of attention to the poster itself, making it popular on its own. Much like a very pre-1970s Farrah Fawcett, if Farrah had been selling swimsuits. The general style of the poster, featuring organic and other plant-based forms, quickly caught on. It was first called Style Mucha after its primary developer, then called Art Nouveau. The distinctive style continued to be used in marketing and sales, in advertising posters and in the design of commercial buildings like department stores. For example, it was called Style Liberty for a while because of its use in London's Liberty & Company department store. It was also used in both private and municipal architecture. Paris' Métro stations are famously designed in this style by Hector Guimard. Salvador Dalí spoke of "The terrifying and edible beauty of Art Nouveau Architecture." For some reason, the red lamps over the Métro entrances always make me think of the invaders in the original War of the Worlds movie. Maybe that's what terrified Dalí. Great minds think alike! Brussels has a lot of Art Nouveau architecture. The blue crosses on the below map show the approximate locations of just some of these interesting buildings. For more detail, get a quality map of Brussels and refer to one of these lists: Art Nouveau on Two Wheels List of Victor Horta's designs As you can see, many of the Art Nouveau buildings are clustered in the Ixelles district, southwest of the city center and outside Petite Ceinture, the inner ring road. You will also see why the central old town, where the larger labels Brussel (Flemish) and Bruxelles (French) appear, is known as "the Lower Town". Ixelles is significantly higher. Unlike the grim area around Gare du Midi, and east of there and south into Saint-Gilles, Ixelles is very nice. There are plenty of nice cafes and upscale shops. The major street (in yellow on the above map) running to the southeast through the label "Bruxelles-Louise" is Avenue Louise. A tram line runs down the grassy central median strip, making it easy to get there from the center. But it's not a difficult walk, even from the Lower Town. A nice detailed map is available at the information booth in the Gare du Midi, Brussels' main train station and the point of arrival for international trains. Victor Horta and Hôtel Tassel Victor Horta was the primary Art Nouveau architect. 
The style had spread from posters to decorative arts (that is, furniture and interior decoration). His Hôtel Tassel, at no. 6 rue Paul-Emile Janson, was the first application of Art Nouveau style to architecture. Remember, "hôtel" means "town house" or "mansion". To find it with GPS: 50° 49' 40" N 4° 21' 43.3" E. Horta was born in 1861 in Ghent. By the time he was twelve, he had helped his uncle on a construction site and become interested in architecture. He studied in Ghent and then moved to Paris, where he was exposed to the emerging impressionist style of art and developments in iron and glass working. He returned to Belgium in 1880, settling in Brussels and continuing his studies of fine arts. He had begun working on his own by 1885, when he designed three houses. The professor and scientist Emile Tassel commissioned a Horta design in 1893, and the latest Art Nouveau influences appeared in the private residence Hôtel Tassel. This was a year before Mucha's famous and influential poster. The relatively plain stone façade fits in with the neighboring buildings, and the elaborate and ornate designs are hidden inside. You can see just a little — the organically curving ironwork and the etched glass in the curved bank of windows. Unfortunately, a law firm occupies the building and you cannot see the interior. Horta designed all the interior details — stained glass windows and panels, mosaic floors, handrails and decorative woodwork, and even the door handles and electrical fixtures. Victor Horta's Hôtel Solvay In 1898 Horta was commissioned to design a large town house along the broad and tree-lined Avenue Louise. The project was for Armand Solvay, the son of a wealthy chemist and industrialist. The resulting Hôtel Solvay is at no. 224 Avenue Louise. To find it with GPS: 50° 49' 34.75" N 4° 21' 55" E. The client was wealthy enough that money was no object for the materials. Various tropical woods were used in addition to an array of decorative stones. The Belgian pointillist painter Théo van Rysselberghe was enlisted to decorate the staircase. The Hôtel Solvay is privately owned. The good news about this is that the owners have carefully restored and preserved the building. The bad news is that it can only be visited by appointments that seem to be difficult to arrange. Hôtel Solvay, along with Hôtel Tassel and two other Horta buildings, collectively makes up a UNESCO World Heritage site. Another Horta design, originally the Grand Magasin Waucquez, at 20 Rue du Sable, is now the Centre Belge de la Bande Dessinée. Horta designed over forty buildings. Some were demolished in the name of progress between the end of World War II and the 1960s, but many remain. A list of Victor Horta's designs is available. Paul Hankar's Hôtel Ciamberlani and Maison Hankar Paul Hankar met Victor Horta when they were studying at the Academie des Beaux-Arts in Brussels. They shared an interest in forged iron, which would feature in the later designs of both. Hankar is another of the primary Art Nouveau architects. His Hôtel Ciamberlani, built in 1897, is at no. 48 rue Defacqz. Yes, "Defacqz." The Hôtel Ciamberlani façade features sgraffito, a technique of wall decoration. Maison Hankar is, obviousment, Paul Hankar's private residence and studio. It's at 71 rue Defacqz. Albert Rosenboom and 10 Rue Faider Here we are looking down Rue Faider. The buildings are built in a variety of similar and complementary styles, but notice the second one from the right with the white brick façade and curving bay of windows. 
It was built by Albert Rosenboom, and it has some distinctive Art Nouveau details. See the organically curving details, the curving ironwork, and the top floor sgraffito on this building at 10 rue Faider. This closer view better shows the sgraffito around the top floor windows. The smoothly curving structural details are made from carved stone components set into brick walls. Near Place Royale The Old England building was designed in 1899 by Paul Saintenoy as a department store. It sat empty for several years, but has now been opened as le Musée des Instruments de Musique. This building is at the top of the open-air staircase of the Mont des Arts, on the way from Gare Central to Place Royale and the art museums. Saintenoy was the general secretary of the Royal Society of Archaeology for a time. He began teaching at the Academie des Beaux-Arts in Brussels in 1910. At the end of World War I in 1918, he was appointed a member of the Royal Commission of Monuments and Sites and played a significant role in reconstructing Belgium. Along Rue Royale I was staying at Hostel Jacques Brel, in the northeast corner of the innermost ring and about a 15 to 20 minute walk from the Grand Place: 30 Rue de la Sablonniere, Brussels Walking along the Rue Royale between there and the Lower Town or the cluster of art museums, I passed a place that wasn't really of the Art Nouveau style, but was unusual and caught my eye. It had an interesting and ornate windowed room extending slightly out from the building façade. Another interesting place just two doors further down Rue Royale is Armes Binet, E.J. Binet et Fils, at no. 17 Rue Royale. It's the building with the off-white awnings. "Derrière une belle façade datant de la fin du XIX siècle (1876) est abritée l'une des plus belles collections d'armes de tous calibres." — behind a beautiful late-19th-century (1876) façade is housed one of the finest collections of arms of all calibres. It's like something from a Frederick Forsyth novel: an old-school Belgian arms merchant. Well, maybe not like a Forsyth novel. More like an amalgamation of Fitzgerald and Hemingway. E.J. Binet et Fils outfits the very high-end safari market. Guns, clothes, and assorted safari accoutrements. One of the many hunting rifles in the window, with a "melted" style muzzle brake like the others, was a .416 Weatherby Magnum rifle priced at a cool 3875€. I was intrigued, but it did not seem to be the sort of place where you simply walked in and browsed. In fact, I got the idea that you only got in with a prior appointment. Victor Horta's last work was the rather ugly Gare Central. He designed it when Art Nouveau had been replaced by Art Deco. His 1912 design was kept until the station was finally built in 1952. It doesn't really seem that it was worth the wait, as this was not his best work.
<urn:uuid:5570a575-3c83-4c9f-8811-fa0f532e9410>
CC-MAIN-2022-40
https://cromwell-intl.com/travel/belgium/art-nouveau-architecture/Index.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337322.29/warc/CC-MAIN-20221002115028-20221002145028-00159.warc.gz
en
0.960883
2,360
2.84375
3
The Difference Between a Shared and a Dedicated IP Address Every domain name that is home to a website has an IP address assigned to it. An IP address is the real address of a website. Domain names were developed because it is difficult to remember long IP numbers like 22.214.171.124. A shared IP is an IP address that is used for multiple sites. A shared IP can serve most or all sites on a web server. Because the IP address of a website is used for multiple sites on the server, the actions of one site owner can affect the IP reputation of everyone sharing that IP on the server. For example, if an IP address is blacklisted for sending spam email, this will blacklist mail for all sites using that shared IP address. As your partner in hosting, we work hard to prevent and to resolve these issues immediately and take corrective action against anyone who abuses the system. However, the administrator of a blacklist is the sole authority to decide when to de-list an offending IP. Many site owners are able to host their site on a shared IP without ever being affected by another site hosted on the server. If mail is critical and you use our mail services, we recommend a dedicated IP address so that your reputation is affected only by the actions of users on your domain. A dedicated IP is an IP address that is assigned to one site. Large websites or e-commerce sites often use a dedicated IP address to have full control over the reputation of their IP. E-commerce sites must use SSL, and therefore a dedicated IP address is often recommended, although not required.
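As a quick, hedged illustration of the shared-versus-dedicated distinction, the Python sketch below resolves a few domains and groups them by IP address. The domain names are placeholders, the lookup needs network access, and a single matching A record is only a hint that an IP is shared, since CDNs, multiple A records, and load balancing complicate the picture:

```python
import socket
from collections import defaultdict

# Placeholder domains -- substitute the sites you are curious about.
domains = ["example.com", "example.net", "example.org"]

ip_to_domains = defaultdict(list)
for name in domains:
    try:
        ip = socket.gethostbyname(name)      # resolve one A record for the domain
        ip_to_domains[ip].append(name)
    except socket.gaierror:
        print(f"could not resolve {name}")

for ip, names in ip_to_domains.items():
    kind = "shared" if len(names) > 1 else "possibly dedicated"
    print(f"{ip}: {', '.join(names)} ({kind})")
```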
PoE vs PoE+ vs PoE++ Switch: How to Choose?

PoE, or Power over Ethernet, is a proven time- and money-saving technology that delivers both data and power safely over the same Ethernet cable in local area networks (LANs). Looking at the current market, you will find three Power over Ethernet switch types: PoE switches, PoE+ switches, and PoE++ switches. How much do you know about these three switch types? What distinguishes PoE vs PoE+ vs PoE++? And how do you make a proper selection among them?

What Is PoE and a PoE Switch?

PoE technology was defined by the IEEE 802.3af standard in 2003. Under this standard, PoE allows a PD (powered device) such as a VoIP phone to receive up to 12.95W, using just two of the four twisted pairs in Ethernet cabling. A PoE switch is an application of PoE technology: functioning as PSE (power sourcing equipment), it supplies power to PDs over Ethernet cables while providing network connectivity. Generally, an 802.3af switch supports a maximum power consumption of 15.4W per PoE port, with a port voltage range between 44V and 57V; the voltage range at the connected PDs is 37V to 57V.

What Is PoE+ and a PoE+ Switch?

PoE+ technology (the IEEE 802.3at standard), published in 2009, is an upgrade of PoE. PDs on the market tend to require more power; wireless access points, for example, may need more than 12.95W to work normally. PoE+ addresses this by supporting higher power consumption. Like a PoE switch, a PoE+ switch supplies power over two pairs, but it adds a power class able to deliver up to 25.5W to a PD, with a device voltage range of 42.5V to 57V. The maximum power delivered by each port of a PoE+ switch is 30W, with a port voltage range of 50V to 57V.

What Is PoE++ and a PoE++ Switch?

To deliver still more power to a broader range of devices, the IEEE 802.3 working group again upgraded the standard, publishing PoE++ (IEEE 802.3bt) in 2018. PoE++ comes in two types: Type 3 and Type 4. Type 3 uses two or all four twisted pairs in a copper cable to deliver up to 51W to a PD; Type 4 delivers up to 71W to a PD over four twisted pairs. Cisco's proprietary UPoE (Universal Power over Ethernet) works much like PoE++ Type 3, extending the IEEE PoE+ standard to double the power delivered to a PD to 51 watts; in some cases UPoE is also called PoE++. As an upgrade over PoE and PoE+ switches, a PoE++ switch can deliver up to 60W per port under Type 3 and up to 100W per port under Type 4.

PoE vs. PoE+ vs. PoE++ Switch: Which to Choose?

Based on the introduction above, the reference table below summarizes the specifications of PoE vs PoE+ vs PoE++, which may help when choosing a PoE switch for different requirements.

| | PoE | PoE+ | PoE++ (Type 3) | PoE++ (Type 4) |
|---|---|---|---|---|
| IEEE standard | IEEE 802.3af | IEEE 802.3at | IEEE 802.3bt | IEEE 802.3bt |
| PoE type | Type 1 | Type 2 | Type 3 | Type 4 |
| Max. power per switch port | 15.4W | 30W | 60W | 100W |
| Switch port voltage range | 44-57V | 50-57V | 50-57V | 52-57V |
| Max. power to powered device | 12.95W | 25.5W | 51W | 71W |
| Voltage range at powered device | 37-57V | 42.5-57V | 42.5-57V | 41.1-57V |
| Twisted pairs used | 2-pair | 2-pair | 4-pair | 4-pair |
| Supported cables | Cat3 or better | Cat5 or better | Cat5 or better | Cat5 or better |

Note that these figures are theoretical maximums. In practice, PoE switches often oversubscribe the total power capacity of a switch with more ports, because many devices draw less than their maximum power. For instance, a switch with all PoE++ Type 4 ports does not mean you will run every port at maximum load 24×7. Consequently, you need to calculate the power requirements of all the powered devices you plan to connect to the switch and select corresponding patch cables for your PoE design.

Evidently, the major differences among PoE vs. PoE+ vs. PoE++ switches lie in their working mode and power supply, which is reflected in their applications. An 802.3af switch is usually used for devices that require less than 15.4W, such as VoIP phones, sensors, meters, wireless access points with two antennas, and simple, static surveillance cameras that can't pan, tilt or zoom. A PoE+ switch supports devices such as more complex surveillance cameras that do pan, tilt or zoom, wireless access points with six antennas, and video IP phones. With a higher power budget, a PoE++ Type 3 switch can support devices such as video conferencing system components and building management devices, and a PoE++ Type 4 switch can support devices such as laptops and TVs.

If your data center only requires low power levels, you may stick with PoE switches. However, if you want to build a more robust, high-performance network with many varied devices, and don't want to worry about port limitations, then PoE+ or PoE++ switches are the right choice. When building infrastructure with higher requirements or planning upgrades, it is wise to look at PoE+ or PoE++ technologies. That said, not everyone needs a full upgrade: if your current PoE solution is adequate and fits your demands, it may be reasonable to retain your existing PoE network design.

Growing power requirements have driven PoE technology to evolve from PoE to PoE+ and on to PoE++, and PoE switches have evolved accordingly. This article sheds light on the differences between PoE vs. PoE+ vs. PoE++ switches as well as their applications. We hope it gives you some inspiration for choosing a suitable PoE network switch.
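To act on the advice to calculate the power requirements of all planned powered devices, the sketch below totals the worst-case draw of a device list and checks it against a switch's PoE power budget. The device wattages and the 370W budget are assumptions for illustration, not figures for any particular product.

```python
# Per-port maximums from the table above (theoretical limits per IEEE standard).
PORT_LIMITS_W = {"PoE": 15.4, "PoE+": 30.0, "PoE++ Type 3": 60.0, "PoE++ Type 4": 100.0}

def check_budget(devices, switch_budget_w):
    """devices: list of (name, watts) pairs; returns total draw and whether it fits."""
    total = sum(watts for _, watts in devices)
    return total, total <= switch_budget_w

if __name__ == "__main__":
    # Hypothetical deployment: the draws below are assumptions for illustration only.
    devices = [
        ("VoIP phone", 6.0),
        ("PTZ surveillance camera", 22.0),
        ("Wireless AP (6 antennas)", 25.0),
        ("Video conferencing codec", 45.0),
    ]
    budget = 370.0  # assumed total PoE budget of the switch, in watts
    total, fits = check_budget(devices, budget)
    print(f"Total worst-case draw: {total:.1f} W of {budget:.1f} W budget")
    print("Budget OK" if fits else "Over budget: choose a switch with a larger PoE budget")
    for name, watts in devices:
        # Lowest port class whose per-port limit covers this device's draw.
        tier = next((t for t, lim in PORT_LIMITS_W.items() if watts <= lim), None)
        print(f"  {name}: {watts} W -> lowest sufficient port class: {tier}")
```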
No Phones Allowed in New Framework for Self-Driving Cars The U.K. Government is preparing for a self-driving revolution on Britain’s roads – including allowing passengers to watch TV while being transported by autonomous vehicles. Proposed changes to the nation’s Highway Code, which outlines road safety and vehicle rules, will permit viewing content on built-in screens, although human drivers have to be ready to take control of vehicles when prompted, such as when approaching motorway exits. Using mobile phones in self-driving mode will still be illegal, as research shows they pose a greater risk. The edicts concerning TV and smartphone use are among several alterations to the Highway Code that are scheduled to come into force in summer, as the U.K. government continues to evolve its plans for a full legal framework for self-driving vehicles. The new rules also specify that insurance companies, rather than individual motorists, will be financially liable for accidents in self-driving cars. “With self-driving technology rapidly developing across the globe, Britain’s first vehicles approved for self-driving could be ready for use later this year,” according to the Dept. for Transport (DfT). “Vehicles will undergo rigorous testing and only be approved as self-driving when they have met stringent standards.” Presently, no self-driving cars are allowed on U.K. roads. The intention is to have the full legal framework in place for 2025, with the initial regulations identified in the Highway Code considered an interim measure. The measures follow a public consultation by the government which found most respondents supported moves to clarify drivers’ responsibilities in self-driving vehicles. The introduction of the technology is likely to begin with vehicles traveling at slow speeds on motorways, such as in heavy traffic, and is the logical next step following the government’s announcement last April that hands-free driving in cars fitted with automated lane keeping system (ALKS) technology would be allowed. This is considered “assistive” tech, according to the DfT, which means drivers must “always remain in control and responsible.” The recognition that individual motorists should not be held responsible for accidents is also an important breakthrough and mirrors the recommendations of the Law Commissions of England, Scotland and Wales, which concluded in a report published earlier this year that, “The person in the driving seat would no longer be a driver but a “user-in-charge.” A user-in-charge cannot be prosecuted for offences which arise directly from the driving task. They would have immunity from a wide range of offences.” “This is a major milestone in our safe introduction of self-driving vehicles, which will revolutionise the way we travel, making our future journeys greener, safer and more reliable,” said British Transport Minister Trudy Harrison. “This exciting technology is developing at pace right here in Great Britain and we’re ensuring we have strong foundations in place for drivers when it takes to our roads.”
Intelligent IVR systems are made up of several components. The key component is the voice recognition module, which in modern systems is often capable of interpreting voice input accurately based on grammar rules and semantics. Directly linked to the voice recognition module are dialogue flow interpreters, which convert the voice input into computer commands. Text-to-speech synthesis systems are available for generating voice portal output. These convert texts into spoken information with the help of computer-generated voices. In the background, an application with the necessary computer logic ensures integration of the data and processes in the voice portal. Voice portals facilitate very natural, direct communication with computer systems without the need to input data manually or look at the screen. The time-consuming exercise of familiarising oneself with the user interface or cryptic commands can be avoided. Another advantage is the fact that the user’s device only needs to be low spec. In general, a conventional telephone without any additional functions is sufficient. Moreover, voice portals can be set up in such a way that they are multilingual and understand different user languages. Voice portals are often used in call centres for answering calls from a central office and pre-classifying queries from callers. In large companies, the central fault hotline is also often equipped with a voice portal. Voice portals permit callers’ queries to be specified more precisely, for example, and then allocated to a suitable member of staff for handling. With the help of Yes/No answers, the caller is navigated through a clearly structured option menu.
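As a rough sketch of the dialogue flow interpreter and the clearly structured Yes/No option menu described above, the snippet below walks a caller through a small decision tree until a routing decision is reached. The menu content is invented, and keyboard input stands in for the voice recognition module.

```python
# A tiny yes/no menu tree: internal nodes are questions, leaves are routing decisions.
MENU = {
    "question": "Is your query about an existing order?",
    "yes": {
        "question": "Is the order late?",
        "yes": "Route to: delivery support team",
        "no": "Route to: order changes team",
    },
    "no": "Route to: general enquiries",
}

def run_dialogue(node, answer_fn):
    """Walk the tree, asking questions until a leaf (a routing string) is reached."""
    while isinstance(node, dict):
        answer = answer_fn(node["question"])  # expected to return 'yes' or 'no'
        node = node["yes"] if answer == "yes" else node["no"]
    return node

if __name__ == "__main__":
    def keyboard_answers(question):
        reply = input(f"{question} (yes/no): ").strip().lower()
        return "yes" if reply.startswith("y") else "no"
    print(run_dialogue(MENU, keyboard_answers))
```

In a real voice portal, the answer function would be backed by the speech recognizer, and the prompts would be rendered by the text-to-speech component rather than printed.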
Malware is the term used to describe "malicious software" and can be applied to a wide range of destructive applications. This article discusses the most common categories of malware and how they are used to exploit or destroy systems.

Types of Malware

Viruses - We usually think of malware and viruses as being the same thing, but that's not the case. Rather, a virus is a single category of malware. Viruses infect files and spread by infecting other files (much as a biological virus does). In most cases, viruses destroy data and corrupt files. Traditional viruses have little use in modern cybercrime and are one of the least common types of malware today - except in ransomware, which we'll discuss soon.

Worms - A worm is similar to a virus, except that worms self-replicate and spread across networks. Worms can infect entire networks from a single file download or remote vulnerability exploit and can devastate business networks. Worms often use other exploits to spread through networks, quickly compromising targets on a large scale. Today, we often see worms (or worm-like actions) used in more advanced ransomware and banking attacks.

Trojans - Trojans are pieces of software that appear to be legitimate but in fact hide other malware. Users are commonly tricked into downloading software, often via email, and executing the program, which then infects their system. Trojans are often equipped with other spyware such as remote access tools, allowing attackers to steal data or otherwise remotely access the victim's computer. These are called Remote Access Trojans - or RATs.

Ransomware - We discuss ransomware at length in another article; however, ransomware has become its own class of malware due to its use of multiple attack methods as well as its prevalence. Ransomware often infects networks through file downloads, much like a Trojan, and usually spreads across networks as a worm would. Ransomware also acts like a virus, in that system files are modified/encrypted and rendered useless without the decryption key.

Adware & Scareware - Adware is mostly a nuisance, but it can certainly affect business operations. Adware is software that displays advertising and other sales material in your browser, as popups, or as applications. It is usually installed like any other malware or as a browser plugin. Attackers also use this technique to trick users into believing their systems are hacked and that the only way to fix it is by purchasing more software (often with more malware) - this technique is called scareware.

Spyware - Spyware is malware that attempts to steal your data or provide remote access to your networks. This is similar to the discussion about Trojans: Trojans often include spyware (e.g. remote access technologies) to increase their malicious usefulness. Spyware tools can include keyloggers, remote access tools, web scrapers, and much more.

Malware is simply computer software that has been designed with malicious intent. Like legitimate software, there are many very similar applications in the marketplace, and there are different versions of legitimate software with varying features and capabilities. The same applies to malware, and this is what we call "variants." This is one of the challenges with malware detection: it's not possible to get a single "signature" that identifies all malware, even malware within the same version or variant. Some modern malware can also rebuild itself, thereby giving itself a new signature and making detection extremely difficult.
This is called polymorphic malware. Identifying the specific variant of malware is often required to determine how to remove the infection and to assess the damage that could have been done. This is part of what forensic analysts do when cleaning up after malware attacks.

Preventing malware infections is not an easy task. In fact, even security professionals fall victim to malware on occasion. However, there are things you can do to beat the odds of a successful malware attack affecting your network.

Modern Anti-malware Software - Seek out and install modern anti-malware software - not the one that came pre-installed with your computer or the free versions. For business, we recommend using software that incorporates "application white-listing" like CarbonBlack and Panda Adaptive Defense. We also recommend considering software that uses advanced AI and machine learning technologies, such as Cylance. (Note: There are other worthy vendors on the market.)

Safe Downloading Practice - Only download software from known sources. Never download software from suspicious websites or emails from people you don't know. While this may seem like common sense, downloading malware from email is still one of the biggest attack vectors for malware infections.

Keep Software Updated - Malware often uses out-of-date and vulnerable software to spread. It's important to keep your operating system, browsers, office tools, and other software updated.

Don't Expose Risky Services - Do not leave services such as Microsoft Remote Desktop or VNC exposed to the open internet. These services are scanned for continuously by attackers and used to infect networks.

Preventing malware is one of the toughest challenges in cybersecurity. Continue reading about ransomware to learn more about how to protect your business. As always, please reach out to us if you have any questions!
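To illustrate why a single "signature" cannot identify all malware, here is a minimal sketch of naive signature matching: hash every file and compare it against a list of known-bad hashes. The known-bad list below is a placeholder, not a real threat feed. Polymorphic malware defeats exactly this approach, since any change to the file produces a new hash, which is why modern tools add application white-listing, behavioral analysis, and machine learning on top.

```python
import hashlib
from pathlib import Path

# Placeholder "known bad" SHA-256 values; real lists come from threat-intelligence feeds.
KNOWN_BAD_SHA256 = {
    "0" * 64,  # dummy entry for illustration only
}

def sha256_of(path, chunk_size=1 << 20):
    """Hash a file in chunks so large files don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(directory):
    """Flag files whose hash matches a known-bad signature."""
    for path in Path(directory).rglob("*"):
        if path.is_file():
            digest = sha256_of(path)
            status = "MATCH - quarantine" if digest in KNOWN_BAD_SHA256 else "clean"
            print(f"{path}: {status}")

if __name__ == "__main__":
    scan(".")
```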
Most Commented Posts Virtual and augmented reality (VR and AR) – collectively known as extended reality (XR) – can find game-changing use in teaching and training applications, with the technology enabling professionals to experience dangerous situations that no coaches would dare to expose trainees to in the real world. Other benefits of XR for training applications include cost advantages, repeatability of exercises and remote use of the technology. Unfortunately, many certification bodies and institutions are still behind the curve on XR technology or hesitant to accept virtual environments as valid teaching and testing grounds. At South by Southwest in March 2022 in Texas, US, a stage discussion on topics related to the metaverse, From buzz to reality: metaverse now and tomorrow, looked at regulatory challenges and opportunities. Geoff Bund, head of software partnerships for headset manufacturer Varjo; Vesa Koivumaa, head of growth for industrial-equipment provider Wärtsilä; Miikka Rosendahl, founder and CEO for virtual-world creator Zoan; and Leslie Shannon, head of ecosystem and trend scouting for Nokia all provided perspectives of interface manufacturers, industrial users, metaverse designers, and infrastructure providers. (Understanding the metaverse – a discussion at SXSW has more information about the participants’ companies.) Nokia’s Shannon highlighted the advantages that VR has for training. Training can be conducted remotely, and beginners can handle and understand difficult-to-use, even dangerous, equipment in challenging situations. VR applications also enable coaches to measure a wide range of metrics of the trainees’ performances, enabling very granular measurements, she said, adding: “You can tell if somebody knows what they were doing.” Labster, for instance, is a developer of virtual labs to help professionals train the next generation of scientists. The market for these types of applications of VR has been growing rapidly for years. Regulatory issues do exist, however. Once trainees finish the training, many cannot get certified, because VR is not yet considered a credible approach to learning real-world skills, Shannon said, adding: “We have to get the regulators to catch up.” Varjo’s Bund confirmed such challenges, saying his company is working in a highly regulated environment for military, industrial and medical applications. He said that it took Varjo two and a half years to get its hardware on a certified helicopter training, commenting “we’re still catching up” in the regulatory world to adopt and trust the use of VR. Wärtsilä has been in the cloud with its simulation system already since 2014, adopting virtual applications at an early stage. But Vesa Koivumaa cautions that institutions and certification bodies are adopting technologies very slowly. So, while Wärtsilä – or industrial players generally – can be an early adopter, both government and industry organisations need to accept these new technologies and approaches to realise the tremendous potential AR and VR offer. The institutional environment and the lack of a commonly accepted regulatory framework has obvious implications for the adoption and diffusion path of metaverse-related applications, adding to the many legal minefields. Real-world applications of XR Not surprisingly, the military has been looking at the use of AR and VR for quite some time and is driving adoption. 
In March 2021, the US Army awarded Microsoft with a $21bn contract for AR headsets, systems and training to support soldiers with battlefield maps and easy access to intelligence by overlaying information onto the field of vision. More recently, in May 2022, a pair of fighter pilots used their AR headsets to perform a refueling-maneuver exercise. The US Air Force also leverages the technology so that pilots can practice dogfights against virtual enemies. Meanwhile, the US Navy has changed its curriculum for flight training for the first time in half a century in 2021. Pilot training will now make use of VR and AR. The goal is not only to provide better practice, but to shorten the time required to train new pilots. Certification progress for civil aviation is also moving ahead. In April 2022, the European Union Aviation Safety Agency (EASA) approved the use of a VR-based training device for flight simulation. VRM Switzerland developed a device that will help rotorcraft pilots to practice risky manoeuvres in the safety of a virtual environment. In May 2022, the US Navy signed a contract for 14 mixed-reality systems from Aechelon Technology. The systems, which integrate Varjo headsets, will find use in the training programme for the tilt-rotor helicopters. Bund mentions other considerations that will affect adoption. This year, the OpenXR standard is finally experiencing widespread acceptance, but Bund cautions that there many challenges and that Varjo’s partners still require substantial assistance when creating content. He discussed blockchain technologies, standards and open source as relevant concepts for the development of the metaverse, adding: “There’s currently a window where we could guide XR into a very positive direction.” Zoan’s Miikka Rosendahl also believes that blockchain applications and ownership of virtual objects matter, highlighting past efforts by Nokia in Second Life that enabled ownership of objects. Ownership matters because individuals in the metaverse will use virtual land and objects to create services and applications. Without ownership many applications will be difficult to establish – and blockchain technology enables users to establish and transfer ownership. Zoan was the first company to launch an NFT (non-fungible token)/initial coin offering (ICO) from Finland, and because of the regulatory environment there, the offer had to take many legal issues into considerations that other companies that launched from less regulated regions could ignore. The issue of standards and compatibility are crucial adoption considerations that will require closer collaboration among industry players. For financial applications, such as cryptocurrencies and ownership considerations, legislative bodies will have to set frameworks. The process will take time and fast resolution of the many issues should not be expected. But progress is made here too. In the US, it was not even clear which institution will have to establish related regulations until recently. Now, it appears that the Commodity Futures Trading Commission (CFTC) will be in charge rather than – as many industry players had expected – the Securities and Exchange Commission (SEC). The smaller size of the CFTC has caused worry that the task could overwhelm officials, resulting in delays of legislative efforts. For the time being, many regulatory issues and uncertainties will slow the adoption of XR and metaverse-related and -enabling technologies. 
But in the long run, legal certainty will play an important role in driving these technologies across industries and deep into a wide range of applications.
While the difference between cyber-security and cyber-resilience might not be obvious to some, the implications of failing to address both are significant. In simple terms, cyber-security describes an organization's ability to protect itself from security threats such as malware, phishing, DDoS, SQL injection and insider threats. Cyber-resilience, on the other hand, focuses more on damage limitation and remediation, whether that be damage to an organization's systems, finances or reputation.

Of the two, we tend to focus more on cyber-security than cyber-resilience. Most large organizations will have policies in place, such as password policies, remote access policies, acceptable use policies, and email and communication policies. They will also utilize a number of threat detection technologies, such as anti-virus software, firewalls, intrusion prevention systems, and solutions that can detect and respond to anomalous user activity. However, relatively few have a tried and tested incident response plan (IRP) in place. This is not surprising, as "prevention is better than cure". However, given that organizations spend on average $3.86 million recovering from security incidents, the latter is becoming increasingly important.

We must also accept that no cyber-security strategy is perfect. Even if your policies have been carefully considered and you have the latest and greatest threat detection technologies that money can buy, a significant number of security incidents are caused by human error, for which there is no simple fix. The threat landscape is constantly evolving, and social engineering techniques are becoming increasingly sophisticated – leveraging technologies such as AI to better impersonate company executives. As they say, it's not a question of if, but when, a security incident will unfold, and we need a robust damage limitation strategy in place to help us sail through the storm without sinking.

Now that we have a better understanding of the difference between cyber-security and cyber-resilience, let's take a closer look at how they work in practice.

What is Cyber-Security?

In order to have an effective cyber-security strategy, you should have at least some of the following in place:
- You should have policies that deal with remote access, internet access, passwords, encryption, BYOD, email and communication, and more.
- All employees should be familiar with the policies, know where to find them, and be trained to comply with them.
- In addition to having a strong password policy in place, you should be using multi-factor authentication whenever possible.
- Patches and updates should be installed on all endpoints in a timely manner – preferably via an automated patch management solution.
- You should have a carefully selected suite of security technologies, which might include AV, SIEM, UBA, DLP, IPS, VPN software, and a commercial-grade firewall. You may also want to install vulnerability scanning or penetration testing software.
- You should be encrypting sensitive data, both at rest and in transit.
- You should be adhering to the Principle of Least Privilege (PoLP), to ensure that employees are granted only the privileges they need to carry out their role.
- You should know exactly what sensitive data you have, and where it is located.
- You should be monitoring all access to privileged accounts and sensitive data, and be able to quickly determine who has access to what data, when, how, and why.
- All public-facing web forms should be thoroughly tested for vulnerabilities to prevent SQL injection and cross-site scripting attacks.
- You should be controlling and monitoring physical access to your data using ID badges, locks, alarms, CCTV cameras, etc.

What is Cyber-Resilience?

The first thing you will need to do is familiarize yourself with the six stages of incident response: preparation, identification, containment, eradication, recovery and lessons learned. Understanding these steps will help you develop a comprehensive incident response plan (IRP). A complete breakdown of these stages is beyond the scope of this article; however, in order to limit the damage caused by a security incident and recover in a timely manner, you should have clear and accessible documentation relating to the following:
- What needs to be done in the event of a security incident. This might include scanning for vulnerabilities, reviewing event logs, conducting a traffic analysis, restoring backups, and so on.
- The individuals responsible for initiating and executing the incident response plan.
- The protocols for communicating with the relevant stakeholders, authorities, and possibly even the press.
- The protocols for assessing both the impact and potential impact of a security incident. You will need to prioritize the assessment based on your most valuable assets. It's a good idea to use a data discovery and classification solution to ensure that you know exactly what sensitive data you have, and where it is located.
- The data privacy regulations that are relevant to your industry. In order to protect your organization from fines and lawsuits, you need to ensure that you understand the regulations that apply to your industry and have taken the necessary steps to ensure that you are compliant.

Both cyber-security and cyber-resilience require an investment of time, effort and resources. However, when you consider the potential costs associated with disruption to your network and business operations, damage to your reputation, and potential lawsuits and fines, you will find that a failure to make such an investment could end up costing you more in the long term.

Want to find out just how secure and resilient your organization is against security threats? Schedule your free data risk assessment with Lepide.
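As a small illustration of the Principle of Least Privilege item above, the sketch below compares the permissions each account has been granted with the permissions its role actually needs and reports the excess. The roles, permission names, and accounts are invented for illustration; a real audit would pull this data from your directory and access-management systems.

```python
# Hypothetical role definitions: the permissions each role actually needs.
ROLE_NEEDS = {
    "hr_analyst": {"read:hr_records"},
    "backup_operator": {"read:file_shares", "write:backup_store"},
}

def least_privilege_violations(users):
    """Return, per user, any permissions granted beyond what their role requires."""
    findings = {}
    for user, info in users.items():
        needed = ROLE_NEEDS.get(info["role"], set())
        excess = set(info["granted"]) - needed
        if excess:
            findings[user] = sorted(excess)
    return findings

if __name__ == "__main__":
    users = {
        "alice": {"role": "hr_analyst", "granted": {"read:hr_records", "admin:database"}},
        "bob": {"role": "backup_operator", "granted": {"read:file_shares", "write:backup_store"}},
    }
    for user, excess in least_privilege_violations(users).items():
        print(f"{user} holds privileges beyond their role: {', '.join(excess)}")
```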
In many cases, the spoofed emails come from IP addresses outside of the U.S. One out of every eight emails sent from what looked like a government address in October was a phony email sent by hackers and spammers, according to data released Friday and Monday by the cybersecurity firm Proofpoint. About 10 percent of those spoofed emails came from IP addresses outside the U.S., the company said. In the case of one agency that Proofpoint doesn’t name, 80 percent of spoofed emails that appeared to come from the agency actually originated from Russian IP addresses. Digital miscreants may spoof a government email to con the recipient into responding with personal information or clicking a link that contains malware. The report comes as agencies are in the midst of installing new email security protections ordered by the Homeland Security Department known as DMARC. Agencies have until Jan. 16 to install the updated protections, which would prevent hackers from spoofing emails from government domains in most circumstances. The Proofpoint study was based on roughly 70 million messages visible on systems protected by the company and includes federal, state and local government email addresses, the company said. The emails spoofed 296 federal departments and agencies, ranging from extremely large departments to very small ones. DMARC, which stands for Domain-based Message Authentication, Reporting and Conformance, pings a sender’s email domain—such as Commerce.gov—and asks if the sender is legitimate. If the domain says the sender is illegitimate, DMARC can send the email to the recipient’s spam folder or decline to deliver it at all. DMARC must be installed on both a sender’s and a recipient’s email services to work. If it is, the tool will both prevent federal employees from opening phishing emails from spoofed accounts and prevent hackers and spammers from spoofing federal domains to trick people into opening malicious emails. About 85 percent of consumer email inboxes use DMARC, including Google’s Gmail, Microsoft’s Outlook and Yahoo Mail. About 26 percent of agencies were using some level of DMARC protection as of Nov. 6 and 10 percent were using the highest level, which would reject those spoofed emails unread, according to a study by the Global Cyber Alliance. An October report from the cybersecurity firm Agari found that one in four emails sent to Agari customers that purported to be from government addresses was actually phony. That study only included Agari customers who used DMARC protection. NEXT STORY: Email Bot Fights Back Against Scammers
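A domain's DMARC policy is published as a DNS TXT record at _dmarc.<domain>, so you can check whether a domain has reached the strictest "reject" level with a short lookup. The sketch below assumes the third-party dnspython package is installed (older releases expose query() rather than resolve()), and the domains queried are placeholders.

```python
import dns.resolver  # third-party package: dnspython

def dmarc_policy(domain):
    """Fetch the _dmarc TXT record for a domain and return its policy tag, if any."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None  # no DMARC record published
    for rdata in answers:
        record = b"".join(rdata.strings).decode()
        if record.lower().startswith("v=dmarc1"):
            tags = dict(part.strip().split("=", 1) for part in record.split(";") if "=" in part)
            return tags.get("p")  # 'none', 'quarantine', or 'reject'
    return None

if __name__ == "__main__":
    for d in ["example.gov", "example.com"]:  # placeholder domains
        print(d, "->", dmarc_policy(d) or "no DMARC record")
```

A policy of "reject" corresponds to the highest protection level described above, where spoofed mail is refused outright rather than delivered or quarantined.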
IBM's Deep Blue beat chess champion Garry Kasparov in 1997, and in 2011 IBM's Watson won the game show Jeopardy. Shortly after, the IBM Research team was ready to go beyond game playing and began to brainstorm the next feat to challenge an artificial intelligence algorithm. They decided to create an AI algorithm trained in the art of debate.

This past June, a small group of viewers got to see IBM Project Debater's public debut and its first two debates, when it went head-to-head with Israeli debaters Dan Zafrir and Noa Ovadia on increased investment in telemedicine and on government subsidies for space exploration, respectively. By all accounts, IBM Project Debater was a formidable opponent and surprised many with its ability to make human-like arguments. It even swayed more audience members to its position on telemedicine than Zafrir did.

This project is the latest step in IBM Research's goal to build a system "that helps people make evidence-based decisions when answers aren't black-and-white." Debate not only helps us convince others of our opinion; it can help us understand and learn from others' views. By training machines in this way, it is hoped that AI algorithms will one day help humans make important decisions regularly. IBM Project Debater doesn't just search its database of millions of articles from well-known newspapers and magazines—its corpus—it has AI technology that can "work with humans to discover, reason and present new points of view."

The IBM Research team was able to create an algorithm with the ability to:
· Generate an opinion driven by data
· Listen to and understand an opponent, parsing out the critical bits of data from a flowing narrative
· Express the situation and arguments with concise language and complete, human-like sentences

Even though there were visible stumbling blocks in IBM Project Debater's debating skills, for the most part its debut was a resounding success. Since it has gone from theory to actual implementation, albeit with some tweaks still necessary, it makes you wonder what's next.

Avoid blind trust by implementing checks and balances

It might be easy for many people to put too much faith in a machine. Although a machine can cull through data at a rate and depth impossible for humans in a similar timeframe, it's not immune to bias in its findings. The machine is only as good as the information it was fed. If some of the resources it used to develop its argument contained false logic, the algorithm was influenced by that logic in its debate. Being able to search and summarize millions of human-generated articles is no small feat, but Project Debater's prowess isn't representative—yet—of some superintelligence capable of reasoning in a self-generated manner (although that's likely on the horizon). To avoid machines just echoing back erroneous human opinions—or being manipulated by a government or corporation for its own purposes—there needs to be a system of checks and balances to ensure the program's credibility.

IBM's Project Debater work is critical to natural language processing advances

Natural language processing is progressing on many fronts; what Project Debater exhibited, however, was progress with loosely structured language in the form of conversations and articles. An algorithm's ability to put together an argument from small pieces of text supported by facts, while understanding all the facets of an argument (logical, emotional), is a higher-level function.
Project Debater can analyze its opponent’s argument and determine the appropriate response supported by facts. This represents a massive leap from “present information” to “make an argument.” Practical applications of this technology One of the impressive abilities IBM Project Debater exhibited was the combination of AI techniques it relied upon to solve many problems and join them together in a solution. Now that IBM Research succeeded in this first debate, the team needs to determine practical applications of this technology that they can sell. That’s precisely what Arvind Krishna, Director of IBM Research said he plans to do: “Project Debater’s underlying technologies will also be commercialized in IBM Cloud and IBM Watson in the future.” Now that AI has gone beyond playing games to learning the art of persuasion and debate; it has proven that it can handle the “gray area” and nuances of human interaction and not just follow clear-cut rules. “From our perspective, the debate format is the means and not the end. It’s a way to push the technology forward and part of our bigger strategy of mastering language,” said Aya Soffer, who runs IBM Research’s global AI team. It was an impressive debut, and it will be intriguing to see what’s up next.
'Learn from your mistakes' is more easily said than done. Yet this statement has made such an impact on technologists that they have begun adopting the technique of making machines learn from their own mistakes, so that they can act more intelligently in the future. This act of parenting is the new cool in the tech world, and the question that lingers in readers' minds is how a machine can learn from the mistakes it commits. The logic behind the concept is simple and easy to understand: it is very much like how a person learns from a mistake and uses that experience to avoid committing the same mistake again.

The Idea of Reinforcement Learning

As noted above, you can easily understand Reinforcement Learning if you look at how the machine analyzes its behavior, how it learns from its own mistakes, and how it takes appropriate decisions based on that analysis. Consider a baby trying to walk. For the first few days it observes how the people around it walk, how they move, and what they do while walking, and this continues until it stands up and tries to walk by itself. Whenever the baby tries to stand up but falls, it learns from the attempt, gets up again, and keeps trying until it starts to walk.

The Math and Science behind Reinforcement Learning

The concept of reinforcement learning is much like a video game, in which the player (here, the machine) earns credit whenever it takes a right step towards achieving the goal and loses credit whenever it makes a bad decision. In Reinforcement Learning, the player, which is an agent, acts on the environment and then observes the outcome of its decision. If the decision is correct, a reward is added to the score (0 at the beginning); if the decision is wrong, the reward is reduced. This process continues until the agent attains victory in the game. From the various decisions taken by the agent, the overall score is calculated, and with this information the best way of winning the game is formulated.

Looking at the math behind the concept, the following factors directly affect decision making in Reinforcement Learning:
- Set of states, S
- Set of actions, A
- Reward function, R
- Policy, Pi
- Value, V

The episode concludes when the state S reaches the desired state (WIN). The actions are the steps the player takes during the progress of the game, and reward is added or deducted according to the result of each step. The total reward collected by the end is the value, V. A policy Pi, with its value V, is recorded each time the game is won; with many policies created from many samples, the one with the best (highest) expected value is chosen as the solution:

Solution = E(R | Pi, S)

Algorithm behind this calculation

The basic procedure follows from the concepts of Reinforcement Learning, and the overall algorithm is given by the Q-Learning (and Deep Q-Learning) concept, as follows:
- Initialize the value estimates for every state 's' and action 'a'.
- Observe the current state 's'.
- Choose an action 'a' for that state, based on the current value estimates and analysis of the environment.
- Take the action, and observe the reward 'r' as well as the new state 's'.
- Update the value for the state using the observed reward and the maximum reward possible from the next state. The update is done according to the formula and parameters described above.
- Set the state to the new state, and repeat the process until the objective of the game is reached.

The same concept of Reinforcement Learning was applied to the board game Go by Google's parent company, Alphabet. They eventually formulated a policy (Pi) from all their outcomes and named the resulting system AlphaGo. To test it, a Go match was arranged between AlphaGo and the South Korean champion Lee Sedol in March 2016. The five-game series ended with AlphaGo beating the 9-dan champion by an astonishing 4-1. AlphaGo used Reinforcement Learning together with Deep Learning to work out how to reach winning positions. In fact, after the match concluded, AlphaGo was awarded an honorary 9-dan ranking in Go. Following this, Google announced that the money AlphaGo earned from the match winnings would be donated to charities, including UNICEF.

Reinforcement Learning vs. Artificial Intelligence

Although Reinforcement Learning involves many algorithms for formulating the policy that leads to a conclusion, at the end of the day it is still a Machine Learning mechanism, and it requires many trial processes to be carried out in order to formulate a proper solution or policy. Compared with earlier Machine Learning concepts, formulating a solution with Reinforcement Learning differs in several ways from using classical Artificial Intelligence techniques. What makes Reinforcement Learning stand apart is the following:
- First of all, unlike other mechanisms, the objective of the game is never told to the agent; the agent only discovers it once it starts taking steps forward and reaches the goal.
- Classical Artificial Intelligence is about telling the model what needs to be done at which time, sometimes including the correct actions to take. Reinforcement Learning is just the opposite of that.
- Such Artificial Intelligence doesn't involve a mathematical or scientific model that it can learn from, and it may behave differently in different situations. Deep Learning and Reinforcement Learning are devised to attain the positive result, and hence the outcomes of each step are periodically very similar.

Why is Deep Learning coupled with Reinforcement Learning?

Deep learning is complex function approximation: it is used for image recognition and speech (supervised learning) as well as for dimensionality reduction and deep network pretraining (unsupervised learning). Reinforcement learning is more in line with optimal control, wherein an agent learns to create, develop and maintain an optimal policy of sequential actions by interacting with an environment. There are various branches within RL, such as temporal difference, Monte Carlo and dynamic programming methods. Where deep learning and reinforcement learning combine (as seen in deep Q-learning and Google DeepMind's Atari work) is when a deep neural network is used to approximate the Q function in Q-learning, one popular algorithm that falls under temporal difference learning.
In the Atari game-playing example, the state space is so large (the inputs are raw game video pixels) that a neural network is used to approximate Q.

Reinforcement Learning is all about devising a plan to achieve the positive end result of the game. But because the algorithm makes the agent take many exploratory steps to analyze the possible next states, it raises the question of how quickly the algorithm can come up with the solution. Even though Reinforcement Learning can get us to the finishing line, the question remains whether this approach would be effective in a complex environment that keeps changing randomly. One has to wait patiently for the Reinforcement Learning plan to be deduced and tested before it can be followed.

Basically, the two biggest disadvantages are:
- The methodology is slow, as the agent takes a considerable amount of time to learn the environment and find the best solution.
- The whole process becomes very tedious and difficult if the environment it is performed in is complex.

Considering all of the above, a cloud of uncertainty still surrounds the decision of whether Reinforcement Learning should be the algorithm of choice for deducing the best solution in any environment. We may have to wait for the outcomes of more such trials, performed in many complex environments, before reaching a conclusion that fits all our needs and expectations.
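To make the algorithm described above concrete, here is a minimal tabular Q-learning sketch on a toy "corridor" environment invented for illustration. The update line mirrors the step listed earlier: adjust the value for the current state-action pair using the observed reward plus the best value achievable from the next state.

```python
import random

# A tiny deterministic "corridor" environment, invented for illustration:
# states 0..4, actions 0 (left) and 1 (right); reaching state 4 wins.
N_STATES, GOAL = 5, 4

def step(state, action):
    next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

def train(episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1):
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q(s, a) table, initialised to zero
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Explore occasionally; otherwise act greedily on current estimates.
            if random.random() < epsilon:
                action = random.randrange(2)
            else:
                action = max((0, 1), key=lambda a: q[state][a])
            next_state, reward, done = step(state, action)
            # Core update: observed reward plus best value achievable from the next state.
            q[state][action] += alpha * (reward + gamma * max(q[next_state]) - q[state][action])
            state = next_state
    return q

if __name__ == "__main__":
    q_table = train()
    policy = ["left" if row[0] > row[1] else "right" for row in q_table]
    print("Learned policy per state:", policy)
```

Deep Q-learning replaces the small table above with a neural network, which is what makes the approach workable when the state is something as large as raw game pixels.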
A Public Key Infrastructure (PKI) is a system that helps facilitate secure communications through the use of digital certificates. Reflection supports the use of a PKI for both host and user authentication.

Like public key authentication, certificate authentication uses public/private key pairs to verify the host identity. However, with certificate authentication, public keys are contained within digital certificates, and in this case, two key pairs are used. For example, for server authentication, the host holds one private key and the CA holds a second. The host obtains a certificate from the CA. This certificate contains identifying information about the host, a copy of the host public key, and a digital signature created using the CA's private key. This certificate is sent to the client during the authentication process. To verify the integrity of the information coming from the host, the client must have a copy of the CA's public key, which is contained in the CA root certificate. There is no need for the client to have a copy of the host public key.

Digital certificates are an integral part of a PKI. Digital certificates (also called X.509 certificates) are issued by a certificate authority (CA), which ensures the validity of the information in the certificate. Each certificate contains identifying information about the certificate owner, a copy of the certificate owner's public key (used for encrypting and decrypting messages and digital signatures), and a digital signature (generated by the CA based on the certificate contents). The digital signature is used by a recipient to verify that the certificate has not been tampered with and can be trusted.

A digital signature is used to confirm the authenticity and integrity of a transmitted message. Typically, the sender holds the private key of a public/private key pair and the recipient holds the public key. To create the signature, the sender computes a hash from the message, and then encrypts this value with its private key. The recipient decrypts the signature using the sender's public key, and independently computes the hash of the received message. If the decrypted and calculated values match, the recipient trusts that the sender holds the private key, and that the message has not been altered in transit.

Certificate authentication solves some of the problems presented by public key authentication. For example, for host public key authentication, the system administrator must either distribute host keys for every server to each client's known hosts store, or count on client users to confirm the host identity correctly when they connect to an unknown host. When certificates are used for host authentication, a single CA root certificate can be used to authenticate multiple hosts; in many cases the required certificate is already available in the Windows certificate store. Similarly, when public keys are used for client authentication, each client public key must be uploaded to the server and the server must be configured to recognize that key. When certificate authentication is used, a single CA root certificate can be used to authenticate multiple client users.
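As a small illustration of what a client sees during certificate authentication, the sketch below fetches a server's certificate and prints the subject, the issuing CA, and the validity period. It assumes the third-party cryptography package alongside the standard-library ssl module, and it only inspects the certificate; validating the signature against a trusted CA root is left to the TLS or Secure Shell library itself.

```python
import ssl
from cryptography import x509  # third-party package: cryptography

def describe_server_certificate(host, port=443):
    """Fetch a host's certificate and print the fields a client examines during authentication."""
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode())
    print("Subject :", cert.subject.rfc4514_string())
    print("Issuer  :", cert.issuer.rfc4514_string())   # the CA that signed this certificate
    print("Valid   :", cert.not_valid_before, "->", cert.not_valid_after)

if __name__ == "__main__":
    describe_server_certificate("example.com")  # placeholder host
```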
The terms related to vetting identities are often used incorrectly, which can result in significant confusion around the different processes associated with making sure an identity is real. "Identity Verification", "Identity Validation" and "Identity Authentication" are often used interchangeably, but actually have subtle differences in meaning.

Identity Validation means ensuring that identity data represents real data, for example ensuring that a Social Security Number has been issued by the Social Security Administration and is not associated with a deceased individual.

Identity Verification means ensuring that identity data is associated with a particular individual, for example matching date of birth and address to an individual's name.

Identity Authentication refers to the process of determining that an individual is who they claim to be, for example by asking dynamic Knowledge-Based Authentication questions that would be difficult for a different individual to answer.

IdentiFraud Consumer by Electronic Verification Systems can perform Identity Validation, Identity Verification, and Identity Authentication with a single transaction.
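To make the three terms concrete, here is a toy sketch that separates them into three distinct checks. The data, rules, and function names are invented purely for illustration; this is not the IdentiFraud Consumer API or any real issuance database.

```python
import re

ISSUED_SSNS = {"123-45-6789"}          # stand-in for an issuance / death-file check
REGISTRY = {"123-45-6789": {"name": "Jane Doe", "dob": "1980-04-02"}}

def validate(ssn):
    """Validation: is the identifier well-formed, actually issued, and not retired?"""
    return bool(re.fullmatch(r"\d{3}-\d{2}-\d{4}", ssn)) and ssn in ISSUED_SSNS

def verify(ssn, name, dob):
    """Verification: do the pieces of identity data belong together?"""
    record = REGISTRY.get(ssn)
    return record is not None and record["name"] == name and record["dob"] == dob

def authenticate(answers, expected):
    """Authentication: can the person answer dynamic questions only they should know?"""
    return all(answers.get(q) == a for q, a in expected.items())
```

A real service layers all three checks on authoritative data sources; the point of the sketch is only that each check answers a different question.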
Want to know what tailgating is in cyber security? Tailgating, or piggybacking, is one of the techniques attackers use. The main goal of tailgating is to enter a restricted place without proper authentication. If you want to understand tailgating in more depth, read on: this article explains it in detail, along with various ways to prevent such attempts.

What is Tailgating in Cyber Security?

Tailgating is a common way attackers gain entry to a restricted place. The approach is also known as piggybacking in cyber security. The attacker simply follows an authorized person into a restricted area of the organization. They may arrive as delivery men carrying a stack of boxes and wait for someone to open the door of the targeted location, so they can slip past the electronic access control protecting that place. It is considered a social engineering attack because it targets people rather than technology, and it is often combined with crimes such as phishing, whaling, and spear phishing.

How Does Tailgating Work?

Tailgating mainly involves following an authorized person into a restricted place, or getting them to open a door protected by a lock code. To get past the lock, fraudsters trick the authorized person in various ways: they may arrive as repairmen, as people struggling to carry boxes, or as delivery men. These are just a few examples, and attackers constantly try new variations of the technique, so you need to stay up to date with cyber security awareness training to keep yourself safe. It is a simple technique that relies mainly on human decisions, and if fraudsters can exploit this bug in the "human hardware," they may succeed in gaining access to the restricted place.

Psychology of Tailgating

Tailgating is, at its core, psychological manipulation. The attacker presents themselves as someone in need, and your brain tells you to help them. But if you get tricked, you may suffer loss of money, reputation, and confidence. Like phishing, it is designed to fool people, and if you have no idea it exists, you are at significant risk of being tricked. Hence, you need to understand the consequences tailgating can have.

Who is At Risk from Tailgating?

Nowadays there are numerous cases of tailgating in which hackers and fraudsters steal an organization's assets. They might steal costly equipment such as laptops, install spyware on the organization's computers, steal sensitive company data, or try to reach the server room to plant a backdoor. Any of these activities can cause major losses. In reality, tailgating has become a serious challenge for large organizations to deal with: many businesses have suffered reputation damage, financial loss, and even loss of sensitive data. Therefore, companies and large organizations train their employees about these threats and their responsibilities, so that they can prevent such harmful activity.

How to Prevent Tailgating?

There is no guarantee that any single technique will stop tailgating.
Attackers keep coming up with new approaches, but there are still some crucial things you can do to prevent, or at least minimize, the chances of becoming a victim of tailgating. Firstly, companies need strong workplace policies and advanced access control at every entrance, and they should keep sensitive areas restricted with advanced security features. Companies can also train current and new staff with up-to-date cyber security training, making employees aware of such crimes, the available countermeasures, and the consequences.

Large organizations are the primary victims of tailgating, because they have many contractors for different tasks, large premises with multi-floor buildings, and often freelance or remote professionals on site. Hence, they need to pay extra attention to preventing such activity: for attackers, even a small loophole is an opportunity to execute their harmful intentions. Such companies can consider the measures below.

#1. Issue smart ID cards to every employee and use distinct badges for different roles.
#2. Hire security guards as a preventative measure against tailgating.
#3. Use biometric access control at restricted places.
#4. Consider access controls that require a unique PIN.
#5. Make it compulsory for every visitor to wear a badge in the workplace.
#6. Install security tools and monitoring around server areas.
#7. Hire a cyber security expert, or a team of experts.

These are just a few things a company can do from its side. Companies should also train their employees to prevent such situations, because there is no benefit in expensive tools if the employees are not well trained. The points below highlight how employees can help prevent tailgating in the organization.

#1. Even when being helpful, employees must not hold the door for anyone entering the workplace, especially at restricted places.
#2. Employees should stop people who try to follow them into a restricted place.
#3. Give employees the confidence and authority to question or challenge suspicious people in the workplace.
#4. Guide all employees to direct visitors to the reception area for any query.
#5. Be wary of repairmen, delivery people, and other outsiders, as they can be attackers in disguise.

Tailgating can seem like a simple thing, but many organizations have suffered huge losses from it. Hence, you must be aware of tailgating and use the best ways to prevent it at your workplace. To avoid such consequences, apply the key measures discussed in this article; staying current on cyber security will go a long way towards stopping tailgating attempts. We hope you found this article useful. Please share it with your friends so they can learn about it as well.
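One technical control that complements the measures above is comparing badge swipes with actual entries: if a door records more people entering than badges presented, someone probably tailgated in. The sketch below illustrates the idea on an invented event log; real deployments would use turnstile sensors, mantraps, or camera-based people counting to generate the entry events.

```python
from collections import Counter

def tailgating_alerts(events):
    """events: list of (door_id, kind) tuples, where kind is 'badge' or 'entry'.
    Flags doors where entries exceed badge swipes in the same period."""
    badges, entries = Counter(), Counter()
    for door, kind in events:
        (badges if kind == "badge" else entries)[door] += 1
    return {door: entries[door] - badges[door]
            for door in entries if entries[door] > badges[door]}

if __name__ == "__main__":
    # Hypothetical log: the server-room door saw 3 entries but only 2 badge swipes.
    log = [("server-room", "badge"), ("server-room", "entry"),
           ("server-room", "badge"), ("server-room", "entry"),
           ("server-room", "entry"), ("lobby", "badge"), ("lobby", "entry")]
    for door, extra in tailgating_alerts(log).items():
        print(f"Possible tailgating at {door}: {extra} more entries than badge swipes")
```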
<urn:uuid:01939d50-452d-4895-a064-f44de14a45cf>
CC-MAIN-2022-40
http://dztechno.com/what-is-tailgating-in-cyber-security/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00560.warc.gz
en
0.958556
1,371
2.53125
3
“What’s your mother’s maiden name?” How many times have you been asked to answer this question when you create an account? Do you give the right answer? Let me explain why you shouldn’t give the correct answer to this or any other security question. Today, we’re going to talk about how to answer security questions securely to protect your accounts and privacy online. How many people know your mother’s maiden name? How many people know your favorite color? How many other people have the same favorite color as you? The problem with the answers people choose for security questions is that they’re too easy to guess. An analysis by Google and Stanford in 2015 found that most users’ answers were insecure. They could be easily guessed or found through basic research. There’s no reason to think things have improved since 2015. Here are some of the problems summarized in the report: Questions with common answers. Many personal knowledge questions have common answers shared by many in the user population which an adversary might successfully guess. Schechter et al. were able to guess approximately 10% of user’s answers by using a list of other answers provided by users in the same study. Questions with few plausible answers. A number of potential questions, such as “who is your favorite superhero?” have very few possible answers. An empirical study … found that 40% had trivially small answer spaces. User-chosen questions appear even worse: … the majority of users choose questions with trivially few plausible answers. Publicly available answers. Rabkin found that 16% of questions had answers routinely listed publicly in online social networking profiles. Even if users keep data private on social networks, inference attacks enable approximating sensitive information from a user’s friends. Other questions can be found in publicly available records. For example, at least 30% of Texas residents’ mothers’ maiden names can be deduced from birth and marriage records. Social guessing attacks. Users’ answers may be easily available to partners, friends, or even acquaintances. … acquaintances could guess 17% of answers correctly in five tries or fewer. You might think, “Well, security questions only matter if I lose my password and need to get back into my account.” But that’s overlooking the fact that anyone can try using your security questions to reset your password and log into your account. How To Answer Security Questions Securely To Increase Your Security You don’t want to use answers that others could guess or figure out through research. The best way to do this is to provide false answers. It turns out that many people already do this. The Google and Stanford report mentioned above says, We found that a significant cause of this insecurity is that users often don’t answer truthfully. A user survey we conducted revealed that a significant fraction of users (37%) who admitted to providing fake answers did so in an attempt to make them “harder to guess” although on aggregate this behavior had the opposite effect as people “harden” their answers in a predictable way. For example, when asked “What city were you born in?” people try to be clever and give an incorrect city. But, they tend to choose a city that many others also chose (whether they did so honestly or not), such as Paris. If the question is “What city were you born in?” don’t use a less-popular city or fictional city (such as Minas Tirith) and think you’re being clever. 
Others are likely to use the same answer (though not as many as will use Paris). Instead, use a word or words that don't answer the question, such as magnet or Megatron or, even better, urBFbaFv3HMl. Be sure to choose answers that it's extremely unlikely someone else would use. You can even let your password generator — I use LastPass — create an answer. However, be aware that some websites don't allow special characters in security questions. Some don't even allow spaces, so you can't use more than one word (although you could always smash multiple words into one string of text). Also, consider that you may need to give your answers over the phone. For example, I have a financial account for which I used a long, randomly-generated answer that contains special characters. There have been several times that I've needed to spell it over the phone, which is a pain.
Some websites let you create your own security questions. If this is an option, do it! Be sure to make your questions nonsense, too, and irrelevant to your answers. For example, you might pair a question like "What sound does Tuesday make?" with an answer of granite-falcon-42.
You may ask, "How am I supposed to remember these nonsense answers?" You don't need to. Save them in your password manager. I store my passwords in LastPass, and I use the Notes field to store the security questions and answers.
Even if you come up with answers that you love, resist the urge to use them again. Don't reuse answers from site to site, just as you shouldn't reuse passwords from site to site. If someone discovers one of your answers, they'll try it on other sites.
Some websites allow you to use multi-factor authentication (they may call it two-factor authentication). This is when you use an app (best option) or SMS/text messages (next best option) to verify your identity. If you have the choice, use this instead of security questions.
How To Answer Security Questions Securely – Further Reading
- Secrets, Lies, and Account Recovery: Lessons from the Use of Personal Knowledge Questions at Google (google.com)
- New Research: Some Tough Questions for 'Security Questions' (googleblog.com)
- Time To Kill Security Questions—or Answer Them With Lies (wired.com)
- Crack the "Security Question" Code: 5 Tips for Creating the Most Secure Online Passwords (mensjournal.com)
- Use Fake Answers to Online Security Questions (lifehacker.com)
What You Should Do
- Create nonsense answers to security questions. A random string of letters and numbers (and special characters, if allowed) is better than a real word.
- If given the option, create your own security questions rather than choosing from the provided questions. Make your questions nonsense, and irrelevant to your answers.
- Save questions and answers in your password manager.
- Don't reuse answers on other accounts.
- If given the option, use multi-factor authentication instead of security questions.
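To make the first of those recommendations concrete, here is a small illustrative sketch (not from the article) that uses Python's standard secrets module to generate nonsense answers. The lengths, character sets and word list are arbitrary choices of mine, and a password manager's built-in generator does the same job.

```python
import secrets
import string

def random_answer(length: int = 16, allow_special: bool = False) -> str:
    """Generate a nonsense answer for a security question.

    Some sites reject special characters or spaces in answers, so the
    default alphabet is letters and digits only.
    """
    alphabet = string.ascii_letters + string.digits
    if allow_special:
        alphabet += "!@#$%^&*-_"
    return "".join(secrets.choice(alphabet) for _ in range(length))

def phone_friendly_answer(words: int = 4) -> str:
    """Generate an answer that is easier to read out over the phone."""
    # A short, illustrative word list; a real implementation might use a
    # much larger list such as the EFF diceware words.
    wordlist = ["magnet", "copper", "violet", "harbor", "walnut", "python",
                "granite", "meadow", "falcon", "timber", "saffron", "quartz"]
    return "-".join(secrets.choice(wordlist) for _ in range(words))

if __name__ == "__main__":
    print(random_answer())          # e.g. 'u7RbFvP3hMlQ9aZx'
    print(phone_friendly_answer())  # e.g. 'copper-falcon-meadow-quartz'
```

Whichever generator you use, the article's advice still applies: store the question and the generated answer together in your password manager's notes field, and never reuse an answer across sites.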
<urn:uuid:b4f15dd0-005f-4ba5-a136-f27c2422e0e7>
CC-MAIN-2022-40
https://defendingdigital.com/how-to-answer-security-questions-securely/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00760.warc.gz
en
0.939274
1,517
2.84375
3
The first tutorial in this series considered the history and requirements for Internet Protocol version 6, or IPv6, and our second installment looked at the enhanced capabilities that the new protocol brings to the market. But except for the daring few, most of us would rather let someone else be on the forefront of a new technology, and be content to observe their trials and tribulations before we jump into the fray ourselves. In this, our third installment in the series, we will review some of the global networks that have served as test beds for IPv6, and also look at some of the other industry resources that are supporting this new protocol. From our previous discussions, recall that the Internet Engineering Task Force (IETF) chartered the Internet Protocol – Next Generation (IPng) Working Group, which was subsequently renamed the IP Version 6 Working Group. In the decade or so that the group has been active, it has been primarily responsible for moving the protocol development through the various stages of the IETF standardization track – from Internet Drafts to Request for Comments documents to Proposed Standards to Draft Standards, and then finally to Internet Standards. For example, the baseline IPv6 document, RFC 2460, and the IPv6 addressing specification, RFC 4291 are both considered Draft Standards. To reach this Draft Standard level, at least two independent and interoperable implementations of the protocol or process must have been developed, plus sufficient operational experience must have been obtained, thus assuring the Internet community of the viability of this new technology. Several worldwide experimental networks have been used to provide that verifiable IPv6 experience. The first one of note was the 6bone Network, which phased out of operation on June 6, 2006 after almost a decade of operation. The 6bone was a worldwide network, with oversight from the IETF NGtrans (IPv6 Transition) working group within the IETF. It started out as a way to transport IPv6 packets over the existing IPv4-based Internet using a process called tunneling, and later evolved into a network that supported IPv6 directly. The 6bone also served as a testing environment for the new addressing formats. Recall from our previous tutorials that one of the most significant enhancements to IPv6 is the move from 32-bit to 128-bit addressing formats, and that anything that is address-related, such as routing protocols and routing tables, requires modification. Included in the addressing research were mechanisms for testing the new routing protocols, such as RIPng (Routing Information Protocol – next generation, documented in RFC 2080; the Open Shortest Path First (OSPF) protocol for IPv6, documented in RFC 2740 (see ftp://ftp.rfc-editor.org/in-notes/rfc2740.txt); the Domain Name System (DNS) extensions to support IPv6, documented in RFC 3596; plus packet routing, converting IPv4 to IPv6 addresses, and other addressing-related functions. Like most of the work from the IETF, the phase out of the 6bone network was very well planned, announced in March 2004, and published in RFC 3701. A second test network for IPv6 was called the 6REN, the IPv6 Research and Education Network, sponsored by the Energy Sciences Network (Esnet), the network for the Energy Research program of the U.S. Department of Energy at the University of California’s Lawrence Berkeley Laboratory. 
Where the 6bone network used the existing IPv4-based Internet as the transport mechanism, the 6REN network was strictly based on IPv6 for all routers and hosts, so that the IPv6 protocol was deployed on an end-to-end basis, with no tunneling involved. Another North American test network is named Moonv6, which is a global effort led by the North American IPv6 Task Force, the University of New Hampshire's Interoperability Laboratory, the higher education Internet2 Project and a number of technology vendors. The unusual network name was coined from a discussion as to whether or not the U.S. Government should take the IPv6 technology as seriously as the NASA project to put a man on the moon during the 1960s. This project encompasses two parts: lab testing of vendor products to determine IPv6 functionality, and a worldwide IPv6 network for application sharing and end-to-end testing. The European community also participated in IPv6 testing with 6NET, a three-year project involving 35 organizations representing the commercial, research and academic sectors in 16 different countries. This project built a native IPv6 network to test a number of IPv6 services and applications, and to gain interoperability experience with existing applications. This work concluded in June 2005, but elements of the research were continued with the IPv6 Dissemination and Exploitation (6DISS) network, which provides training and knowledge transfer in eight developing regions around the world. This work is funded by the Information Society Technologies Program of the European Union, and will conclude on September 30, 2007. Another European initiative is the Euro6IX network, a project that will research, design and deploy a test Internet exchange backbone network throughout Western and Central Europe. This work is underwritten by a consortium that includes telecom operators (such as British Telecom, France Telecom, Telecom Italia, and others), the Universities of Madrid, Southampton, and others, plus manufacturing, consulting and government organizations. Test networks are not the only sources of deployment and information transfer regarding the new protocol – there are many members of the vendor community who are firmly behind the IPv6 technology. Our next tutorial will examine some of these other key resources that can speed your IPv6 deployment along. Copyright Acknowledgement: © 2007 DigiNet Corporation®, All Rights Reserved. Mark A. Miller, P.E. is President of DigiNet Corporation®, a Denver-based consulting engineering firm. He is the author of many books on networking technologies, including Implementing IPv6 and the Internet Technologies Handbook, both published by John Wiley & Sons. This article was first published on EnterpriseITPlanet.com.
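As a practical aside to the addressing and DNS work described above (the RFC 3596 AAAA records in particular), the sketch below is illustrative rather than part of the original tutorial. It uses Python's standard socket module on a dual-stack host to list a name's IPv6 addresses and to attempt a native, end-to-end IPv6 TCP connection with no IPv4 fallback; the hostnames are placeholders.

```python
import socket

def ipv6_addresses(host: str, port: int = 443):
    """Return the IPv6 (AAAA) addresses published for a host, if any."""
    try:
        infos = socket.getaddrinfo(host, port, family=socket.AF_INET6,
                                   type=socket.SOCK_STREAM)
    except socket.gaierror:
        return []
    # Each entry is (family, type, proto, canonname, sockaddr); for IPv6 the
    # sockaddr is (address, port, flowinfo, scope_id).
    return sorted({info[4][0] for info in infos})

def reachable_over_ipv6(host: str, port: int = 443, timeout: float = 3.0) -> bool:
    """Try an end-to-end IPv6 TCP connection, i.e. no IPv4 fallback."""
    for addr in ipv6_addresses(host, port):
        sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
        sock.settimeout(timeout)
        try:
            sock.connect((addr, port))
            return True
        except OSError:
            continue
        finally:
            sock.close()
    return False

if __name__ == "__main__":
    for name in ("www.google.com", "www.example.com"):
        print(name, ipv6_addresses(name), reachable_over_ipv6(name))
```

If the connection attempt succeeds, the path from your host to the server is IPv6 end to end, much like the native deployments that projects such as 6REN and 6NET set out to prove.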
<urn:uuid:fa56dd93-b138-4bcb-8ae5-fa2af0bae52d>
CC-MAIN-2022-40
https://www.datamation.com/networks/ipv6-ready-for-prime-time-part-iii-testing-the-new-protocol/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00760.warc.gz
en
0.939866
1,247
2.734375
3
The importance of a Wireless LAN survey
In our experience of installing wireless LAN networks, the survey is the most crucial stage of the implementation. A number of survey techniques are used to determine how a wireless network should ideally be designed. Most produce heat maps overlaid onto site plans that indicate signal strength, with a key showing what the colours mean. Wireless surveys can be carried out in a number of ways.
1. Passive On-Site Survey. A passive survey is a physical survey performed with one or more access points in operation, during which signal readings are taken with specialist site survey and planning software (AirMagnet and Ekahau are the industry leaders in this field). Candidate AP positions are tested to determine whether each location, and the overall layout of access points across the coverage floor, will meet end-user requirements. A survey report can then be generated covering AP locations, signal strength charts in heat map format, photographs of AP mounting locations, detailed analysis of interference, and descriptions of the required cable installations.
2. Active On-Site Survey. This is also a physical survey, performed either just after a wireless network has been deployed or to health-check an existing network. An active survey measures signal coverage, throughput, SSID and VLAN allocation per AP, and the behaviour of data packets. It is usually performed after an installation to confirm that the wireless network is performing to the scope of the requirements. It can also be carried out when an established wireless network is experiencing issues that have developed over time, whether caused by changes inside the network (e.g. configuration changes or a higher number of concurrent users) or by factors outside it (e.g. additional interference from new networks, or new non-network devices emitting RF signal).
3. Predictive Survey. Here the site plans are loaded into a program that simulates the walls and floors; sometimes a site visit is not even required. This is NOT recommended as a survey tool on its own. A program cannot fully account for the thickness, density and material type of the interior structures, or for real-world interference, and is therefore of limited use for designing a wireless LAN: it can produce misleading results that lead to an inferior network, which can cost more in time, funds and resources to correct.
For a new installation, the survey determines how many access points are required to provide coverage in the most economical layout. It takes into account the internal layout of the space: how it is divided by partitions into rooms, what those partitions are made of, furniture, aesthetic considerations, bleed of radio signal through floors, interference from other networks, interference from non-IP sources, and so on. If a survey is not performed, all of these negative factors can combine to produce a poor wireless network: coverage dark spots, low throughput due to interference, too few access points for the number of concurrent users, or too many access points causing self-interference. This, in turn, may mean remedial work after the installation, which costs more in resources and possibly hardware.
If you are performing work for a client, worse than monetary loss is reputational damage to yourself and your company. For any queries regarding a Survey, please feel free to give us a call.
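To illustrate why the predictive approach above is treated with caution, here is a deliberately simplified sketch (not a real planning tool) of the kind of model a predictive survey relies on: free-space path loss plus guessed per-wall attenuation. The wall-loss figures are illustrative assumptions, which is precisely the problem: if the real construction materials differ, the predicted coverage will too.

```python
import math

# Rough per-wall attenuation figures in dB; real values vary widely with the
# actual construction, which is exactly why an on-site survey is recommended.
WALL_LOSS_DB = {"drywall": 3.0, "brick": 8.0, "concrete": 12.0, "glass": 2.0}

def free_space_path_loss_db(distance_m: float, freq_mhz: float) -> float:
    """Free-space path loss in dB (distance in metres, frequency in MHz)."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_mhz) - 27.55

def predicted_rssi_dbm(tx_power_dbm, distance_m, freq_mhz, walls=()):
    """Very rough 'predictive survey' estimate of received signal strength."""
    loss = free_space_path_loss_db(distance_m, freq_mhz)
    loss += sum(WALL_LOSS_DB.get(w, 5.0) for w in walls)
    return tx_power_dbm - loss

if __name__ == "__main__":
    # 2.4 GHz AP at 20 dBm, client 20 m away behind one brick and one drywall wall.
    rssi = predicted_rssi_dbm(20, 20, 2400, ["brick", "drywall"])
    print(f"Predicted RSSI: {rssi:.1f} dBm")  # roughly -57 dBm with these guesses
```

An on-site survey replaces those guessed attenuation values with measured readings, which is why the passive and active methods above remain the reliable options.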
<urn:uuid:6068b094-b2b7-4d2c-8bbe-3f0f5fd0939e>
CC-MAIN-2022-40
https://www.digitalairwireless.com/articles/blog/the-importance-of-a-wireless-lan-survey
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00760.warc.gz
en
0.944519
719
2.765625
3
Google has teamed up with the United States Holocaust Memorial Museum (USHMM) to promote awareness of recent atrocities in the Darfur region of Sudan. The USHMM has assembled photographs, data and eyewitness testimony to form a Global Awareness layer in Google Earth, which Google has enabled by default for all 200 million Google Earth users. Google Earth is a free application that blends satellite photos with a virtual globe to let users fly around the world and zoom in on any area they want to examine, including specific street addresses. It’s handy for navigating cities, finding camping sites or checking out the satellite view of anyone’s own backyard. The layer is called “Crisis in Darfur,” and it features icons of fire which indicate villages that have been burned, as well as icons for on-the-ground photographs, video and eyewitness testimony. It shows 1,600 damaged and destroyed villages, the remnants of 100,000 homes, schools, mosques and other structures destroyed by Janjaweed militia and Sudanese forces. “Educating today’s generation about the atrocities of the past and present can be enhanced by technologies such as Google Earth,” Sara Bloomfield, USHMM director, said. “When it comes to responding to genocide, the world’s record is terrible. We hope this important initiative with Google will make it that much harder for the world to ignore those who need us the most.” Moving Into Politics Darfur, despite widespread atrocities, has been largely overshadowed by the world attention currently focused on Iraq, as well as a general lack of media spotlight in the area. Google’s decision to turn the layer on by default in Google Earth, as well as take part in the USHMM announcement yesterday, sheds light on Google’s own role in the world. “At Google, we believe technology can be a catalyst for education and action,” said Elliot Schrage, Google’s vice president of Global Communications and Public Affairs. “Crisis in Darfur will enable Google Earth users to visualize and learn about the destruction in Darfur as never before and join the Museum’s efforts in responding to this continuing international catastrophe.” Google was unavailable for further comment, but Frank Taylor, author of the Google Earth Blog — which is unaffiliated with Google — told TechNewsWorld that Google has previously turned on layers by default to draw attention to content they have introduced, including a United States election layer turned on last November, as well as its Geographic Web layer. Google Earth is available in many languages and on multiple platforms and is widely used around the world, which has led to some interesting effects. “Google Earth has been very influential on some issues including political, environmental, military and more,” Taylor said. “For example, in Bahrain the average citizens discovered what they were missing when they could suddenly look over the palace walls of the upper elite in that country and see the yachts, pools, tennis courts, limousines and huge tracts of lands owned by the royalty and their families.” In addition, environmental organizations are increasingly using Google Earth to present issues and draw attention to damage caused by various forces. “See the ‘mountain top removal’ layer under Global Awareness,” Taylor noted.
<urn:uuid:b8c3f4bb-91ac-41ea-ad18-e179ff2e57fd>
CC-MAIN-2022-40
https://www.ecommercetimes.com/story/google-earth-zooms-into-heart-of-darfurs-darkness-56827.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00760.warc.gz
en
0.949169
696
2.859375
3
Following the most complex landing ever attempted on Mars, NASA’s most advanced Mars rover to date landed on the Red Planet at 10:32 p.m. PDT on Sunday. Dubbed “Curiosity,” the one-ton, car-sized rover had been in flight for 36 weeks. Its complicated touchdown — including the final severing of the bridle cords and flyaway maneuver of the rocket backpack — settled the device near the foot of a mountain three miles tall and 96 miles in diameter inside the Red Planet’s Gale Crater. Now, it begins an investigation expected to last two years focusing on whether the region has ever offered conditions favorable for microbial life. ‘A Fantastic Achievement’ “The landing followed an unprecedented set of complex maneuvers, and it demonstrated once more how human ingenuity and hard work can overcome incredible difficulties and lead to fabulous results,” Livio pointed out. In fact, “it is not an accident that the rover was named ‘Curiosity,'” he added. “It may give us an answer to a question that we have been curious about for ages: Did Mars ever host life? “I sat up all night to follow the approach and landing, and to tweet about it,” Livio said. “It was worth every minute.” ‘Where Does This Gravel Come From?’ Just minutes after its landing, Curiosity returned its first view of Mars: a wide-angle scene of rocky ground near the front of the rover. About two hours later, a higher-resolution image arrived. “Curiosity’s landing site is beginning to come into focus,” said John Grotzinger, project manager of NASA’s Mars Science Laboratory at the California Institute of Technology. “In the image, we are looking to the northwest. What you see on the horizon is the rim of Gale Crater. In the foreground, you can see a gravel field. The question is, where does this gravel come from?” Grotzinger asked. “It is the first of what will be many scientific questions to come from our new home on Mars.” Later in the week, color images are expected when the rover’s mast — equipped with high-resolution cameras — is deployed. Brand-New Scientific Tools Curiosity carries 10 science instruments with a total mass 15 times that of the science payloads that were carried on the Mars rovers Spirit and Opportunity. As a result, the rover itself had to be built twice as long and five times as heavy as either of those older models. Some of the instruments included aboard Curiosity, in fact, are the first of their kind on Mars, NASA noted, including a laser-firing instrument for checking the elemental composition of rocks from a distance. From its landing site in the Gale Crater, the rover sits within driving distance of layers of the crater’s interior mountain. Observations from orbit have identified clay and sulfate minerals in the lower layers, suggesting a potentially wet history. Drilling and Scooping As part of its mission, Curiosity will use a drill and scoop at the end of its robotic arm to gather soil and powdered samples of rock interiors. Next, it will distribute those samples for analysis among the various instruments it contains. The mission is managed by the Jet Propulsion Laboratory (JPL) for NASA’s Science Mission Directorate in Washington. The rover was designed, developed and assembled at JPL, which is a division of the California Institute of Technology. Confirmation of Curiosity’s successful landing came in communications relayed by NASA’s Mars Odyssey orbiter and received by the Canberra, Australia, antenna station of NASA’s Deep Space Network. 
‘The Human Factor Is Still Missing’ “From the rocket engineering point of view, the successful landing on Mars proves once again that storable propellants are the ideal choice for near and deep space missions,” said Randa Milliron, CEO and cofounder of Interorbital Systems, which develops and manufactures orbital launch vehicles and satellites. “But even with yesterday’s superb technical performance, the human factor is still, sadly, missing,” Milliron told TechNewsWorld. “My thoughts return to the promises that were made to establish a U.S. colony on Mars by the 1980s. “If the extremely complex rover landing program could be carried out that flawlessly, there is no reason why a man or woman should not have been setting foot on the planet,” she explained. “All interplanetary missions are welcome and scientifically refreshing — and NASA conducts these magnificently — but a manned mission to Mars and a permanently staffed lunar base are both long overdue,” Milliron concluded. “It is obviously now up to the commercial space industry to perform these tasks.”
<urn:uuid:0860fc85-f8c8-48dc-8b61-6f1679e57f0a>
CC-MAIN-2022-40
https://www.linuxinsider.com/story/curiosity-begins-capturing-martian-kodak-moments-75827.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00760.warc.gz
en
0.948185
1,025
3.328125
3
After the 2008 recession, JP Morgan Chase had to spend over $36 billion in legal fees and settlements. Most of the payouts were triggered because employees rigged markets, cheated clients, and violated trading rules. However, employees do not wake up one morning and decide to commit a crime. Such activity is almost always preceded by specific behavioral patterns, and in theory these white-collar crimes could be prevented if monitoring were good enough. Practically speaking, it is tough for human managers to combine hundreds of signals, such as missed compliance meetings, data from emails and chat transcripts, and breached risk limits, into a coherent picture. Hence JP Morgan is using algorithms to analyze mounds of raw data and predict who is likely to trip up. This is predictive analytics in action.
Predictive analytics is one of the ways big data can be put to use, as at JP Morgan, and it is not only in financial markets that this kind of analytics is seeing widespread acceptance. Hospitals are using predictive models as well. Oil and gas companies are also emerging as heavy users; Halliburton, a player in the exploration and production industry, is among them. Nor is it only commercial organizations that are seeing the benefits: the Los Angeles County Registrar-Recorder/County Clerk's Office has also put predictive analytics to work. The industry is estimated to grow to $24 billion by 2018, with growth happening in sectors as diverse as BFSI, retail, transportation, energy, travel, telecom, sports, and the environment.
Predictive analytics looks a lot like statistical modeling, but there is a crucial difference between the two. Statistical modeling is pure math, and while it also predicts outcomes based on inputs, the results can remain theoretical. For instance, a pharmaceutical company might use statistical modeling to predict the cost of testing a particular drug based on current and past trends, but that analysis will not take into account important factors like the regulatory climate, changes in government policy or market sentiment. Predictive analytics does, and thereby produces an outcome that is more closely tied to real-world scenarios. It is thus both a science and an art, providing companies with better business intelligence.
But implementing it in an organization comes with its own set of challenges, and unless they are overcome the project will be stillborn. First, without a specific business goal or use case you cannot get the full advantage of predictive analytics; a goal helps you prioritize and win the critical buy-in from management. Second, you need plenty of data to make realistic predictions. Data quality matters, but you cannot wait around to gather perfect data: the best strategy is to clean the data you already have as much as possible and improve data collection processes. Third, although algorithms play a huge role in predicting outcomes, the process cannot be completely automated. You will need data scientists and subject-matter experts, both to ensure that the right data is fed into the system and to make sense of the output and draw insights that the business can use in decision making.
Many managers still make decisions based on their intuitions and past experience. They have no trust in math, partly because they don’t understand how the methods work. Unless managers start building trust in models and look at algorithms as something that will help them take better business decisions predictive analytics initiatives will flounder. Quantum Computing for Business – Hype or Opportunity? Why is Data Fabric gaining traction in Enterprise Data Management? The Potential of Big Data in the Telecom Infrastructure Industry Achieving Operational Excellence through Digital Transformation Unbiased Human Centric AI Systems: The Basics you Need to Know Significance of Customer Involvement in Agile Methodology The emerging role of Autonomic Systems for Advanced IT Service Management How Metaverse could change the business landscape We're here to help! No obligation quotes in 48 hours. Teams setup within 2 weeks. If you are a Service Provider looking to register, please fill out this Information Request and someone will get in touch. Outsource with Confidence to high quality Service Providers. If you are a Service Provider looking to register, please fill out this Information Request and someone will get in Enter your email id and we'll send a link to reset your password to the address we have for your account. The IT Exchange service provider network is exclusive and by-invite. There is no cost to get on-board; if you are competent in your areas of focus, then you are welcome. As a part of this exclusive
<urn:uuid:53052637-8f5d-47d2-b03a-c26e6a6e952a>
CC-MAIN-2022-40
https://www.itexchangeweb.com/blog/how-predictive-analytics-is-making-businesses-more-intelligent/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00760.warc.gz
en
0.939217
993
2.671875
3
The Silk Road refers to a network of routes used by traders to exchange goods and ideas among diverse cultures for more than 1,500 years — from when the Han dynasty of China opened trade in 130 B.C.E. until the Ottoman Empire ended trade with the West in 1453 C.E. The Silk Road extended 6,437 kilometers (4,000 miles) across Eurasia, from the Black Sea to the Himalayas. Towns along the route grew into multicultural cities and the exchange of information gave rise to new technologies and innovations that would change the world. Regional digital corridors are creating today's digital silk roads. European metro centers such as Frankfurt, London, Amsterdam and Paris (FLAP) are home to some of the world's largest business centers, industry ecosystems and internet exchange (IX) hubs, which include Equinix International Business Exchange™ (IBX®) data centers. The total maximum IX peak traffic for Equinix Internet Exchange worldwide (in aggregate) is 19 Tbps at the time of this writing. Equinix Internet Exchange is the largest IP peering platform in the world, passing more traffic than any other.
Interconnection is driving digital transformation
Impressive as these figures are, private internet peering (physical interconnection) across Platform Equinix exchanges ten times more traffic than the Equinix Internet Exchange. Interconnection is the direct and private traffic exchange between two or more parties, inside a carrier-neutral colocation data center. Along the FLAP digital corridor, private interconnection is advancing Enterprise and Service Provider digital transformation, according to the Global Interconnection Index (GXI) Volume 5, a market study recently published by Equinix. FLAP core metros are forecast to make up 75% of the total interconnection bandwidth[i] capacity — 3,994 Tbps — in EMEA by 2024. The remaining private interconnection traffic is estimated at edge metros such as Madrid, Stockholm, Milan, Dublin and Zurich, where there is the greatest CAGR projection as companies distribute their digital infrastructure to serve an increasing number of edge use cases and business initiatives (see chart below).
Chart: EMEA Core and Edge Metro Interconnection Bandwidth Capacity (Tbps) Growth Rates (2020 – 2024)
The digital core is where organizations create the foundation for their digital platform. Digital edge is where an organization's digital presence meets the physical world, enabling local proximity to customers, employees, endpoints and intelligent operations. Digital ecosystems are where organizations participate in the digital economy, leveraging network effects by building composable business models, collaborating with data, subscribing to and offering digital services, and transacting in marketplaces at scale. The GXI report provides critical insights into where the greatest interconnection growth is by core and edge metros and industry ecosystems to guide enterprise and service provider digital-first strategies.
Driving the 46% compound annual growth rate (CAGR) in EMEA interconnection bandwidth capacity (5,327 Tbps) over 5 years is the expansion of a wide range of industry ecosystems — specifically in the FLAP metros, which uniquely offer different industry ecosystems that are driving this interconnection bandwidth growth: - Frankfurt is increasingly attracting high numbers of international brands such as Procter & Gamble, Accenture and ING. German enterprises in the automotive, healthcare and financial sectors, are creating new demand through their use of technologies such as AI and IoT, and driving a local market for managed services and Cloud & IT Services. Its top three enterprise ecosystems by interconnection bandwidth CAGR are Industrial Services (52.6%), Wholesale & Retail (49.1%) and Consumer Services (49.1%). - London is the de facto digital and interconnection hub in Europe. A dense financial, commercial and industrial center make it the largest core metro in EMEA. Still in its early days, Brexit has yet to have an effect on London’s leadership as an interconnection hub or financial center. Its top three enterprise ecosystems by interconnection bandwidth CAGR are Industrial Services (53.9%), Wholesale & Retail (53.2%), and Healthcare & Life Sciences (53.2%). - Amsterdam is a financial center in its own right, as well as a major core location for network and cloud provider access. It has a rich ecosystem of high-tech material and manufacturing brands such as Nike, Philips Lighting and Heineken. Its top three enterprise ecosystems by interconnection bandwidth CAGR are Securities & Trading (50%), Healthcare & Life Sciences, (49.8%) and Industrial Services (48.7%). - Paris is the fastest growing core metro in interconnection bandwidth (47% CAGR) and is an important ecosystem location for Transportation and Energy & Utility companies. Though behind the other metros that make up FLAP, its interconnection bandwidth growth is shaping it into a competitive metro. Located a few hours southwest of Paris on the Atlantic coast, Bordeaux has thriving business enterprise and service provider ecosystems and is a strategically important subsea cable terminating point, drawing network and hyperscale cloud/SaaS providers to the region. The top three enterprise ecosystems in Paris by interconnection bandwidth CAGR are Healthcare & Life Sciences (60.1%), Industrial Services (59.5%) and Consumer Services (57.8%). Together, these interconnected digital core, ecosystem and edge locations make up a thriving digital corridor —connecting FLAP core, ecosystem and edge metros across EMEA and beyond into the Americas, Asia-Pacific and down into Africa. The subsea digital corridor between Europe and MENA Eighty-three percent of the Middle East’s internet bandwidth is connected to Europe — with nearly 80% of it being content and digital media (CDM).[ii] This exploding digital demand is mainly from the consumption of video traffic coming from Europe’s FLAP metros into MENA. The submarine cable map below highlights the MENA region’s reliance on European destinations for public and private traffic exchange. For example, our partnership with Vodafone, the world’s leading fixed and mobile communications provider, created a new subsea cable interconnection hub in Genoa, Italy (GN1). GN1 will serve as a strategic gateway for the 2Africa subsea cable system, coined “the cable of life,” connecting Europe, Africa and the Middle East. 
Enterprise and service providers from around the world can gain access to both terrestrial and subsea interconnection routes via Equinix Fabric™, which connects digital infrastructure and services on demand at software speed via secure, software-defined interconnection on Platform Equinix®.
The digital "silk road" is endless
The expansive digital corridor extending from the FLAP metros out to the further edges of MENA and beyond demonstrates that this new silk road has legs for future business growth and innovation. Vibrant core, ecosystem and edge metro hubs, along with a growing global internet exchange, are driving greater public and private interconnection capacity demand that will fuel new digital corridors from Milan to Mumbai. See which digital corridor your transformation journey will lead you down by reading the GXI report Vol. 5.
[i] Interconnection bandwidth is a measure, calculated in bits/sec, of the capacity provisioned to privately and directly exchange traffic between two parties, inside carrier-neutral colocation data centers.
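As a quick sanity check on the figures quoted above, the arithmetic behind a compound annual growth rate is straightforward. The short sketch below is not from the GXI report itself; it simply shows the starting capacity implied by a 46% CAGR ending at 5,327 Tbps in 2024, and the 75% FLAP share, which lands on roughly the 3,994 Tbps quoted earlier.

```python
# Figures quoted above: EMEA interconnection bandwidth of 5,327 Tbps by 2024,
# growing at a 46% CAGR over five years, with FLAP metros taking a 75% share.
emea_2024_tbps = 5327
cagr = 0.46
years = 5

implied_base = emea_2024_tbps / (1 + cagr) ** years   # capacity five years earlier
flap_share_2024 = 0.75 * emea_2024_tbps               # FLAP portion of the 2024 total

print(f"Implied starting capacity: {implied_base:,.0f} Tbps")      # ~800 Tbps
print(f"FLAP share of 2024 capacity: {flap_share_2024:,.0f} Tbps")  # ~3,995 Tbps
```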
<urn:uuid:e387099c-e7bf-4387-ac38-84bce21bdd29>
CC-MAIN-2022-40
https://blog.equinix.com/blog/2021/11/08/the-new-digital-silk-road-runs-across-these-european-markets/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335444.58/warc/CC-MAIN-20220930051717-20220930081717-00760.warc.gz
en
0.898329
1,696
2.515625
3
German Aerospace Center Using Quantum Computer to Conduct Research to Develop Batteries & Fuel Cells (SpaceDaily) The German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt; DLR) is conducting research into new materials for more powerful batteries and fuel cells by using a quantum computer to simulate electrochemical processes within energy storage systems. This makes it possible to design the materials used in such a way that the performance and energy density of batteries and fuel cells increase significantly. The special thing about QuESt (Quantencomputer Materialdesign für elektrochemische Energiespeicher und -wandler mit innovativen Simulationstechniken; quantum computer material design for electrochemical energy storage systems and converters with innovative simulation techniques) is that it uses quantum computers for a highly application-oriented task in materials research. QuESt thus combines both fundamental and applied research in the field of energy storage. In these simulations, the DLR scientists compare the quantum chemical interactions that occur with various novel materials and electrode structures. They are aiming to achieve the highest possible chemical bonding energies for electrons in batteries. In fuel cells, hydrogen and oxygen should react with each other as efficiently as possible. With the help of a quantum computer, the researchers study how atoms and molecules interact with the different electrode materials in batteries and fuel cells. "Quantum simulations have the potential to revolutionise computer-aided materials design. We want to use them to optimise the chemical compositions of the electrodes and their microscopic structure," says Horstmann. "A quantum computer enables us to study the quantum-chemical processes occurring at the electrodes of batteries and fuel cells with the utmost precision. We are conducting research to find out the best way of programming our quantum computer for that purpose," says Sabine Wölk of the DLR Institute of Quantum Technologies. The QuESt project is using the Fraunhofer Society's IBM quantum computer, which is funded by the German Federal State of Baden-Württemberg. Its qubits are tiny superconducting circuits built around Josephson junctions.
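The article does not describe DLR's algorithms in detail, but ground-state energy estimation of this kind is typically approached with variational methods (such as VQE), in which a parameterised quantum state is tuned to minimise the energy expectation value of a Hamiltonian. The sketch below is a purely classical toy with an arbitrary 2x2 Hamiltonian, meant only to illustrate that principle, not DLR's actual workflow.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy two-level Hamiltonian (units arbitrary). On a quantum computer the
# Hamiltonian would encode the molecule's electronic structure and the state
# would be prepared by a parameterised circuit; here we just use linear algebra.
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def energy(theta: float) -> float:
    """Expectation value <psi(theta)|H|psi(theta)> for a one-parameter state."""
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return float(psi @ H @ psi)

result = minimize_scalar(energy, bounds=(0, 2 * np.pi), method="bounded")
exact = np.linalg.eigvalsh(H)[0]

print(f"Variational estimate:      {result.fun:.4f}")
print(f"Exact ground-state energy: {exact:.4f}")
```

On quantum hardware the trial state would be prepared by a parameterised circuit and the energy estimated from repeated measurements, but the minimise-the-expectation-value loop is the same idea.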
<urn:uuid:ef717862-57cc-4b4a-888f-fb509bc8f4a3>
CC-MAIN-2022-40
https://www.insidequantumtechnology.com/news-archive/germany-aerospace-center-using-quantum-computer-to-conduct-research-to-develop-batteries-fuel-cells/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00760.warc.gz
en
0.875204
449
3.421875
3
One of the fastest-growing segments of the IoT market is healthcare devices. In fact, this industry, also known as the Internet of Medical Things (IoMT), is predicted to explode in the coming years. Before the Internet of Things, patients needed to book physical appointments with doctors or meet through teleconferences. It was exceedingly rare for doctors or hospitals to have the ability to regularly assess patients and make appropriate recommendations based on their health records. But now, thanks to IoT-enabled devices, remote monitoring in the healthcare industry is possible. These devices empower clinicians to provide superior treatment and keep patients safe and healthy. As access to doctors has become easier and more efficient, patient participation and satisfaction have risen. Without question, IoT has significantly benefited the healthcare industry. Let's explore some of those benefits below.
Simultaneous Reporting and Monitoring
In the event of a medical emergency such as heart failure, a diabetic crisis or an asthma attack, remote health monitoring via connected devices can save lives. Connected medical devices can collect health data and use a smartphone's data connection to transfer that information to a physician or a cloud platform, while simultaneously monitoring the patient's condition in real time through a companion smartphone app. According to a recent study, remote patient monitoring of heart failure patients resulted in a 49 percent reduction in 30-day readmission rates. That's because Internet of Things devices capture and transmit health information such as blood pressure, oxygen saturation, blood sugar levels, weight and ECG readings. This data is saved in the cloud and shared with authorized parties, such as a physician, an insurance company, a collaborating health firm, or an external consultant, who can then access the information regardless of location, time or device.
Acceleration of Treatment Processes
Even on a typical day, hospitals lack the personnel and resources to provide all patients with the care they require. Now, in pandemic times, this shortage has reached an alarming new level, one that has exposed serious flaws in various healthcare systems. What's more, the rate at which hospital physicians can perform procedures and surgeries slows down significantly when variables such as a lack of specialists and poorly equipped rooms are factored in. Treatment processes can be accelerated, and more effective health outcomes attained, when IoT technologies are incorporated into health services. IoT allows for automated workflows and accurate data collection, making it possible for healthcare practitioners to offer speedy treatment while lowering the risk of errors. Consequently, this helps to shorten hospital stays and prevent readmissions. A wide range of stationary medical equipment can be leveraged for various purposes, including clinical operations, connected imaging and lab tests, patient health monitoring, drug delivery, and medication management, among others. Indeed, the growing number of linked medical devices is changing how healthcare services are delivered. Telemedicine, or telehealth, has made healthcare more dynamic and patient-centered. The volume of data generated by linked medical devices increases as the number of devices increases. Healthcare networks and equipment are growing smarter and more complex, necessitating more efficient, convenient, and cost-effective solutions.
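A minimal sketch of the monitoring idea described above might look like the following. It is illustrative only: the thresholds are made up and have not been reviewed by any clinician, and in practice alerting rules would be set per patient by the care team.

```python
from dataclasses import dataclass

# Illustrative thresholds only -- real alerting rules would come from clinicians
# and would be tailored to the individual patient.
THRESHOLDS = {
    "heart_rate_bpm": (50, 120),
    "spo2_percent":   (92, 100),
    "systolic_mmhg":  (90, 160),
    "glucose_mg_dl":  (70, 180),
}

@dataclass
class Reading:
    patient_id: str
    vitals: dict

def check_reading(reading: Reading) -> list:
    """Return a list of out-of-range vitals for a single device upload."""
    alerts = []
    for name, value in reading.vitals.items():
        low, high = THRESHOLDS.get(name, (float("-inf"), float("inf")))
        if not (low <= value <= high):
            alerts.append(f"{name}={value} outside [{low}, {high}]")
    return alerts

if __name__ == "__main__":
    upload = Reading("patient-042", {
        "heart_rate_bpm": 134, "spo2_percent": 95, "systolic_mmhg": 150,
    })
    for alert in check_reading(upload):
        # In a real deployment this would notify the care team, for example via
        # a message queue feeding the provider's clinical dashboard.
        print(f"ALERT {upload.patient_id}: {alert}")
```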
Healthcare networks and equipment are growing smarter and more complex, necessitating more efficient, convenient, and cost-effective solutions. Improved Patient Experience and Outcome Most hospitals have measures in place to ensure the best experience possible for their patients. Unfortunately, the environment in hospitals and other care facilities often can be unpleasant for patients. According to a study published in the Journal of Health Environments Research and Design, patients and their families value privacy, accessibility, and comfort in hospital rooms. Another priority mentioned was security. According to studies, improved patient comfort can lead to less stress and a speedier recovery. IoT technologies such as smart thermostats and adjustable lighting controls offer increased comfort for patients and control for caregivers in hospital rooms. Brightening a room aids caregiving chores and procedures, while dimming the lights creates a quieter and more comfortable environment for patients. Similarly, automated window blinds allow patients to benefit from the health and emotional benefits that sunshine can provide. Meanwhile, sensors in beds can track sleep patterns and alert personnel to any problems. All medical equipment must meet certain standards and be inspected on a regular basis. As a result, IoT preventive maintenance technology helps to eliminate service interruptions caused by missed service checks. For example, devices such as blood pressure monitors, bladder scanners and wheelchairs that need to be inspected or repaired can be easily located and scheduled, thanks to computerized maintenance management system (CMMS), smart QR code scanners, and GPS tracking technologies. The Internet of Things is changing the way we think about and access healthcare. When it comes to delivering healthcare solutions, myriad IoT apps and devices have transformed the way healthcare providers and patients interact. Even more importantly, IoT has helped lower healthcare expenses and improved treatment outcomes.
<urn:uuid:162bbb4e-82a1-4447-86cf-4b095ba79da0>
CC-MAIN-2022-40
https://www.iotinnovator.com/rx-for-success-the-internet-of-medical-things-is-a-game-changer-for-patients/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337480.10/warc/CC-MAIN-20221004054641-20221004084641-00760.warc.gz
en
0.936964
1,011
2.875
3
IBM listed technologies based on the five human senses in its 2012“5 in 5” — five innovations that will change lives within five years. The predictions include advances in cognitive systems to unlock the door to touch, sight, hearing, taste and smell. The list that IBM makes each year doesn’t reflect what its engineers, scientists and developers want under the tree, but where they see technology making some of its biggest advances in the near term. They reflect “a lot of things going on in the laboratory for the year, or in some cases several years,” IBM CTO of Telecom Research Paul Bloom told TechNewsWorld. There are teams “doing a lot of work around video and multimedia analytics,” he said, that helped shape the list this year. Technologies for computing the five senses have their basis in cognitive systems that allow computers to learn, adapt, sense and form conclusions with less human input. “This year we’re focusing on the next era of computing — cognitive computing,” said Bloom. “The reason we focus on the senses is because the senses stimulate action in the human. We’re not looking to replace the human with a machine. These are adjunctions to a brain-assist system.” Developments occurring in computing today are allowing technologies using cognitive systems to progress at a faster rate. “We see advances in supercomputing, in nanotechnology and in neuroscience that are coming together that allow us to really understand how the brain works,” Bloom said. C’mon, Touch Me Advances in haptic, infrared and pressure-sensitive technologies allow devices such as smartphone screens to replicate the feel of an object. Vibrations can emulate the feel of a silky fabric, rough Velcro or other surface, for example. “Think about how e-commerce will change if one has the ability to reach out and touch something,” said Bloom. IBM sees these technologies being applied not only in e-commerce, enabling consumers to feel the texture of a garment, but also in the medical field, allowing a doctor to assess a wound remotely. The Eyes Have It While touch tech will allow users to get a better feel, literally, of virtual objects, advances in sight technologies will give computers superhuman vision. Computers see images, photographs and video as pixels, and they rely on tags to determine what is in an image. IBM and other companies are developing systems that let computers analyze images pixel by pixel. In medical applications, computers will be able to analyze medical imaging technologies from MRIs, CT scans, Xrays and ultrasounds to identify tumors, for example. This technology could allow computers to detect problems that humans aren’t able to discern in standard images. “A system will be able to understand and read an Xray and be able to see things that a human can’t,” Bloom said. “A system like this could be more accurate than the capabilities of a human being.” In addition to medical applications, the technology could find uses in creating smarter cities. “Every city is starting to deploy cameras for crowd control and traffic,” Bloom said. Think now if these cameras are attached to a system that understands what it’s seeing. Integrating all this information, it can really change the way cities get managed.” Computers listening for sounds, vibrations, sound waves and even sound pressure will be able to detect and predict events in ways that humans can’t. Sensors along fault zones could provide early warnings of earthquakes, for example, allowing areas to be evacuated before actual seismic activity started. 
Sound sensors, combined with other sensors to measure vital signs, might give babies a computerized “voice.” Computers could decipher “baby talk” by listening to infants coo while observing physiological information such as heart rate, pulse and temperature. This information could determine whether a fussy baby is hungry, tired, in pain, or in need of attention. “People have been trying to figure out when a baby babbles, what it’s saying,” noted Bloom. “If we can now collect information and correlate it, we will be able to translate that babble.” The same technology could be applied to people with communication disabilities. A Matter of Taste A deeper understand of taste, down to the molecular level, can help program foods to be more appealing, perhaps enabling people to get more enjoyment out of healthy foods. “This is creating digital taste buds that will help us eat smarter,” said Bloom. “We will be able to provide ways of making broccoli more palatable.” The technology uses algorithms to determine structure to craft flavors, or to understand why one person likes a flavor that others do not. The technology can be used to add more appealing flavors to healthy foods. It could also be used to help people with diabetes — or those who follow strict diets for any reason — find their meals more appealing. The Smell Test Computers of the future might be armed with smell receptors to detect the presence of germs and illness, or replicate the smell of objects, such as produce, to bolster e-commerce. IBM is working on technology that will detect the presence of antibiotic-resistant bacteria such as Methicillin-resistant Staphylococcus aureus, or MRSA, in a room. In IBM’s labs, other scientists are working on technology for cellphones that will detect the early onset of the common cold or other health conditions from your breath as you talk on the phone. “There is some work going on in the near term in this area, where a sensor can detect odors, biomarkers, and different molecules in someone’s breath,” Bloom explained. A person can speak into a phone, for example, and “the phone can say there’s an 80 percent chance of coming down with a cold.” The Road Ahead IBM’s 5 in 5 predictions are based on work going on in its labs and at other companies to develop cognitive computing systems and technologies that replicate and augment the five senses in new ways. “What we’ve seen is the amount of processing capability and amount of storage space is so great, you need the next generation in computing to handle this in real time,” said Bloom. “As we look to where new technologies bring us, it allows us to take our limitations away and allows our imaginations to see what life will be like in five years.” These advances are meant to enhance the work of humans, not replace it. Computers will be able to detect and analyze beyond human abilities. “Beyond mimicry, the task is information synthesis and sensemaking, and even artificial sensation production, in an effort to retool human-machine interfaces to more naturally suit the ways people experience the world and consume and use information,” Alta Plana Corporation founder Seth Grimes, an industry analyst who covers sentiment and content analysis technologies, told TechNewsWorld. The advancements covered in cognitive computing have near-term uses, but also look down the road for more ways computers can aid humans in their daily lives. 
“What IBM is envisioning in the near term is advanced computers that could have these capabilities to provide systems to support human beings,” Institute for Global Futures CEO James Canton, Ph.D., told TechNewsWorld. “It’s not just about systems that can imitate human sensation — it’s about the higher order of cognition that I think is the ultimate endgame for cognitive computing,” he pointed out. This is the seventh year that IBM has published a list of five innovations that will change our lives in the next five years.
<urn:uuid:58c739d6-8968-49bf-95e2-7a8409615a4c>
CC-MAIN-2022-40
https://www.linuxinsider.com/story/ibm-computers-are-going-to-start-making-a-lot-more-sense-76869.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335058.80/warc/CC-MAIN-20220927194248-20220927224248-00160.warc.gz
en
0.937345
1,650
2.75
3
As the world evolves, so does cyber security. With emerging technologies like Kubernetes, there is a need to be aware of the risks involved in using these tools. If not managed properly, they can cause serious damage to your organization's cyber security posture. Following recent data breaches and cyberattacks, companies have increased their investment in cyber security solutions. In fact, the global cyber security market is expected to reach $478.68 billion by 2030, growing at a rate of 9.5% from 2021 to 2030.
The basic idea behind Kubernetes is to create an environment where you can deploy your applications without hassle: you write your code and let Kubernetes handle the rest. But there are loopholes in Kubernetes that can compromise your cyber security posture if they are not handled properly. Therefore, if you are running Kubernetes as part of your operations, your company's security posture will depend on how well you manage your own implementation of Kubernetes.
Kubernetes Exploitable Loopholes
Kubernetes is a container management platform used by many companies to manage their applications. However, Kubernetes also has several exploitable loopholes that can hamper the cyber security of an organization. A recent survey shows that 46% of participants use Kubernetes to manage, scale, and automate application deployment. Given the vast adoption of Kubernetes by companies around the world, these vulnerabilities increase the attack surface and put organizations at risk.
The first of these loopholes is weak or misconfigured authentication and authorization. If the control plane (the API server on the master nodes) is exposed with overly permissive settings, anyone who can reach it may be able to act with far broader privileges than intended, which could result in data breaches.
The second loophole is the lack of encryption for sensitive information such as passwords and keys. By default, Kubernetes Secrets are only base64-encoded rather than encrypted at rest, so anyone who obtains them can read credentials directly and use them to reach further sensitive data such as credit card numbers, leading to another type of data breach.
The third loophole is that, by default, there are no restrictions on which containers can communicate with each other. Even if you are running Kubernetes in your own private data center, any pod can talk to any other pod until network policies are put in place to restrict that traffic.
The Cloud-Level Security of Kubernetes
Kubernetes is a powerful tool for managing containers and their dependencies. It has been designed to be a highly available, self-healing system that can run on any cloud infrastructure or even on physical servers. Kubernetes is at the core of many enterprises' and startups' cloud-native application development strategies, but it is also important to understand how vulnerabilities in the cloud-level security of Kubernetes can hamper cyber security. The first step is to identify some of the common vulnerabilities, which include:
- Weak default configuration settings (kubelet)
- Security misconfiguration (API server)
- Insecure access control (authentication/authorization for accessing Kubernetes components)
- Insecure data transfer (lack of encryption for sensitive information in transit between components)
The Container-Level Security of Kubernetes
The container-level security of Kubernetes is one of the most important aspects to consider when deploying a Kubernetes cluster.
This is because it helps ensure that only authorized users have access to the network and data. Container-level security can be compromised if there are vulnerabilities in the container's lifecycle management.

A container is a lightweight virtual environment used for running applications, services, or processes. It can contain an operating system, applications, and the libraries required to run an application. Containers isolate their content from other containers, which provides an additional layer of protection against malware infections. A vulnerability in Kubernetes container security can lead to theft of sensitive data or even attacks on other servers connected to your network.

The Application-Level Security of Kubernetes

Kubernetes has been designed to provide security at both the application level and the infrastructure level. Application-level security is provided by Kubernetes service accounts, which give each pod its own identity and access control rules. Infrastructure-level security is provided by Pod Security Policies (PSP, now deprecated in favor of Pod Security admission), which restrict the security-sensitive settings that pods running on the cluster may use. However, there are some loopholes here that can compromise the cyber security posture of organizations using Kubernetes as their container management platform:
- Insecure Access Control Mechanism – Service account tokens are mounted into pods by default, and overly permissive role bindings can effectively give every workload read/write access to cluster objects. Attackers can exploit this to gain unauthorized access to sensitive information stored in those objects.
- Lack of User Accountability – Unless audit logging is enabled, there is no straightforward way to track who created or modified an object.

In this article, we discussed how loopholes in Kubernetes can threaten the cyber security posture of a company, and analyzed some of the major vulnerabilities that can hamper, or even nullify, efforts to protect a company's data. To conclude, it is important to understand that vulnerabilities are not always the result of malicious action. Most of them are caused by human error or negligence, and most companies do not have enough time and resources to ensure their systems are secure against common threats and attacks.
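To make the unrestricted pod-to-pod communication loophole above more concrete, here is a minimal audit sketch. It is an editorial illustration rather than anything from the original article: it assumes the official `kubernetes` Python client is installed and a kubeconfig for the cluster is available, and it simply lists namespaces that contain no NetworkPolicy at all, i.e. namespaces where pod traffic is still unrestricted by default.

```python
# Editorial sketch (assumptions: the official `kubernetes` client is installed,
# a kubeconfig is available, and the account can list namespaces and policies).
# It flags namespaces with no NetworkPolicy, i.e. where the "any container can
# talk to any container" loophole described above still applies.
from kubernetes import client, config

def namespaces_without_network_policies():
    config.load_kube_config()   # use config.load_incluster_config() inside a pod
    core = client.CoreV1Api()
    net = client.NetworkingV1Api()

    all_namespaces = {ns.metadata.name for ns in core.list_namespace().items}
    covered = {np.metadata.namespace
               for np in net.list_network_policy_for_all_namespaces().items}
    return sorted(all_namespaces - covered)

if __name__ == "__main__":
    for ns in namespaces_without_network_policies():
        print(f"namespace '{ns}' has no NetworkPolicy: pod-to-pod traffic is unrestricted")
```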
https://www.cyberdb.co/how-loopholes-in-kubernetes-can-threaten-the-cyber-security-posture-of-a-company/
The Basics Of Cloud Computing

In the most basic terms, cloud computing provides on-demand computing services over the internet for things like storage, processing power, and application hosting. Instead of building their own data centers or computing infrastructure, a task that is both extremely expensive and highly technical, companies can work with cloud service providers to gain access to the on-demand applications they need. Companies pay only for what they need, when they need it, rather than devising a complex IT system that requires constant maintenance and investment.

Cloud Computing Services

There are many things that can be done through the cloud, and commercial applications are not the only things taking advantage of cloud storage and hosting. Consider the applications on the average iPhone and the storage options for media, files, and gameplay. Computing services range from complex processing with artificial intelligence to basic data storage or office applications like Microsoft Word or Excel. Cloud services can support any service that does not require direct or physically close contact with a specific piece of computer hardware.

Cloud Computing Examples

Thanks to improvements in 5G core network services, many smartphone users rely on cloud computing every day. Email services like Gmail are accessed through smartphones or tablets, with the service's data hosted in the cloud. Netflix and Disney+ are streaming services whose videos and media are stored in the cloud and accessed by millions of individuals around the globe. Software development is also moving toward cloud-based applications, where installation disks are no longer required on every computer that needs access to a specific program.

Cloud Computing Importance

The cloud is not new to the IT scene, but it is quickly becoming one of the most important areas of IT development and enhancement. More than one-third of all the money spent worldwide on IT services involves cloud computing infrastructure. The leading cloud service providers are spending heavily to continue expanding access, and the average business is also spending money to move workloads and processes to cloud-based solutions. Enterprises with enough resources are also building private cloud systems for their organizations.

Cloud Computing Security

As with any area of IT, security concerns abound in cloud computing and services. A cloud environment faces security threats similar to those of a traditional data center. The software systems running the service have vulnerabilities that could be exploited by the wrong individual or group. The challenge with these vulnerabilities is defining who bears responsibility if a cyberattack successfully occurs. Cloud service providers and consumers share responsibility for data security, but consumers have the least control over the provider's security protocols. Cloud security originates with the service provider, but the consumer should follow best practices for access and authorized use. On-premises IT departments can provide monitoring and logging of all users, accessed applications, and data storage for a specific company, adding another layer of security on top of the cloud host's defenses against cyberattack. Because cloud computing is growing so rapidly, security concerns will continue to be prioritized to ensure the safe storage and transmission of data around the globe and within private companies.
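One concrete way consumers can act on the shared-responsibility point above is to encrypt sensitive data locally before it is ever handed to a cloud provider, so the provider only ever stores ciphertext. The sketch below is an illustration under assumptions (the third-party `cryptography` package is installed; the sample payload is hypothetical), not a recommendation tied to any particular provider.

```python
# Client-side encryption sketch: data is encrypted locally before upload, so the
# cloud provider only ever stores ciphertext. Assumes the third-party
# "cryptography" package is installed; the sample payload is hypothetical.
from cryptography.fernet import Fernet

def encrypt_for_upload(plaintext: bytes):
    key = Fernet.generate_key()              # keep this key outside the cloud
    ciphertext = Fernet(key).encrypt(plaintext)
    return key, ciphertext

if __name__ == "__main__":
    key, blob = encrypt_for_upload(b"contents of quarterly-report.xlsx")
    # "blob" is what would be handed to the storage service; "key" stays local.
    print(f"{len(blob)} encrypted bytes ready for upload")
    assert Fernet(key).decrypt(blob).startswith(b"contents")
```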
Cloud Computing Models

There are three models used in cloud computing. The first is Infrastructure-as-a-Service (IaaS), where the foundational building blocks of computing are rented: virtual or physical servers, storage, or networking. The next layer is Platform-as-a-Service (PaaS), which includes those foundational elements but adds the software and tools a developer needs to build applications on top of them, such as operating systems, database management systems, and middleware. Software-as-a-Service (SaaS) is the delivery of a complete application that is consumed directly by end users. The most dominant cloud computing model is Software-as-a-Service, as this is where most of the public spends its money. However, it is still early in cloud research and development, and commercial spending is expected to rival the public's dependence on cloud services before long.
https://www.hostreview.com/blog/210510-the-basics-of-cloud-computing
OpenSSH, a suite of networking software that allows secure communications over an unsecured network, is the most common tool system administrators use to manage rented Linux servers. And given that over one-third of public-facing internet servers run Linux, it shouldn't come as a surprise that threat actors exploit OpenSSH's popularity to gain control of them. Nearly five years ago, ESET researchers helped to disrupt a 25,000-strong botnet of Linux machines that were saddled with an OpenSSH-based backdoor and credential stealer named Ebury. The attackers wielding it first checked whether other SSH backdoors were present on a targeted system before deploying the malware. This spurred the researchers to search for and analyze these types of (server-side OpenSSH) backdoors. They found a wide spectrum of complexity in backdoor implementation, ranging from off-the-shelf malware to obfuscated samples and custom network protocols, but all of them are the result of modifying and recompiling the original portable OpenSSH source used on Linux. They also found that there are multiple code bases behind the various backdoors, but that most of them share similar basic features (e.g., hardcoded credentials to activate a backdoor mode, credential stealing). All of the collected samples copy stolen credentials to a local file, even though attackers must then log back onto the compromised machine to retrieve it. Some of the malware families are also capable of exfiltrating the credentials over the network. Read more: Help Net Security
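Because the backdoors described above are produced by modifying and recompiling OpenSSH, the installed sshd binary no longer matches what the distribution shipped. One basic defensive check, sketched below, compares the binary's SHA-256 hash against a baseline recorded out-of-band. This is an editorial illustration rather than ESET's tooling; the binary path and baseline file are assumptions, and package-manager verification (for example `rpm -V openssh-server` or `debsums`) is the more common route in practice.

```python
# Editorial sketch: compare the installed sshd binary against an out-of-band
# baseline of known-good SHA-256 hashes. The binary path is the typical
# location and the baseline file is a hypothetical, administrator-maintained
# artifact; adjust both for the distribution in question.
import hashlib
from pathlib import Path

SSHD_PATH = Path("/usr/sbin/sshd")                       # typical, not universal
BASELINE = Path("/var/lib/ssh-baseline/sshd.sha256")     # hypothetical baseline

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    current = sha256_of(SSHD_PATH)
    trusted = {line.strip() for line in BASELINE.read_text().splitlines() if line.strip()}
    if current not in trusted:
        print(f"WARNING: {SSHD_PATH} hash {current} is not in the trusted baseline")
    else:
        print("sshd matches the recorded baseline")
```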
https://www.globaldots.com/resources/blog/old-and-new-openssh-backdoors-threaten-linux-servers/
On May 16, 2017, Governor Jay Inslee signed into law H.B. 1493—Washington’s first statute governing how individuals and non-government entities collect, use, and retain “biometric identifiers,” as defined in the statute. The law prohibits any “person” from “enroll[ing] a biometric identifier in a database for a commercial purpose, without first providing notice, obtaining consent, or providing a mechanism to prevent the subsequent use of a biometric identifier for a commercial purpose.” It also places restrictions on the sale, lease, and other disclosure of enrolled biometric identifiers. With the new law, Washington has become only the third state after Illinois and Texas to enact legislation that regulates business activities related to biometric information. Although the three laws seek to provide similar consumer protections around the collection, use, and retention of biometric data, the Washington law defines the content and activity it regulates in different terms, and, similar to Texas, but unlike Illinois, the Washington law does not provide a private right of action. The Washington statute, as compared to existing biometrics laws, is notable for its definition of “biometric identifier.” In the law, a “biometric identifier” is “data generated by automatic measurements of an individual’s biological characteristics,” including “fingerprints, voiceprints, eye retinas, irises, or other unique biological patterns or characteristics that is used to identify a specific individual.” Washington’s definition of “biometric identifier” may be broader than that in the Texas statute, but Washington’s definition does not specifically provide for a “scan of hand or face geometry,” as is the case in the Illinois statute. Washington’s definition of “biometric identifiers” specifically excludes “physical or digital photograph, video or audio recording or data generated therefrom” (in addition to certain health-related data), suggesting the statute will have limited application in the context of facial recognition technology. Washington’s regulation of “enrolling” also appears to make the law unique. The other two state laws do not appear to regulate the multi-step process of “enrolling,” which is defined as an activity “to capture a biometric identifier of an individual, convert it into a reference template that cannot be reconstructed into the original output image, and store it in a database that matches the biometric identifier to a specific individual.” Illinois and Texas place requirements on the single activity of collecting or capturing biometric identifiers. Washington’s biometric law is furthermore notable for its provisions on “notice” and the enumerated exceptions it provides to its prohibition on selling, leasing, or “otherwise disclos[ing]” enrolled data “to another person for commercial purposes.” Indeed, the law provides that “[t]he exact notice and type of consent required to achieve compliance . . . is context-dependent” and only requires that notice be “given through a procedure reasonably designed to be readily available to affected individuals.” Moreover, “[n]otice . . . is not affirmative consent.” Compared to the analogous provisions from the other states, Washington appears to have a more defined standard than Texas, but less than Illinois, which mandates a “written policy” and the receipt of written authorization. 
As for regulation of the sale, lease, and disclosure of collected data, the Washington law has the longest enumerated list of exceptions to the prohibition on such activity. The three statutes all impose additional standards on the retention and disposal of biometric identifiers. Significantly, the Washington statute only allows for enforcement by the attorney general under the state’s consumer protection act. Because the Texas law also only provides for enforcement by the attorney general, Illinois remains the only state with a biometric statute that includes a private right of action. Washington is only the third state to enact a biometric law, but legislatures in other states around the country are considering similar bills, as they try to address innovations in a rapidly growing space. As of this posting, those legislatures include Alaska, California, Massachusetts, and New Hampshire. In Washington, there is additional activity around the collection and use of biometric identifiers. On the same day that Governor Inslee signed H.B. 1493, he also signed H.B. 1717, which covers government agencies. Both laws go into effect on July 23, 2017.
https://www.insideprivacy.com/united-states/state-legislatures/washington-becomes-the-third-state-with-a-biometric-law/
The Social Power of Data

Data Protection is about Control

Today's advertising is a complex, interconnected infrastructure. You control your data, and understanding what that means today, along with how it affects your privacy, is extremely important for your daily life and your future. Owning your data and keeping it private means that you control your data and your destiny. Years ago, privacy meant something very different from what it means as applied to your personal information today, and the harm that can come from the abuse of your digital data or other violations of your privacy takes many forms.

It's just another day in your life, and every move you make is being monitored. The calls you make send digital information to your provider and add to the database they maintain on you and your phone calls: they know when you made the call, whom you called, what time of day, and so on. Google chimes in and gives up your location data. Facebook throws in any relevant data it has on you as well. As your cell phone's microphone picks up words in your conversations, you start to see advertising targeted directly at you. You and your mom have been having conversations about adopting a dog from a local shelter. Suddenly, you're aware that ads for pet products are creeping into every aspect of your online world. Do you need a new dog bed? How about some Science Diet dog food? You don't recall seeing these things before, and the fact is, you haven't even adopted that dog yet. So, how did they know?

Your Data in a Pandemic

Controlling your personal information puts the power in your hands. There is real strength in the ability to keep your data under lock and key. Now that the world is facing a global pandemic, who is after your personal information? Don't laws protect you from having your privacy violated through the use of your data? That can depend on how tightly you keep the reins on your data. Authorities and other health officials can only understand the virus and how it affects the population if they have access to data on its disease course and the rate of spread throughout the community. While it would be nice if they asked you for permission and asked you specific questions, allowing you the choice of what information you'd like to give, it doesn't work that way.

Hospitals, schools, and digital platforms have been increasingly stressed with patient information while frightened citizens search for answers, and with millions of people, children included, turning to the internet full time for their education, requests by companies and government agencies for information on patients and users have only multiplied. What about privacy laws like HIPAA (the Health Insurance Portability and Accountability Act) and FERPA (the Family Educational Rights and Privacy Act)? These laws apply differently during national emergencies and declared global pandemics. When public health is at risk, much of your privacy goes out the window; the only way for the government to organize a responsible national response is to work from accurate information. When asked health questions directly, many people hide the facts, sometimes unintentionally. Health questions can be embarrassing or even unsettling, so patients may not give complete answers. But when it's a war against an invisible threat ready to take out a percentage of the population, there's no time for half-truths.
During a time of national emergency or a declared global pandemic, privacy laws are limited. Health records are open to those agencies with access, and while the rules are temporarily relaxed, third parties find ways to access this information, too.

Using Inclusive Disruption to Grow

Companies look for distinctive ways to grow their customer bases, and how they obtain your personally identifying data is of little consequence to them. Privacy laws are not advanced enough to restrain the abuse, and the fines are not large enough to deter companies from abusing data anyway, because the profit margin exceeds whatever they lose in court for each data point they are fined over. Even in parts of the world where internet access through computers is limited, cell phone data can still be traced and quantified into databanks. No one on any part of the globe is immune from data abuse anymore. Where the major players are concerned, a lack of existing data is no excuse for excluding a region; it is a reason to expand into it and set themselves up for future profits.

Technology does not have to be sophisticated to have an impact on society. In barely reachable areas of the world, such as Ghana and Kenya, where there may be little available electricity, these powerhouse companies are using inclusive disruption as a means to grow their ever-expanding empires. Using non-profits to enter the regions, they bring information to residents via easy-to-use Talking Book audio devices. These are designed specifically for the area, made to reach even those who can't read, and run on locally available batteries. The 'Talking Books' use a cloud-based technology platform with an audio content manager. The apps provide playlists but also collect data and feedback from the field, and an analytics dashboard supports monitoring and evaluating the data from every community. Although this is presented as a method of user engagement and education, the reality is that the World Bank is collecting data for further expansion in different economic regions of the world.

In the end, it doesn't matter how far you run. If you are connected to an internet device of any type, then you are being tracked. You are traced through network applications, social networks, and satellites. The only way to take back control and have the power in your hands is to limit the data you provide. It's your data, and it's your personal information; if you want to have the power, don't give it away.

Data and Diversity

Taking control of your personal data is one way to protect yourself from bias. Consider that AI, or artificial intelligence, is poised to infiltrate all areas of our lives over the next several years. AI will control everything from essential customer service roles to financial advice, job recruitment and placement, and even medical services. Should it concern us that the global community of AI professionals is only 22% female and 78% male? Could bias affect you in your daily life? It is projected that by 2025, 95% of all customer interactions will be driven by AI, and the AI developed for these systems will mimic humans so closely that customers will be unable to tell bots from human workers, whether they are using chat or phone services. Today, in 2020, 30% of all B2B companies employ AI to supplement their primary sales operations, and 80% of European companies are integrating AI into their customer and business analytics. If AI is so great, then what is the problem?
The problem is that we, as humans, are imperfect, and humans built these artificial intelligence systems. Therefore, they, too, are flawed. The risk is that rather than solving the problem of gender bias, AI will intensify it. Artificial intelligence uses machine learning and algorithms built by humans to learn from real-world data, and it can, and will, inadvertently amplify existing gender biases. It is predicted that by 2022, only two years away, 85% of AI projects will deliver inaccurate outcomes due to bias in data. This can be the result of the machine learning itself, the algorithms, or even the teams charged with overseeing the management of the AI. Right now, gender bias shows up in advertising: ads for the better-paying jobs that widen the pay gap are targeted toward men, while ads for virtual assistant roles and other lower-paying, subservient office jobs are targeted toward women. Artificial intelligence portrays these positions as 'female-oriented' at the same time that more women than men are forecast to lose their jobs to automation.

Mind Your Manners

Brands are paying close attention not just to your gender but to behavioral data, so it may be a good idea to start minding your manners: you are literally under surveillance. To market their products, brands realize that if they don't pay attention to the behaviors of their customers, they miss out on potential marketing strategies to sell their wares. Companies are out there getting to know you on a deep and personal level. They may even know you better than you know yourself.

Behavioral data differs from static data. Static data does not change once it has been recorded: whom you called, what time it was, where you were located when you placed the call. Behavioral data, by contrast, is dynamic or fluid; it can change over time. Suppose a customer likes ice cream, and the data shows that the customer purchases a particular variety regularly. The customer then decides to make dietary changes and lose weight, and one of those changes is buying far less ice cream. The data has changed based on the customer's behavior.

Companies pay a great deal of money for profiles from social media accounts to learn about the behaviors of their customers. Ever wonder what information is purchased about you and why? Examples of behavioral data include tracking the web pages that you visit, getting to know you on a very personal level. Databases define you by the people you associate with on social media, who and what you 'like,' and what items you purchase online from different retailers. The data file that corporations have on you is far more extensive than your physician's, or even your psychiatrist's if you have one. They then turn around and use that same data to manipulate you: drowning you in ads tailored to make you see the world in a specific way, and offering you only the products that they believe you deserve, need, or would be willing to purchase. The guy on your left? He's got a different show going.

Who's Following You?

At this point, everyone. There is no need to get paranoid, though: you can take control. Taking control of the data that you send, receive, and release online is a way to take power back into your hands and away from the corporations who may want to use it in ways that are malicious or manipulative.
Taking back control opens the world up to you for even greater possibilities. Without constraints and bias, you have more freedom. You are offered more opportunities in life. Take time to read more about taking control of your own personal information and what you can do to shut down the corporate sucking of data at your expense. Sometimes it might require hitting the ‘off’ button. That’s ok. Many of us need to remember that there is a world of possibilities – outside, in nature; we don’t need to have a leash. Get out and take your dog for a walk. Set yourself free.
https://caseguard.com/articles/the-social-power-of-data/
Carnivores, Herbivores & Omnivores Animals - Information and Facts

There are three types of animals by diet: herbivores, carnivores and omnivores.

Carnivores are flesh-eating animals. This group includes a variety of mammals such as cats, dogs, wolves, lions, tigers, and cheetahs. Most carnivores generally live alone, but many of them also hunt in small groups. Carnivores usually feed on herbivores, but many carnivores will also attack and eat other carnivores. The bigger the carnivore, the more it has to eat. The largest land carnivore is the polar bear. It is one of the few animals known to actively hunt humans.
- The least weasel is the smallest living carnivore, with an overall length of about 8 inches and a weight of about 1.5 ounces.
- The grizzly bear, a type of brown bear, is among the largest land carnivores, weighing up to 850 pounds and reaching a length of up to 8 feet.
- Carnivores sit at the top of the food chain.
- Carnivores are divided into pinnipeds (fin-footed) and fissipeds (land-dwelling).
- Carnivores are not able to move their jaws from side to side very easily.

Herbivores are animals that eat mostly plant material. They are also called primary consumers. Herbivores are further subdivided into several types, such as frugivores (fruit-eating animals), folivores (leaf-eating animals), and nectarivores (nectar-eating animals). Herbivores usually have blunt teeth that are useful for stripping leaves, twigs, and similar material. Herbivorous birds do not have teeth to mince the vegetation they eat.
- The moose is a large herbivore that eats many kinds of plants and fruit.
- Many herbivores have a digestive system that helps them get the most out of the plants they eat.
- The bee is a small pollinator that uses nectar and pollen from some kinds of plants to make honey.
- The stegosaurus and apatosaurus were herbivorous dinosaurs.
- Herbivores spend more time eating than doing anything else.

Omnivores are animals whose teeth allow them to eat both plants and animals. Pigs, bears, foxes and chickens are examples of omnivorous animals. Because of their feeding habits, omnivores adapt easily to different environments. Omnivores have less specialized teeth than carnivores and herbivores. Some omnivores are pollinators, which play a very important role in the life cycle of some kinds of plants.
- Some omnivores eat the eggs of other animals.
- Omnivores cannot digest every kind of plant; they favor fruits, grains, and other easily digested plant parts.
- Because omnivores eat plants as well as meat, they are able to survive in many environments.
- Omnivores do not eat all kinds of plants.
- The housefly is a scavenger that also feeds on fruit and other plant matter.
- Black bears and grizzly bears belong to the order Carnivora, but they are omnivores.
https://www.knowledgepublisher.com/article/1124/carnivores-herbivores-omnivores-animals-information-and-facts.html
The idea of machine learning sounds like a science fiction thriller or action movie where a computer takes over the world. It rarely goes well for the humans (remember I, Robot?).

copyright by www.forbes.com

Machine learning is a form of artificial intelligence, and if you ask a computer scientist, you will get a highly technical answer that involves algorithms, pixels and both supervised and unsupervised learning. In simpler terms, a machine "learns" by looking for patterns among massive loads of data, and when it sees one, it adjusts the program to reflect the "truth" of what it found. The more data you expose the machine to, the "smarter" it gets. And when it sees enough patterns, it begins to make predictions. Unlike humans, however, machines cannot generalize knowledge or transfer learning from one application to another.

Pure analytical calculating

According to SAS, a computer learns "from previous computations to produce reliable, repeatable decisions and results." Machines are purely analytical, and despite what filmmakers might suggest, they do not form opinions about the fate of humans. On the other hand, they far outstrip even the smartest humans at raw computation and can complete in minutes calculations that would take hundreds of data scientists a year to finish.

The idea of machine learning is not new; the term was coined by Arthur Samuel back in 1959. However, we only recently started realizing its potential, when technology became capable of gathering massive amounts of data. By marrying that data to affordable computers with tremendous processing power and inexpensive storage, the age of machine learning was born.

Applications from basic to futuristic

Applications for artificial intelligence are proliferating in every industry. From your bank detecting fraud on your checking account within seconds to your Facebook newsfeed prioritizing information from the people you "like" the most, artificial intelligence is already part of your life, whether you know it or not.

Artificial intelligence is also used in search and recommendation engines. For example, Rakuten, the largest e-commerce site in Japan, uses the technology to analyze uploaded photos in order to suggest clothing and accessories to shoppers. So to answer that age-old question: yes, artificial intelligence can help you find the perfect sweater, and you won't have to spend hours trying to guess the right terms to use on Google or Pinterest.

Perhaps the most futuristic (and controversial) applications are self-driving vehicles. Google's driverless cars are a frequent sight in the Bay Area. Last year, a self-driving truck built by Uber's unit Otto made a beer delivery in Colorado with only the help of a police escort, a task that clearly required rapid analysis of data from all of the vehicle's sensors.
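To ground the "learning from patterns" description above, here is a minimal illustration (not taken from the article) using scikit-learn: the model is shown labeled examples, fits itself to the patterns it finds, and then predicts labels for data it has not seen. It assumes scikit-learn is installed and uses a toy data set bundled with the library.

```python
# Minimal "learn from examples, then predict" sketch. Assumes scikit-learn is
# installed; uses the bundled iris toy data set rather than any real workload.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000)  # adjusts its parameters to fit the patterns it finds
model.fit(X_train, y_train)                # "learning" from previously seen examples

print("accuracy on unseen examples:", model.score(X_test, y_test))
```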
https://swisscognitive.ch/2017/05/25/how-does-a-machine-learn/
The Overwhelming World of Smart Cities and Cyber-Kinetic Threats For most people around the world, cities are central to our lives and security. More than half of the world’s population lives in urban areas and that percentage is expected to grow dramatically over the coming decades. Along with this influx into cities, though, come many challenges. Increases in energy consumption strain remaining resources. Increasing traffic congestion adds more pollution and lowers quality of life. Larger populations produce more solid waste and wastewater that must be removed. In response to these growing challenges, cities are turning increasingly to smart technologies and applying them to all aspects of city life. These technologies offer the promise of increased efficiencies that can reduce the burgeoning problems of urban growth. At the same time, though, smart technologies create challenges of their own. Vulnerabilities to cyber-kinetic attacks accompany smart systems, putting systems critical to our lives at risk of massive disruption. Furthermore, the massive volume of data that our interactions with smart systems generate create serious threats to individuals’ privacy if not adequately protected. Making physical objects in cities “smart” One of my favorite descriptions of smart cities is from the British Standards Institute (BSI), which describes a smart city as: THE EFFECTIVE INTEGRATION OF PHYSICAL, DIGITAL AND HUMAN SYSTEMS IN THE BUILT ENVIRONMENT TO DELIVER SUSTAINABLE, PROSPEROUS AND INCLUSIVE FUTURE FOR ITS CITIZENS Smart cities are made possible by connecting physical objects in them to digital systems to create cyber-physical systems (CPS). This is done by means of sensors that act as the eyes and ears of the system, gathering data that then is fed into digital systems for analysis to determine how best to optimize the physical output of those systems. These CPSes increasingly guide systems in our cities: traffic management, transportation systems, energy distribution, water distribution, public safety, pollution control, waste disposal, wastewater treatment and more. The homes we live in and places where we work are also increasingly becoming digitized, and all these systems, in order to function, generate data. Data are as much building blocks of smart cities as bricks and girders are of their physical components. They help us manage the needs of people living in cities. Think of the data that you generate on CPSes in smart cities every day. Your digital assistant reports your schedule for the day to you. If you drive a car with smart technology into town, it may track your route and interact with traffic management systems to report to you the least congested route to take and help you identify parking spots that best suit your preference for convenience versus cost. If you take some form of mass transit, you can receive real-time updates on arrival times at various pick-up points, as the system works behind the scenes to optimize speed and efficiency. In the future, your options may also include being picked up by an autonomous vehicle that uses real-time traffic information to take the most efficient route to your destination. When you stop in a coffee shop before your work day starts, you grab a coffee – brewed with water treated and distributed through a smart water system – and a doughnut – baked with electricity from a smart power plant – and need only tap your phone to pay. Your payment moves instantaneously from your account to theirs. 
In the future, checking out to make a purchase may be unnecessary. Some businesses are already experimenting with systems in which you present your phone to a scanner as you walk in the door, grab the items you want and walk out with them. An advanced tracking system monitors what items you take out the door and charges them automatically to your account without having to present them to a cashier. Regarding smart workplaces, the office you work in likely already has some level of digital optimization installed, beyond the security system that recognizes you and clears you to enter the building. Flipping light switches or adjusting room temperature may be unnecessary, with smart systems sensing the presence of occupants automatically and adjusting light and heat accordingly. While you’re at work, the power company detects that you are away and reduces power to appliances that aren’t being used, so you can waste less energy. You’re not even aware of this, though, because the power company, having data on your behavior patterns, restores your power to the level you require for your evening before you arrive home. Such a scenario only touches the surface of smart city interactions. Many cities – not to mention countries – are aggressively pursuing an increasing amount of digital connectivity for everything that goes into their physical environments. Further reading: Strategies for Improving Smart City Logistics The development of smart cities Among the most extensive smart city initiatives is Singapore’s “Smart Nation” project, which is applying new technologies to enhance transportation systems, health, home and business. The goal is to increase interconnectedness in all aspects of citizens’ lives through digital technologies. As a former Singapore resident, I have had the pleasure of working personally on many Smart Nation-related projects. China, too, has been aggressive in developing smart cities, districts and towns, developing 103 smart cities over the past five years. These smart cities seek to bring pollution, traffic congestion and widespread energy consumption under control through greater use of connected technologies. Not far behind in numbers is India, whose government has targeted 90 cities to develop smart capabilities as part of its “Smart City Mission.” Their pragmatic approach is to attack this initiative in a layered approach, solving specific issues one at a time. The U.S. Department of Transportation issued a “Smart City Challenge,” to U.S. cities, encouraging them with funding to develop more smart technologies around transportation. The hope is that, as cities compete, they will develop ways to use digital technology to solve transportation problems and improve efficiencies. As ideas are developed, other cities can then adapt them to their needs. Billionaire Bill Gates is taking a different approach to building smart cities. Rather than trying to take the existing systems of a city and convert them to digitally connected systems, he purchased 25,000 acres of land in Arizona with the plan of building a smart city from scratch. Smart systems behind smart cities The following are some of the city systems enhanced by digitization. Transportation is crucial to cities, but it also creates pollution, costs energy, leads to injuries and fatalities and requires land for its infrastructure. Solving these problems involves both individual and mass transportation. Smart public transportation offers real-time data about mass transit schedules and delays. 
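The phrase "real-time data about mass transit schedules and delays" hides a fair amount of plumbing. As a toy illustration only (the data structure and numbers are invented, not taken from any real transit system, which would typically publish a standardized feed such as GTFS-realtime), the sketch below turns a handful of vehicle position reports into a naive arrival estimate for a stop.

```python
# Toy sketch: turn recent vehicle position reports into a naive arrival
# estimate for a stop. The data structure and numbers are invented for
# illustration; real deployments use standardized feeds and better models.
from dataclasses import dataclass

@dataclass
class VehicleReport:
    vehicle_id: str
    distance_to_stop_m: float   # remaining distance along the route, in metres
    speed_m_per_s: float        # smoothed recent speed

def eta_minutes(report: VehicleReport) -> float:
    if report.speed_m_per_s <= 0.1:          # effectively stopped
        return float("inf")
    return report.distance_to_stop_m / report.speed_m_per_s / 60.0

if __name__ == "__main__":
    feed = [
        VehicleReport("bus-12", 1800.0, 7.5),
        VehicleReport("bus-47", 650.0, 4.2),
    ]
    nearest = min(feed, key=eta_minutes)
    print(f"next arrival: {nearest.vehicle_id} in ~{eta_minutes(nearest):.1f} min")
```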
For individual drivers, smart traffic systems draw on data from automobiles, from the smartphones their occupants carry, from street monitoring sensors and, in some places, from drones and satellites, and use that information to relieve congestion by controlling lights and signals based on real-time traffic conditions. Montreal uses AI, satellites, drones, and sensors mounted on vehicles to improve traffic flow. NVIDIA Metropolis combines internet-enabled video cameras and AI to improve traffic flow, and also supports smart parking services. Drivers of IoT cars can get real-time information on available parking spots and their costs, even when cost varies according to parking demand.

As our homes and workplaces become increasingly part of the IoT, we not only gain greater control over those spaces, but also generate more data about our preferences and lifestyles. An example of smart technology in buildings is Deloitte Netherlands' office building known as "The Edge." The Edge is one of the most energy-efficient buildings in the world. Its lighting system is optimized to reduce energy usage, and it has solar panels and underground geothermal energy storage. In addition, it uses rainwater for wastewater systems. All this is tied to a vast array of sensors that collect environmental data, and the data these sensors generate drive building management: cleaning and maintenance are done on an as-needed basis. The system that all this data feeds makes it possible to track and find the location of any person on the premises at any time. It also offers employees the convenience of applying their personal preferences to all devices with which they interact, even down to the coffee machines. Our homes are also becoming increasingly smart with the proliferation of devices such as Amazon Echo Spot that offer us the convenience of controlling systems across our entire home environment with voice commands. And useful IoT devices are expanding into a variety of roles to make our lives easier. Many even allow us to monitor what's happening in our homes and control the systems in them when we are away.

Smart energy involves more than just delivering energy efficiently. It also involves balancing increasing energy needs against environmental concerns and incorporating more clean energy. Smart grids use smart meters to coordinate supply schedules and deliver energy more cheaply and efficiently. The smart grid can even assess consumers' energy usage patterns and turn off appliances that its assessment suggests are not needed during peak delivery hours. Siemens is currently working with Rotterdam and Dutch energy providers to create a smart grid that connects 20,000 homes and companies. The system, due for completion in 2020, will use data generated by consumers to identify energy usage trends and then use those trends to optimize energy supply.

Water is crucial to a city, and delivering it to tens of millions of people is no small task. Its flow must be understood to optimize it, and that is where smart technology comes in. Smart water metering systems analyze usage patterns to predict future water needs. But smart water systems do more than merely distribute water: their sensors also automatically assess water quality and detect maintenance needs throughout the system.

Smart technologies are also being used to enhance public safety. Smart street lights are centrally managed to adapt to weather conditions and report their own maintenance needs.
Traffic and surveillance cameras are used along with gunshot detection sensors to provide surveillance and real-time information throughout the city. Law enforcement authorities use facial recognition technology to recognize known individuals who could pose a threat to public safety. The U.S. FBI is said to already have more than 52 million individuals in its facial recognition database. Waste management is a vital aspect of smart cities. With more than half of the world’s 7.5 billion people living in and around cities, the amount of waste generated is staggering. In response, technology companies have increasingly been developing smart solutions. IBM, for one, has developed an intelligent waste management platform. Its advanced data analytics helps it optimize collection, transportation and waste recovery. Another company, Veolia, has created internet-enabled containers. Sensors in the containers identify empty ones that can be skipped and detect the volume of garbage and odor so collection can be done on an as-needed basis. By communicating with waste trucks so they pick up only the containers that need it, pollution from the trucks is reduced. Chicago, in 2016 began a massive “Array of Things” program, installing sensors at key locations to monitor pollution and climate conditions to provide relevant data to researchers as they seek to improve Chicago’s air quality. Threats to these systems The following show some of the vulnerabilities of these smart systems. Traffic control systems Tests have shown that many smart traffic control systems are vulnerable to takeover. Disgruntled employees attacked Los Angeles traffic control systems in 2006, snarling traffic for days before their actions were discovered. And security expert Cesar Cerrudo demonstrated in 2014 how the lack of encryption in many widely used traffic control systems could enable an attacker to reverse engineer systems so that false data could be fed into them. This could allow an attacker to disrupt traffic lights and snarl traffic. Although an increasing number of newer smart systems have the necessary encryption, older systems currently installed lack it and would be very hard to replace without major street reconstruction. And attacks on traffic and surveillance cameras, too, could render a city blind. Mass transit systems The 2008 hijacking of the Lodz, Poland, tram system shows evidence of vulnerabilities in mass transit systems, too. And while the 2016 ransomware attack on the San Francisco municipal rail system didn’t significantly inconvenience riders or cause injury, it, too, demonstrates that municipal transit systems are being targeted. Additionally, introducing false information on the systems could cause unnecessary congestion and delays as people base their actions on that false information. You can read more about mass transit and railway risks here. Attacks on power grids could be devastating, as the 2015 BlackEnergy attack on the Ukrainian power grid has already demonstrated. More than 80,000 consumers were left without power. And both vulnerability demonstrations and actual attacks on power plants have been reported. Water distribution systems Attacks on water distribution systems could be even worse. And like with the Ukrainian attack on the power grid, such attacks have already been accomplished, although, fortunately, with minimal damage so far. 
Most chilling are reports of a 2016 hack of an unidentified water treatment plant, where mass casualties were averted only because the hacktivists who attacked it did not immediately realize what toxic chemicals they were in a position to unleash on the plant’s consumers. Every city has hundreds of systems to manage different services and tasks. Hacking centralized service management systems would give an attacker many ways to wreak havoc, from introducing software bugs to introducing false data into systems to disrupt their intended functions. Hospitals have been particularly hard-hit by ransomware attacks. This can have a massive effect on the health and well-being of city residents if essential equipment and services are disrupted. Wireless smart street lighting systems, like traffic management systems, have encryption problems, leaving them vulnerable to attack. Such attacks could cause widespread street blackouts. Services that are location-based are often based on GPS. This opens vulnerabilities to GPS spoofing. When people are making decisions based on real-time location information and the location given is incorrect, it disrupts potentially critical services. With so many critical services enmeshed with smart city platforms, the attack surface is enormous and extremely vulnerable. The more technology is involved, the greater the vulnerability to infrastructure and city services. Securing systems is essential. Existing systems are already often prone to the “too-fast-to-market syndrome” that plagues so many industrial control systems. Much of what is out there has had too little thought put into its security, leaving systems shockingly vulnerable. Many systems need to have their security upgraded and the new smart systems that are added need to be carefully thought out for security before they are incorporated into critical infrastructure. Our cities are growing larger and the challenges of so many people living so closely connected need solutions that not only solve those challenges, but do not create new ones. The time to act on securing our smart cities is now. The more that systems with vulnerabilities are incorporated, the greater is the risk to which city dwellers are exposed – and the more that we will have to catch-up in the future. Read the full article here. This article was written by Marin Ivezic, Partner at PwC. From his start in law enforcement more than 25 years ago, Marin came to focus on emerging threats of cyber-kinetic attacks – cyberattacks on Internet of Things (IoT) and Industrial Control Systems (ICS) that threaten people’s physical well-being, lives or the environment. Now, as a Partner in PwC, Marin brings this expertise to businesses to help them defeat these often-overlooked – but potentially deadly – hazards.
https://www.iiot-world.com/smart-cities-buildings-infrastructure/smart-cities/the-overwhelming-world-of-smart-cities-and-cyber-kinetic-threats/
by Oleg Afonin, Danil Nikolaev & Yuri Gubanov, © Belkasoft Research 2015

Computer forensic techniques allow investigators to collect evidence from various digital devices. Tools and techniques exist that allow the discovery of evidence that is difficult to get at, including destroyed, locked, or obfuscated data. At the same time, criminals routinely attempt to counter forensic efforts by wiping data, deleting files, and faking or clearing logs, histories and other traces of their activities. Anti-forensic efforts are not limited to just that. In this whitepaper, we give a brief overview of common anti-forensic techniques frequently used by suspects who are not high-tech specialists, and of ways to counter them during an investigation. What this paper does not discuss is the use of advanced tools dedicated to countering forensic efforts. Instead, we will talk about the most common anti-forensic techniques, moving from easy to moderately difficult ones, explaining who might be using these methods and how to counter them.

What Is Anti-Forensics?

Anti-forensics is a set of precautionary measures a user can take in order to hide traces of his activity, making investigations on digital media more complicated and time-consuming, and potentially rendering evidence of illegal activities difficult or impossible to obtain. Detecting anti-forensic techniques in use is not always easy and not always possible, as destroying certain types of evidence may leave no traces anywhere in the system. However, since average users have limited technical knowledge, the anti-forensic attempts they perform are often ineffective or plainly visible to the expert.

Moving or Renaming Files

Moving or renaming files that may hold evidence against a suspect is a form of simple, almost naive anti-forensics. By moving certain files (such as those used by instant messengers for keeping conversation histories), or by renaming them or changing their extensions (e.g. renaming an encrypted ZIP archive containing illegal images into something like c:\Windows\System32\drivers\rtvienna32.dat), suspects hope to confuse experts and delay investigations. Indeed, locating moved or renamed files presents an obstacle to investigators who rely on nothing but their own skills and expertise. However, modern computer forensic tools can automatically analyze media with different techniques, such as evidence search or carving. Carving is a bit-precise sequential scan of the media for various artifacts. With carving, file names and locations become irrelevant, because carving does not rely on the file system, reading low-level data directly from the media instead. Carving can locate evidence by looking for signatures, which are sequences of bytes characteristic of certain types of data. In our example, the "c:\Windows\System32\drivers\rtvienna32.dat" file will be located during the carving process and identified as a ZIP archive not because of the name of the file, but because the actual data stream contains information associated with ZIP archives. Similarly, Skype databases, JPEG images, documents, and dozens of other types of evidence can be reliably detected by carving the disk or disk image.
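To illustrate the signature idea, here is a deliberately naive carving sketch. It is an editorial illustration of the general technique, not Belkasoft's implementation: it scans a raw image for the JPEG start-of-image and end-of-image markers and writes out each candidate region. The image path is hypothetical, the whole file is read into memory for simplicity, and real carvers validate file structure far more rigorously.

```python
# Deliberately naive signature-carving sketch (not Belkasoft's algorithm): scan
# a raw image for the JPEG start-of-image marker (FF D8 FF) and the end-of-image
# marker (FF D9) and write out each candidate region. The image path is
# hypothetical and the whole file is read into memory for simplicity.
from pathlib import Path

JPEG_SOI = b"\xff\xd8\xff"   # start-of-image signature
JPEG_EOI = b"\xff\xd9"       # end-of-image marker

def carve_jpegs(image_path: str, out_dir: str = "carved") -> int:
    data = Path(image_path).read_bytes()
    Path(out_dir).mkdir(exist_ok=True)
    count, pos = 0, 0
    while True:
        start = data.find(JPEG_SOI, pos)
        if start == -1:
            break
        end = data.find(JPEG_EOI, start)
        if end == -1:
            break
        candidate = data[start:end + len(JPEG_EOI)]
        Path(out_dir, f"carved_{count:04d}.jpg").write_bytes(candidate)
        count, pos = count + 1, end + len(JPEG_EOI)
    return count

if __name__ == "__main__":
    print(carve_jpegs("disk.img"), "candidate JPEG regions carved")
```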
Deleting Evidence or Using Privacy Protection and Disk Cleaning Tools

Another type of naive anti-forensics is using freeware or commercial disk cleaners or privacy protection tools. Most, if not all, such tools are not designed specifically for anti-forensics, providing more peace of mind to their users than actually countering forensic efforts. Traditionally, such tools have been created to protect the privacy of the user as opposed to purposely destroying evidence. Most privacy protection tools and many disk sanitizers are effective against casual observers, but leave behind many traces that can easily be discovered during an investigation. Many disk cleaners will do the following:
- Clean cache and history for popular Web browsers
- Clean chat logs produced by Skype and some other popular instant messengers
- Provide a "secure delete" option to wipe files
- Clean some (but almost never all) of the following: jumplists, thumbnails, registry items, Skype chatsync, etc.

In our experience, we have never encountered a single disk cleaner or privacy protection tool that would consistently destroy every piece of evidence an investigator would be looking for. Some of the things that we observed after running a number of disk cleaning tools:
- The tool deleted digital pictures but did not clean the thumbnail cache
- The tool deleted documents but did not clean Windows jumplists
- The tool deleted, but did not wipe, data such as browsing cache and history
- The tool deleted the main Skype database but did not touch chatsync
- The tool failed to delete files that were currently open (e.g. it deleted files from the browser cache but could not delete the index database)
- The tool deleted files but left the corresponding Registry entries intact

By using forensic tools, investigators can counter many of these measures. Let us briefly look at what can be done to locate evidence after it has been cleaned by the user manually or with one of the privacy protection tools. Windows is designed in such a way that it has many hidden and little-known places that keep remnants or traces of evidence. One of these places is the thumbnail cache. If a suspect used a privacy protection tool or securely wiped one or more pictures and denies their existence, it's not the end: let's have a look at the thumbnail cache.

Windows keeps thumbnail-sized copies of many file formats, including pictures, PDF files, compatible RAW (digital negative) formats, presentations and documents stored on the computer. Files in JPEG, GIF, DOC/DOCX, XLS/XLSX, PDF, and dozens of other formats will leave traces in the form of small preview images long after the original file is deleted. If a certain format is not supported by Windows out of the box, third-party extensions are available and often installed along with the corresponding applications. The system keeps file thumbnails even if the original image has been deleted. This gives investigators a chance to find a smaller version of a picture of interest among the Windows thumbnails. Thumbnails are indirect but strong pieces of evidence. It is worth mentioning that if Windows (or the suspect) deletes a thumbnail item, experts can attempt to recover it with file carving. When analyzing a computer for evidence, digital forensic tools take a complex approach, combining techniques to recover deleted data and discover external metadata belonging to deleted files. When looking for deleted pictures, a tool will normally carve the disk that used to contain the images. In some cases, the data can be recovered from unallocated space with carving. If, however, there are seemingly no traces of deleted pictures anywhere on the disk, it is worth looking in the system thumbnail cache.
Different versions of Windows keep thumbnails in different places. Windows XP and earlier create a hidden file named Thumbs.db in the same folder where the pictures are stored. When deleting the entire folder with pictures, Thumbs.db would typically also get deleted. In addition to Thumbs.db, Windows XP Media Center maintains a video thumbnail cache file named ehthumbs.db. Windows Vista, Windows 7, 8, 8.1 and Windows 10 have all of their thumbnails stored in a single location, usually %userprofile%\AppData\Local\Microsoft\Windows\Explorer The files are named thumbcache_xxx.db, where “xxx” is a number. Here is a brief overview of thumbnail locations in different versions of Windows: - Windows XP and earlier: Thumbs.db alongside pictures - Windows XP Media Center: also ehthumbs.db for videos - Windows Vista, Windows 7, 8, 8.1 and Windows 10: thumbcache_xxx.db (numbered by size) centrally located at %userprofile%\AppData\Local\Microsoft\Windows\Explorer If the expert needs to find proof of existence of a certain picture, yet the image itself was deleted or accessed from a removable device (e.g. a USB flash drive that is no longer available), the thumbnails storage folder can often help locate the required evidence. Windows thumbnail cache contains reduced versions of images that were either viewed or listed in a folder. Since Windows Vista, thumbnails are kept on the disk even if the original picture is deleted, or if the original was located on a removable drive. Thumbnails are relatively little known and often left out by third-party disk cleaning tools. However, Microsoft’s Disk Cleanup Wizard supplied with recent versions of Windows does clean the thumbnail cache properly. However, unlike many third-party privacy protection tools, Disk Cleanup Wizard does not run on a schedule and has to be launched manually every time the user needs to clean up their disk. When analyzing a PC, experts have an option of locating the thumbnails manually by searching (or carving) for Thumbs.db (in Windows XP) or analyzing the content of the %userprofile%\AppData\Local\Microsoft\Windows\Explorer folder in Windows Vista, Windows 7, 8, 8.1 and Windows 10. Automating thumbnail analysis is possible if using a digital forensic tool. In Belkasoft Evidence Center for example, thumbnails can be acquired automatically if the corresponding option is enabled when searching for evidence. Deleted Programs and Documents While most digital investigators know about Windows Registry, not everyone has heard about Jumplists. Jumplists contain information on documents, pictures, applications and other types of files being viewed, opened, launched or otherwise accessed by the user. Jumplists do not go anywhere even if the original file is deleted. Interestingly, jumplists are generated for files accessed from remote or external devices. Such jumplists also remain in the system even after the external device is removed, so they can be used as proof of access to a certain file that is no longer available. Jumplists are a valuable source of information about files being used on the system. They are little known by the criminals. For this reason, they are rarely cleaned, and can trace back to the day the operating system was first installed. Jumplists can be used as definite proof of access. They contain full name and path to the file being accessed, computer name and MAC address, access date and time, and application used to open the file. Technically speaking, jumplists are stored as little files in plain text format. 
They are stored under the investigated user's profile. Automatic jumplists, for example, typically reside at C:\Users\USERNAME\AppData\Roaming\Microsoft\Windows\Recent\AutomaticDestinations (where USERNAME is the name of the Windows account being investigated).

What is the practical use of jumplists? Let us say the suspect denies ever using a document named "my.doc". If the document was deleted securely with a tool such as CCleaner, or if it was opened from a removable drive (e.g. a USB stick), there would seem to be no trace in the system that the file ever existed. However, if that file was ever opened, Windows will have recorded the event in a corresponding jumplist. The jumplist will contain the name of the file, its full path, and the date and time of the last access. In this scenario, the investigator gets definite proof of access to the file. On many systems, jumplists are available with dates going all the way back to the day the operating system was first installed. Jumplists are also valuable in SSD drive analysis, allowing investigators to discover files that existed on the disk before TRIM and garbage collection wiped the deleted evidence.

Deleted Skype Database

Cleaning chat logs is yet another example of naive anti-forensics. Since it is not possible to delete files that are opened by running applications, many privacy protection tools will simply delete everything except the main database. Users can also manually clear their Skype history from Skype settings. The catch is that Skype is an old application. While it uses a modern SQLite database to maintain history, Skype still keeps some internal data in a proprietary format in the "chatsync" folder. The content of that folder may reveal chunks and bits of user conversations. While the format is not officially disclosed, tools are available that can analyze such files. Thus, if the chatsync folder exists, there is a real chance of recovering Skype chats even if a deleted main Skype database cannot be recovered.

Deleted SQLite Records

SQLite is a de facto standard database used by thousands of applications to keep settings, histories and other data. SQLite is used by iOS and Android apps, and employed by desktop tools such as Skype, Chrome, Firefox, and a wide range of instant messaging applications such as WhatsApp and iMessage. Regularly cleaning browsing and conversation histories helps maintain an illusion of privacy, when in fact deleted records may not immediately disappear. Instead, such records are often placed into the so-called "freelist".

Simplified analysis of SQLite databases is often conducted by opening the database file in a database viewer instead of using a proper forensic tool. The obvious drawback of using a free or commercially available database viewer to examine SQLite databases is the inherent inability of such viewers to access and display recently deleted (erased) as well as recently added (but not yet committed) records. Forensic tools such as Belkasoft Evidence Center, on the other hand, will look inside all available sources, often recovering records that have been deleted by the user.

In the first part of this whitepaper we focused on the most common basic anti-forensic measures, such as deleting or changing certain types of files manually or with privacy protection and cleaning tools, and on how to counter them when investigating a suspect's machine.
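Before moving on, here is a minimal sketch of where such "deleted" SQLite records can hide. It parses only the documented header fields of the SQLite file format (page size, first freelist trunk page, freelist page count), walks the freelist, and prints printable strings found on those pages. The database path is a placeholder, and a production tool would also examine the write-ahead log and unallocated space inside pages.

```python
# Sketch: inspect a SQLite database's freelist, where deleted records may
# linger until the file is vacuumed. Offsets follow the published SQLite
# file format; "history.db" is a placeholder path.
import re, struct
from pathlib import Path

def freelist_strings(db_path: str, min_len: int = 6):
    data = Path(db_path).read_bytes()
    page_size = struct.unpack(">H", data[16:18])[0]
    if page_size == 1:            # the value 1 means 65536 per the spec
        page_size = 65536
    first_trunk = struct.unpack(">I", data[32:36])[0]
    total_free = struct.unpack(">I", data[36:40])[0]
    print(f"page size {page_size}, {total_free} freelist pages")

    def page(n):                  # page numbers are 1-based
        return data[(n - 1) * page_size : n * page_size]

    # Walk the trunk chain, collecting leaf pages plus the trunks themselves.
    pages, trunk = [], first_trunk
    while trunk:
        p = page(trunk)
        pages.append(trunk)
        next_trunk, leaf_count = struct.unpack(">II", p[:8])
        for i in range(leaf_count):
            pages.append(struct.unpack(">I", p[8 + 4 * i : 12 + 4 * i])[0])
        trunk = next_trunk

    # Pull printable ASCII runs out of the unreferenced pages.
    for n in pages:
        for m in re.finditer(rb"[ -~]{%d,}" % min_len, page(n)):
            print(n, m.group().decode("ascii", "replace"))

freelist_strings("history.db")
```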
In the next part we will discuss some slightly more advanced anti-forensic techniques: in particular, how to deal with formatted or otherwise wiped drives, and how to bypass encryption and anti-debugging protection while looking for digital evidence. The second part is coming next week; in the meantime, take a look at our previous articles.

The latest version of Belkasoft Evidence Center comes with enhanced SQLite analysis algorithms, allowing the tool to process even huge databases in a matter of seconds. A fully functional trial version of Belkasoft Evidence Center is available for evaluation at http://belkasoft.com/trial. More information about recovering deleted records from SQLite databases is available in the whitepaper "Forensic Analysis of SQLite Databases: Free Lists, Write Ahead Log, Unallocated Space and Carving." Ready to start carving a disk, memory card or forensic disk image? If you are using Belkasoft Evidence Center, you can follow these step-by-step instructions with screenshots: https://belkasoft.com/bec/Carving.asp

About the authors

Oleg Afonin is Belkasoft sales and marketing manager. He is an author, expert, and consultant in computer forensics. Danil Nikolaev is Belkasoft sales and marketing manager, co-author, and content manager. Yuri Gubanov is a renowned digital forensics expert and a frequent speaker at well-known industry conferences such as CEIC, HTCIA, TechnoSecurity, FT-Day, DE-Day and others. Yuri is the founder and CEO of Belkasoft, the maker of digital forensic software used by police departments in about 70 countries. With years of experience in the digital forensics and security domain, Yuri has led forensic training courses for law enforcement departments in several countries. You can add Yuri Gubanov to your LinkedIn network at http://linkedin.com/in/yurigubanov.
<urn:uuid:dbed68f7-d6ed-48a5-8955-33e7018fcbbf>
CC-MAIN-2022-40
https://www.forensicfocus.com/articles/countering-anti-forensic-efforts-part-1/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00560.warc.gz
en
0.931734
3,306
2.65625
3
Pen testing for a Cryptocurrency Platform Today’s technological advances make it easier than ever for hackers to identify vulnerable points and barge into an organization’s security. The purpose of pen testing is to help companies protect the weaknesses of their servers and network before hostile parties could discover and exploit those. What is Pen testing? A penetration test, also known as a pen test or ethical hacking, is a simulated cyberattack against the organization’s IT infrastructure. Penetration tests are often carried out to evaluate security by safely trying to exploit vulnerabilities in security posture. These vulnerabilities may exist in operating systems, due to services and application flaws, improper configurations, or risky end-user behavior. In case of web application security, penetration testing is commonly used to elevate a web application firewall (WAF). The insights obtained from the test are used to fine-tune the WAF security policies and patch detected vulnerabilities. Such assessments are also useful in authenticating the effectiveness of defensive mechanisms, as well as end-user adherence to compliance regulation. Penetration testing is typically carried out using manual or automated technologies to spot potential points of exposure. Through privileged escalations, testers can attempt higher levels of security clearance and deeper access to electronic assist, once the vulnerabilities are successfully exploited. Why Cryptocurrency Platform needs Pen testing? The future of digital currency is strong and so are security risks. With the prevalence of cryptocurrency, it is crucial to prepare exchange platforms for any exploitation. Hence, Pen test is often carried out to evaluate the software, applications, systems, and devices used in cryptocurrency transactions. It identifies vulnerabilities and risks in the system which may impact the confidentiality, integrity, and availability of the data by emulating real attacks such as DDoS attacks. Pentest ensures that hackers cannot reach sensitive data through loopholes and users have a secure crypto exchange. It validates the current security implementation. It also checks the social engineering aspects such as attempts to hack employee data or vendors or other stakeholders to gain security credentials and phish into cryptocurrency networks. Here is a case study to help you understand better. Case Study – Pen testing for a Cryptocurrency Platform Recently, iLink performed a pen test for a cryptocurrency exchange site, and below is the complete case study of the same. The client is a cryptocurrency exchange platform that offers users the ability to buy and sell digital money. Being a crypto-financial platform, security is a major concern and thus penetration testing plays an important role in validating the security of the application. In this approach, our security analyst carried out ethical hacking to identify the exposed security loopholes. The entire pen testing was performed manually, though automated tools were used to check DDoS attacks and other vulnerabilities. The goal of the application is to provide a beginner-friendly, secure, and fast cryptocurrency trading experience to users. Customers make several USD deposits and trade in Bitcoin (BTC), Ethereum (ETH), Litecoin (LTC), and other currencies, and the client wanted to be assured of stringent security protocols to forbid trespassing onto the application. The client also wanted to check Admin wallets against coin theft. 
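As a small illustration of the "automated technologies" mentioned above, the following Python sketch performs a basic TCP connect scan of a handful of common service ports, the kind of reconnaissance a pen test often starts with. The hostname is a placeholder, the port list is arbitrary, and such probes should only ever be run against systems you are explicitly authorized to test.

```python
# Sketch of basic reconnaissance: a TCP connect scan of common ports.
# The target host is a placeholder; scan only with written authorization.
import socket

COMMON_PORTS = [21, 22, 25, 80, 443, 3306, 5432, 6379, 8080]

def scan(host: str, ports=COMMON_PORTS, timeout: float = 0.5):
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:   # 0 means the port accepted
                open_ports.append(port)
    return open_ports

print(scan("staging.example-exchange.test"))
```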
Only authorized and valid users must be able to access the application after completing the KYC process. Users must be able to immediately report any failure during the KYC, registration, login, transaction, etc, and follow a safe and responsible workflow to solve the issue. Protecting customers and providing a satisfactory experience was one of the top priorities, yet the biggest challenges being addressed were in terms of security and tamper-proof transactions. Module flows were analyzed, and the security testing approach was based on the module flow. Testing was a mix of automation and manual testing. - The application has two interfaces – one for Admin and other for Users, both the interfaces were tested separately. The backend databases hosted in AWS were highly restricted and its access is managed via AWS keys. IPs were whitelisted to grant access to the databases only to specific IP addresses. - The analyst tested all the entry points. Additionally, all the UI forms were checked with different combinations of inputs. The security analyst tested every response given and ensured whether the application redirections are as expected. - Admin interfaces were tested in terms of secure login, authorized access, SSL enablement, SQL injections, Calculations of coin bundle pricing with other currencies. On the other hand, user interfaces were tested from point of user registration till the user accessing the application. - The major focus, in terms, of user registration testing, was the KYC process. It authenticated all the checks and revealed that the process validates the application only to credible users. - All UI forms were tested from the point of input validation. Minimal test cases were performed based upon different scenarios of attacks like wallet transfer, coin exchanges, approvals on transactions, and so on. The first scenario considered was Identity Management or “ID management”. The term refers to the processes of identifying individuals or users, thereby authorizing them to access organizational systems and networks. The process also includes canceling user access when it’s no longer valid. ID management is of utmost importance to protect against the wide range of fraud scams that are projected to become prevalent. The tests include Credentials Transported over an Encrypted Channel, Default Credentials, Weak Lock Out Mechanism; Bypassing Authentication Schema; Vulnerable Remember Password; Browser Cache Weaknesses; Weak Password Policy; Weak Security Question Answer; Weak Password Change or Reset Functionalities; Weaker Authentication in Alternative Channel. The security analyst found that most of the test cases passed. The user authorization and authentication were validated in this initial testing. Authenticating users is essential especially when financial transactions are involved. Besides, the client wanted to ensure zero fraudulent transactions. For this, different test cases such as – Logging in with a fake username or password; Role Definitions; User Registration Process; Account Enumeration, and Guessable User Account and Weak or Unenforced Username Policy targeted testing areas. The client portal transmits data over the encrypted channel as the application has enabled Secure Socket Layers (SSL). Tampering that is nearly impossible. In the final finding, a few cases were identified with a weak password scheme applicable and could be improved while the rest of the test cases had passed. 
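A hypothetical sketch of how two of the checks described above, SQL-injection probing and the lockout mechanism, could be automated against a login endpoint. The URL, field names and expected status codes are assumptions made for illustration, not the client's actual API, and again this should only be run with written authorization.

```python
# Sketch of automating two login-form checks: naive SQL-injection probes
# and a lockout test. Endpoint and payload shapes are hypothetical.
import requests

LOGIN_URL = "https://exchange.example.test/api/login"   # placeholder
PAYLOADS = ["' OR '1'='1", "admin'--", "\" OR 1=1 --"]

def probe_sql_injection():
    for p in PAYLOADS:
        r = requests.post(LOGIN_URL, json={"user": p, "password": p}, timeout=10)
        # A 200 with a session token for garbage credentials is a red flag.
        print(repr(p), r.status_code, len(r.text))

def probe_lockout(attempts: int = 10):
    for i in range(attempts):
        r = requests.post(LOGIN_URL,
                          json={"user": "alice", "password": f"wrong{i}"},
                          timeout=10)
        print(i + 1, r.status_code)   # expect 429 or 423 once lockout kicks in

if __name__ == "__main__":
    probe_sql_injection()
    probe_lockout()
```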
The findings from these tests revealed that the client has a good user registration process in place. Role definition is very clear, as only privileged users can access the application. There are two portals: one for admins and one for users. Unless a hacker works through an account and breaks its password, there is no easy way to break the security. In addition, two-factor authentication is enabled along with auto-logout to further assure security.

Session Management Testing

To avoid repeated authentication for each page of a website or service, web applications implement various mechanisms to store and validate credentials for a pre-determined timespan. These mechanisms are known as session management. In our test, no cookies were used, as the application adopted token-based authentication, and almost all the test cases passed. Because the token expires after 5 hours, it helps assure the security of the platform. Most of the remaining test cases were found to be not applicable. All this testing was done manually; some test cases were verified through custom code.

Business Logic Testing

In this phase, the application was tested in terms of its business logic. Testing for business logic flaws is similar to the logical or finite-state testing done by functional testers. These tests require security professionals to think in unconventional ways, as such vulnerabilities cannot be easily detected. Because this is a financial application, data validation is the most important aspect, so our experts tested input validation thoroughly. When testing the application for request forgery, say on a balance (USD) transfer, the number of times such activities can be performed is limited. For example, OTP requests are limited to 5 per day (a sketch of how such a limit can be verified appears at the end of this case study). Likewise, unexpected file uploads are not allowed through the application.

Client-Side Testing and Error Handling

A proper error message is displayed for each type of error.

iLink helped the client determine the vulnerability of its platform to different kinds of cyberattacks. The pen test helped them realize how low-risk vulnerabilities could manifest into higher-level damage and negatively impact business operations. Our security analysts not only minimized the security risks but also recommended proven solutions to help thwart future attacks. Organizations need to engage security professionals to make sure that their systems are safe and secure. iLink has a dedicated team of threat intelligence specialists and certified testers with almost 20 years of experience. Our security solutions include the latest features that provide comprehensive security controls to protect your most critical cloud assets.
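As referenced above, here is a hedged sketch of verifying an OTP rate limit of the kind described in the business logic tests. The endpoint, payload, and the use of HTTP 429/403 for refusals are assumptions made for illustration only.

```python
# Sketch: request an OTP a few more times than the documented daily cap and
# confirm the service starts refusing. Endpoint and payload are hypothetical.
import requests

OTP_URL = "https://exchange.example.test/api/otp/send"   # placeholder
DAILY_CAP = 5

def check_otp_rate_limit(user: str = "alice@example.test"):
    refused_at = None
    for attempt in range(1, DAILY_CAP + 3):
        r = requests.post(OTP_URL, json={"email": user}, timeout=10)
        print(attempt, r.status_code)
        if r.status_code in (429, 403) and refused_at is None:
            refused_at = attempt
    if refused_at and refused_at <= DAILY_CAP + 1:
        print("rate limit enforced after", refused_at - 1, "requests")
    else:
        print("WARNING: no rate limiting observed within", DAILY_CAP + 2, "requests")

check_otp_rate_limit()
```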
<urn:uuid:f774bb62-bf52-48cf-b939-c1338e6a95e3>
CC-MAIN-2022-40
https://www.ilink-digital.com/insights/blog/pen-testing-for-a-cryptocurrency-platform/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.43/warc/CC-MAIN-20220928020513-20220928050513-00560.warc.gz
en
0.945907
1,856
2.859375
3
Hashing is an algorithm performed on data such as a file or message to produce a number called a hash (sometimes called a checksum). The hash is used to verify that data is not modified, tampered with, or corrupted. In other words, you can verify the data has maintained integrity. A key point about a hash is that no matter how many times you execute the hashing algorithm against the data, the hash will always be the same if the data is the same. Hashes are created at least twice so that they can be compared. As an example, imagine a software company is releasing a patch for an application that customers can download. They can calculate the hash of the patch and post both a link to the patch file and the hash on the company site. They might list it as: - Patch file. Patch_v2_3.zip - SHA-1 checksum. d4723ac6f72daea2c7793ac113863c5082644229 The Secure Hash Algorithm 1 (SHA-1) checksum is the calculated hash displayed in hexadecimal. Customers can download the file and then calculate the hash on the downloaded file. If the calculated hash is the same as the hash posted on the web site, it verifies the file has retained integrity. In other words, the file has not changed. Hashing Algorithms are: Message Digest 5 (MD5) is a common hashing algorithm that produces a 128-bithash. Hashes are commonly shown in hexadecimal format instead of a stream of 1s and 0s. For example, an MD5 hash is displayed as 32 hexadecimal characters instead of 128 bits. Hexadecimal characters are composed of 4 bits and use the numbers 0 through 9 and the characters a through f. Secure Hash Algorithm (SHA) is another hashing algorithm. There are several variations of SHA grouped into four families—SHA-0, SHA-1, SHA-2, and SHA-3: - SHA-0 is not used. - SHA-1 is an updated version that creates 160-bit hashes. This is similar to the MD5 hash except that it creates 160-bit hashes instead of 128-bit hashes. - SHA-2 improved SHA-1 to overcome potential weaknesses. It includes four versions. SHA-256 creates 256-bit hashes and SHA-512 creates 512-bit hashes. SHA-224 (224-bit hashes) and SHA-384 (384-bit hashes) create truncated versions of SHA-256 and SHA- 512, respectively. - SHA-3 (previously known as Keccak) is an alternative to SHA-2. The S. National Security Agency (NSA) created SHA-1 and SHA-2. SHA-3 was created outside of the NSA and was selected in a non-NSA public competition. It can create hashes of the same size as SHA-2 (224 bits, 256 bits, 384 bits, and 512 bits). Another method used to provide integrity is with a Hash-based Message Authentication Code (HMAC). An HMAC is a fixed-length string of bits similar to other hashing algorithms such as MD5 and SHA-1 (known as HMAC-MD5 and HMAC-SHA1). However, HMAC also uses a shared secret key to add some randomness to the result and only the sender and receiver know the secret key. As an example, imagine that one server is sending a message to another server using HMAC- MD5. It starts by first creating a hash of a message with MD5 and then uses a secret key to complete another calculation on the hash. The server then sends the message and the HMAC-MD5 hash to the second server. The second server performs the same calculations and compares the received HMAC-MD5 hash with its result. Just as with any other hash comparison, if the two hashes are the same, the message retained integrity, but if the hashes are different, the message lost integrity. RACE Integrity Primitives Evaluation Message Digest (RIPEMD) is another hash function used for integrity. 
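The verification workflow just described is easy to script. The sketch below uses Python's standard hashlib module to hash a downloaded file in chunks and compare the result against the published checksum; the file name and expected SHA-1 value are the illustrative ones from the example above.

```python
# Sketch: verify a downloaded patch against the checksum published on the
# vendor's site. Reading in chunks keeps memory use small for large files.
import hashlib

def file_digest(path: str, algorithm: str = "sha1") -> str:
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "d4723ac6f72daea2c7793ac113863c5082644229"
actual = file_digest("Patch_v2_3.zip", "sha1")
print("integrity OK" if actual == expected else "hash mismatch: do not install")
```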
It isn’t as widely used as MD5, SHA, and HMAC. Many applications calculate and compare hashes automatically without any user intervention. For example, digital signatures use hashes within email, and email applications automatically create and compare the hashes. Additionally, there are several applications you can use to manually calculate hashes. As an example, sha1sum.exe is a free program anyone can use to create hashes of files. A Google search on “download sha1sum” will show several locations. It runs the SHA-1 hashing algorithm against a file to create the hash. It’s worth stressing that hashes are one-way functions. In other words, you can calculate a hash on a file or a message, but you can’t use the hash to reproduce the original data. The hashing algorithms always create a fixed-size bit string regardless of the size of the original data. The hash doesn’t give you a clue about the size of the file, the type of the file, or anything else. As an example, the SHA-1 hash from the message “I will pass the Security+ exam” is: 765591c4611be5e03bea41882ffdaa159352cf49. However, you can’t look at the hash and identify the message, or even know that it is a hash of a six-word message. Passwords are often stored as hashes. When a user creates a new password, the system calculates the hash for the password and then stores the hash. Later, when the user authenticates by entering a username and password, the system calculates the hash of the entered password, and then compares it with the stored hash. If the hashes are the same, it indicates that the user entered the correct password. Key stretching (sometimes called key strengthening) is a technique used to increase the strength of stored passwords and can help thwart brute force and rainbow table attacks. Key stretching techniques salt the passwords with additional random bits to make them even more complex. Two common key stretching techniques are bcrypt and Password-Based Key Derivation Function 2 (PBKDF2). Bcrypt is based on the Blowfish block cipher and is used on many Unix and Linux distributions to protect the passwords stored in the shadow password file. Bcrypt salts the password by adding additional random bits before encrypting it with Blowfish. Bcrypt can go through this process multiple times to further protect against attempts to discover the password. The result is a 60-character string. As an example, if your password is IL0ve$ecurity, an application can encrypt it with bcrypt and a random salt. It might look like this, which the application stores in a database: Later, when a user authenticates with a username and password, the application runs bcrypt on the supplied password and compares it with the stored bcrypt-encrypted password. If the bcrypt result of the supplied password is the same as the stored bcrypt result, the user is authenticated. As an added measure, it’s possible to add some pepper to the salt to further randomize the bcrypt string. In this context, the pepper is another set of random bits stored elsewhere. PBKDF2 uses salts of at least 64 bits and uses a pseudo-random function such as HMAC to protect passwords. Many algorithms such as Wi-Fi Protected Access II (WPA2), Apple’s iOS mobile operating system, and Cisco operating systems use PBKDF2 to increase the security of passwords. Some applications send the password through the PBKDF2 process as many as 1,000,000 times to create the hash. The size of the resulting hash varies with PBKDF2 depending on how it is implemented. 
Bit sizes of 128 bits, 256 bits, and 512 bits are most common. Some security experts believe that PBKDF2 is more susceptible to brute force attacks than bcrypt. A public group created the Password Hashing Competition (PHC). They received and evaluated 24 different hashing algorithms as alternatives. In July 2015, the PHC selected Argon2 as the winner of the competition and recommended it be used instead of legacy algorithms such as PBKDF2. Hashing provides integrity for messages. It provides assurance to someone receiving a message that the message has not been modified. Imagine that Lisa is sending a message to Bart. The message is “The price is $75.” This message is not secret, so there is no need to encrypt it. However, we do want to provide integrity, so this explanation is focused only on hashing. In this example, something modified the message before it reaches Bart. When Bart receives the message and the original hash, the message is now “The price is.75.” Note that the message is modified in transit, but the hash is not modified. A program on Bart’s computer calculates the MD5 hash on the received message as 564294439E1617F5628A3E3EB75643FE. It then compares the received hash with the calculated hash: - Hash created on Lisa’s computer, and received by Bart’s computer: D9B93C99B62646ABD06C887039053F56 - Hash created on Bart’s computer: 564294439E1617F5628A3E3EB75643FE Clearly, the hashes are different, so you know the message lost integrity. The program on Bart’s computer would report the discrepancy. Bart doesn’t know what caused the problem. It could have been a malicious attacker changing the message, or it could have been a technical problem. However, Bart does know the received message isn’t the same as the sent message and he shouldn’t trust it. You might have noticed a problem in the explanation of the hashed message. If an attacker can change the message, why can’t the attacker change the hash, too? In other words, if Hacker Harry changed the message to “The price is .75,” he could also calculate the hash on the modified message and replace the original hash with the modified hash. Here’s the result: - Hash created on Lisa’s computer:D9B93C99B62646ABD06C887039053F56 - Modified hash inserted by attacker after modifying the message: 564294439E1617F5628A3E3EB75643FE - Hash created for modified message on Bart’s computer: 564294439E1617F5628A3E3EB75643FE The calculated hash on the modified message would be the same as the received hash. This erroneously indicates that the message maintained integrity. HMAC helps solve this problem. With HMAC, both Lisa and Bart’s computers would know the same secret key and use it to create an HMAC-MD5 hash instead of just an MD5 hash. See also Integrity.
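The Lisa-and-Bart scenario can be reproduced in a few lines with Python's standard hmac module. The sketch below uses HMAC-SHA256 rather than the HMAC-MD5 of the example (current guidance favors the SHA-2 family), and the secret key is obviously a placeholder; the point is that an attacker who modifies the message cannot forge a matching tag without that key.

```python
# Sketch of HMAC-based integrity: without the shared secret key, a tampered
# message cannot be given a tag that still verifies on the receiving side.
import hmac, hashlib

secret = b"shared-secret-known-only-to-Lisa-and-Bart"   # placeholder key

def tag(message: bytes) -> str:
    return hmac.new(secret, message, hashlib.sha256).hexdigest()

sent = b"The price is $75."
sent_tag = tag(sent)

received = b"The price is .75."            # modified in transit
ok = hmac.compare_digest(tag(received), sent_tag)
print("integrity verified" if ok else "message failed verification")
```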
<urn:uuid:2173cc21-d17c-4a0f-aaa0-aa262d3c40e1>
CC-MAIN-2022-40
https://cybersecurityglossary.com/hashing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335504.22/warc/CC-MAIN-20220930181143-20220930211143-00560.warc.gz
en
0.905883
2,413
4.34375
4
HTML (Hypertext Markup Language), the common language used to create web pages, can also be viewed in most modern e-mail programs. This allows someone to create a nice-looking, graphics-rich e-mail message. Because the layout is written in HTML, images can be displayed without having to send them along with the e-mail message. Instead, the code points to the server where the image is held and downloads it from that server. Unfortunately, because images come from an outside server, this gives spammers an excellent way to verify that your e-mail address is not only valid but that you actually read the message they sent you, making your e-mail address a target for spam.

How it works: In an HTML e-mail message, an image is placed using code similar to <http://somesite.com/insertmessage.html>. Very basic. What spammers do is insert the e-mail address they are sending to into the code used to display the image, something like <http://somesite.com/youraddy=yourdomain.com/insertmessage.html>. It is then easy to set up a server that filters out "youraddy=yourdomain.com" and records that the image was accessed. This effectively tells the spammers that not only did someone receive the message, it was also read, confirming the e-mail address is valid. Once this happens, your e-mail address can become a target for spam. How does a spammer get your e-mail address in the first place? They just keep trying random combinations of addresses until one works. Computers are excellent tools for high-volume, repetitive tasks.

How do you prevent this? In most e-mail programs it is generally easy, although in the case of Outlook and Outlook Express not very obvious, to keep this from happening. You will need to turn off the display of HTML e-mail.

In Mac OS X Mail.app, go into the "Mail" menu, select "Preferences", then go to the "Viewing" preferences and uncheck "Display images and embedded objects in HTML messages". In Outlook Express 5.x for Mac, select "File -> Preferences", click on the "Display" tab and uncheck "Show attached pictures in messages". In Outlook and Outlook Express for Windows, select Tools -> Options, then click the Security tab; in the "Security Zones" section, select the "Restricted Sites" zone. For other e-mail clients, see the help for the particular application you are using. There is an excellent web page that explains why HTML e-mail is bad and has an extensive list of instructions for turning off HTML e-mail, located at: http://expita.com/nomime.html

What is the downside to turning off HTML in e-mail? You won't be able to see any HTML that is sent to you. You may see the text included in the message, but no images. Most of the time, it is probably a spam message you are receiving anyway. It is up to you to decide if you want to see HTML messages. Just keep in mind that if you do, you are a more likely target for spam than you might be otherwise.
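If you want to see whether a particular message uses this trick, a rough check is to look for remote images whose URLs embed your address. The Python sketch below does exactly that; the sample HTML and address are made up, and a real scanner would also need to handle redirects, URL encoding, and hashed identifiers.

```python
# Sketch: flag remote images in an HTML message whose URLs embed the
# recipient address, the tracking trick described above.
import re

def find_tracking_images(html: str, recipient: str):
    urls = re.findall(r'<img[^>]+src=["\']([^"\']+)["\']', html, re.IGNORECASE)
    remote = [u for u in urls if u.lower().startswith(("http://", "https://"))]
    flagged = [u for u in remote
               if recipient.lower() in u.lower()
               or recipient.replace("@", "=").lower() in u.lower()]
    return remote, flagged

html = '<p>Sale!</p><img src="http://somesite.com/youraddy=yourdomain.com/pixel.gif">'
remote, flagged = find_tracking_images(html, "youraddy@yourdomain.com")
print(len(remote), "remote image(s);", len(flagged), "appear to embed the recipient address")
```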
<urn:uuid:eb151c90-8fa1-4670-9ef4-7dff46e1b00d>
CC-MAIN-2022-40
https://estreet.com/knowledge-base/how-spammers-use-html-e-mail-to-verify-an-e-mail-address-as-valid/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00560.warc.gz
en
0.912635
721
3.390625
3
For years, Linux and free software were perceived as threatened by cloud computing, the online storage of data. However, over the last few years, something ironic happened — free software became a major player in cloud computing. That wasn’t always the case. In 2008, Richard Stallman, the founder of the Free Software Foundation, condemned cloud computing as “just as bad as using a proprietary program….If you use a proprietary program or somebody else’s web server, you’re defenseless. You’re putty in the hands of whoever developed that software.” Cloud computing, he added, was “worse than stupidity” because it meant that providers controlled customer’s data. Stallman was referring mainly to the free storage that many providers offer, equating it with the free services provided by social media sites such as Facebook and Twitter. Free services, he argued, gave the same convenience as free software, but without user control. The Free Software Foundation’s response to this threat was to release the Affero General Public License, a license designed for online services. However, the Affero License has never been widely used, and critics like me have often noted that the Free Software Foundation has courted disaster by not offering a solution to an obviously growing threat. What none of us foresaw was that much of the perceived problem would eventually solve itself. Nor could we foresee that free software would become the model for a growing number of cloud vendors. Angel Diaz, Vice President, Software Standards and Cloud Labs, estimates that IBM did seven billion dollars’ worth of business in cloud service in 2014 alone — and that was only a single company. Cloud services have been dominated by companies like Amazon and Microsoft. However, in 2012, the OpenStack Foundation was founded to administer a project started by RackSpace and NASA. Today, the OpenStack Foundation consists of hundreds of companies, many of whom are also active in free software development, including Canonical, Hewlett-Packard, IBM, Red Hat, and SUSE. Others are well-known technology corporations such as Huawei, Oracle, and VMWare. Such a diverse group required a model for cooperation. The Foundation found it in Linux and the free software movement. It chose the Apache 2.0 license for its software, allowing for a mixture of free and proprietary uses. Just as importantly, it took Linux, free software, and the community that supports them as a direct example, noting how they were organized and how they had survived the cycle of boom and bust around the turn of the millennium. The result was unprecedented growth, which Chairman of the Board Alan Clark of SUSE attributes largely to the Foundation’s ability to learn from free software’s example. It helped, too, Clark says, to be able to point to a proven success to convince executives of the validity of the approach. To add irony to irony, OpenStack has recently started promoting federation, the solution that many free software advocates see to the centralized control of cloud services. Federation is the development of a standard core of functionality that allows users to choose the software they use, instead of becoming dependent on the vendors’ choices. Admittedly, free software advocates see federation as a means of combating vendor lock-in, while OpenStack corporations see it as a way to ease cooperation between conflicting interests. 
However, in this case, the same approach enables both ideals and practicality, going a long way toward disproving the idea that free software and cloud services are natural enemies.

Free Software Alternatives

Of course, free software as a means of production does not address Stallman's concerns about privacy and control of data. Even if users can examine the code for backdoors usable by vendors, they still have no control over who has access to the data, or where and how it is stored. However, free software is providing alternatives that address these issues as well.

For example, Tahoe-LAFS is a free software project that offers the means to encrypt data, store it in separate chunks across multiple sites, and reassemble it, with the result that privacy is returned to the users (a toy illustration of this split-and-reassemble idea appears at the end of this article). Similarly, ownCloud, which began as a free software project and became a company, offers a relatively easy way for customers to set up their own cloud services while retaining control over their data. The fact that ownCloud does not sell storage itself helps to reinforce its dedication to privacy. In fact, when ownCloud founder Frank Karlitschek talks, his concerns sound almost identical to Stallman's. The problem with most cloud services, Karlitschek explains, "is that we give up control of our data, which means privacy is a concern; you don't really know who has access to the data." ownCloud is probably a minor company compared to most members of the OpenStack Foundation, but the signs are that it, too, is flourishing.

Still, the point is that, both in the mainstream and in the alternatives, free software has become a dominant player in cloud services. What is more, it has done in less than five years what free software itself took over twenty years to do, largely because free software was available as an example. Apparently, in expressing his concerns for free software, Stallman neglected to consider free software itself as a factor in the situation. The current situation is one that was inconceivable in 2008.
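To illustrate the split-and-reassemble idea referenced above, here is a toy Python sketch that divides data into random shares that are only meaningful when recombined. To be clear, this is not how Tahoe-LAFS actually works (it uses encryption plus erasure coding so that only a subset of shares is needed); the sketch merely shows why a single storage provider holding one chunk learns nothing about the data.

```python
# Toy n-of-n secret splitting with XOR: every share is needed to rebuild the
# data, and any single share on its own is indistinguishable from random.
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(data: bytes, n: int = 3):
    shares = [os.urandom(len(data)) for _ in range(n - 1)]
    final = data
    for s in shares:
        final = xor_bytes(final, s)
    return shares + [final]

def combine(shares):
    out = shares[0]
    for s in shares[1:]:
        out = xor_bytes(out, s)
    return out

document = b"quarterly forecast: confidential"
pieces = split(document, 3)          # store each piece with a different host
assert combine(pieces) == document
print(len(pieces), "shares of", len(document), "bytes each")
```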
<urn:uuid:dfd04a71-6231-452a-9243-0a53c4050cca>
CC-MAIN-2022-40
https://www.datamation.com/cloud/the-cloud-vs-open-source/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337415.12/warc/CC-MAIN-20221003101805-20221003131805-00560.warc.gz
en
0.96849
1,125
2.65625
3
5G or fifth-generation wireless technology is a term that you have probably heard about in the media recently. Understanding what 5G is and how it’s set to reinvent life as we know it is critical for B2B and B2C companies. Right now, there is hardly any industry that’s not going to be transformed by 5G. In fact, the approximate number of 5G wireless connections is projected to increase from 10 million in 2019 to 1.01 billion in 2023 – a staggering compound annual growth rate (CAGR) of 217.2%. To better understand the role of 5G and how it has accelerated transformation across industries and encouraged widespread adoption of new technologies, let’s first break down what 5G is and how it works. “…the fifth-generation of cellular data technology, succeeding 4G and related technologies including LTE.” Leading tech giant Cisco adds to this definition, the function of 5G: “It is designed to increase speed, reduce latency, and improve the flexibility of wireless services.” To give you some perspective, here is a little history. The first generation of wireless cellular technology (1G) was introduced in the 80s. Mobile phones that employed 1G could only support voice calling because of the analogous nature of the 1G networks. 2G succeeded 1G, and with it came the ability for networks to provide additional services such as text, picture messaging, and MMS thanks to the digital radio signals used by 2G. Nokia emerged as a leading mobile brand during this 2G era. The dawn of 3G would forever change the world. 3G brought with it wireless voice telephony, mobile internet access, video calling, and even mobile TV. Apple changed the nature of mobile phones forever when they released the original iPhone on June 29, 2007. Today, most smartphones are running on fourth-generation (4G) systems that support HD mobile TV, 3D TV, video conferencing, IP telephony, and mobile web access. 5G promises to bring new capabilities that will create opportunities and transform the way businesses operate. 5G is about 20 times faster than 4G. However, there is more to it than just more rapid connection speeds. Users can expect extremely low latency, reliability, and power while supporting more devices. Now that we know the history of broadband cellular network, let’s see exactly how it works. How 5G works Essentially, the technology behind each generation is the same. The only differences between subsequent generations are the new frequency bands, non-backward-compatible transmission technology, and higher data rates. The spectrum in which 5G operates falls into 3 bands: low (<1 GHz), mid (1 up to 6 GHz) and, high (24 to 40 GHz) 5G is available in all of the band tiers, but only the high band enables download speeds from 1 to 3+ Gbps – which is the chief benefit of 5G, and what phone OEMs and carriers are touting. There are, however, notable issues with the high band tier: Significant band signal weakness High band signals don’t travel very far and can be obstructed by objects (buildings, trees, etc.). The further one is from a tower, the less reliable speed and coverage are. Conversely, low to mid bands have greater coverage and are more reliable but have slower download speeds, similar to what we see with LTE and, further down the spectrum, 3G. To maintain greater coverage for 5G speeds will require carriers to build more nodes and masts, which are expensive CapEx projects and consumers will likely cover these costs via higher priced subscriptions. 
High mobile phone battery depletion rates High band speed will drain batteries at a greater rate. This raises an important question – do consumers want slim, stylish smartphones that deliver high band performance, knowing that their phones will require more frequent charging? Or, will they accept bulkier, less aesthetically appealing products that deliver performance with the battery capacity that they’re accustomed to? While these are serious issues carriers and mobile phone manufacturers still need to address, the truth is 5G technology is already in full swing in many industries. Let’s take a look at some of the top 5G use cases according to industry. Specific 5G use cases by industry In 2021, more companies will invest in 5G to deliver highly customized, connected solutions that respond in real-time as is already being witnessed in the following industries. Energy & Utilities In this industry, grid monitoring, control, and protection make use of the 5G network to provide real-time flexible routing of electricity flows depending on generation and consumption levels in different parts of the electrical grid. In the energy domain, oil rig production analytics is enabled by equipping oil wells with IoT sensors that are connected to 5G networks that have the ability to send and receive data in real-time. This real-time analysis from the oil wells can identify distress signals that alert off-shore teams when the wells fall outside the ideal production ranges. Autonomous vehicles are one the biggest use cases enabled by 5G – in particular cellular vehicle-to-everything (C-V2X) technology. This tech allows these self-driving cars to connect to the 5G network and transmit data through vehicle-to-vehicle (V2V), vehicle-to-network (V2N), vehicle-to-infrastructure (V2I), and vehicle-to-pedestrian (V2P) technologies. Within the agriculture sector, 5G use is seen in automated machinery which can be operated remotely from a central control center over the 5G network. Farming drones for monitoring is another 5G use case within the agricultural industry. Drones can now be deployed to monitor livestock, automated machinery, and fields. A high-quality, live video feed is sent over the 5G network to the central control room for management. Manufacturing automation is a classic 5G supply chain use case example. Manufacturing automation refers to automated industrial devices and machinery. These automated machines sometimes called automated guided vehicles (AGVs) are currently being employed along 5G networks by numerous manufacturing companies. The mobility management, coverage and quality of service assurance of 5G networks provide the reliable communication needed by these types of automated guided vehicles (heavy burden carriers, unit load handlers, forklifts, etc.) to do their jobs safely. For example, 5G is assisting these AGVs to move across factories and warehouse floors more efficiently. This generation of the technology and new capabilities has some new possibilities and promise. The US is currently championing a huge push to build out these networks to approved providers. As the US pursues the connected future, there needs to be strong focus on security of connections, devices and applications. It is important that the 5G networks are built securely from the start. In March 2020, the White House developed the National Strategy to Secure 5G, outlining how the Nation will safeguard 5G infrastructure domestically and abroad. 
As of January 2021, the National Strategy focuses on securing 5G and ensuring the US is equipped to continue the development, deployment, and management of secure and reliable 5G infrastructure.

5G is a powerful technology that is not only accelerating transformation and the adoption of new technologies in our communities but is also changing the B2B and B2C landscape. With all the 5G apps and infrastructure being rolled out, enterprises need to stay on top of their game so they are not left behind. Liberty Advisor Group has a team that is both experienced and professional, with extensive knowledge of 5G and deep industry insight. If you would like to schedule a consultation with one of our strategists to map a way forward for your business, contact us today.

ABOUT LIBERTY ADVISOR GROUP

Liberty Advisor Group is a goal-oriented, client-focused, and results-driven consulting firm. We are a lean, handpicked team of strategists, technologists, and entrepreneurs: battle-tested experts with a steadfast, start-up attitude. We collaborate, integrate, and ideate in real time with our clients to deliver situation-specific solutions that work. Liberty Advisor Group has the experience to realize our clients' highest ambitions. Liberty has been named a Great Place to Work, one of the Best Places to Work in Chicago, and one of FORTUNE's Best Workplaces in Consulting and Professional Services.
<urn:uuid:0f0514ab-79c2-4722-a3be-50023cd2ae3d>
CC-MAIN-2022-40
https://libertyadvisorgroup.com/insight/5g-accelerating-transformation-adoption-of-new-technologies/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334579.46/warc/CC-MAIN-20220925132046-20220925162046-00760.warc.gz
en
0.943161
1,776
2.65625
3
Artificial intelligence (AI) is one of the most exciting future technologies we’re encountering. And it’s recognized as such by almost all businesses worldwide. 56% of all respondents to a massive McKinsey survey of businesses report AI adoption in at least one function, up from 50% in 2020. Adoption is most common are service operations, product and service development, and marketing and sales – but has potential usage in almost all areas. AI allows businesses to operate more quickly, smoothly, and with fewer mistakes – in theory. There are plenty of issues that have been identified with bias in artificial intelligence in the past, but when operated correctly, it’s something that is meant to reduce many of the issues that have plagued the way businesses work in the past. That’s in theory. In practice, AI does have its own issues – and some of them we haven’t yet seen occur. That’s what makes it important to acknowledge the potential risks – as well as the potential benefits – of using AI in our lives. Where we are at AI has become one of the main ways in which we live and is touching more and more of our lives. From Netflix recommendations to suggesting the best possible routes as we travel, to offering personalized financial services that are tailored to our needs and interests, there’s a near-endless list of benefits to everything we do by using AI. It’s becoming ubiquitous, common in all areas and aspects of how we live. And with that comes more understanding and acceptance of AI. We’re increasingly less scared of it and more trusting that the results it comes to must be correct because there’s no way that it could be wrong. But that overlooks some of the issues that do exist with AI that haven’t yet been identified. One of the key ones is the fact that AI remains, for all its increased usage, a black box to the majority of people. We simply don’t understand how it works or why it works – and instead just trust that it does. That works well when the AI systems themselves do: but it doesn’t account for the potential nefarious or malicious changing of AI without us realizing. When AI goes wrong One key problem with a world run by AI is that we’ve become so reliant on it that we’re not necessarily conscious of how it works. Even some of the largest websites that use AI struggle to harness their algorithms, letting them run wild – as the radicalization of many of us through websites like YouTube, Twitter, and Facebook have shown. But that lack of transparency over how algorithms actually operate and how AI shifts and shapes our perceptions comes with other risks. If even the platforms that operate them don’t understand how they work, then it is challenging to understand how they’re potentially being used for malicious purposes. If people don’t know how an AI algorithm is meant to work correctly, then it’s impossible to know when it’s working incorrectly. One of the biggest cyber challenges associated with AI is this potential malicious misfiring of AI systems. It’s eminently possible for someone to hack into an AI-powered system and adapt it for their own purposes. That could mean pushing videos that will radicalize an individual without them realizing it or promoting adverts that push one point of view through social media platforms. For that reason, politicians and campaigners worldwide are promoting the idea of algorithmic and AI transparency. 
In November 2021, the UK government proposed that its public sector algorithms be made transparent, so that people could see how they work and whether they discriminate against anyone. If the approach works well, it could be extended to businesses, making things more equal for all. It is one tool to help ensure that AI algorithms don't misfire and cause more harm than good.
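What might such transparency look like in practice? One small, hedged example: given a log of automated decisions, you can at least measure whether outcomes differ sharply between groups. The Python sketch below computes per-group approval rates and the commonly cited four-fifths (80%) ratio; the data is invented, and this is a screening heuristic rather than a full fairness audit.

```python
# Toy transparency check: compare approval rates across groups in a log of
# automated decisions and flag a large gap. Records here are invented.
from collections import defaultdict

decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", True), ("group_b", False), ("group_b", False),
]

totals, approved = defaultdict(int), defaultdict(int)
for group, ok in decisions:
    totals[group] += 1
    approved[group] += int(ok)

rates = {g: approved[g] / totals[g] for g in totals}
print("approval rates:", rates)

ratio = min(rates.values()) / max(rates.values())
print(f"disparate impact ratio: {ratio:.2f}",
      "(below the 0.8 rule of thumb)" if ratio < 0.8 else "")
```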
<urn:uuid:d7b4012a-5591-4803-b4b7-8e126887ebf4>
CC-MAIN-2022-40
https://cybernews.com/security/ai-has-been-heralded-as-the-future-but-what-are-the-risks/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00760.warc.gz
en
0.959927
802
2.5625
3
Department of Energy Secretary Rick Perry wrote last May that “the future is in supercomputers,” but until recently, only a handful of agencies have been able to tap into that kind of power. Traditionally, high performance computing (HPC, or supercomputing) has required significant capital investment — as much as $400 million to $600 million for large-scale supercomputing infrastructures and operating expenses. It also called for small armies of scientists and engineers skilled in HPC application development. Precious few agencies had these resources and technical expertise. But times have changed, according to Ian Lee, open source lead at Lawrence Livermore National Laboratory. “We’ve been doing open source on big Unix systems for more than 20 years. Back then, if we produced open source software for our supercomputers, we were the only ones who could use that software,” he said. “Now, the software can be ported out and mainstreamed, and it’s a lot easier to make use of supercomputing in other places.” Open source and the hybrid cloud: de facto technologies for HPC Together, open source and the hybrid cloud are bringing HPC within reach of organizations that may not possess the budget or supercomputing expertise inherent in larger agencies. Now, even small agencies can harness HPC to elastically process large amounts of information in short amounts of time. They can use the combined power of open source and hybrid cloud environments — which combine on-premise and public cloud services — to open the door for new and exciting advancements in data science. This evolution started when Linux became the de facto software that powers all of the TOP500 supercomputers. Meanwhile, organizations began to look beyond Unix machines and turn to the cloud. Today, open-source software is behind Summit and Sierra, the world’s two most powerful supercomputers, and community initiatives like OpenHPC exist to support open source’s contributions to supercomputing. HPC has become deeply intertwined with open-source technologies like OpenStack, which provides a flexible and highly scalable infrastructure for supercomputing. Open-source technologies have also informed and been used in cloud technology stacks, allowing CPU resources to be scaled out. In fact, HPC workloads no longer need to run on bare-metal hardware. These workloads can often be deployed in the cloud using software containers that are easier to provision, access, orchestrate and scale as needed. By scaling out the compute assets available, agencies can dynamically increase resources as needed, which can save taxpayer dollars. Scalable open technologies have also flowed the other way, gaining traction and adoption within the established supercomputing facilities. Linux container technology, in particular, has seen interest and adoption by developers seeking to run software applications on large HPC systems. We’re seeing this evolution take shape before our eyes and in different forms. For example, there is support for high-performance graphics processing units in Kubernetes, an open-source container orchestration tool. Kubernetes enables containerized applications to be managed and scheduled with on-demand access to the same types of compute hardware routinely provided for traditional HPC workloads. These GPU resources are used extensively in machine learning and artificial intelligence. Open-source machine learning frameworks like TensorFlowenable high-performance numerical computation. 
They can be easily deployed across a variety of platforms, including GPUs, and are readily containerized. Open source has always been core to HPC efforts. Now, with the potential of hybrid cloud and Linux containers, the use case for HPC is wider than ever before. Opening new possibilities Consider the possibilities for federal agencies that, until now, did not have access to HPC. The Department of Housing and Urban Development could use HPC to advance intelligence that could help the agency make better decisions pertaining to people’s housing needs. The National Institutes of Health could test different medicines’ effects on the human heart and build upon the cardioid application developed at Lawrence Livermore National Laboratory and available now as open-source software. U.S. Strategic Command could more accurately project the fallout of satellite collisions, akin to work being done at Lawrence Livermore. “Many agencies are also using HPC and the cloud for cyber defense,” Lee said. “For example, the Department of Homeland Security is using the cloud to do vulnerability scanning across government agencies. Rather than having a bunch of dedicated resources that are only used once a week, they’re using the cloud to spin up resources, perform scans and shut those resources down,” he said. “This approach saves taxpayer dollars by only running the compute resources when they are using them, and not when they aren’t.” Agencies no longer must make capital outlays for HPC hardware that can depreciate quickly. Instead, they can take advantage of the greater flexibility provided by hybrid cloud, including on-demand elasticity. Like DHS, they can spin up HPC-scale resources whenever the need arises, perform parallel experiments, spin them back down when finished, and not spend money whenever there’s a lull in research. This can help to lower the cost of failure, which aligns well with the “fail fast to succeed sooner” credo of open source. Using a hybrid cloud strategy, agencies can experiment quickly on a small scale with lower risk, and then bring those workloads on-premises to scale out the solution. Indeed, a hybrid cloud is appropriate for this type of work, as some workloads can perform better and more cost-effectively using on-premises hardware. The key is to build HPC solutions on an open substrate that enables these workloads to go from physical to virtual to cloud and back again as business needs and technologies change. As Secretary Perry has written, the future is in supercomputers. But that future can and should be open to everyone. The combination of open source and the hybrid cloud has the potential to give all federal agencies access to HPC resources.
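As a concrete taste of the containerized, GPU-aware workloads this article describes, the sketch below uses TensorFlow (assuming the tensorflow package is installed in the container image) to report visible GPUs and run a small matrix multiplication that TensorFlow will place on an accelerator when one has been scheduled to the pod.

```python
# Sketch: check for GPU resources inside a container and run a toy workload.
# TensorFlow places the matmul on a GPU automatically when one is available,
# otherwise it falls back to the CPU.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print(f"{len(gpus)} GPU(s) visible to this container")

a = tf.random.normal([4096, 4096])
b = tf.random.normal([4096, 4096])
c = tf.matmul(a, b)
print("result checksum:", float(tf.reduce_sum(c)))
```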
<urn:uuid:1d977dd5-7133-4bd0-b3fc-16ebee29caaa>
CC-MAIN-2022-40
https://resources.experfy.com/bigdata-cloud/opening-supercomputing-to-all-agencies/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334942.88/warc/CC-MAIN-20220926211042-20220927001042-00760.warc.gz
en
0.93923
1,252
2.65625
3
Cloud computing, simply stated, is the ability to use files and applications over the Internet instead of hosting, storing, or processing them on locally managed hardware Josh Manchester, Forbes Magazine Cloud computing is a catchy buzzword that every online provider has tried to claim. The result is a lot of consumer confusion. From a high level, cloud computing is any technology that makes compute resources available to subscribers as an online service. In practical terms, it’s the outsourcing of IT services and infrastructure to a remote data center that’s managed by another company. That company, the cloud provider, is responsible for installing, configuring, and maintaining all hardware and software in their data center, in addition to backups, physical security, environmental controls, disaster preparedness, etc. If you’ve ever used webmail, online storage like DropBox, or had someone else host your company website, then you’ve used cloud computing. Cloud computing also encompasses cloud apps and cloud desktops. Our cloud solution focuses on transforming traditional Windows apps and desktops into cloud apps by leveraging Citrix and VMware technologies.
<urn:uuid:d2c54da0-d8c7-4432-9b07-64f67e5b4108>
CC-MAIN-2022-40
https://vcit.ca/what-is-cloud-computing/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337529.69/warc/CC-MAIN-20221004215917-20221005005917-00760.warc.gz
en
0.912985
229
2.953125
3
How to live more securely in a connected world: If You Connect It, Protect It Written by Allie Johnson for NortonLifeLock One fact that has hit home during the coronavirus pandemic: Technology is central to our ability to stay connected. You might stay in touch with family and friends via video chat and use online tools to do your job. You might live in a smarter home. Areas of your life that used to be divided into neat categories — online and offline, work and home — have merged. That’s why now is a good time to boost your online security. October is National Cybersecurity Awareness Month (also known as NCSAM, pronounced N-C-SAM), which asks you to do your part to promote cyber safety. This year marks the 17th NCSAM from the U.S. Cybersecurity & Infrastructure Security Agency and the National Cyber Security Alliance. This year’s theme: “Do Your Part. #BeCyberSmart.” It’s hashtag ready, so you might consider posting on social media to encourage your friends to join you. One message to get across: “If you connect it, protect it.” If You Connect It, Protect It The phrase “If You Connect It, Protect It” is a call to secure any object that’s connected to the internet. This applies to the Internet of Things — IoT, for short — everyday objects that can connect to the internet and share data with each other. For instance, you may have a smart speaker that can tell you how to make a perfect boiled egg, a digital doorbell that lets you look at your smartphone to see who’s on your porch, or home lights you can flip on at night while you’re on vacation. It is anticipated that more than 55 billion IoT devices will be connected to the internet by 2025, according to the technology market research firm IDC. All this connectivity opens up plenty of opportunity for cybercriminals who want to spy on you from your baby monitor, break into your home through your digital door lock or recruit your old device to carry out a cyberattack. That’s why “If You Connect It, Protect It” is a good rule to live by. Here are a few IoT security tips for protecting your connected items: Do an IoT survey of your home. Start by taking a look around your house and making a list of all your IoT devices. Consider replacing old or questionable devices. For example, you may want to get rid of that novelty camera you bought to see what your dog does while you’re away. It could be vulnerable to hackers who could spy on you and your family. When you buy a new smart device, take these steps. - Choose a device from a reputable manufacturer. - Make sure the device can be protected with a password you set. - Find out what kinds of data the device will transmit and store, and whether that data will be encrypted. - Determine how long the company will provide regular software updates for the device. - Check to see if the manufacturer offers two-factor authentication, which typically requires you to enter your password plus a one-time passcode to log in. One example: Amazon-owned Ring, a maker of video doorbells and security cameras, recently made two-factor authentication mandatory after hackers took over cameras in U.S. homes to yell threats at residents. One hacker told a little girl he was Santa Claus and ordered her to smash items in her bedroom. Take the secure route(r). A Wi-Fi router offers an entry point that could allow a cybercriminal to access your IoT and home network. Setting up your router with security in mind is like putting a good lock on that door. Here are some tips on setting up a router securely. 
- Choose the right router for your needs. Make sure your router comes from a reputable company and has a firewall, which monitors and allows or blocks users trying to access your Wi-Fi network. This will help protect you against cybercriminals and nosy neighbors. - Swap the name and password. Routers often come with standard names and default passwords that you should change immediately. Choose a name not associated with you in any way. For example, never include your last name or street address. Add a unique and strong password. - Choose the best kind of encryption. Choose WPA2 encryption if available because it’s the most secure kind. If you have the option, also add AES (Advanced Encryption Standard) for extra security. - Stick to an update schedule. Set up notifications for firmware updates if necessary and make sure to run regular updates on your router. Make your accounts hard to crack. Use unique strong passwords or passphrases on all of your IoT apps and devices, and consider using a password manager to help you keep track of your login credentials. Also enable two-factor authentication if you have IoT accounts that offer this as an option. Just like with other apps and software, it’s important to keep your IoT devices current with the latest updates to close security loopholes that could leave you vulnerable to hackers. Securing devices at home and work The pandemic brought unexpected changes, and many people are now working from home. The lines between our home and work lives and technology are less distinct than ever, so it’s key to secure your personal devices, however you use them. Keep your system secure. Start by protecting the accounts and personal devices you use for personal and work purposes. - Safeguard your devices. Install security software from a reputable company on all devices, including computers, smartphones and tablets, to block malware and viruses. Security software also can help prevent spoofing attacks, in which a criminal sends you a communication disguised to look like it came from a trusted sender. If you use a personal device to conduct business activities, check with your boss or IT department to see if your company has a preferred provider of security software that works well with your company network. - Get regular updates. Make sure to perform regular updates on your apps, software and operating systems. These updates often patch security glitches that could allow hackers to gain entry. - Turn on multifactor authentication. Just as you want this added protection on your IoT accounts, you also should add it to your email account and any other personal or work accounts you want to secure. Double down on data protection. It’s important to protect both your personal and work data on personal devices. This may help keep identity thieves and other cyberthieves from getting your personal information and other sensitive data. - Get video-chat savvy. You may be using video chat technology to meet with your team and have happy hour with your best friends. Make sure to get familiar with any security problems with video chat tools you use. Also learn the privacy rules of the tool and how to use it in the most secure way. - Follow corporate data security rules. Make sure you’re clear on your company’s data security policies and that you follow them whenever you’re working from home. - Call for back up. It’s important to regularly back up your data so valuable information doesn’t get lost. You can back up your personal data to a hard drive or to the cloud. 
Check with your company about whether and how to back up work data to make sure you’re not violating company policies designed to keep data safe. - Shield extra sensitive information. Check with your boss and IT department about using extra protection for very private documents or other information. For example, you can add passwords and encryption for added security. Stop phishing, vishing and smishing attacks. In a phishing scam, a cybercriminal sends an email to get you to click on a link, provide private information, or send money. Cybercriminals have stolen $3 billion from businesses through email phishing scams since 2016, according to a Better Business Bureau study. These crooks may try to exploit the fact that you can’t run down the hall to ask your boss a question when you’re working from home. - Get wise to phishing. Learn about phishing scams and how they work, as well as their counterparts, vishing (which involves voice) and smishing (which involves SMS text messaging), so you don’t get tricked. - Verify requests. A common phishing tactic: send a fake email from “the boss” asking an employee to deposit money into a new bank account or send over sensitive information. Double-check any email, text or other message that asks you to take an unexpected action. Keep your health between you and your doctor. Another sudden change brought by COVID-19 is an increase in telehealth appointments. As you juggle home and work, you may be seeing your doctor by video chat. Here are steps you can take to avoid privacy and security issues with telemedicine. - Quiz your doctor. Ask whether the provider saves the video of telemedicine sessions with patients. If so, ask how this data is stored and how long it will be kept. - Don’t overshare. Don’t send private information about your health or anything else via email or text. Save your sharing for the actual telemedicine appointment. - Do a tech checkup. Find out if the video chat technology being used for the appointment uses end-to-end encryption, which mixes up the data while it’s being transmitted and greatly increases your privacy and security. So remember: Do Your Part. #BeCyberSmart. Connected devices may be a big part of your life, and technology advances will offer new opportunities and present new security challenges. So set a time to review your tech, get up to date, and make sure you have protection for everything you connect now and in the future.
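As a small practical follow-up to the "unique strong passwords or passphrases" advice above, here is a hedged Python sketch (not part of the original article; the word list is illustrative only) showing how the standard secrets module can generate a random passphrase:

import secrets

# Illustrative word list only; a real generator would draw from a much larger
# dictionary, such as the EFF long word list of 7,776 words.
WORDS = ["copper", "lantern", "orbit", "velvet", "thistle", "harbor",
         "quartz", "meadow", "falcon", "ember", "willow", "canyon"]

def make_passphrase(num_words: int = 4) -> str:
    # secrets.choice uses a cryptographically strong random source, unlike the
    # random module, which is not suitable for generating credentials.
    return "-".join(secrets.choice(WORDS) for _ in range(num_words))

if __name__ == "__main__":
    print(make_passphrase())   # e.g. "ember-quartz-willow-falcon"

Generating a different passphrase for each device or account, and keeping them in a password manager, follows the spirit of the tips above.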
<urn:uuid:aac88f43-f361-42f5-825f-02047433e097>
CC-MAIN-2022-40
https://www.nortonlifelock.com/us/en/partner/resources/articles/how-to-live-more-securely-in-a-connected-world/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00760.warc.gz
en
0.919897
2,066
2.71875
3
You’ve just purchased some shiny new Wireless Access Points from ‘Vendor X’. Vendor X has promised you lightning-fast, wired-like connection speeds. Product datasheets speak of incomprehensible Wi-Fi data rates and sky-high maximum client connection limits: it all sounds great! You deploy the APs which are now in production, so what’s the first thing you do? Run a speed test, of course! The tests report speeds that outperform your previous network, but it’s still a fraction of what you expected. You also notice that your video calls are still buffering when roaming between buildings – What’s gone wrong? One of the most overlooked aspects of Wi-Fi design is understanding the wireless capabilities of your mobile devices. When designing a wired network, we seldom consider what features our devices support. However, the wireless world is a completely different beast. At the time of writing, the IEEE 802.11 Wi-Fi standard has seen 34 amendments and modifications, and there are an additional 8 draft amendments awaiting ratification in the near future. Some of the technologies introduced with each amendment have no relevance to enterprise Wi-Fi networks, but many amendments are relevant and should be utilised where possible. In this blog, I’ll share some of the most common wireless capabilities of mobile devices impacting performance, and why you should care. One of the most misunderstood concepts surrounding Wi-Fi is the requirement to match the transmit power between mobile devices and APs. Consider the analogy of two people having a conversation across a field. One person is speaking through a megaphone (the AP), while the other person (mobile device) is speaking at a regular volume. The mobile device can easily hear the AP, but the AP may not be able to hear the mobile device: this often results in one-way communication. The image below is a simplified representation of the issue. Due to their small form factor, mobile devices have a limited battery capacity. One of the easiest ways for manufacturers to decrease power consumption is to limit the transmit power of the devices’ Wi-Fi radios. Far too often, we encounter customers complaining of having ‘5 bars of signal’ but slow speeds. This is a classic symptom of an AP’s transmission power being set to a level well beyond the capabilities of the mobile devices. Knowing the minimum and maximum transmit power values of all wireless devices in a network is a critical piece in determining the AP placement as part of the design process. Thankfully, the receive sensitivity of mobile devices has improved dramatically so it’s no longer a major issue, especially with high-density deployments being much more common. Multiple-Input Multiple-Output (MIMO) is a technology introduced to 802.11n in 2009 and has been a staple for the Wi-Fi standards that have followed. MIMO utilises multiple radios and antennas, and allows for the transmission and reception of multiple simultaneous data streams by exploiting an old undesirable phenomenon called multipath. The end result for users is a drastic increase in data rates: a device capable of receiving two unique data streams can effectively receive double the throughput. A device capable of receiving three unique data streams can effectively receive triple the throughput. The current Wi-Fi standard allows up to eight MIMO spatial streams, but most high-end mobile devices only support two or three streams. 
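As a rough, hedged illustration of that scaling, the short Python sketch below multiplies a nominal per-stream rate by the number of spatial streams. The 433 Mbps figure is the commonly quoted 802.11ac PHY rate for a single stream on an 80 MHz channel; real-world throughput is always noticeably lower than these headline numbers.

# Nominal 802.11ac PHY rate for one spatial stream on an 80 MHz channel
# (MCS 9, short guard interval). Actual user throughput sits well below the
# PHY rate because of protocol overhead and airtime contention.
PER_STREAM_MBPS = 433

for streams in (1, 2, 3):
    print(f"{streams} spatial stream(s): ~{streams * PER_STREAM_MBPS} Mbps PHY rate")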
Some devices, despite supporting standards which allow for MIMO operations, only support a single spatial stream – effectively disabling MIMO. Understanding your device’s MIMO capabilities is key to determining your maximum achievable throughput. Wireless networks typically support operations in two Radio Frequency (RF) bands – 2.4 GHz and 5 GHz. A wireless device will transmit in one band, or both, with a radio (transmitter and receiver) assigned to each. The RF bands used for wireless networks have been subdivided into equal, smaller bands and designated their own channel number. The 2.4 GHz band allows for 3 non-overlapping channels, while the 5 GHz band allows for 23 non-overlapping channels. Your wireless infrastructure will allow transmission on all permitted channels within your region; however, your mobile devices may not support them all. By knowing which channels your devices don’t support, you can make sure that your APs don’t transmit on those channels. What happens when you allow an AP to transmit on a channel that’s not supported by a device? You effectively create a blackspot for the device, as it cannot hear the AP’s signal – it has no idea the AP is there. Channel support for mobile devices has greatly improved over time, but there are still many devices on the market that don’t support the full range of 5 GHz channels. It’s all well and good for a mobile device to be capable of blazing-fast connection speeds while stationary, but what about when you are on the move? Can the device maintain those connection speeds when transitioning between APs? Ideally, devices should also roam to an AP which is not under load, or experiencing other performance issues. There are technologies available to assist with roaming, but device support varies. The following sections provide an overview of the three main roaming technologies available today. Assisted Roaming (802.11k) defines ways in which the Wi-Fi infrastructure and client stations can work together to determine the best AP for clients to roam to at any given time. The APs will provide moving clients with an optimised list of roaming targets as the signal strength of the connected AP weakens. Prior to 802.11k, a client station would generally roam based on signal strength alone, connecting to whichever AP transmitted the strongest signal. Fast Transition (802.11r) has brought much-needed roaming efficiencies for WPA2-Enterprise (802.1x) networks. 802.11r minimises the disruption to mobile and latency-sensitive applications while moving between APs. By storing encryption keys for authenticated clients on every AP, clients no longer need to go through the full authentication process when they connect to a different AP. Without 802.11r, dropped voice or video calls and degraded performance are common for devices on the move. BSS Transition Management (802.11v) defines ways in which the Wi-Fi infrastructure and mobile devices can exchange information about the state of the RF environment and topology to assist clients with roaming decisions. Client stations also have the ability to exchange information regarding the RF environment amongst themselves. This allows client stations to be more aware of their surroundings and ideally improves the overall performance of the network. It might sound like an obvious tip to update to the latest and greatest technologies, but Wi-Fi is developed at breakneck speeds.
Ensuring that your infrastructure and mobile devices support the latest 802.11 physical layer standards allows you to keep up with the ever-growing performance demands of network services and applications. While overall throughput certainly increases with each standard, the last two standards focus on much-needed traffic management and efficiencies. For example, the up-and-coming 802.11ax (Wi-Fi 6) introduces technologies that could be a game-changer, if fully supported by vendors. Some of the best features to appear in 802.11ax include OFDMA, Target Wake Time (TWT) and BSS Colouring. 802.11ac (retroactively labelled Wi-Fi 5) made some large leaps over 802.11n as well. While 802.11n allowed APs to utilise 40 MHz channel widths, 802.11ac introduced 80 and 160 MHz channels. Each step up effectively doubles the throughput from the previous. I am a big believer in using 20 MHz channels to minimise channel re-use and achieve cleaner RF signal. I really think anything more than 40 MHz is mostly market hype! There are corner cases where 80 MHz and 160 MHz channels could be used if you truly believe you need to push multiple Gbps. However, enabling wider channel widths without understanding the impact on your environment can be detrimental to Wi-Fi performance. Proceed with caution! You’re now aware of some of the key wireless capabilities that can affect performance, but how do you know what your device does or doesn’t support? While there is no single source of truth, there are several places you can look. For devices running Microsoft Windows, gathering Wi-Fi data couldn’t be simpler. Using the commands netsh wlan show drivers and netsh wlan show wirelesscapabilities, you will find a wealth of information regarding wireless capabilities. If you want to filter out the noise and display the info that really matters, use the following command: netsh wlan show all | findstr /c:"Firmware Version" /c:"DOT11k" /c:"Transition" /c:"MU-MIMO" /c:"Spatial Streams" /c:"Management Frames" /c:"Driver :" /c:"Vendor " /c:"Provider" /c:"Date" /c:"Version :" /c:"Radio types" The output will list details such as the firmware and driver versions, radio types, spatial stream support and roaming capabilities. If Windows doesn’t list support for a technology but you expect it to, this could be a driver issue. Try upgrading the Wi-Fi adapter’s drivers to the latest version and execute the above command again. For non-Windows devices, one of my favourite places to determine wireless capabilities is the list compiled by Mike Albano. Mike and others in the Wi-Fi community have used packet analysis software to determine the level of adapter support. The ‘association request’ frame sent by a mobile device to an AP contains a lot of handy information, including channel support and transmit power capabilities not known by Windows. The level of detail listed in adapter documentation varies between vendors, but it should always be used as a reference. Some Wi-Fi adapters, for example, have been found to advertise support for certain roaming technologies even though the official documentation says otherwise. You should always contact the vendor if you are unsure which features a device supports. Another great source of information for wireless capabilities is the Wi-Fi Alliance. One of the Wi-Fi Alliance’s primary tasks is to ensure interoperability of Wi-Fi technologies by providing certification testing for devices. Their website has a product finder tool, allowing you to search for an adapter and determine its level of product certification.
It is important to note that the Wi-Fi Alliance may list its certification programs by the marketing term given to technologies, and not necessarily the name of the IEEE standard it represents. A certification listing, for example, may show that an adapter supports Fast Transition as part of the Agile Multiband certification; the IEEE-recognised name for Fast Transition is 802.11r. Whether you are a Network Administrator, CIO, or Solutions Consultant, knowing the wireless capabilities of today’s mobile devices is more important than ever. By knowing what features devices are capable of, you can set and manage realistic expectations regarding network performance and functionality. No matter how much you have invested in your network infrastructure, sometimes the limiting factor is the end devices themselves. If you’re planning a Wi-Fi upgrade, or simply want to optimise your Wi-Fi performance, why not reach out to the experts at Data#3. Data#3 is both a Cisco Gold Partner and an HPE Aruba Platinum Partner, so you can be sure you are choosing the right people for the job.
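Circling back to the Windows commands discussed above, a short script can make the netsh output easier to collect and compare across a fleet of laptops. The following Python sketch is a hedged illustration: the field labels in netsh output vary between Windows versions and drivers, so treat the keyword list as an assumption to adapt.

import subprocess

# Keywords to pick out of "netsh wlan show drivers" output. These labels are
# assumptions based on common Windows builds and may differ on your machines.
KEYWORDS = ("Driver", "Vendor", "Radio types supported", "Number of supported bands")

def summarise_wlan_driver() -> None:
    output = subprocess.run(
        ["netsh", "wlan", "show", "drivers"],
        capture_output=True, text=True, check=False,
    ).stdout
    for line in output.splitlines():
        if any(key in line for key in KEYWORDS):
            print(line.strip())

if __name__ == "__main__":
    summarise_wlan_driver()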
<urn:uuid:723536f5-7444-4df0-9448-ea25555b9959>
CC-MAIN-2022-40
https://www.data3.com/knowledge-centre/blog/do-you-know-the-wireless-capabilities-of-your-mobile-devices-heres-why-you-should/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334596.27/warc/CC-MAIN-20220925193816-20220925223816-00160.warc.gz
en
0.922876
2,445
2.703125
3
If you've spent any time reading tech news in the past few years, you know that nothing has been more anticipated than the Internet of Things (IoT). Sure, drone delivery and autonomous vehicles have had their moments in the spotlight, but these technologies will also be brought into the fold of IoT. American research and advisory firm Gartner, Inc. anticipates there will be 21 billion IoT-connected devices by 2020. As you can imagine, this spans a number of industries including healthcare, manufacturing, consumer tech, business tech, logistics and everything in between. This is exciting news for a lot of people because it means that various devices, systems and services could all work in concert someday soon. But, unfortunately, there is a downside to IoT - primarily that it lacks even the most basic levels of cybersecurity measures. This begs the question: Does IoT need regulation? And who should set the standards of IoT resilience? How can businesses employ IoT technologies without opening their networks to disaster? CyberPolicy examines these questions and more below. And remember, you can visit CyberPolicy for your free cyber liability insurance quote. IoT: Welcome Innovation or a Cause for Concern? Although IoT is still in its infancy, it is already being used to address a number of challenges faced by various industries. For example, IoT in healthcare is employed to automatically monitor patient devices and implants, enhance drug management and share lifesaving information across organizations and providers. But on the flipside, IoT is also being used to infiltrate private networks and launch cyberattacks. For instance, the 2016 Dyn distributed denial-of-service (DDoS) attack that knocked hundreds of websites off the internet was made possible by IoT devices. Essentially, the attacker leveraged a botnet of IoT devices (which is kind of like an army of malware-infected zombie computers) to flood Dyn's DNS until it collapsed from an overload of phony web requests. One of the ways hackers turn good devices into zombies is through the default passwords shipped on popular IoT devices. This presents us with two major problems, the first being that insecure IoT devices can become a weapon for hackers, and the second that IoT provides additional entry points for cybercriminals. If a hacker can breach one web-connected device on your network, it's easier for them to steal data or compromise operations once they're inside. It's like having a home with dozens of doors to the outside, but no locks to keep intruders out. Seeking a Solution. So, who is going to clean up this mess? Well, the FCC is already stepping in to develop security regulations for IoT. "The large and diverse number of IoT vendors, who are driven by competition to keep prices low, hinders coordinated efforts to build security by design into the IoT on a voluntary basis," the FCC says. "Left unchecked, the growing IoT widens the gap between the ideal investment from the commercial point of view and from society's view." However, some worry that government regulations could impede innovation and progress and would prefer to see the tech industry develop its own standards and protocols. But as the FCC statement said, tech companies are generally more focused on quick rollouts and eye-catching functionalities than privacy and defense. In the meantime, it is up to individual businesses to manage their own cybersecurity measures when adopting IoT.
This includes developing strong and unique passwords to replace default logins for IoT products and web routers; siloing employee permissions and services to prevent further incursion; implementing greater vigilance for security incidents through IT teams and automated threat detection; and investing in cybersecurity insurance from a reputable provider. You may not be able to block every incoming attack or potential data breach, but CyberPolicy can ensure that your organization stays financially healthy through it all. Visit CyberPolicy for more information.
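As a hedged sketch of the "replace default logins" advice above (the device models and credentials below are made up for illustration, not taken from any real product), a business could keep an inventory and flag anything still using factory credentials:

# Purely illustrative inventory; a real audit would test logins against the
# devices rather than store plaintext passwords in a script.
DEFAULT_CREDENTIALS = {
    "camera-model-x": ("admin", "admin"),
    "router-model-y": ("admin", "password"),
}

inventory = [
    {"name": "lobby camera", "model": "camera-model-x", "username": "admin", "password": "admin"},
    {"name": "guest router", "model": "router-model-y", "username": "admin", "password": "T7!rq-unique"},
]

for device in inventory:
    default = DEFAULT_CREDENTIALS.get(device["model"])
    if default and (device["username"], device["password"]) == default:
        print(f"WARNING: {device['name']} is still using factory credentials")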
<urn:uuid:a29fa963-3b42-4811-80a5-4a2c179d736b>
CC-MAIN-2022-40
https://www.cyberpolicy.com/cybersecurity-education/the-internet-of-things-should-there-be-regulation
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00160.warc.gz
en
0.950705
775
2.8125
3
Gail Ray is a 64-year-old honors student at the John T. Blong Technology Center at Eastern Iowa Community College. After Ray retired from her career as an architect, she spent two years studying mechanical design. During her time at the Blong Center, she became interested in 3D printing and 3D scanning technology. She had previously used photogrammetry, which uses digital photos to create 3D models, but she was looking for something more precise for a new project -- a detailed scan of a human. She enlisted the help of her professor, Brad McConnell, a mechanical design and solid modeling instructor who brought in Dana Green, an account manager at Exact Metrology and an expert in 3D scanning technology. It took 15 minutes for Green to scan the student, and about 30 hours to print a 9-inch bust. In this IEN Newsdesk interview with David Mantey, Ray, McConnell, and Green discuss how they made the project happen and the future of 3D scanning technology, covering scan quality, 3D scanning human models, how to prepare the file of a 3D scan, how to put together individual scans, and cleaning up the data.
<urn:uuid:b96b2239-4e25-46fb-b459-8a0f5f9381c1>
CC-MAIN-2022-40
https://www.mbtmag.com/home/video/21101881/ien-newsdesk-64yearold-student-3d-printed-a-peer
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00160.warc.gz
en
0.947959
243
2.609375
3
Data is a central resource in the 21st century. Only those who manage to extract value from their data will remain competitive. For this reason, many organizations are looking at innovative ways of using their data to create value. Data monetization refers to the process of identifying and marketing data or data-based products to generate monetary value. Data products (i.e., products based on raw, refined or analyzed data) are at the heart of data monetization. They can take many forms, including consumable data sets, analysis results and operational applications that contain analysis results. These can come as reports, extensions of existing products, digital platforms or can be incorporated into new business models. When discussing the use of data products, a distinction should be made between internal and external data monetization. Internal data monetization aims to improve internal processes such as marketing and customer experience or the maintenance of equipment. External data monetization involves using data to extend an organization’s product offering with data-driven services or business models to create new revenue streams. Data monetization is at an early stage of adoption but is expanding: 17 percent of companies have established data monetization initiatives, a further 12 percent are currently building prototypes and another 10 percent are still developing a concept. Large companies from retail, services, finance and banking are leading the way and, in general, data monetization is currently being implemented by larger companies. 25 percent of large companies and 23 percent of larger medium-sized companies responding to our survey have already launched data monetization products. In contrast, only 9 percent of small and 13 percent of small to medium-sized companies have already done so. Providing results for process improvement is the main way to monetize data: providing analysis results is the most common form of monetizing data. 40 percent of participants employ this type of data monetization, whereby data analytics is involved. The provision of data via reporting and benchmarking is almost as important, with 37 percent of respondents citing this type of data monetization. Less common methods include establishing digital platforms (22 percent), extending existing products (17 percent), providing new services (16 percent) and building new business models based on data (6 percent). Using commercial technologies: given that the most common way to monetize data is the provision of data via benchmarking and reporting, it comes as no surprise that the most common technologies used are BI software (86 percent) and data integration tools (70 percent). There is also a high proportion of custom developments for data monetization (54 percent). Commercial analytics software tools are used a little more (48 percent) than open source tools (40 percent) but the difference is quite small. In terms of back-end technologies, commercial software is the most commonly used – analytical databases (53 percent) are used more than twice as frequently as Hadoop technologies (25 percent). Embedded BI solutions are used in surprisingly few cases (38 percent). Benefits are tangible: data products can bring a broad range of benefits, from new revenue sources to a better understanding of customers and product improvements. New revenue sources are the most important benefit of data products, reported by 69 percent of respondents.
For 66 percent, the provision of new services is a benefit, and improved customer loyalty is cited by 63 percent of participants. Internal provision of the results of data analysis is a motivation for data monetization for more than half of our respondents (59 percent), as is the internal provision of data and benchmarks (53 percent). Better insight into customers and improved customer experience – for example by personalization – is achieved by about 50 percent of participating companies. Generating new data is only viewed as a benefit by 38 percent, and binding partners and suppliers by 31 percent. Challenges – data quality is key: not surprisingly, data quality is by far the most common obstacle to monetizing data, reported by 56 percent of respondents. Data security is a concern for 37 percent. This arises when data is shared and proper anonymization needs to be taken care of. Integrating data products into existing systems is a problem for 37 percent. Besides these data and technology-related concerns, respondents also reported challenges such as lack of management support (34 percent), lack of use cases (32 percent) and lack of professional know-how to implement data monetization initiatives (31 percent). Cost and a shortage of data were cited by 25 percent and 19 percent respectively.
<urn:uuid:8f339494-e492-4030-a471-23908776c914>
CC-MAIN-2022-40
https://bi-survey.com/data-monetization
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00160.warc.gz
en
0.943237
898
2.9375
3
Black History is American History. This year’s Black History Month theme, Black Health and Wellness, is timely given what we’ve collectively experienced for almost two years from the date the pandemic first appeared in Silicon Valley. It relates not only to the strides the Black community has made in improving physical health, but also to emotional and mental health. Successes include the establishment of leading institutions like Howard University and the Provident Hospital and Training School, as well as the impact of luminaries like Jane Wright, Daniel Williams, Mae Jemison, and even Michelle Obama. But as we all know, we’ve still got a way to go, as shown by the burden the past two years have placed on Black health care professionals. For a first-world nation, the US lags in providing equitable medical care, a gap that impacts not only people of color: we all pay the price for physical and emotional neglect. Closer to home, this year’s theme spans two facets that weave through our daily lives. The first is the work environment, which has been challenging at times. At Aryaka, we’ve maintained a focus on the emotional health of our employees, sharing best practices for remote workers with industry experts, establishing monthly wellness days, and bringing employees together via virtual town halls. And for recruitment, we have an emphasis on diversity, reaching out to the Latino community and women, in addition to the Black community. The second focus is on education, something that has been a major challenge for many of us over the past two years. As with the healthcare system, there are structural inequalities depending upon where you live and what schools your children attend, inequalities that result in daily stress. We must and will make improvements, and what the government plans for universal broadband access via the new Infrastructure Law is only one step in the right direction. Turning back to Aryaka, we’ve done everything in our power to support what has been at times a remote learning environment that changes daily. So, take this opportunity during Black History Month to better understand the contributions from our Black communities, as well as how to better overcome some of the challenges we collectively encounter! “A life is not important except in the impact it has on other lives.” — Jackie Robinson The theme for 2022 focuses on the importance of Black Health and Wellness. This theme acknowledges the legacy of not only Black scholars and medical practitioners in Western medicine, but also other ways of knowing (e.g., birthworkers, doulas, midwives, naturopaths, herbalists, etc.) throughout the African Diaspora. The 2022 theme considers activities, rituals and initiatives that Black communities have done to be well.
<urn:uuid:0704a294-f59d-4fcd-8422-526ae276910c>
CC-MAIN-2022-40
https://www.aryaka.com/blog/black-history-is-american-history-2022/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00160.warc.gz
en
0.949231
548
2.921875
3
Data Management Glossary: Cold Data Storage. Cold data refers to data that is infrequently accessed, as compared to hot data that is frequently accessed. As unstructured data grows at unprecedented rates, organizations are realizing the advantages of utilizing cold data storage devices instead of high-performance primary storage as they are much more economical, simple to set up & use, and are less prone to suffering from drive failure. For many organizations, the real difficulty with cold data is figuring out when data should be considered hot and kept on primary storage, or when it can be labeled as cold and moved off to a secondary storage device. For this reason, it’s important to understand the difference between data types to develop a solution for managing cold data that is most cost effective for your organization. Types of Data That Cold Storage is Typically Used For: Examples of data types for which cold storage may be suitable include information a business is required to keep for regulatory compliance, video, photographs, and data that is saved for backup, archival, big-data analytics or disaster recovery purposes. As this data ages and is less frequently accessed, it can generally be moved to cold storage. A policy-based approach allows organizations to optimize storage resources and reduce costs by moving inactive data to more economical cold data storage. Advantages of Developing a Cold Data Storage Solution - Prevent primary storage solutions from becoming overburdened with unused data - Reduce overall resource costs of data storage - Simplify data storage solution and optimize the management of its data - Efficiently meet governance and compliance requirements - Make use of more affordable & reliable mechanical storage drives for lesser used data. Reduce Strain on Primary Storage by Moving Cold Data to Secondary Storage. Affordable Costs of Cold Storage: When comparing costs for enterprise-level storage drives, the mechanical drives used in many cold data storage systems are just over 20% of the price that high-end solid-state drives (SSDs) can cost on average. For SSDs at the top tier of performance, storage still costs close to 10 cents per gigabyte, whereas NAS-level mechanical drives cost only around 2 cents per gigabyte on average. Simplify Your Data Storage Solution: A well-optimized cold data storage system can make your local storage infrastructure much less cluttered & easier to maintain. As the storage tools which help us automatically determine which data is hot and cold continue to improve, managing the movement of data between solutions or tiers is becoming easier every year. Some cold data storage solutions are even starting to automate the entirety of the management process based on rules that the business establishes. Meet Regulatory or Compliance Requirements: Many organizations in the healthcare industry are required to hold onto their data for extended periods of time, if not forever. With the possibility of facing litigation somewhere down the line based on having this data intact, corporations are opting to use a cold data storage solution which can effectively store critically important, unused data under conditions in which it cannot be tampered with or altered. Increase Data Durability with Cold Data Storage: Reliability is one of the most important factors when choosing a data storage solution to house data for extended periods of time or indefinitely.
Mechanical drives can be somewhat slower than SSDs in providing file access, but they are still quick enough at pulling files and offer much more budget room for creating additional backup or parity within your storage system. When considering storage hardware for cold data solutions, consider low-cost, high-capacity options with a high degree of data durability so your data can remain intact for as long as it needs to be stored. Learn more about your options when it comes to migrating file workloads to the cloud.
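To make the economics and the policy idea above concrete, here is a hedged Python sketch; the per-gigabyte prices are the illustrative figures quoted in this glossary entry, and the 180-day threshold and /data path are assumptions, not recommendations.

import os, time

SSD_PER_GB, HDD_PER_GB = 0.10, 0.02   # illustrative drive costs from the text above
COLD_AFTER_DAYS = 180                 # assumption: untouched for ~6 months means "cold"

def estimated_saving(cold_gb: float) -> float:
    # One-time drive-cost saving from holding cold data on mechanical drives.
    return cold_gb * (SSD_PER_GB - HDD_PER_GB)

def find_cold_files(root: str):
    cutoff = time.time() - COLD_AFTER_DAYS * 86400
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getatime(path) < cutoff:   # last access time
                    yield path
            except OSError:
                pass  # file vanished or is unreadable; skip it

if __name__ == "__main__":
    print(f"Tiering 10,000 GB of cold data saves roughly ${estimated_saving(10_000):,.0f} in drive cost")
    for path in find_cold_files("/data"):   # "/data" is a placeholder path
        print("cold:", path)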
<urn:uuid:fef6610e-db8f-47c9-a846-377a54cd9020>
CC-MAIN-2022-40
https://www.komprise.com/glossary_terms/cold-data/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00160.warc.gz
en
0.927527
753
2.875
3
NFS, the Network File System, provides remote access to shared file systems across networks. Designed to be machine, operating system, network architecture, and transport protocol independent, NFS enables the export or mounting of directories to other machines, either on or off a local network. These directories can then be accessed as though they were local. NFS uses a client/server architecture and consists of a client program, a server program, and a protocol used to communicate between the two. The server program makes filesystems available for access by other machines via a process called exporting. File systems that are available for access across the network are often referred to as shared file systems. In environments spanning multiple private networks, NFS plays a significant role in making critical file systems accessible to users across networks. Since users expect to access these shared file systems just as swiftly and effortlessly as they would local ones, even the slightest access delay can put them off. To ensure that the user experience with NFS remains pleasant, the client-server interaction of NFS should be continuously monitored. eG Enterprise provides distinct monitoring models for monitoring the NFS server and client on Solaris and Linux, which measure the effectiveness of the server program as well as the experience of the client. For a detailed discussion of these monitoring models, refer to the corresponding documentation pages.
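As a rough, hedged illustration of the client-side experience such monitoring tracks (the mount point and threshold below are assumptions, and eG Enterprise collects far richer metrics than this), a simple probe can time basic file operations on an NFS mount:

import os, time

MOUNT_POINT = "/mnt/shared"          # assumed NFS mount point
PROBE_FILE = os.path.join(MOUNT_POINT, ".nfs_probe")
WARN_SECONDS = 0.5                   # arbitrary threshold for this sketch

def timed(operation) -> float:
    start = time.monotonic()
    operation()
    return time.monotonic() - start

def write_probe() -> None:
    with open(PROBE_FILE, "w") as f:
        f.write("ping")

def read_probe() -> None:
    with open(PROBE_FILE) as f:
        f.read()

if __name__ == "__main__":
    results = [("write", timed(write_probe)), ("read", timed(read_probe))]
    os.remove(PROBE_FILE)
    for label, elapsed in results:
        status = "SLOW" if elapsed > WARN_SECONDS else "ok"
        print(f"{label}: {elapsed * 1000:.1f} ms ({status})")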
<urn:uuid:cb5e3abc-9dda-403e-9a17-46cf89031ba5>
CC-MAIN-2022-40
https://www.eginnovations.com/documentation/Network-File-System/Introduction-to-Network-File-System-Monitoring.htm
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338280.51/warc/CC-MAIN-20221007210452-20221008000452-00160.warc.gz
en
0.948718
280
3.328125
3
Sci-fi enthusiasts have over the years pondered whether a robot invasion would be possible. But even if that can happen, there are far more significant cyberattack threats to your life that you need to worry about right now. We’re not talking about natural disasters or illness, but other humans and the malicious use of technology. As the skills of hackers grow, so does the danger they pose to you and me. And no robots are needed. This has been made painfully clear in a recent trend of cyberattacks. These attacks aren’t out to steal your details or bank accounts. The main goal? To cripple life-giving and life-saving infrastructure. Read on for more information on this growing threat. Here’s the backstory. Ransomware aims to lock down your files and information through specialized code. Once your files are encrypted, a hacker will demand money (often in cryptocurrency) to unlock them and give them back. The new trend is killware, and it does exactly what it sounds like. It recently came to light that a cyberattack targeting the Oldsmar, Florida, water system had been thwarted. But unlike the Colonial Pipeline attack, which used ransomware to get money, the attack on the water system was intended purely to harm humans. Thankfully the hack was discovered before it could do damage, but it is worrying nonetheless. Homeland Security Secretary Alejandro Mayorkas told USA Today that if this water contamination attack succeeded, it would “have gripped our entire country.” Homeland Security explained that the Oldsmar cyberattack was by no means an isolated event. Any facilities that deal with people are targets, including hospitals, water supplies, banks, police departments and transportation systems. The goal? To cause as much physical harm as possible. Autonomous vehicles can also be hacked by killware, which would have a devastating result if bad actors remotely steered cars into highly populated areas. Connected thermostats are also vulnerable. How to help stop cyberattacks. Technology analysts at Gartner are aware of the problems these types of attacks can cause. In a recent report, the company said that by 2025, “cyberattackers will have weaponized operational technology environments to successfully harm or kill humans.” Here are some ways you can personally help thwart cyberattacks: - Where possible, enable two-factor authentication for your devices and online accounts. This adds another step in the security and login process and ensures bad actors can’t access your devices. - Never use default credentials on your home router. When you set it up, change the login details and ensure the password can’t be easily guessed. Tap or click here for help finding and changing your router’s password. - Make sure that the firmware, operating system and software on your devices are up to date. Updates include security patches that help protect them from hackers.
<urn:uuid:2a4bb812-6810-44a9-b4c4-df4c1c4c5fb9>
CC-MAIN-2022-40
https://www.komando.com/security-privacy/cybersecurity-threat-killware/811931/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00361.warc.gz
en
0.939734
591
2.71875
3
WHAT IS CIS RAM? CIS RAM is an information security risk assessment method that helps organizations design and evaluate their implementation of the CIS Controls. CIS RAM provides instructions, examples, templates, and exercises for conducting risk assessments. And because CIS RAM is based on the DoCRA Standard, its risk assessments meet the requirements of established information security risk assessment standards and demonstrate whether safeguards are “reasonable” and “appropriate” as regulators and judges often require. WHAT IS THE DoCRA STANDARD? DoCRA (or “Duty of Care Risk Analysis”) is a method for analyzing risk as regulators and judges expect it to be done. Regulations and judicial “balancing tests” expect that organizations consider the likelihood and degree of harm they may cause themselves and others, and to use safeguards that reduce those risks – as long as those safeguards are not overly burdensome. DoCRA can be used to analyze cybersecurity risks using any variety of control standards or regulatory requirements. HALOCK uses DoCRA methods to analyze risks with ISO 27001/27002, NIST Special Publications 800-53, the HIPAA Security Rule, GDPR, 23 NYCRR Part 500, 201 CMR 17.00, the NIST Cybersecurity Framework, and even maturity model-based controls models, such as FFIEC CAT. RISK ASSESSMENTS CAN BE TOO COMPLEX OR TOO SIMPLE. WHAT ABOUT CIS RAM AND DoCRA? CIS RAM provides three different risk analysis approaches to support organizations of three levels of capability. Organizations that are new to risk analysis can use instructions for modeling foreseeable threats against the CIS Controls as the organization generally applies them. Experienced organizations can follow instructions for modeling threats against information assets to determine how the CIS Controls should be configured to protect them. Expert organizations are provided instructions for analyzing risks based on “attack paths” (similar to “kill chains”) using CIS’ Community Attack Model. WHY ANOTHER RISK ASSESSMENT METHOD? While there are multiple, established risk assessment standards, CIS RAM is the first to provide very specific instructions for analyzing information security risk in a way that regulators define as “reasonable,” and judges evaluate as “due care.” CIS RAM emphasizes balance between the harm that security incidents may cause others and the burden of safeguards; the foundation of “reasonableness.” Do you know reasonable? IS CIS RAM A REPLACEMENT FOR THE OTHER RISK ASSESSMENT STANDARDS? CIS RAM conforms to established information security risk assessment standards, such as ISO 27005, NIST SP 800-30, OCTAVE, and RISK IT. These standards all use similar forms of risk modeling. But CIS RAM supplements these standards by providing very detailed instructions and templates for quickly designing and conducting an information security risk assessments. As a result, CIS RAM risk assessments support established standards, and produce analysis that regulators and legal authorities expect to see. DOES THE RISK ASSESSMENT TAKE LONG TO COMPLETE? New users are able to design their risk assessment within their first day of following the CIS RAM instructions, including analysis of several risks. The amount of time the organization takes after that largely depends on the scope of their assessment, and the level of instructions they are following. WHY IS CIS RAM SO LARGE? CIS RAM includes three sets of detailed instructions for organizations of varying risk assessment capabilities. 
Each organization will select a section of the CIS RAM that applies most to them, so typical users will only read a portion of the document. And because CIS RAM provides many detailed illustrations to guide its readers step-by-step, a risk assessment can typically be designed within a day, and risk analysis can start right away. Organizations that wish to understand the basics and full lifecycle of a CIS RAM risk assessment may first read CIS RAM Express Edition. The Express Edition may provide some experienced organizations all they need to start their “duty of care”-based risk assessment. ISN’T A GAP ASSESSMENT GOOD ENOUGH? Because the CIS Controls are already prioritized by their criticality in preventing cyber attacks, a CIS Controls gap assessment already has risk built in. However, each organization faces its own risks, and has its own level of resources to invest against security incidents. CIS RAM helps organizations determine whether their use of CIS Controls is sufficient against the likelihood of impacts in their environment, and whether proposed safeguards are more burdensome than the risk they are designed to prevent. This helps translate security concerns into business terms, and helps regulators and legal authorities determine whether safeguards are reasonable and demonstrate due care. AREN’T RISK ASSESSMENTS JUST SUBJECTIVE EXERCISES? Risk assessments have often been conducted as guess-work, using “high,” “medium,” and “low” rankings of identified gaps. CIS RAM helps organizations associate risk scores with the potential of harm that may come to themselves and to others. Additionally, CIS RAM provides guidance on estimating foreseeability so both impacts and likelihoods can be communicated in simple language to technical and non-technical people. CAN CIS RAM or DoCRA BE USED TO EVALUATE non-CIS CONTROLS? The risk analysis methods described in CIS RAM and Duty of Care Risk Analysis (DoCRA) conform to established security frameworks, such as ISO 27000, NIST Special Publications, the NIST Cybersecurity Framework, and risk assessment requirements described in PCI DSS. Security controls that come from these and other standards can effectively be risk assessed using the CIS RAM methods. And because CIS RAM and DoCRA align with risk assessment guidance for regulations such as the HIPAA Security Rule, Gramm Leach Bliley Act’s Safeguards Rule, Federal Trade Commission guidance on risk assessments, Massachusetts 201 CMR 17.00, GDPR, and 23 NYCRR Part 500, specifications from these regulations can also be included in a risk assessment. CAN I USE A DIFFERENT RISK ASSESSMENT METHOD TO ASSESS CIS CONTROLS? Yes. CIS does not require CIS RAM as the sole method for assessing information security risk. CIS does recommend reviewing the Principles and Practices listed in CIS RAM and CIS RAM Express Edition to be sure that information security risk assessments are meaningful to non-technical management, to regulators, and to legal authorities. I USE A MATURITY MODEL TO ASSESS MY RISK. IS THAT COMPATIBLE WITH CIS RAM and DoCRA? Many organizations have supplemented maturity model analysis (such as FFIEC CAT and the CSF from HITRUST) using DoCRA’s methods.
Organizations may use each control maturity score as an indicator of how likely a control failure may be – making maturity a factor in the risk calculation – or they may use CIS RAM or DoCRA-based analysis to let their organization know how to prioritize their investment in cybersecurity maturity, and whether to accept the risk of staying at a certain maturity level. ARE CIS RAM AND DoCRA COMPATIBLE WITH PROBABILITY MODELS? For organizations that conduct risk assessment using probability analysis (i.e. use of Bayesian statistics, Monte Carlo simulations, or similar analysis) risk analysis that is based on ordinal values may appear to be a mismatch. However, CIS RAM provides simple examples for bridging the two approaches so organizations can receive the benefits of evidence-based risk analysis with the duty-of-care approach to demonstrate reasonableness. WHY DO I NEED TO DOWNLOAD CIS RAM FROM CISECURITY.ORG? CIS has set up a sign in process as part of the CIS RAM download in which they ask for some basic information about the downloader, and to offer the opportunity to sign up to be informed of developments on the CIS Controls and CIS RAM. CIS uses the information to better understand how CIS RAM is being used and who is using it; this information is extremely helpful to CIS as they update CIS RAM and develop associated documents like the CIS RAM Workbook.
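To make the "balance between harm and burden" idea discussed above concrete, here is a hedged Python sketch. The scales, scores and threshold are invented for illustration and are not CIS RAM's or DoCRA's actual worksheets.

# Toy ordinal scales (1 = negligible ... 5 = severe), invented for this example.
def risk_score(likelihood: int, impact: int) -> int:
    return likelihood * impact

def is_reasonable(current_risk: int, residual_risk: int, safeguard_burden: int,
                  acceptance_threshold: int = 10) -> bool:
    # In this toy model a safeguard is "reasonable" if it brings risk down to an
    # acceptable level and its burden does not exceed the risk it removes.
    return (residual_risk <= acceptance_threshold
            and safeguard_burden <= current_risk - residual_risk)

current = risk_score(likelihood=4, impact=5)    # e.g. an unpatched internet-facing server
residual = risk_score(likelihood=2, impact=5)   # after a patching safeguard
print("current risk:", current, "residual risk:", residual)
print("safeguard reasonable?", is_reasonable(current, residual, safeguard_burden=6))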
<urn:uuid:7f0eafb9-9dc7-43a1-9fa0-8ab653dd687c>
CC-MAIN-2022-40
https://www.halock.com/security-management-cis-ram/cisramfaq/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00361.warc.gz
en
0.929487
1,671
2.515625
3
Programming quantum computers is becoming easier: computer scientists at ETH Zurich have designed the first programming language that can be used to program quantum computers as simply, reliably and safely as classical computers. “Programming quantum computers is still a challenge for researchers,” says Martin Vechev, computer science professor in ETH’s Secure, Reliable and Intelligent Systems Lab (SRI), “which is why I’m so excited that we can now continue ETH Zurich’s tradition in the development of quantum computers and programming languages.” He adds: “Our quantum programming language Silq allows programmers to utilize the potential of quantum computers better than with existing languages, because the code is more compact, faster, more intuitive and easier to understand for programmers.” Quantum computing has been seeing increased attention over the last decade, since these computers, which function according to the principles of quantum physics, have enormous potential. Today, most researchers believe that these computers will one day be able to solve certain problems faster than classical computers, since to perform their calculations they use entangled quantum states in which various bits of information overlap at a certain point in time. This means that in the future, quantum computers will be able to efficiently solve problems which classical computers cannot solve within a reasonable timeframe. This quantum supremacy has still to be proven conclusively. However, some significant technical advances have been achieved recently. In late summer 2019, a quantum computer succeeded in solving a problem – albeit a very specific one – more quickly than the fastest classical computer. For certain “quantum algorithms”, i.e. computational strategies, it is also known that they are faster than classical algorithms, which do not exploit the potential of quantum computers. To date, however, these algorithms still cannot be calculated on existing quantum hardware because quantum computers are currently still too error-prone. Expressing the programmer’s intent Utilizing the potential of quantum computation not only requires the latest technology, but also a quantum programming language to describe quantum algorithms. In principle, an algorithm is a “recipe” for solving a problem; a programming language describes the algorithm so that a computer can perform the necessary calculations. Today, quantum programming languages are tied closely to specific hardware; in other words, they describe precisely the behavior of the underlying circuits. For programmers, these “hardware description languages” are cumbersome and error-prone, since the individual programming instructions must be extremely detailed and thus explicitly describe the minutiae needed to implement quantum algorithms. This is where Vechev and his group come in with their development of Silq. “Silq is the first quantum programming language that is not designed primarily around the construction and functionality of the hardware, but on the mindset of the programmers when they want to solve a problem – without requiring them to understand every detail of the computer architecture and implementation,” says Benjamin Bichsel, a doctoral student in Vechev’s group who is supervising the development of Silq. Computer scientists refer to computer languages that abstract from the technical details of the specific type of computer as high-level programming languages. Silq is the very first high-level programming language for quantum computers. 
High-level programming languages are more expressive, meaning that they can describe even complex tasks and algorithms with less code. This makes them more comprehensible and easier to use for programmers. They can also be used with different computer architectures. Eliminating errors through automatic uncomputation The greatest innovation and simplification that Silq brings to quantum programming languages concerns a source of errors that has plagued quantum programming until now. A computer calculates a task in several intermediate steps, which creates intermediate results or temporary values. In order to relieve the memory, classical computers automatically erase these values. Computer scientists refer to this as “garbage collection”, since the superfluous temporary values are disposed of. In the case of quantum computers, this disposal is trickier due to quantum entanglement: the previously calculated values can interact with the current ones, interfering with the correct calculation. Accordingly, cleaning up such temporary values on quantum computers requires a more advanced technique of so-called uncomputation. “Silq is the first programming language that automatically identifies and erases values that are no longer needed,” explains Bichsel. The computer scientists achieved this by applying their knowledge of classical programming languages: their automatic uncomputation method uses only programming commands that are free of any special quantum operations – they are “qfree”, as Vechev and Bichsel say. “Silq is a major breakthrough in terms of optimising the programming of quantum computers; it is not the final phase of development,” says Vechev. There are still many open questions, but because Silq is easier to understand, Vechev and Bichsel hope to stimulate both the further development of quantum programming languages and the theory and development of new quantum algorithms. “Our team of four has made the breakthrough after two years of work thanks to the combination of different expertise in language design, quantum physics and implementation. If other research and development teams embrace our innovations, it will be a great success,” says Bichsel.
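As a hedged illustration of why temporary values matter (this is a tiny NumPy toy model, not Silq code), the sketch below computes a temporary ancilla a = x AND y with a Toffoli gate while x and y are in superposition. The ancilla becomes entangled with the inputs, and only after "uncomputing" it, by running the same reversible step again, does it return to |0> so it can be discarded without disturbing later interference.

import numpy as np

# Three qubits: x, y and a temporary ancilla a. Basis index = 4*x + 2*y + a.
state = np.zeros(8)
state[0] = 1.0                                            # start in |x y a> = |000>

def hadamard_on_x_and_y(psi):
    # Put x and y into equal superposition (H on x and on y), leaving a alone.
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    return np.kron(np.kron(H, H), np.eye(2)) @ psi

def toffoli(psi):
    # CCNOT: flip the ancilla a when x = y = 1 (swaps basis states 110 and 111).
    out = psi.copy()
    out[[6, 7]] = psi[[7, 6]]
    return out

state = hadamard_on_x_and_y(state)
state = toffoli(state)                                      # "compute": a = x AND y
print("amplitude with a=1 after compute   :", state[7])     # 0.5, so a is entangled with x and y
state = toffoli(state)                                      # "uncompute": same reversible step again
print("amplitudes with a=1 after uncompute:", state[1::2])  # all zero, a is back in |0>

Silq's contribution, as described above, is that this uncomputation step is inserted automatically for qfree temporaries rather than being the programmer's responsibility.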
<urn:uuid:bb492cfc-42f6-427e-ba50-5faa199f7b8e>
CC-MAIN-2022-40
https://www.helpnetsecurity.com/2020/06/17/programming-quantum-computers/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00361.warc.gz
en
0.929424
1,083
3.296875
3
“Big data and AI” is very hot right now. In a sense, saying that now or in the future “the one who obtains the data gets wealth” or even “the one who gets the data gets the world” is not an exaggeration. That is because “big data” is not simply about having a lot of data, but about ever “bigger” calculation and analysis capabilities that find regularities in complex data and apply them rationally. Nor is “big data” a mystery: massive amounts of data that could not be processed before, or things that were not even considered data, can now be analyzed because of advances in computing power. When many people go to the supermarket, there is a data relationship between their route and their shopping, and adjusting the layout accordingly will promote sales. Some supermarkets in the United States put DVDs and diapers together for sale because they found through “big data” analysis that most young parents who come to buy diapers for their children like to bring DVDs to “console” themselves. It is an even bigger misunderstanding to think that there is no problem that cannot be solved with “big data”. People’s ideology and behavior patterns, as well as the existence and development of different countries, are complicated, tortuous, and unique, and computers cannot describe them clearly. The expectation of using “big data” to explain and guide everything in the world is similar to earlier attempts to explain and regulate human behavior patterns with biological codes such as genes: it seems objectively neutral, but is essentially partial. No matter how big “big data” is, designers, analysts, and users have the final say. “Big data” cannot completely get rid of people’s misunderstandings, barriers and prejudices. No matter how “big” big data is, due to human factors it is not neutral, comprehensive and fair enough. The potential negative effects of “big data” should not be ignored. For example, “big data” has recently been used to predict the personal information of Facebook users (including sexual orientation, race, religious and political views, personality characteristics, etc.), and this highly sensitive information could be used by employers, landlords, government agencies, educational institutions, private organizations and others to discriminate against individuals. Looking back at the introduction of “new wave” concepts, theories and technologies in recent years, there are indeed many positive effects, but there are also some noteworthy lessons. For example, the passionate praise and promotion often lack dissenting opinions and well-meant reminders. As far as “big data” is concerned, there are many doubts internationally. Viktor Mayer-Schönberger, the author of “Big Data”, also wrote a book called “Delete”, emphasizing information choices in the era of big data. He argues that forgetting is a kind of virtue: some things should be remembered, and some should be forgotten. It can be seen that elevating “big data” to an inappropriate height, or even deifying it, will be harmful to its good use. Finally, do not interpret “Big Data & AI” as a “big myth”: it is the future.
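To make the supermarket example mentioned earlier concrete, here is a hedged Python sketch with made-up shopping baskets, showing the kind of co-occurrence statistic (lift) that such analysis relies on:

# Made-up shopping baskets; the data is illustrative only.
baskets = [
    {"diapers", "dvd", "milk"},
    {"diapers", "dvd"},
    {"diapers", "bread"},
    {"dvd", "snacks"},
    {"milk", "bread"},
    {"diapers", "dvd", "snacks"},
]

def support(item_set: set) -> float:
    # Fraction of baskets that contain every item in item_set.
    return sum(item_set <= basket for basket in baskets) / len(baskets)

# Lift > 1 means the items are bought together more often than chance would suggest.
lift = support({"diapers", "dvd"}) / (support({"diapers"}) * support({"dvd"}))
print(f"lift(diapers, dvd) = {lift:.2f}")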
<urn:uuid:598d4f8f-a15c-40ea-95f8-dd0d965f68b9>
CC-MAIN-2022-40
https://hybridcloudtech.com/dont-interpret-big-data-ai-as-a-big-myth/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00361.warc.gz
en
0.947657
702
3.15625
3
Welcome to the tutorial about SAP BW ETL. We aim to provide you with basic and in-depth information regarding the three stages of the ETL process. After completing this tutorial, you will be able to define what ETL is, explain the different stages of the ETL process, and weigh the advantages and disadvantages of developing your own ETL tool versus using ETL tools available on the market, such as SAP BW ETL.

ETL stands for Extraction, Transformation, and Loading, and it is considered an essential component of a data warehousing system. To avoid interference with the source systems, a temporary working area is needed to host the extracted data. It is commonly called the data staging area; some writers also refer to it as the construction site of the data warehouse. The warehouse defines what data is needed and how it should be stored. The sources of the data may be located on various servers across the company's network, so all data extracted from multiple sources must first be held in one place. After that, data transformation can be performed efficiently. The data staging area requires reconciling the data structures of both the source systems and the warehouse; the flat files and/or databases created for this purpose meet these needs.

To automate the population of a data warehouse, an ETL procedure must be developed. Developing an ETL tool can consume about half of the time of a warehouse project. An ETL tool must map the source and the destination for each piece of data: it must be configured with the correct paths to the data sources and their corresponding destinations, which enables it to pull the data from the given sources and send it to the right destinations in the warehouse. The ETL tool must also clearly define what data is to be pulled from each source and what transformation is to be performed on it.

SAP BW ETL provides a collection of objects and tools that allow users to import, export, and transform heterogeneous data between one or more data formats, such as MS Excel, text files, SAP ECC, etc.

The first stage of a SAP BW ETL process is data extraction from the various source systems. In almost all cases, this is the most difficult aspect of ETL, and correct extraction sets the stage for how the subsequent processes proceed. A big part of creating a data warehouse is pulling data from various data sources and placing it in a central storage area, which makes this a very challenging step. Data extraction is essentially the process of selecting, transporting, and consolidating the source data into the SAP BW environment. The extraction phase converts the data into a common format suitable for transformation processing.

SAP BW offers standard extractors; however, you can still design your own extractor based on your requirements. Most extractors that extract SAP application transaction data are delta-enabled: at the time of posting, the transactions are written to the delta queue and are then extracted to SAP BW. You can also extract data directly from tables and views using the DB Connect and UD Connect interfaces, while the flat file interface allows extraction into SAP BW from flat files. There are many other extraction interfaces in SAP BW, including staging BAPIs, web services, etc. Acquiring data requires info packages, in which you can set various parameters that control the data load. SAP BW's staging layer, the Persistent Staging Area (PSA), stores the extracted data.
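The extract-then-stage flow described above can be sketched in a few lines of plain Python. This is a generic illustration of the pattern, not SAP BW code: the file name, field names, and the SQLite staging database are all assumptions invented for the example.

import csv
import sqlite3

# Extract: read raw records from a flat-file source (hypothetical sales.csv with
# exactly the columns order_id, amount, country).
with open("sales.csv", newline="") as f:
    raw_rows = list(csv.DictReader(f))

# Stage: land the untouched records in a staging table first, so the source
# system does not need to be queried again during transformation.
con = sqlite3.connect("staging.db")
con.execute("CREATE TABLE IF NOT EXISTS stg_sales (order_id TEXT, amount TEXT, country TEXT)")
con.executemany("INSERT INTO stg_sales VALUES (:order_id, :amount, :country)", raw_rows)

# Transform: apply simple cleansing and conversion rules to the staged data.
cleaned = [
    (row[0], round(float(row[1]), 2), row[2].strip().upper())
    for row in con.execute("SELECT order_id, amount, country FROM stg_sales")
]

# Load: write the transformed records into the reporting target table.
con.execute("CREATE TABLE IF NOT EXISTS fact_sales (order_id TEXT, amount REAL, country TEXT)")
con.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)", cleaned)
con.commit()
con.close()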
This stage transforms and relates the data extracted from the numerous sources; it is another important task after data extraction. In the transformation step, a series of functions or rules is applied to the data extracted from the source, deriving the data that will be loaded into the final target. Some data sources require very little or even no manipulation of the data. For example, if an organization keeps much of its data in flat files and operational systems while the data warehouse is being built, data from all of these sources must be related so that data extracted from any of them can be handled consistently.

The last stage of the SAP BW ETL process is data loading. For the data to generate reports, you need to fill the data targets with the data from the staging database. This step is less simple than it looks: several lookups may be necessary before some values for the data target can be calculated. Keep in mind that such data transformations can be performed at one of two stages: while extracting the data, or while loading it into the dimensional model. In general, wait for the extraction to complete before transforming, to make sure all the data has actually been extracted first; however, if you already have the necessary information about dimensions, you can go ahead and transform the data while extracting it. SAP BW's Data Transfer Process (DTP) pushes the data to the data targets.

Organizations can build their own data transformation tool. This is the ideal approach for a small number of data sources that reside in the same type of storage: because of the similar system architecture and common data structures, the work involved in developing the needed transformations is reduced. This method also saves license costs and the cost of training employees on a new tool. However, if the transformations become more refined over time, or other systems need to be integrated, the complexity of such an ETL system rises and its manageability falls considerably. Building your own tool from scratch is therefore often a waste of time.

Many ETL tools are available on the market, and corporations are increasingly purchasing them to help create their ETL processes. The significant advantage of using available ETL tools is that they are optimized for the ETL process. They provide connectors to common data sources such as XML, mainframe systems, databases, flat files, etc. These tools also implement data transformations across multiple data sources with ease and consistency, and readily available features include joining, aggregation, and sorting. SAP BW's reliable data acquisition and information processing capabilities make it one of the best among many other renowned commercial tools.

Did you like this tutorial? Have any questions or comments? We would love to hear your feedback in the comments section below. It would be a big help for us, and hopefully something we can address to improve our free SAP BW tutorials.
<urn:uuid:bdc45d7b-8e3d-449e-a824-4eabbd06dde9>
CC-MAIN-2022-40
https://erproof.com/bi/sap-bw-training/sap-bw-etl-extraction-transformation-and-loading/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00361.warc.gz
en
0.926887
1,312
2.625
3
SACRAMENTO, Calif. (AP) -- California on Thursday adopted new flammability standards for furniture and other products that would allow manufacturers to stop using chemical flame retardants.

Gov. Jerry Brown said the new standards were a badly needed update to nearly 40-year-old rules that led to the widespread use of chemicals known as PBDEs to treat the foam found inside furniture. Current rules require furniture filling to withstand exposure to an open flame, like a candle, for 12 seconds. This is no longer a requirement under the new rules. Instead, manufacturers will reduce fire danger by focusing flammability protection on ignition sources that are more common fire starters, like cigarettes, radiant heaters, extension cords and fireplace embers.

Brown said the new standards will keep furniture in homes fire-safe while limiting chemical exposure. "Today, California is curbing toxic chemicals found in everything from high chairs to sofas," Brown said in a statement.

The U.S. Centers for Disease Control and Prevention says animal studies show PBDEs — polybrominated diphenyl ethers — can affect brain development, but human health effects from low exposure levels are still unknown.

California is the only state with a mandatory residential furniture flammability standard, a rule that has become the de facto standard for the rest of the nation. Environmental advocates urged regulators to make changes, saying many of the products containing the chemicals are marketed to children, who are a higher-risk population. "Today's announcement is a culmination of our long drive to urge the state to update a so-called 'safety' standard that was actually harmful to our health," said Michael Green of the Center for Environmental Health.

The rules require manufacturers to be in compliance by Jan. 1, 2015.
<urn:uuid:32452f4c-c00d-483c-9770-babf86459f78>
CC-MAIN-2022-40
https://www.mbtmag.com/quality-control/news/13202437/new-calif-chemical-flame-retardant-rules-adopted
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335396.92/warc/CC-MAIN-20220929225326-20220930015326-00361.warc.gz
en
0.947692
369
2.515625
3
Stolen domain credentials are a much bigger problem than most realize. A recent Dark Web audit by the Digital Shadows Photon Research Team showed 15 billion stolen usernames and passwords circulating on the Dark Web — more than two sets of pilfered credentials for every living person. Furthermore, credential theft rose a staggering 300% from 2018 through 2020.

How Attackers Use Domain Credentials

Attackers reuse stolen domain credentials found on compromised machines. Domain credentials are used by the operating system and authenticated by the Local Security Authority (LSA). Typically, they are established for users when their logon data is authenticated by a registered security package like the Kerberos protocol. Microsoft Windows manages the usernames and passwords of domain users through the Local Security Authority Subsystem Service (LSASS).

When attackers gain a foothold on a compromised machine, they first dump the LSASS memory, then use tools like Mimikatz to run the command:

sekurlsa::logonpasswords

This gives them access to passwords they can reuse on other computers. Output from logonpasswords resembles:

Authentication Id : 0 ; 515764 (00000000:0007deb4)
Session           : Interactive from 2
User Name         : Gentil Kiwi
msv :
  Primary
  Username : Gentil Kiwi
  Domain   : vm-w7-ult-x
  Username : Gentil Kiwi

Each grouping of output reflects a Windows logon session. Within each session, the groupings can be msv/tspkg/ssm/etc., representing the Mimikatz modules used to extract passwords. Under each Mimikatz module, we find the password in various forms:
- Cleartext: The actual password
- NTLM (MD4): The most common form seen by attackers
- LM: Old hash, not commonly used
- Kerberos Tickets: Like Ticket Granting Ticket (TGT) or TGS

With a password hash, an attacker has three options: crack the password hash, use the Pass-the-Hash technique, or use the Pass-the-Ticket technique. Regardless of password type, the attacker can connect to and run code on a remote computer in several ways, including remote desktop, Windows Management Instrumentation (WMI), the service control manager, the Windows task scheduler, remote registry, and Windows Remote Management.

Domain credentials can be used both in the same domain and in different domains. Cross-domain use is visible to the users being authenticated. To enable cross-domain user authentication, use the Active Directory trust capabilities. When one domain trusts another, users can authenticate across them but still need permissions in the second domain to perform high-privileged operations like managing remote computers. IT teams frequently add the Domain Admins group from one domain to the Domain Admins group of another. This gives every admin user access to every trusting domain. However, this practice is best avoided, as it often leads to cross-domain attacks.

To circumvent or prevent attacks and breaches, consider these best practices and mitigation steps.

Reduce Non-Essential Interactive Logons

The interactive logon method is often used to manage remote computers, but there are better ways to do so without leaving your credentials behind, like Microsoft Restricted Admin mode and other network logon methods.

Monitor Logon Events

When an interactive logon occurs, event 4624 is generated, representing "an account was successfully logged on." You can monitor such events by filtering for event ID 4624 in the security event log.
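If the security log has been exported to a text file (for example with the WEVTUtil query shown below), a short script can tally which accounts are generating interactive logons. This is an illustrative sketch only: the file name is invented, and it assumes the default text export in which each event starts with an "Event[" header and the 4624 message contains "Logon Type:" and "Account Name:" fields; adjust the patterns to match your own export.

import re
from collections import Counter

# Hypothetical export, e.g.:
#   WEVTUtil query-events Security /rd:true /format:text > security_log.txt
events = open("security_log.txt", encoding="utf-8", errors="ignore").read().split("Event[")

interactive_logons = Counter()
for event in events:
    if "4624" not in event:
        continue
    logon_type = re.search(r"Logon Type:\s*(\d+)", event)
    account = re.search(r"Account Name:\s*(\S+)", event)
    if logon_type and logon_type.group(1) == "2" and account:   # Logon Type 2 = interactive
        interactive_logons[account.group(1)] += 1

for user, count in interactive_logons.most_common():
    print(f"{user}: {count} interactive logon(s)")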
From the filtered list, you can determine which users are connected via interactive logon or have previously connected that way. Entries tagged as Logon Type 2 indicate an interactive logon. To run an equivalent query on the event log from the command line, enter:

WEVTUtil query-events Security /count:1000 /rd:true /format:text /q:"*[System [(EventID=4624)]] and *[EventData[Data[@Name='TargetUserName']='<SOME_USER>']]"

Logon events from the Security event log indicate the origin of an interactive logon, and often include the IP address of the computer that performed the authentication and the ID of the process involved.

Use Credential Guard

The Credential Guard feature in Windows 10 uses virtualization-based security to protect credentials from dumping tools like Mimikatz. Although attackers can bypass it using access tokens, Credential Guard is still the best passive protection currently available.

Activate Protected Process Light (PPL)

PPL provides protection around the LSASS process, limiting the ability of attackers to dump memory and steal passwords. While PPL can be bypassed with tools like Mimikatz's signed driver, it is still recommended for protecting in-memory passwords from low-skill bad actors.

Monitor Protected Users

The Microsoft Protected Users group is an Active Directory security group that restricts its members to Kerberos tickets for authentication. Since Kerberos tickets expire quickly and are not renewable, they protect against credential theft and abuse.

Install Two-Factor Authentication

Enabling multi-factor authentication increases the security of your environment. Without it, anyone who gets control of your credentials can take over your user account.

Adopt the Attacker Perspective

The best way to defeat an adversary is to view your defenses through their eyes. Focusing on attack-centric exposure prioritization can help you accomplish this. By simulating attacks across on-premises and cloud networks, these tools can help you find and remediate critical attack paths, demonstrably reducing the risk associated with credential theft and other common tactics. By looking through the attacker's eyes, you can see where you are vulnerable — and which vulnerabilities present grave risk to business-critical assets.

Domain credential theft numbers are skyrocketing, so cybersecurity teams must take concrete steps to improve their defenses. Following these best practices and deploying the right tools is the best way to ensure that happens.
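As a quick companion to the PPL recommendation above, administrators sometimes script a check of whether LSA protection is turned on. The sketch below reads the documented RunAsPPL registry value with Python's winreg module on the Windows host being checked; it is an illustrative snippet, not tooling from this article.

import winreg

# LSA protection (running LSASS as a protected process) is controlled by the
# RunAsPPL value under the Lsa key; a non-zero value means it is enabled.
KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Lsa"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
    try:
        value, _ = winreg.QueryValueEx(key, "RunAsPPL")
    except FileNotFoundError:
        value = 0   # value absent: LSA protection has not been configured

if value:
    print(f"LSA protection is enabled (RunAsPPL = {value}).")
else:
    print("LSA protection is NOT enabled - LSASS memory is easier to dump.")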
<urn:uuid:7913ec2f-cf2c-4505-8540-fe89e0159072>
CC-MAIN-2022-40
https://www.darkreading.com/attacks-breaches/how-to-mitigate-against-domain-credential-theft
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00361.warc.gz
en
0.860385
1,390
2.75
3
New developments in China’s medical industry point to a potential future in which AI could replace human doctors to a considerable degree. The nascent trend can be seen in Ping An Good Doctor, a recently-established subsidiary of the Ping An holding conglomerate. The company offers AI-assisted medical services, centered around a mobile app that prompts users to enter their medical information and report symptoms, and then uses AI to remotely connect them to an appropriate doctor. Ping An Good Doctor is also working on peripheral hardware such as a pulse-monitoring bracelet that can interface with its larger medical AI platform, which the company showcased at the 2018 World AI Expo. As Agence France-Presse reports, this kind of technology could have a dramatic impact on healthcare in China, where patients in ‘second- and third-tier cities’ only have access to basic hospital facilities with limited capabilities. It will, of course, take time for the state of the art in medical AI to reach anywhere near the level of human doctors, but efforts elsewhere have already demonstrated that specialized AI systems can develop highly sophisticated diagnostic capabilities for particular health issues. As more of those specialized programs emerge, and AI ‘receptionists’ like that of Ping An Good Doctor grow more sophisticated, it’s easy to imagine that a great deal of medical consultation will be automated in the years to come. Source: CTV News
<urn:uuid:1c565c0e-f67c-41aa-b943-ac6016aabd65>
CC-MAIN-2022-40
https://mobileidworld.com/china-sees-rise-ai-doctors-909262/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00361.warc.gz
en
0.924338
383
2.578125
3
Data breaches are confirmed incidents that may lead to unauthorized access or disclosure of sensitive, confidential or other protected data. Data breaches typically affect protected health information (PHI), personally identifiable information (PII), intellectual property, financial data like credit card or bank account numbers, personal data like social security numbers or user credentials, or commercially sensitive data like customer lists or manufacturing processes. If any of these types of data, or similarly sensitive data, is exposed to unauthorized parties, this represents a data breach.

Data breaches can damage an organization's reputation, may result in non-compliance with regulations or industry standards, and the organization can face fines or lawsuits in connection with the data it lost. There are many ways of measuring the magnitude of a data breach, including the number of records lost and the financial damages caused. Below are ten data breaches that were indisputably among the biggest in history.

As security blogger Brian Krebs reported in 2013, Adobe disclosed that hackers compromised encrypted customer credit card records and credentials for approximately 3 million user accounts. Later Adobe corrected this estimate and said that 38 million active users were compromised. However, Krebs discovered that in reality 150 million usernames and passwords were breached, along with customer ID numbers and credit card numbers.

In 2018, a biometric database called Aadhaar, containing personal data belonging to more than 1 billion Indians, was made available for sale on the dark web. The data breach happened due to a data leak at a utility company owned by the Indian government. Attackers obtained personal information about almost all Indian citizens, including ID numbers, photographs, bank details, retina scans and fingerprints.

In 2019, an Australian graphic design service named Canva was attacked, leaking salted and hashed passwords, email addresses, usernames and locations of 137 million users of the service, of which 61 million had created a password on Canva rather than using social sign-in. The company claimed hackers were not able to steal credit card data, but they did view some files with partial payment data.

In 2014, an attack on eBay revealed a complete list of accounts, including names, dates of birth, addresses, and encrypted passwords of 145 million users. According to the company's statement, the breach was carried out by attackers who obtained the credentials of three office workers and gained access to the network, and it was only discovered after 229 days.

In 2017, Equifax, a large credit institution in the United States, stated that a vulnerable open source component on its website caused a data breach, putting approximately 148 million consumers at risk. The breach occurred in May 2017, but was discovered only in late July. Attackers stole social security numbers, personal addresses, and some driver's license numbers of 148 million US citizens, and credit card information of 200,000 citizens.

In 2013, a successful attack on Yahoo became the largest data breach in history. 3 billion user accounts were affected, but it took Yahoo as long as three years to discover the attack. It publicized the breach in late 2016, and asked all affected users to reset their passwords and security challenges. The breach resulted in an immediate drop of 3% in Yahoo's stock price, representing a loss of $350 million for investors.
In 2018, Facebook announced that it had stored millions of Instagram passwords in plain text format, exposed to the Internet. In 2019, another data breach discovered by TechCrunch exposed 400 million phone numbers linked to Facebook accounts. In addition to the phone numbers, attackers obtained the name, location and gender of the users.

In 2018, Marriott International announced that hackers stole the data of approximately 500 million customers of its Starwood hotel brand. The breach happened in 2014, when Marriott acquired Starwood, but was only discovered in 2018. Attackers stole data including names, contact details, travel information and passport numbers. The company said that credit card numbers or other financial information was exposed for 100 million customers, but that attackers may have been unable to make use of the data, which was strongly encrypted.

In 2018, Twitter employees mistakenly stored a list of passwords in an internal log, making all 330 million passwords of Twitter users available on Twitter's local network. The company claimed that no breach occurred and the issue was fixed; however, the file was exposed internally for several months. Twitter requested all 330 million users to change their passwords as a precautionary measure.

In 2013, a Russian hacker gained access to data for 360 million MySpace accounts. The breach was only discovered in 2016. The leaked information included names, usernames, passwords and dates of birth. Between 2013 and 2016, attackers in control of this data were able to access hundreds of millions of MySpace accounts and view private data. After the breach was discovered, MySpace revoked passwords for any accounts created before 2013.

Here are a few important statistics about the global state of data breaches. All the statistics refer to organizations in the USA in the year 2019.

|Metric|Value|
|Average Number of Records Breached|25,575|
|Average Time to Detect a Breach in Days|279|
|Average Time to Eradicate a Breach in Days|73|
|Average Number of Records Lost Per Day|780,000|
|Total Breaches Per Annum|1,473|
|Share of Breaches with Accidental Causes|49%|

According to a model developed by the Identity Theft Resource Center (ITRC), there are seven main sources of data breaches. The 2020 DBIR Report from Verizon sheds light on how frequent each type of breach is and how commonly different threat actors are involved.

The Cyber Kill Chain (CKC) is a cyber security model developed by Lockheed Martin's Computer Security Incident Response Team (CSIRT). The purpose of this model is to better understand the steps taken by an attacker during a data breach, allowing the security team to stop the attack at each stage.

|Stage|What the attacker does|Example techniques|
|1. Reconnaissance|Attackers gather information about the infrastructure of the target organization.|Port scanning, social media monitoring, shadowing|
|2. Intrusion|Attackers make attempts to penetrate the security perimeter.|VPN attack, spear phishing, supply chain compromise|
|3. Exploitation|Attackers seek vulnerabilities or security gaps they can exploit while inside the network.|PowerShell attack, scripting attack, Dynamic Data Exchange|
|4. Privilege Escalation|Attackers attempt to gain additional privileges to extend their reach to more systems or user accounts.|Access token manipulation, path interception, Sudo attack|
|5. Lateral Movement|Attackers "move laterally" by taking over more accounts and connecting to more systems on their way to the most valuable assets.|SSH hijacking, internal spear phishing, Windows remote management|
|6. Obfuscation|Attackers cover their tracks by tampering with security systems, deleting or modifying logs, changing timestamps, etc.|Binary padding, code signing, file deletion, hidden users|
|7. Denial of Service|Attackers disrupt an organization's critical systems, with the goal of getting the attention of security and operations, and creating a distraction.|Endpoint DoS, network DoS, service stop, system shutdown|
|8. Exfiltration|The attacker finally obtains the organization's most sensitive data and finds a discreet way, such as DNS tunnelling, to copy it outside the organization without being detected.|Data compressed, data encrypted, exfiltration over alternative protocol, scheduled transfer|
Learn more about the stages of the kill chain in our detailed guides.

To better understand how data breaches occur, it is important to be familiar with the common cybersecurity threats facing organizations today. Common cyber threats that result in data breaches include the following.

The working principle of social engineering attacks is to psychologically manipulate users, causing them to disclose confidential information or take action beneficial to the attackers, such as clicking on unsafe links or installing malware. There are several common types of social engineering attacks; learn more in our detailed guides.

An APT is a long-term attack campaign carried out by an individual or group, aimed at gaining unauthorized access to the network of a specific organization. Attackers may remain in the network for a long time; during this period, they use advanced techniques to evade detection and, in the meantime, exfiltrate sensitive data. APTs require a high level of expertise, coordination, organization and effort from attackers. Therefore, APTs are typically launched against valuable targets such as governments, institutions, or large organizations.

A network attack is aimed at gaining unauthorized access to a company's network, to steal data or perform other malicious activities. There are two main types of network attacks, and the term serves as an umbrella for many kinds of cyber attacks.

Ransomware has become a major threat for organizations of all types, from small businesses to large enterprises, institutions and governments. Ransomware is malware that infects a machine, encrypts its data, and displays a notice asking the victim to pay a ransom to unlock their data. In many cases, payment is ineffective and the ransomware destroys the data anyway. Once a ransomware attack has occurred, it is very difficult to recover, so the primary way to protect an organization is prevention. A ransomware prevention program is typically built in four steps; learn more in our detailed guides.

An insider threat is a malicious act directed at an organization, executed by the staff of the organization or other people the organization has willfully granted access to its systems. The threat actor (usually an employee or contractor) is a person who has existing access to the company's network, databases, applications, or other IT systems. Insider threats come in several types.

Cloud native is a new paradigm that simplifies the development, testing, deployment and operations of cloud-based applications. A cloud native application is built from scratch for the cloud, rather than being migrated from a traditional data center to the cloud. "Cloud" in this context can mean a public cloud such as Amazon Web Services, a private cloud, or a hybrid or multi-cloud environment.
Cloud native applications are more difficult to secure than traditional applications, because of their dynamic nature and the large number of entities that comprise them. Cloud native security threats include: A data breach response plan (DBRP) outlines the steps a company should take to discover and address a data breach. It helps everyone in the organization understand their role in the event of a breach, and provides practical steps employees can take to mitigate the threat and minimize the damage caused to the organization. These include security measures, as well as instructions and procedures employees must follow. The main steps in a data breach response plan are: Data protection, also known as data security, is the process of protecting the confidentiality, integrity, and availability of sensitive information owned by your organization. Almost all organizations work with sensitive data, either belonging to the organization itself or to its customers. This raises the need for a data protection strategy that can prevent theft, damage, and loss to that data, and reduce damage in case of a data breach or disaster. After critical business data is breached or accidentally lost, recovering it is urgent, and any delay can impact business continuity. A data protection strategy must take into account the ability to recover the data in a timely manner. In addition, many industries have legal requirements or voluntary compliance standards governing how organizations store personal information, medical information, financial information, or other sensitive data. A data protection strategy must address the specific compliance requirements your organization is subject to. Learn more in The Compliance Aspect of Data Breaches below. Learn more in the detailed guide to data protection There are many management and storage solutions available to protect your data. There are many data security measures that can limit access to data, monitor network activity, and respond to suspected or confirmed breaches. The following are commonly used data protection technologies and practices: A backup copies data from primary storage to secondary storage, to provide protection in the event of a disaster, disaster, or malicious activity. Data is crucial for modern businesses, and data loss can cause major damage. Therefore, backup is an essential process for businesses of all sizes. Learn more in the detailed guide to data backup RPO and RTO are key concepts in backup management, disaster recovery and business continuity. Recovery Point Objective (RPO) is the amount of data a company can lose in the event of a disaster, and is determined by the frequency of backups. If the system is backed up once a day, the RPO is 24 hours. The lower the RPO, the more network, computer, and storage resources are needed for frequent backups. RTO (Recovery Time Objective) is the time needed to restore data or systems from a backup and resume normal operation. If you store or back up large amounts of data in remote locations, copying the data and restoring the system can take a long time. There are technical solutions, such as high performance connectivity to backup locations and fast synchronization, which can shorten RTO. Cloud backup (also known as online backup) lets you send a copy of your data to a cloud server, over a public or secure private network. Cloud backup services are typically offered by third-party providers. Cloud backups are an excellent way to enable offsite backups that can minimize data loss. 
You can access your data from multiple access points and share your data among multiple cloud users. Organizations are typically charged for cloud backup on a pay-per-use basis, according to the number of users, the amount of data stored, the duration of storage, the amount of data transferred to and from cloud storage, and the frequency at which data can be accessed (hot, warm or cold data tiers). Amazon Web Services (AWS) offers AWS Backup, a managed service you can use to back up both local data, and data stored in the Amazon cloud, to storage services including: AWS Backup is a central management interface that integrates these technologies, making it easy to organize and schedule backups in one place. Amazon also provides the AWS Storage Gateway, which lets you integrate local storage and backup solutions with Amazon services. Learn more in the detailed guide to AWS backup Microsoft Azure Backup is a cloud-based backup solution that is part of the Azure Recovery Services Vault. Azure Backup can be used to backup local data or cloud-based systems. Azure Backup provides consistent backup with security controls and management through the Azure portal. Azure Backup can take point-in-time backups of the following data sources, including files, folders, system state, and SQL databases: Learn more in the detailed guide to Azure backup Google Cloud does not provide an integrated backup solution like AWS and Azure. It supports backup as part of the Google Cloud Storage service. Google Cloud Storage has three storage classes you can use to back up local or cloud-based systems: Each tier offers progressively lower pricing per GB. Typically, regular backups are stored on the nearline tier, and long-term archives on coldline storage. Learn more in the detailed guide to Google Cloud backup Disaster recovery (DR) is the ability to respond to an event that negatively affects business operations and recover from it. The goal is to enable organizations to regain the use of critical IT infrastructure and systems as quickly as possible after a disaster occurs. DR typically requires an in-depth analysis of all systems and creating a disaster recovery plan, a formal document the organization can follow during events. It enables organizations to think about disasters before they occur and design effective recovery mechanisms. Disaster recovery planning raises awareness about potential disruptions, helping organizations prioritize mission-critical functions and facilitate discussions related to these topics so they can make informed decisions about suitable responses in low-pressure settings. Learn more in the detailed guide to disaster recovery Data Loss Prevention (DLP) refers to the strategies and tools used to prevent data loss or loss across an organization. DLP solutions have an endpoint management component, which defines who can access data on an endpoint, what can be accessed, and specifies how data should be secured in transit. They can also protect data at rest and data in transit. A DLP solution lets you adapt data protection to the level of importance and sensitivity of different classes of data. DLP solutions cover four main areas: Advanced threat prevention (ATP) is a collection of analysis tools for defending against advanced threats using unknown and known attack vectors. ATP helps extend common security tools designed to repel only known intrusion strategies. 
Advanced threats attempt to surreptitiously gain unauthorized access to a certain network and then remain undetected within the network for months or years. Staying in the network for a long time enables them to exfiltrate large amounts of data, conduct espionage, and cause significant damage. ATP solutions help protect endpoints against sophisticated and advanced threats by using artificial intelligence (AI) and machine learning (ML) technologies. This focus on threat prevention, rather than detection and response, enables ATP tools to minimize the potential impacts and risk of advanced attacks on endpoints. Learn more in the detailed guide to Advanced Threat Protection Endpoint security solutions combine two layers of security: Here the main features provided by endpoint protection platforms (EPP): User behavior analytics (UBA), which later evolved into User and Entity Behavior Analytics (UEBA), is a security solution that profiles the day-to-day behavior of user accounts or entities like servers, applications or networks. UBA/UEBA uses anomaly detection, based on machine learning techniques, to compare current behavior with the normal behavior of the specific entity and its peers (for example, other users in the same department). When it detects abnormal activity, it alerts security teams to the suspicious behavior. An important part of modern UEBA systems is the use of thresholds to determine when to treat anomalies as a security threat. For example, if a user always starts at 8am, and then one day logs in at 7am, this is rare, but not unusual enough for investigation. UEBA tools measure the degree of anomaly by calculating a risk score. For example, a log in event at 4 or 5 am, combined with other anomalous characteristics (location, equipment used, other activities, etc.) may increase the risk to a level sufficient to raise an alert. Backup is a critical defense against ransomware attacks. However, several steps need to be taken to prevent backups themselves from being attacked and encrypted by ransomware software. To protect your backups from ransomware, follow these guidelines: Data breaches are not only damaging for an organization, but may place it in violation of regulations or industry standards. This may result in fines and other negative consequences. Below is a brief review of regulations that affect an organization’s data breach strategy. Data classification involves tagging data according to specific types, sensitivity levels, and the impact of data loss, such as data modification, theft, or deletion. Organizations use data classification to determine the value of specific data, its risk level, and then apply the appropriate controls to mitigate these risks. The data classification process is subject to regulatory compliance while also helping achieve compliance. Certain industries, for example, require classification according to different data attributes. The ability to locate and control specific data can help meet compliance with SOX, PCI DSS, GDPR, and HIPAA regulations. Learn more in the detailed guide to data classification. Other federal laws that apply to the collection of information online The HIPAA Breach Notification Rule requires companies to disclose security breaches. It applies both to Covered Entities (healthcare organizations, medical providers and practitioners), and Business Associates (who provide services to Covered Entities). 
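To make the risk-scoring idea concrete, here is a minimal sketch of how a UEBA-style check might score a single login event against a user's historical behavior. The baseline data, weights, and alert threshold are invented for illustration; real UEBA products build far richer statistical and machine-learning models.

from statistics import mean, stdev

# Hypothetical history of one user's login hours over recent weeks.
login_hours = [8, 8, 9, 8, 7, 8, 9, 8, 8, 9]

def risk_score(event_hour, known_device=True, usual_location=True):
    """Combine several weak signals into one risk score (illustrative weights)."""
    baseline, spread = mean(login_hours), stdev(login_hours)
    hour_anomaly = abs(event_hour - baseline) / spread   # z-score of the login hour
    score = min(hour_anomaly * 10, 50)                   # cap the time-based component
    if not known_device:
        score += 25
    if not usual_location:
        score += 25
    return score

ALERT_THRESHOLD = 60
event = {"hour": 4, "known_device": False, "usual_location": False}
score = risk_score(event["hour"], event["known_device"], event["usual_location"])
print(f"risk score = {score:.0f}", "-> ALERT" if score >= ALERT_THRESHOLD else "-> ok")

With these numbers, a 7 a.m. login from a known device scores low and is ignored, while a 4 a.m. login from an unknown device in an unusual location crosses the threshold and raises an alert, mirroring the behavior described above.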
The HIPAA Breach Notification Rule may require organizations to notify individuals whose data was affected by the breach, the USA Office for Civil Rights (HHS/OCR), and/or the media. Violation of the rule can result in fines of up to $1.5 million per year, calculated per violation, or per PHI record exposed in the breach. Learn more in our in-depth guide to HIPAA breach notifications A significant regulation at the state level is the CCPA, the most comprehensive data protection law in the United States, which came into force in January 2020. CCPA places certain obligations on companies who collect or store information about California citizens. These include notifying the data subject when and how their data was collected, and giving them the ability to access and delete that information. The CCPA gives California citizens the right to request statutory damage if their information was exposed in a data breach. This applies only to data breaches that meet three criteria: The EU General Data Protection Regulation (GDPR) regulates the collection, use, transmission, and security of data collected from residents of 27 European Union countries. It applies to any business that works with European citizens, regardless of where the company is based. Organizations that violate the GDPR can be fined up to 20 million Euro or 4% of global revenue. What is the official GDPR definition for data breaches? The GDPR requires organizations to notify relevant parties if they are breached. According to the Quick Guide to Breach Notifications, a breach that requires notification is an incident that: 72 hour deadline and possible fines According to Article 33 of the GDPR, organizations need to report security breaches as defined above within 72 hours of detection of the breach. Breaches are reported to a Data Protection Authority (DPA), and in some cases, also need to be reported to individuals who were affected or to the press. Failure to notify about a breach can result in a fine of up to 10 million Euro or 2% of global revenue. However, European authorities emphasize that fines are a last resort and will only be imposed on those who repeatedly and seriously violate the regulation. Learn more in our in-depth guide to GDPR data breaches Cynet 360 AutoXDR™ is an autonomous breach protection platform that works in three levels, providing XDR, Response Automation, and 24/7 MDR in one unified solution. Cynet natively integrates these three services into an end to end, fully-automated breach protection platform. Breach protection with Cynet incident response services: CyOps, Cynet’s managed detection and response team, is on call 24/7 allowing enterprises of all sizes to get access to the same expert security staff that protect the largest enterprises. Here’s what you can expect from the CyOps incident response team: Learn how the Cynet Autonomous Breach Protection platform and the CyOps 24/7 incident response team can help you. Cynet 360 AutoXDR™ provides cutting edge EDR capabilities: Cynet 360 AutoXDR™ provides the following XDR capabilities: Cynet 360 AutoXDR™ can be deployed across thousands of endpoints in less than two hours. It can be immediately used to uncover advanced threats and then perform automatic or manual remediation, disrupt malicious activity and minimize damage caused by attacks. Get a free trial of Cynet 360 AutoXDR™ and experience the world’s only integrated XDR, SOAR, and MDR solution. 
Cynet, together with several partner websites, has authored a large repository of content that can help you learn about many aspects of data breaches and data security. Check out the articles below for objective, concise reviews of key data security topics. Authored by Cynet Advanced threat protection (ATP) solutions leverage various techniques to provide organizations with the real-time visibility and data awareness needed to detect and block advanced threats. See top articles in our advanced threat protection guide: Authored by Imperva Learn how organizations prepare themselves for a disaster by setting up a remote disaster recovery site and creating automated procedures for business continuity and business recovery (BC/DR). Authored by Cloudian Data protection practices and tools help organizations ensure the protection of data that passes through the organization’s systems. Learn about key data protection techniques. See top articles in our data protection guide: Authored by Cloudian Data backup is critical to ensure organizations can recover from various types of data losses. Learn how to successfully implement data backup techniques. See top articles in our data backup guide: Authored by NetApp Amazon Web Services (AWS) is a top cloud computing vendor, offering highly customizable tools, including a dedicated backup service. Learn how to leverage AWS backup tools and techniques. See top articles in our AWS backup guide: Authored by NetApp Microsoft Azure is a popular cloud computing vendor, offering enterprise-grade solutions, including backup and recovery services. Discover key Azure backup tools and techniques. See top articles in our Azure backup guide: Authored by NetApp Google Cloud Platform (GCP) is a widely used cloud computing vendor, offering scalable and cost-effective services. Learn about popular backup options and techniques offered by Google Cloud. See top articles in our Google Cloud backup guide: Authored by Satori Learn how organizations classify large scale datasets in order to better secure and protect their most sensitive and valuable data. Additional Data Breach and Data Security Resources Below are additional articles that can help you learn about data security topics.
<urn:uuid:6d1a79c1-df3d-4fe0-aba3-5b1a5976d982>
CC-MAIN-2022-40
https://www.cynet.com/data-breaches/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337595.1/warc/CC-MAIN-20221005073953-20221005103953-00361.warc.gz
en
0.925338
5,322
3.078125
3
The Internet of Things has been receiving quite a bit of attention. Definitions vary, but at its core the concept is a simple one: extend computing and data-processing capability to the physical world around us. The earliest manifestations of this are starting to be seen already in the growth of smart devices: televisions, automobiles, appliances, electric meters, etc.

Certainly, one can imagine numerous scenarios in which our businesses can be streamlined through strategic application of this concept: dynamic inventory management; self-diagnostic capability for appliances (e.g., refrigerators); better logistics; increased efficiencies resulting from better telemetry; and so forth. These advantages promise rapid and prolific adoption as implementation comes to fruition. However, there are also serious ramifications for security and privacy.

For example, 51 percent of respondents to a recent global survey planned to capitalize on the Internet of Things — and 45 percent believed it had already impacted their enterprises. The top governance-level concerns were related to security and privacy. Specifically, "increased security threats" were cited by 38 percent of respondents, followed by data privacy, which was a top concern of 28 percent of respondents to the ISACA 2013 IT Risk/Reward Barometer.

Gearing Up for IoT

Still, there have been IP-connected, closed-architecture, specialized devices in the scope of many security programs for quite a long time. Consider the role of point-of-sale devices in retail, diagnostic modalities in healthcare (MRI machines and the like), and industrial control systems in energy and manufacturing. While wildly different in functionality and implementation, these devices have common aspects that can help shed light on the security challenges ahead as more and more IP-connected, purpose-built devices come online. These historical challenges can serve as a touchstone to prepare for the emergence of the Internet of Things.

We can't solve all of them now — there are too many unknown unknowns — but anticipating now what capabilities we might need as smart devices become more prevalent has a few advantages. It can give us a leg up if enterprise use amps up quickly, as it is likely to, and also help insulate organizations against risks during early adoption, when guidance and standards are still emerging. Although securing the Internet of Things is a work in progress, there are a few security capabilities to develop — or hone, if they're already in place — in order to prepare. These are things you can do today that have benefits right away but that also will be critical as IoT develops and smart devices proliferate.

Capability No. 1: Threat Awareness/Intelligence

Purpose-built devices, no matter what they are, have security-critical vulnerabilities to the same degree that everything else does. A few things are different: First, manufacturers may not have the same kind of vulnerability reporting and response channels as, say, an operating system or application vendor would. Second, these devices are often closed architecture with a nontransparent and often proprietary code base. Thus, there will be varying degrees of transparency when it comes to vulnerability reporting. For example, some manufacturers may initially downplay the impact of vulnerabilities or be slow to report them.
Having internal analysts with their ear to the ground for vulnerabilities in these devices — and a process for rapidly reporting what they find — can help expose vulnerabilities earlier than if the sole alerting mechanism is manufacturer notification. Likewise, tracking the tactics of attackers will help expose attempts to actively exploit these devices.

Capability No. 2: Inventory Management

As most security pros know from cloud and virtualization efforts, retroactively creating inventories of a rapidly expanding technology footprint is challenging. As previously unconnected dumb devices start to come with built-in network and computing capability, knowing what and where those devices are will be important. Put those two things together, and it's probably a good idea to start tracking what they are, where they live (to the extent they're non-portable), and who's responsible for them. It's easier to start now while the problem is small than it is to wait and retroactively attempt discovery once usage proliferates.

Capability No. 3: Application Security

If you're a manufacturer producing a smart device, it behooves you to minimize the number of issues you have to fix once it's in customers' hands. Likewise, if you're a consumer, it's helpful to understand the underlying protocols these devices use to interact. Both require expertise in understanding how applications operate and interact: how the protocols operate; how security defects or misconfigurations arise; how other components are likely to impact the applications running on these devices; etc. These skills are forged in the subdiscipline of application — that is, software — security. If, like many shops, you've underinvested in this arena in the past, starting to build some strength here might be a smart move.

Capability No. 4: Vendor Governance

Though it might not seem immediately apparent, securing the supply chain can be particularly critical when it comes to securing purpose-built devices. There are a few reasons. First, the practices of manufacturers (for example, their ability to build a hardened product) play a role. Second, implementers and VARs can leave configuration or other errors in deployment. Lastly, maintenance and support may require granting access to external parties so they can troubleshoot and provide that support. Building a capability to assess these external parties in the supply chain can give you some transparency and help you assess the level of risk these situations might introduce.

Capability No. 5: Business Integration

All of the above capabilities require, at their core, one central thing to be effective: namely, knowledge of how an organization is employing the Internet of Things as part of its broader strategy. To get this, you need some knowledge about what the business is doing — ideally, as rapidly as possible. Being out of touch with business efforts has never been a good way to operate, but it's particularly risky now. Business stakeholders might not think to come to IT when making purchasing decisions about previously unconnected devices that now host both networking and computing capability.
<urn:uuid:e50bdcd3-b57b-43ed-aeae-851e37a0a554>
CC-MAIN-2022-40
https://www.ecommercetimes.com/story/securing-the-internet-of-things-5-easy-pieces-79438.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00361.warc.gz
en
0.949357
1,265
2.546875
3
Decentralized clinical trials (DCT) attempt to reduce or eliminate study site visits for patients, thereby easing participation and improving patient retention. This is accomplished through the use of Internet of Things (IoT) technology such as smartphones or tablets for patient data collection, medical sensor or wearable devices, and cellular connectivity for secure data transmission. According to Applied Clinical Trials, drug-based interventional DCTs only experienced a 7% CAGR between 2014 and 2019, and that rate jumped to 77% between the second halves of 2019 and 2020, following the onset of the pandemic. The most common connected health devices used in DCTs fall into two primary categories: consumer-grade health monitoring devices and medical-grade wearable technology. These devices are equipped with sensors and connectivity to take biometric readings and transmit those readings to cloud-based platforms and APIs where they can be viewed and analyzed by healthcare providers and researchers. Medical devices may be used to collect patient data remotely, and connectivity is needed to transmit that data to the cloud so that it can be accessed by healthcare providers and clinical staff. Some devices may be equipped with embedded cellular connectivity, while others — like Bluetooth-enabled medical sensors — may rely on a hub for cellular connectivity, such as a smartphone, tablet, or another connected device. In some decentralized clinical trials, devices may be provided to the patients by their care providers or clinical staff. IoT managed services providers help companies deploy connected health solutions with a suite of services that includes hardware procurement, configuration, kitting, shipping, reverse logistics, and warehousing. Comprehensive Mobile Device Management (MDM) solutions are specifically built to meet the unique patient privacy and security requirements of the life sciences industry. Clinical trial companies can improve and enhance data collection with custom controls that result in lower risk, reduced costs, and improved efficiency in the operation and execution of clinical trials. IoT enablers help companies cut through the complexities and reduce risk, but it’s important to select a partner who understands the unique requirements of life sciences, pharmaceutical, and clinical research companies across the globe. Regulatory compliance, technical ability, ecosystem, and logistics are exceptionally important to consider when looking to decentralize clinical trials. KORE works with several leading CROs and pharmaceutical companies to streamline the process of sourcing, securing, and connecting mobile devices that are used for patient data collection within clinical trials. This is done through a suite of managed services that includes mobile device management (MDM), wireless connectivity, deployment and logistics, and project management — all provided from FDA registered and ISO 9001/13485 certified facilities. Download the eBook, “Roadmap to Decentralized Clinical Trials” to learn more about the Connected Health technology used in decentralized clinical trials.
<urn:uuid:db5c10df-7005-4656-b6e8-b60a4c7a0016>
CC-MAIN-2022-40
https://www.korewireless.com/news/roadmap-to-decentralized-clinical-trials
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00361.warc.gz
en
0.929879
596
2.640625
3
Viruses are the plague that the internet-connected world is struggling with. With more than 350,000 new types of malware emerging each day, the annual cost of the menace runs to up to $55 billion. The question on our minds is if we will ever stop 100% computer viruses. While this may be possible, it may not happen overnight. Read on as we explore the hindrances to stopping 100% viruses and how we can overcome them. Most developers rely on the Neumann architecture that treats data and programs similarly. Program instructions are similar to data instructions in the Von Neumann machine. They both share the same architecture, memory, and bus, forming a security flaw, as pointed out by Feustel as early as 1971. The architecture allows data and programs to use similar instructions. That means that your data could be your program. In some cases, some programs can change their instructions and modify their addresses. This may not be problematic in minicomputers with single users. The problem arises in an environment with multi-user machines where sharing data and codes may be rampant. This can lead to gross data corruption. These outdated designs make our computers inherently unsafe. This compromises efforts to stop 100% of viruses since they create an environment for the continuous creation of corrupted programs and data that present themselves as viruses. Sometimes we build computer artifacts that we cannot analyze with our science and engineering. This is because the foundational architecture allows self-modifications that we may not notice or control. This creates mutations that we did not set out to build, usually evolving into viruses. As long as we are writing new versions of operating systems and programs, we will continue to see an emergence of viruses. This is because people will continue to try new tricks to overcome protocols. Recent attempts used malware that prompts you to open and install it on your devices. As you follow the prompts and input new data, you can create a random stream of log information. This information may corrupt your systems and data, and your anti-malware software may fail to stop it. This leaves your system exposed to corruption by mutating log information. It shows that we may not build foolproof software using a flawed foundational architecture that allows the software to self-replicate. We can overcome this hindrance to stopping viruses by looking beyond conventional computer architectures. The first step would be to delimit our architectures beyond Von Neumann and Harvard bus. Another step would be determining cache memory and register through a system-level design approach. This would be different from the transistor count, maximum clock speed, or chip area currently in use. Rooting for secure boot, trust, secure shutdown, secure OS, secure log files, and secure auditing will be another step towards stopping computer viruses. How many times are people warned against opening suspicious links and emails, but they still do so? Humans are inherently curious and gullible. They will want to know what is in an email with a subject matter that arouses their curiosity. If a link or attachment promises users gainful leads, they may open it without a second thought. This way, we give room for hackers to continue creating malware since they know the malware will find hosts in the many computers operated by gullible and curious humans. Installing anti-malware software may reduce the success rate for hackers. 
However, it does little to eliminate the threat, since hackers work around the clock to create programs that overpower anti-malware software. This creates an endless cat-and-mouse race between hackers and anti-malware developers.
A human firewall may be a strong defense against malware. If people stopped opening links and attachments that introduce malware into our computer systems, it would dissuade hackers from building malicious programs. Hackers put effort into building malware because that effort pays off; if it stopped bringing rewards, they would be deterred from creating new malware. That may rightfully be dismissed as wishful thinking, since it is not easy to change human nature. However, you can reduce the threat by training employees on the dangers posed by malware, how to identify suspicious attachments and links, and how to avoid falling victim to phishing scams. Could we devise a way for people to know what is in an email or a link before opening it? That would be a perfect way of ensuring people do not click on or open unsafe attachments and links.
We may not be able to change human nature to stop people from exposing themselves and their systems to cyberattacks, and changing the foundational architecture of computers may be a long-term venture, but there is a lot we can do to stop hackers in their tracks. As stated earlier, hackers thrive as long as their techniques bring a reward, so we can employ deterrent measures to keep them from getting those returns. These are cybersecurity measures within our means, including the following:
Multi-layer security, or defense in depth, is an excellent way to protect your systems and devices against blended cyberattacks. When one layer defends you against one kind of attack, another may protect you against malware that circumvents the first layer. It is a good defense against increasingly complex cyber threats.
Encourage your employees and family members to use strong passwords or passphrases. They should not write passwords down or share them with unauthorized people; doing so may compromise the security of your systems and data.
Take proactive security measures to keep your systems, devices, and data safe from cyberattacks. Besides installing anti-malware software and cybersecurity programs, make sure you update them regularly so they protect you effectively at all times.
A web-filtering system blocks programs and sites that may harm your systems and devices. You can also use it to regulate the websites your employees can visit while at work, making it an excellent way of preventing them from spending working hours on non-work-related content.
<urn:uuid:f2ff0358-5c5e-4256-a44d-24c41dab4f19>
CC-MAIN-2022-40
https://ebsolution.ca/will-we-ever-stop-100-computer-viruses/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00561.warc.gz
en
0.941685
1,189
3.375
3
Because it is a browser-accessible Web service, Dropbox needs little in the way of IT intervention, and can be used by students on campus and off. And because it offers clients for Windows, Mac and Linux -- as well as Android, iOS and BlackBerry smartphones -- any student can use Dropbox, regardless of device. Here are four great uses for Dropbox in the classroom.
1. Sharing Stored Files. In the early days, some educators probably turned to Dropbox simply because their school's own networking setup lacked such a feature. Anecdotal reports suggest that schools now are sanctioning the use of cloud services like Dropbox. [ What's the latest and greatest in Dropbox? Read Dropbox 2.0.0 Pretties Up the Menu. ] Last year, Dropbox launched a program called Space Race, offering people with an .edu email address an extra 3 GB of storage -- on top of the 2 GB of storage all users get. At this writing, it is not clear if Dropbox will offer Space Race again this year.
2. Overcoming Email Limitations. A common complaint of email users is over-size attachments, such as large PowerPoint files and videos, that never reach their intended recipient because the email program chokes on the file. Dropbox essentially solves this problem by bypassing email.
3. Turning In Homework. In its simplest application, Dropbox can be used as a common filing cabinet through which teachers can provide documents, such as homework assignments and handouts, and media files for the entire class. But another popular use goes in the opposite direction, from students to teachers. Using Dropbox as a homework drop has the added benefit of providing, by default, a time-stamp for these submissions. Of course, students can share Dropbox folders with each other too, and so collaborate on joint assignments. Happily, the free version of Dropbox saves a history of all deleted and earlier versions of files for 30 days. Paid Dropbox Pro accounts have a feature called Packrat that saves file history indefinitely.
4. Easy Saves From Popular Apps. Quite a number of popular productivity and educational applications now feature a Dropbox "sync" option. Evernote, for example, has a "save to Dropbox" option. Other popular education apps with Dropbox integration include: Notability, iThoughtsHD and Ghostwriter Notes.
A free Dropbox account includes 2 GB of space. Users can earn more free space in a variety of ways. Also, more storage can be purchased via monthly or annual plans. For institutions needing even more storage, there is Dropbox for Teams, which adds a number of advanced account security and management options, as well as unlimited storage. Pricing for Dropbox for Teams starts at $795 for up to 250 licenses.
<urn:uuid:35f3474b-fefb-4a69-b7af-9eddcd1b91cd>
CC-MAIN-2022-40
https://www.darkreading.com/attacks-breaches/dropbox-in-the-classroom-4-great-uses
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00561.warc.gz
en
0.933388
624
2.515625
3
HIPAA Data Retention In The Cloud The Health Insurance Portability and Accountability Act of 1996 (HIPAA) is a federal law that implemented national standards to protect sensitive patient health information and prevent it from being disclosed without the patient’s consent or knowledge. There are also HIPAA data retention laws that upgraded standards within the healthcare industry in general, helping to minimize paperwork and improve the transfer of medical records, insurance coverage, and billing information between healthcare entities. Of course, HIPAA was implemented long before cloud data backup was an option for organizations in need of compliance, but the same rules apply today. Below is a brief overview of what HIPAA policies entail regarding personal data—and which entities must adhere to them. Who Must Comply With HIPAA Data Retention Policies? - Health plans – e.g., insurance or group health plans. - Healthcare providers – including doctors, hospitals, and clinics. - Healthcare clearinghouses – entities that assist healthcare providers in standardizing various aspects of health data. - Businesses and organizations that associate with HIPAA-covered entities – any person or entity whose services and activities involve the use or disclosure of protected health information (PHI) on behalf of a HIPAA-covered entity. Types of documents that fall under the consideration of HIPAA include: - Medical records – any file or document containing a patient’s medical history, exam results, treatments, medications, etc. - Non-medical HIPAA-related documents – any file or document issued during the actions of securing, storing, processing, or destruction of medical records. The State of HIPAA Data Retention Laws Today Technology, especially tech that handles data storage and backup, has changed significantly since HIPAA was first introduced in 1996—something the Health Information Technology for Economic and Clinical Health (HITECH) Act of 2009 was created to address. HITECH applies to any service provider with access to protected health information (PHI) who create, receive, maintain or transmit PHI on behalf of an organization—including cloud backup providers. HITECH includes supplementary requirements around the protection of electronic PHI (ePHI) that include aspects like secure backup, backup frequency, recoverability of data, and encryption. What complicates things even further is that retention periods for medical data can vary depending on the state since each state sets its own policies, and furthermore, there are different federal retention period requirements depending on the type of HIPAA-related documents and data. Maintaining HIPAA Compliance with Cloud Backup If your organization is subject to HIPAA regulations, the risk of compliance failure can be incredibly costly. Numerous companies have paid out millions of dollars to the US federal government for violations in recent years. It’s absolutely essential to depend on a cloud backup platform that can meet HIPAA compliance demands on all ends—this includes data retention policies, encryption, secured backup, and the backup frequency. Clumio is a fully secure, cloud backup-as-a-service solution that provides organizations of all sizes with end-to-end data backup and recovery through an interface that offers clear visibility into data retention policies. The platform helps define backup policies, automate data retention and monitors compliance in real-time, sending instant alerts when compliance may be at risk. 
Other features that help maintain HIPAA compliance include:
- A simple interface that shows a single, cohesive view of all assets
- The ability to automatically apply uniform policies to existing and future resources
- ISO 27001, ISO 27701, SOC 2 Type 2, HIPAA, and PCI DSS certifications
- Air-gapped storage of data backups outside of production environments to protect backups from threats like ransomware attacks
See for yourself how industry-leading innovation can enable your organization to achieve and maintain HIPAA compliance while also controlling cloud costs. Schedule a demo today to learn how Clumio can protect your organization’s data and ensure compliance in less than 15 minutes—no new software to install, no additional hardware to add, and no pre-planning required.
<urn:uuid:aefa6791-5331-4526-bbc7-e1db88514ebf>
CC-MAIN-2022-40
https://clumio.com/rto/hipaa-data-retention-in-the-cloud/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00561.warc.gz
en
0.916687
838
2.71875
3
Representatives of the space agency said that NASA staff and home-based agency contractors have recently suffered from an increase in the number of hacker attacks, with their devices constantly being lured into accessing malicious sites. According to official figures, in recent days NASA personnel have been suffering from:
- a doubling in the number of phishing attacks by email;
- exponential growth of malicious attacks on NASA systems;
- a doubling of attempts to block or mitigate the activity of NASA systems trying to access malicious sites (unknowingly, as users browse the Internet).
The last point means that NASA employees and contractors are actively clicking on malicious links sent to them via email or text messages, and this now happens twice as often as usual. Social engineering is still one of the easiest ways to access corporate networks and users’ computers. The blocking and mitigation mechanisms that the NASA SOC uses appear to include blocking access to servers that are considered malicious or suspicious, as well as terminating dangerous downloads from agency computers. Unfortunately, these measures can hardly be called reliable; it is much better when staff are trained to recognize phishing attempts and act accordingly.
“NASA employees and contractors should be aware that the APT and cybercriminals are actively using the COVID-19 pandemic to attempt exploitation and attacks on NASA’s electronic devices, networks and personal devices. In some cases, the goals of [criminals] include access to confidential information, usernames and passwords, conducting denial of service attacks, the spread of misinformation, and fraud,” NASA representatives said.
Cybercriminals have begun sending emails with malicious attachments and links to fraudulent sites more often, trying to trick victims into disclosing confidential information and providing access to NASA systems, networks and data. Such baits are often disguised as requests for donations, supposed updates on how the virus is transmitted, safety measures, tax refunds, information on fake vaccines, and disinformation campaigns. As a result, contractors and staff are advised to exercise caution and increased vigilance when using computers and mobile devices connected to the Internet.
As we wrote earlier, NASA is not the only organization facing such difficulties. For example, Check Point experts recently reported that 71% of cybersecurity professionals have seen an increase in the number of threats and attacks since the beginning of the pandemic. The majority of respondents (55%) cite phishing attempts as the main threat. In second place are malicious sites that allegedly contain information and tips about coronavirus (32%). Next come the growth in malware (28%) and ransomware (19%).
“My new certificate log catcher is sucking in all the covid-19 and coronavirus domain certificates. 3,143 certificates in 24 hours today (UTC), not yet checked for duplicate domains re-registered for additional hosts,” reported infosec expert Sean Gallagher.
Overall, attackers are very actively exploiting the new opportunities that the pandemic offers them.
<urn:uuid:3266147b-8c2d-4217-8fa7-0cc20189d84c>
CC-MAIN-2022-40
https://gridinsoft.com/blogs/nasa-staff-faces-exponential-increase-in-number-of-hacker-attacks/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00561.warc.gz
en
0.945864
603
2.546875
3
File Transfer Protocol (FTP) predates both HTTP and the TCP/IP protocol suite and has more than 40 years of history in the industry. Its original specification was written back in 1971. The first FTP clients were command-line programs; as adoption grew, graphical user interface (GUI) clients appeared and were installed on numerous systems, desktops, mobile devices and so on. In this article we will learn more about FTP and how it works, setting up FTP access on systems, and its features, functions and limitations.
About – FTP (File Transfer Protocol)
File Transfer Protocol (FTP) is a standard network protocol used for transferring computer files between a client and a server across a computer network. Users can use FTP via a command-line interface such as the Command Prompt in Windows or the Terminal on UNIX systems and macOS. To log in to an FTP server, a username and password are required, along with the port number (when logging in from a command-line interface). The FTP protocol uses ports 20 and 21 by default. FTP can also work anonymously, with 'anonymous' as the username and an email address as the password.
File Transfer Types
FTP supports two kinds of file transfers: Binary and ASCII.
- ASCII is a 7-bit character set which contains 128 characters. Any text-based file, such as HTML, TXT, or PostScript, is an ASCII file.
- Binary files have a different structure and require a different transfer type; they include images, applications, generated packages such as .ZIP archives, and much more.
Using FTP through a browser offers limited functionality, mainly for downloading files. Some examples of FTP servers and their clients are FileZilla Server and FileZilla, SolarWinds and WinSCP, and Serv-U and SmartFTP.
Features of FTP Access
- FTP is one of the fastest ways to transfer files from one computer to another.
- FTP is efficient, as we do not need to complete all the operations to get the entire file.
- FTP access is secure as we need to log in with a username and password.
- FTP allows you to transfer files back and forth.
How to set up FTP Server?
FTP works on the client-server model. The server hosts the files to be shared and the client provides the interface to access, download or upload files to the file server. The systems transferring files can be within the same network where FTP is configured or could be outside the network (over the Internet). FTP uses two ports, one for the connection and one for sending data. FTP runs in two modes – Active and Passive. It uses two channels: a command channel and a data channel.
- The command channel is used for sending commands and responses.
- The data channel is meant for sending data.
- In active mode, the client launches the command channel and the server establishes the data channel.
- In passive mode, both command and data channels are established by the client.
Open channel on FTP client and server
Data and other communications from clients must be able to reach the FTP server, and outgoing data and responses from the server must be able to reach the client. On the server side, port 21 must be open to accept incoming connections. The port used by the server to respond to clients can range from port 21 to 1022.
- FTP requires IIS. Both IIS and FTP should be installed for the configuration of the FTP server.
- A 'root' folder to publish over FTP.
- Set permissions to allow anonymous access to the root folder (%SystemDrive%\ftp\ftproot): ICACLS "%SystemDrive%\ftp\ftproot" /Grant IUSR:R /T
- The root folder should be set as the path for your FTP site.
The software firewall (such as Windows Firewall or Symantec) should allow connections to the FTP server.
Step 1: Enabling FTP in Windows if IIS is not installed
If IIS is not installed:
- Go to Start > Control Panel > Administrative Tools > Server Manager in Windows Server Manager.
- Go to the Roles node. Right-click on Roles and click Add Roles.
- In the Add Roles window, open Server Roles and check Web Server (IIS).
- Proceed through the setup wizard and click Install. Allow the installation to complete.
Step 2: Transferring files
To transfer files, add an FTP site. Once the FTP site is enabled, clients can transfer files to and from the site using the FTP protocol.
Step 3: Setting up the FTP site
Go to Start > Control Panel > Administrative Tools > Internet Information Services (IIS) Manager. Expand the local server in the IIS console. Right-click on Sites, choose Add FTP Site, type the FTP site name and the content directory path, and click Next. The directory path should be the same folder we granted anonymous access permissions to above: %SystemDrive%\ftp\ftproot. In 'Binding and SSL Settings', type the IP address of the server and check the Start FTP Site Automatically option. Choose the SSL setting appropriate to your requirements and click Next. Select Basic for authentication and click Finish to complete the FTP site creation.
Step 4: Accessing files on the FTP server
To access files on the FTP server, open a file explorer and type ftp://serverIP. The FTP server asks for a username and password. Enter the username and password (Windows or Active Directory credentials) and click Log On. The files and folders display under the FTP server.
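The same Step 4 workflow can also be scripted. Below is a minimal sketch using Python's built-in ftplib module; the hostname, credentials, folder, and file names are placeholders for illustration rather than values from this setup, and plain FTP sends credentials in clear text (ftplib.FTP_TLS can be used where FTPS is enabled on the server).

```python
from ftplib import FTP

# Connect to the FTP server (hypothetical host and credentials).
ftp = FTP("ftp.example.com")               # control connection on port 21
ftp.login(user="alice", passwd="s3cret")   # or ftp.login() for anonymous access

ftp.cwd("/ftproot")                        # change to the published root folder
print(ftp.nlst())                          # list files and folders

# Download a file in binary mode (RETR command).
with open("report.pdf", "wb") as fh:
    ftp.retrbinary("RETR report.pdf", fh.write)

# Upload a file in binary mode (STOR command).
with open("upload.txt", "rb") as fh:
    ftp.storbinary("STOR upload.txt", fh)

ftp.quit()
```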
<urn:uuid:c8b0b69d-175a-4cea-9641-ba0f7b463d00>
CC-MAIN-2022-40
https://networkinterview.com/what-is-ftp-how-to-set-up-ftp-server/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00561.warc.gz
en
0.863197
1,157
3.875
4
CCPA vs GDPR Compliance Comparison
What is GDPR?
The GDPR is a European Union (EU) law that was adopted in April 2016 and has been enforceable since May 2018. The Regulation is designed to improve personal data protection and increase organizational accountability for data breaches in order to protect European Union residents. GDPR includes fines of up to 4% of global revenues or 20 million EUR (whichever is higher), and no matter where your organization is located, if it processes or controls the personal data of EU residents, your organization is subject to the regulation.
Notable data security requirements of GDPR
Some of the key provisions of the GDPR require organizations to:
- Process personal data in a manner that ensures its security, “including protection against unauthorized or unlawful processing” (Article 5)
- Implement technical and organizational measures to ensure data security appropriate to the level of risk, including “pseudonymisation and encryption of personal data.” (Article 32)
- Communicate “without undue delay” personal data breaches to the subjects of such breaches "when the breach is likely to result in a high risk to the rights and freedoms" of these individuals. (Article 34)
- Safeguard against the “unauthorized disclosure of, or access to, personal data.” (Article 32)
More about GDPR:
- What is GDPR? – Entrust webpage with complete list of GDPR chapters and articles
- GDPR.EU – Official European Union site with complete guide to compliance
- GDPR Overview – Entrust webpage with a general overview of GDPR
Does GDPR require encryption of personal data?
Article 6 (Lawfulness of processing) identifies “encryption or pseudonymisation” as “appropriate safeguards” for protecting subjects’ personal data. Article 32 (Security of processing) states that “the processor shall implement appropriate technical and organisational measures to ensure a level of security appropriate to the risk, including inter alia as appropriate: (a) the pseudonymisation and encryption of personal data …” Article 34 (Communication of a personal data breach to the data subject) allows organisations suffering a data breach to avoid the communication requirement if they used encryption to “render the personal data unintelligible to any person unauthorised to access it.”
What is CCPA?
The California Consumer Privacy Act (CCPA) went into effect at the beginning of 2020. It was designed to give California residents more control over what personal data is collected and how that data is used. Businesses found in violation of CCPA stand to incur a $7,500 fine for each intentional violation. Non-intentional violations are less onerous, but still costly, at $2,500 each. However, civil litigation can potentially have a negative impact on non-compliant organizations. For each consumer affected by CCPA non-compliance, organizations stand to face up to $750 in civil damages per consumer.
More about CCPA:
- CCPA Civil Code – Official code on California state legislation information site
- CCPA Overview – Entrust webpage with a general overview of CCPA
Does CCPA require encryption of personal data?
Section 1798.150 of CCPA states: “Any consumer whose nonencrypted and nonredacted personal information, as defined in subparagraph (A) of paragraph (1) of subdivision (d) of Section 1798.81.5, is subject to an unauthorized access and exfiltration, theft, or disclosure as a result of the business’s violation of the duty to implement and maintain reasonable security procedures and practices appropriate to the nature of the information to protect the personal information may institute a civil action …” Additionally, the protection of encryption keys is addressed in legislation related to CCPA. Notably, Assembly Bill 1130, which was introduced to update the California breach notification statute, requires notifying people whose data has been breached unless that data is encrypted, and the encryption keys have not been obtained with the data. CCPA vs. GDPR How are GDPR and CCPA similar? Both GDPR and CCPA are intended to protect the privacy and data rights of those living in their respective geographies. Both extend their reach to organizations doing business with their residents, regardless of whether those organizations reside in their geographies. Both CCPA and GDPR grant to individuals certain rights regarding their personal data and require transparency from the organizations that hold and process that data. Both CCPA and GDPR: - Require businesses to disclose what personal information the businesses have compiled about those individuals. - Require organizations to divulge what they do with the personal data. - Require organizations holding personal data to delete that data upon request of the person the data pertains to. - Require organizations to put in place cybersecurity measures to protect the personal data of individuals. - Levy fines for non-compliance. How are GDPR and CCPA different? - GDPR requires companies to have legal basis before processing data about residents. CCPA does not. - GDPR applies to all businesses that meet the legal basis requirement mentioned above. CCPA applies only to businesses with an annual gross revenue of more than $25 million. - Under CCPA, an individual can keep companies from selling their private data, and organizations cannot discriminate against these individuals. - GDPR imposes additional conditions for companies processing health-related information, because GDPR is more specific by including terms, such as “genetic data” and “biometric data.” CCPA uses a general umbrella term. - In general, GDPR fines seem likely to be higher than CCPA fines. However, CCPA opens the door for civil litigation, which could prove just as costly to an offending organization.
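Both frameworks treat encryption and pseudonymisation as key technical safeguards (GDPR Article 32; CCPA Section 1798.150's "nonencrypted and nonredacted" language). As a purely illustrative sketch rather than legal or compliance guidance, the Python snippet below shows one common pattern: pseudonymising a direct identifier with a keyed hash and encrypting a record at rest. The environment variable, key handling, and field names are assumptions made for the example; real deployments would manage keys in a KMS or HSM.

```python
import hashlib
import hmac
import os
from cryptography.fernet import Fernet

# --- Pseudonymisation: replace a direct identifier with a keyed hash ---
# Assumes a secret key is provided out of band and stored separately from the data.
PSEUDONYM_KEY = os.environ["PSEUDONYM_KEY"].encode()

def pseudonymise(identifier: str) -> str:
    """Return a stable pseudonym that cannot be reversed without the key."""
    return hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256).hexdigest()

# --- Encryption at rest: symmetric encryption of a personal-data record ---
fernet_key = Fernet.generate_key()   # simplified; keep this key outside the data store
fernet = Fernet(fernet_key)

record = b'{"name": "Jane Doe", "diagnosis": "hypertension"}'  # made-up example record
ciphertext = fernet.encrypt(record)   # unintelligible without the key
plaintext = fernet.decrypt(ciphertext)
```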
<urn:uuid:c5f6db7b-16a7-48db-a4c2-90fe3f953c79>
CC-MAIN-2022-40
https://www.entrust.com/resources/hsm/faq/data-protection-security-regulations/ccpa-vs-gdpr
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337473.26/warc/CC-MAIN-20221004023206-20221004053206-00561.warc.gz
en
0.916134
1,197
2.78125
3
Secure programming is the best defense against hackers. This multilayered course demonstrates live, real-time hacking methods, analyzes the code deficiencies that enabled each attack and, most importantly, teaches how to prevent such vulnerabilities by adopting secure coding best practices in order to bullet-proof your HTML5, JS and Angular applications. The methodology of the Cycle of Knowledge is as follows: Understand, Identify, Prevent. This methodology gives the student analytical tools to gain a deeper understanding of coding vulnerabilities and to implement security countermeasures in different areas of the software development lifecycle. The course covers major security principles for securing HTML5, JS and Angular applications; the training includes programming vulnerabilities and specific security issues relevant to HTML5, JS and Angular applications.
- Unit 1: Introduction to Application Security
- Unit 3: Browser Security Policy
- Unit 4: HTML5 Secure Coding
- Unit 5: Angular2 Secure Coding
Duration: 1.5 – 2 hours. Following completion of all chapters, the student will be directed to a final exam; on passing the final exam (60% and above), the student will receive a completion certificate. Intended audience: HTML5, JS and Angular development team members.
<urn:uuid:d17b54a7-7124-45d8-b858-99a61bb7f177>
CC-MAIN-2022-40
https://appsec-labs.com/html5-js-angular-secure-coding-e-learning/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333541.98/warc/CC-MAIN-20220924213650-20220925003650-00761.warc.gz
en
0.868721
268
2.828125
3
The complex layers of cloud computing sustainability Interest is growing in cloud computing’s ability to reduce carbon, but the ‘green cloud’ argument is not as clear as many believe. I’ve argued over the years that cloud computing is a step in the right direction when it comes to sustainable computing. My viewpoint often opposes environmental organizations that argue against the many new power-hungry data centers that cloud companies build. The sustainability of public cloud computing is easy to understand. Simply put, cloud does more processing and storage with the same number of physical servers and data centers. How? Cloud computing’s multitenant approach puts more applications, data sets, and users on a smaller amount of hardware, at approximately 85% to 95% utilization of capacity. Compare this to traditional approaches where we own the servers and data centers, and the hardware resources are often utilized at a very low capacity, often 3% to 7%. Cloud requires less power for the same amount of processing. Does this always make cloud computing green—or should we say, “greener”? The word “always” rarely turns out to be accurate. It’s the same with cloud—the issues are rarely black and white. To investigate cloud sustainability, let’s break it down into the two major layers. Source of power People often brag about how their electric car has a zero-carbon footprint. It’s not that simple. Most power, at least in the USA, is generated by burning fossil fuels. In 2021, it was 60% fossil fuel with the remainder split between nuclear and renewables. No matter where you charge your Tesla or power your cloud or non-cloud data center, power consumption has a carbon impact. The argument for cloud computing sustainability is that it reduces the required hardware and data center space. However, both cloud and non-cloud application processing requires power grids driven by fossil fuels. The opportunities to reduce cloud-related carbon not only depend on the use of sharable IT resources in public clouds but also the locations of the data centers for public clouds. The trend is to have points of presence as close as possible to those using the cloud services. Many of those points depend on carbon-heavy power sources. In those instances, cloud computing is not as green. You could make the argument that it’s better to leverage an enterprise-owned data center with much lower utilization of server resources that’s in an area served by renewable and/or nuclear energy. In that instance, it’s not as green to choose a cloud computing provider’s point of presence that only uses carbon-heavy generating power. No matter if your power burns fossil fuel or not, much can be said about how you optimize the cloud resources you use. For example, if you give two developers the same business problem to solve using public cloud resources, you’ll usually find that one of them does a much better job at leveraging the minimum number of resources for the maximum effect in terms of value returned to the business. This can have very different impacts on resources used to solve basically the same problem. For example, the fully optimized version uses only three compute resources and two storage resources. The second uses three times more to solve the same problems. Thus, the second solution also burns approximately three times the power. Remember: There are cost and carbon impact penalties if you put underoptimized solutions on a public cloud. The more unnecessary resources, the more unnecessary cloud fees. 
Depending on how your public cloud provider powers its data center, this could also have a significant impact on carbon output. You can do audits yourself or use an outside organization to dig into the specifics of how your use of public clouds helps or hurts the planet. These audits include an explanation of the ultimate source and location of the power as well as how you specifically use the cloud resources. The findings often surprise enterprises. Some believe their use of cloud computing is at the height of sustainability. It could turn out that their cloud solutions are highly inefficient if they just lifted and shifted applications and did not refactor them for cloud resource optimization. Or an enterprise might not understand the true source of the power. It could be as simple as just moving their workloads to another region that only uses renewables as a source. Yes, there might be some impact on latency, but the positive impact on sustainability might offset it. This is interesting stuff. I agree with using cloud computing to reduce carbon. However, like anything else, we need to understand what’s truly happening before we declare success. Nastel Technologies is the global leader in Integration Infrastructure Management (i2M). It helps companies achieve flawless delivery of digital services powered by integration infrastructure by delivering tools for Middleware Management, Monitoring, Tracking, and Analytics to detect anomalies, accelerate decisions, and enable customers to constantly innovate, to answer business-centric questions, and provide actionable guidance for decision-makers. It is particularly focused on IBM MQ, Apache Kafka, Solace, TIBCO EMS, ACE/IIB and also supports RabbitMQ, ActiveMQ, Blockchain, IOT, DataPower, MFT, IBM Cloud Pak for Integration and many more. The Nastel i2M Platform provides: - Secure self-service configuration management with auditing for governance & compliance - Message management for Application Development, Test, & Support - Real-time performance monitoring, alerting, and remediation - Business transaction tracking and IT message tracing - AIOps and APM - Automation for CI/CD DevOps - Analytics for root cause analysis & Management Information (MI) - Integration with ITSM/SIEM solutions including ServiceNow, Splunk, & AppDynamics
<urn:uuid:61c2f5de-720b-4df4-b3a0-e1ecf7acaa3a>
CC-MAIN-2022-40
https://www.nastel.com/the-complex-layers-of-cloud-computing-sustainability/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334802.16/warc/CC-MAIN-20220926051040-20220926081040-00761.warc.gz
en
0.929295
1,189
2.734375
3
As a regular internet user, you are probably familiar with terms like IGW, node, AWS VPC, NAT gateway, and internet protocols. Let's start with a small overview of gateways before diving further into the topic.
Gateways are like bus stops or entry gates for the data that is sent through the internet. Whenever users like us communicate, we send data back and forth through the internet to each other, and that data stops at these gateways on its way to or from other networks. Gateways can therefore take a variety of forms, from hardware to software to devices like routers and computers, and they perform various tasks. Gateways often sit at the edge of a network and are commonly combined with firewall software to keep unwanted foreign traffic at bay. In other words, gateways help prevent network congestion and inspect the data passing through them.
What is Internet Gateway?
An Internet gateway is simply a network stop, or technically a "node," that connects two separate networks that use different sets of communication protocols. For home Internet connections, your Internet Service Provider (ISP) usually acts as the internet gateway where all your data makes a stop on its way to the internet; the ISP gives your traffic access to the worldwide Internet through its own network. If you are using a Wi-Fi connection, then your Internet gateway is the modem/router combination, and whichever ISP that device is connected to is the one granting you access through its network to the Internet. If a computer server is your Internet gateway, it will behave as both a firewall and a proxy server; this is usually seen in offices and firms. The firewall is responsible for keeping unwanted traffic and foreign computing devices out of a private network, while the proxy server's duty is to make sure that in-progress online data requests are handled properly by the actual server.
What is NAT Gateway?
Network Address Translation Gateway is abbreviated as NAT Gateway. It allows instances in a private subnet to connect to internet services, or more specifically to Amazon Web Services (AWS), while preventing the internet from initiating a connection with those instances. The NAT Gateway service is completely monitored and managed by Amazon, so not much effort is required from the administrator. To be specific, each NAT gateway is created in a single Availability Zone and implemented with redundancy in that zone, and there is a limit on the number of NAT gateways that can be created in a single Availability Zone. A NAT gateway forwards outbound traffic from instances in the private subnet to the internet or to AWS services, and then sends the responses back to the originating instance. Whenever traffic moves out to the internet or to an AWS connection, the instance's IPv4 address is replaced by the address of the NAT device. Once the response is received, the NAT device's address is translated back to the original IPv4 address before the response is delivered, because the instance only recognizes its own IPv4 address. There are two major kinds of NAT devices offered by AWS: a NAT Gateway and a NAT Instance.
Both are important depending on the workload and requirements, but AWS recommends the NAT Gateway, as it provides higher connection availability as well as better bandwidth.
Comparing AWS NAT Gateway VS Internet Gateway
Internet Gateway (IGW)
An Internet Gateway is a simple, logical connection point between an Amazon Virtual Private Cloud (VPC) and the Internet. It connects the two networks virtually, so it is not a physical device, and it does not constrain the bandwidth of your Internet connectivity. If an Internet Gateway is not present, then the VPC resources are inaccessible from the Internet unless traffic flows through some corporate network and over a VPN connection. The main purpose of segmenting a network, called "subnetting," is to relieve network stress and avoid congestion. But a subnet can only be a Public Subnet if it has a Route Table capable of directing outbound traffic to the Internet Gateway.
AWS NAT Gateway
AWS introduced the NAT Gateway service in place of the NAT Instance, and it is much more efficient. Using a NAT Gateway service has the following pros:
- NAT Gateway is a fully managed service. You just need to create one and it works automatically without failing.
- It has the capability to burst up to 10 Gbps, which is greater than a NAT Instance.
If you don't know how to set up a NAT Gateway in your VPC, start by following these steps (a short scripted sketch of the same steps appears at the end of this article):
- First, check that you have an Internet Gateway route in your routing table.
- Then get the Public Subnet ID where you will deploy your NAT Gateway.
- Now, create the NAT Gateway.
- Test your Internet connectivity with it.
However, some cons of NAT Gateway are:
- You cannot associate Security Groups with a NAT Gateway.
- A NAT Gateway only allows VPC resources in a private subnet to access the internet.
- You will require one in each Availability Zone, because a NAT Gateway only serves its own zone.
Comparing AWS NAT Gateway VS Internet Gateway, it is clear that both have their specific significance. In AWS you have the benefit of designing your own network using a VPC, and you are able to split your network into Public as well as Private Subnets, while an Internet Gateway allows VPC resources to access the internet. But to make this happen, a routing table entry is needed which gives the subnet access to the IGW.
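As promised above, here is a minimal sketch of those setup steps using Python and boto3. The region, subnet ID, and route table ID are hypothetical placeholders; in a real VPC you would substitute your own resource IDs and first confirm that the public subnet's route table already sends 0.0.0.0/0 to an Internet Gateway.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Allocate an Elastic IP address for the NAT gateway.
eip = ec2.allocate_address(Domain="vpc")

# 2. Create the NAT gateway in a *public* subnet (one routed to the IGW).
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0abc1234",            # hypothetical public subnet ID
    AllocationId=eip["AllocationId"],
)
nat_id = nat["NatGateway"]["NatGatewayId"]

# Wait until the NAT gateway is available before using it in a route.
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# 3. Point the *private* subnet's route table at the NAT gateway.
ec2.create_route(
    RouteTableId="rtb-0def5678",           # hypothetical private route table ID
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)
```

After the route is in place, an instance in the private subnet can be used to test outbound Internet connectivity, which corresponds to the final step in the list above.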
<urn:uuid:773a3ba5-9bda-4c11-8232-27f4f7415e17>
CC-MAIN-2022-40
https://internet-access-guide.com/aws-nat-gateway-vs-internet-gateway/
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00761.warc.gz
en
0.932796
1,212
3.109375
3
A new architecture that uses micro-threading in DRAM cores may result in four-times faster performance for gaming applications, among other uses. The development, announced today by Rambus, a technology licensing company that specializes in high-speed chip interfaces, will increase memory subsystem efficiency in 3D graphics, advanced video imaging and network routing and switching. Micro-threading chops data into tiny bits that can be processed more efficiently without draining power. The bits of information can be processed simultaneously. The increased memory efficiency allows for richer graphics rendering. While multi-core processors are becoming the norm, this is a first for DRAM (dynamic random access memory). “It is a unique breakthrough since there is no DRAM product which allows access to several banks at the same time,” Nam Kim, director and principal analyst, Memory ICs/Storage Systems at iSuppli, told TechNewsWorld. “This is more than existing data interleaving solutions.” Data interleaving is a method of arranging data to increase performance. Rambus said micro-threading eliminates the waste of bandwidth that occurs in a single-core operation. “A typical mainstream DRAM provides a larger amount of data than needed by many applications. As a result, large amounts of memory bandwidth are used to deliver a small amount of relevant data,” the company said in a press release. Micro-threading provides only the data needed, and in smaller bits. PCs Don’t Need It While the technology is an advance, it is not expected to find widespread use. “It is a good fit with some network and consumer applications which require higher graphic performance,” said Kim. “However, normal PCs don’t require this technology. Micro-threading will be mainly used in the consumer and wired-network industries.” Rambus quantified the speed difference in its own analysis of 3D applications. It found that standard GDDR SDRAM can deliver between 50 and 125 million triangles per second; GDDR SDRAM enhanced with micro-threading can deliver between 100 and 500 million triangles per second. DRAM controllers connected to the micro-threaded DRAMs will only be able to perform at the faster speeds if they are optimized to do so, Rambus said.
<urn:uuid:f75bf5ab-e88e-47d6-b286-f73a9d80a55b>
CC-MAIN-2022-40
https://www.ecommercetimes.com/story/dram-breakthrough-speeds-3d-rendering-rambus-says-42012.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00761.warc.gz
en
0.923955
487
2.828125
3
The field of healthcare is constantly evolving, yet there are many challenges that doctors and other medical professionals face in treating patients. Advances in Microsoft's HoloLens technology have brought a huge shift in how hospitals, medical clinics, and other modern healthcare setups deliver care to patients.
Major Challenges in Healthcare
Many hospitals and medical clinics still depend on traditional technologies to interact with patients and read paper charts to get an overview of a patient's health condition. In many cases, neurosurgeons face difficulty in getting CT scans of a patient, because the scan causes claustrophobia in some patients due to the noise and the enclosed design of the machine. Doctors also need medical scans and records for detailed analysis, but loading and modifying the electronic medical records of thousands of patients is a cumbersome process. A care team consists of receptionists, nurses, doctors and other professionals who need seamless coordination with each other during a patient's treatment, and it is challenging for them to work as a team while relying on multiple devices for communication and recalling diagnostic charts and other data.
Mixed Reality – Working Wonders in the Healthcare Industry
The new version of Microsoft HoloLens has unveiled cutting-edge features that can impact the healthcare industry to a great extent. Let's look at some of the ways these features are empowering medical professionals and improving the doctor-patient relationship.
Contextual Patient Data Visualization
MR headsets can detect patients and instantly provide relevant medical information to doctors, saving time during interactions and allowing doctors to respond to emergencies quicker than before. Just being able to observe a patient's vital signs without having to read screens or dig through paperwork saves valuable time and allows for more convenient patient interactions. Using MR also enables elderly patients to receive hospital-level care and treatment at home, where they feel more comfortable, while hospitals benefit from freed-up beds for other critically ill patients. The HoloLens app is also helpful to patients who would otherwise travel often for frequent specialist appointments, saving them time and money while still delivering personalized care, and allowing the healthcare facility to devote that time to other critical patients. In general, a mixed reality application enables real-time interaction with medical professionals via hologram technology. It also allows care providers to share information hands-free and record patient data in real time via a virtual dashboard. This advancement combines real life, video conferencing and projected holograms to help nurses and clinicians access information and services as and when needed.
Holographic Surgical Planning
The MR application provides Virtual Surgery Intelligence (VSI) so physicians can show patients their own MRI scans and explain the complexity of a surgical procedure in a visual format. MR apps also help reduce a doctor's response time and improve surgical accuracy, thereby offering an enhanced patient experience. For example, if a patient needs to undergo a complicated surgery, the doctor is able to show diagnostic images to the patient with the help of the VSI feature of mixed reality.
This feature helps the patient and the doctor share the same field of view (FOV). In this way, doctors can discuss, plan, and initiate their treatment procedure, thereby reducing response time in inpatient care.
Onsite/Remote Surgery Assistance
Furthermore, by wearing HoloLens, surgeons can keep their hands free for surgery, as well as use microphones and sensors to communicate with other surgeons in different parts of the world, making collaboration seamless. All these features, including simulations and information extraction, make mixed reality a valuable asset in improving surgical performance.
The Final Say
These are just some of the notable advantages of Microsoft HoloLens that show how heavily the future of healthcare will rely on mixed reality technology. These advancements make it feasible for medical professionals to hone their skills and attend to hundreds of patients virtually without even touching them. In addition, an MR application enables seamless collaboration between physical and digital objects to provide a better quality of treatment and patient experience. To learn more about how to implement a HoloLens app for your healthcare organization, talk to our experts.
<urn:uuid:8ec7728a-54f5-496d-b1c0-fc763d34108a>
CC-MAIN-2022-40
https://www.iotforall.com/virtual-hospital
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336674.94/warc/CC-MAIN-20221001132802-20221001162802-00761.warc.gz
en
0.934074
847
2.625
3
Ten percent of Caucasian American men but less than one percent of women are estimated to have some form of colorblindness. Colorblind people represent a significant but often neglected talent pool and consumer segment. Identifying opportunities to make products usable by as many people as possible, without degrading overall quality or performance, is a quality assurance function that is not always well understood or practiced. Here is a guide for increasing the usability of products and the communication of information. It contains need-to-know information for anyone engaged in software and hardware product design, quality assurance or business communication. We begin with a few simple tips.
Two principal tips for usability are to avoid using color alone as a sole distinguishing element or functional indicator, and to emphasize highly contrasting brightness, hues and saturations in adjacent colors. In emphasizing a hyperlink, for example, not only can a dark blue font be used against a white background, but the hyperlink can be underlined to indicate its function. Avoid using colors together that are distinguishable solely by the amount of red or green in them.
In presenting text, colored letters work best when they are in bold fonts and have high-contrast backgrounds. The easiest text to read is in strong black fonts on white backgrounds. Dark fonts and busy or dark backgrounds do not work well together, nor do light fonts on light backgrounds. Avoid yellow or light green letters on white backgrounds. Information should never be distinguished by color only, but should be combined with bolder fonts, underlining, dashes, italics, or other typographical features. This is particularly important when information is being presented in graphs and maps.
Graphs and Maps
In creating graphs or maps, do not rely on a separate color-coded key outside of the map or graph. It is hard for colorblind people to compare colors that are not immediately adjacent. Instead, distinguish information by labeling lines and other features with text or arrows extending inside the graph or map itself. In maps and graphs, information can be distinguished by letter and number codes, stripes, hatching (cross hatching, stripes, dotted fill lines), triangles, rectangles and other shapes. For example, one trend line in a graph could be marked off with little squares every time the line changes course, whereas a nearby line could be distinguished by the use of dotted lines and the occasional triangle. A third line could be thicker than the first two. Color codes can still be used, but there should be labeling (without an exclusive reliance on color coding) inside the graphic itself. Rather than using red in photos and other graphics, try purple (magenta), which has equal portions of red and blue. A red-green colorblind person will not see the red, but should be able to see the blue. Keep purple away from a blue or black background.
In a North American audience of 250 people that is made up equally of men and women, there will be an average of 10 colorblind people. How can presentations be made that are equally comprehensible by everyone in the audience? When presenting graphics in PowerPoint or other slideshow presentation software, avoid referring to information solely on the basis of color.
For example, instead of saying: “the green box indicates ____,” say “the second box on the left indicates ____.” When using a laser pointer in making a presentation, remember that colorblind people have trouble seeing light from red laser pointers. Light from green laser pointers is easier to see and appears as a white light to many colorblind people.
Red-Green Colorblindness Can Extend to Blues
The most common form of colorblindness involves difficulty distinguishing red and green colors. Eight percent of Caucasian men reportedly have red-green colorblindness, with an additional two percent having more extensive colorblindness. Five percent of Asian men and four percent of African men have red-green colorblindness. If you present a project deliverable to three Caucasian males, there is a 22 percent chance that one of the recipients will be colorblind. Red colorblindness means that red does not appear as a bright or vivid color. During the day, a red traffic light, for example, may appear to be broken or unlit to colorblind drivers. Red-green colorblindness can make it hard to identify green traffic lights at night and for colorblind drivers to distinguish green traffic lights from white street lamps. The impact of red-green colorblindness can be under-recognized. Since red and green are components of many other colors, red and green colorblindness frequently affects the capability to recognize a wide range of colors, especially blues. A colorblind person may be able to identify a color as having blue in it and may be sensitive to subtle distinctions in bluish hues, but will not be able to identify the exact color of blue. The easiest type of red for most colorblind people to see is vermilion. For greens, bluish green is the easiest to see.
High Contrasts Work Best
The safest route to legibility in screen and print layouts is to rely on black fonts and white backgrounds. To place blue or green backgrounds behind black fonts decreases the contrast and degrades readability. The use of red or dark green backgrounds also degrades readability, regardless of font color. Avoid the mistakes that Chevrolet has been making in its recent billboard and print ads, where it intersperses black and red fonts and uses red fonts on top of dark backgrounds. My last three vehicles have all been Chevrolets. I take pride in my Chevrolet and enjoy seeing Chevrolet ads, but when these ads are unreadable it leaves the impression that Chevrolet doesn’t care about its customers. All the major search engines are making mistakes when it comes to maximizing the readability of their sites, including the color schemes chosen for displaying search engine ads. One of the principal mistakes involves the use of light blue and gray colors. Others include the choice of reds. A9 is the most readable of the major search engines. A9 is powered by Microsoft Windows Live search, which at its own site is the least readable for colorblind people. The use of light blue and gray colors, regardless of background, degrades usability and increases eye strain for many colorblind viewers, as with the standard color scheme of the Mozilla Firefox browser. The use of red and green colors on black backgrounds creates images that are difficult or impossible for many colorblind people to distinguish.
In Firefox, there is a high contrast, dark skin or “browser theme” called Pitch Dark that would be attractive for colorblind people, except for the fact that most of the icons have red or green in them, which renders them wholly or partially invisible against the browser skin’s black background. One trick that colorblind people use in these situations is to memorize the positions of icons and to navigate online from memory.
The design of online forms customarily includes tools for identifying entries that have not been filled in correctly. The use of red fonts is a common tool for directing someone back to portions of a form where data needs to be entered correctly before it can be submitted. Even mildly colorblind people can find it impossible to distinguish red letters from black ones, and so will be unable to complete some forms properly or without experiencing delays, frustration and possibly total failure to complete the form’s submission process. Changing a font’s color from black to red as the sole means of directing users to correct data entry errors has been a distinguishing feature of the online recruiting software used by Norman Broadbent, a member of the TranSearch International global recruiting partnership. With the costs of attracting qualified job applicants frequently exceeding thousands of dollars, discriminating against a small pool of otherwise qualified applicants can have significant financial consequences — both for the clients of recruiting firms and for applicants who are adversely affected by discrimination. It should be a source of embarrassment to be found to be arbitrarily creating bars to employment for a class of applicants who would otherwise perform properly. Rather than blame TranSearch International, I would suggest that primary responsibility for eliminating arbitrary discrimination rests with clients that use hiring systems with forms that cannot reasonably be completed by colorblind individuals.
Colorblindness – A Usability Guide for Commercial Applications, Part 2
Anthony Mitchell, an E-Commerce Times columnist, has been involved with the Indian IT industry since 1987, specializing through InternationalStaff.net in offshore process migration, call center program management, turnkey software development and help desk management.
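The graph guidance earlier in this guide (distinguish series by markers, line styles and thickness, and label them inside the chart rather than in a separate color-coded key) can be put into practice directly in charting tools. Here is a small illustrative matplotlib sketch; the data and series names are invented for the example.

```python
import matplotlib.pyplot as plt

months = list(range(1, 13))
product_a = [3, 4, 5, 6, 6, 7, 8, 8, 9, 10, 11, 12]
product_b = [2, 2, 3, 3, 4, 5, 5, 6, 6, 7, 7, 8]
product_c = [1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 5, 5]

fig, ax = plt.subplots()

# Distinguish each series by marker, line style and thickness, not color alone.
ax.plot(months, product_a, marker="s", linestyle="-")
ax.plot(months, product_b, marker="^", linestyle="--")
ax.plot(months, product_c, linestyle=":", linewidth=3)

# Label each line directly inside the plot instead of relying on a color key.
ax.text(12.1, product_a[-1], "Product A")
ax.text(12.1, product_b[-1], "Product B")
ax.text(12.1, product_c[-1], "Product C")

ax.set_xlabel("Month")
ax.set_ylabel("Units sold (thousands)")
plt.show()
```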
<urn:uuid:79d2c5da-752e-4bca-87a4-e18ba93a30fd>
CC-MAIN-2022-40
https://www.linuxinsider.com/story/colorblindness-a-usability-guide-for-commercial-applications-part-1-56106.html
null
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337360.41/warc/CC-MAIN-20221002212623-20221003002623-00761.warc.gz
en
0.919767
1,746
3.359375
3
This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI. To human observers, the two images in the well-known panda example appear identical. But researchers at Google showed in 2015 that a popular image classification algorithm labeled the left image as “panda” and the right one as “gibbon.” And oddly enough, it had more confidence in the gibbon image. The algorithm in question was GoogLeNet, a convolutional neural network architecture that won the 2014 ImageNet Large Scale Visual Recognition Challenge (ILSVRC 2014). The right image is an “adversarial example.” It has undergone subtle manipulations that go unnoticed by the human eye while making it a totally different sight to the digital eye of a machine learning algorithm.
Adversarial examples exploit the way artificial intelligence algorithms work in order to disrupt their behavior. In the past few years, adversarial machine learning has become an active area of research as the role of AI continues to grow in many of the applications we use. There’s growing concern that vulnerabilities in machine learning systems can be exploited for malicious purposes. Work on adversarial machine learning has yielded results that range from the funny, benign, and embarrassing—such as a turtle being mistaken for a rifle—to potentially harmful examples, such as a self-driving car mistaking a stop sign for a speed limit sign.
How machine learning “sees” the world
Before we get to how adversarial examples work, we must first understand how machine learning algorithms parse images and videos. Consider an image classifier AI, like the one mentioned at the beginning of this article. Before being able to perform its functions, the machine learning model goes through a “training” phase, where it is provided many images along with their corresponding labels (e.g., panda, cat, dog, etc.). The model examines the pixels in the images and tunes its many inner parameters to be able to link each image with its associated label. After training, the model should be able to examine images it hasn’t seen before and link them to their proper labels. Basically, you can think of a machine learning model as a mathematical function that takes pixel values as input and outputs the label of the image.
Artificial neural networks, a type of machine learning algorithm, are especially well-suited for dealing with messy and unstructured data such as images, sound, and text documents because they contain many parameters and can flexibly adjust themselves to different patterns in their training data. When stacked on top of each other, ANNs become “deep neural networks,” and their capacity for classification and prediction tasks increases. Deep learning, the branch of machine learning that uses deep neural networks, is currently the bleeding edge of artificial intelligence. Deep learning algorithms often match—and sometimes outperform—humans at tasks that were previously off-limits for computers, such as computer vision and natural language processing.
It is worth noting, however, that deep learning and machine learning algorithms are, at their core, number-crunching machines. They can find subtle and intricate patterns in pixel values, word sequences, and sound waves, but they don’t see the world as humans do. And this is where adversarial examples enter the picture.
How adversarial examples work

When you ask a human to describe how she detects a panda in an image, she might look for physical characteristics such as round ears, black patches around the eyes, the snout, and the furry skin. She might also give other context, such as the kind of habitat she would expect to see the panda in and the kinds of poses a panda takes. To an artificial neural network, as long as running the pixel values through its equations produces the right answer, it is convinced that it is indeed seeing a panda. In other words, by tweaking the pixel values in the right way, you can fool the AI into thinking it is not seeing a panda.

In the case of the adversarial example you saw at the beginning of this article, the researchers added a layer of noise to the image. This noise is barely perceptible to the human eye, but when the new pixel values go through the neural network, they produce the result the network would expect from an image of a gibbon.

Creating adversarial machine learning examples is a trial-and-error process. Many image classifier models provide a list of outputs along with their confidence levels (e.g., panda=90%, gibbon=50%, black bear=15%, etc.). Creating adversarial examples involves making small adjustments to the image's pixels and rerunning it through the model to see how the modifications affect the confidence scores. With enough tweaking, you can create a noise map that lowers the confidence in one class and raises it in another, and the process can often be automated (a minimal sketch of such a search loop appears at the end of this article).

In the past few years, there has been extensive work on the workings and effects of adversarial machine learning. In 2016, researchers at Carnegie Mellon University showed that wearing special glasses could fool facial recognition neural networks into mistaking the wearers for celebrities. In another case, researchers at Samsung, the Universities of Washington and Michigan, and UC Berkeley showed that small tweaks to stop signs could make them invisible to the computer vision algorithms of self-driving cars. A hacker might use such an adversarial attack to force a self-driving car to behave in dangerous ways and possibly cause an accident.

Adversarial examples beyond images

Adversarial examples do not just apply to neural networks that process visual data. There is also research on adversarial machine learning against text and audio models. In 2018, researchers at UC Berkeley managed to manipulate the behavior of an automatic speech recognition (ASR) system with adversarial examples. Smart assistants such as Amazon Alexa, Apple Siri, and Microsoft Cortana use ASR to parse voice commands. For instance, a song posted on YouTube can be modified so that playing it sends a voice command to a nearby smart speaker. A human listener wouldn't notice the change, but the smart assistant's machine learning algorithm would pick up the hidden command and execute it.

Adversarial examples also apply to natural language processing systems that work on text documents, such as the machine learning algorithms that filter spam emails, block hateful speech on social media, and detect sentiment in product reviews. In 2019, scientists at IBM Research, Amazon, and the University of Texas created adversarial examples that could fool text classifiers such as spam filters and sentiment detectors.
Text-based adversarial examples, also known as "paraphrasing attacks," modify the sequence of words in a piece of text to cause a misclassification error in the machine learning algorithm while maintaining coherent meaning to a human reader.

Protection against adversarial examples

One of the main ways to protect machine learning models against adversarial examples is "adversarial training." In adversarial training, the engineers of the machine learning algorithm retrain their models on adversarial examples to make them robust against perturbations in the data. But adversarial training is a slow and expensive process: every training example must be probed for adversarial weaknesses, and then the model must be retrained on all of those examples. Scientists are developing methods to optimize the process of discovering and patching adversarial weaknesses in machine learning models.

At the same time, AI researchers are looking for ways to address adversarial vulnerabilities in deep learning systems at a higher level. One method involves combining parallel neural networks and switching between them randomly to make the model more robust to adversarial attacks. Another involves building a generalized neural network from several other networks; generalized architectures are less likely to be fooled by adversarial examples.

Adversarial examples are a stark reminder of how different artificial intelligence and the human mind are.
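As a closing illustration, the trial-and-error process described earlier can be sketched as a simple black-box search: repeatedly add tiny random pixel tweaks and keep only those that raise the model's confidence in the target class. The sketch below builds on the toy `classify` function from the first code listing; the names and parameters are illustrative assumptions, and real attacks (such as the fast gradient sign method) use gradients and are far more efficient than this random search.

```python
import numpy as np

def find_adversarial(image, classify, target_class, eps=2, steps=2000, seed=0):
    """Greedy black-box search for an adversarial example.

    Adds small random pixel tweaks (at most +/-eps per step) and keeps a tweak
    only if it raises the model's confidence in `target_class`.
    """
    rng = np.random.default_rng(seed)
    adv = image.astype(np.int64).copy()
    best = classify(adv)[target_class]
    for _ in range(steps):
        noise = rng.integers(-eps, eps + 1, size=adv.shape)
        candidate = np.clip(adv + noise, 0, 255)        # stay a valid image
        score = classify(candidate)[target_class]
        if score > best:                                 # keep only helpful tweaks
            adv, best = candidate, score
        if classify(adv).argmax() == target_class:       # the model is now fooled
            break
    return adv
```

Adversarial training, described above, would then feed examples produced this way back into the training set so the model learns to resist them.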
A scientist whose discoveries helped shape the storage technology field is the winner of the 2014 Millennium Technology Prize, worth more than $1 million. Dr. Stuart Parkin was recognized last week by Technology Academy Finland for developing the spin-valve read head, an essential component in the development of larger-capacity hard drives over the last 25 years. Parkin is a research fellow and manager of IBM's Magnetoelectronics group in California, and a consulting professor at Stanford University.

"Who would have known that my invention would one day sit at the heart of today's cloud, social media and data analytics applications, and affect the way people share information and communicate with each other on the Internet, on our mobile devices and across the world," Parkin said.

In a hard drive, the read and write heads are mounted on an armature that moves over the surface of the spinning disks. The heads pick up and put down bits of data that are stored electromagnetically on the surface of the disks. The spin-valve read head, developed by Parkin in 1989, is more sensitive than its predecessors and allows data to be stored at higher densities, greatly increasing the capacity of multiple generations of hard drives over the decades.
A lot of information security leaks are attributed to insider threats. Just look at Wikileaks and the US government: an Army private copied thousands of documents onto a CD and walked out the door with them. This may be an extreme case, but it is becoming more common.

A lot of data and document security issues are due to the oops factor; we cause them accidentally or unknowingly. Most of us don't willingly compromise our company's or our own security, but sometimes it happens because of a lack of knowledge. I think a lot of this can be traced to insufficient training. Most companies just assume that everyone knows how to operate a computer safely. If a new CRM system comes in, everyone gets trained on it. What about email, Microsoft Word, Adobe Acrobat, instant messenger and a browser? What about basic computer security? Most people are just expected to figure it out on their own.

How many times have you accidentally hit a key or clicked and something happened that you didn't expect? And when it did, you weren't sure how to undo it? This gets worse with all the malware out there. Accidentally clicking on an email link that takes you to a malware-infected website could be devastating. A few years ago there was the case of the Connecticut teacher whose computer was displaying porn-site popups in the classroom. Spyware or malware was most likely causing the problem, but the poor teacher didn't know how to stop it. Most of us have heard we shouldn't click on suspicious links, but how many of us have had formal training on it?

With all the consumer electronics available, everyone assumes that everyone knows how to use technology. Just because you can use an iPod doesn't mean you have a clue about computer security. Most companies give employees a security or ethics policy saying "Thou shalt not send anything confidential to anyone . . . blah, blah, blah." We read it and are supposed to follow it. But most of us aren't trained on the technology we use every day that makes it very easy to violate these policies. Most of us don't think about it. We assume IT has that covered.

And how about security on mobile devices? We all love our Blackberries, iPads and USB drives, but how many of us know how to secure them against data loss? Think about the MFP or printer in your office. Technology can solve many problems by preventing viruses, malware and hackers from wreaking havoc, but until users are trained to operate their computers securely, we will still have problems. Knowing what your users know (or don't know) can help you prevent another oops.

To quote Pogo, "We have met the enemy and he is us." Are your people trained on computer security?
As robust as your organization's security program, policies, and toolsets may be, there is still one vulnerability that even the most secure IT department cannot control – your employees. Cybercriminals rely on the fact that many employees are inadequately trained and underestimate the risks of phishing scams and ransomware. With over 53,000 security incidents and more than 2,000 data breaches in the past year, companies need to start providing security awareness training to employees at all levels.

What is Security Awareness Training (SAT)?

Security awareness training is a company-specific education program focused on decreasing cybersecurity risk by making end users more secure in their use of technology. SAT helps to reduce issues caused by user error, misconfiguration, and mismanagement by educating users on the most common social engineering methods used by cybercriminals, and by clearly communicating and reinforcing the company's policies on data privacy. There are several ways you can conduct security awareness training:

- Classroom training: In-person training that allows employees to ask questions in real time
- Online training: Remote training from any location that allows employees to learn and work at their own pace
- Phishing campaigns: A single or recurring set of simulated tests that help identify the most vulnerable users, who then receive additional training if they fall for the "phish"

Security awareness training also covers methods for detecting and reporting phishing and includes education on a variety of other cybersecurity topics, including physical security, desktop and laptop security, wireless networking, password security, and malware. To ensure that employees both understand and abide by company policies, organizations should customize their training based on an employee's role so that the content is relevant to the employee and the work they do.

How Do I Develop a Plan for Security Awareness Training?

The first step in developing effective security awareness training is establishing a comprehensive set of cybersecurity policies for your company. These policies should be clear, concise, enforceable, and based on the varying roles in the organization. They should also be developed with input and consensus from upper management and reflect current business requirements. If your company operates in a regulated industry such as healthcare or financial services, you will want to incorporate compliance requirements into the development of the training and determine what training is necessary to meet them. It is also important to show employees real-world examples of cyberattacks and spell out precisely what to do if they fall victim to one.

As mentioned above, there are different types of training your organization can use. It is best to deliver training through a mixture of methods rather than only one. Email campaigns keep employees sharp, and web resources are useful to have on hand when needed. In-person meetings give employees opportunities to ask questions and ensure you have their full attention. The best way to start your security awareness training plan is to conduct an annual training program for existing employees and mandatory training for new hires.

How Do I Know If My Training Is Effective?

The best way to know whether your training is effective is to conduct pre- and post-testing of the training content.
Testing identifies which information needs to be reinforced and helps ensure that employees retain it. Sending out random simulated phishing emails, or looking for exposed passwords and unlocked computers around the office, also helps determine how effective your training is. It is crucial to track who completes the training and how much time they spend on it, and then to measure the impact it has on incidents. If employees fail to complete training or fail tests, that confirms they need either further training or an in-person meeting.

If your organization's security awareness training is effective, you will begin to see a drop in the number and severity of security incidents. If you do not see an improvement, you should revisit your training materials and adjust the approach. It is also important to incorporate new threats as they emerge and work them into the training so that staff are equipped with the proper knowledge at all times.

Still curious how security awareness training could help your organization reduce its cybersecurity risk? Call us today for a free consultation at (305) 278-7100.

With over 30 years of experience as an IT professional, I lead our technical engineering and administrative staff in the delivery of our complete suite of IT services, which includes our award-winning Help Desk, Network Operations Center (NOC), and On-Site Field Service team. As the senior technology expert, I also provide oversight and technical support for our Palmetto Bay Village Center co-location and hosting facility.
The National Cancer Institute is funding pilot computer clouds that will help researchers crack cancer's genetic code.

Computer clouds have been credited with making the workplace more efficient and giving consumers anytime-anywhere access to emails, photos, documents and music, as well as helping companies crunch through masses of data to gain business intelligence. Now it looks like the cloud might help cure cancer too. The National Cancer Institute plans to sponsor three pilot computer clouds filled with genomic cancer information that researchers across the country will be able to access remotely and mine for information.

The program is based on a simple revelation, George Komatsoulis, interim director and chief information officer of the National Cancer Institute's Center for Biomedical Informatics and Information Technology, told Nextgov. It turns out the gross physiological characteristics we typically use to describe cancer -- a tumor's size and its location in the body -- often say less about the disease's true character and the best course of treatment than genomic data buried deep in the cancer's DNA. That's sort of like saying you're probably more similar to your cousin than to your neighbor, even though you live in New York and your cousin lives in New Delhi. It means treatments designed for one cancer site might be useful for certain tumors at a different site but, in most cases, we don't know enough about those tumors' genetic similarities yet to make that call.

The largest barrier to gaining that information isn't medical but technical, said Komatsoulis, who is leading the cancer institute's cloud initiative. The National Cancer Institute is part of the National Institutes of Health. The largest source of data about cancer genetics, the institute's Cancer Genome Atlas, contains half a petabyte of information now, he said, or the equivalent of about 5 billion pages of text. Only a handful of research institutions can afford to store that amount of information on their servers, let alone manipulate and analyze it. By 2014, officials expect the atlas to contain 2.5 petabytes of genomic data drawn from 11,000 patients. Just storing and securing that information would cost an institution $2 million per year, presuming the researchers already had enough storage space to fit it in, Komatsoulis told a meeting of the institute's board of advisers in June. To download all that data at 10 gigabits per second would take 23 days, he said. If five or 10 institutions wanted to share the data, download speeds would be even slower; it could take longer than six months to share all the information.

That's where computer clouds -- the massive banks of computer servers that can pack information more tightly than most conventional data centers and make it available remotely over the Internet -- come in. If the genomic information contained inside the atlas could be stored inside a cloud, he said, researchers across the world would be able to access and study it from the comfort of their offices. That would provide significant cost savings for researchers. More importantly, he said, it would democratize cancer genomics.
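Circling back to the storage and transfer figures above for a moment: they are easy to sanity-check with a back-of-envelope calculation. The short sketch below assumes the 2.5-petabyte projection and a sustained 10-gigabit-per-second link, and ignores protocol overhead; it is an illustration, not a figure from the institute.

```python
# Rough transfer-time estimate for the projected Cancer Genome Atlas data.
data_bytes = 2.5e15                    # 2.5 petabytes
link_bits_per_second = 10e9            # 10 gigabits per second, sustained
seconds = data_bytes * 8 / link_bits_per_second
print(f"{seconds / 86400:.1f} days")   # roughly 23 days
```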
"As one reviewer from our board of scientific advisers put it, this means a smart graduate student someplace will be able to develop some new, interesting analytic software to mine this information and they'll be able to do it in a reasonable time frame," Komatsoulis said, "and without requiring millions of dollars of investment in commodity information technology."

It's not clear where all this genomic information will ultimately end up. If one or more of the pilots proves successful, a private-sector cloud vendor may be interested in storing the information and making it available to researchers on a fee-for-service basis, Komatsoulis said. This is essentially what Amazon has done for basic genetic information captured by the international Thousand Genomes Project. A private-sector cloud provider will have to be convinced that there is a substantial enough market for genomic cancer information to make storing the data worth its while, Komatsoulis said. The vendor will also have to adhere to rigorous privacy standards, he said, because all the genomic data was donated by patients who were promised confidentiality.

One or more genomic cancer clouds may also be managed by university consortiums, he said, and it's possible the government may have an ongoing role. The cancer institute is seeking public input on the cloud through the crowdsourcing website IdeaScale. The University of Chicago has already launched a cancer cloud to store some of that information; it's not clear yet whether the university will apply to be one of the institute's pilot clouds. Because the types of data and the tools used to mine them differ so greatly, it's likely there will have to be at least two cancer clouds after the pilot phase is complete, Komatsoulis said. As genomic research into other diseases progresses, it's possible that information could be integrated into the cancer clouds as well, he said.

"Cancer research is on the bleeding edge of really large-scale data generation," he said. "So, as a practical matter, cancer researchers happen to be the first group to hit the point where we need to change the paradigm by which we do computational analysis on this data . . . But much of the data that I think we're going to incorporate will be the same or similar as in other diseases."

As scientists' ability to sequence and understand genes improves, genome sequencing may one day become part of standard care for patients diagnosed with cancer, heart problems and other diseases with a genetic component, Komatsoulis said. "As we learn more about the molecular basis of diseases, there's every reason to believe that in the future if you present with a cancer, the tumor will be sequenced and compared against known mutations, and that will drive your physician's treatment decisions," he explained. "This is a very forward-looking model but, at some level, the purpose of things like The Cancer Genome Atlas is to develop a knowledge base so that kind of a future is possible."
Cybersecurity: The Children's Guide to Being Safe Online was written to teach children how to protect themselves from cybercriminals. It also teaches them how to act responsibly and make smart decisions online by being proactive and safe. This is a timely and much-needed resource for children of all ages.

Caryn Warren is a powerful and influential leader in the information technology industry. She is the founder of Innovators of Tomorrow, an organization that provides training in various IT concentrations, including coding, cybersecurity, and robotics, to underrepresented communities in STEM fields. With over 20 years of IT experience, she currently manages and supports Oracle Databases.

Today, we're talking with Caryn Warren about her book Cybersecurity: The Children's Guide to Being Safe Online. Visit Caryn Warren's website: https://carynwarren.com

About the field
How did you get into cybersecurity?

About the context of the book
What made you write this book? Where does the inspiration / motivation come from? Who is this book for?

About the content of the book
What are the main questions you answer with your book? Some parents find it difficult to address cybersecurity issues with their children; what are your top 3 tips for them?

About the style of the book
Is it fair to describe your book as a practical cybersecurity guide for parents and kids?

Review: "Excellent book for all ages!!! Simple words for all to understand. My son enjoyed it very much and wants to read it again!"
Russia's Federal Security Service, or FSB, recently reported that it found a cyberspying virus in the computer networks of more than 20 state authorities and defense contractors. The claim that malware has infected various government and defense organizations, published last month by Russia's official TASS news agency, came in the midst of a flurry of accusations that Russia has engaged in cyberattacks against U.S. targets in an effort to influence the presidential election.

The Federal Security Service said it had uncovered cyberespionage malware in the computer networks of about 20 organizations in Russia. The attack was aimed at the information resources of state authorities, scientific and defense companies, the defense industry, and other infrastructure operations, the organization said. The malware was targeted -- a virus that was professionally planned, created and spread, TASS reported. Based on an analysis of the programming style, file names, parameters of use and other factors, the virus was similar to software used in a previous high-profile cyberspying incident discovered within the Russian Federation and around the globe, TASS reported.

New sets of the malware are made individually for every target, taking into account the unique features of the attacked machines, according to the TASS report. The virus is spread through electronic messages that contain a malicious attachment. After the software gets inside a computer system, it launches modules that allow it to intercept network traffic, eavesdrop on it and capture screenshots. It can turn on Web cameras and microphones inside a computer, copy audio and video files, and record keystrokes. The FSB is working with various ministries and authorities to identify all of the targets in the Russian Federation and to minimize the impact of the attack, according to the report.

Kindly Shut Up

The malware infections at various government and defense organizations come at a time when the U.S. and Russia are embroiled in a high-profile cyberdebate. Russian hackers linked to the country's intelligence services have in recent months been implicated in cyberattacks on the computer systems of the Democratic National Committee, the Hillary Clinton presidential campaign, and other political and government organizations. Russian officials vehemently denied any link to the attacks, and the FBI has not attributed them to any specific organizations.

"I do not have any additional information with regard to the reported recent cybersecurity breach in some organizations in Russia," said Russian Embassy spokesperson Yuri Melnik. "I believe that all related comments, if any, will be issued by relevant authorities in Russia," he told TechNewsWorld. "The investigation is ongoing," Melnik said, and he asked that we "kindly refrain from groundless allegations about the origins of the breach."

The FBI last month launched a probe into Wikileaks' online publication of information stolen from the Democratic National Committee, some of which appeared damaging to the Democratic Party. Cyberspying is considered standard practice among nations, noted Martin Libicki, adjunct senior management scientist at Rand. "The primary objection to what the Russians did was not that they broke into the DNC -- it is that they released the information they took, presumably for the purpose of influencing the U.S. election," he told TechNewsWorld.
The concern about the breach of related systems, including the Clinton campaign and the Democratic Congressional Campaign Committee, was that the information obtained from those organizations would be used to exercise untoward influence, Libicki suggested. There is growing concern in the U.S. that Russia may use its capabilities to influence electronic voting systems, which would “attack the integrity of the U.S. elections process,” he added. Although cyberattacks may have targeted the Russian Federation, that would not necessarily mean the U.S. was behind them. Even if it were, that would not necessarily mean that the information obtained would be used for anything more than intelligence purposes.
What is VMsafe?

VMsafe is a new security technology. What it means is that software security vendors will partner with VMware to develop custom virtual appliances used for protecting virtual machines on a VMware ESX server. Each virtual appliance will be digitally signed and confirmed by VMware. Only trusted virtual appliances will get the privileges of the VMsafe technology, so even though VMsafe solutions are virtual machines, not every virtual machine can become a VMsafe solution – it needs to come from a trusted partner and go through verification and certification procedures before customers can use it.

How does VMsafe work?

Like I said, each VMsafe solution is a virtual appliance with elevated access. Essentially, it will have access to all the key functional areas of all the VMs on the ESX server, monitoring memory and CPU, virtual network adapters and storage. The VMsafe appliance has visibility into all the memory pages of every VM, and has the functionality to prevent a security breach at the memory-page or CPU-instruction level. Network packets are also analyzed on the fly, and the same kind of dynamic analysis is applied to all the storage available to a given VM.

The main advantage of the VMsafe approach is that it is a security solution which resides outside of any virtual machine. Sitting above all the VMs (or beside them, should I say), a VMsafe appliance gets unsurpassed flexibility and achieves a level of security that simply was not possible before: any malware or virus which traditionally tries to detect and disable an anti-virus solution on your OS will be left unaware of the fact that the VM is being monitored at all.

Key benefits of VMsafe

These are a few:

Isolation – VMsafe security solutions reside in their own VM, which makes it impossible for malware running in any of the protected VMs to compromise the security appliance.

Correlation – having direct access to most of the functional areas of all the VMs allows for deeper and better correlation between security threats – VMsafe appliances will be able to detect threats earlier and correctly recognize their scope (when several VMs are under the same attack, for example, this should be detected as a single threat spanning those VMs).

Scalability – being a tightly integrated part of the virtual infrastructure, VMsafe appliances will allow for easier and more effective scaling, resulting in flexible and scalable protection of large virtual infrastructures.
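To illustrate the out-of-guest inspection idea, here is a purely conceptual sketch of a security appliance that scans guest memory pages from outside the VMs. Every class, method, and signature below is invented for illustration and does not correspond to the real VMsafe interfaces, which are not shown in this post.

```python
# Conceptual illustration only -- not the VMsafe API.
SIGNATURES = [b"\x90\x90\xcc\xcc", b"EVIL_SHELLCODE"]   # toy byte patterns

class IntrospectionAppliance:
    """Hypothetical security appliance running outside the guest VMs."""

    def __init__(self, hypervisor):
        self.hypervisor = hypervisor    # assumed handle exposing per-VM state

    def scan_vm(self, vm):
        # Inspect guest memory pages without any agent inside the VM,
        # so in-guest malware cannot see or disable the scanner.
        for page in self.hypervisor.memory_pages(vm):
            for sig in SIGNATURES:
                if sig in page.data:
                    self.hypervisor.quarantine(vm)
                    return f"threat found in {vm.name}"
        return f"{vm.name} clean"

    def scan_all(self):
        # Correlate findings across every VM on the host.
        return [self.scan_vm(vm) for vm in self.hypervisor.vms()]
```

The point of the sketch is the design choice, not the details: because the scanner lives beside the guests rather than inside them, compromising a guest does not compromise the scanner.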
Graph Neural Networks in AllegroGraph

Enterprises are sold on the power of modeling data as a graph and the importance of using Knowledge Graphs for customer 360 and beyond. The ability to explain the results of AI models, and to produce consistent results from them, involves modeling real-world events with the adaptive schema that Knowledge Graphs consistently provide. Probably the most important reason for building Knowledge Graphs has been to answer the age-old question: "What is going to happen next?" Given the data, relationships, and timelines we know about a customer, patient, product, etc. (the "Entity of Interest"), how can we confidently predict the most likely next event? For example, in healthcare, what is the outcome for this patient given the sequence of previous diseases, medications, and procedures? For manufacturers, what is going to require repair next in this aircraft, or at some other point in the supply chain?

Machine learning and, more recently, Graph Neural Networks (GNNs) have emerged as mature AI approaches used by companies for Knowledge Graph enrichment. GNNs enhance neural network methods by processing graph data through rounds of message passing; as a result, each node learns about its own features as well as those of its neighbors. This creates an even more accurate representation of the entire graph network.

In this presentation we describe how to use graph embeddings and recurrent neural networks to predict events via Graph Neural Networks. We will also demonstrate creating a GNN in the context of a Knowledge Graph for building event predictions.

For more info – https://github.com/franzinc/agraph-examples
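As a rough illustration of the message-passing idea described above, the sketch below implements a single, untrained graph-convolution-style layer in NumPy: each node's new feature vector is an average of its neighbors' (and its own) features pushed through a weight matrix. This is a generic GNN layer for illustration, not AllegroGraph's implementation; the graph, feature sizes, and random weights are all assumptions.

```python
import numpy as np

def message_passing_layer(adjacency: np.ndarray, features: np.ndarray,
                          weights: np.ndarray) -> np.ndarray:
    """One round of message passing: aggregate neighbor features, then transform.

    adjacency: (n, n) 0/1 matrix of edges
    features:  (n, d_in) per-node feature vectors
    weights:   (d_in, d_out) learnable projection (random here)
    """
    a_hat = adjacency + np.eye(adjacency.shape[0])        # include self-loops
    degree = a_hat.sum(axis=1, keepdims=True)
    messages = (a_hat @ features) / degree                # mean of neighbor features
    return np.maximum(messages @ weights, 0.0)            # linear transform + ReLU

# Tiny example: 4 nodes in a chain, 3 input features, 2 output features.
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
x = rng.normal(size=(4, 3))
w = rng.normal(size=(3, 2))
print(message_passing_layer(adj, x, w))   # each node now "knows about" its neighbors
```

Stacking several such rounds lets information propagate further across the graph, which is what allows a GNN-enriched Knowledge Graph to support next-event prediction.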
Phishing, or the act of fraudulently obtaining sensitive personal information, is one of the most common cyberattacks. Recently, there has been a rise in phishing attacks that target only wealthy individuals – specifically, those with an annual income over $100,000. In this article, we'll explore why this type of phishing attack is effective and what you can do to avoid being targeted.

The Phishing Schemes

The three most popular phishing schemes are the "Targeted" scheme, the "High Net Worth" scheme, and the "CEO" scheme. Each of these schemes targets a different type of victim. The Targeted scheme generally targets lower-income individuals, who are more likely to be less suspicious of email solicitations. The High Net Worth scheme targets wealthier individuals, who have more money to lose and are therefore more attractive to scammers. The CEO scheme specifically targets top executives in corporations, who are often more focused on their work than on protecting themselves from scams.

There are a variety of phishing attacks that send phishing messages only to wealthy individuals. One common type is an email that appears to be from a legitimate company or individual and requests your personal information, such as your account number or password. Another is a phone-call phishing scam, in which criminals call you and try to convince you to give them your personal information.

The victims of these attacks are typically wealthy individuals. The primary goal of a phishing attack is to steal personal information, such as account passwords and credit card numbers, and wealthy individuals are attractive targets because they are more likely to have access to valuable accounts and sensitive information.

Phishing attacks typically target people who are vulnerable because of their wealth or other factors. For example, attackers might send messages that appear to be from a legitimate company or organization, and the victim is then persuaded to enter personal information, such as login credentials, into a form or reply. To avoid becoming a victim, always use caution with email, especially from an unknown source. Don't trust everything you see online – verify the legitimacy of any contact or message before giving out any personal information.

The methodology of this particular phishing attack is to send phishing messages only to wealthy individuals, which lets the attacker target people who are more likely to have money saved up. The first step is to find a list of wealthy individuals, for example by researching social media profiles or looking through databases of people with high net worth. Once a list has been compiled, the next step is to create fake emails that look like they come from legitimate companies or organizations and that request sensitive information, such as login credentials or financial details. Finally, the phishing emails are sent out to the list of wealthy individuals. This allows the attacker to obtain valuable information without risking much backlash from the victims. By targeting only those who are likely to have extra money saved up, this method is much more successful than ordinary phishing.
The article discusses the results of a study that looked at which variation of a phishing attack sends phishing messages only to wealthy individuals. The study found that spoofed email domains and messages personalized to the target user were the most successful in landing phishing attacks on high-value targets.

Who Is Most Likely to Be Targeted by Phishing?

According to a study by the Ponemon Institute, affluent individuals are more likely to be targeted by phishing attacks than those who are not as well off. The study found that 60 percent of all phishing attacks target individuals with an annual income of over $100,000, compared with 38 percent that target individuals with an annual income of less than $25,000.

There are several reasons why wealthier individuals are more likely to be targeted. First, they tend to have more online identities and access to more personal information. Second, they may be more likely to invest their money in securities or other high-value investments that are vulnerable to fraud. Finally, they may be more likely to use online banking and other online services that are susceptible to cyberattack.

If you are a wealthy individual who is concerned about being targeted by a phishing attack, take steps to protect yourself. Always use strong passwords and encryption software when accessing your online accounts, and never give out your personal information unless you are sure you trust the person you are speaking with.

Why Are Wealthy Individuals More Susceptible to Phishing Attacks?

Phishing attacks are commonly conducted against individuals who possess wealth or assets, because these individuals may be more likely to fall for a convincing phishing email that steals their personal information. They may also have less reliable security measures in place, making them more susceptible to other types of cyberattacks.

How Do Wealthy Individuals Get Phished?

The phishing attack variation that targets only wealthy individuals is sometimes called the "millionaires' phishing" attack. It is based on the fact that many wealthy individuals hold accounts with high-value assets such as stocks, bonds, and real estate, and these accounts are tempting targets for cybercriminals looking to steal money or login credentials. Some key differences between phishing attacks aimed at ordinary people and those aimed at wealthy individuals are:

- Wealthy individuals are more likely to have accounts with high-value assets, so they are more likely to be targeted in a millionaires' phishing attack.
- Wealthy individuals may be more likely to be careless about their online security, which can make them more vulnerable to a millionaires' phishing attack.
- Wealthy individuals may also be more likely to believe the messages they receive in a millionaires' phishing attack, because they are more likely to trust people they know well.

Phishing attacks are becoming more and more sophisticated, with attackers targeting high-value individuals and sending them phishing messages that appear to come from trusted sources. This variation of a phishing attack is known as "whaling," and it relies on fooling the target into thinking they are receiving a legitimate message from a trusted source. By targeting wealthy individuals, attackers can gain access to more financial information than they could from average users, which could help them steal money or other valuable assets.