New generation of rover tech wows NASA, public with Mars images

Long gone are the days when space missions were conducted by radio waves and onboard computer memory was limited to storing merely a few hundred words. Space exploration technology has evolved to the point that NASA can now provide near-real-time 3-D views of neighboring planets via the Internet.

Since Jan. 4, NASA has treated billions of online viewers to spectacular images of Mars, courtesy of Spirit, the latest Mars Exploration Rover to land on the Red Planet. And with a second rover, Opportunity, in flight to Mars and expected to land near the planet's south pole Jan. 24, NASA plans to keep the images flowing.

"It's going far, far better than I had hoped," said Mark Adler, mission manager for the Mars Exploration Rovers program at the Jet Propulsion Laboratory in California. "Everything's going really well."

After flying more than 300 million miles and navigating a tricky landing, Spirit has used its two scientific panoramic cameras to transmit some of the best images ever captured of Mars. Donna Shirley, a former manager of the rovers program who worked on the 1997 Pathfinder mission, marvels at the ability of Spirit's technology to generate high-resolution images.

Spirit "has a camera so good it's like being there yourself with 20/20 vision," said Shirley, who is the director of Experience Science Fiction, a Seattle museum opening this summer.

The onboard technological force behind Spirit is VxWorks, a real-time operating system made by Wind River Systems Inc. This system controls all mission-critical tasks, such as trajectory, descent and ground operations control, data collection, and Mars/Earth communication relay. Spirit's operating system is embedded in an older IBM Corp. RAD6000 computer that Adler said resembles a desktop model from the mid-1990s. Running at 20 MHz and radiation-hardened to survive the mission, the computer contains 128MB of memory, with 256MB of flash memory for storing images.

This onboard computer is nearly identical to the one on the Pathfinder rover, Adler said. Spirit was designed to maximize the capabilities of this system and was given additional memory for storing images. Although it is not considered a state-of-the-art onboard system, it is still a vast improvement over past NASA missions. Shirley said the 1973 Mariner 10 Venus/Mercury mission had a flight control system limited to 512 words of onboard memory, and that Pathfinder was the first Mars rover with a hazard-avoidance system.

NASA has also benefited from technological advances in the ground systems used by mission control. Engineers operating Sun Microsystems Inc. workstations running Solaris software can process larger amounts of data than ever before, Adler said. The high-speed processors allow NASA officials to ensure that software programs for Spirit work before the rover rolls off the lander: all systems are validated on the ground before being sent up to Spirit. To accomplish this task, officials test full-scale duplicates of the rovers and use computer simulation programs.
Going in for a checkup with your physician? You'll probably see a screen, too.

Scores of filing cabinets containing thousands of patient medical records are disappearing into the cloud. Use of electronic health records systems in doctors' offices has doubled in recent years, according to a new report released Tuesday by the Centers for Disease Control and Prevention. In 2012, 72 percent of office-based physicians reported using electronic health records, up from 35 percent in 2007, the CDC says.

The report finds that adoption of electronic health records was higher among younger physicians than older physicians, among primary-care physicians rather than specialty doctors, and among larger practices than smaller ones.

This digital revolution among doctors is driven in part by the stimulus bill, which created a system of incentive payments to Medicare and Medicaid physicians who use electronic health records to improve patient care. While there are plenty of anecdotes about patients irritated by their doctors looking at a screen during an appointment, early evidence shows that using electronic health records can improve health outcomes. Online systems can remind physicians when patients are due for vaccinations and prescription refills, as well as offer a complete snapshot of the patient's health history so that doctors can make more informed decisions about treatment.

The Office of the National Coordinator for Health Information Technology is helping guide implementation of the HITECH Act reforms. Led by Karen DeSalvo, the office is currently navigating the process of getting different electronic health systems to talk to each other, a process known as interoperability.

"We have made impressive progress on our infrastructure, but we have not reached our shared vision of having this interoperable system where data can be exchanged and meaningfully used to improve care," DeSalvo said at a recent health information-technology conference.

With electronic health records systems in use in thousands of doctors' offices nationwide, the next step is to be able to transfer patient data across systems, allowing patients with complex conditions to share their medical information with specialty doctors and hospitals.
What is clinical analytics?

Clinical analytics has emerged as a significant area of focus for IT leaders among healthcare providers who are moving toward accountable care; many of these leaders say they are prioritizing clinical analytics over other systems in their organizations. Clinical analytics is a field that uses real-time medical data to generate insights, inform decisions, increase revenue, and reduce costs. Implementing clinical analytics has led to fewer medication errors, improved population health, and cost savings for many organizations.

Rapid advancements in key technologies and the adoption of electronic health records (EHRs) have driven the growth of clinical analytics in recent years. With the emergence of smart mobile devices and the app economy, patients expect high-quality care in hospitals. The result will be a revolution in healthcare with major benefits for patients, such as personalized care.

Now is the right time for hospitals to embrace clinical analytics, which is critical for organizations to survive and thrive amid changing governmental regulations. Addressing the challenges posed by these reforms will mean decreased costs and improved quality of care.

The advantages of employing clinical analytics are many. It can help reduce administrative costs, enhance care coordination, improve patient wellness, provide clinical decision support, and minimize abuse and fraud. It can also help lower costs by removing variation in supplies, overheads, and labor.
The growth of cloud computing could cause a huge increase in greenhouse gas emissions, Greenpeace has warned.

In a new report, ‘Make IT Green: Cloud Computing and its Contribution to Climate Change’, Greenpeace estimates that, at current growth rates, data centres and telecommunication networks will consume about 1,963 billion kilowatt hours of electricity by 2020 – more than triple their current consumption.

Greenpeace picks out the growth of “quintessential cloud computing devices” like Apple’s iPad as part of the problem, as they give users constant access to social networks and other online tools. The organisation also criticised Facebook’s recently constructed data centre in Oregon, which runs on coal rather than renewable energy. According to Greenpeace, over 365,000 Facebook members have recently joined groups calling on the company to quit coal and become a climate leader.

“The IT sector has the ability to help us combat climate change by innovating to reduce greenhouse gas emissions and increase energy efficiency. Technologies that enable smart grids, zero-emission buildings and more efficient transport systems are key to cutting climate change pollution. But, given the current expansion in cloud computing, the industry also needs to get its own carbon footprint under control,” Greenpeace said in a statement on its website.

“IT companies like Microsoft, Google, and IBM are now in powerful positions at the local, national, and international levels. They could use that influence to promote policies that will allow them to grow responsibly without fuelling climate change.”
Authentication Vulnerabilities – Factors

Authentication, essentially, is proving that someone or something is genuine and valid. User credentials must match the ones set in the database or authentication server. This, in turn, gives organizations secure, authenticated access to their resources, with the assurance that the right user gained access. Recognizing user identity is therefore an essential mechanism in any industry.

There are many authentication methods, types, and techniques. They range from passwords, two-factor authentication, tokens, biometrics, and single sign-on (SSO) to authentication protocols such as SSL or Kerberos. Each works slightly differently, but all try to accomplish the same goal. Choosing one, however, is another matter.

Despite the multitude of authentication methods, hackers still find ways to gain access. We now know that the majority of attacks target passwords or password-based authentication methods. These include phishing attacks, man-in-the-middle attacks, brute-force attacks, and credential stuffing. Some password authentication vulnerabilities also stem from users’ weak or default passwords, or from the use of weak or insecure verification functions such as MD5. There are many mitigations for such scenarios, and they usually boil down to end-user security awareness.

Today, however, we will focus on the other end of the spectrum and touch upon broken authentication. Broken authentication refers to an inherent weakness in the application or platform that can allow attackers to bypass its security. As attackers apply a vast range of techniques to take advantage of a vulnerable or weakened system, an organization needs to be aware of these vulnerabilities and maintain a solid defensive plan!

What are broken authentication vulnerabilities?

Weak password recovery

Have you forgotten your password? Many of us, at some time, have clicked the “forgot password” button and been taken down a recovery path to unlock our account. Even though security procedures are created to enforce the authentication process, the recovery procedure is often neglected.

Vulnerable authentication libraries

Today’s software often relies on other software; we see many dependencies in such cases. There are many cases where specific plugins or add-ons have vulnerabilities in their authentication processes that can easily be exploited to gain access.

Session handling vulnerabilities

Certain authentication processes allow for a persistent session after authentication, meaning the system does not ask you to authenticate yourself again: after verification, it acknowledges you as the user you authenticated as. Not logging out, missing session timeouts, and storing session data in web pages, browsers, and even cookies can give malicious users the ability to exploit these weaknesses and obtain an authenticated session without anyone knowing.

Missing login limits

Lacking login limit functionality may create a route for hackers to exploit the authentication process: they can use brute-force attacks to crack the password and gain access to your resources. A good practice is to set up rate limiting that stops users from logging in after a few unsuccessful attempts.

Flawed authentication implementation

Weak implementation of authentication methods can leave hackers room to exploit or bypass certain processes. For example, two-factor authentication has been bypassed in several cases even though it is a secure authentication process. Proper implementation reduces the threat landscape.

What are some of the attack methods?

SQL injection – SQL injection exploits a web vulnerability to interfere with the queries an application runs against its database. It allows the attacker to view data they would not normally be able to view, such as user data (credentials), and can give them the ability to delete or change the data itself.

Password attacks – Phishing is the most common and popular attack vector against passwords. A phishing attack is when an individual sends a fraudulent message to trick the recipient into sharing information. This is common with e-mail, but recently SMS messages camouflaged as known third parties (banks, ISPs, and even support teams for popular applications) have proven quite effective.

Logic flaws – Logic flaws can be exploited when processes are not thought through. Flawed implementations of authentication methods, interception of clear-text protocols, and faulty assumptions about behavior can all be exploited as vulnerabilities.

If you want to learn more about Fudo authentication methods and certain attack vectors, take a look at our Authentication infographic.

Author: Damian Borkowski – Technical Marketing Specialist
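The login-limit advice above can be sketched in a few lines. The snippet below is a minimal, hypothetical in-memory rate limiter; the constants, function names, and storage choice are illustrative assumptions, not Fudo's implementation. A production system would typically back this with a shared store such as Redis, or rely on the authentication server itself.

```python
import time
from collections import defaultdict

# Illustrative policy: at most 5 failed logins per 5-minute sliding window.
MAX_ATTEMPTS = 5
WINDOW_SECONDS = 300

# username -> timestamps of recent failed attempts (in-memory, for demonstration)
_failures = defaultdict(list)

def record_failure(username, now=None):
    """Record one failed login attempt for this account."""
    _failures[username].append(now if now is not None else time.time())

def is_locked_out(username, now=None):
    """Return True if the account has exceeded its failed-login budget."""
    now = now if now is not None else time.time()
    # Keep only attempts inside the sliding window, then compare to the budget.
    recent = [t for t in _failures[username] if now - t < WINDOW_SECONDS]
    _failures[username] = recent
    return len(recent) >= MAX_ATTEMPTS
```

An authentication handler would call `is_locked_out` before checking credentials and `record_failure` after each rejected attempt, returning a generic error either way so attackers cannot distinguish lockout from a wrong password.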
Every penetration tester needs to know how to write code in order to automate a task or to develop a tool that performs a specific activity needed in a penetration test. So in this tutorial we will see how we can create a simple TCP port scanner in bash. Of course the best tool for this job is Nmap, but the scope of this post is to get familiar with bash scripting and to inspire readers to develop their own tools.

This port scanner is going to be created in bash, so on the first line we will have to write the following:

#!/bin/bash

The shebang (#!) is used because it instructs the operating system that what comes next is the interpreter of the script, which in this case is /bin/bash.

Next we have to assign the values from the arguments to the suitable variables. We can also use the # sign when we want to put comments in the code. So the next lines of our code will be like the following:

#Defining the variables
IP=$1
firstport=$2
lastport=$3

As input our port scanner will take an IP (the IP of the target that we want to scan), the port that we want to scan first (defined as firstport) and the port that we want to scan last (lastport).

The next thing that we have to do is to use a function. Functions in general allow us to take a piece of code and use it again and again without the need to rewrite it. We will call our function portscan because it will test the ports that we specify to see if they are open. Then we can set up a for loop in order to loop from the first port to the last port. You can see the code below:

for ((counter=$firstport; counter<=$lastport; counter++))

Then we will use a do block and we will try to echo to each port of the IP that we are scanning. We will send the output of this echo to /dev/null; the > sign is used in order to make this redirection. Then we will redirect standard error to the same place with 2>&1, and we will use the && operator (and) in order to echo the string "<port number> open" to the console. Below is the piece of code that we have to write:

(echo >/dev/tcp/$IP/$counter) > /dev/null 2>&1 && echo "$counter open"

The original article showed the complete code in a screenshot; the script below reassembles the fragments above:

#!/bin/bash
#Defining the variables
IP=$1
firstport=$2
lastport=$3

function portscan {
for ((counter=$firstport; counter<=$lastport; counter++))
do
(echo >/dev/tcp/$IP/$counter) > /dev/null 2>&1 && echo "$counter open"
done
}

portscan

Before we attempt to run the script we need to make it executable. In order to do this we need to be in the directory that contains the file and type the following command in a terminal:

chmod u+x pentestlab_scanner

This command adds execute permission for the user that owns the file. Now that the file is executable we can run it with the command:

./pentestlab_scanner IP Firstport Lastport

As we saw, creating a port scanner in bash is very easy if we know the basics of this scripting language. The main function of this port scanner is to check only whether the TCP ports of a host are open. The user can select at which port the scan will start and at which port it will end. Of course the port scanner can be improved in many ways, for example by implementing additional functions such as scanning UDP ports as well, or allowing the user to scan multiple hosts.
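For comparison, the same connect-scan idea can be expressed in Python with the standard socket module. This sketch is not part of the original bash tutorial; the function name and the half-second timeout are assumptions chosen for illustration.

```python
import socket
import sys

def portscan(ip, firstport, lastport, timeout=0.5):
    """TCP connect scan: return the ports in [firstport, lastport] that accept a connection."""
    open_ports = []
    for port in range(firstport, lastport + 1):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)  # avoid hanging on filtered ports
        try:
            s.connect((ip, port))
            open_ports.append(port)
        except OSError:
            # Connection refused or timed out: treat the port as closed/filtered.
            pass
        finally:
            s.close()
    return open_ports

if __name__ == "__main__" and len(sys.argv) == 4:
    for port in portscan(sys.argv[1], int(sys.argv[2]), int(sys.argv[3])):
        print(f"{port} open")
```

The explicit timeout is one of the improvements the article alludes to: unlike the bare /dev/tcp redirection, it keeps the scan from stalling on hosts that silently drop packets.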
5. Search engines.

Search engines have been around for a long time, and they can operate on unstructured data as well as structured data. The only problem is that search engines still need data to have context in order for a search to produce sophisticated results. While search engines can produce some limited results when operating on unstructured data, sophisticated queries are out of their reach. The missing ingredient search engines need is the context of the data, which is not present in unstructured data.

So the data warehouse has arrived at the point where it is possible to include big data in the realm of data warehousing. But in order to include big data, it is necessary to overcome a very basic problem: the data found in big data is devoid of context, and without context, it is very difficult to do meaningful analysis on the data. While it is possible that data warehousing will be extended to include big data, unless the basic problem of achieving or creating context in an unstructured environment is solved, there will always be a gap between big data and its potential value.

Deriving context, then, is the major forthcoming issue for the data warehouse and big data. Without being able to derive context for unstructured data, there are limited uses for big data. So exactly how can the context of text be derived, especially when it cannot be derived from the text itself?

Two Ways to Derive Context for Unstructured Data

In fact, there are two ways to derive context for unstructured data: "general context" and "specific context." General context can be derived by merely declaring a document to be of a particular variety. A document may be about fishing. A document may be about legislation. A document may be about healthcare, and so forth. Once the general context of the document is declared, the interpretation of the text can be made in accordance with the general category.
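The "general context" idea can be illustrated with a toy sketch: once a document is declared to belong to a category, ambiguous terms are resolved against that category. The categories, terms, and senses below are hypothetical examples for illustration, not from the article.

```python
# Hypothetical glossary: the same word carries a different sense
# depending on the declared general context of the document.
CONTEXT_GLOSSARY = {
    "fishing": {"bass": "a freshwater game fish"},
    "music": {"bass": "the low-frequency range of sound"},
}

def interpret(term, general_context):
    """Resolve a term using the document's declared general context."""
    glossary = CONTEXT_GLOSSARY.get(general_context, {})
    # Fall back to an explicit "unknown" marker when no sense is registered.
    return glossary.get(term, f"unknown sense of '{term}'")
```

Even this trivial mapping shows why the declaration matters: without the category label, "bass" by itself gives the warehouse nothing to analyze.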
Last updated 7 months ago by Touhid

In information technology, there are different forms of application software, such as database software, multimedia software, word processing software, presentation software, and educational software. In this post, we will discuss the different forms of application software. After reading it, you will have an idea of what application software is, its different forms, its uses, and the process by which it is developed.

What is Application Software?

Application software is a software program that has been developed to complete a particular job for end users. Typically, application software is used for different purposes in our daily lives, such as personal, official, research, and educational work. Application software is installed on the user's computer or hosted on a server. For example, email software is a form of application software: using it, we easily send and receive emails with attached files (such as images, PDFs, and Word documents). Each type of application software has been developed to perform a specific job or task, and it is built using computer programming languages and databases.

Forms of Application Software

In the digital world, we regularly use system and application software in every sector. Software makes our lives much easier, saves time and manpower, and makes work more efficient. We have categorized application software based on our daily needs. The main forms of application software are as follows:
- Database Application Software
- Multimedia Application Software
- Word Processing Application Software
- Presentation Software
- Enterprise Application Software
- Web Browser Application Software
- Graphics Application Software
- Education Application Software
- Web Development Software
- Security Software

1. Database Application Software

Database software is a form of application software that is used to develop dynamic applications. There are different database management systems, such as MySQL, Microsoft SQL Server, Oracle, PostgreSQL, and MongoDB. Basically, database software is used to store, search, update, and delete data within an organization's database. If you want to develop an application for personal or official purposes, you have to use a database. Each database is a collection of tables, and the data is stored in those tables. You can choose a database according to your business needs. Some databases are free to use: MySQL and PostgreSQL are free and open-source relational database management systems. Microsoft SQL Server, on the other hand, has licensing considerations, although it offers a free edition, and Oracle Database Express Edition (XE) is free to use but limited in functionality and database size; licensing plans provide the full functionality of the other editions. Finally, if you want to develop an enterprise or financial application that stores large amounts of data, you may use an Oracle database; for a small or medium application, MySQL is a good fit. Read more about Different Types of Database Security.

2. Multimedia Application Software

This is another form of application software, used to create, record, and play audio and video files. These applications are used in various fields of real life, such as computer gaming, education, engineering, entertainment, remote systems, and TV programming. The best-known examples of multimedia application software are Adobe Photoshop, VLC media player, and Windows Media Player, along with formats such as WMV and AVI video.

3. Word Processing Application Software

Word processing software is one of the most widely used and important forms of application software.
Word processing application software is used to insert text, images, shapes, and different types of charts into documents. There are many options in this software for customizing text, such as adding color, increasing or decreasing font size, changing font style, and adding effects to text. You can add a table to a document, transform text, add page numbers and hyperlinks, and customize the page layout according to your needs. Other useful options include a grammar and spell checker, synonym suggestions, and smart graphical art for your documents.

Example: One of the most popular examples of word processing application software is Microsoft Word. Other examples are WordPad and Notepad. Microsoft Word is a word processing application developed by Microsoft, and it supports the Windows, Mac, iOS, and Android operating systems. The latest version of Microsoft Word (2021) is part of Microsoft 365. You can use this software with a one-month free trial for home and business, or purchase a licensed version.

4. Presentation Software

Presentation software is a form of application software designed to make presentations that include text, images, audio, or video files. It is used to show information or share your ideas with an audience in the form of a slide show. In the computing world, presentation software is used in education, research centers, and private and government offices. You can easily share your thoughts, research methodology, organization portfolio, or system development process.

Example: The most commonly used presentation software is Microsoft Office PowerPoint, developed by Microsoft. Microsoft PowerPoint supports the Windows, Mac, iOS, and Android operating systems. Using Microsoft PowerPoint, you can easily create presentation slideshows using text, images, audio, or video files. The latest version of Microsoft Office PowerPoint (2021) is part of Microsoft 365. You can use this software with a one-month free trial for home and business, or purchase a licensed version. Other presentation software includes:
- Google Slides
- Zoho Show

5. Enterprise Application Software

Enterprise software is another vital form of application software. In short, an "enterprise" is a large business organization with many functions and activities. An enterprise may have the following sections or departments:
- Human Resources Department
- Accounts Department
- IT Department
- Sales and Marketing Department
- Support Department
- Library Department
- PR (Public Relations) Department

Enterprise software is a type of application software that provides complete solutions for an organization. It has a number of modules with different user roles for the different departments to perform their activities; each role is assigned to a specific user or group of users. The main aim of enterprise software is to connect all modules and systems to solve problems and increase the efficiency of the entire organization. There are different types of enterprise software; here are five major categories:
- Enterprise Resource Planning (ERP)
- Customer Relationship Management (CRM)
- Supply Chain Management (SCM)
- Human Resources Management (HRM)
- Business Intelligence (BI)

6. Web Browser Application Software

A web browser is a form of application software used to access resources from the internet (the World Wide Web) or web-based applications on a local server. If you want to access resources from the internet, you use a web browser, which must be installed on your computer.
When a user requests content from a website, the web browser sends the request to the web server and displays the returned content. Popular web browsers include Google Chrome, Mozilla Firefox, Microsoft Edge, Internet Explorer, Safari, and Opera.

7. Graphics Application Software

Graphics software is application software used to create, edit, and display 2D (two-dimensional) and 3D (three-dimensional) images. Popular graphics software includes Adobe Illustrator, Adobe Photoshop, PaintShop, Microsoft Paint, Paint.NET, CorelDRAW, and Inkscape. Graphics software is used in computer graphics work such as creating and editing digital photos, logos, website graphics, banners, advertisements, and clip art.

8. Education Application Software

This is another important form of application software, designed and developed for educational institutes. Educational software automates and improves an institution's learning management. In the digital world, application software plays an important role in the education sector: nowadays, student registration, academic management, course management, and result processing are done by education software, which makes them more effective and efficient. Most educational institutes develop web-based, customized application software to conduct student admission and result processing. E-Learning also plays an important role in distance education, delivering digital content (text, images, audio, and video) to learners. Some of the best web-based e-Learning platforms are Blackboard, Moodle, TalentLMS, Google Classroom, Schoology, and Canvas.

Moodle is one of the best and most popular e-Learning platforms. It is free and open-source software written in PHP, and it supports the MySQL, PostgreSQL, and Oracle databases. It is a learning management system designed to help teachers create and manage quality content for learners. Moodle is user-friendly, configurable, and highly flexible; it provides excellent documentation and strong support for security and administration. It can be installed on various operating systems, such as UNIX, Linux, Windows, and Mac OS X.

9. Web Development Software

Web development refers to the process of developing internet- or LAN (local area network)-based web applications that are hosted on a server. Applications such as Adobe Dreamweaver, CodePen, Visual Studio, and Sublime Text are used to develop websites and web applications. If you want to design a website very easily and quickly, you may use a content management system (CMS) such as WordPress, Joomla, or Drupal. All of these CMSs are free and open-source software, though some themes and plugins need to be purchased. WordPress is the most popular content management system for designing websites; it is developed in PHP with a MySQL database, and you can easily manage your website content from anywhere, at any time. Learn more about Best Software for Website Design.

10. Security Software

Security software is a very essential form of application software that helps ensure the cyber security of your systems. Application security is the process of protecting applications from cyber threats in order to safeguard information. If your application has bugs or security holes, cyber attackers may gain unauthorized access to it. Application security software identifies security holes and bugs in an application using penetration testing tools, and it improves the application's security. The best security testing tools can help protect your system and database from cyber-attacks.
Learn more about Steps of Penetration Testing.

Why Is Application Software Required?

We've already defined what application software is and covered its different forms. Now we'll explain why application software is required. Application software is needed by governments, companies, and other institutions to perform their regular activities and keep data secure. Organizations have many functional areas, such as HR management, account management, inventory management, procurement, and communication systems. When an organization has application software to perform those activities, it saves time and improves the accuracy of the work. So, application software is required in banks, hospitals, universities, offices, online ticketing systems, shopping malls, and other areas.

Uses: Different Forms of Application Software

Application software is used in computer systems for many purposes in our daily lives. We've discussed the different forms of application software; now we will describe their uses, which are as follows:

1. Database Application Software

Database application software is a database management system (DBMS) designed to create databases and to store and manage data. This type of application software is used to develop web-based and desktop-based dynamic applications that can insert, search, and manipulate data. Database software is used in every sector of computing, such as:
- Railway Reservation Systems
- Library Management Systems
- Banking and Financial Organizations
- Educational Institutions
- Government Offices
- Social Media Sites
- Broadcast Communications
- Account Management
- Enterprise Resource Planning
- Data Warehouses
- Dynamic Websites

2. Multimedia Application Software

In today's world of technology, multimedia software has become enormously capable and plays an important role in human life.
These forms of application software are used in the following sectors to create, record, and play audio and video files:
- Business and Corporate Organizations
- Educational Institutions
- Product Advertising
- Product Marketing

3. Word Processing Application Software

Word processing application software is used in nearly every office and on individuals' computers. In today's world, word processing software is the most common of all computer applications. Typically, its main features are composing, editing, pasting, deleting, saving, and printing. Word processing software is used in schools, colleges, universities, offices, businesses, research, training, and more.

4. Educational Software

Educational software is generally used in education and training institutions. It includes student information systems, classroom management systems, course management, materials management, schedule management, and result processing. These forms of application software provide facilities for learning in different formats (such as audio, video, and text), discussion with classmates or teachers, knowledge sharing, and Q&A.

5. Enterprise Software

Enterprise resource planning (ERP) is the form of application software that helps an organization automate its overall activities. ERP software is used for inventory management, payroll and account management, customer relationship management (CRM), procurement management, HR management, and project management. Most government offices, financial organizations, and corporate offices use enterprise application software.

6. Web Development Software

Web development software supports the different types of software developers and engineers, such as:
- Frontend Developers
- Backend Developers
- Full-Stack Developers
- Mobile Developers
- Game Developers

Development Process of Application Software

How do you develop the different forms of application software successfully?
To develop application software successfully, a sequence of steps should be followed. This process, the software development life cycle, is also known as the system development life cycle (SDLC). Each step has a specific role in developing the software. The development steps for application software are as follows:

Step 1: Collect the requirements for the new application software.
Step 2: Analyze the functional requirements of the system.
Step 3: Design the system's architecture, databases, user interfaces, and system interfaces.
Step 4: Develop the application software based on the system design.
Step 5: Identify bugs and errors by testing the developed system before implementation.
Step 6: Implement the newly developed system and begin maintenance.

These steps provide a standard series of guidelines for developing application software in an appropriate way. Software companies use the SDLC process to develop quality software applications. Learn more about Process of Software Development Life Cycle.

Finally, application software (an app) is what you need to complete your work. We use different types of application software for different purposes in our daily lives: personal, official, research, development, and educational. Software makes our lives easier, saves time and manpower, and makes us more efficient. In this post, we've discussed the different forms of application software, and we hope this article has been helpful for you.
Data centers accounted for 1.8% of the United States' total electricity consumption in 2014, according to the U.S. Department of Energy. That translates to some 70 billion kilowatt-hours! Smaller data centers are major contributors to this consumption: they house around 50% of all servers, and their energy management is generally quite poor. Energy efficiency must rank as a key priority for data centers, as it affects everything from the wider environment to an enterprise's bottom line. This wide ripple shows just how important it is for data centers to be constantly innovating and improving their systems. GRC's liquid immersion cooling offers a revolutionary way to cut operational expenses, while also delivering greater efficiency gains than the alternatives. Immersion cooling uses a safe-for-electronics liquid coolant instead of air, removing heat at a fraction of the cost to your budget. This cooling method is over a thousand times more effective at conducting heat away from servers than conventional air cooling, which translates to substantial direct and indirect cost savings.

Why Data Center Efficiency Matters

Quite simply, inefficiency wastes money and natural resources. The physical efficiency of operational processes dictates how well data centers can convert electricity into computational capacity—and thus into profit. As customers demand more powerful processing, the only viable way to keep energy use in check is to increase efficiency. Data centers have begun to improve this metric in the past few decades; however, they have faced challenges in developing it further. We're reaching a tipping point, where conventional data centers can no longer meet the computational needs of the economy. At the same time, as data centers struggle to perform better, energy efficiency matters even more. Facilities are growing in number and size.
They're using more electricity and producing more emissions just as these sustainability issues have captured the public's attention. Efficiency directly correlates with variables that matter to both data centers and the public: financial and environmental health. Data centers need to find ways to raise efficiency. Enter GRC's liquid immersion cooling solutions.

Upgrade to Liquid Immersion Cooling

Data center cooling represents one of the main energy uses and operational costs dragging down efficiency. As such, it also represents a key area for implementing massive upgrades to operational efficiency. The reason cooling takes such a large share of data center electricity and finances is that conventional air cooling is extremely inefficient. It's a legacy solution that still works, yes—but not well enough to meet modern demands. Newer and more effective methods like liquid immersion cooling are here to bridge the gap. Improvements to cooling technology account for much of the gain in data center efficiency in recent decades; for instance, the development of cold-plate and rear-door heat exchanger technologies. This progress continues with GRC's liquid immersion cooling, which brings unprecedented efficiency to the data center industry. Immersion cooling consumes only around 2-3% of the energy a data center needs to function. By contrast, a legacy cooling system may double or even triple the energy that a data center uses. There's no better way to see the effects of energy efficiency than to look at an extreme case. A scientific supercomputer project systematically measured the available cooling options and found GRC immersion cooling to be much more efficient than all the alternatives. Using liquid immersion, the Vienna Scientific Cluster cut costs while increasing computational ability. They reduced their infrastructure requirements and resource consumption and built the strongest supercomputer in Austria!
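The cooling-overhead figures above can be made concrete with power usage effectiveness (PUE), the standard industry ratio of total facility energy to IT equipment energy. The snippet below compares an illustrative legacy facility, where cooling roughly doubles energy use, with an immersion-cooled one, where cooling adds only a few percent. The exact numbers are assumptions for illustration, not GRC measurements:

```python
def pue(it_energy_kwh: float, cooling_energy_kwh: float) -> float:
    """Power usage effectiveness: total facility energy / IT equipment energy."""
    return (it_energy_kwh + cooling_energy_kwh) / it_energy_kwh

it_load = 1000.0  # kWh of useful IT work, arbitrary baseline

# Legacy air cooling: may double the energy the facility draws.
legacy = pue(it_load, cooling_energy_kwh=1000.0)

# Immersion cooling: roughly 2-3% of facility energy goes to cooling.
immersion = pue(it_load, cooling_energy_kwh=30.0)

print(f"Legacy air-cooled PUE: {legacy:.2f}")    # → 2.00
print(f"Immersion-cooled PUE:  {immersion:.2f}") # → 1.03
```

A PUE of 1.0 would mean every kilowatt-hour goes to computation; the gap between the two results is electricity spent purely on cooling.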
When you upgrade to a more energy-efficient cooling solution, your data center runs on less electricity, which cuts the costs of operation. For example, switching to GRC’s immersion cooling can reduce your operating expenses by as much as 50%! Your entire data center becomes more lightweight, enabling your business to save on the total capital budget too. You don’t need to buy wasteful generators and batteries for an overbuilt air cooler. The liquid immersion tanks fit into a compact space, minimizing floor use. There are other financial benefits to liquid immersion cooling. It reduces wear on parts, so you spend less on maintenance and replacement. However, the main advantage of its extreme efficiency is in the smaller electricity bill: immersion cooling uses a mere 5% of the electricity that air cooling requires. The cost advantages of liquid cooling combine with savings from other efficiency improvements that can be implemented throughout the data center. For instance, if you use more energy-efficient processors to save on electricity, you’ll see synergistic cost savings from these processors and immersion cooling. Safeguard the Environment Energy efficiency is measured by the sum of environmental resources needed to achieve the desired end, such as powering data centers. Using unnecessary resources isn’t only about financial costs. It also drives damaging extractive activities, such as strip mining, and increases the burden of fossil fuel emissions on the environment. While some forward-thinking data center operators have already taken it upon themselves to make their facilities environmentally sustainable, this trend is quickly becoming the norm. Public opinion and government regulations are increasingly pressuring the industry to minimize their resource utilization. The use of innovative technologies that simultaneously serve the financial interests of businesses has emerged as the best way to safeguard the environment. 
This lets you improve the energy efficiency of your data center and go green while simultaneously cutting back on expenses. Incidentally, your servers will run faster, quieter, and more reliably too! GRC’s liquid immersion cooling glides by with half as much electricity as other cooling options. This is because liquids transport heat far more effectively than air. It also allows you to productively reuse server heat for various environmentally friendly (and profitable) functions. Moreover, immersion-cooled data centers require less water from the environment—in some cases, none at all. These same synergies have ecological benefits equal to their economic ones. For example, energy-efficient processors running in liquid immersion cooling tanks will slash carbon waste, water waste, pollution, and other negative environmental impacts that data centers must consider. Boost Data Center Efficiency With GRC With enhanced efficiency, it’s possible to boost profits while also doing your bit for the environment. Every step you take to make your data center more energy-efficient will result in a substantial return on investment. And one of the biggest steps you can take right now is to upgrade to liquid immersion cooling. Immersion cooling uses less electricity to deliver immensely more computational power. GRC has the history, global presence, and expertise you need to increase performance, while saving your data center half its total operational costs. Enjoy these results now—get started with GRC today.
What is an identity provider (IdP)? An IdP stores and authenticates the identities your users use to log in to their devices, applications, file servers, and more, depending on your configuration. Generally, most IdPs are Microsoft Active Directory (AD) or OpenLDAP implementations. IdPs fall into a much larger space, however, one called identity management. The identity management space is complex, with a number of different components to it. Identity management underpins most organizations; it is the central nervous system of an organization's IT infrastructure. It tells users and IT resources who can do what and on which resources. As organizations get larger, the job becomes more complex and critical. In fact, the function takes on a security angle as well. The identity and access control systems within an organization span a number of different resources, starting with the directory service (often referred to as the identity provider) and extending all the way through web app single sign-on (SSO) and multi-factor authentication (MFA) services. The IdP, though, is the brain of any identity management infrastructure.

IdP: The Central Source of Identity

The core identities for any infrastructure are stored within the IdP. What is stored there? Effectively, the identity provider is a database of user records. Those user records contain credentials that are leveraged when users access different IT resources. IT resources check with the identity provider to verify that a user is allowed to access a given resource, and to what degree. Historically, that was a simpler process, as the communication between IT resource and identity provider took place over just one protocol: LDAP, which for decades was widely known as the industry standard. More recently, though, different types of devices, applications, and network equipment have come to use a variety of different authentication protocols. The result?
Identity providers are feeling the pressure to keep up and remain the central source of identity within an organization.

Legacy Directories Exit the Identity Provider Stage

Over the past two decades, on-premises solutions such as OpenLDAP and Microsoft Active Directory served as the core identity provider for an organization. These were often referred to as user directories. More technical infrastructure based on Linux would likely connect to OpenLDAP, while Microsoft Windows®-based devices and applications would connect to AD. This approach worked reasonably well until several new categories of IT infrastructure emerged. Solutions like cloud infrastructure and web applications changed the identity provider game. Newer IT resources struggled to connect to OpenLDAP and AD for one of two reasons: they either leveraged different protocols or networking became an issue. As macOS systems emerged, those too put pressure on the legacy directories. Existing IdP solutions weren't keeping up with user access authentication needs and the changing IT landscape. Thankfully, a solution was made for the cloud era.

JumpCloud Directory Platform is an Identity Provider for Today and Tomorrow

A new generation of identity provider has emerged in the form of the JumpCloud Directory Platform. The cloud directory platform is agnostic in every respect: platform, location, protocol, and provider. Essentially, it's a centralized SaaS-based identity provider that organizations can leverage for all of their IT resources. That's because it uses core protocols such as LDAP, SAML, RADIUS, SSH, REST, and others, which means it connects to resources on-premises or in the cloud. Additionally, the platform supports Windows, Mac, and Linux systems. In short, it's the next-generation identity provider that organizations are seeking.

Learn More About JumpCloud

If you'd like to learn more about how your identity provider can support your organization's needs, drop us a note.
We'd be happy to chat with you about how the JumpCloud Directory Platform enables you and your organization to evolve with the changing IT landscape. Or, if you just want to try it out, sign up for a JumpCloud Free account today. It's free, requires no credit card, and empowers you to manage up to 10 users and 10 devices with the full-featured version of JumpCloud.
DDoS attacks happen due to a lack of security awareness, effort, or skill on the part of network/server owners or administrators. We often hear that a particular machine is under DDoS attack, or that the NOC has unplugged a given machine due to its participation in a DDoS attack. DDoS has become one of the most common problems in our world. In some ways, DDoS is like a disease with no countering antibiotic, and it demands great care when dealing with it. Never take it lightly. In this article, I'll try to cover the steps and measures that will help us defend our machines from a DDoS attack – at least to a certain extent. Simply stated, DDoS (Distributed Denial of Service) is an advanced version of the DoS (Denial of Service) attack. Much like DoS, DDoS tries to block important services running on a server by flooding the destination server with packets. What distinguishes DDoS is that the attacks come not from a single network or host but from a number of different hosts or networks that have previously been compromised.
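As a taste of the kind of defensive measure the full article walks through, the sketch below implements simple per-client rate limiting, which caps how many requests any single host can send within a time window. It is illustrative only: the thresholds are arbitrary, and genuine DDoS mitigation must also happen upstream at the network edge, since a distributed flood can exhaust bandwidth before application code ever runs:

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Allow at most `max_requests` per `window` seconds from each client IP."""

    def __init__(self, max_requests: int = 100, window: float = 1.0):
        self.max_requests = max_requests
        self.window = window
        self.hits = defaultdict(deque)  # ip -> timestamps of recent requests

    def allow(self, ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[ip]
        while q and now - q[0] > self.window:  # discard timestamps outside the window
            q.popleft()
        if len(q) >= self.max_requests:
            return False  # client exceeded its budget; drop or challenge the request
        q.append(now)
        return True

limiter = RateLimiter(max_requests=3, window=1.0)
print([limiter.allow("203.0.113.9", now=t) for t in (0.0, 0.1, 0.2, 0.3)])
# → [True, True, True, False]
```

Because each source address gets its own budget, a single flooding host is throttled quickly; a distributed attack from thousands of compromised hosts defeats this simple scheme, which is exactly why DDoS is harder to counter than DoS.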
As we’ve recently discussed, the low latency and high bandwidth capacity of 5G networks will likely revolutionize health care through profound improvements to patient experience. Many of these experience improvements fall under the general umbrella of “telehealth.” While the benefits will be significant in the US, they may be felt even more dramatically in the developing world. In this article, we’ll dive deeper into telehealth, and look at some of its use cases both domestically and abroad. What Is Telehealth? According to a recent overview by BroadBandCommunities, telehealth consists of the following broad categories: - Teleradiology enables the physical separation of medical diagnostic imaging and analysis, eliminating the need for a radiologist to be in the same location as the imaging technician. While this can allow patients unlimited access to expert opinions across the world, transmission of these high-resolution images can strain network capacity. - Remote Patient Monitoring (RPM) includes an entire ecosystem of devices that can track and store a patient’s vital signs outside of a traditional medical facility. RPM devices have been used to great effect by the Veterans Health Administration to proactively address troublesome developments. - Telepresence is a general term that encompasses any remote interaction with a medical professional. It may include anything from a high-resolution video conference with a doctor to a semi-invasive procedure performed by a connected robotic device. These capabilities will increase in their immersiveness with advancements in virtual reality (VR) and augmented reality (AR). While traditional health services delivery in the Western world can be challenging, those difficulties are more extreme elsewhere. According to a recent BBC report, in Indonesia (the world’s fourth most populous country), there are only three doctors for every 10,000 residents. 
The limitations of physical infrastructure, combined with massive traffic jams, mean a ten-minute consultation can consume up to half a day, including transport and long waiting times at a health care facility. However, one service profiled in the feature offers a way forward. Through a combination of video conferencing with a doctor and subsequent e-prescription delivery, patients can have a full professional consultation and receive their prescription in the comfort of their homes, all in the time it would take to physically travel to a doctor's office.

The Challenges of Telehealth

The primary challenge for telehealth has been bandwidth availability. These issues are compounded by clunky interfaces and workflow integration challenges. However, provided ample bandwidth going forward (perhaps through fiber capacity expansion strategies), innovation should overcome these operational difficulties. To ensure your health care network has sufficient bandwidth to support these new services, contact Champion ONE today.
This is part 2 of the Symmetric and Asymmetric Encryption blog post. You can see part 1 here. Asymmetric encryption uses two keys in a matched pair to encrypt and decrypt data—a public key and a private key. There are several important points to remember with these keys:
- If the public key encrypts information, only the matching private key can decrypt the same information.
- If the private key encrypts information, only the matching public key can decrypt the same information.
- Private keys are always kept private and never shared.
- Public keys are freely shared by embedding them in a certificate.

Only a private key can decrypt information encrypted with a matching public key. Only a public key can decrypt information encrypted with a matching private key. A key element of several asymmetric encryption methods is that they require a certificate and a PKI. Some examples of asymmetric encryption are RSA, Diffie-Hellman, and elliptic curve cryptography (ECC).

Asymmetric Encryption to Privately Share a Key

Although asymmetric encryption is very strong, it is also very resource intensive. It takes a significant amount of processing power to encrypt and decrypt data, especially when compared with symmetric encryption. Most cryptographic protocols that use asymmetric encryption only use it to privately share a symmetric key. They then use symmetric encryption to encrypt and decrypt data because symmetric encryption is so much more efficient. Some of the more advanced topics related to asymmetric encryption become harder to understand if you don't understand the relationship of matched public and private key pairs. However, because you can't actually see these keys, the concepts are hard to grasp for some people. The Rayburn box demonstrates how you can use physical keys for the same purposes as these public and private keys.

The Rayburn Box

I often talk about the Rayburn box in the classroom to help people understand the usage of public and private keys.
A Rayburn box is a lockbox that allows people to securely transfer items over long distances. It has two keys. One key can lock the box, but can’t unlock it. The other key can unlock the box, but can’t lock it. Both keys are matched to one box and won’t work with other boxes: - Only one copy of one key exists—think of it as the private key. - Multiple copies of the other key exist, and copies are freely made and distributed—think of these as public keys. The box comes in two different versions. In one version, it’s used to send secrets in a confidential manner to prevent unauthorized disclosure. In the other version, it’s used to send messages with authentication, so you know the sender actually sent the message and that the message wasn’t modified in transit. The Rayburn Box Used to Send Secrets Imagine that I wanted you to send some proprietary information and a working model of a new invention to me. Obviously, we wouldn’t want anyone else to be able to access the information or the working model. I could send you the empty open box with a copy of the key used to lock it. You place everything in the box and then lock it with the public key I’ve sent with the box. This key can’t unlock the box, so even if other people had copies of the public key that I sent to you, they couldn’t use it to unlock the box. When I receive the box from you, I can unlock it with the only key that will unlock it—my private key. This is similar to how public and private keys are used to send encrypted data over the Internet to ensure confidentiality. The public key encrypts information. Information encrypted with a public key can only be decrypted with the matching private key. Many copies of the public key are available, but only one private key exists, and the private key always stays private. The “Understanding the HTTPS Process for Security+” post shows this process in more depth. 
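The same send-secrets flow can be shown numerically with a toy RSA key pair. The values below are the classic textbook example (p = 61, q = 53); they are far too small to be secure, and real RSA also applies padding, so treat this purely as an illustration of "public key locks, private key unlocks":

```python
# Toy RSA: illustrates the matched public/private key pair.
p, q = 61, 53                 # two (tiny) primes, kept secret
n = p * q                     # 3233, the public modulus
phi = (p - 1) * (q - 1)       # 3120
e = 17                        # public exponent: (e, n) is the public key
d = pow(e, -1, phi)           # 2753, private exponent: (d, n) is the private key

message = 65                  # a message encoded as a number < n

# The sender encrypts with the PUBLIC key (anyone can do this)...
ciphertext = pow(message, e, n)   # 2790

# ...but only the private key holder can decrypt it.
decrypted = pow(ciphertext, d, n)  # 65

print(ciphertext, decrypted)  # → 2790 65
```

Anyone holding only the public key (e, n) can produce the ciphertext but cannot reverse it, just as anyone with the box's public key can lock it but not unlock it.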
The Rayburn Box Used for Authentication With a little rekeying of the box, I can use it to send messages while giving assurances to recipients that I sent the message. In this context, the message isn’t secret and doesn’t need to be protected. Instead, it’s important that you know I sent the message. When used this way, the private key will lock the Rayburn box, but it cannot unlock the box. Instead, only a matching public key can unlock it. Multiple copies of the public key exist and anyone with a public key can unlock the box. However, after unlocking the box with a matching public key, it isn’t possible to lock it with the public key. Imagine that you and I are allies in a battle. I want to give you a message of “SY0-401,” which is a code telling you to launch a specific attack at a specific time. We don’t care if someone reads this message because it’s a code. However, we need you to have assurances that I sent the message. I write the message, place it in the box, and lock it with my private key. When you receive it, you can unlock it with the matching public key. Because the public key opens it, you know this is my box and it was locked with my private key—you know I sent the message. If someone intercepted the box and opened it with the public key, he or she wouldn’t be able to lock it again using the public key, so you’d receive an open box. An open box with a message inside it doesn’t prove I sent it. The only way you know that I sent it is if you receive a locked box that you can unlock with the matching public key. This is similar to how digital signatures use public and private keys. The “Understanding a Digital Signature” post explains digital signatures in more depth. In short, I can send you a message digitally signed with my private key. If you can decrypt the digital signature with my matching public key, you know it was encrypted, or signed, with my private key. 
Because only one copy of the private key exists, and I’m the only person who can access it, you know I sent the message. The Rayburn Box Demystified Before you try to find a Rayburn box, let me clear something up. The Rayburn box is just a figment of my imagination. Rayburn is my middle name. I haven’t discovered a real-world example of how public/private keys work, so I’ve created the Rayburn box as a metaphor to help people visualize how public/private keys work. Feel free to build one if you want.
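The authentication version of the box maps onto digital signatures the same way: lock with the private key, and anyone holding the public key can unlock and thereby verify the sender. Reusing the same toy RSA numbers (and, again, omitting the hashing and padding that real digital signatures use):

```python
# Toy RSA signing: lock with the private key, unlock with the public key.
p, q = 61, 53
n = p * q                   # 3233
phi = (p - 1) * (q - 1)     # 3120
e = 17                      # public exponent, freely shared
d = pow(e, -1, phi)         # 2753, private exponent, known only to the sender

message = 401               # a non-secret code word encoded as a number < n

signature = pow(message, d, n)   # "lock the box" with the private key

# Any recipient verifies with the freely shared public key:
recovered = pow(signature, e, n)
print("authentic" if recovered == message else "forged")  # → authentic
```

Because only the sender holds d, a signature that verifies under the matching e proves the sender produced it; the message itself travels in the clear, which matches the battle-code scenario where secrecy is not the goal.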
This is a continuation of the social engineering principles post. Part 1 is available here.

Urgency is One of the Social Engineering Principles

Some attacks use urgency as a technique to encourage people to take action now. As an example, the CryptoLocker ransomware uses urgency with a countdown timer. Victims have 72 hours before they'll lose all their data, and each time they look at their computer, they'll see the timer counting down. Urgency is most effective with ransomware, phishing, vishing, whaling, and hoaxes. For example, phishing emails with malicious links might indicate that there are a limited number of products at a certain price, so the user should "Click Now." Executives might be tricked into thinking a subpoena requires immediate action. Many virus hoaxes include a deadline, such as 4:00 p.m., when the hoax claims the virus will cause its damage. Social engineers are effective largely because they use psychology-based techniques to overcome users' objections. Scarcity and urgency are two techniques that encourage immediate action.

Familiarity/Liking is One of the Social Engineering Principles

If you like someone, you are more likely to do what the person asks. This is why so many big companies hire well-liked celebrities. And it's also why they fire them when those celebrities become embroiled in a scandal that affects their credibility. Some social engineers attempt to build rapport with the victim to establish a relationship before launching the attack. This principle is most effective with shoulder surfing and tailgating attacks:
- Shoulder surfing. People are more likely to accept someone looking over their shoulder when they are familiar with the other person or like them. In contrast, if people don't know or don't like someone, they are more likely to recognize a shoulder surfing attack and stop it immediately.
- Tailgating.
People are much more likely to allow someone to tailgate behind them if they know or like the person. Some social engineers use a simple, disarming smile to get the other person to like them.

Trust is One of the Social Engineering Principles

In addition to familiarity/liking, some social engineers attempt to build a trusting relationship between themselves and the victim. This often takes a little time, but the reward for the criminal can be worth it. Vishing attacks often use this method. As an example, someone identifying himself as a security expert once called me. He said he was working for some company with "Secure" in its name, and they had noticed that my computer was sending out errors. He stressed a couple of times that they deploy and support Windows systems. The company name and their claimed experience were an attempt to start building trust. He then guided me through the process of opening Event Viewer and viewing some errors on my system. He asked me to describe what I saw and eventually said, "Oh my God!" with the voice of a well-seasoned actor. He explained that this indicated my computer was seriously infected. In reality, the errors were trivial. After gravely explaining how much trouble my computer was in, he added a smile to his voice and said, "But this is your lucky day. I'm going to help you." He offered to guide me through the process of fixing my computer before the malware damaged it permanently. All of this was to build trust. At this point, he went in for the kill. He had me open the Run window, type in a web site address, and asked me to click OK. This is where I stopped. I didn't click OK. I tried to get him to answer some questions, but he was evasive. Eventually, I heard a click. My "lucky day" experience with this social engineering criminal was over. The link probably would have taken me to a malicious web site ready with a drive-by download.
Possibly the attacker was going to guide me through the process of installing rogueware on my system. If my system objected with an error, I'm betting he would have been ready with a soothing voice saying, "That's normal. Just click OK. Trust me."

He spent a lot of time with me, and I suspect he has been quite successful with this ruse on many other people.

This is a continuation of the social engineering principles post. Part 1 is available here and includes a practice test question.
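As a side note for the technically inclined: the urgency cues described in this post (deadlines, countdowns, "Click Now") are blunt enough that even a toy script can flag them. The cue list, threshold, and scoring below are illustrative assumptions for demonstration only, not a real phishing filter.

```python
# Illustrative sketch: a naive urgency-cue scorer for message text.
# The cue list is an assumption for demonstration, not a product.

URGENCY_CUES = [
    "act now", "click now", "immediately", "within 72 hours",
    "deadline", "limited number", "expires", "last chance",
]

def urgency_score(message: str) -> int:
    """Count how many known urgency cues appear in the message."""
    text = message.lower()
    return sum(1 for cue in URGENCY_CUES if cue in text)

msg = "Limited number of products at this price -- Click Now before the deadline!"
print(urgency_score(msg))  # 3 cues: "click now", "deadline", "limited number"
```

A real mail filter would weigh many more signals, but even a crude score like this illustrates how mechanical these pressure tactics are.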
In times of crisis, agencies know that decision-making ability and response time are only as good as the information in their supplier database.

There are many lessons to be learned from the early months of the COVID-19 pandemic and the resulting global shutdowns. Some are managerial or operational in nature; others relate directly to the management and adaptation of existing supply chains. These challenging times have exposed supply chain shortcomings while highlighting the strength of technology to address strategic problems.

In the earliest days of the pandemic, public- and private-sector organizations were often unable to get the goods and services they needed from their current suppliers. If they were buying something completely new, they did not always have a trusted source of information.

Organizations know that their decision-making ability and their response time in a crisis are only as good as the information in their supplier database. All too often, unfortunately, the quality of that data is very poor, leading to poor choices or prompting decision-makers to look elsewhere because they have so little trust in the data.

This problem came into sharp focus as hospitals and governments worldwide scrambled to buy enough personal protective equipment (PPE) to continue to operate. In the United Kingdom, digital access to supplier information made all the difference in short-term crisis response. The government wanted to identify all suppliers located in specific countries that could supply medical equipment -- such as N95 masks, medical gowns and hand sanitizer -- and that met certain qualifications and requirements. In fact, the U.K. government took the effort a step further, leveraging its knowledge of supply chains to handle not just short-term needs but also to protect against longer-term disruption.
Given the constraints and unpredictability of the supply chain in March, April and May, U.K. officials wanted a list of both PPE manufacturers and distributors. This data would better support decision-making and prevent dependence on distributors that were not able to deliver during this critical time. Additionally, officials wanted the lists of suppliers to be searchable by employee count and revenue so they could tell which manufacturers were most likely to be able to handle the large volumes required. Emphasis was placed on current suppliers and on companies similar in profile to those the government was already working with. Finally, the government wanted to see which suppliers had been "verified" by other large-volume buyers to avoid any problems with fraud.

Using sophisticated yet easy-to-use digital tools and a foundation of clean supplier data, the procurement team quickly sifted through matching supplier candidates, focusing on companies with certifications in quality and security as well as those that met diversity and sustainability criteria. In the end, the U.K. government was able to identify and prequalify over 60,000 suppliers across the categories of PPE considered most critical. On the list were companies large and small, some already known to the U.K. government and some completely new discoveries. Each ultimately became part of a diverse solution that proved critical during the crisis.

One of the most important lessons we are learning from the pandemic is the importance of trusted, agile and transparent information in a crisis. Ready access to trusted suppliers and details about their businesses will support agencies' nuanced decision-making on immediate needs and long-term objectives without having to abandon existing requirements and diversity and community-based goals.
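The filtering described above (manufacturer vs. distributor, employee count, revenue, verification, certifications) is straightforward to express in code. The field names, thresholds, and sample records below are illustrative assumptions, not the U.K. government's actual data model or criteria.

```python
# Sketch of supplier prequalification over clean, structured records.
# All records, field names, and thresholds here are hypothetical.

suppliers = [
    {"name": "Alpha PPE Ltd", "role": "manufacturer", "employees": 1200,
     "revenue_m": 85, "verified": True,  "certs": {"ISO9001"}},
    {"name": "Beta Trading",  "role": "distributor",  "employees": 40,
     "revenue_m": 6,  "verified": False, "certs": set()},
    {"name": "Gamma Medical", "role": "manufacturer", "employees": 300,
     "revenue_m": 22, "verified": True,  "certs": {"ISO9001", "ISO27001"}},
]

def prequalify(records, min_employees=100, min_revenue_m=10):
    """Keep verified manufacturers large enough to handle volume orders."""
    return [
        s for s in records
        if s["role"] == "manufacturer"
        and s["verified"]                     # vetted by other buyers
        and s["employees"] >= min_employees   # proxy for capacity
        and s["revenue_m"] >= min_revenue_m
        and "ISO9001" in s["certs"]           # quality certification
    ]

for s in prequalify(suppliers):
    print(s["name"])  # Alpha PPE Ltd, Gamma Medical
```

The point is not the code itself but the precondition it exposes: none of these filters work unless the underlying supplier data is clean, current, and consistently structured.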
Written by Harry Menear

With the growing power of the cloud and ubiquitous 5G connectivity, the coming decade could see our devices truly step beyond the limitations of physical hardware.

Earlier this year, UK carrier Vodafone and Canonical — the company behind Ubuntu — unveiled a prototype for a new kind of smartphone. Combining virtual machines, cloud storage, and a low-latency 5G connection, the result is "a smartphone running entirely on the cloud while leaving basic functionality on the device a user holds".

By outsourcing the phone's entire Android operating system to a virtual machine in a nearby data centre, this cloud-based smartphone can effectively deliver infinitely scalable computing power through a handset with less on-board hardware than today's most affordable devices and, supposedly, "provides the user with an environment that shows no difference to what they are regularly used to having".

Damian Hanson, Co-Founder & Director of CircleLoop, says: "Cloud-based smartphones could offer consumers an unmatched user experience and may one day replace the physical smartphone altogether as usage grows.

"Current issues with sustainability surrounding telecoms, such as manufacturing CO2 emissions and the use of non-recyclable materials, will also be solved thanks to a higher reliance on cloud-based smartphones. Supply chain issues will also be less problematic as demand for physical smartphone devices drops in favour of a simple login to a cloud-based telephone provider."

The Long Road to Cloud-Based Smartphones

The idea of a smartphone that offloads the bulk of its processing and storage capabilities to the cloud ‒ pulling what the user needs as and when they need it ‒ has been gaining traction for a number of years. Back in 2013, two ex-Googlers, Tom Moss and Mike Chan, founded a company called Nextbit.
The startup launched its debut smartphone, the Robin, on Kickstarter a few years later, with a vision to deliver "seamless cloud-first computing across multiple devices". The Robin came equipped with a paltry 32GB of onboard storage and a plan to use AI-based behavioural analysis to offload underused data ‒ videos, pictures, even entire applications ‒ into unlimited cloud storage, outsourced in a way that optimised performance and reduced the demands placed on the device's internal hardware. The approach is similar to Google's Chromebooks, which host applications in Google's own cloud to reduce the need for large amounts of RAM and powerful chipsets.

The project raised over US$1mn on Kickstarter and then, like many projects ahead of their time, fell quickly off the face of the Earth. The idea of a cloud-based smartphone, however, persisted.

Back in 2020, Qin Fei, Head of Vivo's Communication Research Institute, speculated in an article published in the Manila Standard that, with low enough latencies between the end device and an edge computing data centre, smartphones might effectively be freed from the constraint of carrying all their own hardware.

"Could the future be as simple as a single sheet of glass, which is how artists and science fiction envision the future smartphone?" he asked, wondering if the eventual form factor of a smartphone might be a "pure display device with all processing and intelligence in the cloud".

Raj Shah, North America Industry Lead for Telecom, Media, and Technology at digital consultancy Publicis Sapient, thinks the future form factor of a smartphone might even stop resembling a single device.
"By pushing the computing power to the cloud, smartphones can grow smaller and consume less power, possibly fragmenting into components – one small piece in your pocket or purse for connection, an audio device in your ear, and a visual overlay for AR/VR," he tells me, adding that making smartphones a more integrated and unobtrusive technology, while simultaneously making them more powerful by orders of magnitude, is "a critical step for the future of digital reality. A cloud-based smartphone is a necessary step to an immersive, always-on Metaverse that we believe is coming."

Cloud Gaming: A Blueprint for Cloud-Based Smartphones?

The rapidly expanding cloud gaming sector is probably the best example we have of using low-latency connections to host heavy IT workloads that are streamed in near-real time to the user's device. The idea is that powerful computers run demanding games remotely and stream them to a smartphone with such low latency that twitch-based games, where reaction times are paramount, can still be played effectively.

However, 5G infrastructure isn't quite ready to deliver the kinds of cloud gaming experiences that can compete with local platforms. Sri Iyer, CEO and founder of Game Bench, notes that, "To meet the demands of enthusiasts, the input latency needs to be less than 133 milliseconds, quickening to less than 83 milliseconds for ultra-gamers, yet the best we can currently serve up is 170 to 180 milliseconds, which only caters to basic performance".

Iyer insists, however: "We're not too far away from an exhilarating future where 5G means high-end games can be played convincingly and seamlessly on mobile devices."

The question then arises: if we are entering an age where we can run AAA games on monstrously powerful gaming rigs in a data centre and stream them in near-real time to a smartphone, why can't we also stream the phone's OS, apps, and data in real time as well? The computing power and the software certainly exist.
The issue lies with connectivity.

5G: Is It Fast Enough?

According to David Owen, Managing Director of Communications at Intercity, when it comes to assessing the viability of a cloud-based smartphone, "You can draw parallels with Chromebooks here, which carry just the OS and basic services and then everything is drawn from the cloud. These struggled to take off in a significant way because connectivity in the UK is still patchy and the risk of not being able to work due to this is too great".

Shah agrees that, while "the computing power is certainly available, the latency and coverage needed to be a truly reliable device – even with 5G C-Band rollout – aren't quite there yet".

The need for superfast connectivity also creates some dissonance with one of the biggest selling points of a cloud-based smartphone: fewer, less powerful components, which make it an ideal candidate for the budget market. Budget phones, however, tend to sell best in markets where the quality and reliability of networks are a few years behind those of more affluent countries.

Nevertheless, the gulf between what's possible and what's needed for the dawn of a cloud-based smartphone age is, Owen continues, smaller than it might appear. "We already consume cloud-based services on our phones for work, email, Power BI, calendar, Teams, Zoom – none of these would work without connectivity, so I can see why cloud-based phones might be attractive," he says.
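The latency figures quoted in this piece are easy to sanity-check as a budget: end-to-end input latency is roughly the sum of each stage in the pipeline. The per-stage numbers below are illustrative assumptions chosen to land near the 170 ms Iyer cites; only the 133 ms and 83 ms thresholds come from the article.

```python
# Illustrative end-to-end input-latency budget for cloud gaming.
# Per-stage figures are assumed for demonstration; the thresholds
# are the enthusiast/ultra-gamer targets quoted in the article.

STAGE_LATENCY_MS = {
    "input capture":     5,
    "radio uplink":     20,   # 5G air interface (assumed)
    "transport":        25,   # device <-> edge data centre (assumed)
    "render + encode":  60,
    "radio downlink":   20,
    "decode + display": 40,
}

ENTHUSIAST_MS = 133
ULTRA_MS = 83

total = sum(STAGE_LATENCY_MS.values())
print(f"end-to-end: {total} ms")                        # 170 ms
print("meets enthusiast target:", total <= ENTHUSIAST_MS)  # False
print("meets ultra target:", total <= ULTRA_MS)            # False
```

Framed this way, the gap is concrete: roughly 40 ms has to come out of the pipeline before cloud-streamed twitch gaming (and by extension a fully cloud-streamed phone UI) feels local.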
In recent years, headlines about cyber security have become increasingly common. Thieves steal customer Social Security numbers from corporations' computer systems. Unscrupulous hackers grab passwords and personal information from social media sites or pluck company secrets from the cloud. For companies of all sizes, keeping information safe is a growing concern.

What Is Cyber Security?

Cyber security consists of all the technologies and practices that keep computer systems and electronic data safe. And, in a world where more and more of our business and social lives are online, it's an enormous and growing field with many types of job roles.

According to the Cybersecurity and Infrastructure Security Agency (CISA), "Cyber security is the art of protecting networks, devices and data from unauthorized access or criminal use and the practice of ensuring confidentiality, integrity and availability of information."

What Is Information Security?

Information security is the processes and tools designed and used to protect sensitive business information from modification, disruption, destruction and inspection, according to Cisco.

Information security and cyber security are often confused. According to Cisco, information security is a crucial part of cyber security but is used exclusively to ensure data security. Everything is connected by computers and the internet now, including communication, entertainment, transportation, shopping, medicine and more. A copious amount of personal information is stored among these various services and apps, which is why information security is critical.

Why Is Cyber Security Increasingly Important?

Getting hacked isn't just a direct threat to the confidential data companies need. It can also ruin their relationships with customers and even place them in significant legal jeopardy. With new technology, from self-driving cars to internet-enabled home security systems, the dangers of cybercrime become even more serious.
So, it's no wonder that international research and advisory firm Gartner Inc. predicts worldwide security spending will hit $170 billion in 2022, an 8% increase in just a year.

"We're seeing a tremendous demand for cyber security practitioners," said Jonathan Kamyck, associate dean of cyber security at Southern New Hampshire University (SNHU). "Most businesses, whether they're large or small, will have an online presence, for example. Some of the things you would do in the old days with a phone call or face-to-face now happen through email or teleconference, and that introduces lots of complicated questions with regard to information."

These days, the need to protect confidential information is a pressing concern at the highest levels of government and industry. State secrets can be stolen from the other side of the world. Companies whose whole business models depend on control of customer data can find their databases compromised. In just one high-profile 2017 case, personal information for 147.9 million people – about half the United States – was compromised in a breach of credit reporting company Equifax.

What Are Cyber Attacks?

A cyber attack is an unwelcome attempt to steal, expose, alter, disable or destroy information through unauthorized access to computer systems, according to IBM. There are many motivations behind a cyber attack, such as cyber warfare, cyber terrorism and even hacktivism, but these actions fall into three main categories: criminal, political and personal.

Attackers motivated by crime typically seek financial gain through money theft, data theft or business disruption. Similarly, personal attackers include disgruntled current or former employees who will take money or data in an attempt to attack a company's systems. Socio-politically motivated attackers desire attention for their cause, resulting in attacks that are known to the public; this is a form of hacktivism.
Other motives for cyber attacks include espionage, or spying to gain an unfair advantage over the competition, and the intellectual challenge.

According to CISA, as of 2021 there is a ransomware attack every 11 seconds – a dramatic rise from every 39 seconds in 2019 (CISA PDF Source). In addition, small businesses are the target of nearly 43% of all cyber attacks, which is up 400%. The Small Business Administration (SBA) reports that small businesses make attractive targets and are typically attacked due to their lack of security infrastructure. The SBA also reports that a majority of small business owners felt their business was vulnerable to an attack. This is because many of these businesses:

- Can't afford professional IT solutions
- Have limited time to devote to cyber security
- Don't know where to begin

What Are Types of Cyber Attacks and Threats?

Here are some of the most common threats among cyber attacks:

- Malware: Malware, also known as malicious software, is intrusive software developed by cyber criminals to steal data or to damage and destroy computers and computer systems, according to Cisco. Malware is capable of exfiltrating massive amounts of data. Examples of common malware are viruses, worms, trojans, spyware, adware and ransomware.
- Phishing: Phishing attacks are the practice of sending fraudulent communications that appear to come from a reputable source, according to Cisco. This is typically performed via email or on the phone. The goal is to steal sensitive information such as financial or login information – or to install malware onto a target's device.
- Ransomware: Ransomware is a form of malware designed to encrypt files on a target device, rendering those files and the systems they rely on unusable, according to CISA. Once the system has been encrypted, actors demand ransom in exchange for decryption.
- Viruses: A virus is a harmful program intended to spread from computer to computer, as well as to other connected devices, according to the SBA. The object of a virus is to give the attacker access to the infected systems. Many viruses pretend to be legitimate applications but then damage the systems, steal data, interrupt services or download additional malware, according to Proofpoint.

Who Is Behind Cyber Attacks?

Attacks against enterprises can come from a variety of sources, such as criminal organizations, state actors and private persons, according to IBM. An easy way to classify these attacks is by outsider versus insider threats. Outsider or external threats include organized criminals, professional hackers and amateur hackers (like hacktivists). Insider threats are typically those who have authorized access to a company's assets and abuse them deliberately or accidentally. These threats include employees who are careless about security procedures, disgruntled current or former employees, and business partners or clients with system access.

Developing Cyber Awareness

Cyber security awareness month takes place every October and encourages individuals and organizations to own their role in protecting their cyberspace, according to Forbes, although anyone can practice being mindful of cyber security at any time. Awareness of the dangers of browsing the web, checking emails and interacting online in general is part of developing cyber security awareness. Cyber security awareness can mean different things to different people depending on their technical knowledge. Ensuring appropriate training is available to individuals is a great way to motivate lasting behavioral changes.

While cyber security awareness is the first step, employees and individuals must embrace and proactively use effective practices both professionally and personally for it to truly be effective, according to Forbes.
Getting started with cyber security awareness is easy, and many resources are readily available on the CISA government website based on your needs. Whether you need formal training or a monthly email with cyber security tips and tricks, any awareness and training can impact behavior and create a positive change in how you view cyber security.

What Are the Types of Cyber Security?

Here are the most common types of cyber security:

- Application Security: Application security describes security used by applications to prevent data or code within the app from being stolen or hijacked. These security systems are implemented during application development but are designed to protect the application after deployment, according to VMware.
- Cloud Security: Cloud security involves the technology and procedures that secure cloud computing environments against internal and external threats. These security systems are designed to prevent unauthorized access and keep data and applications in the cloud secure from cyber security threats, according to McAfee.
- Infrastructure Security: Critical infrastructure security describes the physical and cyber systems that are so vital to society that their incapacity would have a debilitating impact on our physical, economic or public health and safety, according to CISA.
- Internet of Things (IoT) Security: IoT is the concept of connecting any device to the internet and other connected devices. The IoT is a network of connected things and people, all of which share data about the way they are used and their environments, according to IBM. These devices include appliances, sensors, televisions, routers, printers and countless other home network devices. Securing these devices is important, and according to a study by Bloomberg, security is one of the biggest barriers to widespread IoT adoption.
- Network Security: Network security is the protection of network infrastructure from unauthorized access, abuse or theft.
These security systems involve creating a secure infrastructure for devices, applications and users to work together, according to Cisco.

Do You Need a Degree To Be a Cyber Security Professional?

A cyber security degree provides an opportunity for students to develop skills and a mindset that empower them to begin a career in securing systems, protecting information assets and managing organizational risks.

Alex Petitto '21 earned his bachelor's in cyber security. Petitto always wanted to work within the IT sector, and he chose cyber security because it's an exponentially growing field. He transferred credits from a community college through a U.S. Air Force program and finished his bachelor's in under two years. "It was much quicker than I thought it would be," he said.

It didn't take long for Petitto to begin exploring his career options. "Even before finishing (my) degree, I … received multiple invites to interview for entry-level positions within the industry and received three job offers," said Petitto. He decided to remain within the Air Force and transfer to a cyber security unit as opposed to joining the private sector.

Petitto said his cyber security degree opened doors for him in the field – "a monumental goal for me," he said. "This degree was a critical first step for breaking into the industry."

Your cyber security degree program can also connect you with experiential learning opportunities to further your growth as a cyber security professional. For example, the annual National Cyber League (NCL) competition lets students from across the U.S. practice real-world cyber security tasks and skills. SNHU recently placed 9th out of over 500 colleges participating in the NCL competition.

Career Opportunity and Salary Potential in Cyber Security

As companies large and small scramble to respond to the growing threats, jobs in the cyber security field are growing fast. The U.S.
Bureau of Labor Statistics (BLS) predicts that employment for information security analysts will grow by 33% through 2030. That's more than twice as fast as the average computer-related occupation and four times as fast as American jobs in general.

To help fill the need for more professionals in the cyber security world, CyberSeek, a project funded by the federal government and supported by industry partners, provides detailed information on the demand for these workers by state. The tool shows that, across the country, there were 180,000 job openings for information security analysts between May 2021 and April 2022, with only 141,000 professionals holding jobs in the role, reflecting an unfilled demand of 39,000 workers.

"There's a huge shortfall right now in entry-level and midlevel cyber security roles," Kamyck said. "You're looking at demand across all business sectors, with companies of all sizes."

CyberSeek lists the following entry-, mid- and advanced-level roles available in the field. Average salaries are based on job openings posted between May 2021 and April 2022.

Entry-level Cyber Security Roles

- Cyber Crime Analyst: Cyber crime analysts make an average salary of $100,000, and common skills necessary for the role include computer forensics, information security and malware engineering.
- Cyber Security Specialist: Cyber security specialists make an average salary of $104,482, and important skills for the role include information security, network security and information assurance.
- Incident and Intrusion Analyst: Incident analysts make an average salary of $88,226, and common skills needed include project management, network security and intrusion detection.
- IT Auditor: Information technology auditors make an average salary of $110,000, and common skills for the role include internal auditing, audit planning, accounting and risk assessment.
Mid-level Cyber Security Roles

- Cyber Security Analyst: Cyber security analysts make an average of $107,500, and the top skills required include information security and systems, network security and threat analysis.
- Cyber Security Consultant: Consultants in cyber security make an average salary of $92,504 and need skills in information security and surveillance, asset protection and security operations.
- Penetration and Vulnerability Tester: Penetration testers make an average salary of $101,091 and need skills in penetration testing, Java, vulnerability assessment and software development.

Advanced-level Cyber Security Roles

- Cyber Security Architect: Cyber security architects make an average salary of $159,752, and top skills for the role include software development, network and information security and authentication.
- Cyber Security Engineer: Cyber security engineers make an average of $117,510 a year and need cryptography, authentication and network security skills.
- Cyber Security Manager: Managers in this field earn an average salary of $130,000, and top skills include project management, network security and risk management.

What Does a Cyber Security Professional Do?

Kamyck said cyber security professionals can play a wide range of roles in a modern company. For example, some small businesses may hire a single person to handle all kinds of work protecting data. Others contract with consultants who can offer a variety of targeted services. Meanwhile, larger firms may have whole departments dedicated to protecting information and chasing down threats.

While companies define roles related to information security in a variety of ways, Kamyck said there are some specific tasks that these employees are commonly called on to do. In many cases, they must analyze threats and gather information from a company's servers, cloud services and employee computers and mobile devices.
"An analyst's job is to find meaning in all of that data, see what's concerning," he said. "Is there a breach? Is someone violating a policy?"

In many cases, Kamyck said, security professionals work with other information technology professionals to ensure a company's systems are secure. That involves not just technical know-how but also people-oriented skills.

But breaches don't just take the form of someone hacking into a server. They can also involve customer lists sent through unencrypted email, a password written on a sticky note in a cubicle or a company laptop stolen from an employee's car.

Depending on their specific role, cyber security professionals must also think strategically. In many industries, companies rely on employees having quick access to highly sensitive data, such as medical records or bank account information. "The goal is to balance the needs of the company or the organization you're working for with the need to protect the confidentiality of customer data and trade secrets," Kamyck said.

Kamyck said people who do well in these jobs tend to be curious, competitive and willing to keep learning to stay up to date with rapidly changing technology. The work draws on multidisciplinary knowledge, and people who continue with the work find there are a variety of directions they can take in their careers.

For example, Kamyck said, if you're interested in the business side, you might become a manager or run audits that let companies know where they need to improve to meet compliance. If you love the adversarial part of the job, you might become a penetration tester, essentially an "ethical hacker" who tests for system vulnerabilities by trying to get through them.

How To Get Into Cyber Security

If you're wondering how to get into cyber security, it's clear there are many positions out there. The question is how to make sure you're a good fit for them.
According to BLS, most information security analyst jobs require at least a bachelor's degree in computer science, information assurance, programming or another related field. In some cases, the work calls for a Master of Business Administration (MBA) in Information Systems. That degree typically takes an additional two years of study and involves both technical and business management courses.

Cyber security job requirements also sometimes include related work experience. Rather than jumping right into the security side of information technology, you can start as a network or computer systems administrator. Depending on the specific cyber security position, employers may have other job requirements. For instance, keeping databases secure might be an ideal job for someone who's spent time as a database administrator and is also well versed in security issues.

Aside from work experience and college degrees, some employers also prefer job candidates who have received certifications demonstrating their understanding of best practices in the field. For example, the Certified Information Systems Security Professional (CISSP) credential validates a professional's general knowledge and abilities in information security. There are also more specific certificates, which can highlight specialized knowledge of computer architecture, engineering or management.

Whatever path new employees in cyber security want to follow, Kamyck said, those who are willing to make an effort to learn the field will find abundant opportunities. "There's needs in government. There's needs in finance. There's needs in education," Kamyck said. "There's a tremendous unfilled need."

Discover more about SNHU's online cyber security degree: Find out what courses you'll take, skills you'll learn and how to request information about the program.

Nicholas Patterson is a writer at Southern New Hampshire University. Connect with him on LinkedIn.
Uninterruptible power supply (UPS) systems are generally thought of as insurance policies for companies and institutions with critical power requirements, such as hospitals, laboratories, data centers, manufacturers, and government, academic, research and transportation facilities, providing a reliable power supply. Using UPS systems as more than emergency backup, and monetizing their use, makes a compelling proposition. Seeing these systems as assets and new revenue generators, with no risk to backup capabilities, introduces a new, strategic way of thinking about UPS capabilities.

At present, the most common method of providing backup power is the use of generators with UPS batteries. The batteries bridge the gap between the power interruption and the point at which the generators produce a stable power supply. This is the traditional model that protects those with critical power requirements from grid failures. Typically, it can take between a few seconds and a few minutes for a generator to reach appropriate production levels. If a generator is not in place, a longer battery backup solution is needed to bridge the time until grid power resumes.

However, grid failure or interruption isn't the only factor energy users need to consider; there are a wide variety of commercial implications to think about, too. First, there are variations in tariffs throughout the day: power from the grid at peak times is more expensive. Secondly, in some countries, the rates charged are based on the maximum consumption in a given period. For example, a manufacturer on a five-day week might have a disproportionate spike in power usage when operating multiple machines or devices at the same time, or when using a high-power device infrequently.
That spike may be several times higher than consumption during the rest of the week, yet even a short-lived peak determines the tariff rate for the entire period.

These challenges can be mitigated using the UPS systems that are already installed. New, specialized software allows energy to be stored when charges are low and used in place of grid power when charges are high. This can happen automatically as part of normal operations whenever surplus battery capacity is available, while sufficient capacity is always preserved for emergency backup. Similarly, energy stored in UPS batteries during low-usage periods can be drawn on to supply extra power at peak times, reducing or eliminating predictable spikes in consumption and lowering the overall tariff.

In addition, UPS batteries can provide extra power for short periods when energy cannot be sourced from the grid. Consider the case of a hospital that needed to install a new scanner. The scanner's inrush power requirement exceeded what the grid connection could provide, though its post-startup operation was within the available capacity, and the hospital's location made it unfeasible to upgrade the energy supply; this is a common problem in cities where infrastructure tends to be stressed. With the new model of UPS application, the hospital can draw on its UPS power during the scanner's inrush phase to complement the grid supply until demand falls. Use cases such as these extend the limits of the grid connection and give the user access to more power than the grid alone can supply, without taking away from the UPS system's emergency functionality.
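The charge-when-cheap, discharge-at-peak logic described above reduces to a simple dispatch rule that always preserves an emergency-backup reserve. The function name, thresholds, and capacities below are illustrative assumptions for this sketch, not any vendor's actual control software:

```python
# Peak-shaving sketch: charge when the tariff is low, discharge at peak,
# and never dip below the reserve needed for emergency backup.
# All figures are illustrative assumptions.

BACKUP_RESERVE_KWH = 40.0   # capacity that must stay untouched
CAPACITY_KWH = 100.0
CHEAP_RATE = 0.10           # $/kWh threshold below which we charge
PEAK_RATE = 0.30            # $/kWh threshold above which we discharge

def dispatch(charge_kwh, tariff, load_kw, step_h=1.0):
    """Return (action, energy_kwh) for one time step."""
    if tariff <= CHEAP_RATE and charge_kwh < CAPACITY_KWH:
        energy = min(CAPACITY_KWH - charge_kwh, 10.0 * step_h)  # 10 kW charger
        return "charge", energy
    if tariff >= PEAK_RATE and charge_kwh > BACKUP_RESERVE_KWH:
        surplus = charge_kwh - BACKUP_RESERVE_KWH
        energy = min(surplus, load_kw * step_h)  # shave the peak
        return "discharge", energy
    return "idle", 0.0

# Example: at a $0.35/kWh peak with 70 kWh stored and a 25 kW load,
# the battery covers the load without touching the reserve.
print(dispatch(70.0, 0.35, 25.0))  # ('discharge', 25.0)
```

In a real deployment this decision would run continuously alongside the UPS controller, with the reserve sized to the site's required backup runtime rather than a fixed constant.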
Adding solar to the mix

The next step in this evolution is to combine the increased capabilities of UPS systems with a renewable energy source. Many organizations with critical power requirements have already installed some level of solar generation as part of wider carbon-reduction goals and to reduce energy costs. When the grid is on, solar power supplements grid energy for operations and charges the UPS batteries. But what happens when the grid is down? Companies may not realize that solar inverters must then be isolated from the grid, which can result in lost energy production. There are solutions that overcome this issue: backup systems that include hardware to isolate the inverters from the grid can maintain solar production during an outage, effectively creating a microgrid.

UPS systems can also help organizations improve their self-consumption of solar power. Energy usage does not always align with the generation profile of a PV system, so energy can be stored in a battery for consumption at a later time instead of either limiting production or feeding it into the grid. Depending on the state, feeding excess energy back to the grid can add to the monetary gains from the UPS and PV systems, further shortening the payback period. One way to store surplus energy is a standalone storage system, but it can be more cost-effective to add extra batteries to the existing UPS system and store the energy there. By adding batteries to the UPS system, this otherwise wasted energy is utilized at a lower cost than a separate storage system would allow, and the UPS system acts as a hybrid system manager. Crucially, this use of solar energy and batteries adds no risk to an organization's UPS provision.
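The self-consumption idea above, routing surplus PV output into spare UPS battery capacity rather than curtailing it, is a simple three-way split per time step. This is a hedged sketch under assumed names and figures, not a real energy-management API:

```python
# Routing one time step of solar output: serve the load first, store the
# surplus in spare UPS battery headroom, and only curtail (or export)
# what doesn't fit. All names and numbers are illustrative.

def route_pv(pv_kw, load_kw, charge_kwh, capacity_kwh, step_h=1.0):
    """Split PV energy between load, battery, and curtailment for one step."""
    to_load = min(pv_kw, load_kw) * step_h
    surplus = max(pv_kw - load_kw, 0.0) * step_h
    headroom = capacity_kwh - charge_kwh
    to_battery = min(surplus, headroom)
    curtailed = surplus - to_battery   # or exported, where tariffs allow
    return {"to_load": to_load, "to_battery": to_battery, "curtailed": curtailed}

# Midday example: 60 kW of PV against a 35 kW load, with 20 kWh of headroom.
print(route_pv(60.0, 35.0, 80.0, 100.0))
# {'to_load': 35.0, 'to_battery': 20.0, 'curtailed': 5.0}
```

A real hybrid controller would also enforce the emergency-backup reserve and charger power limits; this sketch only shows the basic energy-routing priority.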
This is because the energy levels reserved for critical power are automatically monitored, regulated, and preserved. Beyond that reserve, using surplus solar energy cuts costs without adding risk: it maximizes self-consumption when the grid is on and provides backup power when the grid is down.

Putting it together

The integration of flexible PV and UPS solutions changes the whole dynamic of working with energy suppliers and using the grid. An integrated PV and UPS system adds value and reduces costs on top of providing users with energy protection. Longer backup times can be achieved, and the flexibility of allocating batteries to the solar and/or UPS sides of the system delivers further efficiencies and savings, transforming a backup solution from a necessity into an asset.

The impact on critical power

Joining UPS and PV solutions improves the use of existing UPS resources, allowing users to reduce energy costs while still benefitting from uninterrupted power and battery backup. Full integration of the solar PV system with existing UPS provision delivers higher efficiency and further cost reductions. Manufacturers of both solar and UPS systems can design the components to work together seamlessly, with a single controller managing both systems. The systems then know how much solar energy is being produced, how much capacity must be reserved, and the exact prioritization of all applications, providing seamless operation with maximum system availability and the best total cost of ownership (TCO).

Those planning to install or renew a UPS system will always ask about cost, and adapting to this integrated vision requires a new perspective. With a fully integrated solar and UPS solution, however, ROI actually enters the conversation, which is typically not the case with traditional UPS systems. UPS batteries each have an expected lifetime.
After an initial payback time, which depends on tariffs and incentives, the batteries are expected to generate income for many years, and with battery costs continuing to fall, the future ROI will likely continue to improve. Critical power is, and will always be, essential for certain organizations and institutions. As renewable energies, particularly solar, become a larger part of the wider energy mix, the potential they bring when combined with critical power applications, in terms of financial return, uninterrupted operations and, of course, sustainability objectives, can no longer be ignored.
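The payback arithmetic mentioned above is straightforward back-of-envelope math. Every figure in this sketch is an invented assumption, used only to show the shape of the calculation:

```python
# Back-of-envelope payback arithmetic for added UPS battery capacity.
# All figures are illustrative assumptions, not real project numbers.

battery_cost = 20_000.0      # $ for the extra batteries
annual_savings = 4_000.0     # $/year from peak shaving + self-consumption
battery_life_years = 10      # expected battery lifetime

payback_years = battery_cost / annual_savings
net_income = annual_savings * battery_life_years - battery_cost

print(payback_years)  # 5.0  -> income for the remaining 5 years of life
print(net_income)     # 20000.0
```

In practice the annual savings term would itself vary with tariffs and incentives, which is why the article notes that payback time depends on both.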
Source: https://www.missioncriticalmagazine.com/articles/94242-how-upss-can-provide-backup-power-and-an-additional-revenue-stream
The concept of workflows is quite simple: everything must go through a process. For example, say George from Finance has a doctor's appointment next Friday at 10 AM. He has to:

- Pull up his calendar
- Note the exact date it falls on
- Write an email to his supervisor requesting approval to use a personal day
- Wait for approval

Between Steps 3 and 4, his boss has to:

- Verify George's allotted personal days
- Check that he has not used them all
- Log the request date into a database

George's request, the piece of work, flowed through an entire process from start to finish: a workflow.

When Did This Concept Begin?

While the concept of workflows has been around for centuries, Henry Ford introduced the first assembly line in 1913. Prior to this, people built cars unit-by-unit rather than part-by-part, which took longer and cost more. By creating a linear process of work, Ford sped up mass production and transformed the practice of manual labor.

During World War II, there was high demand for the organization of work: draft registration cards, decimal file systems, and classifications of all sorts. Maintaining these levels of record-keeping required added structure in filing and information systems in small offices, which called for optimal workflows and their continual development.

Workflows hit full development in the 1980s, when two major critiques of workflows were addressed. They had been deemed:

- Dehumanizing and suboptimal in their use of human beings
- Inflexible as the conditions of work changed

The workflows that started out as product-oriented processes for making cars more efficiently weren't that efficient once they entered homes and offices. This gave rise to new movements, such as Total Quality Management and Six Sigma, that addressed these problems in the traditional workflow. New workflows left room for individual preferences and customization, and linked planning with execution.
As workflows underwent heavy scrutiny, they developed into the familiar form we have today.

If It's a Process, It Has a Workflow

Even something as simple as a morning routine is a workflow: turn off the alarm, brush teeth, wash face, change clothes (pants, then shirt), put on shoes (right, then left). Every process involves a workflow. When it comes to processes at work, workflow automation allows workers to focus on the more important aspects of their jobs. With workflow automation, organizations can be more efficient, productive and successful.
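George's leave request above can be modeled in a few lines of code. This is a hypothetical sketch of that approval workflow, with made-up names and a plain list standing in for the database, not any workflow product's API:

```python
# Minimal sketch of the personal-day approval workflow described above:
# the request flows from submission, through verification, to a logged
# outcome. Names and structures are illustrative assumptions.

APPROVED, DENIED = "approved", "denied"

def process_request(employee, days_used, days_allotted, log):
    """Run one leave request through the workflow and record the outcome."""
    # Steps: supervisor verifies allotted days and checks none remain unused
    remaining = days_allotted - days_used
    # Approve if at least one personal day is left, otherwise deny
    outcome = APPROVED if remaining >= 1 else DENIED
    # Final step: log the request in the 'database' (a list here)
    log.append((employee, outcome))
    return outcome

log = []
print(process_request("George", days_used=2, days_allotted=5, log=log))  # approved
print(process_request("Ana", days_used=5, days_allotted=5, log=log))     # denied
```

Workflow automation tools generalize exactly this pattern: a defined sequence of checks and hand-offs, executed and logged without anyone writing emails by hand.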
Source: https://www.nintex.com/blog/brief-history-workflow-processes/
A sculpture of a giant tap spewing plastic waste greeted delegates at UN environment talks in Kenya earlier this year — a reminder of the urgent need for them to agree a global pact to curb plastic pollution. The 30-foot tall sculpture was built from rubbish collected in Nairobi’s Kibera slum by artist and activist Benjamin Von Wong, who raised funds for the project by selling non-fungible tokens (NFTs), records of digital images bought with cryptocurrency. Von Wong — with activist Casson Trenor and the Degenerate Trash Pandas, an NFT community that advocates against plastics with the Solana cryptocurrency — raised about $110,000 for the installation that also provided work to about 100 Kibera youth. “Raising funds through cryptocurrency was something new for us,” said Byrones Khainga, director of technical services at Human Needs Project, a nonprofit in Kibera that helped on the installation. “But it is now going to inform how we implement our social welfare activities because we have seen how fast we can move on fundraising,” said Khainga, whose nonprofit tackles problems in Kibera such as garbage disposal and access to drinking water. The project is one of several examples of cryptocurrency and NFTs being used in African nations to fund welfare and development projects related to education, electricity, healthcare, housing and livelihoods. Crypto fundraising has picked up as traditional channels of funding dried up in the wake of the coronavirus pandemic and because of economic slowdowns, said Roselyne Wanjiru, a researcher at Blockchain Association of Kenya, an industry body. “Crypto reduces barriers of entry, and is a fast way of raising funds for social causes because it is easier to navigate than traditional financial systems,” said Wanjiru. “We are seeing more companies and individuals use it to offer solutions to communities.” Cryptocurrencies were designed to be free of central financial authorities such as governments and central banks. 
They allow for “peer-to-peer” transfers between users online without any intermediaries. Their relative anonymity also offers a haven for criminals, extremist groups and sanctioned governments, but champions say they help support marginalised groups and those caught up in crises, even as a sharp downturn in values hurts many users.

Payments and remittances via crypto are rising in Kenya, Nigeria and South Africa, which have among the highest shares of crypto ownership globally, according to the United Nations’ trade agency UNCTAD. About 8.5% of Kenya’s 56mn people own crypto, UNCTAD said, while the Central African Republic adopted bitcoin as an official currency in April.

Virtual coins that use the same underlying blockchain technology as cryptocurrency are also in use in Kenya, like sarafu — meaning currency in Kiswahili — which is issued by the nonprofit Grassroots Economics Foundation. The community currency helps more than 50,000 poor residents who cannot access bank loans to pay for essentials such as food, healthcare and housing.

Also in Kenya, the Celo Foundation and Mercy Corps Ventures this year launched a microwork pilot, giving hundreds of youth access to digital jobs and paying them with Celo dollars, a stablecoin that tracks the value of the US dollar. Microwork is a form of digital labour that breaks up big projects into hundreds of smaller tasks that can be completed on a mobile phone in minutes. In several African countries, more women than men are microworkers, as they can work from home.

But paying workers on time is a challenge: cross-border payments are often slow and carry high transaction fees. With Celo dollars, workers can be paid immediately, for a much smaller fee. On completion of a task, the payment is transferred to their digital wallet, and they can cash out the Celo dollars on M-Pesa, Kenya’s popular mobile money platform.
While stablecoins are seen as less risky than other cryptocurrencies, users can be affected by volatility if they hold on to them rather than cash out immediately, experts say.

Cryptocurrencies can drive financial inclusion by creating new digital employment opportunities and reducing the cost of cross-border payments, said Scott Onder, senior managing director at Mercy Corps Ventures, the venture capital arm of global development agency Mercy Corps. “Cryptocurrency removes this costly barrier and has the potential to create new ways for young people to earn, spend, save and send money,” he said in a statement.

Power, Internet gaps

Reliable access to electricity and the Internet are among the challenges to using digital coins in Africa, and most users are still men, an imbalance that mimics the uneven access to traditional finance. Kenyan choreographer Big Mich, who trains slum youth, including girls, on developing and marketing their dancing skills, aims to change that: she plans to sell her dance moves as NFTs, and use the funds to benefit poor communities. “There are concerns that crypto mining is contributing to global warming because of the huge amount of energy it consumes. But we must not overlook the good things this technology can provide for us,” she said.

In the Kibera slum, the funds raised through the sale of NFTs are also helping create hundreds of permanent jobs, said Von Wong. “NFT communities can be leveraged as a major force for good, filling a major gap in development efforts across the globe,” he said. “Anything that helps make it easier to funnel capital more quickly and inexpensively to those in need is always a good thing.” — Thomson Reuters Foundation

A popular Swiss-based VPN from the same corporate group as Usenet provider Giganews, VyprVPN has a decent-sized network with 700+ servers in 70+ locations across 60+ countries.
These aren't solely focused on Europe and North America, as we often see – VyprVPN has 14 locations in Asia, 5 in the Middle East, 7 in Central and South America, 2 in Africa and 5 in Oceania. Even better, these servers are owned and managed by the company. That means there's no reliance on third-party web hosts, unlike most of the competition. Welcome features include a zero-knowledge DNS service, a customized Chameleon protocol to help bypass VPN blocking, WireGuard support to optimize performance, P2P support across the network, and 24/7/365 customer support to keep the service running smoothly. Wide platform support includes apps for Windows, Mac, iOS, Android, Tomato-based routers, QNAP, Anonabox, Smart TVs and Blackphone. If that's not enough, the website has 30 tutorials to help you set up the service on Chromebooks, Linux, Blackberry, Synology NAS, OpenELEC, Android TV, Apple TV, and via DD-WRT, AsusWRT, OpenWRT and more. Whatever hardware you're using, VyprVPN supports connecting up to 30 devices simultaneously if you sign up via the website, though only 5 if you sign up from the Android or iOS apps. (We're not sure why there's such a huge difference, but if you do have a lot of hardware to protect, keep in mind that Atlas VPN, IPVanish, PureVPN, Surfshark and Windscribe have no fixed connection limits at all.) The website has the usual 'no logging' claims, but unlike most of the competition, you don't have to take these on trust. In 2018, VyprVPN had an independent audit to verify that it doesn't log or share anything about what you're doing online, including session logs, and you can read the report for yourself. VyprVPN hasn't delivered any major updates since our last review, but we noticed a few small app tweaks. 
The most significant is a new Connection Details panel, which displays the latest stats on your current session (length, data uploaded and downloaded, and so on).

VyprVPN's pricing scheme has changed completely since our last review, with a couple of notable differences. The first is that it's all very simple, with the previous jumble of plans replaced by straightforward monthly and annual subscriptions. The second is that it's vastly more expensive. Forget the old 'two months for $12.95' deal; the monthly plan is now $15, one of the most expensive products around. The annual plan is a little better, but still costly at $8.33 a month (an upfront $100), and way more expensive than the '3 years for $65' plan we found last time. Although there are a few providers in the same ballpark price-wise – ExpressVPN asks $8.32 a month, Hotspot Shield $7.99 – most services charge around $4 to $5 a month on their annual plans. You can save even more by signing up for long-term subscriptions. Private Internet Access charges $2.03 a month for the first term of its three-year plan, for instance, and Ivacy's five-year plan is a monthly $1.19 (VyprVPN is seven times more expensive than this). Payment options are limited to card and PayPal.

If you sign up and aren't happy, you're protected by a 30-day money-back guarantee. A few companies give you more – Hotspot Shield and CyberGhost allow 45 days, for example – but 30 days should be long enough to identify any problems.

VyprVPN protects your privacy with well-chosen protocols and industrial-strength encryption. It supports AES-256-GCM and SHA384 HMAC by default for OpenVPN, with TLS-ECDHE-RSA-2048 to provide Perfect Forward Secrecy. (The latter is a smart technique which ensures that a different key is used for every connection, so that even if an attacker obtains a private key somehow, they would only be able to access data in that particular session.)
WireGuard is now supported across all platforms, along with OpenVPN and IKEv2. VyprVPN's custom Chameleon 2.0 protocol has been improved to do an even better job of bypassing aggressive VPN blocking (it's a new option with the iOS app, too, which is good to see). Reports suggest this works well in China, although we don't test this and so can't confirm it. VyprVPN provides an encrypted zero-knowledge DNS service, a handy way to avoid 'man-in-the-middle' attacks, DNS filtering and other snooping strategies. Works for us, although if you're less happy with the idea, the apps also allow you to switch to any third-party service (just enter whatever IP addresses you need). Individual apps have their own privacy-protecting technologies, too, including options to defend against DNS leaks and bundled kill switches to reduce the chance of data leaks if the VPN connection drops. We'll look at these in more detail later.

Even better, you don't have to take VyprVPN's word for this, as in September 2018 the company hired Leviathan Security Group to audit the platform and produce a public report on its logging practices. The results [PDF] are available to all on the VyprVPN website, and make an interesting read. Experts will find a huge amount of detail on how VyprVPN works, and the in-depth testing performed by the auditors (logging in to servers, inspecting running processes, examining source code, and more). Everyone else can simply check the executive summary, which explains that the audit initially found a few limited issues ('from inadvertent configuration mistakes'), but these were 'quickly fixed', and 'as a result, [the audit] can provide VyprVPN users with the assurance that the company is not logging their VPN activity.' While that's great news, and still much more than the majority of VPN providers have done, we hope VyprVPN doesn't stop there.
It's been more than three years since this audit; plenty of time for new problems to have cropped up. TunnelBear has had four annual security audits of its service, and we'd like to see other providers do repeat runs in this way.

Signing up to VyprVPN is easy, and once you've handed over your details, the website points you to an Apps page with a host of useful links. There are downloads for the company's Windows, Mac, Android and iOS apps, the raw Android APK file if you need to install it somewhere manually, and VyprVPN's Chrome browser extension. Setup is easy, and much the same as every other VPN app you've ever installed: download and run the app, follow the instructions, enter your username and password when you're prompted, and essentially, you're ready to go.

You're not restricted to the apps, either. VyprVPN's website has tutorials to help you manually set up the service on Chromebooks, Linux, Synology NAS, OpenELEC, Android TV, Apple TV, and on routers via DD-WRT, AsusWRT, OpenWRT and more. These setup guides are, for the most part, relatively basic. Many are short, with only the bare minimum of text and no screenshots (the Android TV guide says little more than 'you'll need the Android app, get it here or here'). They appear to cover the essentials, though, and should get you connected with minimal hassle.

VyprVPN's Windows VPN client looks and feels much like a mobile VPN app. It consists of a simple opening window that displays your connection state and preferred location, and you can connect or disconnect with a click. A capable location picker lists available locations by country and city, includes ping times to give you an idea of distance, and provides a simple Favorites system to save your commonly used servers. Locations are sorted by country initially, but you can also organize them by continent or ping time. The client supports four protocols: WireGuard, OpenVPN, VyprVPN's proprietary Chameleon, and IKEv2.
Connection times are longer than usual – up to 10 seconds for WireGuard. The best apps average 2-4 seconds, and Atlas VPN managed around a second.

A new Connection Details panel is just a click away, and displays details including your upload and download speeds, the session length, your chosen server, protocol and more. This isn't the most essential of features, but the stats could be useful occasionally, and we're happy to see them here.

A kill switch aims to protect you if the VPN drops, or that's the idea, but it didn't always work that way. If we manually closed an OpenVPN connection, the kill switch kicked in instantly, blocking internet traffic, displaying a warning and giving us an option to reconnect. If we did the same with an IKEv2 connection, though, the kill switch didn't appear to work, and our device used its regular internet connection instead. The app didn't display a 'Disconnected' warning to alert us to the problem, either. Fortunately, it did automatically reconnect within a few seconds, limiting our exposure. We found the kill switch protected us properly on WireGuard connections, which is important, as we suspect most people won't use anything else. But we noticed one or two smaller hassles, with the app again warning us of connection troubles via its own window, rather than using desktop notifications as a clearer alert. VyprVPN's kill switch mostly does its job, then, but doesn't always kick in instantly, and there could be usability issues in some extreme situations.

Elsewhere, a capable Settings dialog can configure the client to connect when Windows starts or the application launches. DNS leak protection reduces the chance of others snooping on your web traffic, and the kill switch is joined by an auto-reconnect system to protect you if the VPN drops. If VyprVPN's zero-knowledge VyprDNS service doesn't suit your needs, you can switch to any other DNS provider you like.
And the app can automatically connect to VyprVPN whenever you access untrusted Wi-Fi networks. That's not just a convenient time-saver: if you must connect to a VPN manually, there's always a chance you'll forget, and leave yourself inadvertently exposed to danger. The kill switch needs just a little work, but otherwise this is a decent Windows app: it's fast, has a strong set of features and is generally easy to use. Mac VPN apps can sometimes be a disappointment when you discover they only have a fraction of the features available on other platforms. But not with VyprVPN – its Mac app is a very close match for the Windows version, and even better in one area. The interface is identical, for instance; straightforward and user-friendly. There's the same location list, sensibly organized, with a Favorites system to help speed up accessing the servers you need. The core settings are the same. There's support for OpenVPN, WireGuard, IKEv2 and Chameleon protocols; auto-connect to automatically protect you when accessing public Wi-Fi; custom DNS options; and a kill switch to protect you if the VPN connection drops. The Mac app has one major feature you won't find on Windows: support for split tunneling allows you to define which apps use the VPN, and which use your regular connection. The app isn't perfect – it's also inherited the same lengthy connection times as the Windows edition, for instance. But overall, this is a capable Mac app, easy to use and offering more functionality than we usually see with Mac clients. VyprVPN's Android VPN app opens with an identical interface to the Windows build. In a tap or two you're able to connect to your nearest server, or choose an alternative from the same location picker as the desktop version. The app has very similar settings to the Windows version, too: a kill switch, DNS leak protection, startup and auto-reconnect options, and the ability to use custom DNS settings. 
Protocol support now includes WireGuard as well as OpenVPN and VyprVPN's own Chameleon. Bonus features include optional URL filtering to protect you from malicious websites. Although we didn't test the effectiveness of the system, we noticed that it gives you more control than most competing services. If you hit a site on the blocklist, for instance, the system doesn't just block it. Instead, it displays a warning, and you can ignore this and proceed to the site if you're sure it's safe.

As on the Mac, there's a Connection Per App feature which enables customizing VPN usage by individual app (other services call this 'split tunneling'). Choose any installed app and you can set it to always use the VPN, or bypass it and use your regular internet connection.

The app has its issues. Connection times were fractionally longer than usual, for instance, and we'd like to have IKEv2 support (although that's less relevant now the much faster WireGuard is here). These aren't major complaints, though, and overall this is an above-average app with a decent feature set, and well worth a place on your Android shortlist.

VyprVPN's iOS app shares much the same look and feel as the rest of the range. Use the service on any other platform and you'll immediately feel at home. Most operations work just as they do with the other apps. A simple location picker makes it easy to find locations by name or speed, and commonly used servers can be saved as favorites for speedy reconnection later.

The iOS app doesn't include all the Android features. In particular, there's no URL blocking and no kill switch. There are relatively few settings, too, although it is possible to set up the app to connect to the VPN whenever you access an untrusted wireless network, or automatically reconnect if the VPN drops unexpectedly, and you can set a custom DNS. There is one welcome addition, though: support for WireGuard, as well as OpenVPN, IKEv2 and VyprVPN's Chameleon.
If you need more control, the VyprVPN support site has instructions on manually setting up OpenVPN, L2TP/IPSec, IKEv2 and even PPTP connections. As with Android, VyprVPN's iOS app isn't exactly packing any killer features, but it's a likeable, user-friendly, and simple way to access VyprVPN from your iDevice. To understand the real-world performance of a VPN, we put every service we review through a series of intensive tests. We use test locations in the US and UK, each with a 1Gbps connection. After connecting to our nearest VPN server, we check speeds at least five times with multiple benchmarking sites and services: SpeedTest.net's website and the command line app, Netflix's Fast.com, TestMy.net and more. Tests are repeated for at least two protocols (where possible), and the full set of tests is repeated across morning and evening sessions, before we crunch the numbers and calculate median speeds. OpenVPN performance proved disappointing, with speeds peaking at 45Mbps (even poor providers typically average 100-200Mbps). Fortunately, VyprVPN doesn't just support the OpenVPN protocol, and switching to WireGuard accelerated our downloads to 340-360Mbps in the UK, 270-420Mbps in the US. That's still on the low side – most VPNs reach 400-600Mbps, Hide.me and TorGuard achieved 900Mbps and higher – but it's likely to be enough for many situations. VPNs often sell themselves on their ability to access geoblocked sites, giving you access to content you wouldn't normally be able to view – VPNs for Netflix have become particularly popular. To test VyprVPN's unblocking technologies, we connected to UK and US locations, then tried to access BBC iPlayer, US Netflix, Amazon Prime Video and Disney Plus. Whatever happened, we disconnected, reconnected, checked we had a different IP address and tried again, just to see if the result might vary depending on our IP. 
VyprVPN only has a single location in the UK, limiting options for unblocking BBC iPlayer, but it successfully allowed us to stream content on all three test connections without any issues at all. US Netflix is a bigger challenge, and VyprVPN failed to get us access from all three of our test locations. The service did better with Disney Plus, allowing us to access US-only content from two of our three test locations. And the results picked up even more as VyprVPN got us into US Amazon Prime Video with every location we chose. Three out of four is a fair performance, but others go further: CyberGhost, ExpressVPN, Hotspot Shield, NordVPN, ProtonVPN and Surfshark all scored full marks in the same unblocking tests.

VyprVPN support starts on its website, where a knowledgebase provides setup instructions, troubleshooting guidance and specific advice for various device types. Browse the site and this looks impressive, at least initially, with plenty of guides covering setting up the service on a wide range of platforms. Unfortunately, when you eventually reach an article, there's usually not much in the way of content. Setup guides are generally stripped back to the essentials, with few (or no) screenshots to help illustrate the points they're trying to make. FAQs can also be very basic, often no more than 'how do I turn on feature x?', with a few lines of text to point users in the right direction. Still, there is some decent content here, and an accurate search system did a good job of finding relevant articles for all our test keywords.

If the website can't help, live chat is available to give you a near-instant response. We only raised one test question, but the support agent was talking to us within a couple of minutes, and gave a helpful and informative response. The final option is to send an email. We raised a simple product question and had a clear response within 15 minutes.
VyprVPN support clearly has some issues, and it's not as thorough or in-depth as top competitors like ExpressVPN. The website does provide basic information on a wide range of topics, though, and with speedy live chat support on hand, it shouldn't take long to get helpful advice on any service problems. VyprVPN isn't the fastest or most powerful VPN out there, nor generally the best VPN, and that's a problem when its prices are so high. Still, the apps are easy to use, with more features than most, and if you could benefit from VyprVPN's firewall-bypassing Chameleon protocol then it may be worth a look. The MarketWatch News Department was not involved in the creation of this content. Jul 31, 2022 (Market Insight Reports) -- Latest Report Down Coat Market by Type (Male Short, Female Short, Male Long, Female Long), By Application (Warming, Fashion, and Others), and Region (North America, Latin America, Europe, Asia Pacific, and the Middle East & Africa), Forecast From 2022 To 2028 The report presents a detailed picture of the Down Coat market through the study, synthesis, and summation of data from multiple sources, analyzing key parameters such as profit, pricing, competition, and promotions. It presents various market facets by identifying the key industry influencers. The data presented is comprehensive, reliable, and a result of extensive research – both primary and secondary. The Down Coat market research reports provide a complete competitive landscape and an in-depth vendor selection methodology and analysis using qualitative and quantitative research to forecast accurate market growth and Down Coat market size across segments. Our experts will help you get valuable insights about Down Coat market share, size, and regional growth prospects.
Sample PDFs of other related market research reports are available at:- https://reportsinsights.com/sample/643611 This report also studies the global Down Coat market competition landscape, market drivers and trends, opportunities and challenges, risks and entry barriers, sales channels, distributors, and Porter's Five Forces. An in-depth analysis and data-driven insights on the impact of COVID-19 are included in this Down Coat market report. It aims at estimating the market size and the growth potential of the market across segments by component, application, organization size, deployment type, and region. The top key vendors in the Down Coat Market include: Beinia, Wantdo, Eddie Bauer, Orolay, Amazon Essentials, Columbia, Cole Haan, Calvin Klein, Cloudy Arch women. Apart from this, the valuable document weighs up the performance of the industry on the basis of product and service, end-use, geography, and end customer. The industry experts have left no stone unturned to identify the major factors influencing the development rate of the Down Coat industry, including various opportunities and gaps. A thorough analysis of the Down Coat market with regard to the growth trends in each category makes the overall study interesting. When studying the Down Coat market, the researchers also dig deep into its future prospects and contribution to the Down Coat industry. Major Product Types covered are: Major Applications of Down Coat covered are: Based on region, the market is segmented into North America, Europe, Asia Pacific, Latin America and Middle East & Africa (MEA). North America region is further bifurcated into countries such as U.S. and Canada. The Europe region is further categorized into U.K., France, Germany, Italy, Spain, Russia, and Rest of Europe. Asia Pacific is further segmented into China, Japan, South Korea, India, Australia, South East Asia, and Rest of Asia Pacific.
Latin America region is further segmented into Brazil, Mexico, and Rest of Latin America, and the MEA region is further divided into GCC, Turkey, South Africa, and Rest of MEA.
Important Features of the report:
- Detailed analysis of the Global Down Coat market
- Fluctuating Down Coat market dynamics of the industry
- Detailed Down Coat market segmentation
- Historical, current, and projected Down Coat market size in terms of volume and value
- Emerging industry trends and developments
- Competitive landscape of the Global Down Coat Market
- Strategies of key players and product offerings
- Potential and niche segments/regions exhibiting promising growth
- A neutral perspective toward Global Down Coat market performance
Sample PDF Report at:- https://reportsinsights.com/sample/643611 Reports Insights is a leading research firm that offers contextual and data-centric research services to its customers across the globe. The firm assists its clients to strategize business policies and accomplish sustainable growth in their respective market domains. The firm provides consulting services, syndicated research reports, and customized research reports. Read More Article: An extensive elaboration of the Worldwide Energy Drinks Market, covering micro-level analysis by competitors and key business segments. The Global Energy Drinks report explores a comprehensive study of various segments like opportunities, size, status, demand, sales and overall growth of major players. The research is carried out on primary and secondary statistics sources and consists of both qualitative and quantitative detailing. Some of the major key players profiled in the study are Red Bull, Monster, Rockstar, Pepsico, Big Red, Arizona, National Beverage, Dr Pepper Snapple Group, Living Essentials Marketing & Vital Pharmaceuticals.
Get Free Sample Pages PDF (Including Full TOC, Table & Figures) @ https://www.htfmarketreport.com/sample-report/3103864-global-energy-drinks-market-2 If you are involved in the industry, or intend to be, this study will provide you a complete perspective. It's crucial to stay up to date with the market, segmented by Applications [Personal, Athlete & Other] and Product Types [General Energy Drinks & Energy Shots], and with the significant players in the business. For more data or any query mail at [email protected] Which market aspects are illuminated in the report? Executive Summary: It covers a summary of the most vital studies, the Global Energy Drinks market growth rate, competitive circumstances, market trends, drivers and problems, as well as macroscopic pointers. Study Analysis: Covers major companies, vital market segments, the scope of the products offered in the Global Energy Drinks market, the years measured and the study points. Company Profile: Each firm profiled in this segment is screened based on its products, value, SWOT analysis, capacity and other significant features.
Manufacture by region: This Global Energy Drinks report offers data on imports and exports, sales, production and key companies in all studied regional markets.
Highlights of Global Energy Drinks Market Segments and Sub-Segments:
Energy Drinks Market by Key Players: Red Bull, Monster, Rockstar, Pepsico, Big Red, Arizona, National Beverage, Dr Pepper Snapple Group, Living Essentials Marketing & Vital Pharmaceuticals
Energy Drinks Market by Types: General Energy Drinks & Energy Shots
Energy Drinks Market by End-User/Application: Personal, Athlete & Other
Energy Drinks Market by Geographical Analysis: Americas, United States, Canada, Mexico, Brazil, APAC, China, Japan, Korea, Southeast Asia, India, Australia, Europe, Germany, France, UK, Italy, Russia, Middle East & Africa, Egypt, South Africa, Israel, Turkey & GCC Countries
For more queries about the Energy Drinks Market report, get in touch with us at: https://www.htfmarketreport.com/enquiry-before-buy/3103864-global-energy-drinks-market-2 The study is a source of reliable data on: market segments and sub-segments, market trends and dynamics, supply and demand, market size, current trends/opportunities/challenges, competitive landscape, technological innovations, and value chain and investor analysis. Interpretative Tools in the Market: The report integrates the thoroughly examined and evaluated information of the prominent players and their position in the market by means of various descriptive tools. Analytical tools including SWOT analysis, Porter's five forces analysis, and return on investment analysis were used while examining the development of the key players performing in the market. Key Developments in the Market: This section of the report incorporates the essential developments of the market, which include assertions, coordinated efforts, R&D, new product launches, joint ventures, and associations of leading participants working in the market.
Key Points in the Market: The key features of this Energy Drinks market report include production, production rate, revenue, price, cost, market share, capacity, capacity utilization rate, import/export, supply/demand, and gross margin. Key market dynamics plus market segments and sub-segments are covered. Basic Questions Answered:
* Who are the key market players in the Energy Drinks Market?
* Which major regions across different trades are expected to witness astonishing growth for the Energy Drinks Market?
* What are the regional growth trends and the leading revenue-generating regions for the Energy Drinks Market?
* What are the major Product Types of Energy Drinks?
* What are the major applications of Energy Drinks?
* Which Energy Drinks technologies will top the market in the next 5 years?
Examine the Detailed Index of the full Research Study at: https://www.htfmarketreport.com/reports/3103864-global-energy-drinks-market-2
Table of Content
Chapter One: Industry Overview
Chapter Two: Major Segmentation (Classification, Application, etc.) Analysis
Chapter Three: Production Market Analysis
Chapter Four: Sales Market Analysis
Chapter Five: Consumption Market Analysis
Chapter Six: Production, Sales and Consumption Market Comparison Analysis
Chapter Seven: Major Manufacturers Production and Sales Market Comparison Analysis
Chapter Eight: Competition Analysis by Players
Chapter Nine: Marketing Channel Analysis
Chapter Ten: New Project Investment Feasibility Analysis
Chapter Eleven: Manufacturing Cost Analysis
Chapter Twelve: Industrial Chain, Sourcing Strategy and Downstream Buyers
Buy the Full Research report of Global Energy Drinks Market at: https://www.htfmarketreport.com/buy-now?format=1&report=3103864 Thanks for reading this article; you can also get individual chapter-wise sections or region-wise report versions like North America, Europe or Asia. HTF Market Report is a wholly owned brand of HTF Market Intelligence Consulting Private Limited.
HTF Market Report, a global research and market intelligence consulting organization, is uniquely positioned to not only identify growth opportunities but also to empower and inspire you to create visionary growth strategies for the future, enabled by our extraordinary depth and breadth of thought leadership, research, tools, events and experience that assist you in making your goals a reality. Our understanding of the interplay between industry convergence, Mega Trends, technologies and market trends provides our clients with new business models and expansion opportunities. We are focused on identifying the "Accurate Forecast" in every industry we cover so our clients can reap the benefits of being early market entrants and can accomplish their "Goals & Objectives".
Contact US: Craig Francis (PR & Marketing Manager) HTF Market Intelligence Consulting Private Limited Unit No. 429, Parsonage Road Edison, NJ New Jersey USA – 08837 Phone: +1 (206) 317 1218 Connect with us at LinkedIn | Facebook | Twitter
COLOMBO, Sri Lanka (AP) — A human rights group said Sunday it had filed a criminal complaint with Singapore's attorney general to seek the arrest of Sri Lanka's former president for alleged war crimes during his country's civil war. Gotabaya Rajapaksa was ousted from office over his country's economic collapse and fled to Singapore earlier this month. He was defense secretary during Sri Lanka's civil war, which ended in 2009. The International Truth and Justice Project — an evidence-gathering organization administered by a South Africa-based nonprofit foundation — said its lawyers filed the complaint requesting Rajapaksa's immediate arrest.
The complaint alleges Rajapaksa committed grave breaches of the Geneva Conventions during the civil war "and that these are crimes subject to domestic prosecution in Singapore under universal jurisdiction." Sri Lanka's economic crisis has left the nation's 22 million people struggling with shortages of essentials, including medicine, fuel and food. Months of protests have focused on the Rajapaksa political dynasty, which has ruled the country for most of the past two decades. "The economic meltdown has seen the government collapse, but the crisis in Sri Lanka is really linked to structural impunity for serious international crimes going back three decades or more," said the ITJP's executive director, Yasmin Sooka. "This complaint recognizes that it's not just about corruption and economic mismanagement but also accountability for mass atrocity crimes," she added. Sri Lanka's civil war killed 100,000 people, according to conservative United Nations estimates. The actual number is believed to be much higher. A report from a U.N. panel of experts said at least 40,000 ethnic minority Tamil civilians were killed in the final months of the fighting alone. Tamil Tiger rebels fought to create an independent state for ethnic minority Tamils. The country's ethnic Sinhala majority credited Gotabaya Rajapaksa and his elder brother Mahinda Rajapaksa with the war victory, cementing the family's political dominance, though accounts of atrocities, autocratic governance and nepotism persisted. Efforts to investigate allegations of war crimes were largely suppressed under Rajapaksa leaders. After Gotabaya Rajapaksa fled the country earlier this month, lawmakers elected Ranil Wickremesinghe to serve the remainder of his presidential term. He declared a state of emergency with broad powers to act to ensure law and order, and a day after he was sworn in, hundreds of armed troops raided a protest camp outside the president's office, attacking demonstrators with batons.
Rights groups have urged the president to immediately order troops and police to cease the use of force and said Friday's display seemed to follow a pattern of Sri Lankan authorities forcefully responding to dissent. The political turmoil has threatened Sri Lanka's potential for economic recovery. Wickremesinghe recently said bailout talks with the International Monetary Fund were nearing a conclusion.
The TV program is also being broadcast on KTN, Kenya; NTV, Uganda; and LNTV, Liberia, and is also posted on all social media channels of Merck Foundation and of KTN, NTV, GH One TV, and LNTV. The eleventh episode of "Our Africa by Merck Foundation" raises awareness of a very important issue: "Supporting Girls' Education". The episode talks about the importance of enabling access to education for girls and how lack of education is the cause of many underlying challenges faced by girls. The past episodes of the show have addressed the Importance of Early Detection and Prevention of Diabetes, Breaking Infertility Stigma, Stopping Child Marriage, Promoting a Healthy Lifestyle, Ending Female Genital Mutilation (FGM), Coronavirus Health Awareness, Stopping Gender Based Violence, Women Empowerment, and Sustainability and Up-cycled Fashion respectively. TV viewers and social media followers across Africa and beyond have shown an outstanding response to the TV program and the issues highlighted in every episode.
Watch the Eleventh Episode promo here: https://youtu.be/OX86nDrSRFM
Watch the Eleventh Episode here: https://youtu.be/jGz6yNSik7g
'Our Africa by Merck Foundation' is a pan-African TV program that is conceptualized, produced, directed, and co-hosted by Senator, Dr Rasha Kelej, CEO of Merck Foundation, to feature African fashion designers, singers, and prominent experts from various domains with the aim to raise awareness and create a culture shift across Africa. The show is co-hosted by Brian Mulondo from Uganda.
Senator, Dr Rasha Kelej, CEO of Merck Foundation expressed, "We have been receiving great feedback from our followers and viewers of 'Our Africa by Merck Foundation' TV program. I would like to thank you all for the immense love you are showering on us and for acknowledging our efforts. Through our TV program, we have been addressing different social and health issues through our 'Fashion and Art with Purpose' Community, making it informative and entertaining at the same time."
The show is broadcast on the following TV channels:
- Every Saturday @ 5:30 pm (EAT) on KTN, Kenya; re-run on Wednesday @ 6:30 am (EAT)
- Every Saturday @ 6 pm (GMT) on LNTV, Liberia; re-run on Sunday @ 4:30 pm (GMT)
- Every Sunday @ 6:30 pm (EAT) on NTV Uganda; re-run on Thursday @ 4:00 pm (EAT)
- Every Sunday @ 2 pm (GMT) on GH One TV, Ghana; re-run on Monday @ 1:30 pm (GMT)
Watch the Promo of 'Our Africa by Merck Foundation' here: https://m.youtube.com/watch?v=_RIoIMbFd2Q
The Eleventh episode features prominent personalities like Nontando Mposo, the Editor-In-Chief of Glamour Magazine from South Africa, and popular singer Blaze from Mozambique. Fashion designers Alberto from Mozambique and Anuja Bharti from Ghana, who is also the winner of the Merck Foundation Fashion Awards 2020, showcased their designs that displayed strong messages on "Yes To Girls Education" and "Girls, Not Brides". "I strongly believe in girl education. When girls are educated, their countries become more powerful, stronger and prosperous. I realize there's a need for more support as there are many brilliant girls out there who are struggling financially and socially to meet their educational needs. Therefore, we started "Educating Linda", a pan-African program that is tailored for each country to contribute to the future of these girls as part of the Merck Foundation "More Than a Mother" Campaign," explained Senator Kelej.
Through the 'Educating Linda' program, Merck Foundation has been supporting the education of some underprivileged but brilliant girls by providing scholarships and grants that can cover school fees, school uniforms and other essentials including notebooks, pens and mathematical instruments, so they can reach their potential and pursue their dreams. Other than the "Educating Linda" program, Merck Foundation has also announced the MARS Awards to appreciate and recognize the 'Best African Women Researchers' and 'Best Young African Researcher'. The aim is to empower women and young African researchers, advance their research capacity and promote their contribution to STEM (Science, Technology, Engineering and Mathematics). Senator, Dr Rasha Kelej further emphasized, "In partnership with the African First Ladies, we have been building healthcare capacity through providing training to healthcare providers in many medical specialties. Out of the total 1334 scholarships, more than 590 scholarships have been provided to female doctors in critical and underserved specialties. This is a great achievement for us." Moreover, Merck Foundation has released many inspiring children's storybooks and songs for supporting girls' education.
1. Watch, share and subscribe to "Tu Podes Sim", a Portuguese song which means "Yes, You Can" in English, by Blaze and Tamyris Moiane, singers from Mozambique, here: https://youtu.be/BGWR2S-mxl4
2. Watch, share and subscribe to "ABC, 123" by Sean K from Namibia here: https://youtu.be/4Z2i4Wh-bpk
3. Watch, share and subscribe to the "Girl Can" song here, sung by two famous singers, Irene and Cwezi, from Liberia and Ghana respectively: https://youtu.be/6LP92vAWYgs
4. Watch, share and subscribe to the "Like Them" song here, sung by Kenneth, a famous singer from Uganda: https://www.youtube.com/watch?v=jCo52vtz3Q0
5.
Watch, share and subscribe to the "Take me to School" song here, sung by Wezi, Afro-soul singer from Zambia, to support girls' education: https://www.youtube.com/watch?v=rWcujLMbKSg
6. Read Jackline's Rescue Storybook here: https://merck-foundation.com/merckfoundation/public/uploads/digital_library/1639990408_efbd2346fb16c605c12d.pdf
7. Read Educating Linda Storybook here: https://merck-foundation.com/merckfoundation/public/uploads/digital_library/1623068469_6affa28d861b48da41cf.pdf
8. Read Ride Into The Future here: https://merck-foundation.com/merckfoundation/digital_library.pdf
"I am very excited to bring to you the upcoming episode of 'OUR AFRICA by Merck Foundation' TV program. So, stay tuned and be ready to Get informed, Get healthy, and Get entertained!" concluded Senator, Dr Rasha Kelej. One can watch all the past episodes by referring to the links below:
Watch Episode 1 here: https://www.youtube.com/watch?v=fz1S1Dlugkc
Watch Episode 2 here: https://youtu.be/g5wpzOr22l0
Watch Episode 3 here: https://youtu.be/BONCtUJZLHI
Watch Episode 4 here: https://www.youtube.com/watch?v=Ok6_B8EKNks
Watch Episode 5 here: https://youtu.be/RqobIDOHc4E
Watch Episode 6 here: https://youtu.be/7GtXkBYv_94
Watch Episode 7 here: https://www.youtube.com/watch?v=OCiS_r5y1zM
Watch Episode 8 here: https://youtu.be/hFIHJ39Wd98
Watch Episode 9 here: https://youtu.be/YH3DKwHuvsM
Watch Episode 10 here: https://youtu.be/FXkB6sYb2Rw
Click on the link below to get the Merck Foundation App: https://www.merck-foundation.com/MF_StoreRedirection
Join the conversation on our social media platforms below and let your voice be heard
Facebook: Merck Foundation
Twitter: @Merckfoundation
YouTube: MerckFoundation
Instagram: Merck Foundation
Flickr: Merck Foundation
Website: www.merck-foundation.com
This story is provided by BusinessWire India. ANI will not be responsible in any way for the content of this article.
(ANI/BusinessWire India) There was a time when a new version of Windows was a really big deal, such as the launch of Windows 95, for which the tones of the Rolling Stones' Start Me Up could be heard across all manner of media outlets. Gradually over the years this excitement has petered out, finally leaving us with Windows 10 that would, we were told, be the last ever version of the popular operating system and thence only receive continuous updates. But here we are in 2021, and a new Windows has been announced. Windows 11 will be the next latest and greatest from Redmond, but along with all the hoopla there has been an undercurrent of concern. Every new OS comes with a list of hardware requirements, but those for Windows 11 seem to go beyond the usual in their quest to cull older hardware. Aside from requiring Secure Boot and a Trusted Platform Module that's caused a run on the devices, they've struck a load of surprisingly recent processors, including those in some of their current Surface mobile PCs, off their supported list, and it's reported that they will even require laptops to have front-facing webcams if they wish to run Windows 11. It makes absolute sense for a new operating system to lose support for legacy hardware; after all, there is little point in providing for owners of crusty old Pentiums or similar. The system requirements dropping support for 32-bit cores, for example, mirrors Windows 95's abandonment of the 286 and earlier chips that had run the previous version, Windows 3.1. But in this case it seems as though they have wielded the axe a little too liberally, because a lot of owners of not-too-ancient and certainly still pretty quick hardware will be left in the cold. In the past there were accusations of a Microsoft/Intel duopoly that revolved around the chipmaker and OS vendor conspiring to advance each other's products, and some commentators have revived the idea for this launch.
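The gatekeeping described above is essentially a checklist: a machine passes only if every baseline requirement is met and its CPU appears on Microsoft's supported list. Here is a toy sketch of that logic; the CPU set is an illustrative stand-in, not the official list, and Microsoft's real PC Health Check tool considers more factors:

```python
# Toy model of Windows 11 eligibility: 64-bit CPU, TPM 2.0, Secure Boot,
# enough RAM, and membership of a supported-CPU allowlist.
# The allowlist here is a made-up subset for illustration only.
SUPPORTED_CPUS = {"Intel Core i5-8400", "AMD Ryzen 5 2600"}

def win11_eligible(spec: dict) -> bool:
    """Return True only if every baseline requirement is satisfied."""
    return (spec.get("arch") == "x86_64"
            and spec.get("tpm_version", 0) >= 2.0
            and spec.get("secure_boot", False)
            and spec.get("ram_gb", 0) >= 4
            and spec.get("cpu") in SUPPORTED_CPUS)

# A fast but older machine fails purely on the CPU allowlist,
# which is exactly the complaint the article describes:
old_but_quick = {"arch": "x86_64", "tpm_version": 2.0, "secure_boot": True,
                 "ram_gb": 16, "cpu": "Intel Core i7-920"}
print(win11_eligible(old_but_quick))  # False
```

The point the sketch makes is that an allowlist, unlike a minimum-spec threshold, excludes hardware regardless of how capable it actually is.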
A comparison between the 1990s and the present isn't an easy one to make though, because the difference between the capabilities of a 386 desktop of 1990 and a Pentium 3 of 1999, through a decade in which Moore's Law was at its height, is so much more than, for example, that between the first Intel i7 and the latest one. Is this simply Microsoft's attempt to break with the need for so much of the backwards compatibility in which Windows is mired, and define a new PC for the 2020s? It will be interesting to see when the OS does finally land whether or not it will in fact run on some of the lesser machines, simply without official support. It's likely a greater-than-average number of Hackaday readers are already users of alternative operating systems such as GNU/Linux, but expecting an ordinary Windows user to install a Linux distro on their machine is a pipedream. Perhaps the real impact of the Windows 11 launch will be a large and slowly dwindling Windows 10 population, and a new mountain forming in the e-waste breaking centres of the developing countries who can least afford to deal with the consequences. I think that a new OS should have a better legacy than that.
Jul 28, 2022 (The Expresswire) -- Long Term Food Storage Market Size Trend 2022: Analysis, Growth, Share, Status and Forecast 2027 | 102 Pages | By Type (Off Site Storage, On Site Storage), By Application (Civilian Retailers, NASA, Military), By Geography (North America (United States, Canada and Mexico), South America, Europe (Germany, France, UK, Russia and Italy), Asia-Pacific (China, Japan, Korea, India and Southeast Asia), Middle East and Africa (Saudi Arabia, Egypt, Nigeria and South Africa)) | Industry Trends 2022
List of TOP key players in the Long Term Food Storage Market report: ● My Food Storage ● Emergency Essentials ● Legacy Premium ● EFoods Direct ● Blue Chip Group ● Wise Company ● Katadyn Group ● OFD Food ● Astronaut Foods ● Freeze-Dry Foods Ltd ● Valley Food Storage
Global Long Term Food Storage Market Outlook 2022
The global Long Term Food Storage market size was valued at USD 480.42 million in 2021 and is expected to expand at a CAGR of 2.87% during the forecast period, reaching USD 569.24 million by 2027. Long term food storage refers to food that is dehydrated and dried, or freeze-dried, so that it can be stored longer. The report combines extensive quantitative analysis and exhaustive qualitative analysis, ranging from a macro overview of the total market size, industry chain, and market dynamics to micro details of segment markets by type, application and region, and, as a result, provides a holistic view of, as well as a deep insight into, the Long Term Food Storage market covering all its essential aspects. For the competitive landscape, the report also introduces players in the industry from the perspective of the market share, concentration ratio, etc., and describes the leading companies in detail, with which the readers can get a better idea of their competitors and acquire an in-depth understanding of the competitive situation.
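The headline figures quoted above can be checked with the standard compound-annual-growth-rate arithmetic, using the report's own numbers (USD 480.42 million in 2021, USD 569.24 million in 2027):

```python
# CAGR sanity check on the report's figures: six compounding years
# between the 2021 valuation and the 2027 forecast.
start, end, years = 480.42, 569.24, 2027 - 2021

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.2%}")  # matches the quoted 2.87%

# Re-projecting with the quoted rate lands within rounding of 569.24:
projected = start * (1 + 0.0287) ** years
print(f"Projected 2027 size: {projected:.2f}M")  # ~569.3M
```

So the quoted CAGR and the 2027 forecast are mutually consistent to within rounding of the published rate.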
Further, mergers and acquisitions, emerging market trends, the impact of COVID-19, and regional conflicts will all be considered. In a nutshell, this report is a must-read for industry players, investors, researchers, consultants, business strategists, and all those who have any kind of stake or are planning to foray into the market in any manner.
Global Long Term Food Storage Market: Drivers and Restraints
The research report has incorporated the analysis of different factors that augment the market's growth. It constitutes trends, restraints, and drivers that transform the market in either a positive or negative manner. This section also provides the scope of different segments and applications that can potentially influence the market in the future. The detailed information is based on current trends and historic milestones. This section also provides an analysis of the volume of production for the global market, for each type, and by region, from 2015 to 2027. Pricing analysis is included in the report according to each type from the year 2015 to 2027, manufacturer from 2015 to 2020, region from 2015 to 2020, and global price from 2015 to 2027. A thorough evaluation of the restraints included in the report portrays the contrast to drivers and gives room for strategic planning. Factors that overshadow the market growth are pivotal, as they can be understood to devise different bends for getting hold of the lucrative opportunities that are present in the ever-growing market. A holistic study of the market was formed by considering a range of factors, from demographic conditions and business cycles in particular countries to market-specific microeconomic impacts. The study examines the shift in market paradigms in terms of regional competitive advantage and the competitive landscape of major players.
Downstream demand, as well as upstream raw materials and equipment, are additionally analyzed. With tables and figures helping analyse the worldwide Global Long Term Food Storage Market Forecast, this research provides key statistics on the state of the industry and should be a valuable source of guidance and direction for companies and individuals interested in the market. On the basis of product, this report displays the production, revenue, price, market share and rate of growth of each type, primarily split into ● Off Site Storage ● On Site Storage On the basis of the end users/applications, this report focuses on the status and outlook for major applications/end users, consumption (sales), market share and rate of growth for each application, including ● Civilian Retailers Get a sample PDF of the report @ https://www.360marketupdates.com/enquiry/request-sample/21366612 Major regions covered within the report: ● North America ● Europe ● Asia-Pacific ● Latin America ● Middle East & Africa Yes. As the COVID-19 pandemic and the Russia-Ukraine war are profoundly affecting the global supply chain relationship and raw material price system, we have definitely taken them into consideration throughout the research, and in Chapters 1.7, 2.7, 4.X.1, 7.5, 8.7, we elaborate at full length on the impact of the pandemic and the war on the Long Term Food Storage Industry. With the aim of clearly revealing the competitive situation of the industry, we concretely analyze not only the leading enterprises that have a voice on a global scale, but also the regional small and medium-sized companies that play key roles and have plenty of growth potential. Please find the key player list in the Summary. Both primary and secondary data sources were used while compiling the report. Primary sources include extensive interviews of key opinion leaders and industry experts (such as experienced front-line staff, directors, CEOs, and marketing executives), downstream distributors, as well as end-users.
Secondary sources include research into the annual and financial reports of the top companies, public files, news journals, etc. We also cooperate with some third-party databases. Please find a more complete list of data sources in Chapters 11.2.1 and 11.2.2.

Yes, the report can be customized. Multi-dimensional, deep-level and high-quality customization can help our customers precisely grasp market opportunities, effortlessly confront market challenges, properly formulate market strategies and act promptly, winning them sufficient time and space for market competition.

Fill the Pre-Order Enquiry form for the report @

Major Points from Table of Contents:

1 Long Term Food Storage Market Overview
1.1 Product Overview and Scope of Long Term Food Storage Market
1.2 Long Term Food Storage Market Segment by Type
1.2.1 Global Long Term Food Storage Market Sales Volume and CAGR (%) Comparison by Type (2017-2027)
1.3 Global Long Term Food Storage Market Segment by Application
1.3.1 Long Term Food Storage Market Consumption (Sales Volume) Comparison by Application (2017-2027)
1.4 Global Long Term Food Storage Market, Region Wise (2017-2027)
1.4.1 Global Long Term Food Storage Market Size (Revenue) and CAGR (%) Comparison by Region (2017-2027)
1.4.2 United States Long Term Food Storage Market Status and Prospect (2017-2027)
1.4.3 Europe Long Term Food Storage Market Status and Prospect (2017-2027)
1.4.4 China Long Term Food Storage Market Status and Prospect (2017-2027)
1.4.5 Japan Long Term Food Storage Market Status and Prospect (2017-2027)
1.4.6 India Long Term Food Storage Market Status and Prospect (2017-2027)
1.4.7 Southeast Asia Long Term Food Storage Market Status and Prospect (2017-2027)
1.4.8 Latin America Long Term Food Storage Market Status and Prospect (2017-2027)
1.4.9 Middle East and Africa Long Term Food Storage Market Status and Prospect (2017-2027)
1.5 Global Market Size of Long Term Food Storage (2017-2027)
1.5.1 Global Long Term Food Storage Market Revenue Status and Outlook (2017-2027)
1.5.2 Global Long Term Food Storage Market Sales Volume Status and Outlook (2017-2027)
1.6 Global Macroeconomic Analysis
1.7 The Impact of the Russia-Ukraine War on the Long Term Food Storage Market

2 Industry Outlook
2.1 Long Term Food Storage Industry Technology Status and Trends
2.2 Industry Entry Barriers
2.2.1 Analysis of Financial Barriers
2.2.2 Analysis of Technical Barriers
2.2.3 Analysis of Talent Barriers
2.2.4 Analysis of Brand Barriers
2.3 Long Term Food Storage Market Drivers Analysis
2.4 Long Term Food Storage Market Challenges Analysis
2.5 Emerging Market Trends
2.6 Consumer Preference Analysis
2.7 Long Term Food Storage Industry Development Trends under COVID-19 Outbreak
2.7.1 Global COVID-19 Status Overview
2.7.2 Influence of COVID-19 Outbreak on Long Term Food Storage Industry Development

3 Global Long Term Food Storage Market Landscape by Player
3.1 Global Long Term Food Storage Sales Volume and Share by Player (2017-2022)
3.2 Global Long Term Food Storage Revenue and Market Share by Player (2017-2022)
3.3 Global Long Term Food Storage Average Price by Player (2017-2022)
3.4 Global Long Term Food Storage Gross Margin by Player (2017-2022)
3.5 Long Term Food Storage Market Competitive Situation and Trends
3.5.1 Long Term Food Storage Market Concentration Rate
3.5.2 Long Term Food Storage Market Share of Top 3 and Top 6 Players
3.5.3 Mergers and Acquisitions, Expansion

4 Global Long Term Food Storage Sales Volume and Revenue Region Wise (2017-2022)
4.1 Global Long Term Food Storage Sales Volume and Market Share, Region Wise (2017-2022)
4.2 Global Long Term Food Storage Revenue and Market Share, Region Wise (2017-2022)
4.3 Global Long Term Food Storage Sales Volume, Revenue, Price and Gross Margin (2017-2022)
4.4 United States Long Term Food Storage Sales Volume, Revenue, Price and Gross Margin (2017-2022)
4.4.1 United States Long Term Food Storage Market Under COVID-19
4.5 Europe Long Term Food Storage Sales Volume, Revenue, Price and Gross Margin (2017-2022)
4.5.1 Europe Long Term Food Storage Market Under COVID-19
4.6 China Long Term Food Storage Sales Volume, Revenue, Price and Gross Margin (2017-2022)
4.6.1 China Long Term Food Storage Market Under COVID-19
4.7 Japan Long Term Food Storage Sales Volume, Revenue, Price and Gross Margin (2017-2022)
4.7.1 Japan Long Term Food Storage Market Under COVID-19
4.8 India Long Term Food Storage Sales Volume, Revenue, Price and Gross Margin (2017-2022)
4.8.1 India Long Term Food Storage Market Under COVID-19
4.9 Southeast Asia Long Term Food Storage Sales Volume, Revenue, Price and Gross Margin (2017-2022)
4.9.1 Southeast Asia Long Term Food Storage Market Under COVID-19
4.10 Latin America Long Term Food Storage Sales Volume, Revenue, Price and Gross Margin (2017-2022)
4.10.1 Latin America Long Term Food Storage Market Under COVID-19
4.11 Middle East and Africa Long Term Food Storage Sales Volume, Revenue, Price and Gross Margin (2017-2022)
4.11.1 Middle East and Africa Long Term Food Storage Market Under COVID-19

5 Global Long Term Food Storage Sales Volume, Revenue, Price Trend by Type
5.1 Global Long Term Food Storage Sales Volume and Market Share by Type (2017-2022)
5.2 Global Long Term Food Storage Revenue and Market Share by Type (2017-2022)
5.3 Global Long Term Food Storage Price by Type (2017-2022)
5.4 Global Long Term Food Storage Sales Volume, Revenue and Growth Rate by Type (2017-2022)
5.4.1 Global Long Term Food Storage Sales Volume, Revenue and Growth Rate of Off Site Storage (2017-2022)
5.4.2 Global Long Term Food Storage Sales Volume, Revenue and Growth Rate of On Site Storage (2017-2022)

6 Global Long Term Food Storage Market Analysis by Application
6.1 Global Long Term Food Storage Consumption and Market Share by Application (2017-2022)
6.2 Global Long Term Food Storage Consumption Revenue and Market Share by Application (2017-2022)
6.3 Global Long Term Food Storage Consumption and Growth Rate by Application (2017-2022)
6.3.1 Global Long Term Food Storage Consumption and Growth Rate of Civilian Retailers (2017-2022)
6.3.2 Global Long Term Food Storage Consumption and Growth Rate of NASA (2017-2022)
6.3.3 Global Long Term Food Storage Consumption and Growth Rate of Military (2017-2022)

7 Global Long Term Food Storage Market Forecast (2022-2027)
7.1 Global Long Term Food Storage Sales Volume, Revenue Forecast (2022-2027)
7.1.1 Global Long Term Food Storage Sales Volume and Growth Rate Forecast (2022-2027)
7.1.2 Global Long Term Food Storage Revenue and Growth Rate Forecast (2022-2027)
7.1.3 Global Long Term Food Storage Price and Trend Forecast (2022-2027)
7.2 Global Long Term Food Storage Sales Volume and Revenue Forecast, Region Wise (2022-2027)
7.2.1 United States Long Term Food Storage Sales Volume and Revenue Forecast (2022-2027)
7.2.2 Europe Long Term Food Storage Sales Volume and Revenue Forecast (2022-2027)
7.2.3 China Long Term Food Storage Sales Volume and Revenue Forecast (2022-2027)
7.2.4 Japan Long Term Food Storage Sales Volume and Revenue Forecast (2022-2027)
7.2.5 India Long Term Food Storage Sales Volume and Revenue Forecast (2022-2027)
7.2.6 Southeast Asia Long Term Food Storage Sales Volume and Revenue Forecast (2022-2027)
7.2.7 Latin America Long Term Food Storage Sales Volume and Revenue Forecast (2022-2027)
7.2.8 Middle East and Africa Long Term Food Storage Sales Volume and Revenue Forecast (2022-2027)
7.3 Global Long Term Food Storage Sales Volume, Revenue and Price Forecast by Type (2022-2027)
7.3.1 Global Long Term Food Storage Revenue and Growth Rate of Off Site Storage (2022-2027)
7.3.2 Global Long Term Food Storage Revenue and Growth Rate of On Site Storage (2022-2027)
7.4 Global Long Term Food Storage Consumption Forecast by Application (2022-2027)
7.4.1 Global Long Term Food Storage Consumption Value and Growth Rate of Civilian Retailers (2022-2027)
7.4.2 Global Long Term Food Storage Consumption Value and Growth Rate of NASA (2022-2027)
7.4.3 Global Long Term Food Storage Consumption Value and Growth Rate of Military (2022-2027)
7.5 Long Term Food Storage Market Forecast Under COVID-19

8 Long Term Food Storage Market Upstream and Downstream Analysis
8.1 Long Term Food Storage Industrial Chain Analysis
8.2 Key Raw Materials Suppliers and Price Analysis
8.3 Manufacturing Cost Structure Analysis
8.3.1 Labor Cost Analysis
8.3.2 Energy Costs Analysis
8.3.3 R&D Costs Analysis
8.4 Alternative Product Analysis
8.5 Major Distributors of Long Term Food Storage Analysis
8.6 Major Downstream Buyers of Long Term Food Storage Analysis
8.7 Impact of COVID-19 and the Russia-Ukraine War on the Upstream and Downstream in the Long Term Food Storage Industry

9 Players Profiles
9.1 My Food Storage
9.1.1 My Food Storage Basic Information, Manufacturing Base, Sales Region and Competitors
9.1.2 Long Term Food Storage Product Profiles, Application and Specification
9.1.3 My Food Storage Market Performance (2017-2022)
9.1.4 Recent Development
9.1.5 SWOT Analysis
9.2 Emergency Essentials
9.2.1 Emergency Essentials Basic Information, Manufacturing Base, Sales Region and Competitors
9.2.2 Long Term Food Storage Product Profiles, Application and Specification
9.2.3 Emergency Essentials Market Performance (2017-2022)
9.2.4 Recent Development
9.2.5 SWOT Analysis
9.3 Legacy Premium
9.3.1 Legacy Premium Basic Information, Manufacturing Base, Sales Region and Competitors
9.3.2 Long Term Food Storage Product Profiles, Application and Specification
9.3.3 Legacy Premium Market Performance (2017-2022)
9.3.4 Recent Development
9.3.5 SWOT Analysis
9.4 EFoods Direct
9.4.1 EFoods Direct Basic Information, Manufacturing Base, Sales Region and Competitors
9.4.2 Long Term Food Storage Product Profiles, Application and Specification
9.4.3 EFoods Direct Market Performance (2017-2022)
9.4.4 Recent Development
9.4.5 SWOT Analysis
9.5 Blue Chip Group
9.5.1 Blue Chip Group Basic Information, Manufacturing Base, Sales Region and Competitors
9.5.2 Long Term Food Storage Product Profiles, Application and Specification
9.5.3 Blue Chip Group Market Performance (2017-2022)
9.5.4 Recent Development
9.5.5 SWOT Analysis
9.6 Wise Company
9.6.1 Wise Company Basic Information, Manufacturing Base, Sales Region and Competitors
9.6.2 Long Term Food Storage Product Profiles, Application and Specification
9.6.3 Wise Company Market Performance (2017-2022)
9.6.4 Recent Development
9.6.5 SWOT Analysis
9.7 Katadyn Group
9.7.1 Katadyn Group Basic Information, Manufacturing Base, Sales Region and Competitors
9.7.2 Long Term Food Storage Product Profiles, Application and Specification
9.7.3 Katadyn Group Market Performance (2017-2022)
9.7.4 Recent Development
9.7.5 SWOT Analysis
9.8 OFD Food
9.8.1 OFD Food Basic Information, Manufacturing Base, Sales Region and Competitors
9.8.2 Long Term Food Storage Product Profiles, Application and Specification
9.8.3 OFD Food Market Performance (2017-2022)
9.8.4 Recent Development
9.8.5 SWOT Analysis
9.9 Astronaut Foods
9.9.1 Astronaut Foods Basic Information, Manufacturing Base, Sales Region and Competitors
9.9.2 Long Term Food Storage Product Profiles, Application and Specification
9.9.3 Astronaut Foods Market Performance (2017-2022)
9.9.4 Recent Development
9.9.5 SWOT Analysis
9.10 Freeze-Dry Foods Ltd
9.10.1 Freeze-Dry Foods Ltd Basic Information, Manufacturing Base, Sales Region and Competitors
9.10.2 Long Term Food Storage Product Profiles, Application and Specification
9.10.3 Freeze-Dry Foods Ltd Market Performance (2017-2022)
9.10.4 Recent Development
9.10.5 SWOT Analysis
9.11 Valley Food Storage
9.11.1 Valley Food Storage Basic Information, Manufacturing Base, Sales Region and Competitors
9.11.2 Long Term Food Storage Product Profiles, Application and Specification
9.11.3 Valley Food Storage Market Performance (2017-2022)
9.11.4 Recent Development
9.11.5 SWOT Analysis

10 Research Findings and Conclusion

11.2 Research Data Source

Purchase this report (Price 3250 USD for a Single-User License) - https://www.360marketupdates.com/purchase/21366612

Organization: 360 Market Updates
Phone: +1 424 253 0807 / +44 20 3239 8187

Press Release Distributed by The Express Wire

To view the original version on The Express Wire visit: Long Term Food Storage Market is estimated to register a Strong Growth at CAGR of 2.87% during Forecasts 2022-2027 | 102 Pages

The MarketWatch News Department was not involved in the creation of this content.
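As a quick arithmetic sanity check on the headline figure, a 2.87% CAGR compounded over the five-year 2022-2027 window implies roughly 15% total market growth. A minimal sketch (the report's actual base-year market values are not given here, so the numbers are purely illustrative):

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate implied by a start and end value."""
    return (end_value / start_value) ** (1 / years) - 1

# At the headline 2.87% CAGR, five years of compounding grows the
# market by about 15% overall.
total_growth = (1 + 0.0287) ** 5  # ≈ 1.152
```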
There are few terms more widely misunderstood in the world of information security than the word 'hacking'. Although it's used in a variety of contexts, it's most commonly used to refer to all types of cyber crime, including everything from fraud and industrial espionage to identity theft and spamming. If you take this view, cyber crimes are the deeds of 'hackers'. In reality, hackers do far more good than harm. Many are researchers who practice a form of ethical hacking driven by a desire to improve the state of information security.

Ethical hackers are the 'white hats' of security. They use everything from port scanning to breaking and entering to simulate an attack against networks and systems, usually with the consent of their targets. Software companies such as SAP owe a huge debt to white hats. Many of the vulnerabilities patched by SAP Security Notes are discovered not by SAP, but by independent researchers who are far more adept at finding vulnerabilities than SAP itself. In the past, white hats would publish details about vulnerabilities as soon as they were discovered. Today, most follow SAP's Disclosure Guidelines. As a result, very few vulnerabilities are publicized until they are patched by SAP. Whether or not this is in the interest of SAP customers is open to debate. It could be argued that this reduces the incentive for SAP to properly patch its software. A case in point is a session hijacking vulnerability in the Enterprise Portal which wasn't patched until 18 months after it was reported to SAP.

White hats rule the roost of information security. One step below are the black hats, who most closely resemble the stereotypical image of hackers portrayed in pop culture. Black hats use the same tactics as white hats but differ in their motives, which are generally malicious. Most are driven by the need for notoriety or personal gain, although some are motivated by more noble goals such as social justice. The latter are often referred to as 'hacktivists'.
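To make the 'port scanning' mentioned above concrete, here is a minimal sketch of a TCP connect scan in Python. The host, port range, and timeout are purely illustrative, and you should only ever scan systems you are authorized to test:

```python
import socket

def scan_ports(host, ports, timeout=0.2):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising an exception
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Scanning your own machine is a safe way to experiment.
    print(scan_ports("127.0.0.1", range(8000, 8010)))
```

Real scanners such as Nmap are far more sophisticated (SYN scans, service fingerprinting, timing evasion), but the core idea is exactly this: probe each port and record which ones answer.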
It's difficult to stall an attack by talented and determined black hats. The only approach that provides any glimmer of hope is the tried-and-tested defense-in-depth strategy, which may buy enough time to detect a breach before any real damage is done, or encourage attackers to direct their efforts towards other, less well defended targets outside your network.

White hats look down upon black hats, but the two groups have much in common. Firstly, they are both skilled in the art of finding and exploiting vulnerabilities. Secondly, they're partial to challenges and venerate well-constructed code as a thing of beauty. Thirdly, both white hats and black hats frown upon script kiddies, or skiddies for short.

Skiddies are the hillbillies of the information security world. They don't look down upon anyone, since they're at the rock bottom of the totem pole. Skiddies are considered social pariahs since they have no appreciation of the concepts and tools of information security. Their sole purpose is to exploit vulnerabilities discovered by black hats.

Black hats take pride in their work. Targets are carefully selected and attacks are meticulously planned. They go to great lengths to cover their tracks. Skiddies, on the other hand, blindly execute scripts developed by black hats, hoping to catch victims that happen to be susceptible to whatever vulnerability they're targeting at a given moment in time. Despite this, skiddies should not be underestimated. They far outnumber black hats. They also have an uncanny ability to learn about new exploits long before they're patched. This is fuelled by IRC (Internet Relay Chat) and online trading of zero-day exploits. The proliferation of easy-to-use security tools with point-and-click interfaces has dumbed down hacking and turned the tide in favor of skiddies. Many programming or configuration flaws in systems such as SAP don't require any technical skill to exploit.
Therefore, relying upon security through obscurity no longer works, especially when systems are public-facing. Intense, focused attacks led by black hats are destructive but far less likely than a random strike performed by a skiddie. However, the latter will quickly reveal vulnerabilities in a poorly patched SAP environment.
A child's independence is a primary goal of every parent. Raising self-reliant and autonomous kids is as much a part of parenting as nurturing and protecting the child are. It is a frequent observation that many parents pamper and raise overly dependent children, making it difficult for the young ones to step out of the protective cocoon. The major blow comes when kids turn into adults and face unanticipated challenges. A capable and independent child is not only advantageous for the kid but is a huge relief for parents swamped with countless work and household chores.

Why Do Parents Need to Raise Self-Sufficient Kids?

Inevitably, all the household responsibilities fall solely on the parents' shoulders. And with the fast pace of corporate jobs, balancing work and personal life (kids included) can be tricky. To ease their burden, parents are always looking for a helping hand amid their packed work schedules, and who better than the kids themselves? Independent teenagers are capable of taking care of themselves and their younger siblings. They ease parents' constant juggle between work and family and are competent enough to fend for themselves in the absence of adults.

Bringing up less dependent kids lessens the burden on parents and increases confidence in children. So experts advise parents to shift to more independent parenting, which encourages an autonomous attitude under the watchful eyes of parents. Hey parents, it's time to instill the self-confidence to deal with problems and situations independently, using these 7 basic tricks.

7 Ways of Teaching Your Child to Be Independent

Start early, start when they're toddlers

Going back a few years, kids had fewer problems to tackle. Now, with growing technology and rising competition in all fields, kids cannot afford to lag in any scenario. It is beyond any doubt that a capable and independent child handles demanding tasks better and achieves appropriate goals in life.
To make success a reality for kids, start to nurture them early. Begin with simple tasks like choosing the color of a school bag, storybooks, breakfast, etc., and gradually, with age, you may find them taking more significant decisions. No rule book specifies the age at which a child should become independent. Parents should start the lessons as soon as they can, at the earliest age possible.

Allow participation in decision-making

How often do parents ask the kids' opinions before buying fruit? Or choosing the color of the curtains? Rarely, right? According to several experts, children should have a say in everyday tasks and offer an opinion on household choices. Allow children to offer their opinion and feedback on different issues. Parents can start involving them in trivial decisions like:
- the color of the new car,
- a friend's birthday gift,
- clothes, and many similar things.

Parents should stop imposing their opinion in every matter and allow children to have a say. It is not necessary to give kids the last word on all decisions, but ensure their suggestions count to a large extent.

There is no denying that today's jobs exhaust parents and leave little time to take care of household matters. In this situation, it is fair to say that kids' participation is a blessing for parents and a new activity in a kid's life. No one is asking children to take on the burden of banking or plumbing. Simple tasks like laundry, serving dinner, and cleaning utensils are duties that children can easily undertake. Keeping kids away from these responsibilities now will not only make them reluctant but also make it difficult for them to take up new tasks in the future. Just as little drops of water make a mighty ocean, starting with a few simple tasks can bring up accountable children.

Encourage summer jobs and start pocket money

It is a well-known theory that kids like to imitate the adults they meet every day.
And using real money to purchase a few candies and toys can be more fascinating than we realize. Rather than letting them off with one-time cash, switch to monthly pocket money. Even summer jobs and part-time jobs are an excellent means of earning good money. And once kids start handling real money, they seek advice on savings and on how to purchase their dream bicycle. Parents should willingly step in and offer the necessary information on savings and meeting future goals.

Don't solve their problems

Problem-solving is as essential a part of growing up as decision-making. However, many parents, accustomed to solving problems for their children, never leave a chance for kids to find resolutions. It is time to be an attentive parent but let them find solutions to many matters themselves. Just as math problems need time and patience to find the answer, adults need to master those two skills to encourage a problem-solving attitude. Parents can definitely step in when required, but only after they have allowed kids enough attempts to think.

Allow room for making mistakes

Growing up is directly proportional to making mistakes. And for an independent-natured child, a bunch of errors is compulsory. Mistakes have two distinct effects on children:
- Learning from the mistakes
- Staying grounded at all times

Mistakes teach an individual more than their successes do. It is essential to note that kids should get enough opportunities to commit supervised errors and learn from them. They will not only avoid making the same blunder twice but stay humble in life by accepting imperfections in themselves and others.

Boost their efforts by appreciating a good job

Praising a good job is far more effective than praising the person. Every effort recognized by parents is an added certificate for the kid. If your kid observes that his/her effort is appreciated, they will automatically take more chances and try again without hesitation!
Are parents ready to let their children off the leash?

There is a fine line between independent children and obedient children. Either parents or kids misunderstand it, and things fall out of control. To avoid such situations, parents usually neglect to emphasize self-reliant habits. They make their best efforts to raise obedient children who listen to their parents' advice. It is important to note that in the process of producing obedient kids, your child does not receive the appropriate exposure to challenges lurking in the world. If you want to keep an eye on them, tools like parental control apps are readily available in the market, but independence is not an attitude your kids will gain on their own. Give kids the chance to explore the world and make mistakes under a parent's supervision. A child's independence not only benefits adults but, most importantly, prepares children to face external challenges in the absence of parents.
If there's one area of the house you've learned to avoid when you want to stream a show, play a game, or scroll on your phone, you might have a dead spot. This is an area your WiFi signal can't reach. Fortunately, there's a solution called a WiFi extender. These inexpensive devices can enhance your WiFi coverage, helping prevent dead spots and poor coverage.

A WiFi extender expands your existing WiFi network, helping you achieve better coverage and improve connectivity. They work best in limited areas where your WiFi coverage may be spotty, like remote bedrooms, garages, or backyard areas. If you want to improve coverage at home, WiFi extenders are often an affordable solution. In this guide, learn what a WiFi extender is, how a WiFi extender works, how far a WiFi extender works, and what to look for in a device.

What is a WiFi extender?

A WiFi extender is a little device that can help spread your WiFi signal to the difficult-to-reach spots in your house. WiFi extenders are portable but powerful enough to help broadcast your WiFi signal to areas of your home without coverage.

How a WiFi extender works

A WiFi extender connects first to your router via a wireless connection. Once it connects to your router, it becomes part of your overall network. Using radio waves, the WiFi extender converts and redistributes your internet connection as a wireless signal from each access point, so you can access it from the parts of your house that are normally dead zones. Sometimes you will need a smartphone or computer to complete the setup for an extender and connect it to your router.

Each WiFi extender may create its own network name and password, but most allow you to override the default and copy your existing WiFi network name and password. If they share the same network name, your internet-connected devices, like your phone or computer, will determine which signal is stronger and will connect to it automatically.
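That "connect to whichever signal is stronger" decision can be sketched as a simple comparison of received signal strength (RSSI). This is a simplification: real clients also apply roaming thresholds and hysteresis so they don't flap between access points, and the tuple format below is purely illustrative:

```python
def pick_strongest(scan_results):
    """Pick the access point with the strongest signal.

    `scan_results` is a list of (bssid, ssid, rssi_dbm) tuples for access
    points advertising the same network name. RSSI is negative dBm, so the
    value closest to zero is the strongest signal.
    """
    return max(scan_results, key=lambda entry: entry[2])

# Example: the distant router is weaker (-70 dBm) than the nearby
# extender (-45 dBm), so the client associates with the extender.
scan = [
    ("aa:bb:cc:dd:ee:01", "HomeNet", -70),  # router, two rooms away
    ("aa:bb:cc:dd:ee:02", "HomeNet", -45),  # extender, same room
]
chosen = pick_strongest(scan)
```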
WiFi extender bands

Just like WiFi routers, WiFi extenders come in single- or dual-band versions. In dual-band versions, there are two frequencies: 2.4 GHz and 5 GHz. The 2.4 GHz band offers a wider coverage area and can penetrate walls and other solid objects better, while 5 GHz offers faster speeds and is less prone to interference.

How far does a WiFi extender work?

Your mileage may vary. Just like your main WiFi signal, your WiFi extender may be impacted by the distance between devices, the construction of your home, and the number of devices connected to your internet. Some extenders offer tools, such as smartphone apps or LED indicators, that help you find the location with optimal WiFi coverage.

Can I move a WiFi extender?

Yes, you can move a WiFi extender around your home to better distribute your WiFi signal based on your needs. The best place to set up a WiFi extender is right at the midpoint between your dead spot and your wireless router.

How is a WiFi extender different from a mesh WiFi system?

Mesh WiFi systems also work to improve the WiFi signal around your home. To set up a mesh WiFi system, you will install several "nodes" or satellites throughout your home. Your devices will connect to the nodes for a WiFi signal as you move through your home.

While a mesh WiFi system may sound similar to a WiFi extender, they do serve slightly different functions. WiFi extenders work best when devices aren't competing for connection. Your device may actually jump from network to network if you use too many WiFi extenders or as you move through the home, which can disrupt your coverage. Use a WiFi extender to expand coverage to a stationary device or to a room or area at one end of the house. This will reduce any competition for coverage from your devices. Mesh systems create whole-home WiFi coverage.
The completely interconnected nodes are controlled by the logic of the mesh system; as you move through your home, your devices will seamlessly transition from node to node for the best performance. WiFi extenders create better coverage in certain areas of the home, while mesh WiFi systems create whole-home coverage. Mesh systems tend to be on the more expensive side, while WiFi extenders are an affordable option for many people. It all depends on the type of coverage you need in your home.

What to look for in a WiFi extender

First, make sure that whatever WiFi extender you choose will work with your router. Some devices are not compatible with certain routers, while others, particularly older models, may require a wired or electrical connection to work. Next, pick an extender that is at least as fast as or faster than your router. WiFi standards typically come as 802.11ac/ax, so be sure your extender works with the same standard listed in the specifications of your router.

You should also consider the range offered by the WiFi extender, to ensure that you get the coverage you need in your home, backyard, or any other space you want to bring a WiFi signal to. If you want to minimize WiFi connections, you can consider an extender that also has an Ethernet port. That way, you can use a wired connection for devices like desktop computers or gaming consoles, extending the coverage over Ethernet.

WiFi extenders work best in smaller areas with lower WiFi coverage, like your patio or basement. For best results, try to limit the number of WiFi extenders you use in your home. If you want to add several extenders, it may be better to consider a mesh network instead.

Now that you know how a WiFi extender works, you can get more out of your internet connection. You can also avoid dead spots and enjoy better coverage throughout your home. For more on getting and staying connected, check out these articles:
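Returning to the band tradeoff mentioned earlier: even in free space, before any walls are considered, the higher 5 GHz carrier loses more signal over the same distance. The standard free-space path-loss formula quantifies this (distance in km, frequency in MHz; the 10 m example distance is illustrative):

```python
import math

def free_space_path_loss_db(distance_km, frequency_mhz):
    """Free-space path loss in dB for a given distance and carrier frequency."""
    return 20 * math.log10(distance_km) + 20 * math.log10(frequency_mhz) + 32.44

# At the same 10 m distance, the 5 GHz band loses about 6.4 dB more than
# 2.4 GHz, i.e. less than a quarter of the received power arrives.
loss_24ghz = free_space_path_loss_db(0.010, 2400)
loss_5ghz = free_space_path_loss_db(0.010, 5000)
```

Walls and floors attenuate 5 GHz further still, which is why the guide recommends 2.4 GHz for reach and 5 GHz for speed at closer range.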
Top 6 Benefits of the Cloud

The cloud is likely the most valuable form of computing in the IT space today. After all, cloud services support the remote working model that's become critical to the contemporary business climate in which people routinely work from home and decentralized locations. While many professionals depend on the cloud to do their jobs every day, some still don't know much about the cloud and its potential.

Cloud computing is the most modern, efficient, and cost-effective means of accessing data resources currently available. It has many benefits, allowing people to access business applications and systems, as well as collaborate and work together at any time, regardless of where they are. These advantages are essential for continuing business operations as the workforce grows more distributed — and remote interactions become more necessary.

Looking into the cloud

With the cloud, resources are virtually accessed, off-premises, through different hosts. Although it's possible to have private clouds hosted by individual companies, most organizations choose popular public vendors such as Amazon Web Services or Microsoft Azure. Almost every internet activity takes place via the cloud, including standard tasks such as sending emails or backing up information on mobile devices. Video conferences, social media sites, and interactive applications such as Google Docs and others — these are examples of software-as-a-service (SaaS), a computing model in which users or companies subscribe to use a product or service. People don't buy cloud resources, such as storage for paid email accounts. They pay for them on an ongoing basis as a service. The array of services available with this model is impressive, including front-office or back-office tools such as business intelligence or automation software for processes used in finance and accounting, HR, operations, and other groups.
Those tools include Robotic Process Automation (RPA)-as-a-Service (considered part of SaaS offerings). There's also cloud-based infrastructure needed to support IT teams, such as web servers, infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), various storage forms, and more.

What the cloud offers

The benefits of this architecture largely pertain to how cloud services work and include:

- Time to value: With the cloud, businesses can be up and running much faster than with on-premises solutions because there is nothing to download or install. Businesses don't have to wait to purchase and set up all their infrastructure. On-premises setup can take a long time, especially for deployments at scale. Additionally, cloud automation technologies, such as RPA, can accelerate common business functions.
- Business agility: Cloud enables companies to move fast and adapt to changes in the business landscape. In cases of business disruption such as natural disasters, or the current situation where companies became remote overnight, the cloud provides business continuity so that systems can remain up and running even with unexpected outages.
- Accessibility: Implicit to the architecture is the fact that its resources can be accessed from anywhere there's an internet connection, any time users want them. This advantage is extremely useful for remote access, distributed workforces, and working from home.
- Ease of adoption: Cloud democratizes access to application and workplace systems. The accessibility of cloud services is terrific for remote work and collaboration. Users of the services can log in without any installations or cumbersome instructions and enjoy the intuitive experience associated with web-based interfaces. People can access the same database, document, or project management platform from wherever they want to effectively work together as though they were in the same office.
Ease of access and ease of collaboration are key characteristics of cloud solutions, and they have become important operating principles in the new normal of business conduct.

- Scalability: The capability to scale quickly, cheaply, and both horizontally (to include more data sources or locations from which they’re accessed) and vertically is one of the premier benefits of cloud computing. This advantage is critical for the large quantities of big data needed for most AI use cases.
- Low total cost of ownership: The ownership and maintenance costs of this form of computing are much lower than the costs of on-premises deployments. Instead of dealing with the upfront capital expenses of purchasing and maintaining hardware, servers, and networking components, as well as an IT staff to install and maintain them, organizations can rent these resources from providers as monthly operating expenses.

The top choice

Many of the disruptions of the current business climate can be overcome by working in the cloud. This approach is the most inexpensive means of managing IT resources, working remotely, and collaborating in distributed settings. These benefits are essential for continuing operations in difficult circumstances, and even for optimizing them.
COVID-19 Contact-Tracing App Must-Haves: Security, Privacy

Governments Have One Chance to Earn Users' Trust, Says Security Expert Alan Woodward

As the COVID-19 pandemic continues, many nations have introduced - or announced plans to introduce - smartphone-based contact-tracing apps to help fight the virus. But while such programs may have public health benefits, hundreds of scientists and researchers, in an open letter, have warned that they could facilitate "unprecedented surveillance of society at large" unless these apps get rolled out in a transparent and open manner, with security and privacy safeguards in place (see: Contact-Tracing Apps Must Respect Privacy, Scientists Warn).

"Everybody accepts that extraordinary times call for extraordinary measures, but that has to be done in a measured way and you have to … have this public debate about the risk," says Alan Woodward, a signatory to the letter who's a visiting professor at England's University of Surrey. "Our motivation behind all of this is not that we're all privacy nuts and we think that the government's going to be spying on us, although some governments will doubtless try to use this for that. It's that people have to trust this."

In this video interview with Information Security Media Group, Woodward discusses:

- Why manual contact-tracing methods are too slow to combat COVID-19;
- The limits of Bluetooth for tracking physical location and duration of contact;
- Centralized versus decentralized approaches to contact-tracing apps;
- Balancing security, privacy, engineering, usability and epidemiological concerns while rolling out a public-health technology project of unprecedented scale.

In addition to his role as visiting professor at the department of computing at the University of Surrey in England, Woodward is an adviser to TeenTech, which encourages teenagers to pursue careers in the fields of science, engineering and technology. He is also an academic cybersecurity adviser to Europol.
In part one of the forest authentication blog post, we saw that a particular path is used depending on Kerberos or NTLM authentication. We also saw that domain controllers rely on other domain controllers of the forest to find the right domain (and thus the right object in AD). The question now is: which domain controller of the other forest is used to authenticate the user? What happens during trust creation, and do we really need the PDC emulator? Will LMHOSTS still help us, like it did in the old days? We will answer those questions in this series on authentication across trusts, in part 2, part 3, and so on.

First, a little drawing of the infrastructure used for this and the next few posts:

In the drawing above, we see two forests, forestroot and oceanfloor. These forests are going to trust each other using a ‘normal’ trust. To establish the trust, each domain controller has a conditional forwarder for DNS set up to point to DNS servers of the other forest. So, when we perform an NSLOOKUP on the forestroot domain controller to find the oceanfloor forest we get:

> oceanfloor.local
Server: fdc01.forestroot.local
Address: 172.16.6.31

Name: oceanfloor.local
Address: 172.16.5.196

And from oceanfloor.local we get the IP addresses of all the domain controllers for forestroot (172.16.5.31, 172.16.5.32, 172.16.5.33, 172.16.5.34). Now, although all those addresses are within the same subnet, they are split across different sites (just for this demo I've used /32 sites).

Figure 1: Forestroot.local Sites

Now let's assume datacenter 1 is located in Amsterdam, datacenter 2 is located in New York, one of the branch sites is somewhere in South Africa, and the BRANCH-SITE site is a site with a read-only domain controller (RODC). When we requested the domain name in DNS, we got ALL domain controllers, including the South African one (but not the RODC).
When we request the SRV records for the forestroot domain, we receive the following. Note that all domain controllers are registered and returned. So what does this mean? If we do not take care of a few things, for a user who is authenticating over the trust, the authentication could end up on ANY domain controller listed above. In this example that is nothing to worry about, since all domain controllers are well connected, but what if the OCEANFLOOR domain is closer to Datacenter 2? Can we force the cross-forest authentication towards that datacenter, so that every user who needs to be authenticated across the forests does not cross the physical ocean WAN line?

To discover whether we can force that, we need to find out how a domain controller (in the case of NTLM) or a client (in the case of Kerberos) finds domain controllers in the other domain. But of course we need to create a trust first. How is that done, and how do domain controllers find each other during the creation of a trust?

An external trust only allows for NTLM authentication. So we create a trust between the two domains, being an external trust. We open Domains and Trusts and create an external trust to the forestroot domain from the oceanfloor domain, while running a packet capture. The packet capture shows something funny that has to be taken into account:

Info: _ldap._tcp.ATLANTIC._sites.dc._msdcs.forestroot.local: type SRV, class IN

While everyone would expect the PDC to be targeted, this is NOT the case. So there we have lesson number one:

!The DNS query for domain information is NOT to the PDC service record!

And immediately we have lesson number two:

!During the setup of a trust, the CURRENT site of the DC is looked up in the other forest BEFORE a generic query takes place!

The generic DNS service record lookup

Since our site plan for forestroot does not have such a site, we get an error back from the DNS server indicating it has no such record.
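The lookup order seen in the capture can be sketched as a small helper that builds the SRV query names a client would try; the function name and its shape are illustrative only, not any real Windows API:

```python
def dc_srv_queries(forest, client_site=None):
    """Build the DNS SRV names used to locate domain controllers:
    the client's own site-specific record is tried first, and the
    generic (site-less) record is the fallback."""
    names = []
    if client_site:
        names.append(f"_ldap._tcp.{client_site}._sites.dc._msdcs.{forest}")
    names.append(f"_ldap._tcp.dc._msdcs.{forest}")
    return names

# The oceanfloor DC, sitting in its ATLANTIC site, looking up forestroot:
queries = dc_srv_queries("forestroot.local", client_site="ATLANTIC")
```

The first name matches the packet capture above; because forestroot has no site by that name, the resolver returns no record and the client falls back to the second, generic name.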
Next, a query for the generic service records is performed:

Info: _ldap._tcp.dc._msdcs.forestroot.local: type SRV, class IN

Now that query does receive an answer, just like the manual NSLOOKUP did. The response includes all domain controllers and service records, as we saw before. Lesson number three can be learned here: each service record has a priority and a weight, and manipulating these values can influence the results received, and thus influence the next steps. See also Jorge's blog for DNS optimization.

Next, we see that ALL of the LDAP domain controller SRV records received are used: the OCEANFLOOR domain controller fires an LDAP lookup towards all domain controllers in the following packets. Now comes the fun part: the FIRST domain controller to respond to this searchRequest gets to be the lucky winner, and, just like in the real world, the responses that come in late are disregarded. In my case, 172.16.6.32 is the fastest domain controller to respond to the request with a successful lookup.

Now you were probably expecting the next query to target the PDC emulator, but no: an SMB connection is attempted to 172.16.6.32 (referred to as FDC02.forestroot.local); however, logon failures are shown.

Note: No packets to 172.16.6.31 (FDC01.forestroot.local), which is the PDC of forestroot.local, are sent or received.

But this is only the first half of the trust; we enabled the trust incoming and outgoing on the OCEANFLOOR.local domain. Now we must enable it on forestroot as well and see what happens. For the time being, I'm creating the other end of the trust on FDC02.forestroot.local. Again we see the site-specific query from FDC02.forestroot.local, where FDC02's site is DATACENTER2:

Dns: QueryId = 0x77C, QUERY (Standard query), Query for _ldap._tcp.DATACENTER2._sites.dc._msdcs.OCEANFLOOR.local of type SRV on class Internet

The rest is about the same, up to the point of the SMB2 connection.
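How priority and weight shape the outcome can be sketched with the selection rule from RFC 2782, which DNS SRV consumers are expected to follow: only the lowest-priority group of records is considered, and within that group a target is picked weighted-randomly. This is an illustrative sketch, not the actual Windows DC-locator code, and the record values below are hypothetical:

```python
import random

def pick_srv_target(records, rng=random.random):
    """records: list of (priority, weight, host) tuples.
    Only the lowest-priority group is considered; within it, a host
    is chosen with probability proportional to its weight."""
    lowest = min(priority for priority, _, _ in records)
    group = [(w, host) for priority, w, host in records if priority == lowest]
    total = sum(w for w, _ in group)
    if total == 0:            # all weights zero: order doesn't matter
        return group[0][1]
    point = rng() * total     # a point on the cumulative weight line
    running = 0
    for weight, host in group:
        running += weight
        if point < running:
            return host
    return group[-1][1]

# Give fdc02 all the weight at priority 0, and park the distant DC
# at a higher (= less preferred) priority:
records = [(0, 100, "fdc02"), (0, 0, "fdc03"), (10, 100, "fdc-za")]
```

With these values, fdc-za would only be consulted if every priority-0 host were unreachable, which is exactly the kind of steering the lesson above is about.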
This time (because the trust passwords ARE now present and the same on both ends) the connection IS successful.

So what have we learned? When we create a trust, the PDC emulators do not really come into play, as all traffic flows from domain controller to domain controller based on DNS information. We can manipulate this information to speed up and optimize the process, either by manipulating the DNS records themselves, OR by adding another site to our forest with the exact site name of the site containing the domain controller(s) we want to target!
What is a ZIP, GZIP, or TAR file?

A ZIP (.zip) file is (usually) a compressed archive, a GZIP (.gz) file is a single file compressed with gzip, and a TAR (.tar) file is an uncompressed archive. All of these file types are used for archiving: they make storing and moving collections of data easier.

Although compressed tarballs (e.g. .tar.gz) reduce file size, they can be challenging to work with. The compression tool does not compress individual files but treats the whole tarball as one big stream, so you cannot extract a single file without reading through the archive, and already-compressed files such as GIFs and JPGs gain very little from the additional compression.

The disadvantage of a ZIP file is that it does not compress across files: each file is compressed independently, so redundancy between files is not exploited. The advantage is that you can access any file by reading only a specific (target-file-dependent) section of the archive, because the catalog (the central directory) is stored separately from the file data.
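To make the stream-versus-catalog distinction concrete, here is a small Python sketch (file names and contents are made up) that builds a TAR and a ZIP of the same data in memory, then reads a single member back out of the ZIP without extracting the rest:

```python
import io
import tarfile
import zipfile

# Two small "files" to archive (hypothetical names and contents).
files = {"a.txt": b"hello " * 100, "b.txt": b"world " * 100}

# TAR: an uncompressed container -- the payload plus 512-byte
# headers and block padding per member, no compression at all.
tar_buf = io.BytesIO()
with tarfile.open(fileobj=tar_buf, mode="w") as tar:
    for name, data in files.items():
        info = tarfile.TarInfo(name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

# ZIP: each member deflated independently, plus a central directory
# that lets a reader jump straight to any one member.
zip_buf = io.BytesIO()
with zipfile.ZipFile(zip_buf, "w", compression=zipfile.ZIP_DEFLATED) as zf:
    for name, data in files.items():
        zf.writestr(name, data)

# Random access into the ZIP: only b.txt is located and decompressed.
with zipfile.ZipFile(io.BytesIO(zip_buf.getvalue())) as zf:
    b = zf.read("b.txt")
```

On this highly repetitive sample data the ZIP comes out far smaller than the plain TAR; on already-compressed inputs such as JPGs, the deflated members would be about the same size as the originals.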
There is no sugarcoating the truth that different generations interact with technology differently. For a sweet comparison, consider how it used to be the norm to knock on a neighbour’s door to borrow some sugar, and your grandparents still might do just this. These days, we can have an entire bag of sugar delivered to our doorsteps within a few clicks on a grocery delivery app.

Tech-savviness is not necessarily something that runs in the family. To some degree, it is something we are born into, depending on how tech-forward the world was when we entered it. This is part of the reason younger generations are more comfortable interacting with technology: they have known the world no other way. They are digital natives. It is also why older generations may be more hesitant about engaging in online activities: they have had to adapt to them. They are digital immigrants.

Here we dig into these technology gaps across digital generations to unveil just how each group interacts with technology today, including the cybersecurity dilemmas each might face and how to stay cyber safe, no matter one’s age.

An Overview of Digital Generations

What are digital generations? You can look at it from several viewpoints, but in the narrow view, a digital generation encompasses only people who were born into or raised in the digital era, meaning with widespread access to modern-age technology such as smartphones, tablets, and computers, and to digital information like the internet.

For this overview, we prefer the viewpoint that every living person today can be considered part of a digital generation, because no matter how much we engage with technology, we are living in a digital-first world. Of course, the degree to which each person is comfortable and willing to embrace technology also depends on when they entered the world.
Just look to the following generations and when they were born:

- Silent Traditionalists: born 1925–1945
- Baby Boomers: born 1945–1965
- Gen X: born 1965–1980
- Millennials: born 1980–1995
- Gen Z: born 1995–2010
- Gen Alpha: born 2010 to the present day

Considering how far technology had evolved when each generation was born, we can categorize these generations into digital natives and digital immigrants, terms first coined by Marc Prensky in 2001.

Digital immigrants were born or raised in an era before the world started turning toward a tech-first society. They remember a world before the internet was on the rise, which means Gen X, Baby Boomers, and Silent Traditionalists are all considered digital immigrants. Incorporating technology into their daily lives is not so inherent for these digital generations. They have had to learn to adapt to the digital-first world, and they are sometimes sceptical of doing so, oftentimes considering old-school solutions before humouring new-wave conveniences. To go back to our sugar example, digital immigrants might first think to knock on a neighbour’s door for some sugar rather than ordering an entire bag online, or even texting a neighbour for some. Generally, digital immigrants are also slower to leverage the internet or connected devices, and they are sometimes less aware of how these work in the first place, and of the threats associated with them.

Digital natives know no world other than a digital-first one. They are typically more fluent in tech talk because they were born into or raised in the digital era, when there was widespread access to the internet and devices like computers and smartphones were readily available. To digital natives, adapting to the evolution of technology is inherent, but that does not always mean they are aware of the threats associated with new internet and device advancements.

So, what age groups are digital natives?
Most regard digital natives as people born after the internet was developed in the early ’80s, which means Millennials, Gen Z, and Gen Alpha are all considered digital natives. For a concrete example, digital natives are people who would choose to have a bag of sugar delivered to their doorstep using an app over asking a neighbour for some.

Technology Usage Statistics Across Digital Generations

Technology is ever-evolving, and each digital generation adapts to these advancements at its own pace, whether it’s toddlers tuning into YouTube Kids on a tablet or seniors preferring email over texts to stay in touch with family. To paint a clearer picture of how each digital generation fits into our digital-first world today, we have rounded up technology usage statistics for each generation. For due diligence, we have also put a cybersafety spin on the data to shed light on some of the risks these generations might face, including corrective measures to help protect their online activities.

We shall respect our elders and address the oldest digital generation first.

Silent Traditionalists and Technology

Born between 1925–1945, years of war and depression, Silent Traditionalists are considered the oldest living digital generation. The jukebox was perhaps the most used piece of technology when they entered the world. These days, data shows that Silent Traditionalists turn to technology mostly to stay in touch with family. For perspective on how even the youngest Silent Traditionalists interact with technology, according to an AARP study:

- 63 percent of Silent Traditionalists say going online increases communication with family members, and 56 percent use a computer to connect with people.
- Still, Silent Traditionalists are not using social media to stay connected, as 40 percent avoid or do not use social media sites altogether.
- Instead, Silent Traditionalists prefer email most in terms of digital communications, with 60 percent indicating so.
(Worth mentioning is that 72 percent of Silent Traditionalists prefer using a phone above all to stay in touch with others.)

Cybersafety Risks and Remedies for Silent Traditionalists

Considering Silent Traditionalists most often turn to computers and email for the sake of communication, there are a few common internet scams this digital generation should watch out for.

The risks: Tech support scams can occur on computers, especially in the form of malicious pop-ups indicating that you need to download software to correct a computer virus; this may be an attempt to download malware onto the device. Another risk to Silent Traditionalists: phishing scams often show up right in our inboxes as messages from an illegitimate source asking for personal information, perhaps in an attempt to commit identity theft.

The fix: Installing antivirus software on a device can instill confidence that the computer is already protected from pop-ups that might be malicious. In addition, Silent Traditionalists should be aware of common signs of fraudulent emails — think misspellings, poor grammar, urgent demands with threats of financial consequences, and logos that do not quite look right — and not interact with an email that shows these red flags.

Baby Boomers and Technology

Born between 1945 and 1965, Baby Boomers entered the world at the tail end of World War II and the upswing of the economy. This digital generation is most regarded for its work ethic and for being among the first adopters of home computers. While often more open to adapting to technological advancements than Silent Traditionalists, Baby Boomers are now more cosied up to technology than ever in light of the COVID-19 pandemic and have no intention of rescinding their tech-savvy ways.
For some context on how Baby Boomers use technology today, according to one survey conducted during the pandemic:

- Baby Boomers showed a 431 percent increase in using grocery curbside pickup, meaning using an app or online service to order groceries.
- Baby Boomers showed a 469 percent increase in the utilization of telehealth.
- 88 percent of Baby Boomers say they will continue to use these types of technologies to make their daily lives easier.

Cybersafety Risks and Remedies for Baby Boomers

When we turn to technology out of convenience, we sometimes pay for it by giving up some of our online privacy. Knowing how to safeguard information on the common apps and platforms we use almost every day can go a long way toward offsetting this.

The risks: The personal information we save in grocery delivery or pickup apps, including names, email addresses, delivery addresses, phone numbers, and payment methods, can be compromised in the event of a data breach. In a similar vein, the medical and billing information we provide to our doctors online can be at risk if hackers are able to intercept it during telehealth video appointments.

The fix: Instead of saving your personal information for later in a grocery delivery or pickup app, opt to input your information manually for each transaction. Also, always use strong, unique passwords, and opt for two-factor authentication (2FA) if it is an option in these applications. To protect your medical information, encourage your provider to share it with you only by phone instead of via text or email, and use a video-conferencing service that relies on encryption for telehealth appointments.

Gen X and Technology

Born between 1965 and 1980, Gen X is also known as the latchkey generation for being brought up with little adult supervision. While still considered digital immigrants, Gen Xers were born at the point when our world was turning into a digital-first society.
Growing up, Gen X preferred email and phone calls for communication, and they were also the first generation to embrace the Walkman. In the same vein, Gen X was also known as the MTV generation. Nowadays, Gen X continues to lean on technology mostly for communication, but they have expanded beyond home phones. For perspective on Gen X’s place in our digital-first world:

- Gen X embraced social media the fastest of all digital immigrants, with 74 percent of Gen Xers on social media today.
- Facebook is Gen Xers’ preferred social media platform, with nine in 10 Gen X social media users on it, spending roughly seven hours a week on the platform.
- True to their MTV Generation nickname, Gen X still loves TV, with Gen Xers watching about 165 hours of TV per month, including on smart TVs.

Cybersafety Risks and Remedies for Gen X

Social media is wonderful for staying in touch with others, but it can also be a breeding ground for personal privacy complications if you don’t know how to share responsibly. Likewise, TV is a great escape from our reality, but these devices also come with a few privacy risks to consider.

The risks: On social media, oversharing in your “about me” fields, such as including your date of birth or alma mater, and documenting your every move make it easy for others to piece together your identity — and possibly steal it. On a separate note, today’s standard TVs are smart TVs with internet-connected, voice-enabled features that can track what you are searching for and watching.

The fix: When it comes to social media, less is more in terms of protecting your privacy. Know that you don’t need to fill out every detail about yourself in those “about me” fields. Moreover, set your profile to private so that you approve anyone who follows your activity. Similar to social media privacy settings, check your smart TV settings and disable data tracking to keep your data secure.

To be continued…
Joseph Cione, lead research meteorologist for the Hurricane Research Division of NOAA’s Atlantic Oceanographic and Meteorological Laboratory, notes that scientists have long conducted manned reconnaissance of storms through old-school “hurricane hunting.” However, drones can provide NOAA and other researchers with more detailed information that can help communities prepare for disasters.

“We live in the boundary layer, we live low, we don’t live at 20,000 feet,” Cione says. “So, when storms make landfall, we want to know what the winds are doing right at that boundary layer.” NOAA notes that the agency does not want to fly manned planes low, for safety reasons.

Like the Navy, NOAA uses P-3 aircraft to fly into hurricanes. The Navy suggested that NOAA follow its lead and drop drones out of P-3 planes into hurricanes. “So we did, and it worked,” Cione says. NOAA had “some great success” with this approach between 2014 and 2018, Cione says. The agency learned several lessons from using the drones, but because they were designed by the military, Cione wanted to design something from scratch. The drone would need to be sophisticated and intelligent. “We have every intention of making these things artificial intelligence-driven,” Cione says.

According to Cione, the AI-driven drones are not automated in such a way that they are dropped out of planes and then fly only one preplanned route. “Each storm is different, so that won’t work,” he says. “As it’s flying, it senses what’s going on,” Cione adds. “It’s using its sensors, its machine learning, its artificial intelligence, understanding its environment, making decisions based upon the sensors and then going into the environment that we want it to go into.” Such tools show the “potential for these systems to really leverage and increase our ability to get more data in these locations,” Hall says.
Those AI tools will enable NOAA to gain “situational awareness in a storm that is going to make landfall,” Cione says. That can help NOAA and state and local authorities evacuate communities with more precision. “You save lives,” Cione says.
Information technology isn’t the most environmentally friendly of industries. For example, a recent study from the IT and Environment Initiative, a research consortium working to improve understanding of IT’s effect on environmental issues and sustainability, found that at least 1,200 grams of fossil fuels and 72 grams of chemicals are required to produce one 2-gram, 32MB DRAM memory chip. The study said the amount of environmentally sensitive materials used to make the chip far exceeds what its tiny size suggests; the fossil fuels consumed in production weigh some 600 times as much as the chip itself. By comparison, the total fossil fuels needed to produce an automobile weigh one to two times as much as the car, and four to five times as much for an aluminum can.

Data centers have their own environmental issues. IBM (NYSE: IBM) estimates that data center energy usage accounts for 2 percent of global man-made carbon emissions, about equal to the entire airline industry.

But with environmental issues growing in importance — and a new Administration that’s likely to increase that focus — governments and organizations are ramping up efforts to mitigate some of these problems. In fact, many data storage vendors have plans in place to make storage solutions more eco-friendly, such as programs for engineers and scientists to develop innovative products and services that provide superior computing power while requiring less energy.

Fujitsu, for example, has developed products that use just a third of the power of previous generations. “We have also driven improvements at the silicon level that reduce overall power consumption,” said David James, vice president of advanced engineering at Fujitsu Computer Products of America. “Each generation of disk drive has improved performance, but power consumption is lower.”

IBM says it’s trying to help by ensuring the evolution of the LTO tape program.
Bruce Master, IBM’s senior program manager for worldwide tape storage systems marketing, said the LTO program “has continued to evolve tape specifications to provide the additional capacities, speeds and security measures that are needed for today’s — and tomorrow’s — data centers.” Tape, he said, is gaining favor among users as an eco-friendly strategy that can reduce both energy and financial costs.

EMC (NYSE: EMC) has also made energy efficiency an important priority, said Dick Sullivan, EMC’s director of enterprise solutions marketing. Nearly every major product announcement the company has made in the last two years has included some element of energy improvement, he said. Advancements include capabilities such as increased consolidation, greater total energy efficiency, data de-duplication, disk spin-down, larger-capacity and lower-power disk drives, solid state flash drives (SSDs) and dynamic cache partitioning, among others, said Sullivan.

EMC’s ‘Green Team’

Sullivan said EMC has put together a cross-functional, engineering-focused “Green Team” to focus on developing new integrated efficiencies for hardware and software. The company also has a “Design for Environment” program that “meshes manufacturing and engineering imperatives to develop the most environmentally efficient and business effective products from the first sketch to final disposal,” he said. And EMC’s Green Business Initiative encompasses 18 departments working on cross-company initiatives “that address every element of environmental, economic and social sustainability that impacts EMC’s business,” he said.

EMC’s comprehensive efforts might be the right approach, according to a CDW Green IT study. CDW vice president Mark Gambill said organizations that are successful at reducing IT energy costs dig deeper and attack the problem consistently across all facets of their IT systems.
“More than 90 percent of those same organizations are taking ownership of their energy bills and advocating efficiency improvements throughout their respective IT organizations,” he said.

3PAR (NYSE: PAR) hopes to connect with storage users by linking environmental responsibility with cost savings. “As a pioneer of thin provisioning, 3PAR has disrupted the economics of primary storage by minimizing power consumption and promoting environmental responsibility in the data center,” said 3PAR CEO David Scott. “We have done this both through hardware and software innovation.” Scott said 3PAR’s InSpire Architecture features a massively scalable and highly clustered design that allows customers to purchase only what they need, enabling them to start with an InServ array as small as 2.3TB and then scale that system, as the business demands, to as large as 600TB. The company’s “Fast RAID 5” technology, he said, boosts performance for RAID 5 data protection to within 10 percent of RAID 1, but with significantly less capacity outlay. This lets customers achieve protection and performance levels comparable to RAID 1 with 33-88 percent less capacity, said Scott.

Geoff Noer, senior director of product marketing and management for Rackable Systems (NASDAQ: RACK), said IT departments need to measure the efficiency of their data centers. Industry organizations such as The Green Grid can help with metrics such as PUE (Power Usage Effectiveness) and DCiE (Data Center infrastructure Efficiency) that help users “understand the impact of storage and servers at the data center level,” said Noer. Such metrics, he said, make it “much easier to compare different data centers and evaluate the impact of potential building and/or equipment upgrades to increase efficiency. 
Studies have shown that there are often data center energy efficiency improvements that can pay for themselves in as little as six to 18 months and are easy to justify.”

In a tough economy, the Green IT initiatives that are likely to gain the most traction will be the ones that also benefit the other kind of green: corporate bottom lines. And don’t overlook simple things you can do in your own data center that can save money while lessening strain on the environment. Just unplugging unused equipment can save a bundle, according to one study.
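For reference, the two Green Grid metrics mentioned above reduce to simple ratios, sketched here with hypothetical kilowatt figures:

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power over IT power.
    Lower is better; 1.0 would mean zero overhead for cooling, power
    distribution, lighting, and so on."""
    return total_facility_kw / it_equipment_kw

def dcie(total_facility_kw, it_equipment_kw):
    """Data Center infrastructure Efficiency: the reciprocal of PUE,
    expressed as the percentage of power that reaches IT equipment."""
    return 100.0 * it_equipment_kw / total_facility_kw

# A data center drawing 2 MW in total while its IT gear draws 1.25 MW:
example_pue = pue(2000.0, 1250.0)    # 1.6
example_dcie = dcie(2000.0, 1250.0)  # 62.5 (percent)
```

Tracking these ratios before and after an upgrade is what lets the payback claims above be checked against real numbers.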
Python will soon be the world’s most prevalent coding language. That’s quite a statement, but if you look at its simplicity, flexibility and the relative ease with which folks pick it up, it’s not hard to see why The Economist recently touted it as the soon-to-be most used language, globally.

Naturally, our threat research team had to poke around and see how popular Python is among bad actors. And the best place to do that is GitHub, of course. Roughly estimating, more than 20% of GitHub repositories that implement an attack tool or exploit PoC are written in Python. In virtually every security-related topic on GitHub, the majority of the repositories are written in Python, including tools such as w3af, Sqlmap, and even the infamous AutoSploit tool.

At Imperva, we use an advanced intelligent Client Classification mechanism that distinguishes and classifies various web clients. When we look at our data, specifically security incidents, the largest share of the clients (more than 25%) we identify — excluding vulnerability scanners — are based on Python. Unlike other clients, with Python we see a host of different attack vectors and the usage of known exploits. Hackers, like developers, enjoy Python’s advantages, which makes it a popular hacking tool.

Figure 1: Security incidents by client, excluding vulnerability scanners. More than 25% of the clients were Python-based tools used by malicious actors, making it the most common vector for launching exploit attempts.

When examining the use of Python in attacks against sites we protect, the result was unsurprising – a large chunk, up to 77%, of the sites were attacked by a Python-based tool, and in over a third of the cases a Python-based tool was responsible for the majority of daily attacks. These levels, over time, show that Python-based tools are used for both breadth and depth scanning. 
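Client classification of this kind can be approximated crudely from access logs. The sketch below is an illustration only — it is not Imperva’s mechanism, and it is easily defeated by spoofed headers — but it shows the first-pass idea: match the default User-Agent strings that common Python HTTP modules send.

```python
import re
from typing import Optional

# Default User-Agent formats sent by common Python HTTP modules
# (real attack tools often spoof these, so this is a heuristic only).
PYTHON_CLIENT_PATTERNS = {
    "python-requests": re.compile(r"^python-requests/"),
    "urllib": re.compile(r"^Python-urllib/"),
    "aiohttp": re.compile(r"^Python/\d+\.\d+ aiohttp/"),
}

def classify_client(user_agent: str) -> Optional[str]:
    """Return the Python module a User-Agent header suggests, or None."""
    for name, pattern in PYTHON_CLIENT_PATTERNS.items():
        if pattern.match(user_agent):
            return name
    return None

print(classify_client("python-requests/2.28.1"))         # python-requests
print(classify_client("Mozilla/5.0 (Windows NT 10.0)"))  # None
```

A production classifier would combine many more signals (TLS fingerprints, header order, behavior), since the User-Agent header alone proves nothing.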
Figure 2: Daily percentage of sites suffering Python-based attacks

The two most popular Python modules used for web attacks are Urllib and Python Requests. The chart below shows the attack distribution. Use of the newer Async IO module (asyncio) is just kicking off, which makes perfect sense when you consider the vast possibilities the library offers in the field of layer 7 DDoS, especially when using a “Spray N’ Pray” technique.

Python and Known Exploits

The advantages of Python as a coding language make it a popular tool for implementing known exploits. We collected information on the top 10 vulnerabilities recently used by Python-based tools, and we don’t expect the trend to stop. The two most popular attacks in the last two months used CVE-2017-9841 – a PHP-based Remote Code Execution (RCE) vulnerability in the PHPUnit framework – and CVE-2015-8562, an RCE vulnerability in the Joomla! framework. It isn’t surprising that the most common attacks had RCE potential, considering how valuable that is to malicious actors. Another example, which isn’t in the top 10, is CVE-2018-1000207, which saw hundreds of attacks each day for several days during the last week of August 2018. Deeper analysis shows that the attack was carried out against multiple protected customers by a group of IPs from China.

CVEs over time

You can see that the number of CVEs being used by attackers, according to our data, has increased in the last few years. In addition, Python is used to target specific applications and frameworks – below you can find the top 10, according to our data. When we looked at all the frameworks targeted by Python, the attacks that stand out are those aimed at Struts, WordPress, Joomla and Drupal, which is not surprising as these are currently some of the most popular frameworks out there. 
The most popular HTTP parameter value we’ve seen used in attacks, responsible for around 30% of all different param values used, belongs to a backdoor upload attempt through a PHP Unserialize vulnerability in Joomla! using the JDatabaseDriverMysqli object. The uploaded backdoor payload is hosted on ICG-AuthExploiterBot. We’ve also seen a recurring payload that turned out to be a Coinbitminer infection attempt; more details on that are in the appendix — note, the appendix is only meant as an example.

Since Python is so widely used by hackers, there is a host of different attack vectors to take into consideration. Python requires minimal coding skills, making it easy to write a script and exploit a vulnerability. Unless you can differentiate between requests from Python-based tools and any other tool, our recommendations stay the same – make sure to keep security in mind when developing, keep your system up to date with patches, and refrain from any practice that is considered insecure.

Appendix – Example of an Attack

Here’s an interesting, recurring payload we’ve observed (with a small variance at the end). After base64 decoding it, we get a binary payload. In that payload there is a mention of a GitHub repository for a deserialization exploitation tool and a wget command that downloads a jpg file, which strongly suggests malicious activity. After downloading the file from http://188.8.131.52/jre.jpg we can see that it’s actually a script. The two last lines in the script try to get http://184.108.40.206/static/font.jpg%7Csh, which is identified as Trojan.Coinbitminer by Symantec Endpoint Protection. This finding relates to a tweet from the end of August 2018 about a new Apache Struts vulnerability, CVE-2018-11776, being used to infect victims with the same Coinbitminer.
Clearview AI has created one of the broadest and most powerful facial recognition databases in the world. Its application allows a user (usually law enforcement) to upload a photo of an individual into the application. Once the photo is analyzed within the app, it shows the requestor all the public photos of that same individual found in its database of more than 3 billion images, along with links to where those images may rest online (often on social media sites, but in many other places as well).

What Does This Mean For You and Me?

This application has been used by more than 600 law enforcement agencies over the last year to help solve shoplifting, identity theft, credit card fraud, murder and even child sexual exploitation cases. The New York Times analyzed the computer code underlying the application and found that Clearview AI’s code includes programming to pair it with augmented-reality glasses, meaning users would potentially be able to identify every person they saw. The application claims real-time identification.

This could be a big win for law enforcement. In a benevolent nation state, this powerful application will help law enforcement identify and catch criminals. However, privacy advocates point out that not every nation using this technology is benevolent. From China to Russia, powerful nation states are using this technology to track their dissidents and their everyday citizens.

What could go wrong? What if this application were accessible by anyone? What if a man riding the subway in New York City snapped a photo of a woman he found attractive and uploaded it to this application? The man may now be able to see the woman’s name, address, and much of her public online activity. If that man has criminal intentions, that could negatively impact the woman’s safety. While this is an extreme example of what could happen with this technology in the wrong hands, how can we be certain it will never happen? 
While today the application is limited to law enforcement agencies, Clearview investors predict it will eventually be available to the public. Even if it is NOT officially made available to the public, hundreds of thousands of network and computer breaches have shown that valuable databases like this get breached and stolen by nation states or other hackers all over the world. Something this valuable will be targeted by hackers. It’s only a matter of time before this data is breached, stolen, and placed in the hands of bad actors.

Could CCPA, GDPR, or Other Legislation Help?

The recent passing and enforcement of the California Consumer Privacy Act (CCPA) has gotten businesses in the US thinking and talking about what data they collect, how they use it, how they protect it, report on it, and delete it. CCPA requires businesses to provide consumers with the following:
- a summary of the data they collect and what sharing practices they follow;
- a process to request that their data be deleted (all privacy policies should now spell this out);
- the right to opt out of the sale or sharing of their personal information;
- the ability to prohibit businesses from selling personal information of consumers under the age of 16 without explicit consent.

California isn’t the only state that has adopted privacy requirements like these. Nevada has recently implemented legislation similar to the CCPA. The states of Colorado, Massachusetts, New York and Washington are also working on legislation that will try to protect their residents’ privacy. It is important that these states include biometric information within the definition of personal information, allowing people to opt out of having their information sold to companies such as Clearview AI.

What Can I Do Now to Protect Myself?

You can reach out to companies like Clearview AI via their Privacy Page links and request that your data be deleted. 
You could also write to your member of Congress and request that the US Senate pass comprehensive privacy reform legislation that protects all citizens of the US, instead of each state creating a patchwork quilt of different privacy legislation. Technology is rapidly outstripping our ability to regulate it with the existing laws we have on the books. A holistic approach to privacy protection is needed. Reading this article and others like it to stay informed, deleting your data where you don’t want it stored, and fighting for privacy rights is what we need to do to protect ourselves.

As of February 27, 2020, CNN reported that Clearview AI had been hacked: the company, which stores billions of photos of people from across the world, had its entire client list stolen. The customer list includes police forces, law enforcement agencies and banks. Clearview AI said that the hacker didn’t obtain any search histories conducted by customers, which include some police forces. Clearview AI noted that it is working to strengthen its security as much as possible to prevent a breach from happening again.

New York Times article: The Secretive Company That Might End Privacy as We Know It
GeekWire article: Washington Privacy Legislation Includes Facial Recognition Provisions
5-9-14 Eyes Privacy and Surveillance article: https://vpnalert.com/blog/5-9-14-eyes/
It can be difficult to completely prevent cyberbullying, but there are steps you can take to help protect yourself from being a victim. Follow our 10 steps below, audit your online profile and block the cyberbully!

1. Understand the different forms of cyberbullying. You can’t prevent it if you don’t understand how it happens. We have developed a blog post explaining the 10 Forms of Cyberbullying, so be sure to read it as a first step.
2. Think before you post. Never forget that the Internet is public. What you put out there can never be erased. Google has an everlasting memory! If you wouldn’t say something in front of your parents, teachers or a room full of strangers, then don’t say it online! Also, be careful about sharing private information in instant messaging, as this could be shared with or seen by somebody else.
3. Review your privacy settings. All social networking sites have privacy settings, which allow you to control who sees what you post. Review these settings and follow best practice. It’s important to check your privacy settings frequently, because Internet sites sometimes change their policies.
4. Keep private information private. Don’t reveal identifying details about yourself — address, phone number, school, credit card number, etc. — online. Passwords exist for a reason; sharing them with friends is like passing out copies of your house key to friends and strangers alike. If anyone besides you knows your passwords, it should be your parents and your parents only.
5. Educate yourself. Reading this infographic is a good starting point. Why not consider taking a course to empower yourself in using the Internet safely and securely while having fun?
6. Educate others. Share your knowledge with your friends at school and online. Offer to share your knowledge in your school, at your sports club or at another club. Post top tips on your social networking sites.
7. Speak out. If you see somebody you know being a cyberbully, do not protect them or be a bystander. You must speak out! Also, if you see a friend being cyberbullied, encourage them to speak out. They may need somebody else to reassure them it’s the right thing to do.
8. Set boundaries. It’s really important to have boundaries and rules around social media and Internet use. This is not aimed at preventing you from having fun online but at ensuring you do it safely. So it’s a good idea to have some ground rules at home.
9. Take immediate action. If you suspect you are being cyberbullied, tell a trusted adult immediately. Take screenshots of the suspected cyberbullying messages or incidents and block the individual concerned.
10. Finally, don’t ever be a cyberbully. Don’t be part of an online discussion which has the potential to become sinister or hurtful towards another person. Always, always be a good digital citizen!
IBM Research has unveiled its first fully integrated wavelength-multiplexed chip, which uses pulses of light to move data. The technology will enable production of 100Gbps optical transceivers with optical and electrical components integrated side by side, using sub-100nm semiconductor technology.

IBM Research explained that its silicon photonics chips use four distinct colours of light travelling within an optical fiber, each carrying a 25Gbps optical channel. At the chip’s aggregate bandwidth, an HD movie could be downloaded in two seconds, or 63 million tweets or six million images shared digitally every second. IBM engineers in New York, Zurich and the IBM Systems Unit revealed their ambition to develop interconnects that link data centres at distances of up to two kilometres.

Arvind Krishna, senior VP of IBM Research, said: "Making silicon photonics technology ready for widespread commercial use will help the semiconductor industry keep pace with ever-growing demands in computing power driven by Big Data and cloud services.

"Just as fiber optics revolutionised the telecommunications industry by speeding up the flow of data — bringing enormous benefits to consumers — we’re excited about the potential of replacing electric signals with pulses of light.

"This technology is designed to make future computing systems faster and more energy efficient, while enabling customers to capture insights from Big Data in real time."
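The arithmetic behind those headline figures is straightforward. The sketch below derives the aggregate bandwidth from the four 25Gbps wavelength channels and estimates the transfer time for an HD movie; the 25GB movie size is an assumption (roughly Blu-ray quality), not a figure from IBM:

```python
channels = 4            # distinct wavelengths ("colours") on one fiber
per_channel_gbps = 25   # optical channel rate per wavelength

aggregate_gbps = channels * per_channel_gbps   # 100 Gbps aggregate
gigabytes_per_second = aggregate_gbps / 8      # 12.5 GB/s

movie_gb = 25           # assumed size of a Blu-ray-quality HD movie
seconds = movie_gb / gigabytes_per_second

print(f"{aggregate_gbps} Gbps = {gigabytes_per_second} GB/s")
print(f"A {movie_gb} GB movie transfers in {seconds:.1f} s")  # 2.0 s
```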
In one second, the human eye can only scan through a few photographs. Computers, on the other hand, are capable of performing billions of calculations in the same amount of time. With the explosion of social media, images have become the new social currency on the internet.

Figure: An AI algorithm will identify a cat in the picture on the left but will not detect a cat in the picture on the right.

Today, Facebook and Instagram can automatically tag a user in photos, while Google Photos can group one’s photos together via the people present in those photos using Google’s own image recognition technology. Dealing with threats against digital privacy today, therefore, extends beyond just stopping humans from seeing the photos; it also means preventing machines from harvesting personal data from images. The frontiers of privacy protection now need to be extended to include machines.

Safeguarding sensitive information in photos

Led by Professor Mohan Kankanhalli, Dean of the School of Computing at the National University of Singapore (NUS), the research team from the School’s Department of Computer Science has developed a technique that safeguards sensitive information in photos by making subtle changes that are almost imperceptible to humans but render selected features undetectable by known algorithms.

Visual distortion using currently available technologies ruins the aesthetics of the photograph, as the image needs to be heavily altered to fool the machines. To overcome this limitation, the research team developed a “human sensitivity map” that quantifies how humans react to visual distortion in different parts of an image across a wide variety of scenes.

The development process started with a study involving 234 participants and a set of 860 images. Participants were shown two copies of the same image and had to pick out the copy that was visually distorted. After analysing the results, the research team discovered that human sensitivity is influenced by multiple factors. 
These factors included things like illumination, texture, object sentiment and semantics.

Applying visual distortion with minimal disruption

Using this “human sensitivity map”, the team fine-tuned their technique to apply visual distortion with minimal disruption to the image aesthetics by injecting it into areas with low human sensitivity. The NUS team took six months of research to develop this novel technique.

“It is too late to stop people from posting photos on social media in the interest of digital privacy. However, the reliance on AI is something we can target, as the threat from human stalkers pales in comparison to the might of machines. Our solution enables the best of both worlds as users can still post their photos online safe from the prying eye of an algorithm,” said Prof Kankanhalli.

End users can use this technology to help mask vital attributes on their photos before posting them online, and there is also the possibility of social media platforms integrating this into their systems by default. This would introduce an additional layer of privacy protection and peace of mind. The team also plans to extend this technology to videos, which are another prominent type of media frequently shared on social media platforms.
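The core idea — injecting distortion where human sensitivity is low — can be illustrated in a few lines. This is a conceptual sketch only, not the NUS team’s algorithm: the image, sensitivity map and noise below are all randomly generated stand-ins.

```python
import random

random.seed(0)
SIZE = 4  # hypothetical 4x4 grayscale image, pixel values in [0, 1]

image = [[random.random() for _ in range(SIZE)] for _ in range(SIZE)]

# Hypothetical "human sensitivity map": 1.0 = distortion very noticeable here,
# 0.0 = imperceptible (e.g. busy texture, dim lighting).
sensitivity = [[random.random() for _ in range(SIZE)] for _ in range(SIZE)]

# Adversarial-style noise intended to confuse a recognition model
noise = [[random.gauss(0.0, 0.05) for _ in range(SIZE)] for _ in range(SIZE)]

# Inject more distortion where humans are least likely to notice it,
# clamping the result back into the valid pixel range.
perturbed = [
    [min(1.0, max(0.0, image[i][j] + noise[i][j] * (1.0 - sensitivity[i][j])))
     for j in range(SIZE)]
    for i in range(SIZE)
]

# The visible change at each pixel is bounded by the raw noise magnitude
for i in range(SIZE):
    for j in range(SIZE):
        assert abs(perturbed[i][j] - image[i][j]) <= abs(noise[i][j]) + 1e-12
```

The real technique additionally optimizes the noise so the target features become undetectable to recognition models, which this sketch does not attempt.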
The Event Registry is used in IFS Cloud for keeping track of the activities related to the handling of events that occur inside the application. Events are programmatically defined, but the actions and conditions that apply can easily be configured. The Event Registry is a means of creating active behavior on top of an Oracle server and is based on the ECA rule: an event occurs, conditions are evaluated, and if they are true, an action is executed.

The ECA rule is a rule that almost all active databases follow. It can be described as follows: an EVENT occurs in the database that generates a call to the condition handler. The condition handler evaluates certain CONDITIONS that are registered together with the action. If the evaluation is successful, then the registered ACTION is performed.

An event can be described as something that happens in a transaction that generates a method call to the condition handler. There are two types of events:

- Application Defined Events. These events are built into the application's logic at places where important things happen.
- Custom Defined Events. These events are defined at installation time and maintained by triggers generated during configuration. This type of event gives the individuals who configure the system the opportunity to add events without coding a single line.

Conditions decide whether or not the registered action should be performed. An action is the response that can be generated by the system when an event occurs. Actions can be set up so they are only executed when all of the specified conditions are met.

Uses in the platform

The idea of an active database is very useful in the platform. Implementing it in applications provides a flexible way to control events and the actions connected to them. The use of entities makes it easy to find a location for the calls that have to be made in the database when a specified event happens. 
The use of an executor of actions makes it easy to support the actions that an event should result in. The process to access events can logically be divided into four different sections: Event Production, Event Handling, Condition Handling and Action Handling.

Figure: Principal design of event handling in the architecture context

The Event Registry process

- The business logic calls the Event handler for the event to determine if any actions are registered on the event. This call should be included within the involved entity's package body (in the method controlling the event), or as a trigger on the base table of the LU. It is possible to have different conditions and different actions on the same event, but the event doesn't need to have any actions connected to it at all. An event can be created without any knowledge of the actions that will be connected to it in the future. Actions, if required, can be added afterwards. If at least one of the actions is active, then the event is triggered and its conditions must be evaluated.
- The involved entity's business logic supplies different pre-defined parameters with data and sends them, packed in a string, to the Event handler.
- The Condition handler checks whether the conditions for the actions are true by unpacking the text string supplied by the Event handler.
- The Action handler determines the type of action required and activates that action when the stated conditions have been met.
- The Action handler is responsible for ensuring that all actions are executed in the correct manner. Actions can be sent to clients, to the business logic, or to an external destination such as mail. This is handled through IFS Connect. If end users subscribe to any actions, then the Action handler will send the appropriate actions to the subscribers as well as to the original receiver.
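The ECA flow above can be sketched generically. The toy registry below is illustrative only — it is not IFS's actual API, and all names in it are hypothetical — but it shows the event/condition/action separation the handlers implement: business logic fires an event, each registered condition is evaluated, and actions run only when their condition holds.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

Condition = Callable[[dict], bool]
Action = Callable[[dict], None]

@dataclass
class EventRegistry:
    """Toy ECA registry: on each event, evaluate conditions, then run actions."""
    rules: Dict[str, List[Tuple[Condition, Action]]] = field(default_factory=dict)

    def register(self, event: str, condition: Condition, action: Action) -> None:
        self.rules.setdefault(event, []).append((condition, action))

    def fire(self, event: str, params: dict) -> int:
        """Called from business logic when the event occurs; returns actions run."""
        executed = 0
        for condition, action in self.rules.get(event, []):
            if condition(params):   # condition handler
                action(params)      # action handler
                executed += 1
        return executed

# Example: notify somebody when a large order is created
registry = EventRegistry()
notifications: list = []
registry.register(
    "ORDER_CREATED",
    condition=lambda p: p["amount"] > 1000,
    action=lambda p: notifications.append(f"Large order {p['order_no']}"),
)

registry.fire("ORDER_CREATED", {"order_no": "A1", "amount": 500})   # no action
registry.fire("ORDER_CREATED", {"order_no": "A2", "amount": 5000})  # action runs
print(notifications)  # ['Large order A2']
```

Note that, as in the Event Registry, the event can be fired without any rules registered at all; conditions and actions can be added afterwards without touching the firing code.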
[ Image Credit Flickr – École Polytechnique by J.Barande ]

Cybersecurity is one of the fastest-growing technology careers, as it’s challenging, constantly evolving, and pays well. Whether you’re looking for a challenge or stability in your career with a broad range of opportunities, look no further than cybersecurity. The best part? You don’t need a computer science degree to get into cybersecurity. It’s a field where your experience and skills matter more than your degree. All you need is a decision and dedication to obtaining a unique set of skills. Below, you will learn how to get into cybersecurity without experience, so let’s jump in.

What is Cybersecurity?

Cybersecurity keeps cyber attacks at bay with different technologies, programs, and controls. Simply put, cybersecurity prevents security breaches and other exploitation across various computer systems. Moreover, as many companies recently took their business online, cybersecurity systems have become more protective. Many people consider cybersecurity their next career move, as it’s high-paying and intriguing. It’s also a high-profit field, with expected revenue of $366.10 billion by 2028.

Cybersecurity Skills Necessary for a Cybersecurity Career

Popular cybersecurity positions include security specialist, penetration tester, information security analyst, and computer forensics investigator. These positions don’t require a degree, but you still need certain skills for any of them. 
A competent cybersecurity specialist will need:
- Soft skills and hard skills
- Excellent organizational skills
- Detail-oriented skills and a good eye for trends
- Strong communication skills and time management
- A clear understanding of confidentiality issues and related laws
- The ability to work under pressure and meet tight deadlines
- Creativity and thinking outside the box
- The confidence to make decisions
- Interest in the IT sector

Pursuing a career in cybersecurity with no degree has benefits and downsides, but the pros definitely outweigh the cons in the cybersecurity world.

What Qualifications Do You Need for Cyber Security?

If you’re passionate about information security and possess some of the skills above, you stand a chance in the industry. The next step is taking a relevant degree subject, such as the following:
- Computer science
- Physics, mathematics, or other STEM subjects
- Networks and security
- Network engineering
- Forensic computing

If you’re currently working in cybersecurity, you can take extra degrees or qualifications and land better jobs. Here’s a detailed description of extra qualifications you can consider:

SSCP (Systems Security Certified Practitioner)

SSCP makes a perfect choice for beginners with solid technical skills and security knowledge. To obtain the SSCP qualification, you must pass the exam and provide at least one year of cumulative, paid work experience in one or more of the seven domains of the (ISC)² SSCP Common Body of Knowledge (CBK).

CISSP (Certified Information Systems Security Professional)

Most information security specialists hold the CISSP qualification. It’s usually critical for career growth in cybersecurity. Before taking a four-hour exam (reduced from six hours), the qualification process encompasses eight cybersecurity disciplines, such as asset security, identity and access management, and security engineering. 
Once you pass, you will earn a globally recognized mark of excellence; the certification program meets the requirements of ISO/IEC Standard 17024. This certification requires at least five years of cybersecurity work experience.

CISM (Certified Information Security Manager)

Once you have over five years of cybersecurity experience in the industry, you can apply for the CISM certification. It’s a qualification for cybersecurity professionals looking for ways to improve their skills and land better cybersecurity jobs. This certification also requires at least five years of work experience (three of them in information security management) before getting the qualification. However, you can only receive this accreditation if you’ve completed your five years of work experience within the ten years before submitting your application.

> Learn more on how to prepare and pass the Certified Information Security Manager (CISM) exam.

CCSP (Certified Cloud Security Professional)

Once you have over five years of cybersecurity and cloud experience in the industry, you can also apply for the CCSP certification. The CCSP is the premier cloud security certification from (ISC)². This vendor-neutral certification validates IT and information security professionals’ knowledge and competency to apply best practices to cloud security architecture, design, operations, and service orchestration. It shows you are at the forefront of cloud security.

The CCSP certification was first released in 2015 and also requires five years of experience in IT. The CCSP is harder than the Certificate of Cloud Security Knowledge (CCSK), so if you want to pursue the CCSP path, we highly recommend starting with CCSK first: by attaining the CCSK certification, you can request a one-year experience waiver by submitting documentation of your Cloud Security Alliance CCSK certificate to (ISC)² towards earning your CCSP certification. 
If you are interested in learning more about Cloud Security Knowledge and how to prepare for and pass the exam, we highly encourage you to check the following CCSK study guide.

> Learn more on how to prepare and pass the Certified Cloud Security Professional (CCSP) exam.

How to Get Into Cybersecurity With No Experience: 5 Helpful Tips

The following steps can help you become a professional cybersecurity analyst without a college degree, so take a look.

1. Get a High School Diploma or GED

The first step toward becoming a cybersecurity analyst is completing high school or a General Educational Development (GED) program. Meanwhile, you can attend cybersecurity courses that help build your skills and teach security concepts. You can pick computer programming or software development, because they’re vital for a future cybersecurity career. Computer programming courses cover important computer programs, while software development courses teach the development of software protected against possible cyber threats.

2. Obtain Online Certifications

You can quickly become a cybersecurity professional without a degree if you obtain online certifications that prove your skills. These online certifications will make you more appealing for cybersecurity positions. To pass the certification exams for whichever branch you pick, consider using tools like handbooks, YouTube videos, study guides, and other materials. Some of the certifications you can obtain without a degree include:
- CEH (Certified Ethical Hacker)
- CompTIA Security+
- CCNA (Cisco Certified Network Associate)
- CompTIA CySA+

3. Build Experience in Cybersecurity

Suppose you don’t have a college degree. In that case, it’s highly important to build experience within the cybersecurity field so that you can learn vital skills, understand how cyber threats work, and learn more about the software involved. Many IT departments provide on-the-job cybersecurity training for beginners, so you can observe other professionals while they work. 
That said, look for companies that offer training to familiarize yourself with digital forensics, penetration testing, risk assessment, ethical hacking, cloud security, network security, and so on. You will gain experience and connect with other cybersecurity specialists before you start your cybersecurity career.

4. Establish Connections

Connections are very important. You will meet cybersecurity professionals and establish connections if you undergo training at a cybersecurity company. Connections in the cybersecurity industry can help you get better entry-level cybersecurity jobs and advance your cybersecurity profession.

5. Update Your Resume Regularly

You should update your CV as you gain new cybersecurity skills, earn certifications, build experience, and land new cybersecurity jobs on your career path. Make sure you list your cybersecurity qualifications, the date you obtained them, the date they expire, which institution issued them, and other important details. You can boost your chances of landing a better position with a comprehensive, up-to-date resume, so don’t forget about this part of your cybersecurity journey.

Final Tips for Future Cybersecurity Professionals

Cybersecurity keeps gaining popularity among IT professionals because it’s loaded with job opportunities, doesn’t require a degree, and offers lucrative salaries. The median annual salary for entry-level cybersecurity jobs is $65,000. Remember that this number can change due to many factors, like the location of the job, your certifications, or a college degree (which isn’t necessary but provides a small boost). Nonetheless, you will likely be able to advance within your cybersecurity career and earn up to six figures in the future, like many other cybersecurity professionals today.

Thank you for reading my blog. If you have any questions or feedback, please leave a comment.
A smoke alarm is critical for the early detection of a fire in your home and could mean the difference between life and death. Fires can occur in a variety of ways and in any room of your home. But no matter where or how, having a smoke alarm is the first key step toward your family's safety. This document is intended to inform you about some of the safety aspects and the importance of having and maintaining working smoke alarms; it is not all-inclusive.

Why is a Smoke Alarm Important?

Every year in the United States, about 2,000 people lose their lives in residential fires. In a fire, smoke and deadly gases tend to spread farther and faster than heat. That's one reason why most fire victims die from inhalation of smoke and toxic gases, not from burns. A majority of fatal fires happen while families are asleep, because occupants are unaware of the fire until there is not adequate time to escape. A smoke alarm stands guard around the clock, and when it first senses smoke, it sounds a shrill alarm. This often gives a family the precious but limited time needed to escape. About two-thirds of home fire deaths occur in homes with no smoke alarms or no working smoke alarms. Properly installed and maintained smoke alarms are considered to be one of the best and least expensive means of providing an early warning of a potentially deadly fire, and they could reduce the risk of dying in a home fire by almost half.

Where Should Smoke Alarms be Installed?

Smoke alarms should be installed on every level of the home, outside sleeping areas, and inside bedrooms. A smoke alarm should be installed and maintained according to the manufacturer's instructions. When installing a smoke alarm, many factors influence where you will place the alarm, including how many are to be installed. Consider placing alarms along your escape path to assist in egress in limited-visibility conditions.
In general, you should place alarms in the center of a ceiling or, if you place them on a wall, near the ceiling.
- A smoke alarm should be on every level, in each sleeping room, and outside the sleeping areas
- Replace batteries every year
- Replace smoke alarms every 10 years

Which Smoke Alarm to Install?

A guide to selecting the smoke alarms to protect you and your family: because ionization and photoelectric smoke alarms are each better at detecting distinctly different yet potentially fatal fires, and because homeowners cannot predict what type of fire might start in a home, the CPSC staff recommends using the guidelines below to help best protect your family.

What Are The Differences in Smoke Alarm Types?

Although there are several choices to make in selecting the right smoke alarms to buy, the most important thing to remember is that smoke alarms save lives. For that reason, you should install a smoke alarm if your home does not have one. Installing additional smoke alarms throughout the house provides greater protection.

Smoke alarms may contain different or multiple sensors. There are two main types of smoke alarms, categorized by the type of smoke detection sensor used in the alarm: ionization and photoelectric. Each type of smoke alarm may perform differently in different types of fires. A smoke alarm may use multiple sensors, sometimes combined with a heat detector or carbon monoxide detector, to warn of a fire. Ionization detectors contain a chamber with two plates that generate a small, continuous electric current. When smoke particles enter the ionization chamber, they disrupt the current flow, which triggers the alarm. Photoelectric detectors use a light beam and a light receptor (photocell). When smoke particles are present between the light and the receptor, the reduction or increase of light on the photocell sensor (depending on the smoke chamber configuration) triggers the alarm.
Smoke alarms may perform differently. Both ionization and photoelectric detectors are effective smoke sensors, and even though both types must pass the same tests to be certified to the voluntary standard for smoke alarms, they can perform differently in different types of fires. Ionization detectors respond quickly to flaming fires that give off heat and hot gases with smaller (sub-micron) combustion particles; photoelectric detectors respond more quickly to smoldering fires that give off larger combustion particles. There are also combination smoke alarms, called dual-sensor smoke alarms, that combine ionization and photoelectric detectors in one unit. The amount of time a person has to escape depends on many factors, such as the type of fire, the location of the fire, and the location of the closest smoke alarm.

Information Provided By CPSC
Image Provided By: Stamfordfire

If you would like liquidvideotechnologies.com to discuss developing your Home Security System, Networking, Access Control, Fire, IT consultant, or PCI Compliance, please do not hesitate to call us at 864-859-9848 or you can email us at email@example.com.
When listening to discussions of the core concepts of the big data world, it can often feel like being caught in a hurricane of technobabble and buzzwords. Three of the most relevant concepts to understand, though, are data warehousing, data analysis, and business intelligence (BI). Each of these concepts represents one-third of an overall process. When that process comes together, a company can more efficiently collect data, analyze it, and turn it into actionable information for decision-makers at all levels of an operation.

Data warehousing is the most straightforward of the three concepts to understand. As the term suggests, it's the process of taking the data a company has collected and storing it where it can be kept secure and accessible. This means having access to either on-site database servers or off-site cloud storage platforms.

Data analysis is the process of scanning through an organization's available data in order to produce insights. Many people use this term interchangeably with BI, but the distinction is that data analysis tools help professionals handle the tasks of:
- Acquiring data from sources
- Prepping data for analysis
- Confirming data integrity
- Identifying statistically grounded methods for gaining insights
- Using computing resources to rapidly cull massive amounts of data
- Iterating through permutations of statistical models to generate insights
- Verifying that any generated insights are statistically valid

Business intelligence is about taking the raw insights gained using those data analysis tools and turning them into actionable information. BI platforms are designed to provide visualizations and data to stakeholders. For example, a U.S. retailer might offer its buyers in China real-time streams of insights derived from scanning millions of influencers' feeds on Twitter, Instagram, Facebook, and other social media platforms.
This allows the buyers to look at the insights and quickly make decisions about what’s likely to sell well in the upcoming fashion season. All of this work calls for the support of folks who have experience in working with computing resources at large scales. There’s a lot more going on here than simply putting entries into a spreadsheet. The industry employs plenty of data scientists, computer programmers and IT professionals. Likewise, individuals with business backgrounds in consulting are often in high demand. From end to end, a company has to build its training and hiring practices around fostering a culture that values big data and insights. Building such a culture often presents its own set of challenges, as many people prefer to make choices based on tastes, gut reactions and “eye tests.” If you want an insight into how this process unfolds, look no further than the world of professional baseball. Few sports are now as driven by analytics as baseball. Starting at the turn of the century, small clubs that were strapped for cash began hunting for market inefficiencies. Two decades later, everyone in the business is using data analytics tools to make decisions. In 2019, the Houston Astros announced they were cutting their scouting department significantly while adding more people in analytics. One of the classic examples of how statistically driven insights can defy expectations is the so-called Monty Hall problem. The original version of the show “Let’s Make a Deal” featured a game where a contestant had to choose one of three doors to win a prize like a new car. Behind one door was something no one wanted, such as a goat. Another door hid the car, and a third one hid a lesser prize. After the contestant picked a door, the host would reveal what was behind one of the other doors. For the sake of dramatic tension, the host never showed the goat or the car in the first reveal. 
The host then would ask, "Do you want to change your pick?" According to volumes of computer simulations and PhD-level statistics papers, the answer should always be "yes." By switching, the contestant improves their chance of winning from 1/3 to 2/3. If that feels wrong to you, don't feel bad. The answer is not intuitive. Most people assume the contestant has somewhere between a 1/3 and 1/2 chance when switching, and many respected mathematicians have even tried to refute the solution. Lots of business decisions are basically the Monty Hall problem scaled into the thousands, millions, or even billions. There are plenty of doors to pick from, and the goats far outnumber the cars. Also, you're competing against numerous other contestants simultaneously. Unless you need to pay a dowry, you probably don't want that many goats. How do you improve your chances of finding the winning prize? You embrace the value of data warehousing, data analysis, and business intelligence.
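The switching advantage is easy to verify for yourself. The short simulation below (a sketch of our own, using the classic two-goats-one-car version of the game) plays many rounds under each strategy and compares the win rates:

```python
import random

def play(switch: bool) -> bool:
    """Play one round of the Monty Hall game; return True on a win."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the contestant's pick nor the car.
    opened = random.choice([d for d in doors if d not in (pick, car)])
    if switch:
        # Switch to the one remaining unopened door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

trials = 100_000
stay_wins = sum(play(switch=False) for _ in range(trials)) / trials
switch_wins = sum(play(switch=True) for _ in range(trials)) / trials
print(f"stay:   {stay_wins:.3f}")   # ~0.333
print(f"switch: {switch_wins:.3f}") # ~0.667
```

Over 100,000 trials the staying strategy hovers around 1/3 and the switching strategy around 2/3, exactly as the statistics papers predict.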
Typosquatting is a form of cyber attack where threat actors register a similar, yet incorrectly spelled version of a legitimate website URL, assuming that some users will input the name incorrectly into the address bar. When users misspell the website name, they are taken to a website that is mimicking the brand of the original, where data can be stolen from visitors who do not realize they are not browsing on the legitimate website that they intended to visit. This article will investigate this type of cyber attack and what you can do to keep your business and your customers safe. Does typosquatting have any other names? Yes! Typosquatting is also commonly known as URL hijacking, and can also be referred to by the names “sting site” or simply a false or fake URL. The idea is all the same. Attackers look at popular websites which involve the collection and safe use of customer data, and then leverage accidental misspellings or common errors in the website’s expected name to create similar sounding or similarly spelled domains. For example, a URL hijacking attempt against Amazon.com could be the domain Amazom.com, while the bank HSBC.com could be HBSC.com. Is typosquatting common? There are many famous instances of typosquatting, including a high-profile attack on Google.com, through the site Goggle.com. The website was active from 2004-2007, and caused a lot of damage. “Once it was accessed, the domain would instantly download several viruses and other malware and start to spam pop-ups, some of which contained pornographic imagery. In addition to the malware it downloaded on the victim’s computer, it used the WMF exploit to install the rogue antivirus SpySheriff. All the malware together had the potential to damage the computer severely and may require the victim to re-install their operating system, losing all of their files and data on the computer. 
" Interestingly, while the site has since been taken down and similar forms blacklisted, Google has legally accepted that Goggle is not a misspelling of its name, as it is a word in its own right.

What other forms does typosquatting take?

There are other ways that hackers can leverage the broader category of cybersquatting, but typosquatting is mainly about misspellings. The common theme of cybersquatting is that it targets users who type website addresses directly into the URL bar rather than using a search engine like Google or Bing. When you enter a URL directly, you need to know the exact website you're looking for, including the TLD (top-level domain). This is because some hackers will leverage similar-looking domains and redirect users to a phishing website. In some cases this will be a typosquatting attack that relies on spelling errors; for example, .om is the TLD for Oman, so a user who drops the c by mistake can land on a malicious website. In other cases there will be no spelling mistakes at all. Users may legitimately think they need a .com website when the business in question actually uses .co.uk or another regional or local top-level domain. In that case, eager attackers can buy the other likely domain name, set up a fake website, and just wait for browsers to walk right into the trap. Other instances of cybersquatting include:
- Mixing the order of words in the URL: for example, if the website in question is Bed, Bath and Beyond, the attacker might buy bathbedandbeyond.com, assuming that some users will get mixed up.
- Adding punctuation to confuse browsers: for example, adding a hyphen into a website name, so that facebook.com becomes face-book.com. This can confuse users, especially those who are in a hurry.
- Adding or removing a plausible word in the URL: think about if ebay.com was ebaysell.com, or if Wikipedia.com was Wikilearn.com. These definitely sound like legitimate URL names.
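Defenders often enumerate the same kinds of typo variants that attackers register, so they can buy or monitor them pre-emptively. Here is a minimal sketch (the function name is our own) that generates one-typo variants of a domain label — omissions, adjacent transpositions, and duplications:

```python
def typo_variants(name: str) -> set[str]:
    """Generate simple one-typo variants of a domain label:
    character omissions, duplications, and adjacent transpositions."""
    variants = set()
    for i in range(len(name)):
        variants.add(name[:i] + name[i + 1:])                # omission, e.g. "amazon" -> "amazn"
        variants.add(name[:i] + name[i] * 2 + name[i + 1:])  # duplication, e.g. "amazon" -> "amazzon"
    for i in range(len(name) - 1):
        # adjacent transposition, e.g. "hsbc" -> "hbsc"
        variants.add(name[:i] + name[i + 1] + name[i] + name[i + 2:])
    variants.discard(name)  # a duplication of identical neighbors reproduces the original
    return variants

print(sorted(typo_variants("hsbc")))  # includes "hbsc", the example above
```

A real typosquat-monitoring tool would also add adjacent-key substitutions and homoglyph swaps, but even this small set covers the HSBC-to-HBSC style of transposition described earlier.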
Is a homographic attack the same as typosquatting?

Actually, in many ways a homographic attack is the opposite of typosquatting, though it still falls under the category of cybersquatting. Here's how it works. An attacker buys a domain that is indistinguishable to the human eye from the legitimate brand's website. This is done with characters from other alphabets, or with something as simple as exchanging a lowercase L for a capital I. When users see this URL written down or linked from their email, they have no reason to presume it isn't safe to click on. In this way, unlike other forms of cybersquatting, homographic attacks do not rely on the user making a mistake or using the URL bar directly. In fact, using the URL bar directly is the way to avoid being taken in by a homographic attack: go to the address bar and type in the URL manually, rather than clicking on a link.

How can you protect against typosquatting?

Unfortunately, there is no easy way to protect against this kind of threat, but the best practice is to use a legitimate search engine to find the websites you need, and never to click on links from emails. We all make typos from time to time, so double-check the URL if you are typing it directly, and if anything seems at all unusual about the website you're visiting (for example, if it has its own typos or grammatical errors, or if the page seems poorly designed), stop and recheck the URL immediately. If you're worried about typosquatting attacks on your own business, which could have a devastating impact on your reputation, it can be helpful to buy similar domains yourself and redirect them back to your own main website. Make sure to register your brand as a trademark so that if you need to take legal action against cyber attacks of this kind, you have legal standing for your case. Looking for more cybersecurity tips from Atera?
Check out our recent webinar by our CISO, Oren Elimelech – it’s chock full of actionable ideas for keeping your business secure. See Atera in Action RMM Software, PSA and Remote Access that will change the way you run your MSP Business
Researchers have proposed a swarm of collaborative robots as a better means of exploring Mars. Malek Murison reports.

Mars is a long way away. At its closest to Earth, it is 33.9 million miles away, and at its furthest, 250 million miles. NASA's preferred figure is 140 million miles, with Earth's closer orbit to the Sun lapping Mars every 2.16 years, providing the space agency with launch windows between planets whose relative positions are constantly changing. Whichever measurement you choose, it's clear that exploring Mars comes with enormous challenges. For human beings, these include a one-way trip lasting six to eight months of zero gravity and radiation exposure, during which time they would have to keep themselves fed, hydrated, and constantly exercised to prevent atrophy of body and mind. Then they would have to land safely – tricky in a thin atmosphere – survive on Mars long enough to make the project worthwhile, launch from the surface back into space, and make the journey home in the same hazardous conditions. That means carrying enough food, water, and fuel to survive a return trip that may be even longer than the first. Finally, it may take weeks or months to adjust to life back on Earth.

So far, these obstacles have proved insurmountable, especially since the last time astronauts ventured beyond Earth orbit was 46 years ago, when Apollo 17 touched down on the Moon. This is why all explorations of Mars to date have been via telescope, space probe, or landing a robot on the surface – several attempts at which have failed. NASA's solar-powered, 185-kilogram (408 lb) Mars rover, Opportunity, is therefore a stunning scientific and technological achievement, and to date it has spent more than 5,000 days roaming the planet. But while there is no doubting the scale of NASA's Mars progress to date, a single rover moving slowly over the surface is not an efficient way to explore a planet.
NASA is well aware of this, and has invited research teams to submit alternative methods as part of its Innovative Advanced Concepts program, which aims to "nurture visionary ideas that could transform future NASA missions with the creation of breakthroughs radically better, or entirely new aerospace concepts."

Harnessing the power of swarms

Among the 25 shortlisted proposals are plans to develop a swarm of small, flying robotic drones, called Marsbees. An article published by the University of Alabama's Chang-kwon Kang provides details of a robotic program that could "increase the set of possible exploration and science missions on Mars by investigating the feasibility of flapping-wing aerospace architectures in a Martian environment." Put simply, NASA wants to see whether a swarm of small, flying reconnaissance robots could operate in tough Martian conditions – including the planet's much thinner atmosphere – or whether the idea belongs in the realm of science fiction. The proposed system would use a Mars rover as a kind of beehive – the home base where recharging takes place. The Marsbees might be around bumblebee size, with wings the size of a cicada's. Each robot would be fitted with sensors and wireless communication devices. Should the concept prove successful in tests, the exploration of Mars could benefit from a swarm that creates an adaptable, resilient sensor network. Environmental samples and data collection could be handled by single Marsbees, or by groups working collaboratively.

Additional reporting: Chris Middleton.

Internet of Business says

Developing the Marsbee concept will bring together expertise from both the US and Japan, with the greatest challenge being the physics of winged flight in the thin Martian atmosphere. Fortunately, the team from Japan has already developed similar technology, highlighted by one of the only hummingbird micro air vehicles (MAV) in the world.
The University of Alabama team will now work to optimise the technology to suit the atmospheric conditions on the red planet.
Session hijacking is a concern when using the internet. Whether you're browsing a news website, a social media network, or any other site, you'll send and receive data with a server. Sessions allow for the exchange of data between your browser and the server. Sessions, however, can be hijacked. The good news is that you can prevent session hijacking by taking some basic precautions.

What Is Session Hijacking?

Session hijacking is a cyber threat in which a hacker intercepts or "predicts" the token for a user's session. Sessions allow users to communicate with websites; networking protocols are based on sessions. A token is essentially a digital key that authenticates the user's identity.

The Pitfalls of Session Hijacking

If you're the victim of session hijacking, you may have your personal information stolen. The hacker will essentially be able to log in and use the website to which you're connected under your identity. If you have any personal information on the website, the hacker may access and use it for nefarious purposes. Session hijacking can also lead to phishing: the hacker may modify the content of the website to which you're connected for phishing purposes. Some phishing schemes are conducted over email, but others are conducted over websites via session hijacking.

Tips to Prevent Session Hijacking

Choosing Hypertext Transfer Protocol Secure (HTTPS) websites will lower your risk of being targeted with session hijacking. HTTPS is a networking protocol; it's essentially an upgraded, more secure version of HTTP. Both protocols are based on sessions, but HTTPS adds encryption. When you connect to an HTTPS website, your data is automatically encrypted in transit, which makes session hijacking far harder: even if a hacker intercepts your session token, he or she won't be able to read it. You can also use a Virtual Private Network (VPN) to prevent session hijacking.
A VPN is an application that creates a secure private network over the internet. Like the HTTPS networking protocol, it will encrypt your data. VPNs are designed to encrypt data so that hackers can’t access or use it. Installing antivirus software on your computer will better protect you from session hijacking. Session hijacking has many different causes. One of the leading causes, though, is malware. There are certain types of malware that are designed specifically to steal cookies and, thus, hijack users’ sessions. With antivirus software, you can keep this and other forms of malware off your computer.
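On the server side, the main defense against "predicted" tokens is to generate them from a cryptographically secure random source rather than an ordinary random-number generator. A minimal sketch using Python's standard library (the helper name is our own):

```python
import secrets

def new_session_token(nbytes: int = 32) -> str:
    """Return an unpredictable, URL-safe session token.

    secrets draws from the operating system's CSPRNG, so future tokens
    cannot be predicted from previously observed ones (unlike the
    default `random` module, which is not suitable for security).
    """
    return secrets.token_urlsafe(nbytes)

token = new_session_token()
print(token)  # 43 URL-safe characters encoding 32 random bytes
```

Pairing unpredictable tokens like this with HTTPS-only, `Secure`/`HttpOnly` cookies addresses both of the attack paths the article describes: guessing the token and intercepting it.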
IoT Security: Tips on How to Make Your Smart Devices Really Secure

5G is going to change the game for industrial operations. We'll be able to access more data than before, faster than ever before. For manufacturers, though, one of the primary advantages will come in the form of IoT devices. IoT devices like industrial smart sensors will operate differently on a 5G network thanks to the improved signal strength. 5G networks will allow us not only to monitor a specific shipment, but also to run far more sensors per shipment. Say, for example, that you manufacture microchips. In the past, it was impossible to place sensors with each microchip. We had the technology to incorporate the sensors into the chips; what we didn't have was the signal strength to differentiate between all the different sensors on our networks. 5G will change that. The improved signal strength also means that coverage over rural areas will improve. As a result, we'll be able to incorporate more time-saving IoT devices onto the network. The convenience comes at a cost, though: those devices present a cybersecurity risk. Why? IoT devices are attacked around 5,300 times a month. IoT devices like scanners are typically considered low-risk because they don't contain a lot of sensitive company information. Paradoxically, that's precisely why you must secure them. Because they're considered low-risk items, their built-in security might be less than ideal. And while they don't store sensitive information, they link through to your company's internal systems. Welcome to the digital age, where everything is connected. Hackers might not even go as far as the computer. They might hack the smart TV in the boardroom to spy on meetings. They've also been known to hijack IoT devices and use them as part of a bot army to conduct large-scale attacks. Your company's own devices could be used in a DoS attack on your servers. Now, let's look at ways to secure our IoT devices.
Consider Creating Your Own 5G Network

It's an expensive option, but it gives you complete control over your security protocols and over who has access to your systems. Naturally, you'd still need sophisticated cybersecurity protection, but a private network is a good option for those who can afford it.

Use Top-Level Encryption

Be prepared to devote more of your annual cybersecurity budget to encrypting your data and your network. 5G makes things faster for you, and for the cybercriminal as well.

Consider Partitioned Networks

This might sound a little paranoid, but it'll help keep your company more secure. Create a secondary network to keep your IoT devices separate. At some point, the systems will need to interact, but you can create higher security levels at these points. Use the same policy when you have to give third-party vendors access to your systems: bring them in as guest users with minimal privileges and access.

Change the Default Passwords of IoT Devices

Manufacturers of IoT devices usually set a simple, generic password for all their devices. This makes things a lot easier for them, but also for hackers, who have access to the same manufacturer data that you do. When you buy new devices, make sure to change the password. If you're dealing with huge quantities of devices, this could be time-consuming; an alternative might be to run those devices on a separate network, as outlined above.

Password Security is Just as Important on IoT Devices

Again, this tip depends on the number of devices in use. You don't want to change the passwords on twenty thousand sensors, for example. That said, there's no excuse not to do so for devices like scanners, POS devices, printers, and so on. Start with a strong password for each device, and don't use the same password for every device. If a hacker gets one password, they've got them all. What classifies as a secure password?
- At least 16 characters
- A mixture of numbers, letters, and special characters
- At least one uppercase and one lowercase letter
- A random selection of characters

Check the Individual Settings for Every Device

This seems a little tedious, but it's simple enough if you check the settings while setting up the device. It's best to enable the highest-level security settings that are practical for your company.

Disable Features You Won't Use

Are there features that your company won't use on its devices? Most features are enabled automatically. Check through these and see which ones you can disable.

Update Your Software

All company software must be updated regularly. Cybersecurity software should be updated daily, but don't forget programs like Office, Adobe, and so on.

Check Devices that You Already Own

In the sixties, a computer took up a whole room. Today we have flash drives with higher capacities. It's time for companies to check their inventories and consider updating their technology. The problem with outdated technology is that the company making it may stop providing support. Aside from that, the security built into older technology can also become obsolete. See if the latest models offer better security options.

Enable Multi-Factor Authentication

If the manufacturer gives you the option of multi-factor authentication, use it. It might add a step or two to the process, but it makes it harder for a hacker to gain control of your device. Even if they do figure out the password, they won't have the second authentication factor.

Reduce Physical Access

You secure your devices against theft, but do you secure them against cybercriminals using them to plant malware? Companies are usually cautious about who enters their server rooms, but how much attention do they pay to their POS devices, scanners, and so on? Keep access to all devices as restricted as possible as a further precaution against bad actors.
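The secure-password criteria listed earlier translate directly into a small generator. The sketch below (the function name and symbol set are our own choices) uses Python's secrets module, guarantees at least one character from each class, and shuffles the result so the guaranteed characters don't sit in a predictable order:

```python
import secrets
import string

SYMBOLS = "!@#$%^&*()-_=+"  # an example special-character set; adjust to device limits

def strong_password(length: int = 16) -> str:
    """Generate a password meeting the criteria above: at least 16
    characters, mixing digits, symbols, and upper- and lowercase letters."""
    if length < 16:
        raise ValueError("use at least 16 characters")
    pools = [string.ascii_lowercase, string.ascii_uppercase, string.digits, SYMBOLS]
    # One guaranteed character from each class...
    chars = [secrets.choice(pool) for pool in pools]
    # ...then fill the rest from the combined pool.
    combined = "".join(pools)
    chars += [secrets.choice(combined) for _ in range(length - len(chars))]
    # Shuffle with a CSPRNG so class positions aren't predictable.
    secrets.SystemRandom().shuffle(chars)
    return "".join(chars)

print(strong_password())
```

Generating a distinct password per device this way also satisfies the "don't reuse passwords" rule, provided each generated value is stored in a password manager rather than memorized.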
Physical access also extends to your company's Wi-Fi signal: hackers often rely on physical proximity to find a company's Wi-Fi network, and with 5G technology that network will have a wider range. It's time to start looking at ways to reduce the range of the signal linked to your primary systems.

Overall, IoT security is a lot like standard cybersecurity and should be taken just as seriously. Use multiple layers of protection and increase the overall security of each device to keep your company data safe. More resources on cybersecurity here.

This article was written by Nikola Djurkovic, the Editor in Chief at carsurance.net. He has worked in the insurance industry as an assistant underwriter, and through his experience he learned the inner workings of insurance policies, from quotes to renewals, becoming a real insurance guru. He's passionate about helping other people navigate the complex structures of insurance coverage and understand its intricate terminology. He is also an avid reader, movie/series binge-watcher, and nature enthusiast.
Updated: Jun 14 With Aaron Elder

In this article, you will learn how to create and utilize a database view in ServiceNow.

What Do I Need?

You might be wondering what type of scenario would call for this. A pretty common scenario for database views is where you want to report on a specific type of record, but related information is stored in a separate table. The most common example would be relating metrics to incidents in a report. Here's a hypothetical situation: let's say a user tries to report on the longest resolve times for incidents. Because they're just a user, they try to do that against the regular incident table, but they don't get the results they want, so they reach out to you. Because you're the awesome administrator and know exactly what table you need for that, you create a report for them.

Creating the Report

For this situation, we create a report and title it "Incidents with longest resolve time." Keep in mind, we have not created our database view yet. To create the report, we click Next, choose to show bars, group by the incident ID, sum our duration, and click Next. We have now just run our basic starter chart from our metric instance table. One thing to notice is that it has pulled in both the incident and the problem. The next step is to filter this down. When choosing the conditions, let's run the following: definition table is incident. After that, we'll hit run so that it's narrowed down. We can filter the initial chart further by the fields that are on the incident. Let's say the user wants incidents from specific assignment groups. Because this is a document ID type field pointing at the incidents, we're not able to dot-walk back into the incidents to filter that way. Now we're starting to hit the limits of the information that we can extract from just one table, and we need to look at doing some type of SQL join in order to bring multiple records together and get more robust reporting.
Incident Metric Database

Let's go ahead and look at our database views. There is an incident metric database view already created out of the box, but for this training exercise, let's remake it (reference video). Once we've created our top-level view, which is a virtualized database table, we bring in the tables that make up the view. The first one will be our metric definition table. We need to put a variable prefix on our view tables so that when we reference a database field from a script, the system knows exactly which table in the view it should pull that information from. If the tables you're pulling together have fields with the same name, this becomes confusing for the system; that's what the variable prefix is for. Since this is a metric definition, we'll call that prefix MD. The where clause is essentially a SQL join condition: the metric definition's table field equals incident. Notice that the prefix is used inside the where clause so that the system knows exactly which field to pull from.

Now repeat this process with the metric instance table. Give that a variable prefix, and make sure to specify the order, because the tables are joined in order. Its where clause sets the metric instance's definition field equal to the metric definition's sys_id. The next step is to bring in the incident table, assign it a variable prefix, specify the order, and specify the join clause: the metric instance's id field equal to the incident's sys_id. After that is done, the database view is created!

Incident Metric Database

It's time to run the report against the database view (reference video). We can see now that our incident fields are being pulled in, because the tables are joined through our where clauses. Filtering is now more robust; for example, we can filter by assignment group if our instances have that information available.
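ServiceNow assembles the view internally, but the three where clauses amount to a plain SQL join. Here is a minimal sketch using Python's sqlite3, with illustrative table and column names rather than the exact ServiceNow schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE metric_definition (sys_id TEXT, "table" TEXT, name TEXT);
CREATE TABLE metric_instance  (sys_id TEXT, definition TEXT, id TEXT, duration INTEGER);
CREATE TABLE incident         (sys_id TEXT, number TEXT, assignment_group TEXT);

INSERT INTO metric_definition VALUES ('md1', 'incident', 'Resolve time');
INSERT INTO incident VALUES ('inc1', 'INC0001', 'Network'), ('inc2', 'INC0002', 'Database');
INSERT INTO metric_instance VALUES ('mi1', 'md1', 'inc1', 120), ('mi2', 'md1', 'inc2', 45);
""")

# The three where clauses of the view become the JOIN conditions of one query.
rows = con.execute("""
    SELECT i.number, i.assignment_group, mi.duration
    FROM metric_definition md
    JOIN metric_instance mi ON mi.definition = md.sys_id  -- metric instance definition = metric definition sys_id
    JOIN incident i ON mi.id = i.sys_id                   -- metric instance id = incident sys_id
    WHERE md."table" = 'incident'                         -- metric definition table = incident
    ORDER BY mi.duration DESC
""").fetchall()
print(rows)  # [('INC0001', 'Network', 120), ('INC0002', 'Database', 45)]
```

Because every incident row is carried along by the join, fields like assignment_group become filterable in the result, which is exactly what the view gives the report.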
This is the scenario where you want to use a database view: when you want to pull data from multiple tables into your reporting. A database view can also be used in a back-end script; just make sure that when you use GlideRecord against your database view, you also use your variable prefixes before column names so that the system knows which view table to pull those from.

Did you find this Creating and Using a Database View in ServiceNow article helpful? Are you ready to start your journey with ServiceNow? If you want to find out more information about GlideFast Consulting and our ServiceNow implementation services, you can reach out to us here.

About GlideFast Consulting

GlideFast is a ServiceNow Elite Partner and professional services firm that provides tailored solutions and professional services for ServiceNow implementations, integrations, managed support services, application development, and training. Reach out to our team here.
Nanotechnology Is The New Standard For The Healthcare Industry

Many researchers predict that the global nanomedicine market will reach US$482.99 billion by 2027, growing at a CAGR of 11.9 percent from 2020 to 2027 as the realm of medtech products changes.

Fremont, CA: Nanotechnology is a branch of science concerned with creating objects on the scale of atoms and molecules. This technology is typically used to improve and revolutionize technologies across various industry sectors, such as homeland security, medicine, transportation, food safety, and environmental sciences. It has been gaining momentum and has begun to take hold in the healthcare sector.

How Does Nanotechnology Transform the Healthcare Industry?

Nanomedicine: Medical nanotechnology is used in the treatment and diagnosis of various diseases, using nanoparticles in medical devices as well as nanoelectronic biosensors and molecular nanotechnology.

Nanobots: Nanobots are micro-scale robots used as miniature surgeons. Inserted into the body, they can repair and replace intracellular structures. By replacing DNA molecules, they can also correct genetic deficiencies or even eradicate diseases.

Nanofibers: Nanofibers are used in wound dressings and surgical textiles, as well as in implants and tissue engineering.

Nanotech-based wearables: Cloth-based nanotechnology is a new but increasingly popular method of monitoring patients remotely. Nanosensors embedded in the cloth of such wearables record medical data such as heartbeat, sweat components, and blood pressure, and alert the wearer and medical professionals to any adverse changes in the body.

There are still many hurdles to be cleared when it comes to the application of nanotechnology in healthcare. There is a need for more research into nanotechnology's long-term effects on the environment.
Authorities need to set clearer guidelines regarding nanotech-based devices and potential health risks.
The binary system works in essentially the same way, with the only difference being that it has just two digits. These are expressed as 0 and 1, and every number in the binary system is a combination of 0s and 1s. The binary system is essential in modern electronic technology because any electronic circuit can have only two possible states, on or off. In computer terms, 1 conventionally represents on and 0 represents off. So all the information in a computer is stored and transmitted as sequences of binary digits, i.e., 0s and 1s. Computers can't understand our language; they understand only their own. A computer's language is made up of combinations of 0s and 1s, as those are the only two digits it can work with. Because a computer can understand only 0s and 1s, a special coding technique had to be devised so that numbers, letters, and other characters could be converted into a computer-understandable format. This coded form of numbers, letters, words, etc. is called "The Binary Number Concept".

Binary Number System

In the binary number system, the base is 2 and only the numerals 0 and 1 are required to represent a number. The numerals 0 and 1 have the same meaning as in the decimal system, but a different interpretation is placed on the position occupied by a digit. In the binary system, the individual digits represent the coefficients of powers of 2 rather than powers of 10 as in the decimal system. For example, the decimal number 19 is written in binary as 10011:

10011 = 1×2⁴ + 0×2³ + 0×2² + 1×2¹ + 1×2⁰ = 16 + 0 + 0 + 2 + 1 = 19

A binary digit is often referred to by the common abbreviation bit. Thus, a bit in computer terminology means either a 0 or a 1.
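The positional expansion above can be checked with a short Python sketch:

```python
def to_binary(n: int) -> str:
    """Convert a non-negative integer to its binary representation."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))  # each remainder is the next bit, least significant first
        n //= 2
    return "".join(reversed(bits))

def from_binary(bits: str) -> int:
    """Expand binary digits as coefficients of powers of 2."""
    return sum(int(b) * 2 ** i for i, b in enumerate(reversed(bits)))

print(to_binary(19))         # 10011
print(from_binary("10011"))  # 19, i.e. 16 + 0 + 0 + 2 + 1
```

Python's built-in `int("10011", 2)` and `bin(19)` do the same conversions; the hand-rolled versions just make the powers-of-2 arithmetic explicit.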
Who are you? What are you allowed to do? In tech circles, those two questions are formally known as authentication ("who are you?") and authorization ("what are you allowed to do?") and are collectively referred to as "auth". Clearly, service providers have a vested interest in making sure that they know the correct answers to both of those questions; they don't want to allow the wrong user to log into an account and they also don't want to allow users to interact with data that they are not permitted to access. Authentication and authorization are shared responsibilities between service providers and end-users. This shared model is necessary to achieve the ultimate goal: avoiding unauthorized account access and keeping data secure. Neither service providers nor end-users can achieve that goal single-handedly. Service providers are responsible for implementing strong, easy-to-use security features for end-users. They are responsible for documenting how those features work, maintaining them over time, and promoting their use. In turn, users are responsible for taking advantage of those security features and actually enabling them. A user-facing feature doesn't provide any effective security if it goes unused. Service providers are also responsible for implementing transparent security mechanisms on behalf of end-users. However, regardless of how much effort is taken to make a service secure, there will always be some action that careless users can take to put their account at risk. Users are responsible for educating themselves about the best practices for staying safe online so that they can avoid common pitfalls that are outside the control of service providers. Let's explore some specific examples to clarify why service providers and end-users both have a role to play in achieving effective security when it comes to authentication and authorization.
Imagine that you are a service provider

You have a team of the most talented engineers, designers, writers, researchers, marketers, and do-ers of all kinds. You have overwhelming internal support and the necessary budget to do everything within your power to build an inherently secure service. It sounds like you're set up for success! Surely, you'll reach the ultimate goal of achieving effective security, right?! Frustratingly, even after taking all of these necessary steps to secure your service, end-users play a huge role in determining whether their interaction with your service is actually secure. Here is a (very) non-exhaustive list of security-related projects that your team might have tackled:

You implement password best practices

- You enforce minimum password requirements and explain to users how to create a strong passphrase.
- You prevent users from using commonly known passwords like "password", "123456" and "qwerty".
- You follow best practices for securely storing passwords in your database.
- You even have a recommendation on your signup page for users to download and use a password manager.

And then a user ends up reusing their favorite password that they use for every other service. One of those other services isn't as diligent as you are, stores the password insecurely, has a breach, and now the user's password is in the hands of a hacker. That hacker happily tries the user's password on your service and is pleasantly surprised when the login succeeds. This scenario is a scary reality. LastPass found that "91% [of people] know there is a risk when reusing passwords, but 61% continue to do so." Google analyzed 1.9 billion usernames and passwords exposed via data breaches and estimates that "7-25% of exposed passwords match a victim's Google account." For a real world example, read about how a Dropbox employee's password reuse led to the theft of over 60 million user credentials.
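As a sketch of the "securely storing passwords" practice, here is a minimal salted hashing scheme using only Python's standard library. The storage format and iteration count are illustrative choices, not a production recommendation; real services typically reach for a vetted library such as bcrypt or argon2.

```python
import hashlib
import hmac
import os

def hash_password(password: str, *, iterations: int = 600_000) -> str:
    """Derive a salted PBKDF2-HMAC-SHA256 hash; store this string, never the password."""
    salt = os.urandom(16)
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return f"pbkdf2_sha256${iterations}${salt.hex()}${dk.hex()}"

def verify_password(password: str, stored: str) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    _, iters, salt_hex, dk_hex = stored.split("$")
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(),
                             bytes.fromhex(salt_hex), int(iters))
    return hmac.compare_digest(dk.hex(), dk_hex)

record = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", record))  # True
print(verify_password("123456", record))                        # False
```

The random per-user salt is what makes a leaked database resistant to precomputed rainbow-table attacks; the high iteration count slows down brute-force guessing.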
You configure HTTPS so that user connections are secure

- You make sure that all connections to your service are served over HTTPS so that the green lock appears in the browser address bar to inform users that the connection is secure.
- You use an EV certificate so that your company name appears next to the green lock in the browser address bar for additional user trust.
- You make sure that you support only the recommended versions of TLS, configure the strongest cipher suites and important features, and get high ratings from respected TLS analysis tools.

And then a user gets an email that looks like it came from you, clicks the link asking them to log in, and doesn't verify the URL in the address bar before entering their username and password. They've just fallen victim to a phishing attack and handed their credentials over to a hacker. The Anti-Phishing Working Group (APWG) stated in their 2016 Q4 report that phishing attack campaigns in 2016 shattered all previous years' records with over 1.2 million phishing attacks, a 65% increase over 2015. To give an example closer to home for all of those Gmail users out there, Google recently found that "victims of phishing are 400x more likely to be successfully hijacked compared to a random Google user."

You support multi factor authentication (MFA)

- You support the best implementations of two factor authentication (2FA).
- You support the best biometric authentication options available in the industry.
- You implement adaptive authentication and dynamically change security and authentication policies based on user and device context.
- You allow administrators to force all users within the organization to enable multi factor authentication (MFA) on their accounts.

And then an administrator chooses not to check the box next to "require MFA on all accounts" and users in the organization choose not to enable MFA. That password leaked in the phishing attack now grants full access to the user's account.
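For context on what an authenticator app actually computes, the time-based one-time passwords behind many 2FA implementations follow RFC 6238 (TOTP), built on RFC 4226 (HOTP). A minimal sketch using only the standard library, checked against the RFCs' published test vectors:

```python
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): HMAC-SHA1 over a counter, dynamically truncated to N digits."""
    msg = struct.pack(">Q", counter)  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F        # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """TOTP (RFC 6238): HOTP where the counter is the current 30-second window."""
    t = int(time.time()) if for_time is None else for_time
    return hotp(key, t // step, digits)

# RFC 6238 test vector: at Unix time 59, the 8-digit SHA-1 code is 94287082.
print(totp(b"12345678901234567890", 59, digits=8))  # 94287082
```

Because the code depends on a shared secret (a possession factor) plus the current time window, a phished password alone is not enough to log in, which is exactly the gap the essay describes.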
Duo, a leading provider of two factor authentication (2FA) solutions, found that only ~28% of people in the US use 2FA and only ~56% even knew what 2FA was before their survey. Pew Research Center conducted a cybersecurity quiz in which only 18% of users could correctly identify multi factor authentication (MFA). The adoption rates of MFA are disappointingly low and general education is clearly lacking. However, even tech folks have experienced account hijacks that could have been prevented by MFA. ArsTechnica reported that Fox-IT, a Dutch IT firm, "fell victim to a well-executed attack that allowed hackers to take control of its servers and intercept clients' login credentials and confidential data. The biggest lapse on Fox-IT's part was the failure to secure its domain register account with two-factor authentication."

You specifically address organizations with multiple users

- You allow organizations to provision accounts for each individual user.
- You implement robust audit capabilities so that administrators can verify which employees accessed certain data and how they used it.

And then users decide to share a single account and password instead of creating one for each employee in the company. Perhaps one of those employees uses the shared account for something nefarious; good luck figuring out who is responsible! Laughably, a similar scenario played out recently when UK politician Damian Green got into hot water for allegedly accessing pornography on his government computer. A colleague, Nadine Dorries, jumped to Green's defense, implying it could have been someone else on his PC using his identity. My staff log onto my computer on my desk with my login everyday. Including interns on exchange programmes.
For the officer on @BBCNews just now to claim that the computer on Green's desk was accessed and therefore it was Green is utterly preposterous!! — Nadine Dorries (@NadineDorries), December 2, 2017

Troy Hunt, a well known security expert, explained why sharing a single account in this way is problematic and also highlighted many common tools which allow users to maintain individual accounts while still sharing data to get their jobs done.

You provide a robust permission framework to help restrict access to data

- You implement a robust permission framework which allows administrators to limit sensitive data to smaller groups of users based on their roles.
- You have a team of tech writers who develop useful tutorials, documentation, and examples of how to get the most value out of the permission framework.

And then a user decides to assign everyone in the organization the administrator role. Now, all users have access to all data within the organization and the intern decides to access sensitive customer data just for fun.

Imagine that you are a highly motivated user

You have taken the initiative to research and understand all of the cybersecurity best practices recommended by industry experts. Where appropriate, you have spent a small amount of money so that you have the best tools available to help you stay secure. It sounds like you're set up for success! Surely, you'll reach the ultimate goal of achieving effective security, right?! Frustratingly, even after taking all of these necessary steps to secure your account, the service provider still plays a massive role in determining what level of security you can actually achieve. Here is a (very) non-exhaustive list of steps you might have taken to remain secure:

You are prepared to enable multi factor authentication (MFA)

- You purchase a few U2F security keys that work with your computer and your phone so that you can leverage the strongest two factor authentication (2FA) available on all of your devices.
- You look forward to enabling Push Notification 2FA, such as Google Prompt, where available.
- You are ready to swipe your thumb, scan your iris, or take a photo of your face while enabling biometric authentication options.

And then the service provider claims that the knowledge questions they require you to fill out qualify as 2FA when combined with your username/password and that your account is secure. Of course, you know that knowledge questions count as a duplicate knowledge factor rather than real 2FA and are so weak that they border on useless anyway. Your account remains highly vulnerable, since you cannot enable a possession or inherence factor of authentication to achieve actual 2FA. How about losing millions of dollars for a real world example? Hackers stole millions of dollars worth of bitcoin from a victim's account that had SMS-based 2FA enabled. Wait, what?! They got hacked even with 2FA enabled? Unfortunately, not all 2FA implementations provide the same level of security, and SMS is notoriously one of the weakest. The victim's account likely would have been safe with a strong type of 2FA, but imagine how trivial it would have been for hackers if no 2FA was enabled at all!

You use a password manager

- You use a password manager to ensure that each service has a unique strong password.

And then the service provider stores your password insecurely in the database, has a breach, and your password is now in the hands of a hacker. Since the service provider doesn't implement actual 2FA, your password and some easily answered knowledge questions allow hackers to easily hijack your account. It is shocking and frightening that, even with all of the readily available resources that cover in great detail the best practices for storing passwords in a database, some service providers throw security out the window entirely and store passwords in plain text.
In late 2015, Troy Hunt reported that 000webhost.com, a low priced web host, was hacked and lost 13 million user emails and plain text passwords. Similarly, The Hacker News reported in mid 2016 that VK.com, Russia's biggest social networking site, lost 100 million email addresses and plain text passwords in a breach.

You install browser extensions to secure your web traffic

- You install HTTPS Everywhere in your browser so that your connections to websites default to secure HTTPS as much as possible.

And then the service provider only supports deprecated TLS versions, such as SSLv3, or maybe doesn't even support HTTPS at all. Now anyone on the same network as you can easily record the data you send to the service, such as your username and password; the hacker sitting next to you in the coffee shop now has your login credentials. Let's Encrypt, a project that offers free TLS certificates, reports that the Web went from 46% encrypted page loads to 67% in 2017 - a gain of 21 percentage points in a single year! This is truly encouraging news, but it also highlights that a third of the Web is still using HTTP and is entirely unencrypted! Hardenize reports that only 38% of the top 500 websites are using well configured HTTPS; 19% of the top 500 are using configurations with severe deficiencies. As recently as late 2017, a UK bank called NatWest was serving its homepage over insecure HTTP.

You try to follow the principle of least privilege

- You work with the security and compliance teams at your company to document which information employees need access to in order to efficiently do their jobs. You sit down to configure the service to enforce those permission policies.

And then the service doesn't support the concept of an organization with individual user accounts and you're forced to have multiple employees share credentials to a single account. You don't have any ability to limit access to data based on an employee's department or role.
Everyone has access to everything! Also, you cannot enable 2FA for the account since it is shared between many people and they cannot all share the same trusted device, thumbprint, or iris.

We're all in it together

With a steady cadence of massive data breaches making front page news, cybersecurity is more present in the public conversation now than ever before. In addition to the less known examples discussed in previous sections, there are many high profile cases where fault lies clearly with the service provider. Equifax exposed the data of 147 million people after it failed to patch server software for two months. Hackers stole the data of 57 million customers and drivers when Uber engineers shared a secret credential in a code repository on GitHub. Sadly, there are too many examples to list here; it is quite clear that service providers have a massive amount of work to do in order to fulfill their responsibilities of implementing secure systems. Unsurprisingly, users are having trouble trusting service providers to keep their data safe. However, as we've discussed, users also have an important responsibility to educate themselves to avoid contributing to the problem. According to a 2017 Pew Research Center survey, "many Americans do not trust modern institutions to protect their personal data – even as they frequently neglect cybersecurity best practices in their own personal lives." Tenable conducted a survey in late 2017 which draws similar conclusions on the sad state of cybersecurity literacy among users. They wrap up their report by highlighting the same tenet of the Shared Responsibility Model discussed in this essay: "[Organizations] need to lead the way in basic security practices that keep their customer and critical business data safe.
But individuals must do their part, too – both as consumers and, in many cases, as employees of those same enterprises – and that starts with cyber literacy.” Authentication and authorization are shared responsibilities between service providers and end-users. All Things Auth is dedicated to helping service providers and end-users alike navigate those shared responsibilities and make progress towards the ultimate goal: avoiding unauthorized account access and keeping data secure. Neither group can do it alone. We’re all in it together.
You’d better hope so. Ann Cavoukian knows a thing or two about privacy. For 17 years, she served as Ontario’s provincial Information and Privacy Commissioner and was outspoken on a variety of topics, ranging from government surveillance powers to online ad tracking. Now, as executive director of the Privacy and Big Data Institute at Ryerson University, she’s got as much to say as ever about how privacy can survive amid a seismic shift in information architecture. Cavoukian, who will deliver a keynote speech at SecTor next month, has turned her eye toward two related emerging technologies: Big Data and the Internet of Things. The Internet is becoming a constellation of billions of devices, many of which will work autonomously, gather data from their surroundings, and communicate in peer networks. Big data capabilities will be necessary to understand all of the information that they generate, and to realise real-world benefits. Big data and the IoT may offer huge benefits, but concerns about the privacy implications are already emerging. Data streams are becoming fatter, and faster. Software is connecting disparate data sources and enabling organisations to draw new conclusions about people. Isn’t that at least a little worrying? Having your cake and eating it, too Cavoukian is convinced that big data and privacy can co-exist. Privacy isn’t a zero-sum game, she suggested, arguing instead for a “positive sum” solution, in which technology users can have innovation and privacy in one go. The way forward, she believes, involves a concept of her own design. In the 1990s, she invented the term Privacy by Design (PbD), which contains seven principles ranging from the early mitigation of privacy issues when developing IT systems, through to the adoption and integration of privacy-enhancing technologies. Organizations can enjoy innovation and privacy by adopting the principles of PbD, according to Cavoukian. 
“It requires a lot more creativity and innovation, but the end result is so much superior, because you end up getting big data and analytics with privacy embedded into the system,” she said.

De-identification lies at the heart of a PbD approach to big data. This process strips personally-identifiable information (PII) out of data sets, leaving analytics systems to concentrate on processing aggregated numbers. “There are now so many standards and protocols out there to show you how to do it,” she said. “Once you do that, then you’re free to engage in data analytics and connect the data in a variety of ways. The sky’s the limit. That’s what enables privacy and big data, or privacy and IoT. You can, you must be able to do both”.

De-identification has had some bad press over the years, though. In several cases, researchers have been able to cross-reference large data sets to re-identify personal information that had been stripped out. In 2006, one of the most famous cases embarrassed Netflix. The firm had released the movie-watching history of 500,000 customers in anonymous form, as part of a competition to improve its recommendation engine. University of Texas researchers homed in on their identities using statistical techniques. Netflix subsequently cancelled the competition. Just today, Latanya Sweeney, the same Harvard professor who nailed the medical records of Massachusetts Governor William Weld, revealed how she used news stories to find the identities of patients in anonymous patient data. Researchers have also been able to re-identify individuals who provided genetic material, and pinpoint driver information from poorly-anonymized taxicab logs.

That’s the point, argued Cavoukian: they’re poorly anonymized. “In each of those, a dozen cases, no more, the protocol that was used to de-identify the data was weak,” she said, drawing a comparison with weak vs strong encryption. “So I completely reject that premise. It’s nonsense.
It’s only easy to re-identify the data if you haven’t done a good job of de-identifying it at the source.” For some, the jury is still out. In his paper, Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization, Paul Ohm, former senior policy advisor to the Federal Trade Commission and professor of law at Georgetown University Law Center, suggested that privacy and utility are fundamentally opposed in the context of data science. “So long as data is useful, even in the slightest, then it is also potentially reidentifiable,” he said. “Moreover, for many leading release-and-forget techniques, the tradeoff is not proportional: As the utility of data increases even a little, the privacy plummets.” Jane Yakowitz, associate professor of law at Arizona’s James Rogers College of Law, carries an opposing view. “The risks imposed on data subjects by datasets that do go through adequate anonymization procedures are trivially small,” she said in her paper, The Tragedy of the Commons, citing the low probability of an adversary existing and the level of identification risk from other sources. As our understanding of the techniques evolves, new concepts come to light, such as differential privacy, which aims to maximize the accuracy of data queries while minimizing the chance of re-identification. And scientists are also making progress with homomorphic encryption, in which we can get results from encrypted data without decrypting it. One subset of homomorphic encryption, called ‘somewhat homomorphic encryption’, promises new opportunities here, Cavoukian said. “The things that prevented people from using this in the past, which was that it was cumbersome and required a lot of computing power, have been addressed through somewhat homomorphic encryption, which preserves the value of homomorphic encryption but speeds it up considerably,” she said. This is already in use in some projects such as MIT’s Enigma secure cloud initiative. 
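The classic building block behind differential privacy is the Laplace mechanism: perturb a query's true answer with noise scaled to the query's sensitivity, so no individual record measurably changes the output. A toy sketch for a count query, using only the standard library (data and parameters are illustrative):

```python
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Sample from Laplace(0, scale) via the inverse CDF."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(records, predicate, epsilon: float, rng: random.Random) -> float:
    """Answer a count query with epsilon-DP: a count has sensitivity 1, so scale = 1/epsilon."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon, rng)

ages = [23, 35, 41, 29, 52, 38, 44, 31]
rng = random.Random(42)
noisy = private_count(ages, lambda a: a >= 35, epsilon=1.0, rng=rng)
print(noisy)  # a value near the true count of 5, perturbed by Laplace noise
```

Smaller epsilon means more noise and stronger privacy; larger epsilon means more accurate answers, which is exactly the utility-versus-privacy trade-off debated in the surrounding text.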
One eye on the future, the other on the past One way or another, big data and privacy must co-exist in the future. When they do, how will we deal with the past? History is fraught with poorly-designed systems, but one of the promises of big data and the Internet of Things is that we get a do-over. We build new systems that work together in new ways, and because we’re designing from the ground up, we learn from our mistakes and code them properly. That’s the dream. Even if we’re that well-organized, the world is full of legacy systems with the privacy transgressions already baked in. What of those? “Legacy systems are fraught with problems, not just in terms of privacy, but in terms of all the security-related issues, and the procedural issues. Over time, they’re going to have to be redesigned anyway. They’re going to have to be upgraded,” Cavoukian said. That old tin won’t be going away that quickly, though, and while it’s here, we’re going to have to cope with it. In 2011, working with W.P. Carey School of Business associate professor Marylin Prosch, Cavoukian conceived Privacy by Redesign, an attempt to articulate the redesign of those systems. What does this look like in practice? It may require that less data be collected, and may see some databases being slimmed down, the document said. “It could involve building in additional code that would embed de-identification processes into data that is to be reused for secondary purposes,” Cavoukian added. There’s no doubt that we’re at a pivotal point in the privacy discussion. With new analytical models and rapid innovations in the kinds of devices connecting to the Internet, the stakes are higher than ever. If organizations don’t get privacy right as they design and implement IoT and big data technologies, we may find ourselves struggling to staunch the flow of data after the fact. Interested in finding out more?
Register at SecTor, which takes place at Metro Toronto Convention Centre in downtown Toronto on October 20-21, with a training day on October 19.
How Cobwebs Helps Authorities to Manage Social Distancing During a Pandemic with Location Intelligence

January 18, 2021

Since March 2020, governments worldwide have been struggling with policies to keep the COVID-19 pandemic contained. One of the policies that has proven to be effective (also during previous pandemics) is social distancing. To be effective, government agencies need to collaborate to manage social distancing throughout the country. To find out whether social distancing is being observed and is effective, government agencies need to collect, process, and analyze open-source data from online sources such as social media platforms to get actionable insights. This is crucial for keeping a watchful eye on relevant local hot spots over a long period of time. But collecting and analyzing these huge amounts of data is time-consuming and requires a lot of manpower – a luxury that government agencies just don't have. That's why an AI-powered smart solution is needed that can keep a watchful eye on, and provide a global view of, not only local emergencies but also global ones like the COVID-19 pandemic. Such a solution can harvest relevant and/or location-based discussions and sift through vast amounts of data to extract relevant mentions that can be used for an optimal response to the COVID-19 pandemic and other crises and emergencies. Such a system must constantly evaluate information sources and follow social media content on a wide range of pages, including comments and real-time posts and feeds on multiple web platforms. Furthermore, the search results must be presented in a geographically relevant way – ideally on an easy-to-use interactive map. That's why government agencies need a location intelligence system that can collect, process, analyze, and present geospatial data for public safety purposes such as social distancing. The Webloc solution from Cobwebs is able to use geolocated intelligence derived from enormous amounts of location-based data in the complex web ecosystem.
This cutting-edge location solution automatically reveals and analyzes location-based data using interactive maps. It can find and extract location-based intelligence and provide actionable insights from masses of hidden and complex data signals. By connecting open-source web data with live, real-world information, it provides authorities with comprehensive intelligence to address the issue of social distancing.

The geospatial intelligence platform reveals real-world insights about locations, people, and data related to social distancing. Our unique capabilities contribute to public protection by automatically analyzing location-based information, enabling the production and dissemination of intelligence, and contributing to investigative reports. Webloc is designed to meticulously race through and scan endless digital channels in the web ecosystem, collecting and analyzing huge amounts of location-based data to help authorities gain insight into the status and effect of social distancing.

In short, Webloc is a cutting-edge, location-centric solution that provides access to vast amounts of location-based data in any specified geographic area. It harvests relevant or location-based discussions, sifts through vast data, and extracts pertinent mentions that support an optimal response to measures such as social distancing. By constantly evaluating information sources and following social media content from a wide range of pages, it generates and presents geographically representative search results, including from real-time feeds on multiple social media platforms. In other words, Webloc uses innovative, cost-effective, and comprehensive technologies to provide a critical solution for handling COVID-19 policies such as social distancing.
In general, Webloc reveals real-world threats to public safety using non-intrusive methods, enabling law enforcement and national security agencies to benefit from geolocated intelligence to prevent isolation and social distancing violations. By analyzing publicly available big web data, Webloc meticulously races through and scans endless digital channels across the web ecosystem, generating effective intelligence for follow-up.
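The article doesn't describe Webloc's internals, but the core idea it relies on, filtering open-source posts by geographic proximity to a point of interest, can be sketched generically. The post format, coordinates, and radius below are illustrative assumptions, not Cobwebs' actual API:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def posts_near(posts, center, radius_km):
    """Keep only geotagged posts within radius_km of the point of interest."""
    lat0, lon0 = center
    return [p for p in posts
            if haversine_km(p["lat"], p["lon"], lat0, lon0) <= radius_km]

posts = [
    {"text": "crowd gathering at the park", "lat": 52.370, "lon": 4.895},
    {"text": "quiet street",                "lat": 51.924, "lon": 4.478},
]
hotspot = posts_near(posts, center=(52.372, 4.900), radius_km=2.0)
print([p["text"] for p in hotspot])  # ['crowd gathering at the park']
```

A real system would pull such posts continuously from platform APIs and plot the matches on an interactive map rather than printing them.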
Artificial intelligence (AI) was supposed to be the panacea for a huge range of business ills, but it doesn't always produce positive, actionable outcomes. Why is this? More and more organizations now use AI, machine learning (ML), or data analysis in their operations, yet the business value from these implementations simply isn't being realized. There is a range of reasons why AI is flailing rather than surging ahead. People struggle to understand what AI really does, and its limitations. Often these misunderstandings are media-driven, with sensationalist or overly simplified reporting. However, these data science myths are also trotted out by many people who claim to know what they're talking about.

8 Myths About AI

1. AI will magically fix all your problems

AI promises great things: increasing revenue, decreasing costs, identifying fraud before it happens, and taking away all your repetitive and monotonous work. But organizations that go into AI and ML with grand dreams will often find the reality falls short. AI adoption should be a gradual, incremental process. Organizations should start with projects like improving processes, increasing customer satisfaction, or automating business processes. Over time, as capabilities and understanding of AI grow, it can be used to tackle the big financial challenges. Remember the Pareto Principle too: 80% of the outcomes come from 20% of the inputs. There's no need to delve into granular details and problems that aren't going to yield measurable results. They simply bog the system, and your staff, down.

2. Machine learning is about 'thinking like a human'

Humans are complex creatures, and our brains are incredibly complicated. We use heuristics all the time, 'rules of thumb' that we've learned through years of experience. We learn stereotypes that allow us to make snap judgements but aren't necessarily correct. We don't want computers to think like humans, because humans have faulty thinking.
The truth is that machine learning is about making predictions from data. If that data is of poor quality, the results won't be objective. Garbage In = Garbage Out. Machine learning simply learns the biases in the data, and the assumptions the team makes.

Why does this matter? Bias in algorithms, data, and the team can result in measurable losses to the business. For example, banks use AI to make decisions about who to lend money to. Who is a risk? Who will be more likely to pay back their loan on time? We know that much data is biased, so machine learning based on that data will be flawed too. Historically, men were the mortgage holders. Women applying for loans were routinely turned down for reasons that had nothing to do with their financial means or ability to pay back the bank. If AI were to look at that data, it doesn't say 'Oh, I see the banking system was historically patriarchal'; it says 'women are turned down for loans more often, therefore women's mortgage applications should be rejected'. Remember the Apple Credit Card fiasco that gave a husband a line of credit 20x higher than his wife's? Women are actually a lower credit risk than men, on every measure. Women pay loans on time and default less. Therefore, if a bank implements ML that is flawed, it is not only loaning money to higher-risk men but also missing out on the income that low-risk women would provide. The other side of this is that laws mean you can't discriminate on the basis of gender; it's illegal to consider gender when determining creditworthiness. But gender-blind credit lending can still discriminate against women. Machine learning is about learning from data, which is often flawed, biased, and far from objective. Most of the time, the bias isn't overt; it hides in subtleties and proxies in the training data set.

3. AI is plug and play

With all the SaaS programs and big promises made by software companies, you'd be forgiven for thinking that AI is easy.
Just put the data in, and the machines will whiz through the information and spit out what you want to know. No coding knowledge needed! But even if staff understand the program, there is a lot of work that needs to happen first.

Data cleansing: There's data, and there's good data. There's no point putting in huge amounts of data if it's wrong or incomplete, if the sample is too small, or if it's recording the wrong information altogether.

Understanding the outcome of the task: When a business or client says they want a drill, a good data scientist knows they actually want a hole. Not only that, but they know whether the data will be able to provide that information.

Domain knowledge: The reality of data science is that the industry is lacking talent; there simply aren't enough quality data analysts and scientists. There's a lack of trained and experienced staff, and this is hampering AI's effectiveness and uptake in the market. Organizations don't have (or can't find) appropriately skilled data scientists as staff, and so outsource to third-party providers. Relying on external vendors is only a short-term fix; domain knowledge is vital to produce accurate results.

4. Machine learning predicts the future

This is true, if the future is exactly the same as the past. ML trains on historical data and makes predictions based on the theory that exactly the same thing will happen again. There's more to ML than just making predictions, though. You can use it to create business insights and simplify processes, to add new products or features, as well as for forecasting. If you don't use ML to change the behavior of your business decisions, what's the point?

5. Predictions automatically get better over time

ML uses different algorithms, called models, to create predictions. The minute you put a model into production, it starts degrading. This is because data can change, the environment can change, and people change. The model itself, however, stays the same.
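This degradation can be illustrated with a deliberately simple drift check: compare live data against the distribution the model was trained on and alert when the live mean moves too far away. The z-score threshold and sales figures below are invented for illustration; production systems use richer statistics such as the population stability index:

```python
from statistics import mean, stdev

def drift_alert(train_values, live_values, z_threshold=3.0):
    """Flag drift when the live mean sits far outside the training distribution."""
    mu, sigma = mean(train_values), stdev(train_values)
    z = abs(mean(live_values) - mu) / sigma
    return z > z_threshold, round(z, 2)

daily_sales_train = [100, 104, 98, 101, 97, 103, 99, 102]
daily_sales_live  = [118, 122, 119, 121]   # a holiday week the model never saw

alert, z = drift_alert(daily_sales_train, daily_sales_live)
print(alert, z)  # True 7.96 -- the live data no longer looks like the training data
```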
This is why models need to be retrained from the start, or replaced with better-fitting models. This degradation is often due to data drift: whatever the model is trying to predict is changed by unforeseen variables. For instance, if you're predicting sales in a physical store, other variables need to be taken into account, such as the weather, what holidays are coming up, and what your competitors are doing.

An example of concept drift is a skin cancer diagnostic system that misses skin cancers because of ignored variables. The machine knows to look for raised edges, irregular shapes, and changes over time, which alert the clinician to a suspected cancer. However, if the machine does not consider the color of the skin (due to sun exposure or race), there will be false negatives.

Covariate shift, a failure to generalize, is another problem that plagues models. If the data used to train a model came from one population, perhaps a western, wealthy country, the model overfits to that group. For other groups and unseen data, the predictions will not be accurate because the model doesn't generalize well.

Measures must be taken to prevent model degradation. ML performance must be monitored after deployment. If the model degrades, either restructure it or try another, better-fitting model. It might need new features added or parameters changed. This is called continuous learning: if predictions are to stay accurate, they need to be checked and adjusted.

6. Machine learning is about delivering higher accuracy

Accuracy is good, but on its own it doesn't indicate performance. A model with 51% accuracy could predict lottery numbers correctly and win you ten million dollars. A model with 99% accuracy could give a false negative when predicting a fraudulent loan application, resulting in huge losses. ML works on probabilities, not certainties. Much like the constant reassessment needed for models, results need to be checked for precision.
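The accuracy-versus-business-value point can be made concrete with a toy fraud-screening comparison; all counts and dollar costs below are invented for illustration:

```python
def business_cost(tp, fp, tn, fn, fp_cost, fn_cost):
    """Accuracy alone hides the asymmetric price of each error type."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    cost = fp * fp_cost + fn * fn_cost
    return accuracy, cost

# Fraud screening on 1,000 applications: a false negative (missed fraud)
# costs far more than a false positive (an unnecessary manual review).
acc_a, cost_a = business_cost(tp=40, fp=60, tn=890, fn=10, fp_cost=50, fn_cost=10_000)
acc_b, cost_b = business_cost(tp=48, fp=150, tn=800, fn=2, fp_cost=50, fn_cost=10_000)

print(f"Model A: {acc_a:.1%} accurate, ${cost_a:,} lost")  # 93.0% accurate, $103,000 lost
print(f"Model B: {acc_b:.1%} accurate, ${cost_b:,} lost")  # 84.8% accurate, $27,500 lost
```

The "worse" model by accuracy loses far less money, because it catches more of the expensive false negatives.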
How many false negatives versus false positives? What's the business value of these errors? How much potential revenue did you lose? Is the system not discriminating enough, overloading your sales team with too many leads, or is the team twiddling their thumbs because the system is too fussy and rejects too many?

7. AI and ML are replacing people

Yes, and the sky is falling, Chicken Little. Every time there's a big, threatening change, people panic that jobs will be lost. This creates resistance to the adoption of AI, as people dig their heels in to resist job insecurity. One study showed that 38% of people expect technology to eliminate jobs at their workplace in the next three years. It's predicted that by 2030, up to 20 million jobs in the manufacturing sector will be lost to robots. Those are some scary numbers. The truth is that AI and ML are augmenting people. They are taking over boring, repetitive tasks and allowing people to get on with creative, unpredictable, more complex work. AI should work hand in hand with humans to make positive changes in the workplace. We can look back at the industrial revolution to see what the future holds for the AI revolution. That significant overhaul of almost everything about work in the 18th and 19th centuries did not cause long-term, widespread job loss and suffering. People always find new jobs (often after a period of painful adjustment), and fear of mass unemployment is ill-founded. While AI will cause some job losses, any losses are expected to be offset by new jobs created in a stronger, wealthier economy. Automation and AI will change jobs and lives; that is inarguable. But for the most part, those changes will be positive.

8. The more data the better for machine learning

GI:GO. If you're feeding the machine irrelevant information, data that hasn't been cleansed, or data that is plain wrong, the results are going to reflect that.
Data scientists say that about 50% of their role is cleansing data, and there's a reason for that. Even the cleverest machine can't create insights from faulty data.

The benefits of data science could be massive

This one isn't a myth: the business outcomes from data science, when done well, could be all those things that were promised. Faster, better, stronger: the superman of organizations. But to use AI at an organizational level, there needs to be a broader understanding of what it can do and where it isn't going to be useful. Otherwise, it's simply another of those 80-something percent of data science projects that never get off the ground. To get ROI on your data science investment, be realistic about what AI can do, and apply it judiciously to well-defined projects with great data. While that doesn't sound as enticing (and easy) as plug-and-play-and-predict-the-future, it's a far more successful strategy for getting the results AI can deliver.
Do you know who's reading the information you post on your Facebook page? A recent Consumer Reports study found that 13 million U.S. Facebook users don't use the website's privacy controls! It may not seem like a big deal, but making information available to people outside your personal network of friends, family and coworkers could leave you vulnerable to something known as social engineering hacking.

What is social engineering hacking?

It's a mix between old-school face-to-face conning and technology-based hacking. A Forbes article published last month dubs social engineering "hacking the human mind," because it doesn't rely entirely on malicious computer code to exploit victims. A social engineer collects information about the target's lifestyle habits and personal preferences to design attacks that specifically target the individual. According to the Forbes article, one of the ways social engineers collect personal information to use against their targets is through social media. Armed with this information, social engineer hackers can mix in traditional hacking methods to compromise security and access personal information. For example, social engineers can collect work history information from a target's LinkedIn profile and use it to design attacks that trick users into clicking malicious links or downloading software that infects their computers.

Risks extend beyond your computer

Your privacy controls protect more than just your computer's health; they also protect you. The Consumer Reports survey also found a 30 percent increase in the number of people who said they had Facebook-related trouble. Some users even said they had been harassed. The problem goes back to privacy settings and limiting the amount of personal information you publish. Unless you methodically manage the information you post, even applications that your friends are using may be able to access more of your profile than intended.
How to protect yourself against social hacking

With hackers trying to hack both you and your computer, it's easy to be a little intimidated. The Chicago Tribune recently published an article highlighting ways to increase social media security. One of the big ones is to keep your antivirus software and browsers up to date. Because social media applications have access to your profile, review your applications on a regular basis and delete the ones you don't use. Remember to block applications you don't recognize. And if you see an interesting story posted to your friend's wall and aren't sure of the source, use Google to find the story on a reputable news source, so you don't unintentionally click your way to a malicious website.
When a security flaw is revealed in a popular piece of software, the internet becomes ablaze with excitement. Affected parties rush to do research so they can protect themselves, hackers scramble to exploit any vulnerabilities, and software engineers bunker down to craft defenses that can protect consumers and stop bad actors in their tracks. Out of every kind of software on the market, however, web browsers are among the most commonly used. When a security hole opens up in a browser, the consequences can be vast and widespread throughout the internet, and that's something nobody wants. If you frequently browse the web on your computer, you may want to check for an update. One of the most popular browsers on the market was revealed to have a critical security flaw that can compromise your computer, and worst of all, it even affects Mac and Linux. We have the latest details on this new security threat and how you can update this browser to protect your system.

Google Chrome security flaw affects users on every operating system

In terms of usage, Google Chrome is the most popular browser on the internet. It absolutely dwarfs its competition, making it a high-profile target for ambitious hackers and cybercriminals. With so many users to potentially attack, cracking Chrome has been a long-sought goal. And now, a new security vulnerability makes that scenario a frightening possibility. According to new reports from researchers at the Center for Internet Security, a vulnerability was detected in Google Chrome that could easily be exploited by code on a malicious website. If a user were to accidentally stumble upon such a website, cybercriminals could execute code that would allow for a full system takeover. Once inside, the hackers could make changes, install programs and even create new accounts with administrator privileges. Because this exploit is at the browser level, it doesn't matter which operating system you're using.
Macs, PCs and even Linux computers all share the same risk of compromise. In response to this revelation, Google has since released version 76.0.3809.132 of Chrome, which patches this flaw alongside two others that were not previously reported.

Am I at risk? How can I protect my computer?

If you use Google Chrome on any type of desktop or laptop computer, your system is at risk of compromise if you choose not to update your web browser. As for iOS and Android users, the report specifically states that these mobile browsers do not share the same risk as their computer counterparts, so mobile web browsers are safe for now. Updating Chrome, however, isn't too complex a process. Simply click on the three-dot icon in the upper right-hand corner of your browser window, hover over Help and click on About Google Chrome. On the page that appears next, your update should automatically download. Once it's complete, click Relaunch to restart your browser and apply the changes. This emergency update underscores just how critical it is to keep your software and equipment as up to date as possible. Even though some software launches can be buggy or have holes of their own, the rewards outweigh the risks in terms of protecting your computer from harm. As always, however, make sure to back up your data before performing any drastic updates. It's less of an issue for a web browser, but it's always worth doing for that extra peace of mind. Stay safe out there.
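For anyone scripting a patch audit across many machines, the key detail is to compare dotted version strings numerically rather than alphabetically (alphabetically, "76.0.3809.9" would sort after "76.0.3809.132"). A minimal sketch, using the fixed version named in the article:

```python
def is_patched(installed, fixed="76.0.3809.132"):
    """Compare dotted version strings component by component, as integers."""
    to_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return to_tuple(installed) >= to_tuple(fixed)

print(is_patched("76.0.3809.132"))  # True  -- exactly the patched build
print(is_patched("76.0.3809.100"))  # False -- still vulnerable
print(is_patched("77.0.3865.75"))   # True  -- any later release
```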
Mines are unique operating environments with highly specific health and safety challenges. In particular, underground mining operations typically experience low-visibility conditions and light pollution from flashlights, vehicle lights and reflective strips on equipment and clothing, making traditional surveillance and safety monitoring difficult. These were some of the challenges facing Jiangzhuang Coal Mine in the Shandong Province of China, which covers an underground area of 43 square kilometres and produces more than 1.8 million tons of coal each year. The top priority for the mine's management team is worker safety, and working practices and production are monitored 24 hours a day to minimise accident risks. Kong Qingwei, Director of the Jiangzhuang Coal Mine Dispatch Office, says, "We need to respond immediately to unsafe situations in the mine, whether they are caused by environmental factors, poorly performing machines or employees not following authorised work procedures." Although the mine invests heavily in safety training and equipment for workers, its aging surveillance system made health and safety monitoring difficult in key areas of the mine. "Our previous surveillance system required us to monitor around 30 screens, 24 hours a day, often with sub-optimal image quality caused by low-light conditions or light pollution," says Kong Qingwei. "This made our jobs extremely difficult and tiring, as well as impacting our ability to respond to safety issues quickly enough."

Maximising worker safety

To address its health and safety challenges, Jiangzhuang Coal Mine has implemented an intelligent video surveillance and control system from Hikvision. The Hikvision solution supports crystal-clear video imaging, even in low-light conditions or where light pollution is created by lights or reflective strips.
This quality and clarity of imaging ensures that hidden risks can be identified more quickly and easily, allowing the safety team to respond faster and to protect workers in all areas of the mine. In addition to the improved imaging capabilities, the Hikvision cameras incorporate deep learning technologies to identify and respond to health and safety risks in the mine automatically and in real time. Specifically, the cameras can identify when employees deviate from approved work procedures and send alerts to the safety team so that staff can be deployed before accidents occur. For example, workers are prohibited from coming too close to winches while they are working, due to safety risks, but this is hard to monitor with traditional video cameras. "The new Hikvision system increases worker safety by monitoring the areas around winches and other equipment and by sending alerts if employees get too close," says Kong Qingwei.

Improving worker health and safety with real-time insights

In the first three months of operation, the new Hikvision system identified more than 30 deviations from safe operating procedures. Zhang Liu, Deputy Chief Engineer at Jiangzhuang Coal Mine, says, "In the past, many of these safety risks could have gone unnoticed. However, the Hikvision system has allowed us to identify every incident in real time and to take immediate action to protect our workers, which is a hugely satisfying outcome for us."

Delivering continual improvement

As well as alerting the team to potential security risks in real time, the Hikvision system also records the details of any safety incident for later analysis. "As well as accurately capturing deviations from safe working procedures, the Hikvision system supports playback and download functions," says Zhang Liu. "We can use the insights we record to deliver continual improvement for safety procedures and, ultimately, to support our vision for a 'zero-accident' mine," he adds.
Addressing mining-specific safety requirements

The Hikvision solution is configured to support specific mining-safety applications, such as constant monitoring of surface water levels in different areas of the mine. "Constant seepage from rock formations means that surface water can accumulate in different areas of the mine, which is a problem in terms of potential flooding, damage to infrastructure and worker safety risks," says Zhang Liu. "With the Hikvision system, we can manage surface-water levels constantly and take action to deal with any problems that arise before water levels exceed safe limits," he adds. In addition to surface water management, the Hikvision solution supports improved safety in other potentially dangerous areas of the mine, including inclined tunnels that are used for transporting coal and other materials underground. "The Hikvision system is like an intelligent 'eye' for us in all areas of the mine, helping us to identify potential safety issues in a timely and accurate way and to protect our workers at all times," says Kong Qingwei.

Increasing effectiveness for the safety team

With automated alerts for all manner of potential safety threats, the safety team can be far more effective, with no need to monitor video images constantly. "Instead of looking at grainy images on 30 screens, we can now spend more of our time responding to incidents, supporting workers and keeping them safe," says Zhang Liu. "This is a classic example of how automation can help to improve mine safety, while also reducing the tiring workloads associated with manual monitoring of screens."
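The case study doesn't disclose how Hikvision implements water-level alerting, but the general pattern, smoothing noisy sensor readings and raising an alarm before a safe limit is breached, can be sketched generically. All class names, limits and readings below are illustrative assumptions:

```python
from collections import deque

class WaterLevelMonitor:
    """Rolling-average alarm: smooths sensor noise and fires near the safe limit."""
    def __init__(self, safe_limit_cm, window=5):
        self.safe_limit = safe_limit_cm
        self.readings = deque(maxlen=window)

    def add_reading(self, level_cm):
        """Record one sensor reading; return True if the smoothed level is unsafe."""
        self.readings.append(level_cm)
        avg = sum(self.readings) / len(self.readings)
        return avg > self.safe_limit

monitor = WaterLevelMonitor(safe_limit_cm=30)
levels = [12, 14, 15, 22, 28, 33, 36, 40]   # hourly readings: seepage building up
alerts = [hour for hour, level in enumerate(levels) if monitor.add_reading(level)]
print(alerts)   # [7] -- the rolling average first exceeds 30 cm at hour 7
```

Averaging over a window trades a little latency for robustness: a single noisy spike won't trigger a false alarm.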
What is the Stop Hacks and Improve Electronic Data Security (SHIELD) Act? How does it affect the residents of New York? What does it mean for the future of companies? Read on.

The past few years have seen data breaches affecting millions of people in ways ranging from harmless to disastrous. High-profile breaches at companies over the past three years alone have resulted in millions of users and individuals being placed at risk, and billions of dollars' worth of data being seized. While the US government has taken some steps towards constructing stronger security frameworks on a national level, individual users must rely on state governments to protect their interests. In this regard, the response has been mixed, but there are positive signs on the horizon. Most recently, the State of New York passed the Stop Hacks and Improve Electronic Data Security (SHIELD) Act, which sets requirements for companies to protect the data of New York residents. The law is one of several that have been passed across the US at the state level with the aim of protecting individuals from companies that are increasingly exposed to threats and repeatedly found to be lacking in both protections and concern. With the damage wrought by breaches also on the rise, these new laws represent a significant change in the status quo for companies that have until now neglected their security and users' privacy.

Shielding Users From Negligent Tech Security

The increasing digitization of most day-to-day services, from e-commerce to paying utilities and even buying groceries, means that users' data is held or partially owned by a variety of companies. Despite this expanded digital footprint, and the easy access malicious actors have to users' information, corporations have been woefully slow to implement security measures that defend against current threats.
Most people still hold the common view that hacks and breaches are perpetrated by lone-wolf hackers and malicious actors sitting alone at their computers typing in lines of code. However, hacking today is far removed from these dated perceptions. Today's virtual attackers have increased their sophistication, especially when it comes to targeting state and enterprise-level targets. Rather than simply attempting to brute-force their way in, today's hacking groups prefer the advanced persistent threat (APT) model. More than a constant stream of attacks, APT refers to long-term campaigns against corporations, enterprise companies, and even state actors, undertaken by large collectives. APT attacks start when groups infiltrate targets' networks and slowly expand their presence. After securing themselves, undetected, within servers and networks, these groups gain full access and can safely extract any amount of data they want or need, as well as do serious harm to existing infrastructure. These attacks have already been wildly successful, and companies have suffered in more than one way as a result. Equifax, for instance, ended up paying nearly $650 million to resolve claims that resulted from its massive 2017 breach, in which 147 million consumers' data was stolen. Elsewhere, Quest Diagnostics was hit with a class-action lawsuit following a breach that saw 12 million patients' personal data leaked, while Capital One received a similar notice for a hack that saw 100 million users' data compromised. Uber reached a settlement with all 50 states to pay a then-record $148 million after it failed to disclose a 2016 data breach.

What the SHIELD Act Means

New York's SHIELD Act seeks to crystallize protections for individuals and set standards for companies that have access to users' private information.
The law clarifies what counts as a data breach (even including "access to data," which lowers the threshold to simply viewing data without authorization rather than obtaining copies of it) and expands the enforcement capabilities and consequences for companies that fail to comply. Some of that language clearly stems from recent high-profile cases such as the Cambridge Analytica fiasco, in which Facebook let the analytics firm access user data without consent. More importantly, the SHIELD Act raises the bar for security requirements, including the ways to test and assess risk vulnerability, the designation of people in charge of network security, and the development of better technical frameworks for security. For companies that already have security systems in place, this means creating better testing standards and tools to evaluate their protection. For those without strong security, it means having to invest in better infrastructure. This will undoubtedly be a positive catalyst for the cybersecurity sector, which is already forecast to experience significant growth over the coming years. More specifically, the market for automated breach and attack simulation testing is set to reach over $720 million by 2024. This sector includes testing for APTs alongside more immediate threats such as DDoS and malware attacks.

Stronger Standards, Safer Experiences

New York's legislation raises the bar on data protection laws with sweeping language that clarifies a previously murky topic. Although most states already have data privacy laws on the books, many of them remain concerningly vague, or simply toothless when it comes to enforcement and actual consequences. The SHIELD Act brings much-needed and welcome clarity to the matter, expanding the definition of a breach and creating a stronger framework for enforcement.
With the number of breaches seemingly on the rise and companies still none the wiser, the SHIELD Act could be a serious motivator for upgrading to stronger security standards and constructing better user protections.
What is an Identity Provider (IdP)?

An identity provider (IdP) is a system that creates, stores, and manages digital identities. The IdP can either directly authenticate the user or provide authentication services to third-party service providers (apps, websites, or other digital services). Simply put, an IdP offers user authentication as a service.

For example, you can use your Google account credentials to log in to Spotify. Here your Google Sign-In is the IdP and Spotify is the service provider (SP). Any website that requires a login uses an IdP to authenticate users. A password or another authentication factor may be used to authenticate the user.

From an IdP perspective, a user is known as a principal. A principal can be a human or a machine. An IdP can authenticate any entity, including devices. The purpose of an IdP is to track these entities and know where and how to retrieve the principal identities that determine whether a person or device can access sensitive data.

What is an IdP workflow?

An IdP enables a user's identity to facilitate access to all their resources, from email to company file management systems. An IdP workflow involves three key steps:

- Request: The user is asked to present some form of identity, such as a username and password or biometric authentication.
- Verification: The IdP checks whether the user has access, and what they have access to.
- Unlocking: The user is given access to the specific resources to which they are authorized.

What is a service provider (SP) and how does it work with an IdP?

A service provider is the entity that provides the service being accessed, whereas an IdP is the entity that creates, stores, and manages identities, as well as the ability to authenticate a user. Both SPs and IdPs are part of federated identity management (FIM), in which users are allowed to use the same verification method to access different resources.
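The request/verification/unlocking workflow above can be sketched in a few lines of Python. This is a toy illustration only: a real IdP would use salted password hashing (e.g., bcrypt) and standard protocols such as SAML or OIDC, and the principal names and resources below are invented.

```python
import hashlib

def _hash(password: str) -> str:
    # Toy credential hash; real systems use salted, slow hashes.
    return hashlib.sha256(password.encode()).hexdigest()

class ToyIdP:
    def __init__(self):
        # principal -> (password hash, set of authorized resources)
        self._principals = {}

    def register(self, principal, password, resources):
        self._principals[principal] = (_hash(password), set(resources))

    def authenticate(self, principal, password):
        """Verification step: check the presented credential."""
        record = self._principals.get(principal)
        return record is not None and record[0] == _hash(password)

    def authorize(self, principal, resource):
        """Unlocking step: grant access only to permitted resources."""
        record = self._principals.get(principal)
        return record is not None and resource in record[1]

idp = ToyIdP()
idp.register("alice", "s3cret", {"email", "file-share"})
print(idp.authenticate("alice", "s3cret"))   # True
print(idp.authorize("alice", "email"))       # True
print(idp.authorize("alice", "payroll"))     # False
```

Note that the same object answers both "who are you?" (authentication) and "what may you access?" (authorization), which is exactly the pairing an IdP provides as a service.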
FIM is achieved through standard protocols like SAML, OAuth, OpenID Connect (OIDC), and SCIM. The IdP establishes a trusted relationship with an SP by sharing identities and authenticating users across domains. For example, when a user attempts to access a third-party app (SP), the request is sent to an IdP like Entrust Identity as a Service (IDaaS). The IdP authenticates the user's identity and indicates to the SP, using a SAML assertion, that the user is verified and has permission to access the service.

What are the benefits of having an IdP?

There are several benefits, including:

- Stronger authentication: An IdP can provide tools and solutions that ensure secure access across apps, websites, and other digital platforms, such as risk-based adaptive multi-factor authentication (MFA).
- Simplified user management: Another solution most IdPs provide is single sign-on (SSO), which saves users the hassle of creating and maintaining multiple usernames and passwords.
- Bring Your Own Identity (BYOI): With BYOI, users can access services with identity credentials they already have (e.g., Google, Outlook) instead of creating new ones. This further improves the efficiency of onboarding and managing users while still maintaining a high level of security.
- Better visibility: An IdP maintains a central audit trail of all access events, making it easier to prove who is accessing what resources and when.
- Reduced identity management burden: The SP does not need to manage user identities, as that becomes the IdP's responsibility.

Types of Identity Providers (IdP)

SAML is an XML-based markup language used for authentication via identity federation. SAML is a ubiquitous protocol supported by various service provider applications such as Office 365, Salesforce, Webex, ADP, and Zoom.

SSO is an access management function that enables users to log in with a single set of identity credentials to multiple accounts, software, systems, and resources.
For example, when an employee enters their credentials to log in to their workstation, they are also authenticated to access their apps, resources, and cloud-based software.

Use Cases for Identity Providers (IdP)

Identity providers (IdPs) can help solve several administration headaches that businesses face. With an identity service provider, long lists of usernames and passwords are virtually eliminated, administration is simplified, and there is a detailed paper trail of access attempts should an issue arise. Most consumers are familiar with apps that offer the option of logging in by tapping a button that connects that account to the user's Facebook or Google account. The concept is similar in the business world, with a few added benefits. First, compliance is simplified with an audit trail of all access events. Second, businesses can reduce IT costs by upwards of 20% by reducing helpdesk time spent on password resets.

Is Entrust IDaaS the right IdP solution for your business needs?

Yes. Entrust Identity as a Service (IDaaS) is a cloud-based identity and access management (IAM) solution that includes multi-factor authentication (MFA), credential-based passwordless access, and single sign-on (SSO). Offering an exhaustive set of IAM capabilities, IDaaS is the right IdP to maximize your protection with its Zero Trust approach to security.

What is identity and access management?

Identity and access management (IAM) is a framework of security policies and technologies that ensures the right entities can gain access to the right resources at the right time. An entity can be a person or a device. Resources include applications, networks, infrastructure, and data. IAM can apply to workforce, consumer, and citizen use cases. IAM is based on the premise of establishing and maintaining trusted digital identities. With IAM, organizations are able to authenticate and authorize entities to grant secure access to the right resources.
In addition, trust is maintained over time with adaptive risk-based authentication, which presents a step-up challenge when conditions warrant.
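The step-up behavior described here can be reduced to a simple idea: combine risk signals into a score and escalate the authentication requirement as the score rises. The signals, weights, and thresholds below are invented for illustration and are not any vendor's actual logic.

```python
# Hypothetical adaptive-authentication sketch: score a few risk
# signals and decide whether to allow, challenge (step-up), or deny.

def risk_score(new_device: bool, unusual_location: bool, off_hours: bool) -> int:
    score = 0
    if new_device:
        score += 40
    if unusual_location:
        score += 40
    if off_hours:
        score += 20
    return score

def decide(score: int) -> str:
    if score >= 80:
        return "deny"
    if score >= 40:
        return "step-up"   # e.g., require an OTP or push approval
    return "allow"

print(decide(risk_score(False, False, False)))  # allow
print(decide(risk_score(True, False, True)))    # step-up
print(decide(risk_score(True, True, True)))     # deny
```

Real adaptive systems replace the hand-tuned weights with behavioral and device telemetry, but the allow / step-up / deny decision shape is the same.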
What is single sign-on (SSO)? We explain how SSO works and why you need it

You might think that using the same credentials for everything means a bigger chance of a data breach. After all, aren't we always being warned not to reuse passwords, to avoid compromising a long string of accounts rather than just one? Not when it comes to single sign-on, or SSO.

Usually used in a business context, SSO is an authentication method and just one component of identity and access management (IAM), a security strategy that gives users access only to the business applications they need for work, so that any hackers only get so far within a victim's network. SSO allows your organisation to control access through a single log-in portal that then gives your employees access to all approved applications within your business. As the use of cloud applications, hybrid work, and the sophistication of cyber attacks grow, this tech is especially helpful for replacing many of the on-premises security measures that are no longer as effective. So should your organisation adopt a single sign-on platform as part of its security strategy?

How does SSO work?

SSO solutions hold your credentials and identity data in a single identity repository, or identity store, giving you access to all the apps and services your organisation has given you permission to access. When you log in with an identity provider, such as logging into a site via Facebook or Google, the provider verifies your identity and passes along an authentication token to the site you're trying to access. The idea is that once you're logged in via the identity provider, it's the token that gets you seamless access to all permitted sites and services, rather than a different set of credentials each time.

The benefits of single sign-on

Still wondering how having one password instead of multiple means stronger security rather than weaker?
Implementing SSO offers your organisation a plethora of benefits. One is that, with only one password to remember, users can create stronger ones and are less likely to fall back on previous or simpler passwords to save time. Instead, they save time by not having to sign in to different apps and websites multiple times a day, or waste time on password recovery for all the passwords they're otherwise forced to keep track of.

In addition to an improved user experience, SSO saves administrators time and headaches by giving them central management of a variety of security controls. From one platform, you can set required password complexity, how often users have to reset their passwords or re-enter them to confirm they're still active, which apps and websites users have access to, and more. It also makes it easier to implement multi-factor authentication (MFA), which improves security by requiring users to confirm their identity through other avenues, such as a code received by text. Instead of identifying and launching MFA on each app, you simply need to set it up for one portal.

The drawbacks of single sign-on

There are still a few issues with SSO that you need to consider before adopting it. You run the risk of employees still using easy-to-guess passwords, which then gives a hacker access to all applications once they have that one password. As mentioned earlier, you can prevent this by setting requirements for password complexity, or by using MFA. The centralised server that makes management so much easier can also cause everyone to lose access to their applications if it goes down. This makes it a prime target for attackers, and arguably a single point of failure. However, by filling the security gaps ahead of time, you can reduce the risk of a breach happening and the damage any successful breach can cause, while still reaping the benefits of better security, user experience, and efficiency.
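The token handoff at the heart of SSO can be illustrated with a deliberately simplified sketch: an identity provider signs a token naming the authenticated user, and a service provider verifies the signature before granting access. Real deployments use standards such as SAML assertions or OIDC ID tokens (signed JWTs), typically with public-key signatures rather than the shared-secret toy format below; the secret value is made up.

```python
import base64
import hashlib
import hmac
import json

SHARED_SECRET = b"demo-secret"  # hypothetical; real systems use key pairs

def issue_token(user: str) -> str:
    """Identity provider: sign a payload naming the authenticated user."""
    payload = base64.urlsafe_b64encode(json.dumps({"sub": user}).encode())
    sig = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def verify_token(token: str):
    """Service provider: accept the user only if the signature checks out."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SHARED_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # reject tampered or forged tokens
    return json.loads(base64.urlsafe_b64decode(payload))["sub"]

token = issue_token("alice")
print(verify_token(token))  # alice
# Flip the last signature character to show tampering is detected:
bad = token[:-1] + ("0" if token[-1] != "0" else "1")
print(verify_token(bad))    # None
```

The point of the sketch is that the service provider never sees the user's password: it trusts the token because only the identity provider could have produced a valid signature.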
What are deepfakes?

Deepfakes are the new frontier of computer-generated fake media. Using machine learning, this synthetic media can be created from a subject's face and voice to produce content convincing enough to scam almost anybody. The most common use is the impersonation of celebrity faces and voices, but convincing deepfakes can be used for a variety of malicious purposes, such as fake news and spreading misinformation.

Deepfakes work through deep learning methodologies. In essence, a deepfake is a complex face swap done using Generative Adversarial Network (GAN) technology. Using algorithms, it takes high-quality datasets of images or videos of a person and maps their facial expressions onto another person's face. Once this has been done, the editing begins: facial expressions are mapped onto the target's face in each frame, and audio clips are placed on top of the video so it sounds as though someone famous is saying something that was never actually said. This creates deepfake content that can pose a serious threat to cybersecurity. Deepfake tools can be acquired through open-source platforms such as DeepFaceLab and used together with visual effects and editing software like Photoshop.

What are deepfakes used for?

Deepfakes are a new technology, and like any new technology, they have caused a lot of concern. Part of the reason is that deepfakes have been used to make a lot of pornography. Studies have estimated that 96% of deepfake videos are pornographic in nature. Most of these are fake celebrity porn videos in which popular women celebrities' faces are mapped onto porn stars' bodies and distributed across popular sites. This is a clear violation of privacy and is seen as cyber-violence against women.

Deepfakes are also used to create funny or entertaining videos by combining one actor's face with another actor's body, or by taking someone else's footage and adding new faces.
This makes them similar to other forms of meme-like entertainment you see on TikTok and other social media platforms, but unlike many memes, deepfakes are almost impossible to detect unless you're looking for them.

Tom Cruise and Other Common Deepfake Videos

When you think of deepfake videos, a few popular ones come to mind that you may have already encountered across the internet. For instance, Tom Cruise deepfakes are shared and 'celebrated' by many TikTok users. The handle @deeptomcruise on TikTok features hundreds of deepfake posts of the famous actor's face. None of them, of course, are real, and yet the account has 3.6 million followers and over 13.7 million likes.

Deepfakes can also be used in Hollywood as a film technique, and this is something that's expected to grow in the future. The beloved return of Mark Hamill as Luke Skywalker in The Mandalorian sparked a lot of buzz about the future of deepfake technology in the entertainment industry.

Furthermore, deepfakes have also been used in the political world. A digitally edited video was posted of the Speaker of the US House of Representatives, Nancy Pelosi, in which she seemed to slur drunkenly through a speech. Even former President Donald Trump posted the clip on Twitter, captioning it "PELOSI STAMMERS THROUGH NEWS CONFERENCE." However, this was later revealed to be a fake. The damage had been done, though, with the video viewed millions of times and Facebook refusing to take it down. Thus, deepfake videos can be used for purposes ranging from mildly entertaining to harmful political initiatives.

Using Deepfakes to Apply for Remote Jobs

We've already seen people attempting to use deepfake technology to create revenge porn or celebrity memes. This means that at some point in the near future, hiring managers could receive fake videos from applicants, a risk to employers and job seekers alike. The scary part is: it's already happening.
One of the most controversial uses of deepfakes is by scammers who steal an applicant's identity from online sources like LinkedIn in order to apply for remote job positions. Here's everything you need to know.

A lot of people don't realize that deepfakes aren't just limited to celebrities; they can be used to impersonate anyone. The FBI reports that some companies have been victimized by scammers who pose as a job applicant, using another person's identity, to gain access to sensitive company information through successful hiring. The Internet Crime Complaint Center has observed an increase in the number of these incidents over the course of the pandemic. In this sense, the risk is two-fold: for the real applicants whose personal information is stolen, and for the company promoting the job opening.

Risks to Organizations

By using this technology, nefarious users can apply for jobs under a false name. Because an applicant's name is less important than their skills, fake applicants can be difficult to spot in the hiring process. This makes it more likely that they will get hired, which means they will have access to the company's proprietary information. This is a security risk because companies invest a lot of time and money into keeping their processes as private as possible. For example, many companies use encryption on their internal networks, which is much less effective if a malicious employee gains access to the network through their job. Additionally, some companies hold sensitive data about their employees or clients on their internal networks. If an unauthorized person gains access to sensitive data, it could be used to attack customers or steal intellectual property from competitors.

What Scammers Are Looking For

It's been widely reported that scammers are using deepfakes to apply for remote jobs.
The scammers use their victims' identities and then make fake resumes, which they send out to prospective employers. It's not just the victims' identities that are used, but even their social media profiles. It's a serious invasion of privacy and highlights some important points about identity theft, pre-employment background checks, and remote work in general. To pull this off, scammers rely on identity theft. They use their victims' names and faces, pretending to be them in real life as well as on the internet. Once they have sent a resume and cover letter to a few prospective employers, they can continue to follow up with these companies as though they were the victim. This is why it is absolutely vital that you keep your personal information safe at all times. Security awareness is of the essence.

Protect your team from deepfakes with Inspired eLearning

With the rise of artificial intelligence and machine learning, it's becoming easier than ever to take an existing video of a person and manipulate their facial expressions to make them appear to say or do things they never actually did. This technology is being used for everything from simple jokes to fabricating fake video evidence of crimes. Staying on top of security awareness tips and knowing how to avoid online harassment is important. While companies have hosted the Deepfake Detection Challenge, and Microsoft, for instance, has announced its own deepfake detection tool, it's always good to be equipped with the tools you need to detect disinformation on your own. Our deepfake course is a valuable resource that gives employees the tools they need to identify fake videos and images so they can avoid the risk of being manipulated. Your team can develop strategies for identifying and avoiding these fakes, and ensure they have the tools to protect themselves from this threat. Get started today!
What about this course?

Python 101 for Network Engineers focuses on building your first steps in network programming using Python. This is a beginner-level course. Python is a high-level programming language that supports sequential as well as object-oriented coding. This makes Python an ideal choice of language for network engineers, who are already familiar with device CLI commands. This course walks you through all the fundamental components of Python that one needs to start coding in Python. Unlike a typical introductory programming course, Python 101 starts with simple data types and gradually dives into object-oriented programming in Python, with network applications wherever possible. The course goal is to equip an individual with basic knowledge of Python and allow them to create scripts applicable to the network.

Instructor for this course

This course is composed of the following modules:

- Strings, Integers & Raw Input
- List, Tuples & Dictionary
- Condition & Loops
- Functions & Modules
- Error Handling, Classes & Objects
- Device SSH using Python

Common Course Questions

If you have a question you don't see on this list, please visit our Frequently Asked Questions page by clicking the button below. If you'd prefer getting in touch with one of our experts, we encourage you to call one of the numbers above or fill out our contact form.

- Do you offer training for all student levels?
- Are the training videos downloadable?
- I only want to purchase access to one training course, not all of them; is this possible?
- Are there any fees or penalties if I want to cancel my subscription?
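As a taste of the fundamentals the modules above cover (strings, lists, dictionaries, conditions, loops, functions, and error handling), here is a short standalone Python sketch with a network-engineering flavor. The device inventory and commands are invented for illustration, not course material.

```python
# Strings, lists, dictionaries, loops, functions, and error handling
# in one small network-flavored example. Device data is made up.

devices = [
    {"hostname": "r1", "ip": "10.0.0.1", "os": "ios"},
    {"hostname": "sw1", "ip": "10.0.0.2", "os": "nxos"},
]

def show_command(device: dict) -> str:
    """Pick a CLI command string based on the device OS."""
    if device["os"] == "ios":
        return "show ip interface brief"
    elif device["os"] == "nxos":
        return "show interface brief"
    raise ValueError(f"unsupported OS: {device['os']}")

for dev in devices:
    try:
        cmd = show_command(dev)
        print(f"{dev['hostname']} ({dev['ip']}): {cmd}")
    except ValueError as err:
        print(f"skipping {dev['hostname']}: {err}")
```

The final module, "Device SSH using Python," typically sends commands like these to real devices over SSH using a library such as Paramiko or Netmiko, which is beyond this sketch.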
Cyber espionage is a type of cyberattack conducted by a threat actor (or cyber spy) who accesses, steals, or exposes classified data or intellectual property (IP) with malicious intent, in order to gain an economic, political, or competitive advantage in a corporate or government setting. It can also be used to harm an individual's or business's reputation.

Cyber espionage does not have to be sophisticated, but it can involve complex tactics and long, patient breaches of a target's network. Common methods of cyber espionage include advanced persistent threats (APTs), social engineering, malware attacks, and spear phishing. The cyber espionage threat landscape is constantly evolving as attacks become more sophisticated. Becoming a victim of cyber espionage can have damaging consequences for an organization's reputation and can erode trust between corporations and their customers.

Cybercriminals target corporate and government entities rich with sensitive information. Targets of cyber espionage include:

- Research and development (R&D) data, employee salaries, and operational data.
- Proprietary plans, sensitive projects, and any property that an attacker could sell for profit.
- Client lists, the services they are provided, and how much they pay.
- Brand names, domain names, logos, unique website designs, and creative assets.

Cyber-spying is a form of corporate espionage. The aftermath of a cyber espionage attack can damage not only customer-company trust but also shareholder confidence. Cyber spies can inflict financial damage on corporations by disrupting their operations. Ramifications of these attacks include theft of competitor marketing strategies used to manipulate unfair market conditions. Attacks can target large and small businesses, as well as individuals ranging from business executives to public figures. Infamously, in 2009, 30 high-profile Fortune 500 companies were targeted by a cyber espionage campaign designed to steal trade secrets.
Among the victims, only Google publicly admitted that it was breached, disclosing that Gmail accounts belonging to Chinese human rights advocates had been compromised. The attack, known as Operation Aurora, is believed to have originated from China.

Operating since 2008, Fancy Bear is a Russia-based cyber espionage group that attacks government and military organizations. Politically motivated, the group targets American electronic systems, including the infamous 2016 spear-phishing attack on the Democratic National Committee (DNC). In 2021, the United States NSA and FBI disclosed that Fancy Bear was behind "widespread, distributed, and anonymized brute force access attempts" against the cloud-based systems of hundreds of government and private sector targets around the world.

Victims of cyber espionage extend beyond the US border. Government agencies, academic institutions, political leaders, and officials around the world can become targets of computer espionage.

Cyber espionage attacks are ongoing and remain relevant today. In January 2022, a Chinese hacking group breached German pharmaceutical and technology companies, targeting high-value intellectual property. With the increase in work-from-home activity, organizations have become harder to defend and are responding by prioritizing cyber-risk and mitigation strategies. Growing more sophisticated and advanced, cyber-spy attacks are increasingly able to evade traditional cybersecurity methods.

Cyber espionage can be a form of corporate espionage. It is a type of cyberattack that involves an adversary (or cyber spy) who steals, corrupts, or damages intellectual property or sensitive data. Cyber espionage is used to cause reputational or financial damage to an organization, individual, or government entity. In cyber-spying attacks, the threat actor steals classified information to seek monetary gain or competitive advantage.
Much of our business and leisure is increasingly moving into the digital domain. While this has made access to information and other necessities more convenient, it has also left us far more exposed to certain digital risks. Look below to discover some of the important cybersecurity trends to watch for in 2022.

Cybersecurity Powered by AI

Artificial intelligence (AI) systems can help fight cyberattacks by detecting patterns of activity that indicate something unusual is going on. Importantly, AI works in systems that must deal with millions of operations per second, the environments that attackers most commonly pursue. Many businesses are turning to these technologies to help them counter more technologically savvy cybercriminals attempting to exploit possible flaws.

Continued Expansion of the Internet of Things

In 2022, the number of linked devices, often known as the Internet of Things (IoT), is expected to reach its highest level ever. The increased usage of mobile and wireless gadgets, which provide tremendous convenience but significant exposure to security risks, has pushed up this figure. Past IoT cybersecurity hacks have involved smart appliances such as fridges and cloud-based voice services such as Alexa. One of the best ways for organizations to counteract these risks is to use Ethernet connections instead of wireless ones for their networked computers and digital devices wherever possible. Another way to help prevent attacks is to educate yourself and others on the potential dangers and the solutions available.

Cyberattacks on the Supply Chain

Supply chains are interconnected systems that transport commodities from producers to businesses such as shops and pharmaceutical companies. Cybercriminals took advantage of COVID-19 to target vulnerable IT systems operated by overburdened IT teams, and they seem poised to continue to do so in 2022.
These attacks are a significant contributor to the existing supply chain bottlenecks, and addressing them is necessary to solve these issues.

Ransomware Threats Continue to Ramp Up

Ransomware infects computers with a virus that encrypts data and threatens to delete everything unless you pay a ransom, usually in an untraceable cryptocurrency. Many of these attacks result from phishing scams in which employees get duped into submitting personal information or downloading malware directly onto a computer. The attackers may also expose the data they find to publicly hurt an individual or company. Many companies are now teaching their staff how to recognize suspicious links before clicking on them. The more you know about these sorts of frauds, the more likely you are to avoid them.

The cybersecurity trends above represent significant threats to our infrastructure. It's essential to train yourself to recognize potential flaws and learn how to counteract them in your workplace and home.
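The AI-assisted detection mentioned above often starts from a much simpler statistical idea: learn a baseline from historical activity and flag events that deviate sharply from it. A minimal sketch with a z-score test follows; the daily failed-login counts and the threshold are invented for illustration, and production tools use far richer models.

```python
import statistics

# Minimal anomaly detection: flag values whose distance from the
# mean exceeds `threshold` standard deviations (a z-score test).

def find_anomalies(history, threshold=2.0):
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    # Guard against zero stdev (all values identical) via short-circuit.
    return [x for x in history if stdev and abs(x - mean) / stdev > threshold]

# Hypothetical daily failed-login counts; the spike stands out.
logins = [12, 9, 11, 10, 13, 8, 11, 10, 12, 9, 250]
print(find_anomalies(logins))  # [250]
```

The appeal of this family of techniques for security monitoring is exactly what the article notes: the scoring is cheap enough to run against millions of events per second, with expensive analysis reserved for the flagged outliers.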
Because file servers may contain large amounts of data, you can create user-defined subclients to manage your backups. User-defined subclients are optional because a default subclient is automatically created by the software.

User-defined subclients can be used to back up the following:

- Specific file systems or a portion of a file system
- Different directories on a file server

For example, a file server has two directories that you want to back up. Create a user-defined subclient for /fs1/dir1 and another user-defined subclient for /fs1/dir2.

Before You Begin

- Review the guidelines for adding subclient content.
- If you plan to exclude some files and directories from backups, review the filter guidelines. For example, you might want to exclude system-related files that consistently fail during a backup operation.

Procedure

1. From the navigation pane, go to Protect > File Servers. The File servers page appears.
2. In the Name column, click the NAS server. The NAS server properties page appears.
3. In the Protocols table, in the Type column, click NDMP. The NDMP page appears.
4. In the right area of the page, above the Subclients table, click Add subclient. The Add subclient dialog box appears.
5. In the Subclient name box, type a name.
6. From the Plan list, select the server plan. The plan defines the storage for the backup data, the RPO (recovery point objective), and the data retention period.
7. On the Content tab, enter a path and then click the add button (+).
8. To exclude files and directories from backups, on the Exclusion tab, enter a path and then click the add button (+).
Reading Eggs Reaches Over 20 Million Users Globally

FREMONT, CA: Reading Eggs, a Blake eLearning program that creates a unique online world where children learn to read, announced that the platform now has over 20 million users worldwide. Over the past year, learners spent more than 43 million hours using the Reading Eggs and Mathseeds programs, which create a level playing field for learners with sequenced lessons featuring interactive animations, fun games, songs, and rewards that keep children engaged and motivated to learn.

"Reading Eggs is a popular worldwide reading and math program," said Katy Pike, Chief Product Officer at Reading Eggs. "Starting last year, demand spiked, and now we have more than 20 million users to date. Just in the past year, learners spent more than 43 million hours using our platforms to learn in a fun, interactive online environment. Kids love playing in the program, and parents are pleased when children are learning instead of playing video games or watching TV."

One of the ways Reading Eggs helps learners get up to speed quickly and maintain their reading skills is by offering a placement test upfront, so they can start learning at the appropriate level and make progress at their own pace. In addition, children can access Reading Eggs anywhere, anytime, and if they forget a skill, they can repeat a lesson to refresh it. In a study conducted during the 2019-2020 school year, in which half of the year was remote learning due to COVID-19, kindergarten students who spent an average of 26 minutes per week using Reading Eggs progressed 1.1 grade levels.
The songs, animation, and rewards keep children motivated, and so does the success they achieve by learning at the rate that works best for them. An incredibly powerful learning tool, Reading Eggs is used by many schools, which take advantage of the program's hundreds of lessons and its library containing thousands of online books. Reading Eggs is an excellent supplement for in-class or remote instruction, and the enormous increase in users and hours spent on the program suggests that more parents and teachers are reaching out for programs that promote active learning in the wake of the pandemic.
https://www.cioapplications.com/news/reading-eggs-reaches-over-20-million-users-globally-nid-8415.html
How big of a problem is spam? The 2004 National Technology Readiness Survey estimated that spam costs the United States economy US$21.58 billion per year in lost productivity. That estimate is based on the assumption that United States Internet users spend an average of three minutes sorting through and deleting spam every day they go online. It does not include costs related to viruses and other malware that may accompany spam. That figure from 2004 also does not include losses related to legitimate e-mails that are not received because blocking efforts have caused commercial disruptions. For many of us, the significance of the spam problem does not become a concern until it results in the blocking of legitimate e-mails. Then the global problem of spam can instantly become an individual problem. Blocking is primarily done by recipients’ e-mail services, and not on the basis of individual e-mail addresses or even domain names. Instead, blocks are imposed on IP addresses, which are written as four sets of numbers separated by periods, e.g., 22.214.171.124.
What Is Spam?
The Spamhaus Project defines spam as unsolicited bulk e-mail. According to Spamhaus: “Unsolicited means that the recipient has not granted verifiable permission for the message to be sent. Bulk means that the message is sent as part of a larger collection of messages, all having substantively identical content. A message is spam only if it is both unsolicited and bulk.” Solicited bulk e-mail may be considered to be sent with permission if the recipients have gone through a double opt-in.
E-mail reputation firm Habeas states that double opt-in is achieved when: “The recipient explicitly provides you with permission to have their e-mail address placed on a mailing list, you send the recipient a confirmation e-mail, and the recipient confirms their permission by e-mailing back or by visiting your Web URL to confirm.”
The three principal methods spammers use for coming up with lists of e-mail addresses are:
- Buying or stealing an existing list;
- Harvesting e-mail addresses from compromised computers that form part of spam or bot networks; and
- Namespace mining.
Namespace mining uses an automated program to generate likely addresses that can be spammed, e.g., firstname.lastname@example.org, email@example.com and firstname.lastname@example.org. At business-to-business ISP Adhost, large numbers of e-mails received in alphabetical order are flagged, and if many of them are found to be sent to e-mail addresses that do not exist, then the entire batch is considered an attempt at namespace mining. Once namespace mining is detected from an IP address, then everything from that IP address can be blocked. Development staff at Adhost wrote their own mining blocker because they were unable to find a publicly available namespace mining blocker. Adhost’s mining blocker only blocks an IP address for a limited time (2-4 hours), and then the block expires. According to Richard Stockton at Adhost, this is usually enough to discourage the miners.
US Is the Spam Leader
For Americans who primarily conduct business internationally, it may appear that most spam problems originate from overseas. This appearance is not supported by statistics.
The Spamhaus Project lists the ten worst spam countries according to the number of currently listed spam issues as follows:
- United States – 2606
- China – 456
- South Korea – 320
- Russia – 210
- Taiwan – 196
- Japan – 151
- Canada – 148
- Brazil – 126
- Argentina – 97
- United Kingdom – 90
According to Spamhaus, the top six ISPs currently providing connectivity and hosting to known spammers directly responsible for the world’s spam problem are as follows, listed according to the number of outstanding spam issues:
- MCI.com – 242
- Comcast.net – 111
- SBC.com – 109
- Managed.com – 66
- XO.com – 59
- Road Runner (RR.com) – 56
All six ISPs listed above are located in the United States. These ISPs are largely known as low-cost service providers that target small business and home users who are not always very tech savvy or security conscious. Large blocks of IP addresses used by the six aforementioned ISPs are on lists that other ISPs use to block e-mails.
Actions Needed to Control Spam
If ISPs around the world did more to monitor and police their own networks, then there would be less blocking and consequently less interference with legitimate commercial e-mails. Foreign governments also need to become active in enforcement. Otherwise, firms in countries whose IP addresses are widely blocked will suffer competitive disadvantages. Adhost rarely blocks IP addresses from the UK, but it does block large swaths of IP addresses from South America, China and some European countries, especially in Eastern Europe. Germany is notorious as a source of spam. Adhost’s Stockton says he sees less spam from India and Pakistan than from China, Brazil and Korea. He said that other ISPs in the United States are known to block all e-mail from Asia. There are rarely any legal ramifications for spammers operating outside the United States.
While spammers in the U.S. may be referred to law enforcement authorities, the major risk for overseas spammers is having their ISP terminate their accounts, a fairly minor consequence. Given the cost to the American economy from foreign spam, the U.S. government needs to put spam control and Internet security on its international diplomatic agenda. The United States government should publish an annual report on efforts to control foreign and domestic spam and other forms of Internet abuse, including abuse of instant messenger systems. This report should include an assessment of each country and each Internet infrastructure organization. Without a coherent program to quash spam, U.S. businesses and consumers are bound to remain inundated. Good old-fashioned law enforcement approaches need to be combined with modern technologies, particularly in the United States, where businesses and consumers have long been appealing for assistance with the problems caused by Internet abuse. As with other forms of high-technology crime, the National Association of Attorneys General and the National Governors’ Association need to take a more active role. Only when e-mail communication ceases because of IP address blocks do many of us begin to take notice of the spam problem and how it is interfering with core business processes. The imposition of IP address blocks against legitimate e-mails would not be as necessary or widespread if government authorities around the globe would give e-mail problems and Internet security issues the attention they deserve. Anthony Mitchell, an E-Commerce Times columnist, has been involved with the Indian IT industry since 1987, specializing through InternationalStaff.net in offshore process migration, call center program management, turnkey software development and help desk management.
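The namespace-mining heuristic described above (a large batch of recipients arriving in alphabetical order, most addressed to users who do not exist) can be sketched as follows. Adhost's actual blocker is proprietary; the function name and thresholds here are illustrative assumptions, not their implementation:

```python
def looks_like_namespace_mining(recipients, known_users,
                                min_batch=20, max_unknown_ratio=0.5):
    """Flag a batch of recipient addresses that resembles namespace mining.

    Heuristic (illustrative thresholds): the batch is large, arrives in
    alphabetical order, and most local parts do not belong to real users.
    """
    if len(recipients) < min_batch:
        return False
    # Namespace miners typically generate candidate addresses in order.
    ordered = all(a <= b for a, b in zip(recipients, recipients[1:]))
    if not ordered:
        return False
    # Count recipients whose local part matches no real mailbox.
    unknown = sum(1 for addr in recipients
                  if addr.split("@", 1)[0] not in known_users)
    return unknown / len(recipients) > max_unknown_ratio
```

A mail gateway applying a check like this would then block the sending IP address for a limited window, such as the 2-4 hours Adhost uses, before letting the block expire.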
https://www.ecommercetimes.com/story/spotting-swatting-sources-of-spam-45789.html
Cell phone users may have more to worry about than poor reception or using too many minutes, according to a recently released study from Sweden’s Institute of Environmental Medicine. The three-year study included 750 participants, 150 of whom suffered from acoustic neuroma, a normally benign tumor that affects the auditory nerve. It found the tumor risk nearly doubled for those who had used mobile phones for at least ten years. When the side of the head where the phone was held was taken into consideration, the tumor risk was almost four times higher for the side where the phone was held and normal for the other side.
Analog Phones Tested
The study tested only analog phones, and most phones used since the late 1990s are digital (GSM) models, which have yet to be tested. The study did not draw any conclusions for GSM models, nor for analog phones in use for less than 10 years. A spokesperson for the Acoustic Neuroma Association in Atlanta, Georgia, says the group does not see any reason for concern, especially since the study included relatively few people. In addition, information provided to the group by the medical community has not shown an increase in tumors that would correspond to the increase in cell phone usage. The association also has not been alerted to the problem by its medical advisory board. Acoustic neuroma normally occurs in one out of every 100,000 people annually. It is a slow-growing tumor that can remain undiagnosed for years. Symptoms include a loss of hearing in one ear, sometimes accompanied by noise or tinnitus. Balance problems and facial tingling or numbness can also occur. The tumors can continue to grow if left untreated, resulting in increased intracranial pressure, a condition that can be life threatening. In some patients, however, the tumors do not grow at all. The tumor can be positively identified with a specific type of MRI that uses gadolinium contrast.
https://www.ecommercetimes.com/story/swedish-study-raises-concerns-about-cell-phones-37411.html
The process of signing off on a document has progressed by leaps and bounds in the past century. With the rise of the internet and telecommuting, the need to obtain approval has both sped up and improved in terms of security. In this piece, we’ll look at how this has evolved and what it means for conducting business or legal proceedings today.
Signatures of Yesteryear
Obtaining binding signatures was once quite the ordeal. If the party was not present, you would be at the mercy of the postal service to deliver the document you wanted to be signed — if the document made the trip at all. Even within cities, entire messenger services were developed and hired to run sensitive documents all over a town via car, bicycle, or even horse. Later, fax machine technology made it possible to send documents to be signed over vast distances in relatively little time. While most faxed signatures will hold up in court, some governmental filings may require an original signature.
Digital Signatures of Today
A digital signature is a signature captured by hand on an electronic device, whether with a stylus on a touch-sensitive screen, one’s finger, or sometimes even a computer mouse. These electronic signatures are considered legally binding under the Uniform Electronic Transactions Act (UETA).
How Are Digital Signatures Not Just Scribblings?
Digital signatures are considered legally valid in most instances because of the built-in cryptography. Signatures are encrypted during submission so they cannot be intercepted maliciously and used fraudulently.
Digital Signatures Reduce Waste and Increase Organization
As more organizations attempt to reduce the amount of paper they use, digital documents and signatures are welcome advances. Digital documents are easily searchable by their content within a database. The encrypted signature files are easier to obtain, send, and maintain, as they are not sensitive to aging or the elements.
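The cryptographic idea behind a signature scheme can be sketched with textbook RSA: hash the document, sign the hash with a private key, verify with the public key. The tiny key below is purely illustrative; real e-signature platforms use full-size keys and vetted libraries, not hand-rolled code like this:

```python
import hashlib

# Textbook-RSA toy key (p=61, q=53). Real systems use 2048-bit keys;
# these tiny numbers are for illustration only.
N, E, D = 3233, 17, 2753  # modulus, public exponent, private exponent

def digest(document: bytes) -> int:
    # Hash the document, then reduce into the toy modulus range.
    return int.from_bytes(hashlib.sha256(document).digest(), "big") % N

def sign(document: bytes) -> int:
    # Transform the hash with the private exponent: only the key
    # holder can produce this value.
    return pow(digest(document), D, N)

def verify(document: bytes, signature: int) -> bool:
    # Anyone with the public exponent can check the signature against
    # a freshly computed hash of the document.
    return pow(signature, E, N) == digest(document)
```

Any change to the document changes its hash, so a signature copied onto an altered document fails verification. (With this toy modulus the check is only probabilistic; real key sizes make forgery computationally infeasible.)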
https://www.jdyoung.com/resource-center/posts/view/204/electronic-signatures-how-do-they-work-jd-young
Artificial intelligence (AI) is no longer just a sci-fi fantasy. It’s here and fits right into our data-driven world. Many people don’t realise they already use AI technology daily. Do you use voice recognition on your smartphone? Then you’re using AI technology.
Copyright by www.cbronline.com
Despite the impressive growth of AI over the last few decades, it’s still in its infancy. We’re only scratching the surface. This makes it a pivotal time to be part of this industry. Consumers are using more and more apps and smart technology. This is driving the global AI market. Couple that with the growing use of AI to improve customer services, and you see the depth of its potential. So how is AI evolving and, more importantly, how will it improve our lives? The answer is far from simple. AI will improve most, if not all, aspects of our everyday life.
AI to conquer the world
AI will help make drones, connected cars, connected appliances and robots an everyday reality. Imagine 3 years from now: the internet of things (IoT) and other new technologies will create an explosion of communication. We see a future where you’ll be able to communicate with your home appliances to make shopping requests to prepare a meal, and your self-driving car will take itself for repairs and back home without you lifting a finger. The possibilities are endless. But it doesn’t stop there. AI goes beyond convenience and enters the world of saving lives. The first step toward advanced AI is smart automation and predictive analytics.
Wearables and e-doctors
Today, smart watches monitor your heart rate, blood pressure and the number of steps you’ve taken in a day. Soon, smart devices may collect advanced vital signs from blood tests, glucose measurements and more. And the technology would collect this routinely, just like visiting your doctor. AI is set to change the medical industry. It could help identify abnormalities and early signs of illnesses and reduce hospital visits and wait times.
And, perhaps most importantly, it might detect health concerns faster than routine check-ups. It could even suggest the best course of treatment based on billions of data points. […]
https://swisscognitive.ch/2017/07/28/no-longer-just-robots-the-future-of-ai/
IT functions are under increasing pressure to deliver results for their organisations’ ESG (Environmental Social Governance) agenda. Perhaps unfairly, IT is frequently accused of being non-inclusive and not diverse, but it is on the E, the environmental element of the ESG agenda, that we can perhaps make an immediate and effective impact. Data centres, be they our own or through an external provider, are large-scale contributors to the carbon footprint of many organisations. The question is, what can we as IT do to reduce that carbon footprint? There are many opinions, ranging from academic papers to industry views, on how much power data centres are consuming. The trouble with all of this data is that it generally takes a macro view of data centres, and it can be very difficult to extrapolate a meaningful view for a single organisation. There are, however, some fairly straightforward facts that we can identify and apply to the situation, allowing us to identify whether we could improve our current position. For example, it is accepted that larger, hyperscale data centres run more efficiently than smaller data centres. It is also accepted that many of the newer hyperscale data centres use innovative cooling methods that significantly reduce power consumption, and there are modern data centres that run entirely or mainly on renewable energy. The problem for many of us is that our data centre infrastructure is not new and is not able to run as efficiently as the new data centres being built by the hyperscale providers. Of course, the other problem is that we don’t know the scale of the problem we are facing or, indeed, if we are facing a problem at all. What is the carbon footprint of the technology in your data centre? That is a question few organisations can answer with any degree of confidence.
But if we could estimate our current carbon footprint and build a roadmap to reduce that footprint significantly over time, we could be making a dramatic contribution to the future of our planet. Technology is only going to grow, and our reliance will grow with it, but if we are smart, that reliance does not have to be as environmentally destructive as it is now. If we own our own data centres, then we can invest in sustainable power, innovative cooling techniques and newer and far more efficient hardware. To make this investment, we do need to be in discussion with our business, because the budget we will need for such transformational capital projects will be significant. If our data centres are co-location facilities, then we can bring pressure to bear on our providers to make the necessary changes to their infrastructure. This could be a time-consuming approach, but it is one we again need to embark upon as soon as we can. There is also the option to move as much as we can into the more efficient hyperscale data centres. This has the advantage of being faster and does not come with any capital outlay issues, as the hyperscalers have been effectively investing in greener data centres for quite some time. Just as we would measure the return on investment for any project we were considering, so should we measure the return on the environment for a change of this nature. But how do we go about this? The problem is that many of us do not properly know what we have within our data centres in terms of hardware and software, let alone carbon generation. We need to take stock and assess the scale of the situation. Ensono have been auditing and assessing data centres for many years with an approach that is non-intrusive yet thorough and allows us to create a comprehensive and detailed map of the components and the connections and dependencies within your data centres.
Our assessment approach also allows us to comprehensively evaluate all of the hardware within your data centre and to extrapolate an accurate assessment of the carbon footprint of the data centre. Using our experience and a database of information developed over many years, we can establish the current position of your organisation with regard to your carbon footprint. With the data gathered from the assessment, we can then make recommendations. Through our engagement with you, we will know your general technology preferences. Some organisations are very open to moving workloads to a hyperscale cloud or multiple hyperscale clouds, and we can model this both for the technology and the carbon footprint. We can model what will work, the commercial impact and the environmental impact as part of the same exercise. We can also model the impact of a hardware refresh within the current environment and, as our assessments will provide operational efficiency data, we can often reduce the overall hardware footprint as well. Undertaking a carbon assessment of your technology operations will provide you and your board with the comfort that you are doing the most you reasonably can to safeguard the future of our planet and will help the board deliver against their ESG requirements. Too often technology is seen as a villain in the environmental world, but by understanding our current situation, we can move towards being heroes by making impactful changes that will affect the future.
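As a rough illustration of the arithmetic such an assessment feeds into, annual emissions for a facility can be approximated from its IT load, its power usage effectiveness (PUE) and the carbon intensity of its electricity supply. The figures and function below are illustrative assumptions, not Ensono's methodology:

```python
def annual_co2_tonnes(it_load_kw: float, pue: float,
                      grid_kg_co2_per_kwh: float) -> float:
    """Rough annual CO2 estimate for a data centre.

    it_load_kw: average power drawn by the IT equipment
    pue: power usage effectiveness (total facility power / IT power)
    grid_kg_co2_per_kwh: carbon intensity of the electricity supply
    """
    hours_per_year = 8760
    total_kwh = it_load_kw * pue * hours_per_year
    return total_kwh * grid_kg_co2_per_kwh / 1000  # kg -> tonnes

# An older 100 kW server room at PUE 1.8 on a 0.4 kg/kWh grid...
legacy = annual_co2_tonnes(100, 1.8, 0.4)
# ...versus the same IT load in an efficient facility (PUE 1.2)
# on a lower-carbon supply (0.1 kg/kWh).
modern = annual_co2_tonnes(100, 1.2, 0.1)
```

Even with these made-up inputs, the gap (roughly 630 versus 105 tonnes a year) shows why moving workloads to more efficient facilities on greener power dominates the conversation.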
https://www.ensono.com/resources/blog/reducing-our-carbon-footprint/
It’s hard to believe, right, parents? In just a blink or two, you went from being the teenager dropping cool phrases like “rad” and “gnarly” to monitoring a teenager texting words like “lowkey,” “IRL” and “CD9” into her smartphone non-stop.* For generations, teens have been crafting terms to differentiate themselves from other age groups. The difference today is that smartphone texting has multiplied the scope of that code to include words, emojis, numbers, and hashtags. The times have changed, fo’ sho’. You don’t have to speak your child’s language (please don’t). However, with new terms and risks emerging online each day, it’s a good idea to at least understand what they are saying. Since kids have been spending more time online due to the pandemic, we thought we might discover a few new and interesting terms. We were right. We found stories of teens referring to the Coronavirus as “Miss Rona” and “Rona,” and abbreviating quarantine to “Quar.” A “Corona Bae” is the person you would only plan to date during a lockdown. Much of the coded language kids use is meant to be funny, sarcastic, or a quick abbreviation. However, there are times when a text exchange can slip into risky territory. Seemingly harmless text exchanges can spark consequences such as bullying, sextortion, privacy violations, and emotional or physical harm. To help kids avoid dangerous digital situations, we recommend three things: 1) Talk early and often with your kids about digital risk and behavior expectations, 2) Explore and use parental monitoring software, and 3) Know your child’s friends and communities online and in real life. Note: Context is everything. Many of these terms are used in jest or as casual banter. Be sure to understand the context in which a word is used.
A Few Terms You May See **
- Flex. This term means showing off. For example, “Look at her trying to flex with her new car.”
- Crashy. Description of a person who is thought to be both crazy and trashy.
- Clap back.
A comeback filled with attitude.
- Cringey. Another word for embarrassing.
- Hop off. Mind your own business.
- Spill tea or Kiki. Dishing gossip.
- Sip tea. Listening to gossip.
- Salty. Mad, angry, jealous, bitter, upset, or irritated. “She gave me a salty look in class.”
- Extra. Over the top or unnecessarily dramatic.
- Left on read. Not replying to someone’s message.
- Ghosting. Ending a friendship or relationship online with no explanation.
- Neglext. Abandoning someone in the middle of a text conversation.
- Ok, Boomer. Dismissing someone who is not up to date enough.
- (Throw) shade. Insult or trash talk discreetly.
- Receipts. Getting digital proof, usually in the form of screenshots.
- THOT. Acronym for That H__ Over There.
- Thirsty. A term describing a person as desperate or needy. “Look at her staring at him — she’s so thirsty.”
- Thirst trap. A sexy photograph or message posted on social media.
- Dis. Short for showing blatant disrespect.
- Preeing. A word that describes stalking or being stalked on Facebook.
- Basic. Referring to a person as mainstream, nothing special. Usually used in a negative connotation.
- Chasing Clout. A negative term describing someone trying too hard to get followers on social media.
- 9, CD9, or Code9, PAW, POS. Parents are around, over the shoulder.
- 99. All clear, the parents are gone. Safe to resume texting or planning.
- KPC. Keeping parents clueless.
- Cheddar, Cheese, or Bread. These are all terms that mean money.
- Cap. Means to lie, as in “she’s capping.” Sending the baseball cap emoji expresses the same feeling. No capping means “I’m not lying.”
- Hundo P. Short for “hundred percent;” absolutely, for sure.
- Woke. Aware of and outspoken on current political and social issues.
- And I oop. Lighthearted term to describe a silly mistake.
- Big oof. A slightly bigger mistake.
- Yeet. An expression of excitement. For example, “He kissed me. Yeeeet!”
- Retweet. Instead of saying, “yes, I agree,” you say, “retweet.”
- Canceled.
Absurd or foolish behavior is “canceled.” For example, “He was too negative on our date, so I canceled him.”
- Slap or Snatched. Terms that mean fashionable or on point. For instance, “Those shoes are slap” or “You look snatched.”
And just for fun, here’s a laugh-out-loud video from comedian Seth Meyers on teen Coronavirus slang you’ll enjoy on YouTube.
* lowkey (a feeling you want to keep secret), IRL (In Real Life), CD9 also Code9 (Adult Alert used to hide secretive activity).
** Terms collected from various sources, including NetLingo.com, UrbanDictionary.com, webopedia.com, and from tweets and posts from teens online.
https://www.mcafee.com/blogs/family-safety/can-you-decode-your-teens-texting-language/
DriveSavers published a handful of articles from past fire seasons that remain relevant today. The Northern California fires of 2017 taught us some lessons about the different effects that a wildfire has on data storage devices as opposed to the average house fire. Data recovery for these devices requires different techniques and, sometimes, a hard drive or other device that was in a wildfire is not recoverable at all. This article explores the effects of different types of fire on data storage devices and shares some tell-tale signs that the data from a device may or may not be recoverable. Many customers don’t understand how much data can be recovered, even from a hard drive that has been in a fire. Over the past thirty-five years, DriveSavers has recovered data from thousands of fire-damaged devices. Each type of data storage device comes with its own challenges, but we can overcome most of them. This article explains in depth what happens inside a data storage device when it is in a fire and what needs to be done in order to successfully recover data. Devices examined include solid state drives (SSD), smartphones, tablets, camera cards, SD and micro-SD cards. Mike Cobb, DriveSavers Director of Engineering, lost his home in the Sonoma County fire in October 2017. He uses personal tragedy to help explain why we should all be protecting our business and personal information from potential loss, and how to do just that. Keeping your data safe and out of harm’s way can make all the difference when recovering from a disaster. Accounting and project files can get a business back up and running. Family photos and videos can help bring back laughter and a sense of normalcy at home. This article is not specific to fire; however, it contains some great tips for general disaster preparedness and data protection.
https://drivesaversdatarecovery.com/data-tips-for-wildfire-season-2000/
A lot of people these days talk about how they have jailbroken their smartphones. The fact is that most people think a jailbreak means connecting their device to a computer, pressing a button, waiting for a couple of minutes, and voilà. The reality is a little more complex than that.
What is a Jailbreak?
Jailbreak means allowing third-party applications to be installed on your Apple iDevice. Contrary to popular belief, it's entirely legal to run third-party applications on your device since James H. Billington's DMCA revision. Jailbreaking permits root access to the iOS file system and manager, allowing the download of additional applications, extensions, and themes that are unavailable through the official Apple App Store. The only thing that prevents people from doing a jailbreak is Apple itself.
Types of Jailbreaks
When a device boots, it starts by loading the Apple kernel. The device must then be exploited and have the kernel patched each time it is turned on. An "untethered" jailbreak is one that is achieved without the need to use a computer. As the user turns the device off and back on, the device starts up completely, and the kernel is patched without the necessity of a computer. While this sounds easy, this kind of jailbreak is harder to make and requires a lot of reverse engineering and experience. With a "tethered" jailbreak, a computer is needed to turn the device on each time it is rebooted. If the device starts back up on its own, it will no longer have a patched kernel, and it may get stuck in a partially started state. Basically, the purpose of the computer is to "re-jailbreak" the phone each time it is turned on. There is also a third kind called a "semi-tethered" solution. What this essentially means is that when the device boots, it will no longer have a patched kernel, which means you will not be able to run any modified code. But it can be used for normal functions.
When you need to use features that require modified code to run, you must start the device with the help of a jailbreaking tool.
Why Avoid a Jailbreak?
There are numerous reasons why you should avoid jailbreaking your iPhone or iPad. Once you jailbreak, you will be forced to jailbreak again each time you install a new iOS update. Moreover, you will get those updates much later than other devices. Jailbroken devices are known to be potentially unstable too. You might have problems with your system, your apps will tend to crash frequently, and the phone will reboot more often. During the jailbreak process, there is also a chance that you might end up bricking your iDevice. This term refers to a software issue that might leave your device useless without a hardware replacement. And most importantly, the fact that jailbroken devices are more prone to cyberattacks can’t be denied.
How Does a Jailbreak Work?
A jailbreak allows you to get control over the root and media partitions of your device. This is where all the iOS files are stored. To do this, /private/etc/fstab must be patched. fstab is like a switch that controls permissions to the root and media partitions. By default, this is set to a 'read-only' mode, allowing you to only view but not make any changes. To be able to make modifications, we have to set the fstab to 'read-write' mode. It is the switch room of your iDevice, controlling the permissions of the root and media partitions. While this might sound easy, the biggest problem is getting in all the files that you need through the various checkpoints. The checkpoints are Apple's way of ensuring that a file is legit and not third-party. Every file is signed by a key, and without it, the file will be put aside and be unusable. So where do we get the key? Well, it's not as easy as it sounds. Now we'll have to act like Sherlock and solve the mystery of the hidden keys.
In simple words, access through the door can be gained if we either unscrew the lock (patch all checkpoints) or find a back-door entry (bypass). Patching is a difficult task and mostly not worth the effort, so most people who jailbreak will try to find a back-door entry or a bypass. Before we understand how we can bypass these checkpoints, we must enlighten ourselves with some more information.
Essential Things to Understand Jailbreak Further
1. The Boot Process
Every time an Apple device boots up, it goes through something called a "chain of trust." This is basically a series of checks that ensures everything that is running is something Apple approves of. Usually, the order is as follows:
- Runs Bootrom: Also called "SecureROM" by Apple, it is the first significant code that runs on an iDevice.
- Runs Bootloader: Generally, it is responsible for loading the main firmware.
- Loads Kernel: Bridge between the iOS and the actual data processing done at the hardware level.
- Loads iOS: The final step in the chain; iOS starts and we get our nice "Slide to Unlock" view.
Now that you know how your device boots, let's go a step further.
2. The Roadblock
Every movie has to have a villain. The bad guy is what makes everything challenging. In this case, the signature checks are the bad guys. While the kernel is loading, there are thousands of tests being done to make sure everything being loaded is Apple approved. To be more specific, there are many checks throughout the boot process which look for one thing: a signature, or key. If the key is correct, we get a green light; if it is wrong, then depending on where the check was or what file it was, the device will either crash, causing a loop, or simply ignore the file and not execute it at all.
3. The Objective of a Jailbreak
As a jailbreaker, your objective is to either patch the checks or bypass them. As mentioned before, the conventional and fairly less cumbersome process is to bypass. This brings us to two broad categories of exploits:
This brings us to two broad categories of exploits: - Bootrom exploit: An exploit that runs during the bootrom stage. It can't be patched by a conventional firmware update and can only be fixed in new hardware. Since it runs before almost every checkpoint, the malicious code is injected before everything else, creating a passageway that bypasses all the checks or simply disables them. - Userland exploit: An exploit that runs during or after kernel load, and can easily be patched by Apple with a software update. Since it runs after all the checks, it injects malicious code through openings that lead back into the kernel. These openings are not easy to find, and once found they can be patched. A Few Security Complications Jailbreaking your iDevice has some pros, the most important of which is access to third-party apps. But a jailbreak can also open up a lot of security loopholes: 1. Third-Party Apps Can Be Dangerous There's a reason why Apple imposes more restrictions than any other mobile OS out there. A malicious app can cause a lot of havoc on your device. It's always possible that you'll get a bad app, but if you start downloading apps that haven't been approved by Apple for the App Store, the chances of getting malware go up. 2. Security Patches Will Not Download After you've jailbroken your iPhone or iPad, you won't be able to update iOS without reverting to the un-jailbroken default. While this isn't a big deal, most people who have jailbroken their iOS devices will wait until a new jailbreak is available for an update before they download and install it, so that they don't have to go back to the stock iOS implementation for an extended period of time. 3. Everyone Knows the Default Password One of the worst-kept secrets about iOS is its root password, "alpine." Everyone knows it, and Apple doesn't intend to change it. Having the root password gives a user access to the core functions of the device, and this can be disastrous if it falls into the wrong hands.
The good thing is that this password can be changed from a shell app, but jailbreakers often forget to do this after jailbreaking, leaving their devices open to vulnerabilities. To Sum It Up Jailbreaking a device is not easy. It requires a lot of skill, experience and a great deal of patience; I hope this post helps establish that point. I also hope that the next time you think about jailbreaking your device, you understand the whole process and are aware of the security issues that come along with it. Apps installed on jailbroken devices expose more of their critical information, so ensure your app is secured even if it sits on a jailbroken device.
What is Threat Hunting? Threat hunting is when a skilled cybersecurity analyst uses manual or machine-based methods to identify security incidents or potential threats that current detection methods did not find. To be successful, they must know how to coax their toolsets into finding the most dangerous threats. These cybersecurity analysts also require extensive knowledge of different types of malicious software, exploits, and network protocols. The goal of cyber threat hunting is to discover new vulnerabilities before attackers do. This means recognizing when an attack has occurred, understanding what happened during it, and determining if there are any other attacks underway. The best way to accomplish this is to access all your data from every device on your network. How does threat hunting work? A typical threat hunter will use multiple techniques to gather information about a target system. These include, but are not limited to, the following: Network traffic analysis: Analyzing network traffic can reveal many things, including IP addresses, communication ports, and file transfers. System monitoring: Monitoring systems such as servers, desktops, laptops, mobile devices, and Internet of Things (IoT) devices allows you to see what's happening inside them at any given time. You can even monitor specific processes running within these machines. File activity: File activity includes looking through logs, registry entries, event viewer records, and more. Behavioral analytics: Behavioral analytics looks at user behavior across various platforms like web browsers, email clients, instant messaging apps, and social media sites. Malware analysis: Malware analysis helps you look at suspicious files, executables, scripts, archives, and more. Data loss prevention: Data loss prevention analyzes log events generated by DLP solutions installed on endpoints. Security intelligence: Security intelligence provides insight into known threats and indicators of compromise.
SI may include Indicators of Compromise (IOC) databases, vulnerability scanners, exploit kits, and malware repositories. What are some key threat hunting characteristics? Real-Time Response: RTR refers to the ability to react immediately after discovering something unusual. To achieve this, threat hunters need to have visibility into everything going on throughout the entire enterprise. They should be able to analyze data from anywhere, anytime. Automation: An essential characteristic of threat hunting is automation. There are several ways to automate tasks related to threat hunting. For example, some organizations use automated scanning services that scan networks for signs of intrusion. Other companies leverage AI, machine learning, and natural language processing technologies to automatically classify and categorize detected anomalies. Visibility: Another critical aspect of threat hunting is visibility. If you don't have visibility into your environment, you cannot effectively perform threat hunting activities. To ensure that you're getting complete visibility, you'll want to deploy security tools across your organization's IT infrastructure. These tools provide detailed reports on anomalous behaviors or malicious actions on computers, servers, and applications. Scalability: One of the most challenging aspects of performing threat hunting is scalability. As attackers become increasingly sophisticated, it becomes harder and harder to keep up with their tactics. This means that if you only focus on one type of attack, you might miss out on other types of attacks. The best way to address this challenge is to expand your threat hunting efforts beyond one particular area. By doing so, you increase the likelihood of finding new vulnerabilities before they become widespread. Collaboration: Finally, collaboration is another critical component of successful threat hunting.
When working together, teams will share information about potential issues and collaborate on how to resolve those problems. Without proper communication between team members, there's no guarantee that all parties involved will know when and where an issue has been resolved. How to start performing threat hunting activities There are many different approaches to threat hunting activities. Some teams start small while others begin large-scale operations. Regardless of which method you choose, make sure that you follow these steps: Identify Goals: Identifying goals and defining requirements can help you determine whether threat hunting is right for your business. Once you've decided that threat hunting is appropriate for your company, you must define what you want to accomplish by implementing threat hunting within your organization. Are you looking to improve network defenses? Increase employee awareness? Reduce downtime? Whatever your goals may be, make sure that you communicate them to your employees and management. Define Resources: Now that you understand why you'd like to implement threat hunting, you need to figure out how much time and resources you'll require. To do this, you should first identify any existing processes that could benefit from being replaced with more effective methods. Next, you'll want to consider how long each task takes to complete. Then, you'll want to estimate how often you plan to repeat each specific task. After completing these calculations, you'll be better equipped to decide how much time and money you'll need to invest in threat hunting. Develop an Action Plan: Before beginning your threat hunting journey, you'll also want to develop an action plan. A good strategy includes identifying who needs to be informed about your plans, determining which individuals will take part in executing the plan, and deciding how frequently you'll update everyone on progress.
Execute: Make sure that you document everything that happens during your execution/implementation process. Doing so will allow you to review your actions later and provide feedback to ensure that things run smoothly. If something goes wrong or doesn't work correctly, don't hesitate to ask questions until you're satisfied with the results. Evaluation: Lastly, evaluate your success after every step along the way. Ask yourself, "Did we achieve our objectives?" If not, then adjust accordingly. Don't forget to include metrics such as the number of incidents detected, average response times, etc. Once you have completed all of these steps, you'll be ready to perform threat hunting activities. How is threat hunting conducted? The best way to perform threat hunting depends entirely upon your situation. For example, if you're just getting started, you might only need to focus on one aspect of threat hunting. However, if you already have some experience under your belt, you might find that hunting for multiple types of attacks simultaneously would yield even more significant benefits. Regardless of where you stand now, there's no denying that threat hunting has become increasingly popular over recent years. As a result, the demand for qualified professionals capable of conducting threat hunts continues to rise. The following is a list of typical ways to perform threat hunting: Manual Analysis: Manual analysis refers to analyzing security logs manually. While manual analysis isn't necessarily complex, it does involve significant amounts of time spent reviewing log files. Additionally, because most organizations use several tools to monitor their networks, they typically have hundreds of thousands of log entries stored locally. Therefore, when analyzing them manually, you may spend hours searching through large volumes of information before finding anything useful.
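As a toy illustration of manual log analysis, a small helper like the following can pre-filter the lines worth a human's attention. The keywords and log format here are invented for the example:

```python
def suspicious_lines(log_lines, keywords=("failed password", "invalid user")):
    """Return (line_number, line) pairs matching any keyword, case-insensitively."""
    hits = []
    for i, line in enumerate(log_lines, start=1):
        low = line.lower()
        if any(k in low for k in keywords):
            hits.append((i, line))
    return hits

# Example: pre-filter a small auth-style log before manual review.
log = [
    "Accepted password for bob",
    "Failed password for root",
    "Invalid user admin from 10.0.0.5",
]
for lineno, line in suspicious_lines(log):
    print(lineno, line)
```

Even a filter this crude shrinks hundreds of thousands of entries down to the handful an analyst actually needs to read.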
Automated Detection: Automated detection refers to scanning systems that automatically scan network traffic looking for malicious activity. These scans can either occur at regular intervals or whenever new suspicious behavior occurs. When automated detection finds evidence of malware, it sends alerts to administrators who must determine whether those findings warrant further investigation. Because this type of system requires less human intervention than traditional methods, it produces fewer false positives. Hybrid Methods: Hybrid approaches combine both manual and automatic techniques into a single solution. They often rely heavily on machine learning algorithms to identify potential indicators of compromise without requiring any prior knowledge of what constitutes a legitimate event. Once a hybrid approach generates an alert, humans still play a role in determining how serious the issue is. Machine Learning: Machine learning uses artificial intelligence to automate many aspects of cybersecurity operations. One common application of AI within the context of cybersecurity is anomaly detection. Anomaly detection looks for patterns in everyday events that deviate from expected behaviors. In other words, anomalies represent deviations from standard operating procedures. If these deviations continue long enough, then they could indicate a breach. This process allows machines to quickly spot potentially dangerous situations while allowing humans to take action as needed. What are some additional threat hunting techniques? Below are the common threat hunting techniques used to pinpoint threats in an organization's environment, including: Searching: When searching for threats, it is crucial to balance being overwhelmed by receiving too many responses and missing out on threats by getting too few responses. Clustering: This typically uses AI and machine learning technology. 
It separates clusters of similar data based on specific characteristics from a more extensive database. The practice allows analysts and others to gain a broader view of data that's of the most interest. They can also find similarities or related correlations and weave those into a clearer picture of what is going on within their organization's network and determine how to move forward. Grouping: This technique involves taking multiple unique items and identifying when multiples appear together based on the predetermined search criteria. While similar in concept to clustering, this technique only includes searching an explicit list of items that have already appeared suspicious. Stacking: This practice involves counting the occurrence of values of a particular type and analyzing the outliers. Stacking is most useful with data sets that produce finite results and when inputs can be organized, filtered, and manipulated. Leveraging technology — even something as simple as Microsoft Excel — is essential when stacking. Why are threat hunters important? The importance of threat hunters cannot be overstated. As organizations become more reliant upon digital technologies, their business can only run if the network is running. These networks contain valuable information about your business, employees, customers, partners, and suppliers, making them attractive targets for hackers looking to steal intellectual property or otherwise disrupt your operation. As such, you need to ensure that all of your security measures are up to date and effective. You should always keep abreast of new developments in malware and hacking methods and ensure that your defenses can identify and block any potential attacks before they reach critical levels. In summary, if you want to protect yourself against malicious actors who may try to access sensitive information stored online, you must first understand exactly where that information resides. 
Once you know where it lives, you will be better equipped to prevent unauthorized users from accessing it.
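The stacking technique described earlier — counting occurrences of a value of a particular type and examining the outliers — is simple to sketch in Python. The user-agent data in the usage example is made up:

```python
from collections import Counter

def stack_outliers(values, rare_below=2):
    """Count occurrences of each value and return the rare ones (outliers)."""
    counts = Counter(values)
    return {v: n for v, n in counts.items() if n < rare_below}

# Example: 50 browser requests and one lone scripted client stand out.
agents = ["Mozilla/5.0"] * 50 + ["curl/7.68"]
print(stack_outliers(agents))  # the rare user agent is the outlier
```

The same pattern works for process names, outbound ports, or login sources; as the article notes, even a spreadsheet can do this kind of stacking.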
Big Data is an umbrella term which encompasses all sorts of data which exists today. From hospital records and digital data to the overwhelming amount of government paperwork which is archived – there is more to it than we officially know. You can’t categorize Big Data under one definition or description, because we are still working on it. The great thing about information technology is that it has always been available for technology companies, businesses and all types of institutions. It was the emergence of cloud computing which made it easier to provide the best of technology in the most cost-effective packages. Cloud computing not only reduced costs, but also made a wide array of applications available to the smaller companies. Just as the cloud is growing steadily, we are also noticing an explosion of information across the web. Social media is a completely different world, where both marketers and common users generate loads of data every day. Organizations and institutions are also creating data on a daily basis, which can eventually become difficult to manage. Take a look at these statistics on Big Data generation in the last five years; - 2.5 quintillion bytes (2.3 Trillion Gigabytes) of data are created every day. - 40 zettabytes (43 Trillion Gigabytes) of data will be created by 2020. - Most companies in the US have at least 100 Terabytes (100,000 Gigabytes) of stored data. These high volumes of data present a challenge to the cloud environment. How to manage and secure the essence of this data rather than just stacking it? It seems like cloud computing and big data are an ideal combination for this. Together, they provide a solution which is both scalable and accommodating for big data and business analytics. The analytics advantage is going to be a huge benefit in today’s world. Imagine all the information resources which will become easily accessible. Every field of life can benefit from this information. 
Let's look at these advantages in detail: The traditional infrastructure for storing and managing data is now proving slow and hard to manage; it can literally take weeks just to install and run a server. Cloud computing is here now, and it can provide your company with all the resources you need. A cloud database can give your company thousands of virtual servers, working seamlessly, in only a matter of minutes. Cloud computing is a blessing for a company that wishes to have up-to-date technology on a budget. Companies can pick what they want and pay for it as they go. The resources required to manage Big Data are easily available and they don't cost big bucks. Before the cloud, companies used to invest huge sums of money in setting up IT departments and then paid more money to keep that hardware updated. Now companies can host their Big Data on off-site servers, or pay only for the storage space and power they use every hour. The explosion of data leads to the issue of processing it. Social media alone generates a load of unstructured, chaotic data like tweets, posts, photos, videos and blogs which can't be processed under a single category. With Big Data analytics platforms like Apache Hadoop, both structured and unstructured data can be processed. Cloud computing makes the whole process easier and accessible to small, medium and large enterprises. While traditional solutions would require the addition of more physical servers to the cluster in order to increase processing power and storage space, the virtual nature of the cloud allows for seemingly unlimited resources on demand. With the cloud, enterprises can scale up or down to the desired level of processing power and storage space easily and quickly. –Source Big Data analytics imposes new processing requirements on large data sets.
The demand for processing this data can rise or fall at any time of the year, and the cloud environment is the perfect platform to fulfill this task. There is no need for additional infrastructure, since the cloud can provide most solutions as SaaS models. Challenges to Big Data in the cloud environment: Just as Big Data has provided organizations with terabytes of data, it has also presented the problem of managing this data under a traditional framework. How do you analyze this large sum of data to extract only the most useful bits? Analyzing these large volumes of data often becomes a difficult task as well. Even in the high-speed connectivity era, moving large sets of data and providing the details needed to access them is a problem. These large data sets often carry sensitive information like credit/debit card numbers, addresses and other details, raising data security concerns. Security issues in the cloud are a major concern for businesses and cloud providers today. Attackers are relentless, and they keep inventing new ways to find entry points into a system. Other issues include ransomware, which deeply affects a company's reputation and resources, Denial of Service attacks, phishing attacks and cloud abuse. Globally, 40% of businesses experienced a ransomware incident during the past year. Both clients and cloud providers carry their own share of risk when making an agreement on cloud solutions. Insecure interfaces and weak APIs can give away valuable information to hackers, who can misuse it. Some cloud models are still in the deployment stage, and a basic DBMS is not tailored for cloud computing. Data-residency laws ("data acts") are also a serious issue, as they may require data centers to be closer to the user than to the provider. Data replication must be done in a way that leaves zero room for error; otherwise it can affect the analysis stage.
It is crucial to make the searching, sharing, storage, transfer, analysis, and visualization of this data as smooth as possible. The only way to deal with these challenges is to implement next-generation technology that can predict an issue before it causes more damage. Fraud detection patterns, encryption and smart solutions are immensely important for combating attackers. At the same time, it is your responsibility to own your data and keep it protected at your end while looking for business intelligence solutions that can ensure a steady ROI as well.
While monitoring malware being actively distributed, the ASEC analysis team discovered a new dynamic analysis bypass technique. To avoid detection, many distributed malware samples first check their execution environment, and if it matches certain criteria, they crash so that they never activate. The technique introduced in this post uses a certain assembly instruction together with a check of whether a large block of memory can be allocated. 1. AVX Support Availability (VXORPS Instruction) If a malware sample using the 'VXORPS' instruction crashes when run in an environment where AVX is not supported, the sample is likely to have been created with Visual Basic. As mentioned in the recent blog post 'Distribution of a Visual Basic Malware that Assumes Identity of a Korean Company' (https://asec.ahnlab.com/1286), many of the malware samples being distributed use Visual Basic. The 'VXORPS' instruction is part of the AVX (Advanced Vector eXtensions) instruction set, and it is an XOR logical operation on floating-point values. For the CPU to process this instruction, an OS and CPU that support the AVX instruction set must be used. If the malware is executed in an environment where AVX is not supported, it crashes due to an illegal instruction. Some dynamic analysis tools intentionally configure an old environment where security updates haven't been applied, so the environment may not support certain technologies. When samples containing the VXORPS instruction are run in such an environment, dynamic analysis is bypassed because the malware never activates: it simply terminates with an AppCrash. Most CPUs produced after 2011 support the AVX instruction set, and the major OSs that support it are listed in Table 1.

| OS | Versions with AVX support |
| --- | --- |
| Windows | Windows 7 SP1, Windows Server 2008 R2 SP1, Windows 8, Windows 10 |
| Linux | Kernel version 2.6.30 or later |
| macOS | 10.6.8 (Snow Leopard) or later |

Table 1. List of the major OSs that support AVX

To check whether the current environment supports AVX, use the Coreinfo tool provided by Windows Sysinternals. 2.
Memory Allocation Availability This technique uses the VirtualAlloc() API to allocate memory with a size of 0x3B9ACA00 (about 1 GB). If the allocation fails, the malware dereferences an invalid address, causing a crash. Since a typical dynamic analysis environment (sandbox) runs several virtual machines (VMs) on a single PC at once, it requires a great amount of resources. Maximum efficiency can only be achieved by carefully allocating resources to each VM, and usually each VM is given only the bare minimum of RAM needed to run the malware. This is why 1 GB or less of memory is allocated most of the time, and the malware was designed to target exactly that. Methods of checking the memory size, and similar techniques, have been introduced before. The RAPIT analysis result (see Figure 5) shows that after the technique is performed, the malware injects into svchost.exe with a time delay; it is a NetWiredRC backdoor that connects to a specific C2 server. In the past, some malware simply terminated itself if the execution environment was found to be a VM or sandbox. The recently found version, however, forcibly causes a crash if the environment is detected as an analysis environment. This is done to stall the discovery of the cause: a crash can occur simply because a file is damaged, but a crash that stems from the technique above requires the analyst to find the code responsible within the malware and to adjust the VM according to the troubleshooting procedure. Furthermore, like the method introduced in one of the previous posts (https://asec.ahnlab.com/1202), if the cause lies in design problems of the VM, even more time will be wasted, eventually reducing the efficiency of dynamic analysis. In addition, for the widely known Cuckoo Sandbox, there is a method of causing a crash in the target monitoring process.
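A rough Python analogue of this memory check can make the idea concrete. This is a sketch, not the malware's actual code (which calls VirtualAlloc() and then forces a crash on failure); here a failed ~1 GB allocation is simply reported as a sandbox indicator:

```python
def looks_resource_starved(size=0x3B9ACA00):
    """Try to allocate ~1 GB (0x3B9ACA00 bytes = 1,000,000,000).

    In a minimally-provisioned sandbox VM this allocation tends to fail;
    the real malware would then dereference a bad pointer to crash.
    """
    try:
        buf = bytearray(size)  # stand-in for VirtualAlloc()
    except (MemoryError, OverflowError):  # sizes beyond RAM or address space
        return True
    del buf
    return False
```

Defensively, this is also why giving analysis VMs realistic amounts of RAM (2 GB or more) defeats this particular check.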
We believe that there will be more diverse bypass techniques in the future. AhnLab’s V3 products detect the malware under the following aliases: - Trojan/Win32.Inject (2020.02.20.00) - Malware/Win32.RL_Gerneric (2020.02.14.00)
A new report has found that America’s emergency helpline number, 911, is vulnerable to Distributed Denial of Service (DDoS) attacks. Interestingly, warnings about such attacks have previously been issued by the Department of Homeland Security as well as the FBI. Mark James, Security Specialist at ESET, commented below. Mark James, Security Specialist at ESET: “These days DDoS attacks are easy to accomplish and can be relatively low-cost to put in place. As technology gets cheaper and botnets infect more users, making themselves stronger, the art of causing problems through brute force is more readily available than ever before. In this modern day of internet everywhere, we take for granted the ability to access websites as and when we want them. When a website gets a DDoS attack, those services become unavailable. That’s not such a big problem when it’s a news delivery site, but when it’s a service it’s a very different kettle of fish, and even more of a problem if it disrupts emergency or medical services. If you’re unable to access services in an emergency, the worst-case scenario could be life-threatening. An effective defence against DDoS can be as easy as utilising third-party services, but there may be consequences for data security or disruption to the smooth user experience. Effective security is only achieved through multi-layered protection.”
A port scan is a network reconnaissance technique designed to identify which ports are open on a computer. This can enable the scanner to identify the applications running on the system, as certain programs listen on particular ports and react to traffic in certain ways. For example, HTTP uses port 80, DNS uses port 53, and SSH uses port 22. IP addresses are vital to routing traffic over a network. An IP address uniquely identifies the device where a packet should be routed. However, knowing that a particular computer should receive a packet is not enough for it to reach its destination. A computer can be running many different applications at the same time, and several may be simultaneously sending and receiving traffic over the network. The TCP and UDP protocols define the concept of ports on a computer. An application can send traffic and listen on a particular port. The combination of an IP address and a port enables routing devices and the endpoint to ensure that traffic reaches the intended application. A port scanner, such as nmap, works by sending traffic to a particular port and examining the results. A port that is open, closed, or filtered by a network security solution will respond in different ways to a port scan: - Open: the probe receives a positive response, such as a TCP SYN-ACK, showing that an application is listening. - Closed: the probe is actively refused, typically with a TCP RST packet. - Filtered: the probe is silently dropped or answered with an ICMP error, so no useful response arrives. Different computers will respond to different packets in different ways. Also, some types of port scans are more obvious than others. For this reason, a port scanner may use a variety of scanning techniques. Some of the more common types of port scans include TCP connect scans, which complete the full three-way handshake; TCP SYN or "half-open" scans, which send a SYN but never complete the handshake; UDP scans; and stealthier probes such as FIN, NULL, and Xmas scans, which use unusual flag combinations. A port scan can provide a wealth of information about a target system. In addition to identifying if a system is online and which ports are open, port scanners can also identify the applications listening on particular ports and the operating system of the host. This additional information can be gleaned from differences in how a system responds to certain types of requests. Port scanning is a common step during the reconnaissance stage of a cyberattack.
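A minimal TCP connect scan of the kind described above can be sketched in Python with the standard socket module. connect_ex() returns 0 when the three-way handshake succeeds, i.e. the port is open:

```python
import socket

def tcp_connect_scan(host, ports, timeout=0.5):
    """Return the subset of `ports` that complete a TCP handshake (open)."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 == handshake succeeded
                open_ports.append(port)
    return open_ports
```

A refused connection (closed port) or a timeout (filtered port) both leave the port out of the result; a real scanner like nmap distinguishes the two cases, which this sketch does not.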
A port scan provides valuable information about a target environment, including the computers that are online, the applications that are running on them, and potentially details about the system in question and any defenses it may have (firewalls, etc.). This information can be useful when planning an attack. For example, knowing that an organization is running a particular web or DNS server can allow the attacker to identify potentially exploitable vulnerabilities in that software. Many of the techniques used by port scanners are detectable in network traffic. Traffic to many ports, some of which are closed, is anomalous and can be detected by a network security solution like an IPS. Also, a firewall can filter unused ports or implement access control lists that limit the information provided to a port scanner. Check Point’s Quantum IPS provides protection against port scanning and other cyber threats. To learn more about the other threats that Quantum IPS can manage, check out Check Point’s 2022 Cyber Security Report. You’re also welcome to sign up for a free demo to see the capabilities of Quantum IPS for yourself.
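One simple detection heuristic along these lines — flagging sources that touch an unusually large number of distinct ports — can be sketched as follows. The flow format and threshold are illustrative, not drawn from any particular product:

```python
def likely_scanners(flows, port_threshold=20):
    """Flag source IPs that touched many distinct destination ports.

    `flows` is an iterable of (src_ip, dst_port) pairs, e.g. parsed from
    firewall or NetFlow logs. Normal clients talk to a handful of ports;
    a scanner sweeps dozens or thousands.
    """
    ports_seen = {}
    for src, dport in flows:
        ports_seen.setdefault(src, set()).add(dport)
    return sorted(src for src, ports in ports_seen.items()
                  if len(ports) >= port_threshold)
```

An IPS applies far richer logic (timing, closed-port hits, scan-type signatures), but the distinct-port count is the core anomaly this paragraph describes.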
The primary objective of multimodal learning is to consolidate the learning process from heterogeneous data streamed from various sensors and other data inputs into a single model, either for prediction or inference. Multimodal learning systems can improve on unimodal ones because modalities can carry complementary information about each other, which will only become evident when they are both included in the learning process. Therefore, learning-based methods that combine signals from different modalities can generate more robust inference, or even new insights impossible in a unimodal system. Multimodal learning has been a research topic in computer science since the mid-1970s, but recent improvements in Deep Learning reignited interest in the field. In the initial phase of multimodal learning, rules-based approaches dominated implementations. However, increasingly, a hybrid mixture of rules-based and deep-learning-based multimodal learning is becoming the most popular style of software implementation, creating specific implementation requirements for multimodal learning systems. The market is currently experiencing the first wave of multimodal learning applications and products that draw on Deep Learning techniques to both interpret sensor data and increasingly inform the multimodal learning process itself. Multimodal learning exploits complementary aspects of modality data streams, making it a powerful technology and enabling new business applications that fall into three categories: classification, decision making, and human-machine interfaces (HMI). Shipments of devices using multimodal learning will increase from 24.74 million in 2018 to 514.12 million in 2023. The market sectors most aggressively introducing multimodal learning systems include automotive, robotics, consumer devices, media and entertainment, and healthcare. At present, several applications are driving the uptake of multimodal learning, creating demand for systems which can support it.
Implementing multimodal learning is still challenging: open-source software efforts remain limited, and capable hardware platforms that bring multimodal learning inference to devices at the edge are only just starting to emerge. Inference for hybrid multimodal learning software has compute requirements that are best served by heterogeneous computing architectures. Consequently, some companies are now building specialized chips based on heterogeneous architectures.
The ABCs of UWB Technology

The term "UWB" has been appearing in technology news articles over the past few years. Several industry groups have been established with the aim of promoting the use of UWB technology, and smartphone manufacturers have started incorporating UWB capability into their flagship phones. Even Apple’s AirTag uses UWB technology. So, what exactly is UWB, and what makes it useful?

Although the term UWB (or Ultra-WideBand) can have a broader meaning in the wireless world, we will focus on the definition of UWB as the short-range, impulse radio, wireless technology specified in the IEEE 802.15.4z standard, and more specifically the subset of that standard which has been adopted by the FiRa™ Consortium, of which HID Global is a founding member.

A Closer Look

Unlike conventional wireless technologies such as Bluetooth, Wi-Fi and FM radio, which send information by modulating a continuous carrier wave, UWB operates by transmitting streams of very short (around two nanoseconds each) pulses of carrier wave. The short duration of these pulses is what gives UWB its wide bandwidth (499.2 MHz) and its name. UWB can employ carrier frequencies between 3.1 GHz and 10.6 GHz (although FiRa currently uses only 6.0 GHz to 10.6 GHz), and the transmit power is strictly limited by regulatory bodies. This results in a practical range of a few tens of meters and makes it license-exempt.

In the telecommunications world, a wide bandwidth is usually associated with high data rates. However, the more valuable feature of UWB is its ability to perform accurate, fast and reliable measurements of direction and distance (or “ranging”) between two or more UWB-enabled devices. This ability lends itself to everyday applications such as seamless access and real-time location services.

At this point, you might be thinking, “Why do we need another wireless technology?
Bluetooth and Wi-Fi can already give us distance and direction measurements.” Yes, they can, but UWB offers distinct advantages over those other technologies.

The majority of Bluetooth and Wi-Fi ranging systems use received signal strength as a proxy for distance. This is a crude approximation which is very sensitive to changes in the surroundings, requires additional signal processing and gives measurements with an accuracy of several meters at best. UWB ranging measurements, on the other hand, are based on a time-of-flight (ToF) calculation. A device measures how long it takes for messages to “fly” across the air to a second device and back again, and then calculates the distance travelled based on the speed of light. Measurements are accurate within a few centimeters.

Bluetooth and Wi-Fi ranging involves signal filtering and processing to achieve a usable level of accuracy and gives ranging updates once every few seconds. This could tell you if a person (with a smartphone) is in a room or not. A UWB ranging operation can take as little as a few milliseconds, resulting in potentially hundreds of ranging updates per second. This can tell you which direction that person is moving within the room and at what speed.

The reliability of UWB ranging stems from its wide bandwidth and error correction mechanisms. The energy of each pulse is spread over a wide frequency range, and this makes it less sensitive to narrowband interference. Error detection and correction features within the UWB message allow successful operation even if some of the pulses in a message are not received correctly. Secure ranging features provide protection against so-called relay and replay attacks, where a would-be thief can fool one UWB device into thinking another UWB device is nearby. This attack has been used to steal cars by tricking the car into detecting that the “owner” is approaching and therefore unlocking the doors.
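The ToF arithmetic is simple enough to sketch. The code below assumes single-sided two-way ranging with a known responder turnaround delay (all numbers are illustrative); real deployments typically use double-sided schemes to cancel clock drift between the two devices.

```python
SPEED_OF_LIGHT_M_PER_S = 299_792_458

def two_way_ranging_distance(t_round_s, t_reply_s):
    """Single-sided two-way ranging: the initiator measures the full
    round-trip time and subtracts the responder's known reply delay.
    Half of what remains is the one-way time of flight."""
    tof = (t_round_s - t_reply_s) / 2
    return SPEED_OF_LIGHT_M_PER_S * tof

# Example: a 10 m separation gives a one-way ToF of about 33.4 ns.
tof_one_way = 10 / SPEED_OF_LIGHT_M_PER_S
t_reply = 200e-6  # responder turnaround delay (assumed known by both sides)
t_round = 2 * tof_one_way + t_reply
print(round(two_way_ranging_distance(t_round, t_reply), 6))  # 10.0
```

The arithmetic also shows why UWB's nanosecond-scale pulses matter: light travels roughly 30 cm per nanosecond, so centimeter-level ranging requires timestamping accurate to a fraction of a nanosecond, which narrowband signals cannot provide.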
The FiRa Consortium’s name derives from the Fine-Ranging nature of UWB, and this underpins all of the use cases which FiRa standardizes and promotes, such as contactless door opening, asset tracking, real-time location services and social distancing. Explore the growing list of FiRa’s use cases. Paul Cabble has been part of the HID Global team since 2009. During that time, he has been a Senior Development Engineer and Hardware Engineering Team Lead for the Physical Access Control Business Area. He is currently a Senior Principal Engineer in the central Hardware Platform Team, helping to roll out new technology to the company’s design teams. Since 2019 Paul has also been a member of several FiRa Consortium Working Groups.
Even the most diligent employees and IT teams can fail to quickly identify a spoofing attack. But the stronger our awareness of spoofing in all its forms, even as it rapidly evolves, the easier it is to proactively protect against it—so let’s take a look at the state of spoofing today.

What Is a Spoofing Attack Anyway?

The term “spoof” basically means to mimic something or trick someone. A spoofing attack is a category of cyberattack in which the attacker takes on a false identity—that of another device or a trusted source—to appear as safe, in an attempt to steal money, collect sensitive data, hack systems, or spread malware. Spoofing is often just the first step in large-scale cyberattacks.

How Spoofing Attacks Work

There are many ways cybercriminals can spoof both companies and individuals, as I’ll cover below. Whatever the strategy, spoofing relies on two types of vulnerabilities to be successful:

- Network vulnerabilities. These days, organizations must take proactive steps to guard against network vulnerabilities—or flaws in software, hardware, or organizational processes—to ensure the security of their customers’ data.
- Human vulnerabilities. Spoofing attacks leverage social engineering, which refers to the manifold methods of deceiving us into doing something dangerous, like divulging confidential information or installing malware onto our devices.

Sometimes all it takes is for an attacker to simply invoke the name of a trusted organization for us to click something we shouldn’t. But spoofing attacks can be so sophisticated that they play on powerful human emotions like excitement, fear, and empathy without us even realizing it.

Here are some of the most common examples of different types of spoofing attacks to look out for and how they work, from the relatively simple to more advanced.

Example #1: Email Spoofing

We’ve all seen these in our spam folders.
Email spoofing means making an email appear as if it came from a trusted source, like a big company, coworker, or friend. Attackers can do this by:

- Using a fake sender that looks similar to the original except for a few characters—for example, using a zero (0) in place of the letter O (known as a homograph attack)
- Disguising the from field to match the sender name or address of one of your contacts
- Addressing you by name and with personalized language (known as spear phishing)
- In the case of imitating a company, using its logo, colors, font, buttons, and other familiar branding elements

Email spoofing is often used as part of a phishing attack. The goal is to trick you into clicking a malicious link or attachment designed to infect your computer with malware or steal money or your sensitive information. A famous example of this is known as the “grandparent scam,” when the attacker exploits the elderly by pretending to be a grandchild in need of money. Or a spoofed email from “PayPal” or “Amazon” might inquire about purchases you never made, making you concerned enough about your account’s security to click through.

Example #2: Website Spoofing

Website (also known as URL or domain) spoofing means making a malicious website look like a legitimate one. It’s common for attackers to spoof multiple points of contact, and website spoofing is generally used in conjunction with a spoofed email that links to the website. The goal is usually to obtain the visitor’s sensitive information. For example, attackers will spoof the login page of a familiar website to have you attempt to log into your account, at which point the attacker can harvest, use, and/or sell your login credentials (aka login spoofing) or drop malware onto your computer (known as a drive-by download). Similar to email spoofing, fake websites commonly feature legitimate branding, as well as a URL or domain name that’s almost identical to the real site except for a few characters.
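The character-substitution trick behind homograph attacks can be caught with a crude "skeleton" comparison: map lookalike characters to a canonical form and compare against a trusted domain. The sketch below uses made-up domains and only a handful of ASCII lookalike mappings; production systems use full Unicode confusables tables.

```python
# Map a few common ASCII lookalikes to their canonical letter.
LOOKALIKES = str.maketrans("015", "ols")

def skeleton(domain: str) -> str:
    """Canonical form of a domain for lookalike comparison."""
    return domain.lower().translate(LOOKALIKES)

def looks_like(candidate: str, trusted: str) -> bool:
    """Flag a domain whose skeleton matches a trusted domain even
    though the raw strings differ (a likely homograph spoof)."""
    return candidate != trusted and skeleton(candidate) == skeleton(trusted)

print(looks_like("paypa1.com", "paypal.com"))  # True  (1 vs l)
print(looks_like("paypal.com", "paypal.com"))  # False (identical, not a spoof)
```

The same check applies to both spoofed sender domains in email and spoofed website domains, since the trick in both cases is a string that is visually, but not literally, identical to the real one.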
What’s missing is the green padlock, which represents the SSL certificate and used to be a big giveaway that a website isn’t legit. But SSL certificates are now free and easy to obtain. One security researcher found that today it’s possible to create a fake website that appears to have a secure connection and shows the correct URL.

Example #3: Caller ID Spoofing

Using a VoIP (Voice over Internet Protocol) phone service, anyone can create a phone number or caller ID name of their choice. Companies legitimately spoof business caller IDs to appear more professional. But spoofed spam calls are also on the rise. Attackers use caller ID spoofing to make an incoming call appear to come from somewhere it isn’t. Often scammers will use a phone number with your area code to make it seem like the call is local because they’ve learned that people are more likely to answer a local number. Once you answer the phone, the attacker will try to convince you to divulge sensitive information. They might pose as a customer service agent who needs personal information like your SSN, date of birth, or password to complete a transaction. More advanced spoofers can reroute a call to a long-distance carrier, causing you to rack up expensive fees.

Example #4: Text Message (or SMS) Spoofing

Text message spoofing is similar to caller ID spoofing, only here the attacker sends a text or SMS with someone else’s phone number or hides behind an Alphanumeric Sender ID. Again, legitimate organizations use Alphanumeric Sender IDs to replace their phone number with a short and easy-to-remember ID, like a brand name, in marketing and communication with customers. In spoofing attacks, the scammer might pose as a legit organization. For example, they’ll take advantage of people signing up for text tracking alerts by posing as a staffing agency or shipping company with an update to a job application or shipment.
But the message will include links or text that leads to malware downloads or SMS phishing sites (smishing).

Example #5: IP Spoofing

Now we’re getting into the most advanced forms of spoofing. In simple terms, IP spoofing is altering an IP address to hide the location from which you are sending or requesting data online. As with other types of spoofing, IP spoofing has legitimate applications. VPN services use it to protect your privacy, and IT teams create multiple spoofed IP addresses to perform load balancing or testing. But IP spoofing is also a type of cyberattack commonly used to:

- Keep authorities from finding out the true identity of an attacker
- Bypass security services that would have otherwise blocked the attacker
- Prevent the compromised device from sending attack alerts to security services

You’ll see this play out in threats like distributed denial of service (DDoS) attacks, which prevent the targeted website from functioning properly by flooding it with traffic and limiting access for authentic users. One of the biggest factors for the recent growth in DDoS attacks was COVID-19, which drove a rapid shift online and gave hackers more targets than ever before.

Example #6: ARP Spoofing

ARP (Address Resolution Protocol) is the communication protocol that maps an (ever-changing) IP address to a (fixed) MAC address, to transmit data over a LAN. Computers learn when an IP address matches a MAC address, so that when a data packet arrives at a network gateway, the gateway machine can ask the ARP program to find a MAC address that matches the IP address in the data packet. If it’s a match, the data can move ahead.

In an ARP spoofing (also known as an ARP poisoning) attack, an attacker will link their own MAC address with the IP address of a legitimate computer or server on the network. They can then intercept data meant for the owner of that IP address, and steal or modify the data.
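The cache rewrite at the heart of ARP poisoning can be modeled with a plain dictionary. This is illustrative only: all addresses below are made up, and a real attack forges ARP reply packets on the LAN rather than editing a table directly.

```python
# Illustrative model of a victim's ARP cache: IP address -> MAC address.
arp_cache = {"192.168.1.1": "aa:aa:aa:aa:aa:aa"}  # the gateway's real MAC

def deliver(ip: str) -> str:
    """Frames for `ip` are sent to whatever MAC the cache currently holds."""
    return arp_cache[ip]

assert deliver("192.168.1.1") == "aa:aa:aa:aa:aa:aa"  # traffic goes to gateway

# A forged ARP reply claims the gateway's IP now lives at the attacker's MAC.
arp_cache["192.168.1.1"] = "ee:ee:ee:ee:ee:ee"  # attacker's MAC

# Traffic intended for the gateway is now delivered to the attacker, who can
# read or modify it before (optionally) forwarding it on to the real gateway.
print(deliver("192.168.1.1"))  # ee:ee:ee:ee:ee:ee
```

The poisoning succeeds because classic ARP is unauthenticated: the cache simply trusts the most recent reply it hears for a given IP address.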
ARP spoofing is another method used in DDoS attacks, as well as man-in-the-middle attacks.

Example #7: Man-in-the-Middle (MitM) Attack

A man-in-the-middle (MitM) attack is akin to eavesdropping. It’s when an attacker (the middle man) infiltrates communication between two parties, a user and a web application. In doing so, the attacker can impersonate one party or alter the communication to reroute sensitive information to themselves—for example, login credentials they can change, or banking details they can use to transfer funds.

This is one of the risks of using free public wifi, for example. A man in the middle can either hack the wifi or set up a fake wifi network with the name of a nearby business. Once you connect, the attacker can monitor your activity and intercept your login credentials, credit card details, and whatever else you happen to enter.

Example #8: DNS Spoofing

Besides email spoofing, another way to route users to malicious websites is through Domain Name System (DNS) spoofing. DNS spoofing, also known as cache poisoning, is an attack that alters Domain Name records to redirect traffic to a malicious site. In other words, the attacker replaces the IP addresses stored in a legitimate DNS server with IP addresses of a server under their control, so that when you click a link to what you think is your bank’s website, for example, you end up on the malicious site instead. In one real-life example in 2018 involving health insurance provider Humana, hackers used DNS spoofing to steal the complete medical records of at least 500 people.

Example #9: GPS Spoofing

GPS spoofing means tricking a device’s GPS into thinking you’re in one location or time zone when you’re actually in another—another type of spoofing that has relatively innocent uses. You might know it from Pokemon GO when players use GPS spoofing to catch Pokemon all over the country without ever leaving their house.
Some people spoof their own devices to protect their privacy or personal data from being tracked. More nefarious reasons for GPS spoofing might include hacking the navigation system of a truck to reroute cargo to a destination where it can be stolen, or luring an individual toward dangerous destinations or online connections with malicious intent. Anyone who relies on GPS-enabled devices or location data could be a target.

Example #10: Extension Spoofing

Extension spoofing means spoofing the extension name of a file—in this case, a malicious file—to disguise it as legitimate. One common trick is to disguise malware as a text file using a file name like filename.txt.exe. Attackers know that file extensions are hidden by default in Windows, so Windows users will only see filename.txt and think they’re opening a text document, which actually runs a malicious program.

Example #11: Facial Spoofing

Facial recognition is an emerging technology, making it particularly vulnerable. Research has found that it’s possible to build 3D models from photos available on social media and use them to unlock a phone using Face ID. In other famous examples, we’ve seen how it’s possible to create fake news videos and embarrassing content featuring the voices and likeness of high-profile people, with implications varying from extortion to swaying political campaigns. Researchers have used deepfake technology to produce fake video footage of former President Barack Obama that perfectly matches real audio, and attackers once used AI to mimic a CEO’s voice and extort money.

Right now, the real-world applications for facial recognition are fairly limited—we use it to unlock our devices and not much else. But the risk of facial spoofing will only increase as we find ourselves using our faces to make payments, sign documents, and more.
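Returning to Example #10: because Windows hides the final extension by default, a simple heuristic is to flag file names whose last suffix is executable while the visible suffix looks like a document. A minimal sketch (the suffix lists are illustrative, not exhaustive):

```python
import pathlib

# Suffix lists are illustrative only; real scanners maintain far longer lists.
EXECUTABLE_SUFFIXES = {".exe", ".scr", ".bat", ".com", ".cmd"}
DOCUMENT_SUFFIXES = {".txt", ".pdf", ".docx", ".jpg"}

def is_disguised_executable(name: str) -> bool:
    """True if the name ends in an executable suffix hidden behind a
    document-like suffix, e.g. 'filename.txt.exe'."""
    suffixes = pathlib.PurePosixPath(name.lower()).suffixes
    return (
        len(suffixes) >= 2
        and suffixes[-1] in EXECUTABLE_SUFFIXES
        and suffixes[-2] in DOCUMENT_SUFFIXES
    )

print(is_disguised_executable("filename.txt.exe"))  # True
print(is_disguised_executable("report.pdf"))        # False
```

Mail gateways apply the same idea server-side, quarantining attachments with double extensions before a user ever sees the misleading "filename.txt".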
How to Protect Against Spoofing Attacks

Start with the Basics

Here are a few basic precautions individuals can take right now to reduce their chances of falling victim to an attack:

- Switch on your spam filter to prevent most spoofed emails from reaching your inbox
- Use Google or another password manager to create strong passwords, autofill your login credentials, and alert you to compromised passwords
- Enable two-factor authentication on online accounts for websites that support it, for another layer of protection
- Get into the habit of hovering over links before clicking to check for unusual URLs

Know the Signs

Be aware of the common indicators of spoofing. For example:

- Unicode or other strange characters in a sender’s address or ID
- Emails from random email addresses using a business name as their sender name
- Typos or bad grammar in messages that appear to be from a legitimate business
- Links and attachments from unknown senders
- Calls or texts from unknown numbers
- Websites that aren’t secured or encrypted, i.e., missing the green padlock or https:// at the beginning of the URL (not necessarily a spoof, but proceed with caution)
- Fields that don’t autofill your login credentials
- Someone asking you for sensitive information without verifying they’re a trusted source
- Messages or web pages with content that seems too good to be true

Invest in Security

Good antivirus software is the average person’s front-line defense in protecting against cyber threats. It’s worth the peace of mind. If you do click on a bad link or attachment, antivirus software can alert you to the threat and stop the download.

For businesses, security measures are more advanced and may be specific to certain types of spoofing attacks. For example, GPS security involves obscuring your GPS antennas. Remote service solutions must be properly patched and configured. Teams must be properly educated about security risks.
Be sure to consult security solutions and best practices in your business area or industry.

Report Anything Suspicious

When you get a message that’s unexpected, seems too good to be true, or looks suspicious in any way, always do your research. Double-check the address of the sender or website and whether it matches real information. Don’t be afraid to contact the real sender or owner directly to verify that they indeed sent the message, or Google the phone number, address, or contents of the message to see if it’s a known scam. If the message asks you to log in to your account and take some kind of action, instead of clicking the link, open a new tab or window and navigate to the site directly.

If it turns out you might have been spoofed: block the sender, delete the message, and report it to the FCC or the real company’s security team.
Of the many hot topics at the moment, anything related to artificial intelligence is obviously right up there. Like the holy grail, AI has always been just beyond our reach. But technology is rapidly approaching the threshold beyond which it becomes easy to confuse artificial logic with human intelligence. This leads to ethical discussions about human rights for digital beings, but comes no closer to actually defining these digital beings. The Saudi Arabian government’s decision to grant citizenship to a robot named Sophia is perhaps one of the most significant statements on this issue. But what does that really mean?

Humans are good at seeing human faces and human traits in even the simplest features. We immediately attribute emotions and characteristics when we see a human face. Two dots and a curved line are immediately recognized as an emoji, and the direction of the curve is immediately recognized as happy or sad. Cars look friendly, aggressive, happy, or sad according to how their headlights and grille are placed. Designers know this, and have been leveraging our human tendencies very effectively for a long time.

This anthropomorphization also applies to bots. Sophia is effectively a mechanized mannequin doll with a sophisticated chatbot behind it that runs somewhere in a cloud platform. This, in turn, calls on web-based services for whatever response is needed. So, who exactly in this contraption is the recipient of those rights: the physical doll, the machine code orchestrating the mechanical movements, the machine code calling the services, the services in general, or any service that Sophia provides in particular?

Anthropomorphization is a powerful tool for improving user interaction. The anime-style eyes of Pepper (SoftBank’s humanoid robot) are immensely endearing to most people. The soft voices of Alexa and Siri provide an appealing conversational basis between human and machine.
The recent launch of Google Duplex demonstrated how naturally humans accept the machine as a counterpart. Humans are extremely gullible in attributing human traits based on simple tricks: eyes, a voice, a tactical pause in a conversation, and we are sold.

From my perspective, it is critical to distinguish and separate any digital agent into two logical parts: the highly anthropomorphized user interface, and the digital logic driving the interaction. We must not mistake human appearance for human characteristics like morality, ethics, emotions, and human logic. Alexa isn’t your shopping buddy, and a self-driving car is not evil for running over a pedestrian. The rapidly developing intelligence in machines might appear human because we are wired to believe it to be. That said, however it is built, digital intelligence will always be vastly more alien than the intelligence of primates or even cephalopods.

I believe we should not strive to replicate humans. We should strive to build entities that are exceptionally good at what they were intended to do: drive cars better than humans ever could, make coffee for us before we know we want it, filter through vast amounts of data and find patterns we can’t even begin to grasp. And, yes, all of this should happen through a humanoid interface, because it is the one form we immediately accept as a credible counterpart. So tell me, how much of a friend is Alexa to you, really?
Act No. 25.326 of 2000, Landmark Personal Privacy Law

The Personal Data Protection Act, Act No. 25.326 of 2000, also known as the Act for short, is an Argentinian data protection law that was passed in 2000. As the first data protection law to be passed on the continent of South America, the landmark law set forth the regulations that data controllers and processors within Argentina must follow when processing the data of citizens. While the National Congress of Argentina had spent the previous few years attempting to pass an updated data protection law in a similar vein to the EU’s General Data Protection Regulation (GDPR), these attempts were ultimately unsuccessful, as the proposed Bill lost parliamentary status in March of 2020. Nevertheless, the Personal Data Protection Act, Act No. 25.326 of 2000 still serves as a means of protecting the personal privacy of data subjects within Argentina.

What is the scope and application of the Personal Data Protection Act, Act No. 25.326 of 2000?

As it relates to the personal scope of the law, the Act applies “to processors and controllers of databases, meaning all natural persons or legal entities, either public or private”. Conversely, the territorial scope of the law states that “the Regulations apply whenever personal data is processed in the territory of Argentina”. Effectively, this means that isolated data processing activities that occur within Argentina fall under the jurisdiction of the law, even if the rest of said activities take place outside of the country. Moreover, the material scope of the law applies to processors and controllers of databases “in respect of any personal data processing that takes place in Argentina”. Under the Personal Data Protection Act, Act No.
25.326 of 2000, data processing is defined broadly to mean “any systematic operation or procedure, either electronic or otherwise, which enables the collection, integration, sorting, storage, change, relation, assessment, blocking, destruction, disclosure of data, or transfer to third parties”. To this end, the law also protects the following forms of sensitive personal data:

- Personal data related to racial and ethnic origin.
- Personal data related to political opinions.
- Personal data related to religious, philosophical, or moral beliefs.
- Personal data related to union affiliation.
- Personal data related to health or sexual life.

What are the requirements of data controllers and processors under the Personal Data Protection Act, Act No. 25.326 of 2000?

While the Personal Data Protection Act, Act No. 25.326 of 2000 was passed over 20 years ago, the law still sets forth various principles as it relates to the protection of the personal data of Argentine citizens. These data protection principles include:

- Purpose proportionality – Personal data that is collected for the purpose of processing must be relevant and not excessive in relation to the scope and purpose for which it was collected.
- Transparency – Data collection cannot be performed in a manner that is unfair, fraudulent, or unlawful.
- Purpose restriction – Personal data must not be used for any purpose other than that for which it was collected.
- Data accuracy – All personal data that is collected for the purposes of processing must be correct and accurate at all times. Data controllers and processors are responsible for updating, correcting, and partially or completely deleting personal data when needed.
- Access right – Personal data must be stored in a manner that enables data subjects to exercise their right of access.
- Data retention – Personal data must be destroyed after it is no longer needed for the purposes for which it was collected.
- Confidentiality – Data controllers and processors are responsible for binding individuals and agencies involved in any aspect of their data processing activities by a duty of confidentiality.
- Accountability – The principle of accountability has changed over time, in large part due to the influence of the EU’s GDPR. Subsequently, in order to maintain compliance with the accountability principle of the Personal Data Protection Act, Act No. 25.326 of 2000, individuals and agencies who process personal data are required to implement appropriate technical and organizational measures for the purposes of guaranteeing the security and confidentiality of personal data in their possession.

In addition to these data protection principles, the Personal Data Protection Act, Act No. 25.326 of 2000 also places obligations on data controllers that are found in other privacy laws that have been passed in recent years. This includes ensuring that data controllers and processors register with the National Registry of Personal Databases for the purposes of providing data subjects with data processing notifications, as well as maintaining detailed data processing records. Data controllers and processors are also obliged to conduct a Data Protection Impact Assessment, or DPIA. However, unlike many other privacy laws which set forth guidelines regulating DPIAs, all DPIAs under the Personal Data Protection Act, Act No. 25.326 of 2000 must include the following phases:

- Identification of participants and documentation of the processes of development of the evaluation assessment.
- Analysis of the applicable laws.
- Preliminary analysis.
- Processing context.
- Risk management.
- Risk treatment plan.

What are the rights of data subjects under the law?

In keeping with the somewhat outdated nature of the Personal Data Protection Act, Act No.
25.326 of 2000 when compared to the legislated privacy protections set forth in the past decade, the Act does not provide data subjects within Argentina with many rights that are now deemed standard and common. Such rights include the right to erasure, the right to object or opt out, and data portability. Alternatively, the Personal Data Protection Act, Act No. 25.326 of 2000 provides data subjects with the rights to be informed, of access, and of rectification. Additionally, while the law does not provide data subjects with the right not to be subject to automated decision making, data subjects do have the right to request an explanation of the logic applied to such decisions, in instances where such decisions led to adverse consequences for data subjects.

As it pertains to penalties that can be imposed against individuals and agencies found to be in violation of the law, the Argentinian data protection authority, or AAIP for short, has the authority to enforce the provisions of the law, and utilizes a three-tier system in doing so. As such, violators of the law are subject to the following penalties:

- Basic level, or minor infringements, which include up to two warnings and a fine from ARS 3,000 ($42) to ARS 25,000 ($355).
- Mid-level, or serious infringements, which include up to four warnings and/or suspension from one to 30 days and a fine from ARS 25,000 ($355) to ARS 80,000 ($1,137).
- Critical level, or very serious infringements, which include up to six warnings and/or suspension from 31 to 365 days and/or closure or cancellation of the file, register, or databank, and a fine from ARS 80,000 ($1,137) to ARS 100,000 ($1,421).

The Personal Data Protection Act, Act No. 25.326 of 2000 also provides Argentinian citizens with a private right of action to bring civil liability charges against individuals and agencies who violate their rights under the law.
However, claimants who bring forth such charges must prove the following four requirements:

- The illegality of the damaging action.
- The real and actual damage caused.
- The cause-effect relationship between the action and the damage.
- Any negligence, wrongful misconduct, or objective liability that occurred.

While some of the provisions and rights offered by the Personal Data Protection Act, Act No. 25.326 of 2000 are somewhat archaic by the standards of 2021, the law nevertheless provides Argentinian citizens with a comprehensive level of privacy protection. This dedication to personal privacy was reflected in Argentina’s decision to sign Convention 108+ for the Protection of Individuals with Regard to Processing of Personal Data (the modernized Convention 108). As such, the nation of Argentina is sure to continue to improve upon the level of privacy protection offered to its citizens.
In this podcast, Jason speaks to Helen Blaikie, an interim Chief Data Officer, Data Strategist and associate at Cynozure. Listen to this episode on Spotify, iTunes, and Stitcher. You can also catch up on the previous episodes of the Hub & Spoken podcast when you subscribe. For more on data, take a look at the webinars and events that we have lined up for you. As organisations increasingly rely on data to make decisions, it's more important than ever for employees to be data literate. Data literacy is the ability to read, understand and communicate data; it's a critical skill in today's workplace.
[00:51] Helen's transition from Finance Director to data expert
[02:34] Parallels between finance and data
[05:15] Examining the similarities between financial and data literacy
[07:56] Why organisations need to care about data literacy
[12:39] What needs to be in place before you can start introducing data literacy
[14:30] Avoiding 'data chaos' and more effective decision making through data literacy
[20:30] Processes organisations can use to implement data literacy
[22:00] Tailoring data literacy for different needs across the organisation
[24:45] Other things that can help with engagement in data literacy
[27:11] Getting HR on board to cultivate an environment with people who enjoy learning
[31:00] The biggest challenges in implementing data literacy in organisations
More and more, businesses are realising the importance of data literacy to make informed decisions. However, what is often overlooked is the difference in data literacy needs between different departments and personnel within the company. The levels of data literacy within an organisation can vary greatly. In order for businesses to maximise the value of their data, it helps to ensure that everyone in the organisation has the appropriate data literacy skills. Some employees need to be able to analyse and code, while management needs to understand how to use data strategically to make decisions.
However, every employee should have a basic understanding of data and how it is used to assist different parts of the organisation. In order to have a robust system in place to improve data literacy, an organisation should already have some basic data infrastructure in use. Without much of a data infrastructure, it will be difficult to convey to employees and stakeholders why they need to be aware of data literacy. Once you have a data framework, create use cases to showcase the use of data in your organisation and utilise them as teaching opportunities. Encouraging data literacy in the workplace starts with creating a work environment that promotes open communication and collaboration. It's important to provide employees with the necessary tools and resources to enable them to be successful, and to promote learning opportunities whenever possible. While some people may have a natural affinity for working with numbers, others may need some help to develop their data literacy skills. Organisations can improve their data literacy by training employees on how to use data. Some ideas include providing resources such as books or articles on data analysis, hosting competitions or challenges that encourage employees to use data creatively, or coffee and cake sessions where employees can learn more about data. To be able to make informed decisions in an increasingly data-driven world, it's important that everyone in an organisation – not just those in technical fields – has a basic understanding of how data works and how it can be used. By improving your organisation's data literacy, employees will be empowered to make better decisions based on evidence, identify opportunities, and solve problems.
It’s official: in New York City public schools there is now a ban on cell phone use. We can all understand how that might happen. As with any communication device, abuse does occur. But simultaneously, there is a wave of interest in education in 1:1 learning initiatives (supplying one laptop to every child), meaning that educators believe in the learning benefits of one laptop for each student. Does this leave the technology world scratching its head? From QWERTY keyboards on handheld devices to Internet browsing on cell phones to e-mail access from virtually anywhere on the globe, the trend is convergence. PCs and cell phones are becoming so similar in functionality that they are often lumped into the same category. Change is coming — and it’s coming in various form factors including handheld devices like phones, PDAs and ultra-mobiles.
Control the Communication, Not the Device
Students are more than just engaged by technology; it is a part of their natural habitat. Air, water, food and, yes, technology in all forms and functions are critical to student engagement. Educators must understand their natural affinity to technology and the significance of being born into a technology-rich world. Teachers must translate this broader experience into a student’s more formal learning experience. Banning cell phones, iPods and other devices may work in the short term to address certain concerns, but in the long run it means missing out on a huge opportunity to prepare today’s students for the real world of the 21st century. Take the huge success of Web 2.0 and online social networking applications. Kids are connecting and gaining valuable Web experiences that keep them coming back for more. They are processing the world through video, audio, chat, RSS (really simple syndication) feeds and more. As a result, we’re not talking about computers as much as we once did in educational technology. Instead, we’re talking about what we plan on bringing to the kids via computers.
As part of this trend, it becomes increasingly important to provide our students with a relevant environment for virtual learning. Unfortunately, to date, many of the virtual learning platforms that have been available to schools were built for college and university settings, set up to manage large numbers of students subscribing to courses, not for the kind of individual learning that we expect in a K-12 classroom. These platforms also require K-12 teachers, who have only a limited number of instructional minutes at their disposal, to build learning spaces for their students practically from scratch. As such, many of our schools haven’t seen the value in 1:1 initiatives, and often the emphasis of the initiatives becomes a focus on computer literacy skills instead of the enhanced learning opportunities that multimedia brings. In that scenario, it makes sense that we would overlook convergence, and all the multimedia excitement that the Internet can bring gets relegated to the entertainment category. With multimedia activity considered synonymous with entertainment, it’s no wonder it becomes easy to ban cell phones. However, looking at the world outside of education, everyone is using their cell phone to do many things they used to do exclusively on their computers. Much of this is informational and productivity related. Furthermore, cell phones and other PDAs are very cost effective and significantly more portable. Schools that cannot afford to deploy a 1:1 laptop initiative could achieve a 1:1 with a mix of laptops and lower-cost portable devices like an iPod touch. Part of the solution, I believe, is not to ban enabling technologies that possess great value in the learning environment, but rather to manage the risks that may surface in the same way we manage all risks to our children. Some of this can be done with technology, some with education and some with policy and supervision.
We don’t prohibit playing in the schoolyard because it borders a street; we put up a fence, we add a supervisor and we instruct the children not to run after a ball that goes into the street. Similarly, with well-constructed software learning environments, we can provide teachers with management tools to protect our students; we can access Web 2.0 technologies and push value to our children and at the same time provide a safe and secure learning space for them to explore and prepare for the excitement of the technology-rich world they live in. Plus, as previously mentioned, there are often important trade-offs to consider when implementing a ban. Recently, I spoke to the head of a school in Switzerland. On a field trip, one of their students found himself in a dangerous situation. Thankfully he had a cell phone with him and was able to call for and receive help. The consequences without mobile communication might have been grim. Now the school is planning to provide digital phones for all students and leverage both the mobile computing capabilities for learning and the communication capabilities for safety and security reasons. University students wouldn’t think about not having a cell phone, and now with incidents like we saw at Virginia Tech, a school can instantly send out an alert to all mobile devices and computer systems within their learning community.
Content Specific to Learning
The best way to explain the content a student can experience in terms of multimedia and Internet connectedness would be to look at the Nassau, New York, BOCES (Board of Cooperative Educational Services) as an example. Teachers and students at the BOCES Long Island School of Performing Arts have orchestrated a unique cross-global connection with a school in Canberra, Australia.
After watching the popular movie adaptation of Al Gore’s book “An Inconvenient Truth,” students from Long Island and Canberra Grammar School created a dramatic response to the film and recorded their productions. Their recordings were uploaded to the Internet, allowing students from each school to watch the productions and discuss their work. Students used video blogs and chat to comment on their pieces, and teachers were able to facilitate and monitor the discussions. Teachers found that the ability for students to use interactive media to share their work and ideas enabled communication to be more dynamic and meaningful. The project encouraged cross-cultural discussion and sharing of different perspectives while further boosting global awareness and promoting active learning in a global network. A far cry from simple computer literacy skills development, this is complex content captured over a global network, real-world problem solving and student-generated content — in short, learning at its best. Already, much of the Nassau BOCES experience could have been conducted using handheld devices. This would have created an even more immediate experience, allowing for increased access in real-time. The convergence is coming — cell phones are acting more like laptops all the time. If the learning value of content in educational 1:1 initiatives improves, and the increasing use of a K-12 dynamic learnspace permits this improvement, then I predict that the value will outweigh the drawbacks of smaller devices in schools. Our approach to handhelds and cell phones will look a lot more like appropriate-use policies, not like out-and-out bans. Bob Longo is executive vice president of Studywiz Spark, the Dynamic LearnSpace Company and the U.S. arm of Etech Group.
How to Prepare for a Career in Cyber Security
If you’re reading this page, then you probably already have a notion of how explosive the growth of opportunity within cyber security has been over the past few years. One of the fastest-growing technology fields — themselves some of the fastest-growing job sets in the world — cyber security had over 1,000,000 job openings in America just last year. While you may have a notion that you would like to pursue a career in cyber security, there are still a few things you need to decide to clarify exactly what sort of position you would like to prepare for. Cyber security is a wide and varied set of jobs, everything from pure management to researching new threats, to responding to a massive attack in real time. Below you will see a variety of ways to sort cyber security positions, based on pay, experience level, job responsibilities, and education required. We hope that our outline below can help to point you in the most productive direction for achieving your goals. For some, that might be returning to school, while others might elect to work on projects on the side or spend a little longer gaining experience in their current position. Check out our guide on how to prepare for a career in cyber security below.
There is an expected shortfall of 1.5 million skilled security professionals by 2020. — Booz Allen Hamilton Study
Discerning Which Cyber Security Job is a Good Fit Based on Educational Requirements
While many cyber security careers require on-the-job training, there are a number of cyber security degree options available at each level. By discerning what degree level you might be interested in, you’ve already narrowed down the number of cyber security careers that you may automatically qualify for. Associate degrees in cyber security examine computer information technology, Cisco networking, network security and more.
Associate degrees prepare individuals for entry level positions related to computer support, programming, help desk, IT, and network administration. For further information on where your associate degree can take you, check out our resource on the types of associate degrees you can obtain related to cyber security here. Bachelor’s degrees in cyber security offer an even wider range of classes and open many more doors towards employment. Topics covered in a cyber security bachelor’s degree may include information assurance management (IAM), technology, IT, network infrastructure, software development, network security, forensics, and tactics to defeat cyber crime. Bachelor’s degrees prepare individuals for mid to upper level positions such as information security analyst, computer support specialist, cryptographer, forensic expert, and much more. For more information on what you can do with your bachelor’s in cyber security, we recommend you check here. While years of experience, particularly in cyber security due to the versatility of skills required, are essential for landing an upper level position in cyber security, a master’s degree in cyber and information security is also a core component. These master’s degrees in cyber security are a fantastic way to stand out and secure advancement in your career. Topics covered in a cyber security master’s degree may include information assurance, information assurance for mobile devices, internal protection, vulnerability mitigation, assured software analytics, cryptography and more. Master’s degrees in cyber security related fields can help to prepare you for senior level roles, or enable you to step into mid level roles with less experience.
For more information on what to do with your master’s in cyber security, check here. If you decide to further your education with a doctoral degree in cyber security, that means you are more than likely ready to continue your career through research projects or are seeking policy roles for the future. Doctoral programs like these prepare you for a leadership role in information assurance and cyber security. Some of these programs will continue to train you in advanced technical skills and utilize your work/life experiences. These doctoral programs work well for individuals who are ready to think outside of the cyber security box and who innovate solutions for information assurance.
Determining Future Plans by Examining Cyber Security Career Progressions
Choosing a career in cyber security will often directly correlate with the experience you already have. Whether you have experience managing individuals in IT firms, security guard experience, or linguistics, matching your skills to the right degree and the right position is essential. For many of the degrees below, it is essential to arrive equipped with prerequisite skills, though many of those skills are actually acquired on the job. Sound a little like a chicken and egg scenario? It’s not really. Like with many fields, working your way up into the desired role is how you gain that experience. While some technology or management experience (even outside of cyber security) and an information assurance degree can help you segue directly into a mid or upper-level cyber security role, more commonly individuals work their way up through a few common paths.
- Entry level jobs requiring associate or bachelor’s degrees can often position you to work up through the engineering ranks of a cyber organization. Check out careers such as network administrator, system administrator, junior security developer, or security administrator for good options for this path.
- For those who like to be at the center of all the action, the incident response vertical of cyber security organizations can often be entered with a bachelor’s degree in cyber security. Additional experience and certifications further pave the way for advancement in this vertical.
- For those seeking policy roles, security analyst careers can start with an information assurance bachelor’s degree. Information assurance degrees are often very similar to cyber security degrees, but focus slightly more on the policy aspects of cyber security. Alternatively, studying forensic computing can help you to land a job in a government agency or crime fighting unit, another way to enter into policy from the bottom.
- For those with management experience and a few years of experience with cyber security, a master’s degree or a few certifications can often land you a job as a security architect. Security architects are often the head technical leads over an organization’s cyber security products.
Deciding your path by salary
For many professionals, the intended path may have dollar signs attached. Knowing what you need to support yourself and your family is a huge motivator. Below we have organized common cyber security careers by salary. A security architect is the individual responsible for maintaining the security of a company’s computer system. They must think like hackers to anticipate many of the tactics used to gain unauthorized access. Information security analysts plan and implement security measures to protect an organization’s computer networks and systems. Their responsibilities continually expand along with the increase in cyber attacks.
Researchers at the Wyss Institute at Harvard University have found a way to trigger the self-assembly of tiny water-filled gel-like cubes into larger structures, a discovery that could lead to practical applications in tissue engineering. The scientists developed the self-assembling system by programming DNA to act as a glue that guides the hydrogels into the larger structures. Their results are published in the Sept. 9 issue of Nature Communications. Researchers have attempted to program hydrogels in the past, but ran into trouble trying to bind them to other biological components, prompting the team at Wyss to devise a new strategy.
Using DNA as Glue
Enter DNA. It is made up of four bases — adenine, guanine, cytosine and thymine, or A, G, C and T. In order to form the coiled, double-helix structure of DNA, those bases have to pair in a specific way: A with T and C with G. If one side of a strand of DNA should begin with AC, for example, then the corresponding strand would have to begin with TG. Because snippets of DNA can be synthesized with any sequence of those letters, it is more programmable than other biomaterials, the Wyss researchers found. DNA can be, in effect, a glue. To test their theory, the researchers covered hydrogel cubes with a coat of a specific DNA base molecule. When those small cubes were placed in a solution with larger cubes, the smaller ones attached only to cubes that were made up of their corresponding DNA base. Therefore, the scientists were able to program the hydrogels to mold into specific shapes, including a square and a T-shaped structure. Eventually, the same method could potentially be used to create or repair more complex structures, including human tissue.
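The complementary base-pairing rule that makes DNA programmable as a "glue" can be illustrated with a short Python sketch (the `complement` function below is purely illustrative, not part of the researchers' work):

```python
# Watson-Crick pairing: A binds T, C binds G.
PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand: str) -> str:
    """Return the base sequence that would bind to `strand`."""
    return "".join(PAIRS[base] for base in strand)

# As in the article's example: a strand beginning with "AC"
# binds only a strand beginning with "TG".
print(complement("AC"))  # -> TG
```

Because any sequence can be synthesized, each hydrogel cube can be coated with a sequence that binds only its intended partner — which is what lets the cubes self-assemble selectively.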
“This paper is a fundamental study of this capability, and it’s not quite ready for application yet, but we think this is a very promising direction for developing applications that could assemble these gel-like bricks into functional tissues,” Peng Yin, assistant professor of systems biology at Harvard Medical School and senior co-author of the study, told TechNewsWorld. “My colleagues and I hope to move forward together in this direction.” This research shows a potential solution for one of the main difficulties in tissue engineering — creating structures that go beyond two dimensions, said Robert Van Buskirk, professor in biological sciences at SUNY-Binghamton. “This technology may prove to be critical for the next advance in tissue engineering,” he told TechNewsWorld. The next steps in developing this research would be to conduct tests to determine how well the method could hold up in the sometimes unpredictable human body, said George Truskey, Ph.D., professor of biomedical engineering and senior associate dean for research at Duke University. “The hydrogels will need to be populated with different cell types, and the investigators will need to show that self-assembly can induce different functions that might occur in a tissue,” he told TechNewsWorld. “The bigger challenge will be to establish that this approach will work in vivo, given the potential for enzymatic degradation of DNA and immunogenicity.” It might be some time before researchers have answers about those questions, but the discovery is nonetheless a significant step forward in the field, Truskey noted. “This is an important achievement by a talented group that shows the approach is feasible.”
Since its inception, blockchain has been said to bring a revolution on par with the birth of the internet. What started as a way to decentralize currency and assets is now disrupting every mainstream industry, including healthcare, pharmaceuticals, insurance, and digital security. But one area that blockchain technology can completely revolutionize is the fintech industry. As we all know, blockchain is a public ledger of information collected through a network that sits on top of the internet. The information recorded on a blockchain can be anything. It is stored in the form of blocks, and the blocks are linked together like a chain. This technology is being continuously experimented with in the fintech industry to bring out new use cases and applications that solve redundant and complex issues. Here are a few applications of blockchain in the fintech industry.
Supply chain financing and management
This is one of the important applications of blockchain in fintech. Blockchain technology can help the fintech industry with several aspects of supply chain management. It enables faster settlement at lower cost by providing a single source of truth regarding key areas in the supply chain, including creditworthiness, supplier inventory levels, purchase order receipt and approval, and more. It also helps the fintech industry keep financial operations risk in check.
Secure payment solutions
Blockchain technology promises to bring fast, reliable, secure, and low-cost international payment processing services. This is done with the help of encrypted distributed ledgers that provide trusted real-time verification of transactions. One major benefit here is that this eliminates the need for intermediaries such as correspondent banks and clearinghouses. Thus, with blockchain fintech apps, the process of sending money, regardless of the amount, is significantly faster, and it further ensures a smooth, transparent, and error-free transaction.
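The "blocks linked together like a chain" idea can be sketched in a few lines of Python: each block stores the hash of its predecessor, so altering any earlier record invalidates everything after it. This is an illustrative toy, not a production ledger:

```python
import hashlib
import json

def make_block(data, prev_hash):
    """A block records its data plus the hash of the previous block."""
    body = {"data": data, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify(chain):
    """The chain is valid only if every stored link still matches."""
    return all(curr["prev_hash"] == prev["hash"]
               for prev, curr in zip(chain, chain[1:]))

genesis = make_block("payment: A->B 100", "0" * 64)
chain = [genesis, make_block("payment: B->C 40", genesis["hash"])]
assert verify(chain)

# Tampering with history rewrites the first block's hash...
chain[0] = make_block("payment: A->B 999", chain[0]["prev_hash"])
assert not verify(chain)  # ...so the chain no longer verifies
```

In a real distributed ledger the same check runs on every node, which is what makes recorded transactions effectively immutable.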
Blockchain could give the fintech industry the fraud prevention capabilities it has long been looking for. The need for collateral, exchanges between multiple parties, currency differences, and many other factors complicate financial transactions. But with blockchain, information can be shared in real time, thus reducing the propensity for fraud. Smart contracts enabled by blockchain also help reduce financial fraud in a big way. A smart contract is computer code running on top of a blockchain that contains a set of rules under which the parties agree to interact with each other. This facilitates, verifies, and enforces the negotiation or performance of an agreement or transaction.
Record storage and management
One inevitable thing in the finance sector is the myriad of documents and paperwork. Many existing products and services that provide secure and verified document management tend to be expensive and often require the involvement of a third party. Documents in physical or digital form can be modified and copied. But with blockchain technology, you can embed the authentication into the document itself, and with the help of a closed-loop tracking system, you can protect the document against tampering or modification.
Regulatory compliance and audit
Carrying out efficient and fool-proof auditing, accounting, and record-keeping has been a difficult task for the financial services industry for many years now. Blockchain technology can eliminate the error-prone and time-consuming traditional ways of record-keeping and book-keeping. It provides new, disruptive methods using distributed ledgers secured by cryptography that can make audit and financial reporting transparent and error-free. Blockchain technology, with its immutable nature, can remove risks, uncertainty, and complexity associated with regulation, because once data is saved into the blockchain, no one can modify or delete it.
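A smart contract, as described above, is just code whose rules the parties agree to execute. A minimal escrow-style rule might look like the following sketch (the class, parties, and rule are invented for illustration; real smart contracts run on a blockchain platform, not as plain Python):

```python
class EscrowContract:
    """Releases funds to the seller only when both parties confirm delivery."""

    def __init__(self, amount):
        self.amount = amount
        self.confirmations = set()

    def confirm(self, party):
        self.confirmations.add(party)

    def payout(self):
        # The agreed rule: funds move only after buyer AND seller confirm.
        if {"buyer", "seller"} <= self.confirmations:
            return f"release {self.amount} to seller"
        return "funds held"

contract = EscrowContract(500)
contract.confirm("buyer")
print(contract.payout())   # funds held
contract.confirm("seller")
print(contract.payout())   # release 500 to seller
```

Because the rule is enforced by code recorded on the ledger rather than by either party, neither side can unilaterally change the conditions after agreeing to them.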
There is no doubt that blockchain is central to the fintech industry’s growth. At Intone, we provide the innovative expertise and capabilities needed to deliver the future of fintech today. Whether we’re helping to transform and modernize core banking operations, enable a mobile banking experience to become a social one, create world-class payment and credit processes, or provide data monitoring, analytics, quality assessment, and compliance and assurance reporting, our services empower our clients with data-driven insights and the right tools to excel in today’s digital landscape. Explore the future of fintech and banking with us and understand today’s reality and how world-class financial institutions win in an ever-changing landscape.
April 10, 2018
What Is a VMware DRS Cluster?
A cluster is a group of hosts connected to each other with special software that makes them elements of a single system. At least two hosts (also called nodes) must be connected to create a cluster. When hosts are added to the cluster, their resources become the cluster’s resources and are managed by the cluster. The most common types of VMware vSphere clusters are High Availability (HA) and Distributed Resource Scheduler (DRS) clusters. HA clusters are designed to provide high availability of virtual machines and services running on them; if a host fails, they immediately restart the virtual machines on another ESXi host. DRS clusters provide load balancing among ESXi hosts, and in today’s blog post, we are going to explore the DRS cluster system in depth.
How Does the DRS Cluster Work?
Distributed Resource Scheduler (DRS) is a type of VMware vSphere cluster that provides load balancing by migrating VMs from a heavily loaded ESXi host to another host that has enough computing resources, all while the VMs are still running. This approach is used to prevent overloading of ESXi hosts. Virtual machines can have uneven workloads at different times, and if an ESXi host is overloaded, performance of all VMs running on that host is reduced. The VMware DRS cluster helps in this situation by providing automatic VM migration. For this reason, DRS is usually used in addition to HA, combining failover with load balancing. In a case of failover, the virtual machines are restarted by the HA on other ESXi hosts, and the DRS, being aware of the available computing resources, provides the recommendations for VM placement. vMotion technology is used for this live migration of virtual machines, which is transparent for users and applications. Resource pools are used for flexible resource management of ESXi hosts in the DRS cluster. You can set processor and memory limits for each resource pool, then add virtual machines to them.
For example, you could create one resource pool with high resource limits for developers’ virtual machines, a second pool with normal limits for testers’ virtual machines, and a third pool with low limits for other users. vSphere lets you create child and parent resource pools.
When Are DRS Clusters Used?
The DRS solution is usually used in large VMware virtual environments with uneven VM workloads in order to provide rational resource management. Using a combination of DRS and HA results in a high-availability cluster with load balancing. The DRS is also useful for automatic migration of VMs from an ESXi server put in maintenance mode by an administrator. This mode must be turned on for the ESXi server to perform maintenance operations such as firmware upgrades, security patches, and ESXi updates. There cannot be any virtual machines running on an ESXi server entering maintenance mode.
DRS Clustering Features
The main DRS clustering features are Load Balancing, Distributed Power Management, and Affinity Rules. Load Balancing is the feature that optimizes the utilization of computing resources (CPU and RAM). Utilization of processor and memory resources by each VM, as well as the load level of each ESXi host within the cluster, is continuously monitored. The DRS checks the resource demands of VMs and determines whether there is a better host for the VM to be placed on. If there is such a host, the DRS makes a recommendation to migrate the VM in automatic or manual mode, depending on your settings. The DRS generates these recommendations every 5 minutes if they are necessary. The figure below illustrates the DRS performing VM migration for load balancing purposes. Distributed Power Management (DPM) is a power-saving feature that compares the capacity of cluster resources to the resources utilized by VMs within the cluster.
If there are enough free resources in the cluster, DPM recommends migrating the VMs from lightly loaded ESXi hosts and powering off those hosts. If the cluster needs more resources, wake-up packets are sent to power the hosts back on. For this to function, the ESXi servers must support one of the following power management protocols: Wake-On-LAN (WOL), Hewlett-Packard Integrated Lights-Out (iLO), or Intelligent Platform Management Interface (IPMI). With the DRS cluster's DPM, you can save up to 40% in electricity costs.

Affinity Rules give you some control over the placement of VMs on hosts. There are two types of rules that allow keeping VMs together or separated:

- affinity or anti-affinity rules between individual VMs;
- affinity or anti-affinity rules between groups of VMs and groups of ESXi hosts.

Let's explore how these rules work with examples.

1. Suppose you have a database server running on one VM, a web server running on a second VM, and an application server running on a third VM. Because these servers interact with each other, the three VMs would ideally be kept together on one ESXi host to prevent overloading the network. In this case, we would select the "Keep Virtual Machines Together" (affinity) option.

2. If you have an application-level cluster deployed within VMs in a DRS cluster, you may want to ensure the appropriate level of redundancy for the application-level cluster (this provides additional availability). In this case, you could create an anti-affinity rule and select the "Separate Virtual Machines" option. Similarly, you can use this approach when one VM is a main domain controller and a second VM is a replica of that domain controller (Active Directory-level replication is used for domain controllers). If the ESXi host with the main domain controller VM fails, users can connect to the replicated domain controller VM, as long as the latter is running on a separate ESXi host.

3.
An affinity rule between a VM and an ESXi host might be set, in particular, for licensing reasons. As you know, in a VMware DRS cluster, virtual machines can migrate between hosts. Many software licensing policies – for database software, for example – require you to buy a license for every host on which the software runs, even if there is only one VM running the software within the cluster. Thus, you should prevent such a VM from migrating between hosts and costing you more licenses. You can accomplish this by applying an affinity rule: the VM with the database software must run only on the selected host for which you have a license. In this case, you should select the "Virtual Machines to Hosts" option. Choose "Must Run on Host" and then specify the host with the license. (Alternatively, you could select "Must Not Run on Hosts in Group" and specify all unlicensed hosts.) You can see how to set affinity rules in the setup section below.

Requirements for Setting Up a DRS Cluster

The following requirements must be met to set up a DRS cluster:

- CPU compatibility. Maximum compatibility of processors between ESXi hosts is required. Processors must be produced by the same vendor and belong to the same family, with equivalent instruction sets. Ideally, the same processor model should be used for all ESXi hosts.
- Shared datastore. All ESXi hosts must be connected to shared storage, such as a SAN (Storage Area Network) or NAS (Network Attached Storage), and be able to access shared VMFS volumes.
- Network connection. All ESXi hosts must be connected to each other. Ideally, you would have a separate vMotion network, with at least 1 Gbit/s of bandwidth, for VM migration between hosts.
- vCenter Server must be deployed to manage and configure the cluster.
- At least 2 ESXi servers must be installed and configured (3 or more ESXi servers are recommended).
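The affinity and anti-affinity rule examples described above boil down to a simple placement check: VMs in an affinity rule must share a host, and VMs in an anti-affinity rule must not. The sketch below illustrates that logic with hypothetical data structures; it is not the vSphere API:

```python
# Sketch of affinity / anti-affinity checking -- hypothetical structures
# for illustration only, not the vSphere API. `placement` maps each VM
# to the ESXi host it currently runs on.

def violations(placement, rules):
    """rules: list of ("together" | "separate", [vm, ...]) tuples.
    Returns the rules that the current placement breaks."""
    bad = []
    for kind, vms in rules:
        hosts = {placement[vm] for vm in vms}
        if kind == "together" and len(hosts) > 1:
            bad.append((kind, vms))            # affinity broken: VMs spread out
        if kind == "separate" and len(hosts) < len(vms):
            bad.append((kind, vms))            # anti-affinity broken: VMs share a host
    return bad

placement = {"db": "esxi-01", "web": "esxi-01", "app": "esxi-02",
             "dc-main": "esxi-01", "dc-replica": "esxi-01"}
rules = [("together", ["db", "web", "app"]),        # like example 1 above
         ("separate", ["dc-main", "dc-replica"])]   # like example 2 above

# Both rules are violated by this placement:
print(violations(placement, rules))
```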
How to Set Up the DRS Cluster

First, you need to configure the ESXi hosts, network connection, shared storage, and vCenter Server. After configuring those, you can set up your DRS cluster. Log in to vCenter Server with the vSphere web client. Create a datacenter in which to place your ESXi hosts: vCenter -> Datacenters -> New Datacenter. Then, select your datacenter and click Actions -> Add Host to add the ESXi hosts you need, following the recommendations of the wizard. Now you are ready to create a cluster. In order to create a cluster, do the following:

- Go to vCenter -> Hosts and Clusters.
- Right-click on your datacenter and select "New Cluster".
- Set the name of the cluster and check the box marked "Turn on DRS". Click "OK" to finish.

If you have already created a cluster, follow these steps:

- Go to vCenter -> Clusters -> Your cluster name.
- Open the Manage -> Settings tab.
- Select "vSphere DRS" and click "Edit".
- Check the box marked "Turn ON vSphere DRS". Click "OK" to finish.

Now that you have created the DRS cluster, you can configure DRS automation, DPM, affinity rules, and other options.

DRS automation. In order to set up load balancing, you need the "DRS Automation" section. Here, you can select the Automation Level (Manual, Partially Automated, or Fully Automated), as well as the Migration Threshold (values from 1 to 5, with 1 being conservative and 5 being aggressive). If you want to set individual virtual machine automation levels, check the appropriate box.

Power Management. You can set up DPM by selecting one of the following values: Off, Manual, or Automatic. As with the load balancing feature described above, you can select DPM threshold values from 1 (conservative) to 5 (aggressive).

Advanced Options. You can manually set advanced options for detailed tuning of your cluster. For example, you can set "MinImbalance 40" for computing target imbalance. The default value is 50, while 0 is the most aggressive.
You can read more about this and explore all the advanced options in the VMware documentation.

Affinity Rules. In order to set up affinity and anti-affinity rules, follow these steps:

1. Go to vCenter -> Clusters -> Your cluster name.
2. Go to the Manage -> Settings tab.
3. Select "DRS Rules" and click "Add". Set a name for the rule.
4. Select the rule type:
   - Keep Virtual Machines Together (affinity)
   - Separate Virtual Machines (anti-affinity)
   - Virtual Machines to Hosts (affinity or anti-affinity)
5. Select VMs for the first two rule types, or VM groups, host groups, and a policy for the third rule type.
6. Click "OK" to finish.

Resource Pools. If you would like to create a resource pool for your VMs in a cluster, do the following:

- Go to vCenter -> Clusters -> Your cluster name.
- Click Actions -> New Resource Pool.
- Give the pool a name, then define limits and reservations for CPU as well as memory. Click "OK" when done.

Now you can add your virtual machines to the resource pool. Here is how you can migrate an existing VM to the resource pool:

- Go to vCenter -> Virtual Machines.
- Select your virtual machine.
- Click Actions -> Migrate. The wizard window appears.
- Select "Change Host" in the "Migration Type" section and click "Next".
- Select your resource pool in the "Select Destination Resource" section and click "Next".
- In the "Review Selections" section, click "Finish".

After configuration, you can check the state of your newly created DRS cluster. Just go to vCenter -> Clusters -> Your cluster name and click the "Summary" tab.

The Advantages of Using DRS

The main advantage of using a VMware DRS cluster is effective resource management with load balancing. This improves the quality of the services provided, while also allowing you to save power (and, thus, money) with DPM. You can control virtual machine placement manually or automatically, which makes maintenance and support more convenient.
The DRS cluster solution is a part of VMware's vSphere virtualization software, and is particularly useful in large virtual environments. DRS features such as load balancing, power management, and affinity rules help you optimize resource use, as well as the performance of the cluster. With Distributed Power Management, you can save on electricity costs. Using DRS in combination with HA gives you a balanced, highly available VMware vSphere cluster that is an effective, high-performance solution for any virtual infrastructure. NAKIVO Backup & Replication is a product designed for the protection of VMware virtual machines as well as clusters. When a vCenter with a cluster is added to the product's inventory, all the VMs of the cluster are automatically added, too. If a cluster is selected for a backup or replication job, all the VMs of that cluster are selected automatically, regardless of the ESXi host on which they reside. Try out the cluster-related and other features of NAKIVO Backup & Replication in your own environment - download the full-featured Free Trial.
Source: https://www.nakivo.com/blog/what-is-vmware-drs-cluster/
Many organisations operate logically separate networks, keeping voice and data traffic apart, and this separation can be maintained when sites are connected using, say, an operator’s virtual private network (VPN) service. BT’s MPLS network, for example, can be used to connect the IP telephony systems at different locations, enabling calls between employees to be kept ‘on network’. BT can also supply secure PSTN-IP telephony gateways as a part of a VPN. The isolation of the corporate IP telephony network from the public internet removes the need to enable calls to pass through externally-facing firewalls and consequently reduces the risk of many forms of attack. However, even where logical network separation is used, some connections between the organisation’s IP telephony infrastructure and its data network will remain. Such connections may be able to be exploited by attackers who successfully breach the organisation’s outer defences, and should therefore be minimised. Softphones create bridges between voice and data networks, which is why the US National Institute of Standards and Technology is among those to recommend that such devices are prohibited whenever high standards of security and availability are required. Over the coming years, operators such as BT will be using IP networks to replace their current public switched phone networks and older types of data networks. As a result, IP telephony will eventually become the dominant – potentially even the only – way of providing public phone services. These new 21st century networks will, however, be more like current converged corporate voice and data networks than the public internet. The available capacity will be split to create a number of logically-separate networks that will carry different types of traffic. Phone calls will therefore be kept separate from other types of traffic, notably internet traffic. 
The way in which the 21st century networks operated by different companies will be interconnected to allow phone calls to flow around the world has yet to be fully defined but, with regard to security, these new public phone networks will effectively be private (i.e., owned and operated by one company), which will allow a high degree of security to be provided. VoIP is no longer a new technology, with Gartner positioning it firmly on its way to the ‘plateau of productivity’ on its widely-respected technology hype cycle. However, neither is it a mature technology. While it is used extensively in the corporate environment, for example, the adoption of public VoIP-based phone services is still limited. Thus far, this has helped VoIP and IP telephony achieve a comparatively good security record. The technology does have weaknesses and vulnerabilities, but hasn’t been a sufficiently tempting target for attackers. This situation will change as levels of adoption increase, making it increasingly imperative for any user of the technology to have an effective security policy and appropriate precautions in place.
Source: https://it-observer.com/voip-security8.html
Questions are being raised about the ethical treatment of the data that powers the advances in AI. In fact, data ethics was the focus of Collibra's NYC Data Citizens Meetup held in early November. More than 50 industry leaders came together to explore the new area of data ethics. Organizations are recognizing that the poor handling of data, deliberate or not, could lead to negative outcomes both for the organizations and for the individuals to whom the data refers. Propelling these concerns is the rise of AI, which promises to deliver considerable economic gains – IDC estimates that $52 billion will be spent on it annually by 2021. Companies around the globe are exploring ways in which they can transform their data through AI solutions to reduce costs, meet regulatory demands, deliver an enhanced customer experience, and innovate. Getting it right (the capturing, processing, managing, and storing of data) is not straightforward. The Cambridge Analytica scandal may have brought the issue of data ethics to the headlines, particularly in the context of social media platforms, but other, more subtle concerns are being raised within the technology industry. For example, can an AI machine learn morality, and just which set of morals should it learn? Morals differ dramatically from culture to culture, as a recent Massachusetts Institute of Technology (MIT) experiment showed. Others are asking whether the conscious and unconscious biases of those who assemble an AI solution will then be found baked into that solution. Think tanks are focusing on defining what the ethical treatment of data should look like. The Information Accountability Foundation recently published a paper that asks probing questions about risks and benefits around data ethics within organizations. The Center for Information Policy Leadership has published a report that also examines data ethics issues. Google recently issued its own AI principles.
Expect other technology companies to follow suit with their own data ethics policies over the next year or two. Investors may not be asking for data ethics policies today, but they will be soon; the reputational risk from a data ethics failure can destroy considerable shareholder value overnight. Five key steps that organizations should consider taking today to promote a more ethical culture around their data resources include:

- Integrating data ethics into your culture and code of conduct
- Setting up an ethics review board to oversee AI and the training data being consumed
- Establishing an ethics learning program focusing on data professionals
- Publishing a public statement on the intentions of AI and data usage
- Updating whistleblower policies to include data misconduct

In summary, data ethics is a relatively new field that is gaining more attention with the development of AI. Organizations, particularly boards and senior management, should make creating a data ethics approach a priority over the coming year.
Source: https://www.collibra.com/us/en/blog/data-and-ethics-at-the-beginning-of-the-conversation
The number of calories consumed directly impacts weight. Consume the same number of calories that the body burns over time, and weight stays stable. Consume more than the body burns, and weight goes up; consume fewer, and weight goes down. A good diet combined with regular physical activity is the key to a successful weight loss process. Successful weight loss comes from maintaining a diet that regularly provides your body with the nutrients it needs to function properly. Many of the foods that are beneficial for weight control also help prevent heart disease, diabetes, and other chronic diseases.

Good diet for weight loss

Grains, Fruits and Vegetables

Whole-grain foods such as wheat, brown rice, and barley are digested more slowly than refined grains. The weight control evidence is stronger for whole grains than it is for fruits and vegetables. According to recent research, people who increased their intake of whole grains, fruits, and vegetables tended to lose weight. Fiber is an important component of these foods. Fiber slows digestion, helping to curb hunger. The water in fruits and vegetables may help people feel fuller on fewer calories.

Nuts and Weight

Eating nuts does not cause weight gain and may help to control weight. Nuts are rich in protein and fiber. People who eat nuts regularly are less likely to have heart attacks or die from heart disease than those who rarely eat them, which is another reason to include nuts in a healthy diet.

Dairy and Weight

Milk, other dairy products, and calcium intake are sometimes said to help with weight loss, although in adolescents, high milk intake has been associated with increases in body mass index. The beneficial bacteria in yogurt may also influence weight control.

Sugar-Sweetened Beverages and Weight

According to recent research, sugary drinks can lead to weight gain in both children and adults. Sugary drinks have therefore become an important target for obesity prevention efforts.

Physical exercise for Weight Loss

Exercising is one of the best ways to burn calories and strengthen your muscles.
Do not think of exercise only as a way to control your weight: take care of your body first, and the results will follow. Weight loss is often more of a mental challenge than a physical one.

1 High Intensity Interval Training

High Intensity Interval Training (HIIT) involves short intervals of exercise at near-maximal effort, followed by longer recovery periods. High-intensity exercise triggers the release of growth hormone, which mobilizes fat for use as fuel. A 20-minute HIIT workout can burn more calories throughout the day than a long, easy jog around the block.

2 Strength Training

Strength training helps you slim down and raises your metabolism over the long term. Circuit training, a variant of strength training, burns about 30% more calories than a typical weight workout. It blasts fat and sculpts muscle, burning up to 10 calories a minute.

3 Surya namaskar

Surya Namaskar, one of the basic yoga asanas, focuses on various parts of the body and works wonders for weight loss. Surya Namaskar, also called the sun salutation, helps to strengthen your skeletal system and ligaments. It is a great way to keep your body active, and it also reduces stress and anxiety. Controlling your breathing during the poses helps you lose more weight.

4 Walking

Thirty minutes of regular walking could burn about 150 calories a day. Walking is one of the easiest exercises for controlling weight, and it is low intensity, of course.

5 Zumba

Zumba is essentially a dance workout that improves your fitness. Zumba is all about loosening up and burning calories. It helps to relieve stress and improves energy and strength. It also incorporates healthy exercise and high-intensity movement, which helps sculpt the body.

6 Swimming

Swimming helps you get stronger, fitter, and healthier than ever. It can burn 500-700 calories an hour. Swimming is one of the best exercises for weight loss and toning.
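As a rough illustration of the calorie figures above, the common rule of thumb that about 7,700 kcal corresponds to 1 kg of body fat can turn a daily calorie deficit into an estimated weekly weight change. The 7,700 kcal/kg figure is an approximation, not a precise physiological constant:

```python
# Back-of-the-envelope energy-balance arithmetic. The 7,700 kcal per kg
# figure is a common rule of thumb, not an exact physiological constant.

KCAL_PER_KG_FAT = 7700

def weekly_weight_change_kg(daily_deficit_kcal):
    """Estimated weight change from a sustained daily calorie deficit."""
    return daily_deficit_kcal * 7 / KCAL_PER_KG_FAT

# 30 minutes of walking at ~150 kcal/day, as mentioned above:
print(round(weekly_weight_change_kg(150), 2))  # ~0.14 kg per week
```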
Source: https://areflect.com/2017/11/18/best-diet-physical-exercise-weight-loss/
The control plane, as defined in previous posts, is the part of the router architecture that is responsible for collecting and propagating the information that will be used later to forward incoming packets. Routing protocols and label distribution protocols are parts of the control plane. The forwarding plane is the part of the router architecture used to decide how a packet is going to be switched after being received on the inbound interface. The CEF tables are parts of the forwarding plane. It's important to understand and differentiate between the control plane and the forwarding plane in MPLS networks, especially when troubleshooting problems. I am going to explain this briefly below.

The control plane information in MPLS is represented by two main tables, the RIB and the LIB:

- Routing protocols are used to build the Routing Information Base (RIB), which represents the routing table. OSPF, EIGRP and BGP are examples.
- Label distribution protocols are used to build the Label Information Base (LIB) table. LDP and TDP are examples.

The forwarding plane information in MPLS is represented by the CEF FIB table and the LFIB table:

- The Forwarding Information Base (FIB) is built from the information in the RIB and is used to forward incoming unlabeled IP packets. The egress packets can be IP packets or labeled packets.
- The Label Forwarding Information Base (LFIB) is built from the FIB and the LIB. The LFIB is used to forward incoming labeled packets to the egress interface. The result can be a labeled packet (label swapping) or a normal IP packet (label disposition).

Note: CEF is the only packet switching method that supports MPLS, because it is the only method capable of switching IP packets into labeled packets. Hence, CEF must be globally enabled on the router.
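The relationship between the FIB and LFIB lookups described above can be sketched as a toy model. The labels, prefixes, and interface names below are made up for illustration; real FIB/LFIB entries carry considerably more state:

```python
# Toy model of MPLS forwarding-plane lookups -- hypothetical labels and
# interfaces, for illustration only.

# FIB: destination prefix -> (outgoing interface, label to impose or None)
fib = {"10.0.0.0/8": ("Gi0/1", 17)}        # ingress LSR imposes label 17

# LFIB: incoming label -> (action, outgoing label, outgoing interface)
lfib = {17: ("swap", 23, "Gi0/2"),         # core LSR swaps 17 -> 23
        23: ("pop",  None, "Gi0/3")}       # egress LSR removes the label

def forward_ip(prefix):
    """Ingress: unlabeled IP packet -> FIB lookup -> possibly push a label."""
    iface, label = fib[prefix]
    return ("labeled", label, iface) if label is not None else ("ip", None, iface)

def forward_labeled(label):
    """Core/egress: labeled packet -> LFIB lookup -> swap or pop (disposition)."""
    action, out_label, iface = lfib[label]
    return ("labeled", out_label, iface) if action == "swap" else ("ip", None, iface)

print(forward_ip("10.0.0.0/8"))   # ('labeled', 17, 'Gi0/1')
print(forward_labeled(17))        # ('labeled', 23, 'Gi0/2')
print(forward_labeled(23))        # ('ip', None, 'Gi0/3')
```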
Source: https://www.networkers-online.com/blog/2008/12/mpls-control-and-forwarding-planes/
Most advice for proper IoT firmware cybersecurity will mention the need for secure boot or secure data in transit. Let's look at a side-channel attack that can harm both: timing attacks. We are constantly jolted by time – minutes until the next web call, or an old photograph reminding us of our age. Cops and robbers use time. The police use time to corroborate or disprove a suspect's alibi. Robbers have used the time of year to pull off coordinated theft. Hackers can use micro-timing to pull off fascinating hacking stunts in a computing device.

Introduction – Weaponizing Time

Timing attacks can break the privacy guarantees of a device. Timing attacks can reveal a secret RSA private key under the right circumstances. Once the key is stolen, an interloper can mount an active MiTM (man-in-the-middle) attack to eavesdrop on a legitimate device's or service's communications. The CPU Meltdown and Spectre flaws broke privacy guards within a system. The time difference between the CPU's cache memory and main memory read access times allows an attacker to infer inaccessible values within protected memory — an attacker can "read" private data in memory. A third example of breaking privacy is bypassing iPhones' data wipe feature. The firmware code responded faster when checking a correct passcode than when processing a failed one. An attacker can use the time difference to quickly power down before the iPhone applies a strong lock. A similar flaw likely exists in other implementations, such as a passcode-protected JTAG interface. A timing attack combined with other techniques creates a potent concoction for implementing a remote code execution attack. An attacker in some situations may use timing information to steal address space data to subvert a device through a zero-day vulnerability, utilizing ROP or JOP attacks for remote code execution (RCE).
Alternatively, a well-timed power glitch can skip past a memcmp() test result when verifying a cryptographic hash value during the secure boot process, allowing untrusted code to execute, as demonstrated in the Xbox 360 boot hack.

Conclusion – Counter Awareness

Randomization and doing more work (compute instructions) than is necessary are some of the techniques to counteract timing attacks. Secondary code integrity checks are advisable too. There are numerous examples of timing attacks found in the wild. A little awareness and proper development tools can help mitigate some of these threats.
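The passcode example above comes down to an early-exit comparison, and the "do more work than necessary" countermeasure comes down to comparing in constant time. A minimal Python sketch of the two; in real code, the standard library's hmac.compare_digest is the production-grade primitive:

```python
import hmac

# Timing-unsafe compare: returns as soon as a byte differs, so response
# time leaks how many leading bytes the attacker guessed correctly.
def unsafe_equal(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False   # early exit leaks timing information
    return True

# Constant-time compare: always examines every byte, regardless of input.
def ct_equal(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y      # accumulate differences without branching
    return diff == 0

secret = b"123456"
assert ct_equal(secret, b"123456") and not ct_equal(secret, b"123455")
assert hmac.compare_digest(secret, b"123456")  # stdlib equivalent
```

Both functions give the same answers; the difference is only in how long the "wrong" cases take, which is exactly what a timing attacker measures.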
Source: https://dellfer.com/episode-iv-the-phantom-timer/
What do you do if you want to know more about someone? You'll do a search on their name in your favorite search engine, right? And your image of that person will be formed by the first search results you see. It's hard to change a first impression, so it better be a good one. Well, other people will also search for you online. In this post I'll tell you more about how you can, to a certain extent, manage the information that's public about you. But it's not only about your image or reputation. As I described in yesterday's blog, there are also privacy and security risks.

Know what information is online about you

You should regularly search for your own name in your favorite search engine. Even better, use different search engines, because they all give (slightly) different results. Combining them will give a somewhat more complete picture of the information that's online about you. I was curious how many people do this, which is why I did another poll on Twitter. Even though the majority of voters are probably active in information security or privacy, only 55% regularly search for their own name. Note that only half of the people who do so use different search engines. Besides your name, you could also search for your email addresses or the user names you use online. But there's a lot more information about you on the internet that might not be discoverable via a simple name search in a search engine. For instance, you could look at the white pages site of your country to find your address and telephone number. Another example is the DOBsearch website, where a lot of personal data of US citizens can be found. The below screenshot is from the osintframework.com website, which lists freely available tools that can be used to retrieve particular personal information about people. As you can see, I only expanded a few of the different types of tools.
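The multi-engine search tip above can be partly scripted. This sketch only builds the query URLs for you to open; the URL patterns are the simple query-string forms I'm assuming here, so verify them before relying on them:

```python
from urllib.parse import quote_plus

# Build name-search URLs for several engines. The URL patterns below are
# assumptions (simple query-string forms) and may change over time.
ENGINES = {
    "google":     "https://www.google.com/search?q={}",
    "bing":       "https://www.bing.com/search?q={}",
    "duckduckgo": "https://duckduckgo.com/?q={}",
}

def search_urls(name):
    q = quote_plus(f'"{name}"')          # quoted for an exact-phrase match
    return {engine: url.format(q) for engine, url in ENGINES.items()}

for engine, url in search_urls("Jane Doe").items():
    print(engine, url)
```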
Another useful site groups a lot of addons, tools, search engines, and databases for searching publicly available information. Play around with some of these tools to discover what information you can find about yourself. You might be surprised. Keep in mind: if you can find it, motivated people with bad intentions certainly can.

Get informed when information about you is published

In Google you can set up email alerts. Create an alert for your own name. You can do so by just entering your name and clicking the "Create alert" button. Whenever Google indexes content that contains your name, you'll receive an email alert. In the options you can specify how often you want to receive the email alerts and on which email address.

Remove as much personal data as possible

Now try to get the personal, maybe sensitive, data that you have found removed. If that doesn't work, or you need data removed from other websites, request the website owner - who published the information - to delete it. Here are a few ways to find contact info.

- Look on the website for a "contact us" link or something similar.
- This document might be very helpful. It contains contact information and a bit of info about the removal process for a lot of websites.
- You could use a site called whois.com and search for the particular website there. You might find contact info in the "Registrant Contact" or "Administrative Contact" section of the result.

If you can't reach the website owner or they don't help you, look online for solutions. Other people have probably tried the same. Here you can find, for instance, how you can delete your birthday and opt out of DOBsearch. If you still have no success, you can request Google to remove your personal information. This is what they might remove and what not.

What if you don't get the data removed?
You'll have to accept that in a lot of cases you won't get your personal data removed, because neither the website owner nor Google can, or wants to, remove the data. Like I said in the previous blog, the fact that a lot of our data is publicly retrievable means that we must be careful when we create passwords or answer security questions for account recovery purposes. You can, though, influence what people see first when they search your name. There are ways to get the information that you want onto the first search result pages. Watch this video to see how this works for Google. Try to take control over what people find when they search for you online. Use the tips I gave to see what information is online about you. And if you don't like what you find, try to get as much personal data removed as possible. That's all for today. Tomorrow I'll be back with more security tips. In the meantime, stay safe online!
Source: https://johnopdenakker.com/how-to-control-your-online-exposure/
NASA officials predict that the agency's latest state-of-the-art satellites – launching soon – will generate an unprecedented amount of data that will be difficult to manage and optimize with NASA's current software. Therefore, NASA researchers have turned to cloud computing to make the most of that treasure trove of information. Currently, NASA satellites send data back to ground stations, where engineers turn the raw data into information that people can use and understand. That dataset is then sent to an archive that keeps the information on servers. Typically, when a researcher wants to use a dataset, they log on to a website, download the data they want, and then work with it on their machine. Processing the raw data increases the file size; this isn't a huge problem for older missions that send back relatively small amounts of information. However, NASA officials expect that the quantity of data will grow, making this process infeasible. "Five or six years ago, there was a realization that future Earth missions [would generate] a huge volume of data and the systems we were using would become inadequate," Suresh Vannan, manager of the Physical Oceanography Distributed Active Archive Center at NASA's Jet Propulsion Laboratory (JPL), said in a statement. The Surface Water and Ocean Topography (SWOT) mission – slated for a 2022 launch – is expected to produce 20 terabytes (TB) of science data a day, while the NASA-Indian Space Research Organization Synthetic Aperture Radar (NISAR) mission – slated for a 2023 launch – will generate roughly 80 TB daily. Together, the data collected from these missions amounts to enough digital storage for approximately 250 feature-length movies. With missions like SWOT and NISAR, NASA's current data management infrastructure is no longer feasible. For example, suppose a researcher wanted to download a day's worth of information from the SWOT mission onto their computer.
In that case, they’d need 20 laptops, each capable of storing a terabyte of data. Or, if they wanted to download four days’ worth of data from the NISAR mission, it would take about a year to perform on an average home internet connection. “Working with data stored in the cloud means scientists won’t have to buy huge hard drives to download the data or wait months as numerous large files download to their system,” Lee-Lueng Fu, JPL project scientist for SWOT, said in the statement. “Processing and storing high volumes of data in the cloud will enable a cost-effective, efficient approach to the study of big-data problems.” The first satellite to this cloud system is the Sentinel-6 Michael Freilich satellite, part of the U.S.-European Sentinel-6/Jason-CS mission. Working with Sentinel-1 data in the cloud, engineers produced a colorized map showing Earth’s surface change from more vegetated areas to deserts. “It took a week of constant computing in the cloud, using the equivalent of thousands of machines,” Paul Rosen, JPL project scientist for NISAR, said in the press release. “If you tried to do this outside the cloud, you’d have had to buy all those thousands of machines.” However, NASA officials clarified that utilizing cloud computing does not replace how agency researchers work with these datasets; it simply makes working with datasets more efficient. Additionally, NASA officials realized that the current space available to store and archive these datasets is minimal. For future missions expected to generate a large quantity of data, it’s simply not enough. “We just don’t have the additional physical server space at JPL with enough capacity and flexibility to support both NISAR and SWOT,” Hook Hua, a JPL science data systems architect for both missions, said. 
However, by utilizing cloud technology, NASA expects infrastructure limitations not to be as much of a concern since it won’t have to pay to store mind-boggling amounts of data or maintain the physical space for all those hard drives. “This is a new era for Earth observation missions, and the huge amount of data they will generate requires a new era for data handling,” Kevin Murphy, chief science data officer for NASA’s Science Mission Directorate, said.
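The download-time figures cited above can be sanity-checked with a quick back-of-the-envelope calculation. The article does not state the home connection speed, so a ~100 Mbps link is assumed here:

```python
TB = 1e12  # bytes

# SWOT: 20 TB/day versus hypothetical 1 TB laptops
swot_daily_tb = 20
laptops_needed = swot_daily_tb  # one 1 TB laptop per terabyte -> 20 laptops

# NISAR: four days of data over an assumed ~100 Mbps home connection
nisar_bytes = 4 * 80 * TB   # 320 TB
rate_bps = 100e6            # 100 megabits per second (assumption)
seconds = nisar_bytes * 8 / rate_bps
days = seconds / 86400      # roughly 300 days, i.e. about a year
print(laptops_needed, round(days))
```

At 100 Mbps the four-day NISAR haul lands just under a year, consistent with the article's "about a year" claim; a slower real-world link would push it over.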
A problem that arose in NASA’s Hubble Space Telescope over the weekend means that many of the orbiting telescope’s key capabilities may be permanently lost, officials announced Monday.

The observatory entered protective “safe mode” on Saturday as a result of a power malfunction in its Advanced Camera for Surveys (ACS), the primary instrument aboard the telescope. Hubble was recovered from safe mode the next day, but officials are not optimistic about the chances of repairing the ACS, which had already been operating on redundant electronics as a result of a malfunction last summer.

A ‘Severe Loss’

The ACS, which was installed in March 2002, is a third-generation instrument consisting of three electronic cameras, filters and dispersers that detect light from the ultraviolet to the near infrared. It was developed jointly by NASA’s Goddard Space Flight Center, Johns Hopkins University, Ball Aerospace and the Space Telescope Science Institute (STScI), which conducts Hubble science operations and is operated for NASA by the Association of Universities for Research in Astronomy. The Hubble Space Telescope itself is a project of international cooperation between NASA and the European Space Agency.

“This is a severe loss,” Mario Livio, senior astrophysicist with the Space Telescope Science Institute, told TechNewsWorld. “More than two-thirds of the observations currently done with Hubble are done with the ACS camera.”

What remains operational on Hubble now are two other cameras — the Wide Field Planetary Camera 2 and the Near Infrared Camera Multi-Object Spectrograph — and the Fine Guidance Sensors. Hubble’s next service mission, scheduled for September 2008, is already filled with other repair projects, so work on the ACS may not be able to be included then, Livio said.

The Next Generation

“Repairing ACS was not in the plan,” Livio explained. “We’ll be looking closely into whether or not we can add ACS repair, but the plan at the moment looks pretty full.
It’s not clear whether it can be included.” However, part of what is scheduled for that service mission is the installation of Wide Field Camera 3, a next-generation device that may in fact outperform the ACS’ capabilities anyway, along with a new spectrograph, he added.

An Anomaly Review Board was appointed on Monday to investigate the ACS problem. After a thorough investigation and assessment, the board will present its findings and recommendations by March 2, NASA officials said.

Science Goes On

In the meantime, a set of backup non-ACS science programs — selected by STScI last November for use in case of a future ACS problem — will now be implemented. “We are doing our best to try to see which programs can actually be done with the other equipment instead of ACS,” Livio stated.

STScI has also extended the Jan. 26 deadline for project proposals that relied on the equipment, to give astronomers a chance to determine if their ideas can still be implemented without the ACS. “Some can be done, some can’t. Science will continue. We’ll do our best to do all the science we can with the existing instruments,” he declared.

A Question of Spectrum

“What makes the ACS so important is that it’s multispectral,” Paul Czysz, professor emeritus of aerospace at St. Louis University, told TechNewsWorld. “It’s not just one camera but many that cover different parts of the electromagnetic spectrum, from infrared to ultraviolet.”

What we can see of outer space from Earth is filtered by the atmosphere, Czysz explained, so that in fact we sometimes can’t see celestial objects as they really are. With the Hubble telescope, we’ve been able to see about 12 billion years back in time, he added, “pretty close to the ‘big bang.'”

If the ACS doesn’t get fixed, our understanding of outer space could be compromised, Czysz said. “We’ll be losing the spectrum that that camera takes. We’ve got to get out of the atmosphere to really see, and that’s what Hubble did.”
A pair of recently published Apple patents suggests that the tech company is looking for ways to make self-driving cars safer and more secure.

The first of the two patents is particularly intriguing in relation to Project Titan, Apple’s foray into self-driving vehicles. Filed as “Wireless vehicle system for enhancing situational awareness,” the proposed device would essentially allow cars to speak to one another, using multiple transmitters to relay information such as speed and location to other cars on the road. The goal is to raise situational awareness for drivers and vehicles alike. In the case of a self-driving car, the system would supplement the information the car receives from external sensors, alerting the vehicle to changing traffic conditions to create more reaction time and mitigate the chances of a potentially fatal error in a high-speed situation.

The second patent is a little more straightforward, and points to the use of biometric authentication for vehicle security. The detailed “System and Method for Vehicle Authorization” would use a biometric identification system like Apple’s Face ID to allow cars to recognize individual drivers. The car would be able to store personal profiles for multiple users, while the system itself would offer more layers of security than the keys and key fobs that can be used to gain access to a stolen vehicle.

Apple has explored biometric entry systems in the past, although it is worth noting that the patent for “System and Method for Vehicle Authorization” was filed two years ago and Apple has since reduced its investment in Project Titan. However, both patents do seem to be in keeping with the latest trends for self-driving cars, and could show up in some form in the future.
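The per-driver profile idea in the second patent can be sketched in a few lines. Everything below — the embedding vectors, the cosine-similarity matcher, the 0.9 threshold — is a hypothetical illustration of the general approach, not anything taken from Apple's filing:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class VehicleAuth:
    """Toy biometric gate: match a face embedding against enrolled profiles."""
    def __init__(self, threshold=0.9):
        self.profiles = {}          # driver name -> enrolled embedding
        self.threshold = threshold

    def enroll(self, name, embedding):
        self.profiles[name] = embedding

    def authorize(self, embedding):
        """Return the best-matching profile name, or None if no match clears the bar."""
        best = max(self.profiles.items(),
                   key=lambda kv: cosine(embedding, kv[1]),
                   default=None)
        if best and cosine(embedding, best[1]) >= self.threshold:
            return best[0]
        return None

auth = VehicleAuth(threshold=0.9)
auth.enroll("alice", (1.0, 0.0, 0.2))   # invented enrolled embeddings
auth.enroll("bob", (0.0, 1.0, 0.1))
```

A real system would use an actual face-recognition model plus liveness checks; the point is only that the car keeps one enrolled template per driver profile and authorizes whichever profile matches best above a threshold.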
Businesses need to take the economic impact of cyber crime more seriously, say researchers, with the cost of cyber crime now up to 0.8% of global gross domestic product (GDP), or $600bn a year, a study has revealed.

This is up from 0.7% of GDP in 2014 and represents a 34% increase from $445bn – an average rise of 11.3% a year for the three years to June 2017, which is steady and significant growth.

Europe suffers the highest economic impact of cyber crime, estimated at 0.84% of regional GDP, compared with 0.78% in North America, according to the latest report on the economic impact of cyber crime by security firm McAfee and the Center for Strategic and International Studies (CSIS). The main drivers of this growth include the easy availability of cyber crime tools, the rapid adoption of new technologies by cyber criminals, the expanding number of cyber crime centres, and the growing sophistication of top-tier cyber criminals.

“There is a serious problem with under-reporting of cyber crime, with up to 95% going unreported, so the $600bn figure is extremely conservative and is based purely on the figures we have available,” said Raj Samani, chief scientist and fellow at McAfee. “It is bound to attract criticism, but people need to look beyond the metrics at the real story of how the economic impact is growing, and they will realise that it has value because, all of a sudden, we begin to get a different debate.
“The cost of doing business in the digital age is to protect your IT systems and investments, and the economic impact of cyber crime should be one of the most important things businesses are focusing on, because failure to protect their intellectual property [IP], financial information and IT networks does have an economic impact.”

According to Samani, too much attention is paid to identifying which country or cyber crime group is behind attacks and who is to blame; the more important focus should be on the economic impact, how it can be reduced, and the return on investment in cyber defences. “The reality is that cyber crime is just an evolution of traditional crime and has a direct impact on economic growth, jobs, innovation and investment,” he said. “Companies need to understand that in today’s world, cyber risk is business risk.”

IP theft alone accounts for at least 25% of the cost of cyber crime and threatens national security when it involves military technology, the report said. “IP theft and loss of opportunity are two areas of cyber crime impact that are extremely difficult to measure, but we have seen that IP theft and lost opportunities can be fatal for companies, especially for small and medium-sized businesses,” said Samani.

The report identifies cyber crime-as-a-service as a key driver of cyber crime, noting that the industry has become more sophisticated, with flourishing markets offering a broad range of tools and services, such as exploit kits, custom malware and botnet rentals. “Ever since cyber crime services became commercialised in the mid-2000s, this market has grown and evolved to become bigger and more accessible than it has ever been, with the result that even an 11-year-old could mount and run a ransomware campaign,” said Samani.

Crimeware-as-a-service has not only lowered the barrier to entry, but cyber criminals can now outsource much of their work to skilled contractors, said Steve Grobman, chief technology officer at McAfee.
“Ransomware-as-a-service cloud providers, for example, efficiently scale attacks to target millions of systems, and attacks are automated to require minimal human involvement,” he said. Add to these factors cryptocurrencies, which ease rapid monetisation while minimising the risk of arrest, said Grobman, and it is clear that recent technological accomplishments have transformed the criminal economy as dramatically as they have every other part of the economy.

Although ransomware is the fastest-growing cyber crime tool, with more than 6,000 online criminal marketplaces and ransomware-as-a-service gaining in popularity, Samani said cyber attackers seeking easy financial gains are increasingly following the money and switching their focus to stealing cryptocurrency. “Attacks on cryptocurrency exchanges and vaults are fast emerging as a new area of growth for cyber criminal activity, along with associated fraud,” he said.

Greater standardisation of threat data and better coordination of cyber security requirements would improve security, particularly in key sectors such as finance, according to the report, which noted that banks remain the favourite target of cyber criminals. However, nation states are the most dangerous source of cyber crime, the report said, with Russia, North Korea and Iran being the most active in hacking financial institutions, and China the most active in cyber espionage.

“Our research bore out the fact that Russia is the leader in cyber crime, reflecting the skill of its hacker community and its disdain for western law enforcement,” said James Lewis, senior vice-president at CSIS. The UK recently attributed to Russia the NotPetya malware attacks that affected companies around the world in June 2017, declaring that the UK and its allies will not tolerate malicious cyber activity.
“North Korea is second in line, as the nation uses cryptocurrency theft to help fund its regime,” said Lewis, “and we are now seeing an expanding number of cyber crime centres, including not only North Korea but also Brazil, India and Vietnam.”

The types of cyber crime that have the biggest economic impact include:

- The loss of IP and business-confidential information.
- Online fraud and financial crimes, often the result of stolen personally identifiable information.
- Financial manipulation directed toward publicly traded companies.
- Opportunity costs, including disruption in production or services and reduced trust in online activities.
- The cost of securing networks, buying cyber insurance and paying for recovery from cyber attacks.
- Reputational damage and liability risk for the affected company and its brand.

The report also includes some recommendations on how to deal with cyber crime, including:

- Uniform implementation of basic security measures and investment in defensive technologies.
- Increased cooperation among international law enforcement agencies.
- Improved collection of data by national authorities.
- Greater standardisation and coordination of cyber security requirements.
- Progress on the Budapest convention on cyber crime.
- International pressure on state sanctuaries for cyber crime.
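The headline growth figures in the report are easy to reconcile: the quoted 11.3% average annual rise is the simple average (the ~34% total spread over three years), while the compounded rate works out slightly lower. A quick check using the article's $445bn and $600bn endpoints:

```python
start, end, years = 445e9, 600e9, 3

total_growth = end / start - 1            # ~0.35, the ~34% increase quoted
simple_avg = total_growth / years         # ~0.116/year, the ~11.3% quoted
cagr = (end / start) ** (1 / years) - 1   # ~0.105 compounded annually
print(round(total_growth, 3), round(simple_avg, 3), round(cagr, 3))
```

Either way of annualising supports the article's characterisation of steady, significant growth.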
In recent years, BigData technologies have been rapidly disrupting entire economic sectors through a shift towards more automated, data-driven decisions. BigData facilitates the integration and consolidation of knowledge from numerous data sources, including different types of databases, streaming data sources, and data at rest. Furthermore, the application of BigData analytics technologies such as Machine Learning (ML) and Artificial Intelligence (AI) enables the extraction of knowledge from these various sources. In several cases, the extracted knowledge includes the discovery of hidden patterns, beyond what is already known in the target application domain. Hence, BigData analytics can produce unique insights that drive more timely and effective decisions.

One of the sectors that leverages the benefits of BigData technologies and BigData analytics most extensively is healthcare. The rationale behind the large-scale deployment and use of BigData technologies in healthcare is twofold. On the one hand, BigData technologies boost the accuracy of clinical decisions based on the collection and processing of large volumes of healthcare-related datasets. On the other, they also automate these decisions by accelerating the collection and analysis of heterogeneous datasets. Overall, BigData technologies and Artificial Intelligence are transforming the healthcare sector in five different, yet complementary ways.

Medical decisions are usually driven by clinical knowledge. The latter consolidates research and clinical findings, which are typically contained in medical books, research papers, IT knowledge bases, and other sources of knowledge. These knowledge bases are combined with information from a limited number of sources, such as the patient’s examinations and medical history as part of Medical Health Records.
Nevertheless, they do not integrate other knowledge sources, such as real-world data about the patient’s lifestyle and activities, even though there is clinical evidence that such data influence the way patients react to certain diseases. The advent of BigData technologies enables the collection, integration, and consolidation of data from more sources than ever before, which enables the development of richer and more integrated knowledge bases of clinical evidence. The latter are not limited to conventional medical, biochemical, clinical, molecular, and genomic data. They can also combine information from IoT (Internet of Things) devices (e.g., wearables, smartphones) and medical devices, as well as alternative data sources such as information from patient networks (e.g., PatientsLikeMe.com).

Based on integrated knowledge bases of clinical evidence, doctors, medical experts, and other healthcare professionals can derive unique insights that boost the accuracy and effectiveness of their clinical decisions. Integrated knowledge bases can enable the diagnosis of certain diseases with increased accuracy. This becomes possible thanks to the availability and use of more data for confirming a specific diagnosis. Most importantly, it is possible to apply Artificial Intelligence techniques to extract diagnostic insights beyond state-of-the-art clinical knowledge. Such diagnostic insights could lead to early diagnosis of diseases with high mortality rates (e.g., cancer), which could literally save lives. In this context, early diagnosis is performed by capturing diagnostic signals that provide more timely insights than conventional examinations (e.g., colonoscopy, mammography). In this way, BigData and AI technologies become the doctors’ valuable allies in combating one of their worst enemies: the delayed diagnosis of a deadly disease.

Diagnostic decisions, though extremely important, are only the tip of the iceberg.
By considering and combining multiple sources of knowledge, clinicians can solve much more complex clinical problems, such as the prognosis of diseases and the selection of the optimal treatment. Prognosis is much more complex than diagnosis, as it considers not only medical examinations (e.g., hematological and biochemical information), but also parameters like genomics, the patient’s lifestyle, and his/her medical history. Likewise, the selection of the optimal treatment requires the consideration of multiple parameters towards matching the patient’s phenotype with the optimal combination of available therapeutic options. In most cases, there is no single (i.e., “one-size-fits-all”) treatment for all patients. Rather, the treatment is tailored to the needs and the status of a specific patient as part of a BigData/AI-enabled personalization of clinical decisions.

BigData and AI technologies hold the promise of personalizing healthcare delivery. The rationale behind this personalization is that no two patients are the same. Hence, there is much room for personalizing diagnostic, prognostic, and treatment decisions. In this direction, healthcare service providers and biotech enterprises are working towards constructing digital models of patients. These models serve as the basis for establishing a faithful digital image of the patient, i.e., a computerized model that is connected and synchronized with the actual state of the patient. This model is sometimes coined the patient’s digital twin and can be used to drive the personalization of clinical decisions. Likewise, the execution of certain AI algorithms (e.g., clustering, unsupervised learning) on these digital twins can enable the discovery of patient phenotypes. The latter can be used to study the personalization of drugs and treatments towards discovering the ones with the best efficacy for each phenotype.
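As a toy illustration of the clustering idea, the sketch below groups hypothetical patient feature vectors into phenotype clusters with a minimal k-means loop. The feature values, the two dimensions, and the choice of k are all invented for the example:

```python
def kmeans(points, k, iters=20):
    """Minimal k-means; uses the first k points as initial centers
    for reproducibility. Returns one cluster label per point."""
    centers = [list(p) for p in points[:k]]
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest center by squared Euclidean distance
        for i, p in enumerate(points):
            labels[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2
                                              for a, b in zip(p, centers[c])))
        # update step: move each center to the mean of its members
        for c in range(k):
            members = [p for i, p in enumerate(points) if labels[i] == c]
            if members:
                centers[c] = [sum(dim) / len(members) for dim in zip(*members)]
    return labels

# hypothetical patients as (normalized age, normalized biomarker) pairs
patients = [(0.10, 0.20), (0.15, 0.25),   # phenotype A
            (0.80, 0.90), (0.85, 0.82),   # phenotype B
            (0.50, 0.10), (0.55, 0.15)]   # phenotype C
labels = kmeans(patients, k=3)
```

In practice a digital twin would carry far more dimensions and a production library would be used, but the grouping step is conceptually the same: patients whose feature vectors sit close together end up in the same phenotype cluster.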
BigData technologies provide exceptional automation for in-depth analysis of data within very short time frames. Indeed, BigData systems can process data from integrated knowledge bases in a short time. This is hardly possible for human medical professionals, who are constrained by their processing capacity and the need to deal with numerous patients within a very short timeframe. In this context, automation is a key prerequisite for making AI-based clinical decision making practical and applicable in real-life healthcare settings.

Overall, the advent of BigData technologies is revolutionizing many healthcare processes, including diagnosis, prognosis, treatment, and management of diseases. Furthermore, they can assist patients regardless of their age, condition, and disease. In recent years we have witnessed many instances of the disruptive impact of BigData and AI in healthcare. Nevertheless, the application of BigData and AI in the healthcare sector is still in its early stages. In the coming years, we will witness an explosion in the number and the volume of healthcare datasets. Coupled with advances in AI and the abundance of computational capacity, these datasets will enable the development of many practical healthcare applications.
With 4G systems being deployed across the globe, all eyes are on the development of 5G technology. The new 5G technologies will need to be chosen, developed and perfected to enable timely and reliable deployment. The new fifth-generation (5G) technology for cellular systems will probably start to come to fruition around 2020, with deployment following on afterwards.

So, it’s time to understand what a 5G network really is. 5G is the coming fifth-generation wireless broadband technology, being standardised by the 3GPP. It will provide better speeds and coverage than the current 4G, operating in both sub-6 GHz and millimeter-wave bands and offering connection speeds from 1 Gb/s up to a targeted 10 Gb/s. 5G is a packet-switched wireless system with wide-area coverage and high throughput; it uses OFDM and millimeter-wave radio and is a fully packet-based network.

The diagram below shows the vision of Huawei, one of the leading telecom giants in the global market.

To meet the preceding requirements, 5G should have the following performance advantages over existing mobile communication technologies:

- 100 billion connections
- 1 ms latency
- 10 Gbps throughput

According to the Groupe Speciale Mobile Association (GSMA), to qualify as 5G, a connection should meet most of these eight criteria:

- 1 to 10 Gbps connections to endpoints in the field
- One-millisecond end-to-end round-trip delay
- 1,000x bandwidth per unit area
- 10 to 100x the number of connected devices
- (Perception of) 99.999 percent availability
- (Perception of) 100 percent coverage
- 90 percent reduction in network energy usage
- Up to ten-year battery life for low-power, machine-type devices

5G will benefit end users in the following ways:

- Faster data speeds: Currently, 4G networks are capable of achieving peak download speeds of 1 Gbps. With 5G, this would increase to 10 Gbps.
- Ultra-low latency: With 4G, the latency is around 50 milliseconds; 5G will reduce that to about one millisecond. This will be particularly important for industrial applications and driverless cars.
- A “connected world”: The Internet of Things (wearables, smart home appliances, connected cars) is expected to grow exponentially over the next 10 years, and it will need a network that can accommodate billions of connected devices. 5G is designed to cater to this capacity, and to assign bandwidth depending on the needs of the application and user.

The table below helps show how 5G differs from 4G in terms of technical specifications and available services.
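The practical difference between the 1 Gbps and 10 Gbps peak rates quoted above is easy to illustrate. The 5 GB file size below is an arbitrary stand-in for an HD movie, and the calculation assumes ideal peak throughput:

```python
def download_seconds(size_gb, rate_gbps):
    """Ideal transfer time, ignoring protocol overhead and congestion."""
    return size_gb * 8 / rate_gbps

movie_gb = 5                             # hypothetical HD movie
t_4g = download_seconds(movie_gb, 1)     # at a 4G peak of 1 Gbps
t_5g = download_seconds(movie_gb, 10)    # at a 5G peak of 10 Gbps
print(t_4g, t_5g)
```

At peak rates, the download drops from 40 seconds on 4G to 4 seconds on 5G; real-world rates are lower, but the tenfold ratio is the point.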
THE AI/ML MODEL

Once good data has been captured from the production recipes, it is broken up on a per-sensor and per-recipe basis: for each recipe, a single machine learning model is trained per sensor.

In many machine learning applications, a model is trained to classify between one of n classes. Here, the goal is to determine when the component/equipment is behaving abnormally, so the task is formally framed as an anomaly detection problem. We therefore initially train a model to recognize what normal production looks like, and then, over time, as anomalous states are encountered, incorporate them into the training datasets.

The first step in creating the model is feature extraction from the streaming raw sensor data. In this write-up we refer to Bosch XDKs, but a user can deploy any type of sensor. The Bosch XDKs are capable of streaming numerous types of sensor data, and the readings that our models are trained with incorporate acceleration in the x, y, and z directions as well as angular acceleration in the x, y, and z directions. This means that for every sample in time, six readings are being ingested by the ML model.

With most time series data, a single timestep is not necessarily useful; rather, a series of timesteps must be processed at once in order to make meaningful inferences from the data. When training and running a model, this is accomplished via a sliding window, whose step size and length are optimized as part of a hyperparameter search during training. A set of features is calculated from all of the samples that fall within a window, and the ML model can then process the entire time series as a single set of higher-level features. While features can be manually fine-tuned, they can also be found automatically through grid search or genetic algorithms, the latter being the technique we use when extracting features. Once the optimal features have been found, a model can be trained and built.
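The windowing-and-features step described above can be sketched as follows. The window length, step size, and the mean/standard-deviation features are illustrative stand-ins for whatever values a hyperparameter search would actually select:

```python
import statistics

def sliding_windows(samples, length, step):
    """Yield fixed-length, possibly overlapping windows over a time series."""
    for start in range(0, len(samples) - length + 1, step):
        yield samples[start:start + length]

def window_features(window):
    """Per-channel mean and stdev, flattened into one feature vector."""
    channels = zip(*window)  # six channels: ax, ay, az, gx, gy, gz
    feats = []
    for ch in channels:
        feats.append(statistics.fmean(ch))
        feats.append(statistics.pstdev(ch))
    return feats

# hypothetical stream: one (ax, ay, az, gx, gy, gz) sample per timestep
stream = [(i * 0.01, 0.0, 1.0, 0.0, 0.0, 0.02) for i in range(100)]
features = [window_features(w)
            for w in sliding_windows(stream, length=20, step=10)]
# each 20-sample window is reduced to a 12-value feature vector
```

Overlapping windows (step smaller than length) give the model more training examples per run at the cost of correlated samples, which is why both knobs are worth including in the hyperparameter search.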
Because the goal of the ML application is to identify anomalous behavior, we use a neuron-based machine learning algorithm to learn bounds on clusters of data. A two-dimensional rendering of one of these neurons is shown below. The bounding box is defined by an area of influence, and both the center and this area of influence are learned during training. As the model is exposed to more training data, it is able to learn more complicated clusters and structure by creating new neurons or modifying the centers/areas of influence of existing neurons.
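The neuron structure described above — a learned center plus an area of influence — can be mimicked with a toy class. The hyper-rectangular influence region and the sample values are assumptions made for illustration; the actual product's learning rule is not described in detail here:

```python
class Neuron:
    """A learned center with an 'area of influence' around it."""
    def __init__(self, center, influence):
        self.center = list(center)
        self.influence = influence  # max per-dimension deviation covered

    def covers(self, x):
        return all(abs(a - b) <= self.influence
                   for a, b in zip(x, self.center))

def is_anomalous(neurons, x):
    """A feature vector is anomalous if no trained neuron covers it."""
    return not any(n.covers(x) for n in neurons)

# two neurons learned from "normal" production windows (invented values)
normal_model = [Neuron((0.0, 1.0), 0.3), Neuron((2.0, 2.0), 0.5)]
```

During training, a new neuron would be created whenever an incoming window falls outside every existing area of influence, which matches the article's description of the model growing as it is exposed to more data.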
Massive MIMO is a multi-user MIMO (multiple-input, multiple-output) technology that can provide uniformly good service to wireless terminals in high-mobility environments. The key concept is to equip base stations with arrays of many antennas, which are used to serve many terminals simultaneously in the same time-frequency resource. The word “massive” refers to the number of antennas, not the physical size: the antenna arrays have attractive form factors – in the 2 GHz band, a half-wavelength-spaced rectangular array with 200 dual-polarized elements is about 1.5 x 0.75 meters.

Massive MIMO operates in TDD mode, and the downlink beamforming exploits the uplink-downlink reciprocity of radio propagation. Specifically, the base station array uses channel estimates obtained from uplink pilots transmitted by the terminals to learn the channel in both directions. This makes Massive MIMO entirely scalable with respect to the number of base station antennas. Base stations in Massive MIMO operate autonomously, with no sharing of payload data or channel state information with other cells.

Comparing the performance of 64T64R and 8T8R antenna systems, below are results from the Sprint LTE TDD network, where Massive MIMO 64T64R and 8T8R are both deployed:

“We observed up to a 3.4x increase in downlink sector throughput and up to an 8.9x increase in the uplink sector throughput versus 8T8R (obviously the gain is substantially higher relative to 2T2R). Results varied based on the test conditions that we identified. Link budget tests revealed close to a triple-digit improvement in uplink data speeds. Preliminary results for the downlink also showed strong gains. Future improvements in 64T64R are forthcoming based on likely vendor product roadmaps.”

Six key differences between conventional MU-MIMO and Massive MIMO are provided below.
| | Conventional MU-MIMO | Massive MIMO |
| --- | --- | --- |
| Relation between number of BS antennas (M) and users (K) | M ≈ K, and both are small (e.g., below 10) | M ≫ K, and both can be large (e.g., M = 100 and K = 20) |
| Duplexing mode | Designed to work with both TDD and FDD operation | Designed for TDD operation to exploit channel reciprocity; can also work in FDD, but less reliably and with lower capacity |
| Channel acquisition | Mainly based on codebooks with a set of predefined angular beams | Based on sending uplink pilots and exploiting channel reciprocity |
| Link quality after precoding/combining | Varies over time and frequency, due to frequency-selective and small-scale fading | Almost no variation over time and frequency, thanks to channel hardening |
| Resource allocation | The allocation must change rapidly to account for channel quality variations | The allocation can be planned in advance, since the channel quality varies slowly |
| Cell-edge performance | Only good if the BSs cooperate | Cell-edge SNR increases proportionally to the number of antennas, without causing more inter-cell interference |
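The form-factor claim earlier in the piece (a half-wavelength-spaced array of 200 dual-polarized elements at 2 GHz measuring about 1.5 x 0.75 meters) checks out if one assumes a 20 x 10 grid of dual-polarized elements:

```python
c = 3e8   # speed of light, m/s
f = 2e9   # carrier frequency, Hz

wavelength = c / f          # 0.15 m at 2 GHz
spacing = wavelength / 2    # 0.075 m half-wavelength element spacing

cols, rows = 20, 10         # assumed grid of 200 dual-polarized elements
width = cols * spacing      # 1.5 m
height = rows * spacing     # 0.75 m
print(width, height)
```

The 20 x 10 layout is an assumption (the article gives only the element count and outer dimensions), but it is the grid that reproduces the stated 1.5 x 0.75 m footprint exactly.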